source_id | question | response | metadata
---|---|---|---|
48,298 | I used rsync to copy a large number of files, but my OS (Ubuntu) restarted unexpectedly: sudo rsync -azvv /home/path/folder1/ /home/path/folder2 After reboot, I ran rsync again, but from the output on the terminal, I found that rsync still copied those already copied before. But I heard that rsync is able to find differences between source and destination, and therefore to just copy the differences. Source and target are both NTFS. The source is an external HDD and the target is an internal HDD. I wonder in my case if rsync can resume what was left last time? | First of all, regarding the "resume" part of your question, --partial just tells the receiving end to keep partially transferred files as though they were completely transferred if the sending end disappears. While transferring files, they are temporarily saved as hidden files in their target folders (e.g. .TheFileYouAreSending.lRWzDC ), or in a specifically chosen folder if you set the --partial-dir switch. When a transfer fails and --partial is not set, this hidden file will remain in the target folder under this cryptic name, but if --partial is set, the file will be renamed to the actual target file name (in this case, TheFileYouAreSending ), even though the file isn't complete. The point is that you can later complete the transfer by running rsync again with either --append or --append-verify . So, --partial doesn't itself resume a failed or cancelled transfer. To resume it, you'll have to use one of the aforementioned flags on the next run. So, if you need to make sure that the target won't ever contain files that appear to be fine but are actually incomplete, you shouldn't use --partial . Conversely, if you want to make sure you never leave behind stray failed files that are hidden in the target directory, and you know you'll be able to complete the transfer later, --partial is there to help you. With regards to the --append switch mentioned above, this is the actual "resume" switch, and you can use it whether or not you're also using --partial . Actually, when you're using --append , no temporary files are ever created. Files are written directly to their targets. In this respect, --append gives the same result as --partial on a failed transfer, but without creating those hidden temporary files. So, to sum up, if you're moving large files and you want the option to resume a cancelled or failed rsync operation from the exact point that rsync stopped, you need to use the --append or --append-verify switch on the next attempt. As @Alex points out below, since version 3.0.0 rsync now has a new option, --append-verify , which behaves like --append did before that switch existed. You probably always want the behaviour of --append-verify , so check your version with rsync --version . If you're on a Mac and not using rsync from homebrew , you'll (at least up to and including El Capitan) have an older version and need to use --append rather than --append-verify . Why they didn't keep the old behaviour on --append and instead name the new non-verifying variant --append-no-verify is a bit puzzling. Either way, --append on rsync before version 3 is the same as --append-verify on the newer versions. --append-verify isn't dangerous: it will always read and compare the data on both ends and not just assume they're equal. It does this using checksums, so it's easy on the network, but it does require reading the shared amount of data on both ends of the wire before it can actually resume the transfer by appending to the target.
Second of all, you said that you "heard that rsync is able to find differences between source and destination, and therefore to just copy the differences." That's correct, and it's called delta transfer, but it's a different thing. To enable it, you add the -c , or --checksum , switch. Once this switch is used, rsync will examine files that exist on both ends of the wire. It does this in chunks, compares the checksums on both ends, and if they differ, it transfers just the differing parts of the file. But, as @Jonathan points out below, the comparison is only done when files are of the same size on both ends — different sizes will cause rsync to upload the entire file, overwriting the target with the same name. This requires a bit of computation on both ends initially, but can be extremely efficient at reducing network load if, for example, you're frequently backing up very large fixed-size files that often contain minor changes. Examples that come to mind are virtual hard drive image files used in virtual machines or iSCSI targets. It is notable that if you use --checksum to transfer a batch of files that are completely new to the target system, rsync will still calculate their checksums on the source system before transferring them. Why, I do not know :) So, in short: If you're often using rsync to just "move stuff from A to B" and want the option to cancel that operation and later resume it, don't use --checksum , but do use --append-verify . If you're using rsync to back up stuff often, using --append-verify probably won't do much for you, unless you're in the habit of sending large files that continuously grow in size but are rarely modified once written. As a bonus tip, if you're backing up to storage that supports snapshotting such as btrfs or zfs , adding the --inplace switch will help you reduce snapshot sizes since changed files aren't recreated but rather the changed blocks are written directly over the old ones. This switch is also useful if you want to avoid rsync creating copies of files on the target when only minor changes have occurred. When using --append-verify , rsync will behave just like it always does on all files that are the same size. If they differ in modification or other timestamps, it will overwrite the target with the source without scrutinizing those files further. --checksum will compare the contents (checksums) of every file pair of identical name and size. UPDATED 2015-09-01 Changed to reflect points made by @Alex (thanks!) UPDATED 2017-07-14 Changed to reflect points made by @Jonathan (thanks!) | {
"source": [
"https://unix.stackexchange.com/questions/48298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
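To make the resume workflow above concrete, here is a minimal sketch (not from the original answer; the paths are the asker's, and the flags assume rsync >= 3.0.0 — --partial-dir implies --partial):

```bash
# Keep partial files in a dedicated directory so an interrupted run can resume,
# and verify already-transferred data before appending to it.
sudo rsync -av --partial-dir=.rsync-partial --append-verify \
    /home/path/folder1/ /home/path/folder2
```

Re-running the same command after an interruption picks up where the previous run stopped.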
48,305 | Both bash and zsh support a shorthand of not placing a command in history if you prepend it with a space. This works great across sessions (if you've setopt histignorespace ). However, the command is still in the current session's history. How can I avoid placing a command in the current session's history (or remove it after executing it)? | With setopt histignorespace , the command is removed from the current session history. If you tested by pressing Up and seeing that the command line is still there, it's a feature. Note that the command lingers in the internal history until the next command is entered before it vanishes, allowing you to briefly reuse or edit the line. If you want to make it vanish right away without entering another command, type a space and press return. If you typed a command that didn't start with a space or didn't have the histignorespace option turned on, then there's no way to remove the command from the current session's history (you can edit the history file externally). | {
"source": [
"https://unix.stackexchange.com/questions/48305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23443/"
]
} |
48,392 | I am trying out the command $ b=5; echo `$b`;
-bash: 5: command not found but it does not print 5 as it is supposed to. What am I missing here? What does ` (backquote/backtick) mean in commands? seems to say that ` evaluates the commands within and replaces them with the output. | Text between backticks is executed and replaced by the output of the command (minus the trailing newline characters, and beware that shell behaviors vary when there are NUL characters in the output). That is called command substitution because it is substituted with the output of the command. So if you want to print 5, you can't use backticks, you can use quotation marks, like echo "$b" or just drop any quotation and use echo $b . As you can see, since $b contains 5, when using backticks bash is trying to run command 5 and since there is no such command, it fails with error message. To understand how backticks works, try running this: $ A=`cat /etc/passwd | head -n1`
$ echo "$A" cat /etc/passwd |head -n1 should print first line of /etc/passwd file. But since we use backticks, it doesn't print this on console. Instead it is stored in A variable. You can echo $A to this. Note that more efficient way of printing first line is using command head -n1 /etc/passwd but I wanted to point out that expression inside of backticks does not have to be simple. So if first line of /etc/passwd is root:x:0:0:root:/root:/bin/bash , first command will be dynamically substituted by bash to A="root:x:0:0:root:/root:/bin/bash" . Note that this syntax is of the Bourne shell. Quoting and escaping becomes quickly a nightmare with it especially when you start nesting them. Ksh introduced the $(...) alternative which is now standardized ( POSIX ) and supported by all shells (even the Bourne shell from Unix v9). So you should use $(...) instead nowadays unless you need to be portable to very old Bourne shells. Also note that the output of `...` and $(...) are subject to word splitting and filename generation just like variable expansion (in zsh, word splitting only), so would generally need to be quoted in list contexts. | {
"source": [
"https://unix.stackexchange.com/questions/48392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23489/"
]
} |
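A small follow-on sketch contrasting the two command-substitution syntaxes, since nesting is where backticks get painful (hypothetical example, not from the answer above):

```bash
# Backticks must be escaped when nested:
outer=`echo \`echo hi\``
# $(...) nests cleanly and is the POSIX-recommended form:
outer=$(echo $(echo hi))
echo "$outer"
```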
48,399 | I am having some trouble with NFS, and I'd like to try using just plain old TCP. I have no idea where to begin, though. Hardware-wise, I am using an ethernet crossover cable to network two netbooks. To network them, I type $ sudo ifconfig eth0 192.168.1.1 up && ping -c 10 -s 10 192.168.1.2 && sudo /etc/init.d/nfs-kernel-server start on the first netbook and $ sudo ifconfig eth0 192.168.1.2 up
$ ping -c 10 -s 10 192.168.1.1
$ mount /mnt/network1 on the second where /mnt/network1 is specified in /etc/fstab as 192.168.1.1:/home /mnt/network1 nfs noauto,user,exec,soft,nfsvers=2 0 0 as well as in /etc/exports (using the syntax of that file), on the first netbook. The above works fine, but the files and directories are huge. The files average about half a gigabyte apiece, and the directories are all between 15 and 50 gigabytes. I'm using rsync to transfer them, and the command (on 192.168.1.2 ) is $ rsync -avxS /mnt/network1 ~/somedir I'm not sure if there's a way to tweak my NFS settings to handle huge files better, but I'd like to see if running an rsync daemon over plain old TCP works better than rsync over NFS. So, to reiterate, how do I set up a similar network with TCP? UPDATE: So, after a good few hours of attempting to pull myself out of the morass of my own ignorance (or, as I like to think of it, to pull myself up by my own bootstraps) I came up with some useful facts. But first of all, what led me on this rabbit trail instead of simply accepting the current best answer was this: nc is an unbelievably cool program that resolutely fails to work for me. I've tried the netcat-openbsd and netcat-traditional packages with no luck whatsoever. The error I get on the receiving machine ( 192.168.1.2 ) is: me@netbook:~$ nc -q 1 -l -p 32934 | tar xv
Can't grab 0.0.0.0:32934 with bind
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors route gives: me@netbook:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default dir-615 0.0.0.0 UG 0 0 0 wlan0
link-local * 255.255.0.0 U 1000 0 0 eth0
192.168.0.0 * 255.255.255.0 U 2 0 0 wlan0
192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 But, here's the good news: having the static IP addresses set in /etc/network/interfaces , which I started doing while trying to get nc working, fixed all my NFS problems and rekindled my love for NFS. The exact configuration I used (with 192.168.1.1 for the first netbook, of course) was: auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0 With those settings, the two netbooks will be able to ping each other directly after being booted up, without even an ifup . Anyway, I'd still really like to see nc in action, so I'm hoping someone can help me debug this process. | The quick way The quickest way to transfer files over a LAN is likely not rsync, unless there are few changes. rsync spends a fair bit of time doing checksums, calculating differences, etc. If you know that you're going to be transferring most of the data anyway, just do something like this (note: there are multiple implementations of netcat ; check the manual for the correct options. In particular, yours might not want the -p ): user@dest:/target$ nc -q 1 -l -p 1234 | tar xv
user@source:/source$ tar cv . | nc -q 1 dest-ip 1234 That uses netcat ( nc ) to send tar over a raw TCP connection on port 1234. There is no encryption, authenticity checking, etc., so it's very fast. If your cross-connect is running at gigabit or less, you'll peg the network; if it's more, you'll peg the disk (unless you have a storage array, or fast disk). The v flags to tar make it print file names as it goes (verbose mode). With large files, that's practically no overhead. If you were doing tons of small files, you'd turn that off. Also, you can insert something like pv into the pipeline to get a progress indicator: user@dest:/target$ nc -q 1 -l -p 1234 | pv -pterb -s 100G | tar xv You can of course insert other things too, like gzip -1 (and add the z flag on the receiving end; the z flag on the sending end would use a higher compression level than 1, unless you set the GZIP environment variable, of course). Though gzip will probably actually be slower, unless your data really compresses. If you really need rsync If you're really only transferring a small portion of the data that has changed, rsync may be faster. You may also want to look at the -W / --whole-file option, as with a really fast network (like a cross-connect) that can be faster. The easiest way to run rsync is over ssh. You'll want to experiment with ssh ciphers to see which is fastest; it'll be either AES, ChaCha20, or Blowfish (though there are some security concerns with Blowfish's 64-bit block size), depending on whether your chip has Intel's AES-NI instructions (and your OpenSSL uses them). On a new enough ssh, rsync-over-ssh looks like this: user@source:~$ rsync -e 'ssh -c aes128-gcm@openssh.com' -avP /source/ user@dest-ip:/target For older ssh/sshd, try aes128-ctr or aes128-cbc in place of aes128-gcm@openssh.com . ChaCha20 would be chacha20-poly1305@openssh.com (also needs a new enough ssh/sshd) and Blowfish would be blowfish-cbc. OpenSSH does not allow running without a cipher. You can of course use whichever rsync options you like in place of -avP . And of course you can go the other direction, and run the rsync from the destination machine (pull) instead of the source machine (push). Making rsync faster If you run an rsync daemon, you can get rid of the crypto overhead. First, you'd create a daemon configuration file ( /etc/rsyncd.conf ), for example on the source machine (read the rsyncd.conf manpage for details): [big-archive]
path = /source
read only = yes
uid = someuser
gid = somegroup Then, on the destination machine, you'd run: user@dest:~$ rsync -avP source-ip::big-archive/ /target You can do this the other way around too (but of course you'll need to set read only to no). There are options for authentication, etc., check the manpage for details. | {
"source": [
"https://unix.stackexchange.com/questions/48399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
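As a variation on the answer's pipeline, the progress meter can also sit on the sending side; a sketch assuming the same two hosts and the netcat options used above (your netcat implementation may differ):

```bash
# Receiver: unpack whatever arrives on port 1234.
nc -q 1 -l -p 1234 | tar xv
# Sender: pv reports throughput before the bytes hit the wire.
tar cf - . | pv | nc -q 1 dest-ip 1234
```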
48,425 | For example, #!/bin/bash
while :
do
sl
done How to terminate this bash script? | The program sl purposely ignores SIGINT , which is what gets sent when you press Ctrl+C . So, firstly, you'll need to tell sl not to ignore SIGINT by adding the -e argument. If you try this, you'll notice that you can stop each individual sl , but they still repeat. You need to tell bash to exit after SIGINT as well. You can do this by putting a trap "exit" INT before the loop. #!/bin/bash
trap "exit" INT
while :
do
sl -e
done | {
"source": [
"https://unix.stackexchange.com/questions/48425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23501/"
]
} |
48,469 | Is there a similar piece of software to SourceTree , a GUI for git, for Linux? I know about Giggle, git cola, etc. I'm looking for a beautiful, easy to use GUI for git. | A nice alternative is SmartGit . It has very similar features to SourceTree and has built in 3-column conflict resolution, visual logs, pulling, pushing, merging, syncing, tagging and all things git :) | {
"source": [
"https://unix.stackexchange.com/questions/48469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4565/"
]
} |
48,492 | I can use ls -ld */ to list all the directory entries in the current directory. Is there a similarly easy way to just list all the regular files in the current directory? I know I can use find find . -maxdepth 1 -type f or stat stat -c "%F %n" * | grep "regular file" | cut -d' ' -f 3- but these do not strike me as being overly elegant. Is there a nice short way to list only the regular files (I don't care about devices, pipes, etc.) but not the sub-directories of the current directory? Listing symbolic links as well would be a plus, but is not a necessity. | ls -p | grep -v / This command lists all non-hidden files that aren't directories (regular files, links, device files , etc.). To also include hidden files, add the -A option to ls . It assumes none of the files have newline characters in their name. Adding a -q option to ls would transform all non-printable characters including newline to ? , guaranteeing they're on one line and so suitable for feeding to a line-based utility like grep and for printing on a terminal. | {
"source": [
"https://unix.stackexchange.com/questions/48492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10788/"
]
} |
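If you'd rather avoid parsing ls output at all, find can produce the same listing, including the symbolic links the asker wanted; a sketch assuming GNU find for -printf:

```bash
# Regular files and symlinks in the current directory only,
# printed without the leading ./
find . -maxdepth 1 \( -type f -o -type l \) -printf '%f\n'
```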
48,502 | I have multiple users on a server. They upload and download their files through FTP. Sometimes some heavy transfer causes high load on the server. I am wondering if there is any way to limit the FTP speed to avoid high load. Any help would be much appreciated. | This depends on which FTP server you run, so check its documentation for a rate-limiting option. For example, vsftpd has built-in throttling: set local_max_rate=51200 (a per-session limit in bytes per second, here 50 KiB/s) for local users, or anon_max_rate for anonymous ones, in /etc/vsftpd.conf and restart the service. ProFTPD offers a TransferRate directive for the same purpose. If your server has no such option, you can shape traffic on the FTP ports at the kernel level with tc instead. | {
"source": [
"https://unix.stackexchange.com/questions/48502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19087/"
]
} |
48,505 | A solution that does not require additional tools would be preferred. | Almost like nsg's answer: use a lock directory . Directory creation is atomic under Linux, Unix, *BSD, and a lot of other OSes. if mkdir -- "$LOCKDIR"
then
# Do important, exclusive stuff
if rmdir -- "$LOCKDIR"
then
echo "Victory is mine"
else
echo "Could not remove lock dir" >&2
fi
else
# Handle error condition
...
fi You can put the PID of the locking sh into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see if the locking process still executes. Lots of race conditions lie down that path. | {
"source": [
"https://unix.stackexchange.com/questions/48505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/863/"
]
} |
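A slightly fuller sketch of the lock-directory pattern, with the PID file the answer mentions and a trap so the lock is released even if the script dies (the lock path and names are illustrative):

```bash
#!/bin/sh
LOCKDIR=/tmp/myscript.lock      # hypothetical lock location
if mkdir -- "$LOCKDIR"; then
    echo "$$" > "$LOCKDIR/pid"          # for debugging only, per the caveat above
    trap 'rm -rf -- "$LOCKDIR"' EXIT    # release the lock on any exit
    # ... do important, exclusive stuff ...
else
    echo "another instance holds $LOCKDIR" >&2
    exit 1
fi
```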
48,520 | Is there a way to search, while I'm typing in less ? Just like the vim option, set incsearch . As I didn't find a proper way to do it, is there any similar tool that can do it? | You can do a search from the command line: less -ppattern filename Or, once inside less , use / followed by your pattern to do interactive searching (forwards). n and N repeat the search in the forward and reverse direction, respectively. That's the bare minimum you need to know; there are many more commands for more complex or specific searches. Edit : To respond to your updated question, there's currently no way to do immediate incremental searching with less . Have you considered using view instead (opens Vim in read-only mode, so will use your incsearch setting)? Vim can be made even more pager-like with the vimpager script. Some additional information: There is an open bug on the Ubuntu bug-tracker for incremental search support, but it doesn't look like it's going anywhere soon. Somebody has implemented incremental support on a github fork, but obviously you're going to have to compile a custom less to use that. (And apart from the Ubuntu enhancement request there is currently (as of 2016-05-17) no such enhancement request on the official less bugtracker .) | {
"source": [
"https://unix.stackexchange.com/questions/48520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
48,527 | I'm trying to connect to machine one with ssh and then connect to another machine two with ssh, but I get this error. ssh user@computerone.com 'ssh user@computertwo.com'
stdin: is not a tty Why? | By default, when you run a command on the remote machine using ssh, a TTY is not allocated for the remote session. This lets you transfer binary data, etc. without having to deal with TTY quirks. This is the environment provided for the command executed on computerone . However, when you run ssh without a remote command, it DOES allocate a TTY, because you are likely to be running a shell session. This is expected by the ssh user@computertwo.com command, but because of the previous explanation, there is no TTY available to that command. If you want a shell on computertwo , use this instead, which will force TTY allocation during remote execution: ssh -t user@computerone.com 'ssh user@computertwo.com' This is typically appropriate when you are eventually running a shell or other interactive process at the end of the ssh chain. If you were going to transfer data, it is neither appropriate nor required to add -t , but then every ssh command would contain a data-producing or -consuming command, like: ssh user@computerone.com 'ssh user@computertwo.com "cat /boot/vmlinuz"' | {
"source": [
"https://unix.stackexchange.com/questions/48527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16798/"
]
} |
48,533 | Consider this snippet: stop () {
echo "${1}" 1>&2
exit 1
}
func () {
if false; then
echo "foo"
else
stop "something went wrong"
fi
} Normally when func is called it will cause the script to terminate, which is the intended behaviour. However, if it's executed in a sub-shell, such as in result=`func` it will not exit the script. This means the calling code has to check the exit status of the function every time. Is there a way to avoid this? Is this what set -e is for? | You could decide that the exit status 77 for instance means exit any level of subshell, and do set -E
trap '[ "$?" -ne 77 ] || exit 77' ERR
(
echo here
(
echo there
(
exit 12 # not 77, exit only this subshell
)
echo ici
exit 77 # exit all subshells
)
echo not here
)
echo not here either set -E in combination with ERR traps is a bit like an improved version of set -e in that it allows you to define your own error handling. In zsh, ERR traps are inherited automatically, so you don't need set -E , you can also define traps as TRAPERR() functions, and modify them through $functions[TRAPERR] , like functions[TRAPERR]="echo was here; $functions[TRAPERR]" | {
"source": [
"https://unix.stackexchange.com/questions/48533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23451/"
]
} |
48,535 | As a part of this script, I need to be able to check if the first argument given matches the first word of file. If it does, exit with an error message; if it doesn't, append the arguments to the file. I understand how to write the if statement, but not how to use grep within a script. I understand that grep will look something like this grep ^$1 schemas.txt I feel like this should be much easier than I am making it. I'm getting an error "too many arguments" on the if statement. I got rid of the space between grep -q and then got an error binary operator expected. if [ grep -q ^$1 schemas.txt ]
then
echo "Schema already exists. Please try again"
exit 1
else
echo "$@" >> schemas.txt
fi | grep returns a different exit code if it found something (zero) vs. if it hasn't found anything (non-zero). In an if statement, a zero exit code is mapped to "true" and a non-zero exit code is mapped to false. In addition, grep has a -q argument to not output the matched text (but only return the exit status code) So, you can use grep like this: if grep -q PATTERN file.txt; then
echo found
else
echo not found
fi As a quick note, when you do something like if [ -z "$var" ]… , it turns out that [ is actually a command you're running, just like grep. On my system, it's /usr/bin/[ . (Well, technically, your shell probably has it built-in, but that's an optimization. It behaves as if it were a command). It works the same way, [ returns a zero exit code for true, a non-zero exit code for false. ( test is the same thing as [ , except for the closing ] ) | {
"source": [
"https://unix.stackexchange.com/questions/48535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23551/"
]
} |
48,579 | If I create a file and then change its permissions to 444 (read-only), how come rm can remove it? If I do this: echo test > test.txt
chmod 444 test.txt
rm test.txt ... rm will ask if I want to remove the write-protected file test.txt . I would have expected that rm can not remove such a file and that I would have to do a chmod +w test.txt first. If I do rm -f test.txt then rm will remove the file without even asking, even though it's read-only. Can anyone clarify? I'm using Ubuntu 12.04/bash. | All rm needs is write+execute permission on the parent directory. The permissions of the file itself are irrelevant. Here's a reference which explains the permissions model more clearly than I ever could: Any attempt to access a file's data requires read permission. Any
attempt to modify a file's data requires write permission. Any
attempt to execute a file (a program or a script) requires execute
permission... Because directories are not used in the same way as regular files, the
permissions work slightly (but only slightly) differently. An attempt
to list the files in a directory requires read permission for the
directory, but not on the files within. An attempt to add a file to a
directory, delete a file from a directory, or to rename a file, all
require write permission for the directory, but (perhaps surprisingly)
not for the files within . Execute permission doesn't apply to
directories (a directory can't also be a program). But that
permission bit is reused for directories for other purposes. Execute permission is needed on a directory to be able to cd into it
(that is, to make some directory your current working directory). Execute is needed on a directory to access the "inode" information of
the files within. You need this to search a directory to read the
inodes of the files within. For this reason the execute permission on
a directory is often called search permission instead. | {
"source": [
"https://unix.stackexchange.com/questions/48579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23478/"
]
} |
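The directory-permission rule is easy to verify in a scratch directory; a sketch (run it somewhere disposable):

```bash
mkdir d && touch d/f
chmod a-w d     # remove write permission on the directory
rm -f d/f       # fails with "Permission denied", despite -f
chmod u+w d     # restore write permission on the directory
rm d/f          # now succeeds, even if d/f itself is mode 444
```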
48,601 | In the grub.conf configuration file I can specify command line parameters that the kernel will use, i.e.: kernel /boot/kernel-3-2-1-gentoo root=/dev/sda1 vga=791 After booting a given kernel, is there a way to display the command line parameters that were passed to the kernel in the first place? I've found sysctl, sysctl --all but sysctl shows up all possible kernel parameters. | $ cat /proc/cmdline
root=/dev/xvda xencons=tty console=tty1 console=hvc0 nosep nodevfs ramdisk_size=32768 ip_conntrack.hashsize=8192 nf_conntrack.hashsize=8192 ro devtmpfs.mount=1
$ | {
"source": [
"https://unix.stackexchange.com/questions/48601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13547/"
]
} |
48,671 | I'm searching for an equivalent of "iwconfig eth0 mode Monitor" in Mac OS. From man iwconfig mode Monitor does the following: "the node is not associated with any cell and passively monitor all packets on the frequency" | What you're looking for is /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport . It's a binary command, which I've symlinked into /usr/local/bin/ for convenience. Creating Symlink: sudo ln -s /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport /usr/local/bin/airport Example of sniffing in monitor mode: sudo airport en1 sniff 1 This sniffs on channel 1 and saves a pcap capture file to /tmp/airportSniffXXXXXX.pcap (where XXXXXX will vary). You can view this with tcpdump -r <filename> or by opening it in wireshark . To search for active channels nearby that you can sniff, run this: sudo airport en1 -s Although you can capture any traffic, you can only effectively read it if the network is open or you have the encryption key. | {
"source": [
"https://unix.stackexchange.com/questions/48671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14686/"
]
} |
48,672 | I have an input file delimited with commas ( , ). There are some fields enclosed in double quotes that have a comma in them. Here is the sample row 123,"ABC, DEV 23",345,534.202,NAME I need to remove all the commas occurring inside the double quotes, and the double quotes as well. So the above line should get parsed as shown below 123,ABC DEV 23,345,534.202,NAME I tried the following using sed but it does not give the expected results. sed -e 's/\(".*\),\(".*\)/\1 \2/g' Any quick tricks with sed , awk or any other unix utility please? | If the quotes are balanced, you will want to remove commas between every other quote; this can be expressed in awk like this: awk -F'"' -v OFS='' '{ for (i=2; i<=NF; i+=2) gsub(",", "", $i) } 1' infile Output: 123,ABC DEV 23,345,534.202,NAME Explanation The -F'"' makes awk separate the line at the double-quote signs, which means every other field will be the inter-quote text. The for-loop runs gsub , short for globally substitute, on every other field, replacing comma ( "," ) with nothing ( "" ). The 1 at the end invokes the default code-block: { print $0 } . | {
"source": [
"https://unix.stackexchange.com/questions/48672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
48,713 | I really enjoy using control+r to reverse-search my command history. I've found a few good options I like to use with it: # ignore duplicate commands, ignore commands starting with a space
export HISTCONTROL=erasedups:ignorespace
# keep the last 5000 entries
export HISTSIZE=5000
# append to the history instead of overwriting (good for multiple connections)
shopt -s histappend The only problem for me is that erasedups only erases sequential duplicates - so that with this string of commands: ls
cd ~
ls The ls command will actually be recorded twice. I've thought about periodically running w/ cron: cat .bash_history | sort | uniq > temp.txt
mv temp.txt .bash_history This would achieve removing the duplicates, but unfortunately the order would not be preserved. If I don't sort the file first I don't believe uniq can work properly. How can I remove duplicates in my .bash_history, preserving order? Extra Credit: Are there any problems with overwriting the .bash_history file via a script? For example, if you remove an apache log file I think you need to send a nohup / reset signal with kill to have it flush its connection to the file. If that is the case with the .bash_history file, perhaps I could somehow use ps to check and make sure there are no connected sessions before the filtering script is run? | Sorting the history This command works like sort|uniq , but keeps the lines in place nl|sort -k 2|uniq -f 1|sort -n|cut -f 2 Basically, it prepends its line number to each line. After sort|uniq -ing, all lines are sorted back according to their original order (using the line number field) and the line number field is removed from the lines. This solution has the flaw that it is undefined which representative of a class of equal lines will make it into the output and therefore its position in the final output is undefined. However, if the latest representative should be chosen, you can sort the input by a second key: nl|sort -k2 -k 1,1nr|uniq -f1|sort -n|cut -f2 Managing .bash_history For re-reading and writing back the history, you can use history -a and history -w respectively. | {
"source": [
"https://unix.stackexchange.com/questions/48713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
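Putting the answer's pipeline to work on a live bash session might look like this sketch (run in the interactive shell itself; it flushes, filters, and reloads the history file, keeping the latest copy of each command, so test on a copy of ~/.bash_history first):

```bash
history -w                         # write the in-memory history to $HISTFILE
nl "$HISTFILE" | sort -k2 -k1,1nr | uniq -f1 | sort -n | cut -f2 > /tmp/hist.$$
mv /tmp/hist.$$ "$HISTFILE"        # replace the file with the deduplicated version
history -c                         # clear the session history
history -r                         # re-read the cleaned file
```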
48,750 | I would like to create many directories using mkdir . Each directory name will consist of a prefix (a string) and an index (an integer). Suppose that I would like the prefix to be "s" and the indices to range from 1 to 50. This means that I would like to create directories titled: s1 , s2 , ... , s49 , s50 Is there a way to do this automatically using mkdir ? Thanks for your time. | One for i in {1..50}; do
mkdir s"$i"
done Two mkdir s{1..50} This option works in bash , zsh and ksh93 Three mkdir $(printf "s%02i " $(seq 1 50)) | {
"source": [
"https://unix.stackexchange.com/questions/48750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
48,770 | I would like to list all files matching a certain pattern while ignoring the case. For example, I run the following commands: ls *abc* I want to see all the files that have "abc" as a part of the file name, ignoring the case, like -rw-r--r-- 1 mtk mtk 0 Sep 21 08:12 file1abc.txt
-rw-r--r-- 1 mtk mtk 0 Sep 21 08:12 file2ABC.txt Note I have searched the man page for case, but couldn't find anything. | This is actually done by your shell, not by ls . In bash , you'd use: shopt -s nocaseglob and then run your command. Or in zsh : unsetopt CASE_GLOB Or in yash: set +o case-glob and then your command. You might want to put that into .bashrc , .zshrc or .yashrc , respectively. Alternatively, with zsh: setopt extendedglob
ls -d -- (#i)*abc* (that is turn case insensitive globbing on a per-wildcard basis) With ksh93: ls -d -- ~(i:*abc*) You want globbing to work different, not ls , as those are all files passed to ls by the shell. | {
"source": [
"https://unix.stackexchange.com/questions/48770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
48,786 | Considering that POSIX is the closest thing to a common standard among all unices, I'm interested in knowing if there's a shell that supports it exclusively. While most modern shells provide support for POSIX (and will run POSIX compliant scripts without any problem), they don't do a good job at pointing out non-compliant features. Is there any shell that implements POSIX and POSIX only, in such a way that it'd throw an error for any non compliant feature? EDIT I want to clarify that I'm not asking for general tips for writing portable shell scripts. The related question mentioned in the comments already covered this. I thought of this question when I found out that bash has a --posix option but only to discover that it only affects some intialization behaviors which is not exactly what I'm looking for. | Unfortunately, 'portable' is usually a stronger requirement than 'POSIX-compliant' for shell scripts. That is, writing something that runs on any POSIX shell isn't too hard, but getting it to run on any real-world shell is harder. You can start by installing every shell in your package manager, in particular debian's posh sounds like what you want (Policy-compliant Ordinary SHell). Debian's policy is POSIX with a few exceptions ( echo -n specified, local ...). Beyond that though, testing has to cover a few shells (/bin/sh especially) on a range of platforms. I test on Solaris (/bin/sh and xpg4/sh), and BSD. AIX and HP-UX are very compliant and don't cause problems. bash is a little world of its own. I'd recommend the Autoconf guide to portable shell , which is absolutely brilliant and saves a lot of time. Large chunks of it are obsolete, but that's OK; just skip TruUnix and Ultrix and so on if you don't care! | {
"source": [
"https://unix.stackexchange.com/questions/48786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
48,805 | Why put semicolons in one and not in another? The result is the same Code one if [ "a" == "a" ]
then
echo "true"
fi Code two if [ "a" == "a" ];
then
echo "true";
fi Are the semicolons in the second code unnecessary? When is it necessary to place semicolons? | The semicolon is needed only when the end of the line is missing: if [ "a" == "a" ] ; then echo "true" ; fi Without the semicolons there, you get a syntax error. I do not understand your question about quotes. Can you be more specific? (And by the way, using = instead of == is more portable and POSIX compliant). | {
"source": [
"https://unix.stackexchange.com/questions/48805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16798/"
]
} |
48,845 | If I run: sudo chown -R user:user / Can I revert it to what it was before I ran it? | In short: no. You'll need to restore from a backup. (Some backup tools might have options to only restore permission, others can list backed-up files with their permissions and you can use that to fix your system.) If you don't have a backup, you'll need to fix all that manually. | {
"source": [
"https://unix.stackexchange.com/questions/48845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8932/"
]
} |
48,862 | Most of my aliases are of this form: alias p='pwd' I want to alias git commit so that it does git commit -v But trying to create an alias with a space gives an error: $ alias 'git commit'='git commit -v'
-bash: alias: `git commit': invalid alias name | Not a direct answer to your question (since aliases can only be one word), but you should be using git-config instead: git config --global alias.civ "commit -v" This creates a git alias so that git civ runs git commit -v . Unfortunately, AFAIK there is no way to override existing git commands with aliases . However, you can always pick a suitable alias name to live with as an alternative. | {
"source": [
"https://unix.stackexchange.com/questions/48862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
48,863 | I've been trying to get ssh-add working on a RaspberryPi running Raspbian. I can start ssh-agent , when I do it gives the following output into the terminal: SSH_AUTH_SOCK=/tmp/ssh-06TcpPflMg58/agent.2806; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2807; export SSH_AGENT_PID;
echo Agent pid 2807; If I run ps aux | grep ssh I can see it is running. Then I try to run ssh-add in order to add my key passphrase, and I get the following: Could not open a connection to your authentication agent. Any ideas? | Your shell is meant to evaluate that shell code output by ssh-agent . Run this instead: eval "$(ssh-agent)" Or if you've started ssh-agent already, copy-paste its output into your shell prompt (assuming you're running a Bourne-like shell). ssh commands need to know how to talk to the ssh-agent ; they know that from the SSH_AUTH_SOCK environment variable. | {
"source": [
"https://unix.stackexchange.com/questions/48863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21525/"
]
} |
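On a headless box like the asker's Raspberry Pi, the usual sequence is a sketch like this (the key path is an assumption; adjust to your key file):

```bash
# Start an agent only if this shell doesn't already know about one.
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)"
fi
ssh-add ~/.ssh/id_rsa   # prompts once for the passphrase
ssh-add -l              # list loaded keys to confirm
```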
48,870 | I realize there are /etc/profile and /etc/bashrc files for setting global environment variables and maybe I'm just misunderstanding their purposes, but... Is there a global bash_profile file? I'm using Mac OS X | It's not called bash_profile , but the standard place for global bash configuration is /etc/bash.bashrc . It's usual to call this from /etc/profile if the shell is bash. For example, in my /etc/profile I have: if [ "$PS1" ]; then
if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
# The file bash.bashrc already sets the default PS1.
# PS1='\h:\w\$ '
if [ -f /etc/bash.bashrc ]; then
. /etc/bash.bashrc
fi
fi
fi In terms of usage, /etc/profile provides system-wide configuration for all Bourne compatible shells (sh, bash, ksh, etc.). There's normally no need for an equivalent /etc/bash_profile , because the intention of the profile file is to control behaviour for login shells. Normally anything you want to do there is not going to be bash-specific. /etc/bash.bashrc is bash-specific, and will be run for both login and non-login shells. To further complicate things, it looks like OS X doesn't even have an /etc/bash.bashrc . This is probably related to the fact that Terminals in OS X default to running as login shells , so the distinction is lost: An exception to the terminal window guidelines is Mac OS X’s
Terminal.app, which runs a login shell by default for each new
terminal window, calling .bash_profile instead of .bashrc. Other GUI
terminal emulators may do the same, but most tend not to. I don't run OS X, so the extent of my knowledge ends there. | {
"source": [
"https://unix.stackexchange.com/questions/48870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
48,973 | I want to execute a simple command just before the computer shuts down (timing is not essential). For startup, I can use /etc/rc.local; is there something similar for shutdown? Note that I would still like to use the integrated shutdown button from the menu; i.e. I don't want to use a custom script every time I shut down via the terminal - it needs to be automatic. | Linux Mint is based on Ubuntu, so I'm guessing the runlevel system is probably the same. On Ubuntu, scripts for the different runlevels are executed according to their presence in the /etc/rc[0-6].d directories. Runlevel 0 corresponds to shutdown, and 6 to reboot. Typically the script itself is stored in /etc/init.d , and then symlinks are placed in the directories corresponding to the runlevels you require. So in your case, write your script, store it in /etc/init.d/ , then create a symlink in each of /etc/rc0.d and /etc/rc6.d (if you want both) pointing to your script. The scripts in each runlevel directory will be executed in asciibetical order , so if the order within the runlevel matters to you, choose the name of your symlink accordingly. | {
"source": [
"https://unix.stackexchange.com/questions/48973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23792/"
]
} |
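For a SysV-init system like the one described, wiring a script into the halt and reboot runlevels might look like this sketch (the script name is hypothetical; K-prefixed links run on the way down, and the rc system invokes them with the argument "stop"):

```bash
sudo cp mytask.sh /etc/init.d/mytask
sudo chmod +x /etc/init.d/mytask
sudo ln -s /etc/init.d/mytask /etc/rc0.d/K99mytask   # runs at shutdown
sudo ln -s /etc/init.d/mytask /etc/rc6.d/K99mytask   # runs at reboot
```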
48,984 | I'd like Terminal to open near the bottom of my screen. Is there a way to set the default size and position? I'm using Linux Mint 13, Cinnamon. | Most terminals can be launched using the geometry switch allowing you to specify terminal's size and position (COLUMNSxROWS+X+Y) e.g.: gnome-terminal --geometry 73x31+100+300 or xterm -geometry 93x31+100+350 If you want to make the above permanent, copy the terminal launcher (terminal's .desktop file) from /usr/share/applications/ to ~/.local/share/applications/ and edit the Exec field accordingly. E.g. for gnome-terminal Exec=gnome-terminal --geometry 73x31+100+300 Having that custom launcher in your $HOME would preserve your settings after terminal-package upgrades (that would otherwise overwrite the default .desktop file in /usr/share/applications ). | {
"source": [
"https://unix.stackexchange.com/questions/48984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23295/"
]
} |
48,994 | I want to run a program in an empty environment (i.e. with no environment variables set). How to do this in bash? | You can do this with env : env -i your_command Contrary to comments below, this does completely clear out the environment, but it does not prevent your_command setting new variables. In particular, running a shell will cause the /etc/profile to run, and the shell may have some built-in settings also. You can check this with: env -i env i.e. wipe the environment and then print it. The output will be blank. | {
"source": [
"https://unix.stackexchange.com/questions/48994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/903/"
]
} |
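A common refinement is to wipe the environment but whitelist the few variables the program actually needs; a sketch using the answer's placeholder command:

```bash
# Empty environment except for an explicit PATH and HOME.
env -i PATH=/usr/bin:/bin HOME="$HOME" your_command
```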
49,042 | I am using the bash shell. I frequently use nohup to ensure that my processes are not stopped when I close the shell/terminal that started them. I use a syntax like: nohup myprocess When starting, nohup gives the message: nohup: ignoring input and appending output to 'nohup.out' Then, nohup gives no more output to the screen; it is all written to nohup.out . Frequently, however, I would like to monitor the progress of my computation. I can do this by reading nohup.out using vi or tail , but this can be time consuming to do a lot, especially when my computations take several hours. Is there any way that I can print the output to both nohup.out (in case I lose internet connection and thus the terminal that started the process is closed) and to the screen? Thanks for your time. | You can run nohup yourprocess & tail -f nohup.out | {
"source": [
"https://unix.stackexchange.com/questions/49042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
49,050 | I am trying to find a music management software that can sync playlists onto my Android phone running Cyanogenmod 9. BTW, this handset does not have an Internet connection. Specifically, I want the device to appear in the software so that I can drag and drop a playlist into it. When this happens, I want the software to not only copy the songs over, but the playlist itself (whether a cue sheet, m3u, or something like that). The more seamless the integration the better. This way, the music player on my handset can start playing the playlist right away and I won't have to reconstruct the list. My search so far seems to indicate Rythmbox and Amarok can do this (can someone confirm?), but are there any others? I would like to know what my choices are before settling on one music manager. | | {
"source": [
"https://unix.stackexchange.com/questions/49050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1375/"
]
} |
49,053 | I have a Linux (RH 5.3) machine. I need to add 10 days to the current date to get a new date (an expiration date), for example: # date
Sun Sep 11 07:59:16 IST 2012 So I need to get NEW_expration_DATE = Sun Sep 21 07:59:16 IST 2012 Please advise how to calculate the new expiration date (with bash , ksh , or by manipulating the date command?) | You can just use the -d switch and provide a date to be calculated date
Sun Sep 23 08:19:56 BST 2012
NEW_expration_DATE=$(date -d "+10 days")
echo $NEW_expration_DATE
Wed Oct 3 08:12:33 BST 2012 -d, --date=STRING display time described by STRING, not ‘now’ This is quite a powerful tool as you can do things like date -d "Sun Sep 11 07:59:16 IST 2012+10 days"
Fri Sep 21 03:29:16 BST 2012 or TZ=IST date -d "Sun Sep 11 07:59:16 IST 2012+10 days"
Fri Sep 21 07:59:16 IST 2012 or prog_end_date=`date '+%C%y%m%d' -d "$end_date+10 days"` So if $end_date = 20131001 then $prog_end_date = 20131011 . | {
"source": [
"https://unix.stackexchange.com/questions/49053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
49,077 | I have a script run from a non-privileged user's crontab that invokes some commands using sudo . Except it doesn't. The script runs fine but the sudo'ed commands silently fail. The script runs perfectly from a shell as the user in question. Sudo does not require a password. The user in question has (root) NOPASSWD: ALL access granted in /etc/sudoers . Cron is running and executing the script. Adding a simple date > /tmp/log produces output at the right time. It's not a permissions problem. Again the script does get executed, just not the sudo'ed commands. It's not a path problem. Running env from inside the script being run shows the correct $PATH variable that includes the path to sudo. Running it using a full path doesn't help. The command being executed is being given the full path name. Trying to capture the output of the sudo command including STDERR doesn't show anything useful. Adding sudo echo test 2>&1 > /tmp/log to the script produces a blank log. The sudo binary itself executes fine and recognizes that it has permissions even when run from cron inside the script. Adding sudo -l > /tmp/log to the script produces the output: User ec2-user may run the following commands on this host: (root) NOPASSWD: ALL Examining the exit code of the command using $? shows it is returning an error (exit code: 1 ), but no error seems to be produced. A command as simple as /usr/bin/sudo /bin/echo test returns the same error code. What else could be going on? This is a recently created virtual machine running the latest Amazon Linux AMI. The crontab belongs to the user ec2-user and the sudoers file is the distribution default. | sudo has some special options in its permissions file, one of which allows a restriction on its usage to shells that are running inside a TTY , which cron is not. Some distros, including the Amazon Linux AMI, have this enabled by default. The /etc/sudoers file will look something like this: # Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
# You have to run "ssh -t hostname sudo <cmd>".
#
Defaults requiretty
#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#
Defaults !visiblepw If you had captured output to STDERR at the level of the shell script rather than the sudo command itself, you would have seen a message something like this: sorry, you must have a tty to run sudo The solution is to allow sudo to execute in non-TTY environments either by removing or commenting out these options: #Defaults requiretty
#Defaults !visiblepw | {
"source": [
"https://unix.stackexchange.com/questions/49077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1925/"
]
} |
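Rather than disabling requiretty globally, sudoers also accepts per-user Defaults, which keeps the restriction in place for everyone else; a sketch for the asker's ec2-user account (edit with visudo, never directly):

```bash
# In /etc/sudoers (via visudo): exempt a single account from requiretty.
Defaults:ec2-user !requiretty
```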
49,214 | I'm working in Mac OSX, so I guess I'm using bash...? Sometimes I enter something that I don't want to be remembered in the history. How do I remove it? | Preventative measures If you want to run a command without saving it in history, prepend it with an extra space prompt$ echo saved
prompt$ echo not saved \
> # ^ extra space For this to work you need either ignorespace or ignoreboth in HISTCONTROL . For example, run HISTCONTROL=ignorespace To make this setting persistent, put it in your .bashrc . Post-mortem clean-up If you've already run the command, and want to remove it from history, first use history to display the list of commands in your history. Find the number next to the one you want to delete (e.g. 1234) and run history -d 1234 Additionally, if the line you want to delete has already been written to your $HISTFILE (which typically happens when you end a session by default), you will need to write back to $HISTFILE, or the line will reappear when you open a new session: history -w | {
"source": [
"https://unix.stackexchange.com/questions/49214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11815/"
]
} |
49,249 | Using the top command with redirection works fine: top > top.log Now I want to use grep to filter a certain line: top | grep "my_program" > top.log But the log file will remain empty. But grep delivers an output when using top | grep "my_program" Where my_program has to be replaced by a running program to see some output. Why does my approach not work ? And how can I fix it ? | I get the same behavior that you describe. On Ubuntu 11.10 top | grep "my_program" > top.log does not produce any output. I believe the reason for this is that grep is buffering its output. To tell GNU grep to spit out output line-by-line, use the --line-buffered option: top | grep --line-buffered "my_program" > top.log See also this SO question for other potential solutions. | {
"source": [
"https://unix.stackexchange.com/questions/49249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
49,263 | Is there a linux command that I'm overlooking that makes it possible to do something along the lines of:
(pseudo) $ mkdir -R foo/bar/zoo/andsoforth Or is there no alternative but to make the directories one at a time? | $ mkdir -p foo/bar/zoo/andsoforth Parameter p stands for 'parents'. | {
"source": [
"https://unix.stackexchange.com/questions/49263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
49,335 | I checked ~/.libvirt* and ~/.config/libvirt* ; none of them seems to contain the config file of the created VM. Where is the config stored? I'm not running virt-manager as the root user. | Oddly enough, under /etc/libvirt . virt-manager doesn't run as root, but it communicates with libvirtd , which does . | {
"source": [
"https://unix.stackexchange.com/questions/49335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
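Since the domain XML lives under /etc/libvirt and is managed by libvirtd, the supported way to read or change it is through virsh rather than editing the files directly; a sketch (the domain name is hypothetical):

```bash
virsh dumpxml myvm > myvm.xml   # print the VM's current configuration
virsh edit myvm                 # edit it via libvirtd, which validates and redefines it
```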
49,377 | I have a text file ( devel.xml ). I added the word SUBSTITUTETHIS to it in order to replace this string with the content of a different file ( temp.TXT ). The closest thing I have is this: sed -i -e "/SUBSTITUTETHIS/r temp.TXT" -e "s///" devel.txt; This inserts the content after the string, and then deletes the string afterwards. Is this the best way to do it? | What you've done is to remove SUBSTITUTETHIS wherever it appears in the file (but not the rest of the line where it appears) and insert the content of temp.TXT below that line. If SUBSTITUTETHIS appears multiple times on a line, only the first occurrence is removed, and only one copy of temp.TXT is added. If you want to replace the whole line when SUBSTITUTETHIS appears, use the d command. Since you need to run both r and d when there's a match, put them in a braced group. sed -e '/SUBSTITUTETHIS/ {' -e 'r temp.TXT' -e 'd' -e '}' -i devel.txt Some sed implementations let you use semicolons to separate commands and omit separators altogether around braces, but you still need a newline to terminate the argument to the r command: sed -e '/SUBSTITUTETHIS/ {r temp.TXT
d}' -i devel.txt If you want to replace SUBSTITUTETHIS by the content of the file, but retain what comes before and after it on the line, it's more complicated. The simplest method is to include the content of the file in the sed command; note that you'll have to properly quote its contents. sed -e "s/SUBSTITUTETHIS/$(<temp.TXT sed -e 's/[\&/]/\\&/g' -e 's/$/\\n/' | tr -d '\n')/g" -i devel.txt Or use Perl. This is short but runs cat once for each substitution: perl -pe 's/SUBSTITUTETHIS/`cat temp.TXT`/ge' -i devel.txt To read the file once when the script starts, and avoid depending on a shell command: perl -MFile::Slurp -pe 'BEGIN {$r = read_file("temp.TXT"); chomp($r)}
s/SUBSTITUTETHIS/$r/ge' -i devel.txt (presented on two lines for readability but you can omit the line break). If the file name is variable, to avoid quoting issues, pass it to the script via an environment variable: replacement_file=temp.TXT perl -MFile::Slurp -pe 'BEGIN {$r = read_file($replacement_file); chomp($r)}
s/SUBSTITUTETHIS/$r/ge' -i devel.txt | {
"source": [
"https://unix.stackexchange.com/questions/49377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23976/"
]
} |
49,496 | I am using diff -r a b to recursively compare directories a and b . It often happens though that there are some broken links (the same broken links in both a and b directories and pointing to the same, non-existing targets). diff then outputs error messages for those cases and exits with a non-zero exit code, however I would like it to stay silent, and exit with 0 as the directories are the same in my book. How can I do that? | For version 3.3 or later of diff , you should use the --no-dereference option, as described in Pete Harlan's answer . Unfortunately, older versions of diff don't support ignoring symlinks : Some files are neither directories nor regular files: they are unusual
files like symbolic links, device special files, named pipes, and
sockets. Currently, diff treats symbolic links like regular files;
it treats other special files like regular files if they are specified
at the top level, but simply reports their presence when comparing
directories. This means that patch cannot represent changes to such
files. For example, if you change which file a symbolic link points
to, diff outputs the difference between the two files, instead of
the change to the symbolic link. diff should optionally report changes to special files specially,
and patch should be extended to understand these extensions. If all you want is to verify an rsync (and presumably fix what's missing), then you could just run the rsync command a second time. If you don't want to do that, then check-summing the directory may be sufficient. If you really want to do this with diff , then you can use find to skip the symlinks, and run diff on each file individually. Pass your directories a and b in as arguments: #!/bin/bash
# Skip files in $1 which are symlinks
for f in $(find "$1"/* ! -type l)
do
# Suppress details of differences
  diff -rq "$f" "$2/${f##*/}"
done or as a one-liner: for f in $(find a/* ! -type l); do diff -rq "$f" "b/${f##*/}"; done This will identify files that differ in content, or files which are in a but not in b . Note that: since we are skipping symlinks entirely, this won't notice if symlink names are not present in b . If you required that, you would need a second find pass to identify all the symlinks and then explicitly check for their existence in b . Extra files in b will not be identified, since the list is constructed from the contents of a . This probably isn't a problem for your rsync scenario.
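With diff 3.3 or later none of this is needed; the option mentioned at the top of this answer handles it directly: diff -r --no-dereference a b
Identical broken symlinks on both sides are then compared as links instead of being followed. | {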
"source": [
"https://unix.stackexchange.com/questions/49496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24044/"
]
} |
49,546 | I need to be able to record the mouse movements every so often (every .2 of a second for example) and have them in a coordinate representation instead of a diff. I found the following script: #!/bin/bash
while :
do
cat /dev/input/mice | read -n 1
date
sleep 1
done But it doesn't seem to print anything to the terminal (or perhaps it's all gibberish). Other discussions suggest that /dev/input/mice is deprecated. On top of that, /dev/input/mice wouldn't actually have the data in a friendly format. Am I going to have to do the conversion manually (from the format in the /dev/input files), or is there an API for this? | Try the following command: xdotool getmouselocation 2>&1 |
sed -rn '${s/x:([0-9]+) y:([0-9]+) .*/\1 \2/p}' See http://www.semicomplete.com/projects/xdotool/
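Since the question asks for coordinates every 0.2 seconds, a minimal polling loop around that command might look like this (assuming xdotool is installed and you are in an X session): while :
do
    xdotool getmouselocation 2>&1 |
    sed -rn '${s/x:([0-9]+) y:([0-9]+) .*/\1 \2/p}'
    sleep 0.2
done
Each iteration prints one "X Y" coordinate pair. | {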
"source": [
"https://unix.stackexchange.com/questions/49546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15582/"
]
} |
49,601 | I want to do non-greedy pattern (regular expression) matching in awk .
Here is an example: echo "@article{gjn, Author = {Grzegorz J. Nalepa}, " | awk '{ sub(/@.*,/,""); print }' Is it possible to write a regular expression that selects the shorter string? @article{gjn, instead of this long string?: @article{gjn, Author = {Grzegorz J. Nalepa}, I want to get this result: Author = {Grzegorz J. Nalepa}, I have another example: echo " , article{gjn, Author = {Grzegorz J. Nalepa}, " | awk '{ sub(/ , [^,]*,/,""); print }'
Note that I changed the @ characters to comma ( , ) characters
in the first position of both the input string and the regular expression
(and also changed .* to [^,]* ).
Is it possible to write a regular expression that selects the shorter string? , Author = {Grzegorz J. Nalepa}, instead of the longer string?: ,article{gjn, Author = {Grzegorz J. Nalepa}, I want to get this result: ,article{gjn | If you want to select @ and up to the first , after that, you need to specify it as @[^,]*, That is @ followed by any number ( * ) of non-commas ( [^,] ) followed by a comma ( , ). That approach works as the equivalent of @.*?, , but not for things like @.*?string , that is where what's after is more than a single character. Negating a character is easy, but negating strings in regexps is a lot more difficult . A different approach is to pre-process your input to replace or prepend the string with a character that otherwise doesn't occur in your input: gsub(/string/, "\1&") # pre-process
gsub(/@[^\1]*\1string/, "")
gsub(/\1/, "") # revert the pre-processing If you can't guarantee that the input won't contain your replacement character ( \1 above), one approach is to use an escaping mechanism: gsub(/\1/, "\1\3") # use \1 as the escape character and escape itself as \1\3
# in case it's present in the input
gsub(/\2/, "\1\4") # use \2 as our maker character and escape it
# as \1\4 in case it's present in the input
gsub(/string/, "\2&") # mark the "string" occurrences
gsub(/@[^\2]*\2string/, "")
# then roll back the marking and escaping
gsub(/\2/, "")
gsub(/\1\4/, "\2")
gsub(/\1\3/, "\1") That works for fixed strings but not for arbitrary regexps like for the equivalent of @.*?foo.bar .
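Applied to the original example, the single-character negation is all that's needed: echo "@article{gjn, Author = {Grzegorz J. Nalepa}, " | awk '{ sub(/@[^,]*,/,""); print }'
which prints " Author = {Grzegorz J. Nalepa}, " — the shorter match the question asks for. | {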
"source": [
"https://unix.stackexchange.com/questions/49601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24098/"
]
} |
49,623 | Symlinks have limitations in how functions like ls , mv , and cp can operate on them because unlike shell initiated commands like cd , these functions do not have information about how the user accessed the directory with respect to the logical path (see related post ). It seems like using the mount --bind option instead can get around this, offering increased functionality and compatibility with samba and other file servers because the mounted directory will then have two independent physical paths, instead of a link. I would like to replace all of my symbolic links with references using the mount --bind option but this would mean mounting over 150 points in fstab. Are there any performance issues that could potentially arise from this or any other drawbacks that I should consider? | With mount --bind , a directory tree exists in two (or more) places in the directory hierarchy. This can cause a number of problems. Backups and other file copies will pick up all copies. It becomes difficult to specify that you want to copy a filesystem: you'll end up copying the bind-mounted files twice. Searches with find , grep -r , locate , etc., will traverse all the copies, and so on. You will not gain any “increased functionality and compatibility” with bind mounts. They look like any other directory, which most of the time is not desirable behavior. For example, Samba exposes symbolic links as directories by default; there is nothing to gain with using a bind mount. On the other hand, bind mounts can be useful to expose directory hierarchies over NFS. You won't have any performance issues with bind mounts. What you'll have is administration headaches. Bind mounts have their uses, such as making a directory tree accessible from a chroot, or exposing a directory hidden by a mount point (this is usually a transient use while a directory structure is being remodeled). Don't use them if you don't have a need. Only root can manipulate bind mounts. They can't be moved by ordinary means; they lock their location and the ancestor directories. Generally speaking, if you pass a symbolic link to a command, the command acts on the link itself if it operates on files, and on the target of the link if it operates on file contents. This goes for directories too. This is usually the right thing. Some commands have options to treat symbolic links differently, for example ls -L , cp -d , rsync -l . Whatever you're trying to do, it's far more likely that symlinks are the right tool, than bind mounts being the right tool.
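For reference, creating and removing a bind mount looks like this (root required; the paths are placeholders): mount --bind /srv/data /home/user/data
umount /home/user/data
and making 150 of them permanent would indeed mean 150 such entries in fstab. | {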
"source": [
"https://unix.stackexchange.com/questions/49623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24104/"
]
} |
49,626 | The header looks like this: #!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel
#
# Make sure that the script will "exit 0" on success or any other
# value on error. What is the reason for this file (it does not contain much), and what commands do you usually put in it? What is a "multiuser runlevel"? (I guess rc is "run commands"?) | A runlevel is a state of the system, indicating whether it is in the process of booting or rebooting or shutting down, or in single-user mode, or running normally. The traditional init program handles these actions by switching to the corresponding runlevel. Under Linux, the runlevels are by convention : S while booting, 0 while shutting down, 6 while rebooting, 1 in single-user mode and 2 through 5 in normal operation. Runlevels 2 through 5 are known as multiuser runlevels since they allow multiple users to log in, unlike runlevel 1 which is intended for only the system administrator. When the runlevel changes, init runs rc scripts (on systems with a traditional init — there are alternatives, such as Upstart and Systemd ). These rc scripts typically start and stop system services, and are provided by the distribution. The script /etc/rc.local is for use by the system administrator. It is traditionally executed after all the normal system services are started, at the end of the process of switching to a multiuser runlevel. You might use it to start a custom service, for example a server that's installed in /usr/local . Most installations don't need /etc/rc.local , it's provided for the minority of cases where it's needed.
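As a concrete illustration, a minimal /etc/rc.local that starts a locally-installed server might look like this (mydaemon is a hypothetical program): #!/bin/sh -e
# start a server installed under /usr/local at the end of boot
/usr/local/bin/mydaemon &
exit 0
The explicit exit 0 satisfies the requirement stated in the header comment from the question. | {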
"source": [
"https://unix.stackexchange.com/questions/49626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13136/"
]
} |
49,650 | I'm trying to use xmodmap to remap Alt / Super keys on Dell L100 keyboard, and have trouble getting the keycodes. For instance, using xev doesn't give me keycode for Alt FocusOut event, serial 36, synthetic NO, window 0x4a00001,
mode NotifyGrab, detail NotifyAncestor
FocusIn event, serial 36, synthetic NO, window 0x4a00001,
mode NotifyUngrab, detail NotifyAncestor
KeymapNotify event, serial 36, synthetic NO, window 0x0,
keys: 122 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 For Right Super key, xev and showkey give different keycodes -- 134 and 126 respectively. What's going on with these keycodes? I tried getting keycodes from showkey -k , and using xmodmap file below, but that gave a weird map which remapped b key: clear Mod1
clear Control
keycode 125 = Meta_L
keycode 126 = Meta_R
keycode 58 = Control_L
keycode 56 = Control_L
keycode 100 = Control_R
add Control = Control_L Control_R
add Mod1 = Meta_L Meta_R | There are a lot of players between your keyboard and the process that finally handles the keyboard event. Among the major pieces of the landscape are the fact that the X system has its own keyboard-handling layer, and X associates different "keycodes" with keys than your Linux base system does. The showkey command is showing you the keycodes in Linux-base-system lingo. For xmodmap you need the X keycodes, which are what xev is displaying. So long as you're planning to work in X and do your key rebinding with xmodmap , then, ignore showkeys and just listen to what xev says. What you want to look for in your xev output are blocks like this: KeyPress event, serial 27, synthetic NO, window 0x1200001,
root 0x101, subw 0x0, time 6417361, (340,373), root:(342,393),
state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
KeyRelease event, serial 27, synthetic NO, window 0x1200001,
root 0x101, subw 0x0, time 6417474, (340,373), root:(342,393),
state 0x8, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XFilterEvent returns: False xev tends to generate a lot of output, especially when you move your mouse. You may have to scroll back a while to find the output you're looking for. In the previous output, we see that the keysym Alt_L is associated with the X keycode 64 .
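Once you have the X keycode, a remap can be tested interactively before committing it to a file (64 is the keycode from the example above; yours may differ): xmodmap -e 'keycode 64 = Meta_L'
When the -e experiments behave as expected, collect the same lines into your xmodmap file. | {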
"source": [
"https://unix.stackexchange.com/questions/49650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24118/"
]
} |
49,779 | I have a Unix-like OS installed without a windowing environment; i.e. , just a text-mode console and no GUI. Is it possible to change the font used by the console? To be clear, I am not talking about the terminal emulator that comes with a desktop environment like KDE or GNOME. | If you use the Linux console, the best way I found is: in /etc/default/console-setup put, for example CHARMAP="UTF-8"
CODESET="Lat7"
FONTFACE="Terminus"
FONTSIZE="28x14" Another way is to use setfont from the kbd package: setfont /usr/share/consolefonts/Lat7-Terminus28x14.psf This works for my Debian; it may be different for you. In Debian, you can also run dpkg-reconfigure -plow console-setup to be prompted for the various console settings and pick them from menus. Edit - I put together a small page how to setup the font colors . The section that is relevant for this post has the header "the Linux VTs" (= ttys, or "console"). | {
"source": [
"https://unix.stackexchange.com/questions/49779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223471/"
]
} |
49,786 | I have a need to find all of the writable storage devices attached to a given machine, whether or not they are mounted. The dopey way to do this would be to try every entry in /dev that corresponds to a writable device (hd* and sd*). Is there a better solution, or should I stick with this one? | If one is interested only in block storage devices, one can use lsblk from the widely-available util-linux package: $ lsblk -o KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sda disk 149.1G TOSHIBA MK1637GS
sda1 part 23.3G
sda2 part 28G
sda3 part 93.6G
sda4 part 4.3G
sr0 rom 1024M CD/DVDW TS-L632M It lends itself well to scripting, with many other columns available.
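For instance, to get at the original question — writable whole disks only — one possible filter (assuming a reasonably recent util-linux) is: lsblk -ndo KNAME,RO,TYPE | awk '$2 == 0 && $3 == "disk" {print $1}'
Here -n drops the header, -d skips partitions, and the RO column is 0 for writable devices. | {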
"source": [
"https://unix.stackexchange.com/questions/49786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6388/"
]
} |
49,886 | I accidentally screwed up my tmux terminal after cat ing a binary file. Now my tmux is messed up. Detaching and re-attaching doesn't help, nor does a redraw (C-b r). Running reset only redraws the active pane, not the rest. Running stty sane either in- or outside tmux doesn't help either. Within each pane, I have normal feedback from what I type (the initial call of reset immediately after the terminal got messed up solved this), but I can't seem to fix the status-bar. In gnome-terminal , every update to the status-bar causes the status-bar to grow. For example, this happens when I run a new application, when I switch panes, or when I resize a pane. Forcing a redraw (by C-b r , by running reset or via the gnome-terminal menu) shrinks the status-bar back to a single line, but it remains corrupted. In xterm , the status-bar does remain within one line, but it remains corrupted. I'm using tmux 1.5. How do I fix my tmux terminal? This bug report from 2008 seems to describe the same issue, but it was marked as fixed. I don't know in what version it was fixed, but tmux 1.5 ought to include a fix from 2008. | Try renaming window 4 Switch to window 4: Control + b 4 Rename window: Control + b , Control + u myNewname (That's a comma in the middle) Or: Control + b :rename-window myNewname | {
"source": [
"https://unix.stackexchange.com/questions/49886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15654/"
]
} |
49,906 | So I was surfing the net and stumbled upon this article . It basically states that FreeBSD , starting from Version 10 and above will deprecate GCC in favor of Clang/LLVM . From what I have seen around the net so far, Clang/LLVM is a fairly ambitious project, but in terms of reliability it can not match GCC . Are there any technical reasons FreeBSD are choosing LLVM as their compiler infrastructure, or does the whole matter boil down to the eternal GNU/GPL vs. BSD licenses? This question has (somehow) relevant information about the usage of GCC in FreeBSD | Summary: The primary reason for switching from GCC to Clang is the incompatibility of GCC's GPL v3 license with the goals of the FreeBSD project . There are also political issues to do with corporate investment, as well as user base requirements. Finally, there are expected technical advantages to do with standards compliance and ease of debugging. Real world performance improvements in compilation and execution are code-specific and debatable; cases can be made for both compilers. FreeBSD and the GPL: FreeBSD has an uneasy relationship with the GPL. BSD-license advocates believe that truly free software has no usage restrictions . GPL advocates believe that restrictions are necessary in order to protect software freedom, and specifically that the ability to create non-free software from free software is an unjust form of power rather than a freedom. The FreeBSD project, where possible, tries to avoid the use of the GPL : Due to the additional complexities that can evolve in the commercial
use of GPL software, we do, however, endeavor to replace such software
with submissions under the more relaxed FreeBSD license whenever
possible. FreeBSD and the GPL v3: The GPL v3 explicitly forbids the so-called Tivoisation of code, a loophole in the GPL v2 which enabled hardware restrictions to disallow otherwise legal software modifications by users. Closing this loophole was an unacceptable step for many in the FreeBSD community: Appliance vendors in particular have the most to lose if the large
body of software currently licensed under GPLv2 today migrates to the
new license. They will no longer have the freedom to use GPLv3
software and restrict modification of the software installed on their
hardware... In short, there is a large
base of OpenSource consumers that are suddenly very interested in
understanding alternatives to GPL licensed software. Because of GCC's move to the GPL v3, FreeBSD was forced to remain using GCC 4.2.1 (GPL v2), which was released way back in 2007 , and is now significantly outdated. The fact that FreeBSD did not move to use more modern versions of GCC, even with the additional maintenance headaches of running an old compiler and backporting fixes, gives some idea of the strength of the requirement to avoid the GPL v3. The C compiler is a major component of the FreeBSD base, and " one of the (tentative) goals for FreeBSD 10 is a GPL-free base system ". Corporate investment: Like many major open source projects, FreeBSD receives funding and development work from corporations. Although the extent to which FreeBSD is funded or given development by Apple is not easily discoverable, there is considerable overlap because Apple's Darwin OS makes use of substantial BSD-originated kernel code . Additionally, Clang itself was originally an in-house Apple project, before being open-sourced in 2007 . Since corporate resources are a key enabler of the FreeBSD project, meeting sponsor needs is probably a significant real-world driver . Userbase: FreeBSD is an attractive open source option for many companies, because the licensing is simple, unrestrictive and unlikely to lead to lawsuits. With the arrival of GPL v3 and the new anti-Tivoisation provisions , it has been suggested that there is an accelerating, vendor-driven trend towards more permissive licenses . Since FreeBSD's perceived advantage to commercial entities lies in its permissive license, there is increasing pressure from the corporate user base to move away from GCC, and the GPL in general. Issues with GCC: Apart from the license, using GCC has some perceived issues . GCC is not fully-standards compliant, and has many extensions not found in ISO standard C . At over 3 million lines of code, it is also " one of the most complex and free/open source software projects ". This complexity makes distro-level code modification a challenging task. Technical advantages: Clang does have some technical advantages compared to GCC . Most notable are much more informative error messages and an explicitly designed API for IDEs, refactoring and source code analysis tools. Although the Clang website presents plots indicating much more efficient compilation and memory usage, real world results are quite variable , and broadly in line with GCC performance. In general, Clang-produced binaries run more slowly than the equivalent GCC binaries: While using LLVM is faster at building code than GCC... in most
instances the GCC 4.5 built binaries had performed better than
LLVM-GCC or Clang... in the rest of the tests the performance was
either close to that of GCC or well behind. In some tests, the
performance of the Clang generated binaries was simply awful. Conclusion: It's highly unlikely that compilation efficiency would be a significant motivator to take the substantial risk of moving a large project like FreeBSD to an entirely new compiler toolchain, particularly when binary performance is lacking. However, the situation was not really tenable. Given a choice between 1) running an out-of-date GCC, 2) Moving to a modern GCC and being forced to use a license incompatible with the goals of the project or 3) moving to a stable BSD-licensed compiler, the decision was probably inevitable. Bear in mind that this only applies to the base system, and support from the distribution; nothing prevents a user from installing and using a modern GCC on their FreeBSD box themselves. | {
"source": [
"https://unix.stackexchange.com/questions/49906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20260/"
]
} |
49,913 | I'd like to write something like this: $ ls **.py in order to get all .py filenames, recursively walking a directory hierarchy. Even if there are .py files to find, the shell (bash) gives this output: ls: cannot access **.py: No such file or directory Any way to do what I want? EDIT: I'd like to specify that I'm not interested in the specific case of ls , but the question is about the glob syntax. | In order to do recursive globs in bash, you need the globstar feature from Bash version 4 or higher. From the Bash documentation : globstar If set, the pattern ** used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a / , only directories and subdirectories match. For your example pattern: shopt -s globstar
ls -d -- **/*.py
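If your bash predates version 4 (no globstar), the usual fallback is find: find . -name '*.py'
Note that, unlike the glob, find also descends into hidden directories. | {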
"source": [
"https://unix.stackexchange.com/questions/49913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8115/"
]
} |
49,936 | With a netcat listener like: nc -l <port> < ~/.bashrc I can grab my .bashrc on a new machine (doesn't have nc or LDAP) with: cat < /dev/tcp/<ip>/<port> > ~/.bashrc My question is: Is there a way to mimic the capabilities of nc -l <port> in my first line with /dev/tcp instead of nc ? The machines I'm working on are extremely hardened lab/sandbox environment RHEL (no ssh, no nc, no LDAP, no yum, I can't install new software, and they are not connected to the internet) | Unfortunately it's impossible to do with just bash. The /dev/tcp/<ip>/<port> virtual files are implemented in such a way that bash tries to connect to the specified <ip>:<port> using the connect(2) function. In order to create a listening socket, it would have to call the bind(2) function instead. You can check this by downloading the bash sources and looking at them. It is implemented in the lib/sh/netopen.c file in the _netopen4 function (or _netopen6, which also supports IPv6). This function is used by the wrapper function netopen from the same file, which in turn is directly used in the file redir.c ( redir_special_open function) to implement this virtual redirection. You have to find some other application that can create a listening socket on your machine.
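On the sending side (the machine that is allowed to have tools), any listener will do. For instance, if socat happens to be available there, something like this could serve the file once (a sketch, untested on your hardened setup): socat -u OPEN:"$HOME/.bashrc" TCP-LISTEN:1234,reuseaddr
The hardened box then pulls it with the /dev/tcp redirection from the question. | {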
"source": [
"https://unix.stackexchange.com/questions/49936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23854/"
]
} |
49,980 | Using mutt with Gmail I "check for e-mail updates" sometimes through the $ binding (sync-mailbox), sometimes just by using c (change folder) and just supplying = (defaults to MAILBOX) as the IMAP folder path to change to. Is there a better (in particular: faster) way of doing the same? A solution would be something that is more direct than "pretending to switch folders", as I do, for example. Also something that does not "miss" some updates, as $ seems to do at times (perhaps $ is not meant to check for e-mails at all but just to expunge messages marked as deleted, etc?) | Bind a key ( G for "Get" is recommended) to imap-fetch-mail in your ~/.muttrc. bind index G imap-fetch-mail Pressing G while in the index will now fetch new mail from the imap server. (for POP users, the fetch-mail function fetches mail from a POP server) | {
"source": [
"https://unix.stackexchange.com/questions/49980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10283/"
]
} |
50,022 | I am trying to get cd to accept a directory name redirected to it from another command. Neither of these methods work: $ echo $HOME | cd
$ echo $HOME | xargs cd This does work: $ cd $(echo $HOME) Why does the first set of commands not work, and are there others that also fail this way? | cd is not an external command - it is a shell builtin function. It runs in the context of the current shell, and not, as external commands do, in a fork/exec'd context as a separate process. Your third example works, because the shell expands the variable and the command substitution before calling the cd builtin, so that cd receives the value of ${HOME} as its argument. POSIX systems do have a binary cd - on my FreeBSD machine, it's at /usr/bin/cd , but it doesn't do what you think. Calling the binary cd causes the shell to fork/exec the binary, which does indeed change its working directory to the name you pass. However, as soon as it does so, the binary exits, and the forked/exec'd process disappears, returning you to your shell, which is still in the directory it was in before you started.
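You can see the same effect with an explicit subshell — the directory change happens in the child process and never reaches the parent (the /home/user prompt is illustrative): $ pwd
/home/user
$ (cd /tmp && pwd)
/tmp
$ pwd
/home/user
A pipeline's stages run in exactly such child contexts, which is why echo $HOME | cd can never move the interactive shell. | {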
"source": [
"https://unix.stackexchange.com/questions/50022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16798/"
]
} |
50,031 | I use Esc q a lot (stashing the current line while I execute another command), and it works fine with Gnome Terminal 3.6.0 + zsh. However, if I start tmux , then it stops working: the cursor just moves one character to the left and stays there. Alt q does not work either. I don't set TERM in .zshrc, in .tmux.conf I use: set -g default-terminal "screen-256color" . | | {
"source": [
"https://unix.stackexchange.com/questions/50031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24293/"
]
} |
50,058 | As referenced in this fine answer , POSIX systems have an external binary cd in addition to the shell builtin. On OS X 10.8 it's /usr/bin/cd . You can't use it like the builtin cd since it exits immediately after changing its own working directory. What purpose does it serve? | It serves primarily to make sure the POSIX tool-chest is available both inside and outside a shell (see the POSIX rationale for requiring those ). For cd , that is not tremendously useful, but note that cd does more than change directory: it returns an exit status that helps determine whether you're able to chdir() to that directory or not, and outputs a useful error message explaining why you can't chdir() when you can't. Example: dirs_i_am_able_to_cd_into=$(find . -type d -exec cd {} \; -print) Another potential side-effect is the automounting of a directory. On a few systems, most of the external commands for the standard shell builtins are implemented as a symlink to the same script that does: #! /bin/sh -
"${0##*/}" "$@" That is start a shell and run the builtin in it. Some other systems (like GNU), have utilities as true executable commands which can lead to confusions when the behavior differs from the shell builtin version. | {
"source": [
"https://unix.stackexchange.com/questions/50058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5377/"
]
} |
50,098 | From time to time, Linux and Unix users are faced with various network problems. Many of these problems are presented here and at some other troubleshooting forums, but they are very concrete and contain a lot of additional technical information, and sometimes it's rather difficult to understand the main point and the real reason of buggy system behavior. By asking this question, my intention is to start a community wiki page which allows generalizing our network troubleshooting and debugging experience. I hope the Linux and Unix users will be able to easier recognize and solve ("divide and conquer") their network problems using this page. The parent of this page should be Best practise to diagnose problems . But here we should focus on troubleshooting the network problems from user- and kernel-space. I suppose, if you: share the information about using some great network diagnostic tool with concrete usage examples and examples of network bugs which they help to catch, share the link to the great network tutorial connected with this subject, tell us about a general method or recipe which allows to tackle some class of network problems, or share information about your tool set for network debugging and troubleshooting then it perfectly fits this topic. I'll begin by sharing the link to various diagnostic tools and a 12 year old simple tutorial . Also this Arch Linux tutorial seems to have actual information about our subject. And for diving into Linux networking, we definitely need to visit the Linux Networking-HOWTO . | I think, general principles of network troubleshooting are: Find out at what level of the TCP/IP stack (or some other stack) the problem occurs. Understand what the correct system behavior is and what the deviation from the normal system state is. Try to express the problem in one sentence or in several words. Using obtained information from the buggy system, your own experience, and experience of other people (Google, various forums, etc.), try to solve the problem until success (or failure). If you fail, ask other people about help or some advice. As for me, I usually obtain all required information using all needed tools, and try to match this information to my experience. Deciding what level of the network stack contains the bug helps to cut off unlikely variants. Using experience of other people helps to solve the problems quickly, but often it leads to a situation, where I can solve some problem without its understanding and if this problem occurs again, it's impossible for me to tackle it again without the Internet. And in general, I don't know how I solve network problems. It seems that there is some magic function in my brain named SolveNetworkProblem(information_about_system_state, my_experience, people_experience) , which could sometimes return exactly the right answer, and also could sometimes fail (like here TCP dies on a Linux laptop ). I usually use utils from this set for network debugging: ifconfig (or ip link , ip addr ) - for obtaining information about network interfaces ping - for validating if the target host is accessible from my machine. ping could also be used for basic DNS diagnostics - we could ping a host by its IP address or by its hostname and then decide if DNS works at all. And then traceroute or tracepath or mtr to look what's going on on the way there. dig - diagnose everything DNS dmesg | less or dmesg | tail or dmesg | grep -i error - for understanding what the Linux kernel thinks about some trouble. 
netstat -antp | grep smth - my most popular usage of the netstat command, which shows information about TCP connections. Often I perform some filtering using grep. See also the new ss command (from iproute2 , the new standard suite of Linux networking tools) and lsof as in lsof -ai tcp -c some-cmd . telnet <host> <port> - is very useful for communicating with various TCP services (e.g. on SMTP, HTTP protocols); it also lets you check basic connectivity to a TCP port. iptables-save (on Linux) - to dump the full iptables tables ethtool - get all the network interface card parameters (status of the link, speed, offload parameters...) socat - the Swiss army tool to test all network protocols (UDP, multicast, SCTP...). Especially useful (more so than telnet) with a few -d options. iperf - to test bandwidth availability openssl ( s_client , ocsp , x509 ...) to debug all SSL/TLS/PKI issues. wireshark - the powerful tool for capturing and analyzing network traffic, which allows you to analyze and catch many network bugs. iftop - show big users on the network/router. iptstate (on Linux) - current view of the firewall's connection tracking. arp (or the new ip neigh in Linux) - show the ARP table status. route or the newer (on Linux) ip route - show the routing table status. strace (or truss , dtrace or tusc depending on the system) - is a useful tool that shows which system calls the problematic process performs. It also shows error codes (errno) when system calls fail. This information often says enough for understanding the system behavior and solving a problem. Alternatively, using breakpoints on some networking functions in gdb can let you find out when they are made and with which arguments. to investigate firewall issues on Linux: iptables -nvL shows how many packets are matched by each rule ( iptables -Z to zero the counters). The LOG target inserted in the firewall chains is useful to see which packets reach them and how they have already been transformed when they get there. To get further, NFLOG (associated with ulogd ) will log the full packet.
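Putting a few of those tools together, one possible first-pass sequence for a "network is broken" report: ip addr                # is the interface up and does it have an address?
ip route               # is there a default route?
ping -c 3 8.8.8.8      # raw IP connectivity past the gateway
ping -c 3 example.com  # does name resolution work on top of that?
Whichever step fails first tells you which layer of the stack to dig into. | {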
"source": [
"https://unix.stackexchange.com/questions/50098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4824/"
]
} |
50,103 | I have an embedded linux system using Busybox (OpenWRT) - so commands are limited. I have two files that look like: first file aaaaaa
bbbbbb
cccccc
mmmmmm
nnnnnn second file mmmmmm
nnnnnn
yyyyyy
zzzzzz I need to merge these 2 lists into 1 file, and remove the duplicates. I don't have diff (space is limited) so we get to use the great awk , sed , and grep (or other tools that might be included in a standard Busybox instance). Going to a merge file like: command1 > mylist.merge
command2 mylist.merge > originallist is totally ok. It doesn't have to be a single-line command. Currently defined functions in the instance of Busybox that I am using (default OpenWRT):
[, [[, arping, ash, awk, basename, brctl, bunzip2, bzcat, cat, chgrp, chmod, chown, chroot, clear, cmp,
cp, crond, crontab, cut, date, dd, df, dirname, dmesg, du, echo, egrep, env, expr, false, fgrep, find,
free, fsync, grep, gunzip, gzip, halt, head, hexdump, hostid, hwclock, id, ifconfig, init, insmod, kill,
killall, klogd, less, ln, lock, logger, logread, ls, lsmod, md5sum, mkdir, mkfifo, mknod, mktemp, mount,
mv, nc, netmsg, netstat, nice, nslookup, ntpd, passwd, pgrep, pidof, ping, ping6, pivot_root, pkill,
poweroff, printf, ps, pwd, reboot, reset, rm, rmdir, rmmod, route, sed, seq, sh, sleep, sort,
start-stop-daemon, strings, switch_root, sync, sysctl, syslogd, tail, tar, tee, telnet, telnetd, test,
time, top, touch, tr, traceroute, true, udhcpc, umount, uname, uniq, uptime, vconfig, vi, watchdog, wc,
wget, which, xargs, yes, zcat | I think sort file1 file2 | uniq
aaaaaa
bbbbbb
cccccc
mmmmmm
nnnnnn
yyyyyy
zzzzzz will do what you want. Additional Documentation: uniq sort
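BusyBox's sort applet usually also supports -u , which folds the two steps into one (worth verifying on your particular build): sort -u file1 file2 > mylist.merge
The result is the same merged, de-duplicated list. | {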
"source": [
"https://unix.stackexchange.com/questions/50103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
50,177 | I was just reading up on the Birth section of stat and it appears ext4 should support it, but even a file I just created leaves it empty. ~ % touch test slave-iv
~ % stat test.pl slave-iv
File: ‘test.pl’
Size: 173 Blocks: 8 IO Block: 4096 regular file
Device: 903h/2307d Inode: 41943086 Links: 1
Access: (0600/-rw-------) Uid: ( 1000/xenoterracide) Gid: ( 100/ users)
Access: 2012-09-22 18:22:16.924634497 -0500
Modify: 2012-09-22 18:22:16.924634497 -0500
Change: 2012-09-22 18:22:16.947967935 -0500
Birth: -
~ % sudo tune2fs -l /dev/md3 | psp4 slave-iv
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name: home
Last mounted on: /home
Filesystem UUID: ab2e39fb-acdd-416a-9e10-b501498056de
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: journal_data
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 59736064
Block count: 238920960
Reserved block count: 11946048
Free blocks: 34486248
Free inodes: 59610013
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 967
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stride: 128
RAID stripe width: 256
Flex block group size: 16
Filesystem created: Mon May 31 20:36:30 2010
Last mount time: Sat Oct 6 11:01:01 2012
Last write time: Sat Oct 6 11:01:01 2012
Mount count: 14
Maximum mount count: 34
Last checked: Tue Jul 10 08:26:37 2012
Check interval: 15552000 (6 months)
Next check after: Sun Jan 6 07:26:37 2013
Lifetime writes: 7255 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 55313243
Default directory hash: half_md4
Directory Hash Seed: 442c66e8-8b67-4a8c-92a6-2e2d0c220044
Journal backup: inode blocks Why doesn't my ext4 partition populate this field? | The field gets populated (see below) only coreutils stat does not display it. Apparently they're waiting 1 for the xstat() interface . coreutils patches - aug. 2012 - TODO stat(1) and ls(1) support for birth time. Dependent on xstat() being
provided by the kernel You can get the creation time via debugfs : debugfs -R 'stat <inode_number>' DEVICE e.g. for my /etc/profile which is on /dev/sda2 (see How to find out what device a file is on ): stat -c %i /etc/profile
398264 debugfs -R 'stat <398264>' /dev/sda2
debugfs 1.42.5 (29-Jul-2012)
Inode: 398264 Type: regular Mode: 0644 Flags: 0x80000
Generation: 2058737571 Version: 0x00000000:00000001
User: 0 Group: 0 Size: 562
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x506b860b:19fa3c34 -- Wed Oct 3 02:25:47 2012
atime: 0x50476677:dcd84978 -- Wed Sep 5 16:49:27 2012
mtime: 0x506b860b:19fa3c34 -- Wed Oct 3 02:25:47 2012
crtime: 0x50476677:dcd84978 -- Wed Sep 5 16:49:27 2012
Size of extra inode fields: 28
EXTENTS:
(0):3308774 Time fields meaning: ctime : file change time. atime : file access time. mtime : file modification time. crtime : file creation time. 1 Linus' reply on LKML thread
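The two steps can be combined via command substitution (same /etc/profile on /dev/sda2 example; substitute your own device): sudo debugfs -R "stat <$(stat -c %i /etc/profile)>" /dev/sda2
The crtime line of the output is the creation time you were after. | {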
"source": [
"https://unix.stackexchange.com/questions/50177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
50,179 | If you do rm myFile where myFile is a hard link, what happens? | In Unix all normal files are hard links. Hard links in a Unix (and most (all?)) filesystems are references to what's called an inode . The inode has a reference counter: when you have one "link" to the file (which is the normal modus operandi) the counter is 1. When you create a second, third, fourth, etc. link, the counter is incremented (increased) each time by one. When you delete ( rm ) a link the counter is decremented (reduced) by one. If the link counter reaches 0 the filesystem removes the inode and marks the space as available for use. In short, as long as you do not delete the last link the file will remain. Edit: The file's data will remain even after the last link is removed if some process still has the file open. This is one of the ways of ensuring that data in use by a process cannot be pulled out from under it. The data is removed from the filesystem completely only once it has 0 links in its metadata and is not being used by any process. This IMHO is by far the easiest way to understand hard-links (and their difference from softlinks).
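You can watch the counter with stat ( %h prints the hard-link count): $ touch myFile; ln myFile otherName
$ stat -c %h myFile
2
$ rm myFile
$ stat -c %h otherName
1
The data is untouched; only when otherName is also removed does the count reach 0 and the inode get freed. | {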
"source": [
"https://unix.stackexchange.com/questions/50179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5536/"
]
} |
50,198 | I am manipulating a large number of XML files scattered throughout a nested directory structure. I tried the following: $ find . -name "*.xml" -type f | xargs -- xmllint --format The problem is that it generates the formatted XML output on the screen, but doesn't change the file. How can I change this command so that the actual file contents are changed? | This can be done from find directly using -exec : find . -name "*.xml" -type f -exec xmllint --output '{}' --format '{}' \; What's passed to -exec will be invoked once per file found, with the template parameters {} being replaced with the current file name. The \; on the end just terminates the -exec action. The use of xargs isn't really necessary in this case because we need to invoke xmllint once per file, as both the input and output file names must be specified within the same call. xargs would be needed if the command being piped to from find was working on multiple files at a time and that list was long; without xargs you could end up with an "Argument list too long" error if you are processing a lot of files. You can't do that in this case, as you need to pass the single filename to the --output option of xmllint . xargs also supports file replace strings with the -I option: find . -name "*.xml" -type f | xargs -I'{}' xmllint --output '{}' --format '{}' Would do the same as the find -exec command above. If any of your folders have odd characters such as spaces in their names, you will need the -print0 option of find and the -0 option of xargs . But using xargs with -I implies the option -L 1 , which means only process 1 file at a time anyway, so you may as well directly use find with -exec .
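Spelled out, the whitespace-safe variant of the xargs form is: find . -name '*.xml' -type f -print0 | xargs -0 -I'{}' xmllint --output '{}' --format '{}'
(The find -exec version above is already safe, since no shell word-splitting is involved.) | {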
"source": [
"https://unix.stackexchange.com/questions/50198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24395/"
]
} |
50,215 | The only calculator I know is bc . I want to add 1 to a variable, and output to another variable. I got the nextnum variable from counting a string in a file: nextnum=`grep -o stringtocount file.tpl.php | wc -w` Let's say the nextnum value is 1 . When added with 1 , it will become 2 . To calculate, I run: rownum=`$nextnum+1 | bc` but got the error: 1+1: command not found I just failed in the calculation part. I've tried changing the backticks but it still doesn't work. I have no idea how to calculate variables and output the result to another variable. | The string inside the ` ` must be a valid command itself: rownum=`echo $nextnum+1 | bc` But it is preferable to use $( ) instead of ` ` : rownum=$(echo $nextnum+1 | bc) But there is no need for bc , the shell is able to do integer arithmetic: rownum=$((nextnum+1)) Or even simpler in bash and ksh : ((rownum=nextnum+1)) | {
"source": [
"https://unix.stackexchange.com/questions/50215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17948/"
]
} |
50,220 | Contents of my dir are $ ls -lrt
total 0
-rw-r--r-- 1 user1 admin 19 Oct 8 12:31 night.txt
-rw-r--r-- 1 user1 admin 19 Oct 8 12:31 noon.txt
-rw-r--r-- 1 user1 admin 38 Oct 8 12:31 day.txt I would like to list out details of files that have a word in the filename as specified. Example : $ ls -lrt *day|night*
ls: *day: No such file or directory
bash: night.txt: command not found Expected output -rw-r--r-- 1 user1 admin 19 Oct 8 12:31 night.txt
-rw-r--r-- 1 user1 admin 38 Oct 8 12:31 day.txt How to list out different files matching 2 different patterns, or in short how to use a regex with ls , so that I could OR the filename parts? In the original scenario there are many files in the directory; I have shortened the case for asking. | You don't even need extended globbing enabled to do what you want. This will work in bash: ls {day*,night*}
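For comparison, the extended-glob spelling the question hints at would be (in bash): shopt -s extglob
ls -l @(day|night)*
A small difference: ls {day*,night*} expands to two separate patterns, so it complains if either one matches nothing, whereas the single @(day|night)* pattern does not. | {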
"source": [
"https://unix.stackexchange.com/questions/50220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
50,264 | When logging in as root at my server everything works fine, but when I log in as myusername the bash is not working correctly. The line starts with: $ instead of myusername@myserver:~$ and all special keys like the arrow keys, tab keys, etc. won't work. When I type /bin/bash it works again, but I'd like to fix the problem or auto run /bin/bash on login. How can I fix this? | You just need to change your shell. As that user, run: $ chsh -s /bin/bash Then sign out and back in. If after doing this the prompt doesn't look like you want, you'll need to start tweaking your environment's PS1 variable.
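For the myusername@myserver:~$ look specifically, the stock Debian-style prompt is a reasonable starting point — put this in ~/.bashrc : PS1='\u@\h:\w\$ '
Here \u is the user, \h the host, \w the working directory, and \$ prints # for root and $ for everyone else. | {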
"source": [
"https://unix.stackexchange.com/questions/50264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
50,313 | My current best bet is: for i in $(find . -name *.jpg); do echo $i; done Problem: does not handle spaces in filenames. Note: I would also love a graphical way of doing this, such as the "tree" command. | The canonical way is to do find . -name '*.jpg' -exec echo {} \; (replace \; with + to pass more than one file to echo at a time) or (GNU specific, though some BSDs now have it as well): find . -name '*.jpg' -print0 | xargs -r0 echo zsh: for i (**/*.jpg(D)) echo $i | {
"source": [
"https://unix.stackexchange.com/questions/50313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
50,377 | I have compared the man pages of dir and ls and they seem to be exactly the same. Both are part of GNU coreutils and "list directory contents". The only difference I've seen so far is that dir doesn't colorize the output. So why do two commands exist? Is there a difference I missed? Why would one prefer dir over ls ? | I would be inclined to think that dir is there just for backwards compatibility . From GNU Coreutils : dir is equivalent to ls -C -b; that is, by default files are listed in columns, sorted vertically, and special characters are represented by backslash escape sequences. By the way, ls doesn't colorize the output by default: this is because most distros alias ls to ls --color=auto in /etc/profile.d . For a test, type unalias ls then try ls : it will be colorless. | {
"source": [
"https://unix.stackexchange.com/questions/50377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4784/"
]
} |
50,431 | I am working on a Red Hat server. The commands ls -l or ll give me the date and time in the format +"%b %-d %H:%M" . I want to list the files in a way where the year when each file was created would appear within the date. How is that possible? | Run man ls and look for the --time-style parameter. Or you can use: ls --full-time .
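With GNU ls , --time-style also accepts a custom format, so you can keep the layout from the question and just add the year: ls -l --time-style=+"%b %-d %Y %H:%M"
(Strictly speaking this shows the modification time — classic ls has no creation-time column.) | {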
"source": [
"https://unix.stackexchange.com/questions/50431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18501/"
]
} |
50,487 | How can I move all files and folders from one directory to another via the mv command? | Try this: mv /path/sourcefolder/* /path/destinationfolder/
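One caveat: by default * does not match hidden (dot) files. In bash you can enable that first: shopt -s dotglob
mv /path/sourcefolder/* /path/destinationfolder/
Without dotglob , files like .bashrc would be left behind in the source folder. | {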
"source": [
"https://unix.stackexchange.com/questions/50487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212951/"
]
} |
50,493 | One thing that annoys me using Linux's terminal is when I have to type a invisble password, like when you run ssh . Sometimes I mistype one or more letters and then I have to press backspace key a few times to make sure that I erased everything to start again. I know this is not a big deal but since on the other day I found out that pressing CTRL+L "clears" the terminal I was wondering if there's a way to clear/erase the invisible password without relying on backspace or pressing enter key to try again. | You just have to type ctrl + u . Enjoy =) | {
"source": [
"https://unix.stackexchange.com/questions/50493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24536/"
]
} |
50,499 | Possible Duplicate: How linux handles multiple path separators (/home////username///file) Most commands I use in linux behave exactly the same whether I include the trailing slash / character on the end of a directory name or not. For example: ls /home/cklein
ls /home/cklein/
cp foo bar
cp foo/ bar/ When does this trailing slash matter? What is the semantic meaning of the trailing slash? | One good example is moving a file into a directory: mv some_file foo vs. mv some_file foo/ If foo doesn't exist, the first will rename some_file to foo , rather than the intended foo/some_file ; the second will complain, which is what you want. If foo does exists but isn't a directory, the first can clobber the foo file; again, the second will complain. cp presents similar issues. Working on some old versions of SunOS, I developed the habit of appending /. , because the system actually ignored a trailing / on a file name; for example, /etc/motd/ would refer to the file rather than being an error. Later versions of SunOS / Solaris don't seem to have that problem. | {
"source": [
"https://unix.stackexchange.com/questions/50499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
50,512 | I am trying to understand the difference between grep -e and grep -E . Now from the grep manpage I got: -E, --extended-regexp Interpret PATTERN as an extended regular expression (see below). -e PATTERN, --regexp=PATTERN Use PATTERN as the pattern; useful to protect patterns beginning with - The above explanation does not make sense to me. So, can someone explain, using examples, what the difference is between the two and when to use which option? PS: Version: grep (GNU grep) 2.10 | -e is strictly the flag for indicating the pattern you want to match against. -E controls whether you need to escape certain special characters. man grep explains -E a bit more: Basic vs Extended Regular Expressions
In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their
special meaning; instead use the backslashed versions \?, \+, \{, \|, \(, and \).
Traditional egrep did not support the { meta-character, and some egrep
implementations support \{ instead, so portable scripts should avoid { in grep -E
patterns and should use [{] to match a literal {.
GNU grep -E attempts to support traditional usage by assuming that { is not
special if it would be the start of an invalid interval specification.
For example, the command grep -E '{1' searches for the two-character string {1
instead of reporting a syntax error in the regular expression. POSIX.2 allows
this behavior as an extension, but portable scripts should avoid it.
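Two quick illustrations of the distinction: grep -e '-foo' file.txt        # -e protects a pattern that begins with a dash
echo aaa | grep 'a\{2\}'       # basic RE: the interval must be backslashed
echo aaa | grep -E 'a{2}'      # extended RE: the interval is written plainly
The last two print the same match; only the quoting of the braces changes. | {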
"source": [
"https://unix.stackexchange.com/questions/50512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24545/"
]
} |
50,555 | Suppose I have a thousand or more instances of any process (for example, vi ) running. How do I kill them all in one single shot/one line command/one command? | What's wrong with the good old, for pid in $(ps -ef | grep "some search" | awk '{print $2}'); do kill -9 $pid; done There are ways to make that more efficient, for pid in $(ps -ef | awk '/some search/ {print $2}'); do kill -9 $pid; done and other variations, but at the basic level, it's always worked for me.
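On systems with procps (most Linux distributions), pkill wraps that whole loop into one command: pkill -9 -x vi             # every process whose name is exactly "vi"
pkill -9 -f 'some search'  # or match against the full command line
As with the loop, reach for -9 only when a plain TERM doesn't do the job. | {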
"source": [
"https://unix.stackexchange.com/questions/50555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23755/"
]
} |
50,563 | I have this code - #getoptDemo.sh
usage()
{
echo "usage: <command> options:<w|l|h>"
}
while getopts wlh: option
do
case $option in
(w)
name='1';;
(l)
name='2';;
(h)
name='3';;
(*)
usage
exit;;
esac
done
print 'hi'$name When I run bash getoptDemos.sh (without the option) it prints hi instead of calling the function usage . It calls usage when options other than w, h and l are given. Then why can't it work when no options are specified? I have tried using ? , \? , : in place of * but I can't achieve what I wanted to. I mean, all the docs on getopts say to use ? . What am I doing wrong? | When you run this script without any options, getopts will return false, so it won't enter the loop at all. It will just drop down to the print - is this ksh/zsh? If you must have an option, your best bet is to test $name after the loop. if [ -z "$name" ]
then
usage
exit
fi But make sure $name is empty before calling getopts (there could have been a $name in the environment the shell received on startup) by running unset name before the getopts loop. | {
"source": [
"https://unix.stackexchange.com/questions/50563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24449/"
]
} |
50,602 | I can't ifdown an interface on Debian 6.0.5: user@box:/etc/network$ sudo ifdown eth0 && sudo ifup eth0
ifdown: interface eth0 not configured
SIOCADDRT: File exists
Failed to bring up eth0.
user@box:/etc/network$ cat interfaces
auto lo
iface lo inet loopback
allow-hotplug eth0
allow-hotplug eth1
auto eth0
iface eth0 inet static
address 10.0.0.1
netmask 255.255.255.0
gateway 10.0.0.254
auto eth1
iface eth1 inet manual As requested by marco: user@box:/etc/network/$ cat /run/network/ifstate
lo=lo
eth1=eth1 | Check the contents of the file /run/network/ifstate . ifup and ifdown use this file to note which network interfaces can be brought up and down. Thus, ifup can be easily confused when other networking tools are used to bring up an interface (e.g. ifconfig ). From man ifup The program keeps records of whether network interfaces are up or
down. Under exceptional circumstances these records can become
inconsistent with the real states of the interfaces. For example,
an interface that was brought up using ifup and later
deconfigured using ifconfig will still be recorded as up. To fix
this you can use the --force option to force ifup or ifdown to
run configuration or deconfiguration commands despite what it
considers the current state of the interface to be.
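So in the situation from the question, the quickest way out is usually: sudo ifdown --force eth0 && sudo ifup eth0
which makes ifdown deconfigure the interface regardless of what /run/network/ifstate claims. | {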
"source": [
"https://unix.stackexchange.com/questions/50602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12583/"
]
} |
50,612 | I would like to search for files that would not match 2 -name conditions. I can do it like so : find /media/d/ -type f -size +50M ! -name "*deb" ! -name "*vmdk" and this will yield proper result but can I join these 2 condition with OR somehow ? | yes, you can: find /media/d/ -type f -size +50M ! \( -name "*deb" -o -name "*vmdk" \) Explanation from the POSIX spec : ! expression : Negation of a primary; the unary NOT operator. ( expression ): True if expression is true. expression -o expression : Alternation of primaries; the OR operator. The second expression shall not be evaluated if the first expression is true. Note that parenthesis, both opening and closing, are prefixed by a backslash ( \ ) to prevent evaluation by the shell. | {
"source": [
"https://unix.stackexchange.com/questions/50612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
50,639 | Does anyone know which sebool it is to allow httpd write access to /home/user/html?
When I disable SELinux echo 0 > /selinux/enforce I can write, so my problem is definitely related to SELinux. I just don't know which one is the right one without opening a big hole and Google isn't being much help. #[/home]ls -Z
drwxr-x---. user apache unconfined_u:object_r:user_home_dir_t:s0 user
#sestatus -b
Policy booleans:
abrt_anon_write off
abrt_handle_event off
allow_console_login on
allow_cvs_read_shadow off
allow_daemons_dump_core on
allow_daemons_use_tcp_wrapper off
allow_daemons_use_tty on
allow_domain_fd_use on
allow_execheap off
allow_execmem on
allow_execmod on
allow_execstack on
allow_ftpd_anon_write off
allow_ftpd_full_access off
allow_ftpd_use_cifs off
allow_ftpd_use_nfs off
allow_gssd_read_tmp on
allow_guest_exec_content off
allow_httpd_anon_write off
allow_httpd_mod_auth_ntlm_winbind off
allow_httpd_mod_auth_pam off
allow_httpd_sys_script_anon_write off
allow_java_execstack off
allow_kerberos on
allow_mount_anyfile on
allow_mplayer_execstack off
allow_nsplugin_execmem on
allow_polyinstantiation off
allow_postfix_local_write_mail_spool on
allow_ptrace off
allow_rsync_anon_write off
allow_saslauthd_read_shadow off
allow_smbd_anon_write off
allow_ssh_keysign off
allow_staff_exec_content on
allow_sysadm_exec_content on
allow_unconfined_nsplugin_transition off
allow_user_exec_content on
allow_user_mysql_connect off
allow_user_postgresql_connect off
allow_write_xshm off
allow_xguest_exec_content off
allow_xserver_execmem off
allow_ypbind off
allow_zebra_write_config on
authlogin_radius off
cdrecord_read_content off
clamd_use_jit off
cobbler_anon_write off
cobbler_can_network_connect off
cobbler_use_cifs off
cobbler_use_nfs off
condor_domain_can_network_connect off
cron_can_relabel off
dhcpc_exec_iptables off
domain_kernel_load_modules off
exim_can_connect_db off
exim_manage_user_files off
exim_read_user_files off
fcron_crond off
fenced_can_network_connect off
fenced_can_ssh off
ftp_home_dir on
ftpd_connect_db off
ftpd_use_passive_mode off
git_cgit_read_gitosis_content off
git_session_bind_all_unreserved_ports off
git_system_enable_homedirs off
git_system_use_cifs off
git_system_use_nfs off
global_ssp off
gpg_agent_env_file off
gpg_web_anon_write off
httpd_builtin_scripting on
httpd_can_check_spam off
httpd_can_network_connect off
httpd_can_network_connect_cobbler off
httpd_can_network_connect_db on
httpd_can_network_memcache off
httpd_can_network_relay off
httpd_can_sendmail on
httpd_dbus_avahi on
httpd_enable_cgi on
httpd_enable_ftp_server off
httpd_enable_homedirs on
httpd_execmem off
httpd_manage_ipa off
httpd_read_user_content off
httpd_setrlimit off
httpd_ssi_exec off
httpd_tmp_exec off
httpd_tty_comm on
httpd_unified on
httpd_use_cifs off
httpd_use_gpg off
httpd_use_nfs off
httpd_use_openstack off
icecast_connect_any off
init_upstart on
irssi_use_full_network off
logging_syslogd_can_sendmail off
mmap_low_allowed off
mozilla_read_content off
mysql_connect_any off
named_write_master_zones off
ncftool_read_user_content off
nscd_use_shm on
nsplugin_can_network on
openvpn_enable_homedirs on
piranha_lvs_can_network_connect off
pppd_can_insmod off
pppd_for_user off
privoxy_connect_any on
puppet_manage_all_files off
puppetmaster_use_db off
qemu_full_network on
qemu_use_cifs on
qemu_use_comm off
qemu_use_nfs on
qemu_use_usb on
racoon_read_shadow off
rgmanager_can_network_connect off
rsync_client off
rsync_export_all_ro off
rsync_use_cifs off
rsync_use_nfs off
samba_create_home_dirs off
samba_domain_controller off
samba_enable_home_dirs off
samba_export_all_ro off
samba_export_all_rw off
samba_run_unconfined off
samba_share_fusefs off
samba_share_nfs off
sanlock_use_nfs off
sanlock_use_samba off
secure_mode off
secure_mode_insmod off
secure_mode_policyload off
sepgsql_enable_users_ddl on
sepgsql_unconfined_dbadm on
sge_domain_can_network_connect off
sge_use_nfs off
smartmon_3ware off
spamassassin_can_network off
spamd_enable_home_dirs on
squid_connect_any on
squid_use_tproxy off
ssh_chroot_rw_homedirs off
ssh_sysadm_login off
telepathy_tcp_connect_generic_network_ports off
tftp_anon_write off
tor_bind_all_unreserved_ports off
unconfined_login on
unconfined_mmap_zero_ignore off
unconfined_mozilla_plugin_transition off
use_fusefs_home_dirs off
use_lpd_server off
use_nfs_home_dirs on
use_samba_home_dirs off
user_direct_dri on
user_direct_mouse off
user_ping on
user_rw_noexattrfile on
user_setrlimit on
user_tcp_server off
user_ttyfile_stat off
varnishd_connect_any off
vbetool_mmap_zero_ignore off
virt_use_comm off
virt_use_fusefs off
virt_use_nfs off
virt_use_samba off
virt_use_sanlock off
virt_use_sysfs on
virt_use_usb on
virt_use_xserver off
webadm_manage_user_files off
webadm_read_user_files off
wine_mmap_zero_ignore off
xdm_exec_bootloader off
xdm_sysadm_login off
xen_use_nfs off
xguest_connect_network on
xguest_mount_media on
xguest_use_bluetooth on
xserver_object_manager off | None of them, at least not by itself. You must either give the directory structure a context of httpd_sys_rw_content_t , or give them a context of public_content_rw_t and enable allow_httpd_anon_write and/or allow_httpd_sys_script_anon_write as follows: chcon -R -t httpd_sys_rw_content_t /path See the httpd_selinux(8) man page for details. | {
"source": [
"https://unix.stackexchange.com/questions/50639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15401/"
]
} |
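A sketch of making the answer above persistent (assuming semanage from policycoreutils is available; plain chcon changes can be lost on a filesystem relabel):
semanage fcontext -a -t httpd_sys_rw_content_t "/home/user/html(/.*)?"
restorecon -Rv /home/user/html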
50,642 | Is there a way to download all dependencies with yumdownloader, even if they are already installed? I'm trying to create a local repo and only want to include the packages we need. | You can use repotrack instead like this: repotrack -a x86_64 -p /repos/Packages [packages] Unfortunately there is a bug with the -a flag (arch). It will download i686 and x86_64. Here's how to fix it: if opts.arch:
#archlist = []
#archlist.extend(rpmUtils.arch.getArchList(opts.arch))
archlist = opts.arch.split(',') # Change to this
else:
archlist = rpmUtils.arch.getArchList() You can use repoquery to get a list of group packages: repoquery --qf=%{name} -g --list --grouppkgs=all [groups] Which you can feed into repotrack: repoquery --qf=%{name} -g --list --grouppkgs=all [groups] | xargs repotrack -a x86_64 -p /repos/Packages | {
"source": [
"https://unix.stackexchange.com/questions/50642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24636/"
]
} |
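A sketch completing the local-repo workflow above (assuming the createrepo tool is installed): once repotrack has filled the directory, generate the repository metadata with:
createrepo /repos/Packages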
50,665 | Possible Duplicate: Difference between Login Shell and Non-Login Shell? I have been looking at /etc/profile and bash.bashrc to see how they are run, and notice that some are executed by non-login shells, some work with interactive shells etc. What are the differences between these types of shells, i.e. interactive & non-interactive, login & non-login etc? The question may be pretty basic, but it seems I need to ask what a shell is, first and foremost. What is a shell, what is its relevance, how do you use it, and why does it exist to start with? Update: To make the intent of the question better understood, what I need to understand besides the definitions, are the use cases for one type of shell or the other. It is the use cases that help understanding, not just dictionary definitions. | A shell is the generic name for any program that gives you a text-interface to interact with the computer. You type a command and the output is shown on screen. Many shells have scripting abilities: Put multiple commands in a script and the shell executes them as if they were typed from the keyboard. Most shells offer additional programming constructs that extend the scripting feature into a programming language. On most Unix/Linux systems multiple shells are available: bash, csh, ksh, sh, tcsh, zsh just to name a few. They differ in the various options they give the user to manipulate the commands and in the complexity and capabilities of the scripting language. Interactive: As the term implies: Interactive means that the commands are run with user-interaction from keyboard. E.g. the shell can prompt the user to enter input. Non-interactive: the shell is probably run from an automated process so it can't assume it can request input or that someone will see the output. E.g., maybe it is best to write output to a log file. Login: Means that the shell is run as part of the login of the user to the system. Typically used to do any configuration that a user needs/wants to establish his work environment. Non-login: Any other shell run by the user after logging on, or which is run by any automated process which is not coupled to a logged in user. | {
"source": [
"https://unix.stackexchange.com/questions/50665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
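A small sketch complementing the definitions above: you can test which kind of shell you are in. $- contains i in interactive shells (POSIX), and shopt -q login_shell is a bash-specific test for login shells:
case $- in
  *i*) echo interactive ;;
  *) echo non-interactive ;;
esac
shopt -q login_shell && echo "login shell" || echo "non-login shell"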
50,692 | I'm on Solaris 10 and I have tested the following with ksh (88), bash (3.00) and zsh (4.2.1). The following code doesn't yield any result: function foo {
echo "Hello World"
}
find somedir -exec foo \; The find does match several files (as shown by replacing -exec ... with -print ), and the function works perfectly when called outside the find call. Here's what the man find page says about -exec : -exec command True if the executed command returns a
zero value as exit status. The end of
command must be punctuated by an escaped
semicolon (;). A command argument {} is
replaced by the current pathname. If the
last argument to -exec is {} and you
specify + rather than the semicolon (;),
the command is invoked fewer times, with
{} replaced by groups of pathnames. If
any invocation of the command returns a
non-zero value as exit status, find
returns a non-zero exit status. I could probably get away doing something like this: for f in $(find somedir); do
foo
done But I'm afraid of dealing with field separator issues. Is it possible to call a shell function (defined in the same script, let's not bother with scoping issues) from a find ... -exec ... call? I tried it with both /usr/bin/find and /bin/find and got the same result. | A function is local to a shell, so you'd need find -exec to spawn a shell and have that function defined in that shell before being able to use it. Something like: find ... -exec ksh -c '
function foo {
echo blan: "$@"
}
foo "$@"' ksh {} + bash allows one to export functions via the environment with export -f , so you can do (in bash): foo() { ...; }
export -f foo
find ... -exec bash -c 'foo "$@"' bash {} + ksh88 has typeset -fx to export function (not via the environment), but that can only be used by she-bang less scripts executed by ksh , so not with ksh -c . Another option is to do: find ... -exec ksh -c "
$(typeset -f foo)"'
foo "$@"' ksh {} + That is, use typeset -f to dump the definition of the foo function inside the inline script. Note that if foo uses other functions, you'll also need to dump them as well. Or instead of passing the function definition on the command line (which would be visible in the output of ps -f for instance), you can pass it via an environment variable: FUNCDEFS=$(typeset -f foo) find ... -exec ksh -c '
eval "$FUNCDEFS" &&
unset -v FUNCDEFS &&
foo "$@"' ksh {} + (the unset -v FUNCDEFS to avoid polluting the environment of commands started by that foo function if any). | {
"source": [
"https://unix.stackexchange.com/questions/50692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
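A worked sketch of the bash export -f variant above, iterating over the batched pathnames that -exec ... {} + hands to the inline script (foo is a stand-in for your own function):
foo() { printf 'got: %s\n' "$1"; }
export -f foo
find somedir -type f -exec bash -c 'for f do foo "$f"; done' bash {} +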
50,733 | I'm running Vim in tmux. When I try to select a range of text in Vim, the mouse keeps resetting its position, so I can only select one line (the last line where the mouse ends up). Does anyone know how to solve this? | There are two settings that you need to configure for this to work. In your .vimrc add: set ttymouse=xterm2
set mouse=a In your .tmux.conf add: set -g mouse on You will then be able to use the mouse to select blocks of text, resize split windows, ... | {
"source": [
"https://unix.stackexchange.com/questions/50733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
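Note that set -g mouse on requires tmux 2.1 or later; on older versions the equivalent, going by the pre-2.1 option names, was a set of separate options in .tmux.conf:
set -g mode-mouse on
set -g mouse-select-pane on
set -g mouse-resize-pane on
set -g mouse-select-window on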
50,752 | After having played with curl, a binary file has been dumped into my terminal. For example, the horizontal lines (─) I use to format my prompt are replaced by 'q', and it can be much worse. Why does this happen, and how can you fix it without having to close the terminal? | I think reset would definitely fix it. Consider looking into its man page. Example: [m0nhawk@terra:~]> cat /dev/urandom
êIÉè;┤Ü)MåÇ▐¿÷¢§ôWdO┘&!π¡
[└█┼░▒┬┐@├err▒:·]> c▒├ /de┴/┤r▒┼do└ And reset fixes this. | {
"source": [
"https://unix.stackexchange.com/questions/50752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20687/"
]
} |
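If reset alone doesn't restore things, a couple of related commands may help (a sketch; printf '\033c' sends the terminal's RIS reset sequence directly):
stty sane
tput reset
printf '\033c'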
50,762 | How do I set up systemd to automatically force fsck on disks after a crash (hard poweroff)? When I used sysvinit (in Arch Linux) as /sbin/init I used this hack: in rc.local I create the /forcefsck file; in rc.local.shutdown I remove it. At boot-up, rc.sysinit enables a forced fsck if /forcefsck exists. How do I do the same in systemd? Maybe it has built-in facilities for automatic fsck after a crash? | You can force fsck at boot time by passing fsck.mode=force ( auto is the default and skip can be used to skip checking entirely) as a kernel command line parameter (as of systemd v. 213, there's also a second parameter, fsck.repair, to control how fsck shall deal with unclean file systems at boot; possible values are: preen to fix what can be safely fixed, yes to answer yes to all questions, and no , the default). Note that systemd-fsck does not know any details about specific filesystems, and simply executes the file system checker specific to each filesystem type ( /sbin/fsck.* ). So if your filesystem is xfs or btrfs it will execute /sbin/fsck.xfs or /sbin/fsck.btrfs respectively. If that does not seem to work, check the manual page for fsck.xfs or fsck.btrfs and examine the contents of the said files in /sbin . | {
"source": [
"https://unix.stackexchange.com/questions/50762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
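A sketch of making the parameter permanent on a GRUB-based system (paths assume a Debian/Ubuntu-style setup): add it to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config:
GRUB_CMDLINE_LINUX="... fsck.mode=force fsck.repair=preen"
sudo update-grub   # or: grub-mkconfig -o /boot/grub/grub.cfg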
50,785 | How does a Linux system behave when I am not a sudoer?
Here is what happens if I try to use sudo: server:/tmp>$ sudo cal
[sudo] password for user:
Sorry, try again. Is it possible that I just don't know my password, or does this mean that I am not a sudoer? (On another machine, the system printed out that I'm not a sudoer and that the incident would be reported.) | To know whether a particular user has sudo access or not, we can use the -l and -U options together. For example, if the user has sudo access, it will print the level of sudo access for that particular user. $ sudo -l -U pradeep
User pradeep may run the following commands on this host:
(ALL : ALL) ALL If the user doesn't have sudo access, it will print that the user is not allowed to run sudo on localhost. $ sudo -l -U pradeep.c
User pradeep.c is not allowed to run sudo on localhost. | {
"source": [
"https://unix.stackexchange.com/questions/50785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24694/"
]
} |
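To check your own access rather than another user's, sudo -l without -U works the same way. A small sketch (note the admin group is sudo on Debian/Ubuntu but wheel on many other systems):
sudo -l
id -nG | grep -qw sudo && echo "member of the sudo group"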
51,818 | I'd like to have a file, e.g. f , with only zsh aliases (for purity reasons). Then I'd like to include the f file in my .zshrc file, so that the aliases defined in f are visible in .zshrc . Is it possible? If it is, I could create a script, e.g. my_alias ( $my_alias ll 'ls -l' ), which appends an alias to the f file.
Of course I could do $echo {alias command} >> ~/.zshrc but this makes .zshrc one big mess. Additionally, what does this look like in bash? UPDATE For anyone who shares my idea, this is the solution, thanks to phunehehe: # source aliases
ALIASFILE=~/.aliasesrc
source $ALIASFILE
function add_alias() {
if [[ -z $1 || -z $2 || $# -gt 2 ]]; then
echo usage:
echo "\t\$$0 ll 'ls -l'"
else
echo "alias $1='$2'" >> $ALIASFILE
echo "alias ADDED to $ALIASFILE"
fi
} | .zshrc and .bashrc are script files, not config files, so you "source" the alias file. In Zsh ( .zshrc ) and Bash ( .bashrc ) alike: . my_alias will run my_alias and leave its effects in the same environment with the RC files, effectively giving you the aliases in the shell. Of course, your are not limited to aliases either. I use a .shrc that is sourced by both .bashrc and .zshrc for common exports, functions and aliases. For more on sourcing see Different ways to execute a shell script . | {
"source": [
"https://unix.stackexchange.com/questions/51818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21544/"
]
} |
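A small hardening sketch for the sourcing approach above: guarding the source with a file test avoids an error on machines where the alias file doesn't exist yet, and works in both bash and zsh:
[ -f ~/.aliasesrc ] && . ~/.aliasesrc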
51,945 | As a C programmer, I was surprised to see that wc -c (which counts the number of bytes) and wc -m (which counts the number of characters) output very different results for a long text file of mine. I had always been told that sizeof(char) is 1 byte. qdii@nomada ~/Documents $ wc -c sentences.csv
102990983 sentences.csv
qdii@nomada ~/Documents $ wc -m sentences.csv
89023123 sentences.csv Any explanation? | The char type in C is one byte, but it's intended for ASCII characters; there are variable-width encodings like UTF-8 that can take up many bytes per character. wc uses the mbrtowc(3) function to decode multibyte sequences, depending on the locale set by the LC_CTYPE environment variable. If you set the locale properly, you should get the same result for all cases. For example: qdii@nomada ~/Documents $ LC_CTYPE="C" wc -m sentences.csv
102990983 sentences.csv | {
"source": [
"https://unix.stackexchange.com/questions/51945",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13585/"
]
} |
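A quick demonstration of the byte/character split (assuming a UTF-8 locale, in which é is encoded as two bytes):
$ printf 'é' | wc -c
2
$ printf 'é' | wc -m
1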
51,949 | For some reason a while back, the behavior of my command line changed, and I don't know why. Using OSX, now Mountain Lion (although this behavior was present before the switch). Using the standard terminal, I would expect back some results from ps , but I get an error: $ ps aux |grep 'asdf'
grep: asdf: No such file or director This also shows up, for example, here: ln -s "/Applications/Sublime Text 2.app/Contents/SharedSupport/bin/subl" ~/bin/subl
ln: /Users/peter/bin/subl: No such file or directory | The char type in C is one byte, but it's intended for ASCII characters; there are variable-width encodings like UTF-8 that can take up many bytes per character. wc uses the mbrtowc(3) function to decode multibyte sequences, depending on the locale set by the LC_CTYPE environment variable. If you set the locale properly, you should get the same result for all cases. For example: qdii@nomada ~/Documents $ LC_CTYPE="C" wc -m sentences.csv
102990983 sentences.csv | {
"source": [
"https://unix.stackexchange.com/questions/51949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20488/"
]
} |
51,981 | I have a directory in which lots of files (around 200) with the name temp_log.$$ are created, along with several other important files which I need to check. How can I easily list all the files and exclude the temp_log.$$ files from being displayed? Expected output $ ls -lrt <exclude-filename-part>
-- Lists files not matching the above given string I have gone through ls man page but couldn't find anything in this reference. Please let me know if I have missed any vital information here. Thanks | With GNU ls (the version on non-embedded Linux and Cygwin, sometimes also found elsewhere), you can exclude some files when listing a directory. ls -I 'temp_log.*' -lrt (note the long form of -I is --ignore='temp_log.*' ) With zsh, you can let the shell do the filtering. Pass -d to ls so as to avoid listing the contents of matched directories. setopt extended_glob # put this in your .zshrc
ls -dltr ^temp_log.* With ksh, bash or zsh, you can use the ksh filtering syntax. In zsh, run setopt ksh_glob first. In bash, run shopt -s extglob first. ls -dltr !(temp_log.*) | {
"source": [
"https://unix.stackexchange.com/questions/51981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
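A sketch of an alternative using find instead of ls-specific flags (-mindepth and -maxdepth are GNU find extensions):
find . -mindepth 1 -maxdepth 1 ! -name 'temp_log.*' -exec ls -ldrt {} +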
51,983 | I searched SO and found that to uppercase a string the following would work str="Some string"
echo ${str^^} But I tried to do a similar thing on a command-line argument, which gave me the following error Tried #!/bin/bash
## Output
echo ${1^^} ## line 3: ${1^^}: bad substitution
echo {$1^^} ## No error, but output was still lower case, i.e. no effect How could we do this? | The syntax str^^ which you are trying is available from Bash 4.0 and above. Perhaps yours is an older version (or you ran the script with sh explicitly). Try this: str="Some string"
printf '%s\n' "$str" | awk '{ print toupper($0) }' | {
"source": [
"https://unix.stackexchange.com/questions/51983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
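A POSIX-portable sketch that avoids both the Bash 4 operator and awk:
printf '%s\n' "$1" | tr '[:lower:]' '[:upper:]'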
52,026 | I am trying to read the output of a command in bash using a while loop . while read -r line
do
echo "$line"
done <<< $(find . -type f) The output I got ranveer@ranveer:~/tmp$ bash test.sh
./test.py ./test1.py ./out1 ./test.sh ./out ./out2 ./hello
ranveer@ranveer:~/tmp$ After this I tried $(find . -type f) |
while read -r line
do
echo "$line"
done but it generated an error test.sh: line 5: ./test.py: Permission denied . So, how do I read it line by line because I think currently it is slurping the entire line at once. Required output: ./test.py
./test1.py
./out1
./test.sh
./out
./out2
./hello | There's a mistake, you need < <(command) not <<<$(command) < <( ) is a Process Substitution , $() is a command substitution and <<< is a here-string . | {
"source": [
"https://unix.stackexchange.com/questions/52026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22728/"
]
} |
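Putting it together, a sketch of the corrected loop (IFS= and -r keep leading whitespace and backslashes in the lines intact):
while IFS= read -r line; do
    printf '%s\n' "$line"
done < <(find . -type f)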
52,055 | Accidentally I mapped Enter via xkbset to Pointer_button2 . Now every time I hit Enter some gibberish text appears. I thought of a workaround involving remapping it back, but that means I will have to run a command like xmodmap -e "keycode 135 = Pointer_Button2" (or any other keycode but that of Enter ). But I'm not able to run this command in the terminal without hitting Enter . How do I do so? I'm running Ubuntu 12.04. | You can use CTRL + J or CTRL + M as an alternative to Enter . They are the control characters for linefeed (LF) and carriage return (CR). | {
"source": [
"https://unix.stackexchange.com/questions/52055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23301/"
]
} |
52,087 | How to reset Xfce session? When I start Xfce4, it always opens 2 windows of Evince, Emacs and one terminal emulator. I remember I used this configuration once a week ago, but now it starts every time after logging in. How can I make Xfce4 start cleanly with no applications? | This question is answered in Xfce wiki, subsection Some of my applications are always started when I login : There are two possible reasons why the application is started: It is saved in the last session or it is listed in the auto started applications. Follow 1 of the two steps below to get rid of the applications. Start the xfce4-autostart-editor and remove the application(s). You can also manually delete those files in ~/Desktop/Autostart and ~/.config/autostart. Most of the time closing all the applications and save your session when you logout is sufficient. If this doesn't work, remove the content of the ~/.cache/sessions/ directory when you're not logged in. And if you don't want xfce remember every session you should turn off (uncheck) “Automatically save session on logout” in Settings Manager → Sessions and Startup (tab General) Run this: rm -fr ~/.cache/sessions/* and Xfce should starts cleanly. Also, you might need to clear the entire .cache directory if the above doesn't fix the issues: rm -fr ~/.cache | {
"source": [
"https://unix.stackexchange.com/questions/52087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
52,100 | I have more than 1000 lines in a file. The file starts as follows (line numbers added): Station Name
Station Code
A N DEV NAGAR
ACND
ABHAIPUR
AHA
ABOHAR
ABS
ABU ROAD
ABR I need to convert this to a file, with comma separated entries by joining every two lines. The final data should look like Station Name,Station Code
A N DEV NAGAR,ACND
ABHAIPUR,AHA
ABOHAR,ABS
ABU ROAD,ABR
... What I was trying was to write a shell script and then echo them with a comma in between. But I guess a simpler, effective one-liner would do the job here, maybe in sed / awk . Any ideas? | Simply use cat (if you like cats ;-)) and paste : cat file.in | paste -d, - - > file.out Explanation: paste reads from a number of files and pastes together the corresponding lines (line 1 from first file with line 1 from second file etc): paste file1 file2 ... Instead of a file name, we can use - (dash). paste takes first line from file1 (which is stdin). Then, it wants to read the first line from file2 (which is also stdin). However, since the first line of stdin was already read and processed, what now waits on the input stream is the second line of stdin, which paste happily glues to the first one. The -d option sets the delimiter to be a comma rather than a tab. Alternatively, do cat file.in | sed "N;s/\n/,/" > file.out P.S. Yes, one can simplify the above to < file.in sed "N;s/\n/,/" > file.out or < file.in paste -d, - - > file.out which has the advantage of not using cat . However, I did not use this idiom on purpose , for clarity reasons -- it is less verbose and I like cat (CATS ARE NICE). So please do not edit. Alternatively, if you prefer paste to cats (paste is the command to concatenate files horizontally, while cat concatenates them vertically), you may use: paste file.in | paste -d, - - | {
"source": [
"https://unix.stackexchange.com/questions/52100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
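For completeness, an awk sketch of the same pairing (if the file has an odd number of lines, the last one is left with a trailing comma and no partner):
awk 'NR%2 { printf "%s,", $0; next } 1' file.in > file.out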
52,108 | In Linux I can create a SHA1 password hash using sha1pass mypassword . Is there a similar command line tool which lets me create sha512 hashes? Same question for Bcrypt and PBKDF2 . | Yes, you're looking for mkpasswd , which (at least on Debian) is part of the whois package. Don't ask why... anthony@Zia:~$ mkpasswd -m help
Available methods:
des standard 56 bit DES-based crypt(3)
md5 MD5
sha-256 SHA-256
sha-512 SHA-512 Unfortunately, my version at least doesn't do bcrypt. If your C library does, it should (and the manpage gives a -R option to set the strength). -R also works on sha-512, but I'm not sure if its PBKDF-2 or not. If you need to generate bcrypt passwords, you can do it fairly simply with the Crypt::Eksblowfish::Bcrypt Perl module. | {
"source": [
"https://unix.stackexchange.com/questions/52108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
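Two related sketches: mkpasswd prompts for the password when you omit it (safer than putting it on the command line), and newer OpenSSL (1.1.1 and later) can emit SHA-512 crypt hashes as well:
mkpasswd -m sha-512
openssl passwd -6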
52,115 | It is well known that UNIX systems won't actually delete a file on disk while the file is in use. So if a file is being accessed by process 1 and process 2 deletes the file using rm, process 1 continues to see the file; additionally the file descriptor link at /proc/(process 1 id)/fd reports the original contents of the deleted file. However, if process 2 overwrites the file as opposed to deleting it (say with echo "abracadabra" > file.txt), the file descriptor link at /proc/(process 1 id)/fd reports the overwriting material("abracadabra"), while process 1 is still able to access the original contents of the file.
Why this difference? [Edit] The snippet below is in response to Jim Paris: >uname -a
Linux ravoori-netbook 3.2.0-32-generic-pae #51-Ubuntu SMP Wed Sep 26 21:54:23 UT
C 2012 i686 i686 i386 GNU/Linux
>echo original > /tmp/foo
>tail -0f /tmp/foo &
[2] 6144
>rm /tmp/foo
>cat /proc/6144/fd/3
original
>echo abracadabra > /tmp/foo
>cat /proc/6144/fd/3
original | Yes, you're looking for mkpasswd , which (at least on Debian) is part of the whois package. Don't ask why... anthony@Zia:~$ mkpasswd -m help
Available methods:
des standard 56 bit DES-based crypt(3)
md5 MD5
sha-256 SHA-256
sha-512 SHA-512 Unfortunately, my version at least doesn't do bcrypt. If your C library does, it should (and the manpage gives a -R option to set the strength). -R also works on sha-512, but I'm not sure if its PBKDF-2 or not. If you need to generate bcrypt passwords, you can do it fairly simply with the Crypt::Eksblowfish::Bcrypt Perl module. | {
"source": [
"https://unix.stackexchange.com/questions/52115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21233/"
]
} |
52,131 | So I've been using 'sed' on linux for a while, but have had a bit of difficulty trying to use it on OSX since 'POSIX sed' and 'GNU sed' have so many little differences. Currently I'm struggling with how to insert a line of text after a certain line number. (in this case, line 4) On linux I would do something like this: sed --in-place "4 a\ mode '0755'" file.txt So on OSX I tried this: sed -i "" "4 a\ mode '0755'" file.txt However this keeps giving me a 'extra characters after \ at the end of a command' error. Any ideas what's wrong here? Do I have a typo? Or am I not understanding another difference between versions of sed? | Strictly speaking, the POSIX specification for sed requires a newline after a\ : [1addr]a\
text Write text to standard output as described previously. This makes writing one-liners a bit of a pain, which is probably the reason for the following GNU extension to the a , i , and c commands: As a GNU extension, if between the a and the newline there is other than a whitespace- \ sequence, then the text of this line, starting at the first non-whitespace character after the a , is taken as the first line of the text block. (This enables a simplification in scripting a one-line add.) This extension also works with the i and c commands. Thus, to be portable with your sed syntax, you will need to include a newline after the a\ somehow. The easiest way is to just insert a quoted newline: $ sed -e 'a\
> text' (where $ and > are shell prompts). If you find that a pain, bash [1] has the $' ' quoting syntax for inserting C-style escapes, so just use sed -e 'a\'$'\n''text' [1] since version 2.0 (1996) and ksh93 (where it comes from), zsh (3.1.5+), mksh (r39b+) and some Almquist shell derivatives (e.g., /bin/sh in FreeBSD 9+) | {
"source": [
"https://unix.stackexchange.com/questions/52131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25944/"
]
} |
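One more sketch for the OS X case: inside double quotes, a doubled backslash before the quoted newline produces the \ plus newline that POSIX sed wants, so the one-liner from the question becomes:
sed -i "" "4a\\
mode '0755'" file.txt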
52,179 | I have opened a directory with vim some/dir . I can navigate within the tree, yet once I have opened a file I wonder: how do I close the file view in order to go back to the directory listing and navigate to another file? :wq is not an option, as it closes the whole vim session. I guess there is a mode for that, yet I do not know what it is called nor how to start it. How do I close the file and get back to the navigation view? | How about :e . ? This opens the current directory in Vim, i.e. it opens the file explorer. Because I have the autochdir setting set, this shows the directory that the currently edited file is in. | {
"source": [
"https://unix.stackexchange.com/questions/52179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
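The built-in netrw explorer also has dedicated commands for this round trip (a sketch; see :help netrw): :Explore opens the explorer in the directory of the current file, and :Rexplore returns to the most recent explorer window.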
52,215 | How can I find out the size of a block device, such as /dev/sda ? Running ls -l gives no useful information. | blockdev --getsize64 /dev/sda returns size in bytes. blockdev --getsz /dev/sda returns size in 512-byte sectors. Deprecated: blockdev --getsize /dev/sda returns size in sectors. blockdev is part of util-linux . | {
"source": [
"https://unix.stackexchange.com/questions/52215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/933/"
]
} |
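Two alternative sketches: lsblk can print the size in bytes, and sysfs exposes the size in 512-byte sectors:
lsblk -bdno SIZE /dev/sda
cat /sys/block/sda/size   # 512-byte sectors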
52,234 | I have a Linux (Ubuntu) server which I have root access to through ssh (port 22) only. On that server there is a MySQL database listening on port 3306. Is it possible to use putty (on my machine) and tunneling (on the linux machine) to create a tunnel from a local port on my machine (say 4000), to the linux server on port 22 and then from the server to itself on port 3306? | I have drawn some sketches. The machine where the ssh tunnel command is typed (or, in your case, where Putty with tunneling is started) is called »your host«. Introduction local: -L Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. ssh -L sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the local sourcePort to port onPort on the machine called forwardToHost , which can be reached from the connectToHost machine. remote: -R Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. ssh -R sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the remote sourcePort to port onPort on the machine called forwardToHost , which can be reached from your local machine. Your example The first image represents your situation. The blue box called your host is your Windows machine from which you start Putty to your Ubuntu server, called remotehost in my image. Connections to the green port (in your case port number 4000 ) are forwarded to the pink MySQL port 3306 of the localhost of your Ubuntu server machine (i.e. the Ubuntu server itself). To set it up with Putty Start Putty and enter your usual connection settings (Hostname or IP address)
In the tree on the left side, navigate to Connection
→ SSH
→ Tunnels and create a new local tunnel with the source port 4000 (123 in the image) and the destination localhost:3306 (localhost:456 in the image). Do not forget to click on Add . Then navigate back to session and click Save to keep your settings for the next time. Now you can use the saved connection to log in to your server and after you successfully log in, every time you connect to port 4000 on your host you will actually connect to port 3306 on the Ubuntu server. | {
"source": [
"https://unix.stackexchange.com/questions/52234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25989/"
]
} |
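For reference, a sketch of the plain OpenSSH equivalent of the Putty setup above (user and server are placeholders); -h 127.0.0.1 forces the MySQL client over TCP so the tunnel is actually used:
ssh -L 4000:localhost:3306 user@server
mysql -h 127.0.0.1 -P 4000 -u dbuser -p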
52,277 | When installing software on Debian systems we can use something like this: sudo apt-get install -y chromium-browser That way the installation occurs automatically, without asking to confirm the installation [Y/n].
Can I do the same with pacman? | From man pacman : --noconfirm Bypass any and all “Are you sure?” messages. It’s not a good idea to do this unless you want to run pacman from a script. Note the qualification about using this with care... Arch is a rolling release, which means pacman has to, from time to time, manage some quite complex upgrades. At these times pacman will prompt you to confirm your choices—disregarding these prompts will generally not be a significant issue, but in some cases, as with the recent move from /lib to /usr/lib , a lack of attention will cause major breakage. This is not a habit you want to cultivate.
"source": [
"https://unix.stackexchange.com/questions/52277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13251/"
]
} |
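A scripted-install sketch combining the flag with --needed, which skips packages that are already up to date:
pacman -S --noconfirm --needed chromium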
52,280 | Lightdm is displayed properly, but after entering a password and clicking login, I see the screen blink and the lightdm window re-appears and asks me to log in as if nothing happened. Configs user@laptop:~$ cat /etc/lightdm/ lightdm.conf lightdm-gtk-greeter-ubuntu.conf
lightdm-gtk-greeter.conf users.conf lightdm.conf user@laptop:~$ cat /etc/lightdm/lightdm.conf #[LightDM]
#xsessions-directory=/usr/share/xsessions
[SeatDefaults]
user-session=xfce
greeter-session=lightdm-gtk-greeter
greeter-hide-users=true lightdm-gtk-greeter.conf user@laptop:~$ cat /etc/lightdm/lightdm-gtk-greeter.conf #
# background = Background file to use, either an image path or a color (e.g. #772953)
# theme-name = GTK+ theme to use
# font-name = Font to use
# xft-antialias = Whether to antialias Xft fonts (true or false)
# xft-dpi = Resolution for Xft in dots per inch (e.g. 96)
# xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull)
# xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr)
#
[greeter]
logo=/usr/share/pixmaps/xubuntu-logo-menu.png
background=/usr/share/xfce4/backdrops/xubuntu-karmic.png
#show-language-selector=false
theme-name=retro1
font-name=Droid Sans 10
xft-antialias=true
#xft-dpi=
xft-hintstyle=hintfull
xft-rgba=rgb
show-language-selector=true lightdm-gtk-greeter-ubuntu.conf user@laptop:~$ cat /etc/lightdm/lightdm-gtk-greeter-ubuntu.conf #
# background = Background file to use, either an image path or a color (e.g. #772953)
# theme-name = GTK+ theme to use
# font-name = Font to use
# xft-antialias = Whether to antialias Xft fonts (true or false)
# xft-dpi = Resolution for Xft in dots per inch (e.g. 96)
# xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull)
# xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr)
#
[greeter]
logo=/usr/share/pixmaps/xubuntu-logo-menu.png
background=/usr/share/xfce4/backdrops/xubuntu-karmic.png
#show-language-selector=false
theme-name=retro1
font-name=Droid Sans 10
xft-antialias=true
#xft-dpi=
xft-hintstyle=hintfull
xft-rgba=rgb
show-language-selector=true users.conf user@laptop:~$ cat /etc/lightdm/users.conf #
# User accounts configuration
#
# NOTE: If you have AccountsService installed on your system, then LightDM will
# use this instead and these settings will be ignored
#
# minimum-uid = Minimum UID required to be shown in greeter
# hidden-users = Users that are not shown to the user
# hidden-shells = Shells that indicate a user cannot login
#
[UserAccounts]
minimum-uid=500
hidden-users=nobody nobody4 noaccess
hidden-shells=/bin/false /usr/sbin/nologin Logs user@laptop:~$ cat /var/log/lightdm/ lightdm.log x-0-greeter.log.old x-1-greeter.log x-1.log
x-0-greeter.log x-0.log x-1-greeter.log.old user@laptop:~$ cat /var/log/lightdm/lightdm.log cat: /var/log/lightdm/lightdm.log: Permission denied
user@laptop:~$ sudo cat /var/log/lightdm/lightdm.log
[sudo] password for user:
[+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log
[+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2677
[+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
[+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
[+0.00s] DEBUG: Registered seat module xlocal
[+0.00s] DEBUG: Registered seat module xremote
[+0.00s] DEBUG: Adding default seat
[+0.00s] DEBUG: Starting seat
[+0.00s] DEBUG: Starting new display for greeter
[+0.00s] DEBUG: Starting local X display
[+0.00s] DEBUG: Using VT 7
[+0.00s] DEBUG: Activating VT 7
[+0.01s] DEBUG: Logging to /var/log/lightdm/x-0.log
[+0.01s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
[+0.01s] DEBUG: Launching X Server
[+0.01s] DEBUG: Launching process 2683: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
[+0.01s] DEBUG: Waiting for ready signal from X server :0
[+0.01s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
[+0.01s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
[+0.91s] DEBUG: Got signal 10 from process 2683
[+0.91s] DEBUG: Got signal from X server :0
[+0.91s] DEBUG: Connecting to XServer :0
[+0.91s] DEBUG: Starting greeter
[+0.91s] DEBUG: Started session 2688 with service 'lightdm', username 'lightdm'
[+0.94s] DEBUG: Session 2688 authentication complete with return value 0: Success
[+0.94s] DEBUG: Greeter authorized
[+0.94s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
[+0.94s] DEBUG: Session 2688 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
[+1.12s] DEBUG: Greeter connected version=1.2.1
[+1.12s] DEBUG: Greeter connected, display is ready
[+1.12s] DEBUG: New display ready, switching to it
[+1.12s] DEBUG: Activating VT 7
[+1.45s] DEBUG: Greeter start authentication
[+1.45s] DEBUG: Started session 2733 with service 'lightdm', username '(null)'
[+1.46s] DEBUG: Session 2733 got 1 message(s) from PAM
[+1.46s] DEBUG: Prompt greeter with 1 message(s)
[+2.98s] DEBUG: Continue authentication
[+2.99s] DEBUG: Session 2733 got 1 message(s) from PAM
[+2.99s] DEBUG: Prompt greeter with 1 message(s)
[+8.36s] DEBUG: Continue authentication
[+8.41s] DEBUG: Session 2733 authentication complete with return value 0: Success
[+8.41s] DEBUG: Authenticate result for user user: Success
[+8.42s] DEBUG: User user authorized
[+8.42s] DEBUG: Greeter requests session xfce
[+8.42s] DEBUG: Using session xfce
[+8.42s] DEBUG: Stopping greeter
[+8.42s] DEBUG: Session 2688: Sending SIGTERM
[+8.46s] DEBUG: Session 2688 exited with return value 0
[+8.46s] DEBUG: Greeter quit
[+8.47s] DEBUG: Dropping privileges to uid 1001
[+8.47s] DEBUG: Restoring privileges
[+8.47s] DEBUG: Dropping privileges to uid 1001
[+8.47s] DEBUG: Writing /home/user/.dmrc
[+8.54s] DEBUG: Restoring privileges
[+8.58s] DEBUG: Starting session xfce as user user
[+8.58s] DEBUG: Session 2733 running command /usr/sbin/lightdm-session startxfce4
[+8.60s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0
[+8.60s] DEBUG: Greeter closed communication channel
[+8.60s] DEBUG: Session 2733 exited with return value 1
[+8.60s] DEBUG: User session quit
[+8.60s] DEBUG: Stopping display
[+8.60s] DEBUG: Sending signal 15 to process 2683
[+8.77s] DEBUG: Process 2683 exited with return value 0
[+8.77s] DEBUG: X server stopped
[+8.77s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
[+8.77s] DEBUG: Releasing VT 7
[+8.77s] DEBUG: Display server stopped
[+8.77s] DEBUG: Display stopped
[+8.77s] DEBUG: Active display stopped, switching to greeter
[+8.77s] DEBUG: Switching to greeter
[+8.77s] DEBUG: Starting new display for greeter
[+8.77s] DEBUG: Starting local X display
[+8.77s] DEBUG: Using VT 7
[+8.77s] DEBUG: Logging to /var/log/lightdm/x-0.log
[+8.77s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
[+8.77s] DEBUG: Launching X Server
[+8.77s] DEBUG: Launching process 2764: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
[+8.77s] DEBUG: Waiting for ready signal from X server :0
[+9.68s] DEBUG: Got signal 10 from process 2764
[+9.68s] DEBUG: Got signal from X server :0
[+9.68s] DEBUG: Connecting to XServer :0
[+9.68s] DEBUG: Starting greeter
[+9.68s] DEBUG: Started session 2769 with service 'lightdm', username 'lightdm'
[+9.70s] DEBUG: Session 2769 authentication complete with return value 0: Success
[+9.71s] DEBUG: Greeter authorized
[+9.71s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
[+9.71s] DEBUG: Session 2769 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
[+9.87s] DEBUG: Greeter connected version=1.2.1
[+9.87s] DEBUG: Greeter connected, display is ready
[+9.87s] DEBUG: New display ready, switching to it
[+9.87s] DEBUG: Activating VT 7
[+9.87s] DEBUG: Stopping greeter display being switched from
[+10.20s] DEBUG: Greeter start authentication
[+10.20s] DEBUG: Started session 2814 with service 'lightdm', username '(null)'
[+10.20s] DEBUG: Session 2814 got 1 message(s) from PAM
[+10.20s] DEBUG: Prompt greeter with 1 message(s)
[+110.97s] DEBUG: Got signal 15 from process 1
[+110.97s] DEBUG: Caught Terminated signal, shutting down
[+110.97s] DEBUG: Stopping display manager
[+110.97s] DEBUG: Stopping seat
[+110.97s] DEBUG: Stopping display
[+110.97s] DEBUG: Session 2769: Sending SIGTERM
[+110.97s] DEBUG: Session 2814 terminated with signal 15
[+110.97s] DEBUG: Session 2814 failed during authentication
[+110.97s] DEBUG: Authenticate result for user (null): Authentication stopped before completion
[+111.04s] DEBUG: Session 2769 exited with return value 15
[+111.04s] DEBUG: Greeter quit
[+111.04s] DEBUG: Sending signal 15 to process 2764
[+111.06s] DEBUG: Process 2764 exited with return value 0
[+111.06s] DEBUG: X server stopped
[+111.06s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
[+111.06s] DEBUG: Releasing VT 7
[+111.06s] DEBUG: Display server stopped
[+111.06s] DEBUG: Display stopped
[+111.06s] DEBUG: Seat stopped
[+111.06s] DEBUG: Display manager stopped
[+111.06s] DEBUG: Stopping daemon
[+111.06s] DEBUG: Exiting with return value 0 Installed packages A web search suggested I need to install additional packages, such as lightdm-gtk-greeter, but I already have it installed. user@laptop:~$ sudo dpkg -l lightdm*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Description
+++-===========================-===========================-======================================================================
ii lightdm 1.2.1-0ubuntu1.1 Display Manager
un lightdm-greeter <none> (no description available)
un lightdm-greeter-example-gtk <none> (no description available)
ii lightdm-gtk-greeter 1.1.5-0ubuntu1 LightDM GTK+ Greeter
un lightdm-gtk-greeter-config <none> (no description available)
user@laptop:~$ Unity-greeter solution (not what I want) Bug 850941 , bug 804171 suggest using unity-greeter with lightdm. This might work, but lightdm does not depend on unity-greeter, and I am trying to use the lightdm-gtk-greeter alternative. Problem interpretation The way I see it, the problem is that the user session is not defined properly. It is started and closes off at once. Relevant snippet from the log: [+8.58s] DEBUG: Starting session xfce as user user
[+8.58s] DEBUG: Session 2733 running command /usr/sbin/lightdm-session startxfce4
[+8.60s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0
[+8.60s] DEBUG: Greeter closed communication channel
[+8.60s] DEBUG: Session 2733 exited with return value 1
[+8.60s] DEBUG: User session quit
[+8.60s] DEBUG: Stopping display
[+8.60s] DEBUG: Sending signal 15 to process 2683
[+8.77s] DEBUG: Process 2683 exited with return value 0
[+8.77s] DEBUG: X server stopped
[+8.77s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
[+8.77s] DEBUG: Releasing VT 7
[+8.77s] DEBUG: Display server stopped
[+8.77s] DEBUG: Display stopped
[+8.77s] DEBUG: Active display stopped, switching to greeter I am using the xfce desktop environment and any working examples would be appreciated. | From man pacman : --noconfirm Bypass any and all “Are you sure?” messages. It’s not a good idea to do this unless you want to run pacman from a script. Note the qualification about using this with care... Arch is a rolling release, which means pacman has to, from time to time, manage some quite complex upgrades. At these times pacman will prompt you to confirm your choices—disregarding these prompts will generally not be a significant issue, but in some cases, as with the recent move from /lib to /usr/lib , a lack of attention will cause major breakage. This is a not a habit you want to cultivate. | {
"source": [
"https://unix.stackexchange.com/questions/52280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
52,313 | I would like to display the completion time of a script. What I currently do is - #!/bin/bash
date ## echo the date at start
# the script contents
date ## echo the date at end This just shows the start and end times of the script. Would it be possible to display fine-grained output like processor time / io time, etc.? | Just use time when you call the script: time yourscript.sh | {
"source": [
"https://unix.stackexchange.com/questions/52313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
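If you want the elapsed time from inside the script rather than from the caller, bash's built-in SECONDS variable is a simple sketch:
#!/bin/bash
SECONDS=0
# ... script contents ...
echo "Script finished in ${SECONDS}s"
For the processor-time breakdown, time yourscript.sh already reports real (wall clock), user and sys CPU time separately.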
52,330 | I have a backup script which I need to run at a particular time of day, so I am using cron for this task, and from within cron I am also trying to redirect the output of the backup script to a logfile. crontab -e */1 * * * * /home/ranveer/backup.sh &>> /home/ranveer/backup.log In the above cron entry I am redirecting both stderr and stdout to a log file. The above cron job executes fine according to syslog and it performs the task mentioned in the backup.sh file, but it doesn't write anything to the log file. /var/log/syslog Oct 19 20:26:01 ranveer CRON[15214]: (ranveer) CMD (/home/ranveer/backup.sh &>> /home/ranveer/backup.log)
ranveer@ranveer:~$ cat backup.log
Fri Oct 19 20:28:01 IST 2012
successfully copied testdir
test.txt successfully copied
-------------------------------------------------------------------------------------
ranveer@ranveer:~$ So, why the output of file is not getting redirected to the file from within cron. | I solved the problem. There are two ways: M1 Change the redirection from &>> to 2>&1 . So now crontab -e looks like */1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1 I believe the above works because by default cron is using sh to run the task instead of bash so &>> is not supported by sh . M2 Change the default shell by adding SHELL=/bin/bash in the crontab -e file. | {
"source": [
"https://unix.stackexchange.com/questions/52330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22728/"
]
} |
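A sketch of why M1 is needed: &>> is a bashism, and a POSIX sh parses cmd &>> file as cmd & (a background job) followed by a bare >> file redirection, so the output never reaches the file:
sh -c 'echo hi &>> /tmp/out'   # in plain sh: runs "echo hi" in the background, then ">> /tmp/out" just creates an empty file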