source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
178,078 | I'm doing a data transfer; the old file system relies deeply on a directory which is now on a different path. This is a git directory which stores code online. I have no rights to move it or rename it. So what I can do is rsync this directory to the same old path. But the directory name also changed. Is there an easy way that I can rsync a directory to a target directory with a different name? | If you want to use rsync to recursively make the dest directory an exact copy of the src directory: rsync -a src/ dest The rsync man page explains how this works: A trailing slash on the source [...] avoid[s] creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name" [...] | {
"source": [
"https://unix.stackexchange.com/questions/178078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
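To illustrate the trailing-slash rule with made-up paths (not taken from the question): the first command below copies the contents of the source straight into the differently named target, while the second would create an extra old-name level inside it.
rsync -a /data/old-name/ /srv/git/expected-name
rsync -a /data/old-name /srv/git/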
178,127 | If you tar a directory recursively, it just uses the order from the OS's readdir . But in some cases it's nice to tar the files sorted. What's a good way to tar a directory sorted alphabetically? Note, for the purpose of this question, gnu-tar on a typical Linux system is fine. | For a GNU tar : --sort=ORDER
Specify the directory sorting order when reading directories.
ORDER may be one of the following:
`none'
No directory sorting is performed. This is the default.
`name'
Sort the directory entries on name. The operating system may
deliver directory entries in a more or less random order, and
sorting them makes archive creation reproducible.
`inode'
Sort the directory entries on inode number. Sorting
directories on inode number may reduce the amount of disk
seek operations when creating an archive for some file
systems. You'll probably also want to look at --preserve-order . | {
"source": [
"https://unix.stackexchange.com/questions/178127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63928/"
]
} |
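A concrete invocation of the option described above, with made-up archive and directory names (requires a reasonably recent GNU tar):
tar --sort=name -czf backup.tar.gz mydir/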
178,162 | Can somebody explain to me why a number with a leading 0 gives this funny behaviour? #!/bin/bash
NUM=016
SUM=$((NUM + 1))
echo "$NUM + 1 = $SUM" Will print: 016 + 1 = 15 | The misunderstanding is that the numbers don't mean what you expect. A leading zero denotes a number with base 8. I.e. 016 is the same as 8#16 . If you want to keep the leading zero then you need 10#016 . > num=016
> echo $((num))
14
> echo $((10#$num))
16 | {
"source": [
"https://unix.stackexchange.com/questions/178162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95215/"
]
} |
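Applied to the original script, a fix might look like this (still assuming bash):
#!/bin/bash
num=016
sum=$(( 10#$num + 1 ))   # force base-10 interpretation
echo "$num + 1 = $sum"   # prints: 016 + 1 = 17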
178,187 | I'm trying to automatically mount a network drive at startup by editing /etc/fstab but it doesn't work. If I execute this line, sudo mount.cifs //192.168.0.67/test /home/pi/test -o username=myname,password=123 it works great. But I don't know how to properly write the same in /etc/fstab . | Each line in the /etc/fstab file contains the following fields separated by spaces or tabs: file_system dir type options dump pass A typical mount point added in /etc/fstab would look like the following: # <file system> <dir> <type> <options> <dump> <pass>
/dev/sda1 / ext4 defaults,noatime 0 1 You can't simply add a mount statement in the file. Add this line to the end of your /etc/fstab file: //192.168.0.67/test /home/pi/test cifs username=myname,password=123,iocharset=utf8,sec=ntlm 0 0 After the /etc/fstab is edited you can test by mounting the filesystem with mount -a which will check fstab and attempt to mount everything that is present. | {
"source": [
"https://unix.stackexchange.com/questions/178187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98557/"
]
} |
178,189 | Currently I create a copy of my log files like this: # tar -cvzf /var/www/vhosts/example.com/httpdocs/myfiles.tar.gz
/var/www/vhosts/example.com/logs/access_log
/var/www/vhosts/example.com/httpdocs/app/tmp/logs/error.log
/var/log/mysqld.log
/var/log/messages
/var/log/httpd/access_log
/var/log/httpd/suexec_log
/var/log/httpd/error_log
/var/log/sw-cp-server/error_log
/usr/local/psa/var/log/xferlog
/etc/php.ini But this creates a tar file with the directory structure. To avoid the folder structure it seems like I should cd for each file, so all files will be saved to the tar file without subfolders. (Also there is a problem with tar.gz files: the tar command doesn't permit updating the archive file if it is compressed.) But in this case there will be multiple files with the same name, for example 2 files with the name access_log. So I need to change the destination log file name. For example /var/www/vhosts/example.com/logs/access_log to -var-www-vhosts-example.com-logs-access_log
/var/log/httpd/access_log to -var-log-httpd-access_log Is it possible to archive these files without the directory structure and with the file name changes? Note that the files exist in different folders. | Each line in the /etc/fstab file contains the following fields separated by spaces or tabs: file_system dir type options dump pass A typical mount point added in /etc/fstab would look like the following: # <file system> <dir> <type> <options> <dump> <pass>
/dev/sda1 / ext4 defaults,noatime 0 1 You can't simply add a mount statement in the file. Add this line to the end of your /etc/fstab file: //192.168.0.67/test /home/pi/test cifs username=myname,password=123,iocharset=utf8,sec=ntlm 0 0 After the /etc/fstab is edited you can test by mounting the filesystem with mount -a which will check fstab and attempt to mount everything that is present. | {
"source": [
"https://unix.stackexchange.com/questions/178189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31892/"
]
} |
178,638 | I'm using Ubuntu 12.04, and when I right-click on my flash drive icon (in the Unity left bar) I get two options that have me confused: eject and safely remove . The closest I came to an answer was this forum thread , which concludes that (for a flash drive) they are both equal and also equivalent to using the umount command. However, this last assertion seems to be false. If I use umount from the console to unmount my flash drive, and then I use the command lsblk , I still see my device (with nothing under MOUNTPOINT, of course). On the other hand, if I eject or safely remove my flash drive, lsblk does not list it anymore. So, my question is, what would be the console command/commands that would really reproduce the behaviour of eject and safely remove ? | If you are using systemd then use the udisksctl utility with the power-off option: power-off Arranges for the drive to be safely removed and powered off. On the OS side this includes ensuring that no process is using the drive, then requesting that in-flight buffers and caches are committed to stable storage. I would recommend first unmounting all filesystems on that USB drive. This can also be done with udisksctl , so the steps would be: udisksctl unmount -b /dev/sda1
udisksctl power-off -b /dev/sda If you are not using systemd then old good udisks should work: udisks --unmount /dev/sda1
udisks --detach /dev/sda | {
"source": [
"https://unix.stackexchange.com/questions/178638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98894/"
]
} |
178,677 | Recently I've been echoing short sentences to a tree_hole file. I was using echo 'something' >> tree_hole to do this job. But I was always worried about what would happen if I mistyped > instead of >> , since I did this often. So I made a global bash function of my own in the bashrc: function th { echo "$1" >> /Users/zen1/zen/pythonstudy/tree_hole; }
export -f th But I'm wondering if there is another simple way to append lines to the end of a file.
Because I may need to use that often on other occasions. Is there any? | Set the shell's noclobber option: bash-3.2$ set -o noclobber
bash-3.2$ echo hello >foo
bash-3.2$ echo hello >foo
bash: foo: cannot overwrite existing file
bash-3.2$ | {
"source": [
"https://unix.stackexchange.com/questions/178677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
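When noclobber is set, bash (and other POSIX shells) still let you overwrite a file deliberately with the >| operator, for example:
set -o noclobber
echo hello >| foo     # explicit overwrite even though foo exists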
178,752 | I know that to capture a pipeline's contents at an intermediate stage of processing, we use tee as ls /bin /usr/bin | sort | uniq | tee abc.txt | grep out , but what if I don't want to redirect the contents after uniq to abc.txt but to the screen (through stdout, of course), so that as an end result I'll have on screen the intermediate contents after uniq as well as the contents after grep. | sometimes /dev/tty can be used for that... ls /bin /usr/bin | sort | uniq | tee /dev/tty | grep out | wc | {
"source": [
"https://unix.stackexchange.com/questions/178752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96181/"
]
} |
178,857 | I am trying to run grep against a list of a few hundred files: $ head -n 3 <(cat files.txt)
admin.php
ajax/accept.php
ajax/add_note.php However, even though I am grepping for a string that I know is found in the files, the following does not search the files: $ grep -i 'foo' <(cat files.txt)
$ grep -i 'foo' admin.php
The foo was found I am familiar with the -f flag which will read the patterns from a file. But how do I read the input files? I had considered the horrible workaround of copying the files to a temporary directory as cp seems to support the <(cat files.txt) format, and from there grepping the files. Shirley there is a better way. | You seem to be grepping the list of filenames, not the files themselves. <(cat files.txt) just lists the files. Try <(cat $(cat files.txt)) to actually concatenate them and search them as a single stream, or grep -i 'foo' $(cat files.txt) to give grep all the files. However, if there are too many files on the list, you may have problems with the number of arguments. In that case I'd just write while read filename; do grep -Hi 'foo' "$filename"; done < files.txt | {
"source": [
"https://unix.stackexchange.com/questions/178857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
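If GNU xargs is available, another option that sidesteps the argument-length limit (assuming one filename per line in files.txt) is:
xargs -d '\n' grep -Hi 'foo' < files.txt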
178,862 | I have multiple files, something like this (in reality I have 80): file1.dat 2 5
6 9
7 1 file2.dat 3 7
8 4
1 3 I want to end up with a file containing all of the second lines. i.e. output.dat 6 9
8 4 What I have so far loops through the file names but then overwrites the file before it, e.g. the output of the above files would just be 8 4 . My shell script looks like this: post.sh TEND = 80
TINDX = 0
while [ $TINDX - lt $TEND]; do
awk '{ print NR==2 "input-$TINDX.dat > output.dat
TINDX = $((TINDX+1))
done | Remove the while loop and make use of shell brace expansion and also FNR , a built-in awk variable: awk 'FNR==2{print $0 > "output.dat"}' file{1..80}.dat | {
"source": [
"https://unix.stackexchange.com/questions/178862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99064/"
]
} |
179,238 | I'm currently sifting through a lot of unfamiliar logs looking for some issues. The first file I look at is Events.log, and I get at least three pages in less which appear to display the same event at different times – an event that appears to be fairly benign.
I would like to filter this event out, and currently I quit less and do something like grep -v "event text" Events.log | less This now brings up a number of other common, uninteresting events that I would also like to filter out. Is there a way I can grep -v inside of less ? Rather than having to do egrep -v "event text|something else|the other thing|foo|bar" Events.log | less It strikes me as a useful feature when looking at any kind of log file – and if less isn't the tool, is there another with the qualities I seek? Just a less -style viewer with a built-in grep . | less has very powerful pattern matching. From the man page : & pattern Display only lines which match the pattern ;
lines which do not match the pattern are not displayed. If pattern is empty
(if you type & immediately followed by ENTER ),
any filtering is turned off, and all lines are displayed.
While filtering is in effect,
an ampersand is displayed at the beginning of the prompt,
as a reminder that some lines in the file may be hidden. Certain characters are special as in the / command † : ^N or ! Display only lines which do NOT match the pattern . ^R Don't interpret regular expression metacharacters;
that is, do a simple textual comparison. ____________ † Certain characters are special
if entered at the beginning of the pattern ;
they modify the type of search
rather than become part of the pattern . (Of course ^N and ^R represent Ctrl + N and Ctrl + R , respectively.) So, for example, &dns will display only lines that match the pattern dns ,
and &!dns will filter out (exclude) those lines,
displaying only lines that don't match the pattern. It is noted in the description of the / command that The pattern is a regular expression,
as recognized by the regular expression library supplied by your system. So &eth[01] will display lines containing eth0 or eth1 &arp.*eth0 will display lines containing arp followed by eth0 &arp|dns will display lines containing arp or dns And the ! can invert any of the above.
So the command you would want to use for the example in your question is: &!event text|something else|the other thing|foo|bar Also use / pattern and ? pattern to search (and n / N to go to next/previous). | {
"source": [
"https://unix.stackexchange.com/questions/179238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67170/"
]
} |
179,291 | My ~/.bashrc contains exactly one line: source my_config/my_actual_bashrc.sh Is there an equivalent with .inputrc , so my customizations can be in a separate location, and "called" by ~/.inputrc ? | According to man readline : $include This directive takes a single filename as an argument and reads commands and bindings from that file. For example, the following directive would read /etc/inputrc : $include /etc/inputrc | {
"source": [
"https://unix.stackexchange.com/questions/179291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80154/"
]
} |
179,303 | When it comes to CDs you have virtual CD software in which you load an .iso and it works like a CD-ROM. But when it comes to USB, is there something similar? Is it possible to use a directory to simulate a USB storage device? Like mounting/unmounting that directory to simulate a plug/unplug of a USB storage device? Purpose: to read (using an application) music or video files from a USB drive. The application reacts only when a USB drive is inserted/removed. Or any other way could help. But files seem to be the Linux way.
Or if there are no tools yet for this: how feasible would it be to write one? | According to man readline : $include This directive takes a single filename as an argument and reads commands and bindings from that file. For example, the following directive would read /etc/inputrc : $include /etc/inputrc | {
"source": [
"https://unix.stackexchange.com/questions/179303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54297/"
]
} |
179,327 | I am trying to ls some files with a pattern in a directory. I only want to scan the first level, not recursively. My script: for i in $(ls $INCOMINGDIR/*$BUSSINESSDATE*)
do
echo $i;
done Above command scan recursively. How can make it only to scan the first level directory? | Don't parse ls . Also don't use ALL_CAPS_VARS for i in "$incoming_dir"/*"$business_date"*; do Interactively, ls has a -d option that prevents descending into subdirectories: ls -d $INCOMINGDIR/*$BUSSINESSDATE* | {
"source": [
"https://unix.stackexchange.com/questions/179327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72369/"
]
} |
179,451 | I have a folder of around 180 GB, and I need to zip it like: zip -p password /Volumes/GGZ/faster/mybigfolder/* /Volumes/Storage\ 4/archive.zip But it says: zip warning: name not matched: /Volumes/Storage 4/archive.zip So how do I do this? On another note, archive.zip does not exist, but I'm trying to create it. | This error can also be caused by symbolic links in the directory tree being compressed. If these don't have correct destinations (perhaps because the directory has been moved or copied from elsewhere), zip will attempt to follow the symlink to archive the target file. You can avoid this (and also get the effect you probably want anyway, which is not to archive multiple copies of the file) by using the -y (or --symlinks ) option. | {
"source": [
"https://unix.stackexchange.com/questions/179451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
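For the original example, a possible invocation would be along these lines (note that the archive name comes before the input paths; -r recurses, -y stores symlinks as links, -e prompts for the password):
zip -r -y -e "/Volumes/Storage 4/archive.zip" /Volumes/GGZ/faster/mybigfolder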
179,604 | I don't have much experience; I'm just trying to understand how processes are interpreted down to the hardware from the user level. So when a command is fired from a shell, fork() inherits a child process of it and exec() loads the child process to the memory and executes. If the child process contains all the attributes of the parent process (which is the original process), then what is the need of this child process? The original process also could have been loaded to the memory. Does this fork and exec concept apply to all executable programs in UNIX? Like for shell scripts also, or only for commands? Does it also apply to shell builtin commands? When is the copy-on-write concept used when I execute a command/script? Sorry for asking many questions at a time, but all these questions come to my mind at once when I think about any command execution. | So when a command is fired from a shell, fork() inherits a child
process of it and exec() loads the child process to the memory and
executes. Not quite. fork() clones the current process, creating an identical child. exec() loads a new program into the current process, replacing the existing one. My question is: If the child process contains all the attributes of the parent process (which is the original process), then what is the need of this
child process? The original process also could have been loaded to the
memory. The need is because the parent process does not want to terminate yet; it wants a new process to go off and do something at the same time that it continues to execute as well. Does this fork and exec concept apply to all executable programs in
UNIX? Like for shell scripts also, or only for commands? Does it also
apply to shell builtin commands? For external commands, the shell does a fork() so that the command runs in a new process. Builtins are just run by the shell directly. Another notable command is exec , which tells the shell to exec() the external program without first fork() ing. This means that the shell itself is replaced with the new program, and so is no longer there for that program to return to when it exits. If you say exec true , then /bin/true will replace your shell, and immediately exit, leaving nothing running in your terminal anymore, so it will close. When is the copy-on-write concept used when I execute a command/script? Back in the stone age, fork() actually had to copy all of the memory in the calling process to the new process. Copy on Write is an optimization where the page tables are set up so that the two processes start off sharing all of the same memory, and only the pages that are written to by either process are copied when needed. | {
"source": [
"https://unix.stackexchange.com/questions/179604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99546/"
]
} |
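A quick way to watch exec() replace a process from the shell, using a throwaway shell so your own session is not affected:
bash -c 'echo "shell PID: $$"; exec sleep 30'
# while it runs, ps -p <that PID> shows sleep, not bash: same process, new program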
179,671 | Like jumping to the end of a line is Ctrl + E , where E can be thought of as end, why does it jump to the start using A ? | There are two sides to the question, the technical side and the historical side. The technical answer is because bash uses GNU Readline . In readline Control-a is bound to the function beginning-of-line , you can show this with: $ bind -q beginning-of-line
beginning-of-line can be invoked via "\C-a", "\M-OH", "\M-[1~", "\M-[7~", "\M-[H". where \C-a means "Control-a". bind -p will show all bindings (be careful using bind , it's easy to break your keyboard if you accidentally provide additional options or arguments). Some of the above bindings are added by default, others I have added (via .inputrc ) for various terminals I have used. Since bash-2.0, if the terminal termcap contains the capabilities kh , and kH then Home and End will be set to beginning-of-line and end-of-line . Both bash and readline are developed by Chet Ramey , an Emacs user and also the developer of ce an Emacs clone. (Please note, this endeavours to summarise many years of history from many decades ago, and glosses over some details.) Now, why is it Control-a in particular? Readline uses by default Emacs-like bindings . Control-a in GNU Emacs invokes move-beginning-of-line , what we consider to be the "home" function now. Stallman and Steel's original EMACS was inspired by Fred Wright's E editor (an early WYSIWYG editor) and TECO (a cryptic modal editor/language) -- EMACS was a set of macros for TECO. See Essential E [PDF] (from SAIL , 1980). E however used Control-Form for "beginning of line", this was on the "DataDisc" keyboard which had a Control key, and a Form key. The space-cadet keyboard of the time (lacking a Home key by the way, though it had an End ) is commonly blamed for the Emacs keyboard interface. One of the desirable features of EMACS was its use of TECO's Control-R "real-time" line editing mode (TECO predates CRT/keyboard terminals), you can see the key bindings on page 6 of the MIT AI Lab 1978 ITS Introduction to the EMACS editor [scanned PDF], where ┌ is used to denote Control. In this mode, the key bindings were all control sequences, largely mnemonic: Control-E End of this line , Control-P move to previous line , Control-N move to next line , Control-B backward one character , and not least Control-A move to beginning of this line , Costas' suggestion of "first letter of the alphabet" for this is as good as any. (A similar key-binding is in the tvlib macro package which aimed to make EMACS behave like the TVEDIT editor, binding control A and E to backward and forward sentence , but used different sequences for beginning and end of line.) The Control-A/Control-E bindings in "^R mode" were implemented directly in the ITS TECO (1983, version 1208, see the _teco_.tgz archive at the nocrew PDP10/ITS site, or on Github ), though I cannot determine more accurately when they first appeared, and the TECO source doesn't indicate why any particular bindings were chosen. The 1978 MIT EMACS document above implies that in 1978 EMACS did not use TECO native Control-A/Control-E, it's possible that the scrlin macro package (screen line) implemented these. To recap: bash uses readline readline key bindings follow Emacs/EMACS the original EMACS was created with TECO, inheriting many features TECO's interactive mode macros used (mostly) mnemonic control key bindings, and "start of line" ended up assigned to Control-A See also: http://www.gnu.org/gnu/rms-lisp.html http://xahlee.info/kbd/keyboard_hardware_and_key_choices.html http://blog.djmnet.org/2008/08/05/origin-of-emacs/ http://www.jwz.org/doc/emacs-timeline.html http://www.multicians.org/mepap.html * | {
"source": [
"https://unix.stackexchange.com/questions/179671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80904/"
]
} |
179,851 | I find it very convenient to install packages on a new machine through package files like brewfiles, caskfiles, dockerfiles, package.json etc. Is there an alternative to this for apt-get since I still just use it through commandline with apt-get install pkg1 pkg2 pkg3… ? | As specified in the comments of your question, you can write a simple text file, listing the packages to install: iceweasel
terminator
vim Assuming this is stored in packages.txt , then run the following command: xargs sudo apt-get install <packages.txt xargs is used to pass the package names from the packages.txt file to the command line. From the xargs manual: xargs reads
items
from the standard input, delimited by blanks (which can be protected
with double or single quotes or a backslash) or newlines, and executes
the command (default is /bin/echo ) one or more times with any initial
arguments followed by items read from standard input. | {
"source": [
"https://unix.stackexchange.com/questions/179851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99807/"
]
} |
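To produce such a packages.txt from a machine that is already set up, one option is apt-mark, which lists the manually installed packages:
apt-mark showmanual > packages.txt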
179,954 | I am running Ubuntu 12.04 on my laptop using VMware Player. I am not sure why, but I have an account called "User Account" in addition to the account that I usually log in to when using Ubuntu. Well, that was just a side comment, but basically all I am trying to do is install the ncurses library on Ubuntu. I have tried installing ncurses using the following command lines: sudo apt-get install libncurses5-dev
sudo apt-get install ncurses-dev When I tried installing ncurses twice using the above commands I received the following prompt in the terminal: [sudo] password for username When I type in my password I receive the following message: username is not in the sudoers file. This incident will be reported. So far I have tried enabling the root user ("Super User") account by following these instructions . Here are some of the things the link suggested to do: Allow another user to run sudo. Type the following in the command line: sudo adduser username sudo
"source": [
"https://unix.stackexchange.com/questions/179954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99906/"
]
} |
180,008 | My colleague is generating log files with a preceding date format like 2015120 , which represents January as 1 instead of 01 . The usual way I deal with this kind of issue is the date command, like date +'%Y%m%d' . But I checked the date man page, and it turns out it doesn't mention representing January without a preceding 0. So I'm wondering: is there another way to represent a date like 2015120 in Linux? | With GNU, FreeBSD or OS/X date (or date implementations that use the system's libc 's strftime() where that is the GNU libc ), adding a hyphen - after % prevents numeric fields from being padded with zeroes: $ date +'%Y%-m%d'
2015120 From man date on a GNU system: By default, date pads numeric fields with zeroes. The following
optional flags may follow `%': - (hyphen) do not pad the field If your system date does not support that, you can use perl : $ perl -MTime::Piece -e '
$t = localtime;
print $t->year, $t->mon, $t->mday;
'
2015122 | {
"source": [
"https://unix.stackexchange.com/questions/180008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
180,077 | What is the smallest interval for the watch command? The man page and Google searches do not indicate what the smallest interval lower limit is. I found through experimentation it can be smaller than 1 second. To test, I ran this command on a firewall: watch -n 0.1 cat /sys/class/net/eth1/statistics/rx_bytes It clearly updates faster than one second, but it is not clear if it is really doing 100ms updates. | What platform are you on? On my Linux (Ubuntu 14.10) the man page says: -n, --interval seconds
Specify update interval. The command will not allow quicker
than 0.1 second interval, in which the smaller values are converted.
"source": [
"https://unix.stackexchange.com/questions/180077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99984/"
]
} |
180,271 | I have a couple of files with the ".old" extension. How can I remove the ".old" extension without removing the file? I can do it manually but with more work: mv file1.key.old file1.key
mv file2.pub.old file2.pub
mv file3.jpg.old file3.jpg
mv file4.jpg.old file4.jpg
(etc...) Will the command work with other extensions too? For example: mv file1.MOV.mov file1.MOV
mv file2.MOV.mov file2.MOV
mv file3.MOV.mov file3.MOV
(etc...) or better: mv file1.MOV.mov file1.mov
mv file2.MOV.mov file2.mov
mv file3.MOV.mov file3.mov
(etc...) | Use bash's parameter substitution mechanism to remove matching suffix pattern: for file in *.old; do
mv -- "$file" "${file%%.old}"
done | {
"source": [
"https://unix.stackexchange.com/questions/180271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99659/"
]
} |
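The same parameter-expansion trick covers the second example from the question; a sketch for the .MOV.mov files:
for file in *.MOV.mov; do
  mv -- "$file" "${file%.MOV.mov}.mov"
done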
180,492 | Listening to TCP port 0 allocates a free port number on the system for me. But what happens when I try to connect to TCP port 0? The obvious answer is: "It doesn't work": $ nc localhost 0
nc: port number too small: 0 Where in the system is this handled? In the TCP stack of the OS kernel? Are there Unixes where connecting to TCP port 0 would work? | Just to make sure we're on the same page (your question is ambiguous this way), asking to bind TCP on port 0 indicates a request to dynamically generate an unused port number. In other words, the port number you're actually listening on after that request is not zero. There's a comment about this in [linux kernel source]/net/ipv4/inet_connection_sock.c on inet_csk_get_port() : /* Obtain a reference to a local port for the given sock,
* if snum is zero it means select any available local port.
*/ Which is a standard Unix convention. There could be systems that will actually allow the use of port 0, but that would be considered a bad practice. This behaviour is not officially specified by POSIX, IANA, or the TCP protocol, however. 1 You may find this interesting . That's why you cannot sensibly make a TCP connection to port zero. Presumably nc is aware of this and informs you you're making a nonsensical request. If you try this in native code: int fd = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_port = 0;
inet_aton("127.0.0.1", &addr.sin_addr);
if (connect(fd, (const struct sockaddr*)&addr, sizeof(addr)) == -1) {
fprintf(stderr,"%s", strerror(errno));
} You get the same error you would trying to connect to any other unavailable port: ECONNREFUSED , "Connection refused". So in reply to: Where in the system is this handled? In the TCP stack of the OS kernel? Probably not; it doesn't require special handling. I.e., if you can find a system that allows binding and listening on port 0, you could presumably connect to it. 1. But IANA does refer to port 0 as "Reserved" ( see here ). Meaning, this port should not be used online. That makes it okay with regard to the dynamic assignment convention (since it won't actually be used). Stipulating that specifically as a purpose would probably be beyond the scope of IANA; in essence, operating systems are free to do what they want with it, including nothing. | {
"source": [
"https://unix.stackexchange.com/questions/180492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100270/"
]
} |
180,497 | The project I am currently working on requires me to fetch all the info about the drives installed in the system, like total capacity, form factor, SSD or HDD, rotational speed, interface type, etc. I searched a lot and I could not find any command that does this. As a comparison, I found that in Windows there are applications that would satisfy my requirement. How do they do this? Just out of interest, is there something similar in Linux too? These two OSes are mainly used for servers, so I think the hardware info should play a more important role than on a PC. | Just to make sure we're on the same page (your question is ambiguous this way), asking to bind TCP on port 0 indicates a request to dynamically generate an unused port number. In other words, the port number you're actually listening on after that request is not zero. There's a comment about this in [linux kernel source]/net/ipv4/inet_connection_sock.c on inet_csk_get_port() : /* Obtain a reference to a local port for the given sock,
* if snum is zero it means select any available local port.
*/ Which is a standard Unix convention. There could be systems that will actually allow the use of port 0, but that would be considered a bad practice. This behaviour is not officially specified by POSIX, IANA, or the TCP protocol, however. 1 You may find this interesting . That's why you cannot sensibly make a TCP connection to port zero. Presumably nc is aware of this and informs you you're making a nonsensical request. If you try this in native code: int fd = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_port = 0;
inet_aton("127.0.0.1", &addr.sin_addr);
if (connect(fd, (const struct sockaddr*)&addr, sizeof(addr)) == -1) {
fprintf(stderr,"%s", strerror(errno));
} You get the same error you would trying to connect to any other unavailable port: ECONNREFUSED , "Connection refused". So in reply to: Where in the system is this handled? In the TCP stack of the OS kernel? Probably not; it doesn't require special handling. I.e., if you can find a system that allows binding and listening on port 0, you could presumably connect to it. 1. But IANA does refer to port 0 as "Reserved" ( see here ). Meaning, this port should not be used online. That makes it okay with regard to the dynamic assignment convention (since it won't actually be used). Stipulating that specifically as a purpose would probably be beyond the scope of IANA; in essence, operating systems are free to do what they want with it, including nothing. | {
"source": [
"https://unix.stackexchange.com/questions/180497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82930/"
]
} |
180,613 | I have a parent folder named "parent_folder" with a lot of subfolders; in these subfolders is a file named "foo.mp4". I can find these files easily by doing this: mymacbook:parent_folder username$ find ./ -name "foo.mp4" -exec echo {} \; Now that returns the path of each file, relative to parent_folder/ ./path/to/foo.mp4 How can I return just the path, without the filename? | With GNU find: find . -name foo.mp4 -printf '%h\n' With other find s, provided directory names don't contain newline characters: find . -name foo.mp4 | sed 's|/[^/]*$||' Or: find . -name foo.mp4 -exec dirname {} \; though that means running one dirname command per file. If you need to run a command on that path , you can do (standard syntax): find . -name "featured.mp4" -exec sh -c '
for file do
dir=${file%/*}
ffmpeg -i "$file" -c:v libvpx -b:v 1M -c:a libvorbis "$dir" featured.webm
done' sh {} + Though in this case, you may be able to use -execdir (a BSD extension also available in GNU find ), which chdir() s to the file's directory: find . -name "featured.mp4" -execdir \
ffmpeg -i {} -c:v libvpx -b:v 1M -c:a libvorbis . featured.webm \; Beware though that while the GNU implementation of find will expand {} to ./filename here, BSD ones expand to filename . It's OK here as the filename is passed as argument to an option and is always featured.mp4 anyway, but for other usages you may have to take into account that the file name may start with - or + (and be understood as an option by the command) or contain = (and be understood as a variable assignment by awk for instance), or other characters causing this kind of problem with perl -p/n (not all of them fixed by GNU find 's ./ prefix though in that case), etc. | {
"source": [
"https://unix.stackexchange.com/questions/180613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100350/"
]
} |
180,663 | How can I select first occurrence between two patterns including them. Preferably using sed or awk . I have: text
something P1 something
content1
content2
something P2 something
text
something P1 something
content3
content4
something P2 something
text I want the first occurrence of the lines between P1 and P2 (including P1 line and P2 line): something P1 something
content1
content2
something P2 something | sed '/P1/,/P2/!d;/P2/q' ...would do the job portably by d eleting all lines which do ! not fall within the range, then q uitting the first time it encounters the end of the range. It does not fail for P2 preceding P1, and it does not require GNU specific syntax to write simply. | {
"source": [
"https://unix.stackexchange.com/questions/180663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100386/"
]
} |
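For comparison, an equivalent approach in awk (a sketch against the same sample input; like the sed version it stops at the first P2 that follows a P1):
awk '/P1/{p=1} p{print} p&&/P2/{exit}' file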
180,818 | I had similar issues before but I don't remember how I solved them. When I try to copy something to a USB stick with FAT, it stops near the end, sometimes at 100%. And of course, when I take the memory stick somewhere else, it doesn't contain the complete file (the file is a movie!). I tried to mount the device with mount -o flush, but I get the same issue. Also, I did format the USB stick with a new FAT partition... Any idea what I could do? p.s.
I believe it's not related to the OS, which is Debian, and I believe that copying from the SSD drive isn't what makes it get stuck.
"source": [
"https://unix.stackexchange.com/questions/180818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
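To make the suggested setting survive a reboot, the line can be appended to the sysctl configuration (the exact file and location may vary by distro, as the answer notes):
echo 'vm.dirty_bytes = 15000000' | sudo tee -a /etc/sysctl.conf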
180,900 | I am using screen /dev/tty-MyDevice to look at traffic on my serial port. Pressing Ctrl + D does not cause the screen to terminate. What I have to do in order to terminate it? | Use the screen quit command (normally ctrl-A \ ). | {
"source": [
"https://unix.stackexchange.com/questions/180900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18578/"
]
} |
180,901 | I've been using sed for quite some time, but here is a quirk I came across which I am not able to resolve. Let me explain my problem with the actual case. Scene#1 printf "ls" | xclip -selection clipboard
echo "ls" | xclip -selection clipboard In the first command, I pipe printf output to xclip so that it gets copied to the clipboard. Now, printf , unlike echo does not insert a new line at the end by default. So, if I paste this content into terminal, the ls command that is copied does not automatically run. In the second, there is a new line at the end, so pasting the clipboard content also results in the running of the command in the clipboard. This is undesirable for me. So, I wanted to remove the newline using sed , but it failed, as explained in the scene below. Scene#2 echo "ls" | sed -r 's/\n//g' | xclip -selection clipboard The content in the clipboard still contains new-line. When I paste it into terminal, the command automatically runs. I also tried removing carriage return character \r . But nada. It seems I am missing something very crucial/basic here. | sed delimits on \n ewlines - they are always removed on input and reinserted on output. There is never a \n ewline character in a sed pattern space which did not occur as a result of an edit you have made. Note: with the exception of GNU sed 's -z mode... Just use tr : echo ls | tr -d \\n | xclip -selection clipboard Or, better yet, forget sed altogether: printf ls | xclip -selection clipboard | {
"source": [
"https://unix.stackexchange.com/questions/180901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89385/"
]
} |
180,943 | I'm on a Mac but I think this is generally Unix-applicable. I'm in the process of learning shell scripting and there's something I seem to be missing. When I'm in the ordinary terminal, I can use scripting syntax like for loops and such in conjunction with commands to do stuff. But.... bash opens an interpreter for shell scripting. Which is where I get confused, because isn't the terminal already an interpreter for shell scripting, as demonstrated by the fact that the scripting works when given to stdin? Bonus question: how is bash different from bash -i , which according to man "starts an interactive session".....isn't that what happens when you just enter bash on its own? Which, to my eye is no different than being in the normal terminal in the first place... | When you launch a terminal it will always run some program inside it. That program will generally by default be your shell. On OS X, the default shell is Bash. In combination that means that when you launch Terminal you get a terminal emulator window with bash running inside it (by default). You can change the default shell to something else if you like, although OS X only ships with bash and tcsh . You can choose to launch a custom command in a new terminal with the open command : open -b com.apple.terminal somecommand In that case, your shell isn't running in it, and when your custom command terminates that's the end of things. If you run bash inside your terminal that is already running bash , you get exactly that: one shell running another. You can exit the inner shell with Ctrl-D or exit and you'll drop back to the shell you started in. That can sometimes be useful if you want to test out configuration changes or customise your environment temporarily — when you exit the inner shell, the changes you made go away with it. You can nest them arbitrarily deeply. If you're not doing that, there's no real point in launching another one, but a command like bash some-script.sh will run just that script and then exit, which is often useful. The differences between interactive and non-interactive shells are a bit subtle and mostly deal with which configuration files are loaded, which error behaviours there are, and whether aliases and similar are enabled. The rough principle is that an interactive shell gives you the settings you'd want for sitting in front of it, while a non-interactive shell gives you what you'd want for a standalone script. All of the differences are documented explicitly in the Bash Reference Manual , and also in a dedicated question on this site . For the most part, you don't need to care. There's not often a reason to launch another shell, and when you do you'll have a specific purpose in mind and know what to do with it. | {
"source": [
"https://unix.stackexchange.com/questions/180943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17563/"
]
} |
180,985 | I'm trying to copy the files and subfolders from folder A without A itself. For instance, folder A contains the following: | file1.txt
| file2.txt
| subfolder1 Executing the next command gives me the wrong result: sudo cp -r /home/username/A/ /usr/lib/B/ The result is /usr/lib/B/A/...copied files... instead of...
/usr/lib/B/...copied files... How can I reach the expected result without the origin folder? | advanced cp cp -r /home/username/A/. /usr/lib/B/ This is especially great because it works no matter whether the target directory already exists. shell globbing If there are not too many objects in the directory then you can use shell globbing: mkdir -p /usr/lib/B/
cp -r /home/username/A/* /usr/lib/B/ rsync rsync -a /home/username/A/ /usr/lib/B/ The / at the end of the source path is important; works no matter whether the target directory already exists. find mkdir -p /usr/lib/B/
find /home/username/A/ -mindepth 1 -maxdepth 1 -exec cp -r -t /usr/lib/B/ {} + or if you don't need empty subdirectories: find /home/username/A/ -mindepth 1 -type f -exec cp --parents -t /usr/lib/B/ {} + (without mkdir ) | {
"source": [
"https://unix.stackexchange.com/questions/180985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100580/"
]
} |
181,001 | Suppose, for example, you have a shell script similar to: longrunningthing &
p=$!
echo Killing longrunningthing on PID $p in 24 hours
sleep 86400
echo Time up!
kill $p Should do the trick, shouldn't it? Except that the process may have terminated early and its PID may have been recycled, meaning some innocent job gets a bomb in its signal queue instead. In practice this possibly doesn't matter, but it's worrying me nonetheless. Hacking longrunningthing to drop dead by itself, or keeping/removing its PID on the FS, would do, but I'm thinking of the generic situation here. | Best would be to use the timeout command if you have it, which is meant for that: timeout 86400 cmd
$p = fork();
die "fork: $!\n" unless defined($p);
if ($p) {
$SIG{ALRM} = sub {
kill "TERM", $p;
exit 124;
};
alarm(86400);
wait;
exit (WIFSIGNALED($?) ? WTERMSIG($?)+128 : WEXITSTATUS($?))
} else {exec @ARGV}' cmd There's a timelimit command at http://devel.ringlet.net/sysutils/timelimit/ (predates GNU timeout by a few months). timelimit -t 86400 cmd That one uses an alarm() -like mechanism but installs a handler on SIGCHLD (ignoring stopped children) to detect the child dying. It also cancels the alarm before running waitpid() (that doesn't cancel the delivery of SIGALRM if it was pending, but the way it's written, I can't see it being a problem) and kills before calling waitpid() (so can't kill a reused pid). netpipes also has a timelimit command. That one predates all the other ones by decades, takes yet another approach, but doesn't work properly for stopped commands and returns a 1 exit status upon timeout. As a more direct answer to your question, you could do something like: if [ "$(ps -o ppid= -p "$p")" -eq "$$" ]; then
kill "$p"
fi That is, check that the process is still a child of ours. Again, there's a small race window (in between ps retrieving the status of that process and kill killing it) during which the process could die and its pid be reused by another process. With some shells ( zsh , bash , mksh ), you can pass job specs instead of pids. cmd &
sleep 86400
kill %
wait "$!" # to retrieve the exit status That only works if you spawn only one background job (otherwise getting the right jobspec is not always possible reliably). If that's an issue, just start a new shell instance: bash -c '"$@" & sleep 86400; kill %; wait "$!"' sh cmd That works because the shell removes the job from the job table upon the child dying. Here, there should not be any race window since by the time the shell calls kill() , either the SIGCHLD signal has not been handled and the pid can't be reused (since it has not been waited for), or it has been handled and the job has been removed from the process table (and kill would report an error). bash 's kill at least blocks SIGCHLD before it accesses its job table to expand the % and unblocks it after the kill() . Another option to avoid having that sleep process hanging around even after cmd has died, with bash or ksh93 is to use a pipe with read -t instead of sleep : {
{
cmd 4>&1 >&3 3>&- &
printf '%d\n.' "$!"
} | {
read p
read -t 86400 || kill "$p"
}
} 3>&1 That one still has race conditions, and you lose the command's exit status. It also assumes cmd doesn't close its fd 4. You could try implementing a race-free solution in perl like: perl -MPOSIX -e '
$p = fork();
die "fork: $!\n" unless defined($p);
if ($p) {
$SIG{CHLD} = sub {
$ss = POSIX::SigSet->new(SIGALRM); $oss = POSIX::SigSet->new;
sigprocmask(SIG_BLOCK, $ss, $oss);
waitpid($p,WNOHANG);
exit (WIFSIGNALED($?) ? WTERMSIG($?)+128 : WEXITSTATUS($?))
unless $? == -1;
sigprocmask(SIG_UNBLOCK, $oss);
};
$SIG{ALRM} = sub {
kill "TERM", $p;
exit 124;
};
alarm(86400);
pause while 1;
} else {exec @ARGV}' cmd args... (though it would need to be improved to handle other types of corner cases). Another race-free method could be using process groups: set -m
((sleep 86400; kill 0) & exec cmd) However note that using process groups can have side-effects if there's I/O to a terminal device involved. It has the additional benefit though to kill all the other extra processes spawned by cmd . | {
"source": [
"https://unix.stackexchange.com/questions/181001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100598/"
]
} |
181,067 | dmesg is a command to read the contents from /var/log/dmesg .
The nice thing compared to less /var/log/dmesg is that I can use the -T flag for human readable time output. Now I would like to look at /var/log/dmesg.0 , to see how my computer crashed. The file contains the logs from previous session. But I want to use the -T flag from the dmesg command. Or something equivalent. Any idea how? I would not mind a graphical tool, but the best would be a cli solution. | Although a bit late for the OP... I use Fedora, but if your system uses journalctl then you can easily get the kernel messages (dmesg log) from prior shutdown/crash (in a dmesg -T format) through the following. Options: -k (dmesg) -b < boot_number > (How many reboots ago 0, -1, -2, etc.) -o short-precise (dmesg -T) -p priority Filter by priority output (4 to filter out notice and info). NOTE: there is also an -o short and -o short-iso which gives you the date only, and the date-time in iso format respectively. Commands: All boot cycles : journalctl -o short-precise -k -b all Current boot : journalctl -o short-precise -k Last boot : journalctl -o short-precise -k -b -1 Two boots prior : journalctl -o short-precise -k -b -2 And so on Example Output: Feb 18 21:41:26.917400 localhost.localdomain kernel: usb 2-4: USB disconnect, device number 12
Feb 18 21:41:26.917678 localhost.localdomain kernel: usb 2-4.1: USB disconnect, device number 13
Feb 18 21:41:27.246264 localhost.localdomain kernel: usb 2-4: new high-speed USB device number 22 using xhci_hcd
Feb 18 21:41:27.419395 localhost.localdomain kernel: usb 2-4: New USB device found, idVendor=05e3, idProduct=0610
Feb 18 21:41:27.419581 localhost.localdomain kernel: usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Feb 18 21:41:27.419739 localhost.localdomain kernel: usb 2-4: Product: USB2.0 Hub
Feb 18 21:41:27.419903 localhost.localdomain kernel: usb 2-4: Manufacturer: GenesysLogic The amount of boots you can look back on can be viewed with the following. journalctl --list-boot The output of journalctl --list-boot looks like the following. -6 cc4333602fbd4bbabb0df2df9dd1f0d4 Sun 2016-11-13 08:32:58 JST—Thu 2016-11-17 07:53:59 JST
-5 85dc0d63e6a14b1b9a72424439f2bab4 Fri 2016-11-18 22:46:28 JST—Sat 2016-12-24 02:38:18 JST
-4 8abb8267e06b4c26a2466562f3422394 Sat 2016-12-24 08:10:28 JST—Sun 2017-02-12 12:31:20 JST
-3 a040f5e79a754b2a9055ac2598d430e8 Sun 2017-02-12 12:31:36 JST—Sat 2017-02-18 21:31:04 JST
-2 6c29e3b6f6a14f549f06749f9710e1f2 Sat 2017-02-18 21:31:15 JST—Sat 2017-02-18 22:36:08 JST
-1 42fd465eacd345f7b595069c7a5a14d0 Sat 2017-02-18 22:51:22 JST—Sat 2017-02-18 23:08:30 JST
0 26ea10b064ce4559808509dc7f162f07 Sat 2017-02-18 23:09:25 JST—Sun 2017-02-19 00:57:35 JST | {
"source": [
"https://unix.stackexchange.com/questions/181067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29368/"
]
} |
181,069 | I have many files that are ordered by file name in a directory. I wish to copy the final N (say, N=4) files to my home directory. How should I do it? cp ./<the final 4 files> ~/ | This can be easily done with bash/ksh93/zsh arrays: a=(*)
cp -- "${a[@]: -4}" ~/ This works for all non-hidden file names even if they contain spaces, tabs, newlines, or other difficult characters (assuming there are at least 4 non-hidden files in the current directory with bash ). How it works a=(*) This creates an array a with all the file names. The file names returned by bash are alphabetically sorted. (I assume that this is what you mean by "ordered by file name." ) ${a[@]: -4} This returns the last four elements of array a (provided the array contains at least 4 elements with bash ). cp -- "${a[@]: -4}" ~/ This copies the last four file names to your home directory. To copy and rename at the same time This will copy the last four files only to the home directory and, at the same time, prepend the string a_ to the name of the copied file: a=(*)
for fname in "${a[@]: -4}"; do cp -- "$fname" ~/a_"$fname"; done Copy from a different directory and also rename If we use a=(./some_dir/*) instead of a=(*) , then we have the issue of the directory being attached to the filename. One solution is: a=(./some_dir/*)
for f in "${a[@]: -4}"; do cp "$f" ~/a_"${f##*/}"; done Another solution is to use a subshell and cd to the directory in the subshell: (cd ./some_dir && a=(*) && for f in "${a[@]: -4}"; do cp -- "$f" ~/a_"$f"; done) When the subshell completes, the shell returns us to the original directory. Making sure that the ordering is consistent The question asks for files "ordered by file name". That order, Olivier Dulac points out in the comments, will vary from one locale to another. If it is important to have fixed results independent of machine settings, then it is best to specify the locale explicitly when the array a is defined. For example: LC_ALL=C a=(*) You can find out which locale you are currently in by running the locale command. | {
"source": [
"https://unix.stackexchange.com/questions/181069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/71888/"
]
} |
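A POSIX-sh variant of the same idea, without arrays (it assumes the directory holds at least four non-hidden files):
set -- *
shift $(( $# - 4 ))
cp -- "$@" ~/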
181,141 | I want to rename files to change their extension, effectively looking to accomplish mv *.txt *.tsv But when doing this I get: *.tsv is not a directory I find it somewhat strange that the first 10 Google hits show mv should work like this. | When you issue the command: mv *.txt *.tsv the shell, let's assume bash, expands the wildcards if there are any matching files (including directories). The list of files is passed to the program, here mv . If no matches are found, the unexpanded version is passed. Again: the shell expands the patterns, not the program. Loads of examples is perhaps the best way, so here we go: Example 1: $ ls
file1.txt file2.txt
$ mv *.txt *.tsv Now what happens on the mv line is that the shell expands *.txt to the matching files. As there are no *.tsv files that is not changed. The mv command is called with two special arguments : argc : Number of arguments, including the program. argv : An array of arguments, including the program as first entry. In the above example that would be: argc = 4
argv[0] = mv
argv[1] = file1.txt
argv[2] = file2.txt
argv[3] = *.tsv The mv program checks to see if the last argument, *.tsv , is a directory. As it is not, the program cannot continue as it is not designed to concatenate files. (That would typically mean moving all the files into one.) Nor does it create directories on a whim. As a result it aborts and reports the error: mv: target ‘*.tsv’ is not a directory Example 2: Now if you instead say: $ mv *1.txt *.tsv The mv command is executed with: argc = 3
argv[0] = mv
argv[1] = file1.txt
argv[2] = *.tsv Now, again, mv checks to see if *.tsv exists. As it does not, the file file1.txt is moved to *.tsv . That is: the file is renamed to *.tsv with the asterisk and all. $ mv *1.txt *.tsv
‘file1.txt’ -> ‘*.tsv’
$ ls
file2.txt *.tsv Example 3: If you instead said: $ mkdir *.tsv
$ mv *.txt *.tsv The mv command is executed with: argc = 4
argv[0] = mv
argv[1] = file1.txt
argv[2] = file2.txt
argv[3] = *.tsv As *.tsv now is a directory, the files end up being moved there. Now: when using commands like some_command *.tsv where the intention is to actually keep the wildcard, one should always quote it. By quoting you prevent the wildcards from being expanded if there should be any matches. E.g. say mkdir "*.tsv" . Example 4: The expansion can further be viewed if you do for example: $ ls
file1.txt file2.txt
$ mkdir *.txt
mkdir: cannot create directory ‘file1.txt’: File exists
mkdir: cannot create directory ‘file2.txt’: File exists Example 5: Now: the mv command can and does work on multiple files. But if there are more than two arguments, the last has to be a target directory. (Optionally you can use the -t TARGET_DIR option, at least for GNU mv.) So this is OK: $ ls -F
b1.tsv b2.tsv f1.txt f2.txt f3.txt foo/
$ mv *.txt *.tsv foo Here mv would be called with: argc = 7
argv[0] = mv
argv[1] = b1.tsv
argv[2] = b2.tsv
argv[3] = f1.txt
argv[4] = f2.txt
argv[5] = f3.txt
argv[6] = foo and all the files end up in the directory foo . As for your links: you have provided one (in a comment), where mv is not mentioned at all, but rename is. If you have more links, you could share them, as well as the man pages where you claim this is expressed. | {
"source": [
"https://unix.stackexchange.com/questions/181141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39219/"
]
} |
181,152 | On my Fedora 21 64bit Gnome 3.14.3 install, I have noticed that the NetworkManager always connects to the wired connection, even if the cable is not connected: However, the driver (or something) does know whether or not it is connected - I can see this with watch "dmesg | tail -10" : [ 7349.552202] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down
[ 7373.496359] IPv6: ADDRCONF(NETDEV_UP): enp7s0: link is not ready
[ 7376.271449] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex>
[ 7376.271482] IPv6: ADDRCONF(NETDEV_CHANGE): enp7s0: link becomes ready
[ 7553.088393] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down
[ 7597.096174] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex>
[ 7620.983378] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Down
[ 7622.556874] atl1c 0000:07:00.0: atl1c: enp7s0 NIC Link is Up<100 Mbps Full Duplex> This causes issues when the cable has unplugged, but it still thinks it is connected to the internet, and trys to sync or connect and fails. lspci -v 07:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1)
Subsystem: Lenovo Device 3979
Flags: bus master, fast devsel, latency 0, IRQ 29
Memory at e0500000 (64-bit, non-prefetchable) [size=256K]
I/O ports at 2000 [size=128]
Capabilities: <access denied>
Kernel driver in use: atl1c
Kernel modules: atl1c ifconfig : enp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::de0e:a1ff:fed1:d12b prefixlen 64 scopeid 0x20<link>
ether dc:0e:a1:d1:d1:2b txqueuelen 1000 (Ethernet)
RX packets 111875 bytes 67103677 (63.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 87152 bytes 7793021 (7.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 13 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 3669 bytes 880913 (860.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3669 bytes 880913 (860.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 sudo yum list installed NetworkManager* Installed Packages
NetworkManager.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-adsl.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-bluetooth.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-config-connectivity-fedora.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-config-server.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-devel.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-glib.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-glib-devel.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-iodine.x86_64 0.0.4-4.fc21 @fedora
NetworkManager-iodine-gnome.x86_64 0.0.4-4.fc21 @fedora
NetworkManager-l2tp.x86_64 0.9.8.7-3.fc21 @fedora
NetworkManager-openconnect.x86_64 0.9.8.6-2.fc21 @updates
NetworkManager-openswan.x86_64 0.9.8.4-4.fc21 @fedora
NetworkManager-openswan-gnome.x86_64 0.9.8.4-4.fc21 @fedora
NetworkManager-openvpn.x86_64 1:0.9.9.0-3.git20140128.fc21 @koji-override-0/$releasever
NetworkManager-openvpn-gnome.x86_64 1:0.9.9.0-3.git20140128.fc21 @koji-override-0/$releasever
NetworkManager-pptp.x86_64 1:0.9.8.2-6.fc21 @koji-override-0/$releasever
NetworkManager-pptp-gnome.x86_64 1:0.9.8.2-6.fc21 @koji-override-0/$releasever
NetworkManager-ssh.x86_64 0.9.3-0.3.20140601git9d834f2.fc21 @fedora
NetworkManager-ssh-gnome.x86_64 0.9.3-0.3.20140601git9d834f2.fc21 @fedora
NetworkManager-tui.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-vpnc.x86_64 1:0.9.9.0-6.git20140428.fc21 @koji-override-0/$releasever
NetworkManager-vpnc-gnome.x86_64 1:0.9.9.0-6.git20140428.fc21 @koji-override-0/$releasever
NetworkManager-wifi.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates
NetworkManager-wwan.x86_64 1:0.9.10.1-1.4.20150115git.fc21 @updates modinfo atl1c filename: /lib/modules/3.17.8-300.fc21.x86_64/kernel/drivers/net/ethernet/atheros/atl1c/atl1c.ko.xz
version: 1.0.1.1-NAPI
license: GPL
description: Qualcom Atheros 100/1000M Ethernet Network Driver
author: Qualcomm Atheros Inc., <[email protected]>
author: Jie Yang
srcversion: 4333D8ADEE755DD5ABDF0B8
alias: pci:v00001969d00001083sv*sd*bc*sc*i*
alias: pci:v00001969d00001073sv*sd*bc*sc*i*
alias: pci:v00001969d00002062sv*sd*bc*sc*i*
alias: pci:v00001969d00002060sv*sd*bc*sc*i*
alias: pci:v00001969d00001062sv*sd*bc*sc*i*
alias: pci:v00001969d00001063sv*sd*bc*sc*i*
depends:
intree: Y
vermagic: 3.17.8-300.fc21.x86_64 SMP mod_unload
signer: Fedora kernel signing key
sig_key: F4:8F:FC:A3:C9:62:D6:47:0F:1A:63:E0:32:D1:F5:F1:93:2A:03:6A
sig_hashalgo: sha256 If this is a bug, is it NetworkManager or something else? I had no issues with this under Fedora 19 with the same driver (which seems to be the same version in the latest kernel in its backup). | When you issue the command: mv *.txt *.tsv the shell, let's assume bash, expands the wildcards if there are any matching files (including directories). The list of files is passed to the program, here mv . If no matches are found the unexpanded version is passed. Again: the shell expands the patterns, not the program. Loads of examples are perhaps the best way to show this, so here we go: Example 1: $ ls
file1.txt file2.txt
$ mv *.txt *.tsv Now what happens on the mv line is that the shell expands *.txt to the matching files. As there are no *.tsv files that is not changed. The mv command is called with two special arguments : argc : Number of arguments, including the program. argv : An array of arguments, including the program as first entry. In the above example that would be: argc = 4
argv[0] = mv
argv[1] = file1.txt
argv[2] = file2.txt
argv[3] = *.tsv The mv program checks to see if the last argument, *.tsv , is a directory. As it is not, the program cannot continue as it is not designed to concatenate files. (That would typically mean moving all the files into one.) Nor does it create directories on a whim. As a result it aborts and reports the error: mv: target ‘*.tsv’ is not a directory Example 2: Now if you instead say: $ mv *1.txt *.tsv The mv command is executed with: argc = 3
argv[0] = mv
argv[1] = file1.txt
argv[2] = *.tsv Now, again, mv checks to see if *.tsv exists. As it does not, the file file1.txt is moved to *.tsv . That is: the file is renamed to *.tsv with the asterisk and all. $ mv *1.txt *.tsv
‘file1.txt’ -> ‘*.tsv’
$ ls
file2.txt *.tsv Example 3: If you instead said: $ mkdir *.tsv
$ mv *.txt *.tsv The mv command is executed with: argc = 4
argv[0] = mv
argv[1] = file1.txt
argv[2] = file2.txt
argv[3] = *.tsv As *.tsv now is a directory, the files end up being moved there. Now: when using commands like some_command *.tsv where the intention is to actually keep the wildcard, one should always quote it. By quoting you prevent the wildcards from being expanded if there should be any matches. E.g. say mkdir "*.tsv" . Example 4: The expansion can further be viewed if you do for example: $ ls
file1.txt file2.txt
$ mkdir *.txt
mkdir: cannot create directory ‘file1.txt’: File exists
mkdir: cannot create directory ‘file2.txt’: File exists Example 5: Now: the mv command can and does work on multiple files. But if there are more than two arguments, the last has to be a target directory. (Optionally you can use the -t TARGET_DIR option, at least for GNU mv.) So this is OK: $ ls -F
b1.tsv b2.tsv f1.txt f2.txt f3.txt foo/
$ mv *.txt *.tsv foo Here mv would be called with: argc = 7
argv[0] = mv
argv[1] = b1.tsv
argv[2] = b2.tsv
argv[3] = f1.txt
argv[4] = f2.txt
argv[5] = f3.txt
argv[6] = foo and all the files end up in the directory foo . As for your links: you have provided one (in a comment), where mv is not mentioned at all, but rename is. If you have more links, you could share them, as well as the man pages where you claim this is expressed. | {
"source": [
"https://unix.stackexchange.com/questions/181152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52956/"
]
} |
181,254 | I am trying to use grep and cut to extract URLs from an HTML file. The links look like: <a href="http://examplewebsite.com/"> Other websites have .net , .gov , but I assume I could make the cut off point right before > . So I know I can use grep and cut somehow to cut off everything before http and after .com, but I have been stuck on it for a while. | Not sure if you are limited on tools: But regex might not be the best way to go as mentioned, but here is an example that I put together: cat urls.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*" | sort -u grep -E : is the same as egrep grep -o : only outputs what has been grepped (http|https) : is an either / or a-z : is all lower case A-Z : is all upper case . : is dot / : is the slash ? : is ? = : is equal sign _ : is underscore % : is percentage sign : : is colon - : is dash * : is repeat the [...] group sort -u : will sort & remove any duplicates Output: bob@bob-NE722:~s$ wget -qO- https://stackoverflow.com/ | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u
https://stackauth.com
https://meta.stackoverflow.com
https://cdn.sstatic.net/Img/svg-icons
https://stackoverflow.com
https://www.stackoverflowbusiness.com/talent
https://www.stackoverflowbusiness.com/advertising
https://stackoverflow.com/users/login?ssrc=head
https://stackoverflow.com/users/signup?ssrc=head
https://stackoverflow.com
https://stackoverflow.com/help
https://chat.stackoverflow.com
https://meta.stackoverflow.com
... You can also add in \d to catch other numeral types. | {
"source": [
"https://unix.stackexchange.com/questions/181254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99897/"
]
} |
181,280 | I'm using git. I did a normal merge, but it keeps asking this: # Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch. And even if I write something, I can't exit from here. I can't find docs explaining this. What should I do? | This depends on the editor you're using. If it is vim, you can use ESC and :wq or ESC and Shift + zz . Both commands save the file and exit. You can also check ~/.gitconfig for the editor, in my case ( cat ~/.gitconfig ): [user]
name = somename
email = [email protected]
[core]
editor = vim
excludesfile = /home/mypath/.gitignore_global
[color]
ui = auto
# other settings here | {
"source": [
"https://unix.stackexchange.com/questions/181280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59856/"
]
} |
181,311 | I would like to see how many users are on my system. How could I view a list of all the users on the system? | You can get a list of all users with getent passwd | cut -d':' -f1 This selects the first column (user name) of the system user database. In contrast to solutions parsing /etc/passwd , this will work regardless of the type of database used (traditional /etc/passwd , LDAP, etc). Note that this list includes system users as well (e.g. nobody, mail, etc.). The exact user number can be determined as follows: getent passwd | wc -l A list of currently logged in users can be obtained with the users or who command: users # or
who | {
"source": [
"https://unix.stackexchange.com/questions/181311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93519/"
]
} |
181,423 | I have a bash script that produces a cat output when it takes an argument. I also have another bash script that executes the first bash script with an argument that I want to produce cat outputs with. How do I store those cat outputs produced by the first bash script in variables? | var=$( cat foo.txt ) would store the output of the cat in variable var . var=$( ./myscript ) would store the output of myscript in the same variable. | {
"source": [
"https://unix.stackexchange.com/questions/181423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99897/"
]
} |
181,492 | I'd like to create a new user and give him sudo access. To be specific, I want him to use sudo vim and edit httpd.conf. I wrote this in sudoers: user ALL=(ALL) /usr/bin/vim /etc/httpd/confs/httpd.conf I, however, heard that this could be risky. Why is this problematic? How serious is the problem? | Although you restrict the commandline arguments there is nothing that prevents the user from using vim to open, edit and overwrite any random file once it is running as root. The user can run sudo vim /etc/httpd/conf/httpd.conf and then clear all that text from the edit buffer then for convenience source an existing file (although that is not even required): for instance the sudo configuration :r /etc/sudoers NOTE: Unless restricted by SELinux the user can read any file this way! grant himself more sudo privileges user ALL=(ALL) NOPASSWD: ALL overwrite the old configuration :w /etc/sudoers I can imagine dozens of similar ways your user can now access, modify or destroy your system. You won't even have an audit trail which files were changed in this fashion as you will only see him editing your Apache config in the sudo log messages. This is a security risk in granting sudo privileges to any editor. This is more or less the same reason why granting sudo root level rights to commands such as tar and unzip is often insecure, nothing prevents you from including replacements for system binaries or system configuration files in the archive. A second risk, as many other commentators have pointed out, is that vim allows for shell escapes , where you can start a sub-shell from within vim that allows you execute any arbitrary command . From within your sudo vim session those will run as root, for instance the shell escape: :!/bin/bash will give you an interactive root shell :!/bin/rm -rf / will make for good stories in the pub. What to do instead? You can still use sudo to allow users to edit files they don't own in a secure way. In your sudoers configuration you can set a special reserved command sudoedit followed by the full (wildcard) pathname to the file(s) a user may edit: user ALL=(ALL) sudoedit /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/*.conf The user may then use the -e switch in their sudo command line or use the sudoedit command: sudo -e /etc/httpd/conf/httpd.conf
sudoedit /etc/httpd/conf/httpd.conf As explained in the man page : The -e (edit) option indicates that, instead of running a command, the user wishes to edit one or more files. In lieu of a command, the string "sudoedit" is used when consulting the security policy. If the user is authorized by the policy, the following steps are taken: Temporary copies are made of the files to be edited with the owner set to the invoking user. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers (5) option is used. If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed. If the specified file does not exist, it will be created. Note that unlike most commands run by sudo, the editor is run with the invoking user's environment unmodified. If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the edited copy will remain in a temporary file. The sudoers manual also has a whole section on how it can offer limited protection against shell escapes with the RESTRICT and NOEXEC options. restrict Avoid giving users access to commands that allow the user to run arbitrary commands. Many editors have a restricted mode where shell escapes are disabled, though sudoedit is a better solution to running editors via sudo. Due to the large number of programs that offer shell escapes, restricting users to the set of programs that do not is often unworkable. and noexec Many systems that support shared libraries have the ability to override default library functions by pointing an environment variable (usually LD_PRELOAD) to an alternate shared library. On such systems, sudo's noexec functionality can be used to prevent a program run by sudo from executing any other programs. Note, ... ... To enable noexec for a command, use the NOEXEC tag as documented in the User Specification section above. Here is that example again: aaron shanty = NOEXEC: /usr/bin/more, /usr/bin/vi This allows user aaron to run /usr/bin/more and /usr/bin/vi with noexec enabled. This will prevent those two commands from executing other commands (such as a shell). | {
"source": [
"https://unix.stackexchange.com/questions/181492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100746/"
]
} |
181,496 | My laptop (a Toshiba Satellite) runs far too bright, even in the ambient light from outside in the day, and I need to be able to dim it below its minimum setting. ~#cat /sys/class/backlight/acpi_video0/brightness
~#0 Setting it below 0 will not work, and apps like flux, even with some hackery to force it to night mode via script by rolling the timezone, fail to do much and of course leave colours yellowed. Is there some sort of method to set it below its minimum somehow? (uses some integrated nvidia card by the way) Is there a program I'm missing that will artificially dim it by overlaying transparent black? | With xrandr you can affect the gamma and brightness of a display by altering RGB values. From man xrandr : --brightness Multiply the gamma values on the crtc currently attached to the output to specified floating value. Useful for overly bright or overly dim outputs. However, this is a software-only modification; if your hardware has support to actually change the brightness, you will probably prefer to use xbacklight . I can use it like: xrandr --output DVI-1 --brightness .7 There is also the xgamma package, which does much of the same, but... man xgamma : Note that the xgamma utility is obsolete and deficient, xrandr should be used with drivers that support the XRandr extension. I can use it like: xgamma -gamma .7 | {
"source": [
"https://unix.stackexchange.com/questions/181496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100902/"
]
} |
181,503 | I have recently installed CentOS 7 (Minimal Install without GUI) and now I want to install a GUI environment in it. How can I install Desktop Environments on previously installed CentOS7 without reinstalling it? | 1. Installing GNOME-Desktop: Install GNOME Desktop Environment on here. # yum -y groups install "GNOME Desktop" Input a command like below after finishing installation: # startx GNOME Desktop Environment will start. For first booting, initial setup runs and you have to configure it for first time. Select System language first. Select your keyboard type. Add online accounts if you'd like to. Finally click "Start using CentOS Linux". GNOME Desktop Environments starts like follows. How to use GNOME Shell? The default GNOME Desktop of CentOS 7 starts with classic mode but if you'd like to use GNOME Shell, set like follows: Option A: If you start GNOME with startx , set like follows. # echo "exec gnome-session" >> ~/.xinitrc
# startx Option B: set the system graphical login systemctl set-default graphical.target ( more info ) and reboot the system. After system starts Click the button which is located next to the "Sign In" button. Select "GNOME" on the list. (The default is GNOME Classic) Click "Sign In" and log in with GNOME Shell. GNOME shell starts like follows: 2. Installing KDE-Desktop: Install KDE Desktop Environment on here. # yum -y groups install "KDE Plasma Workspaces" Input a command like below after finishing installation: # echo "exec startkde" >> ~/.xinitrc
# startx KDE Desktop Environment starts like follows: 3. Installing Cinnamon Desktop Environment: Install Cinnamon Desktop Environment on here. First Add the EPEL Repository (EPEL Repository which is provided from Fedora project.) Extra Packages for Enterprise Linux (EPEL) How to add EPEL Repository? # yum -y install epel-release
# sed -i -e "s/\]$/\]\npriority=5/g" /etc/yum.repos.d/epel.repo # set [priority=5]
# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/epel.repo # for another way, change to [enabled=0] and use it only when needed
# yum --enablerepo=epel install [Package] # if [enabled=0], input a command to use the repository And now install the Cinnamon Desktop Environment from EPEL Repository: # yum --enablerepo=epel -y install cinnamon* Input a command like below after finishing installation: # echo "exec /usr/bin/cinnamon-session" >> ~/.xinitrc
# startx Cinnamon Desktop Environment will start. For first booting, initial setup runs and you have to configure it for first time. Select System language first. Select your keyboard type. Add online accounts if you'd like to. Finally click "Start using CentOS Linux". Cinnamon Desktop Environment starts like follows. 4. Installing MATE Desktop Environment: Install MATE Desktop Environment on here (You will need to add the EPEL Repository as explained above in advance). # yum --enablerepo=epel -y groups install "MATE Desktop" Input a command like below after finishing installation: # echo "exec /usr/bin/mate-session" >> ~/.xinitrc
# startx MATE Desktop Environment starts. 5. Installing Xfce Desktop Environment: Install Xfce Desktop Environment on here (You will need to add the EPEL Repository as like above in "Cinnamon" installation before). # yum -y groupinstall X11
# yum --enablerepo=epel -y groups install "Xfce" Input a command like below after finishing installation: # echo "exec /usr/bin/xfce4-session" >> ~/.xinitrc
# startx Xfce Desktop Environment starts. | {
"source": [
"https://unix.stackexchange.com/questions/181503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
181,621 | OpenSUSE (among other distributions) uses snapper to take snapshots of btrfs partitions. Some people think the default snapshot intervals take up too much space too quickly, but whether or not you believe that, there are times when you want to clear space on your filesystem and often find that the btrfs snapshots are taking a significant amount of space. Or, in other cases you may want to clear the filesystem of all excess data before moving it to/from a VM or changing the storage medium or something along those lines. But, I can't seem to find a command to quickly wipe all of the snapshots snapper has taken, either via snapper or another tool. How would I do this? | The command in recent versions of snapper is (I don't remember when it was introduced, but the version in e.g., openSUSE 13.2 supports this): snapper delete number1-number2 So to delete all snapshots (assuming you have no more than 100000 of them) you'd do: snapper delete 1-100000 Obviously this only deletes snapshots on the default root config, so for a different config it would be: snapper -c configname delete number1-number2 | {
"source": [
"https://unix.stackexchange.com/questions/181621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13308/"
]
} |
181,676 | I know how to use nmap to find the list of hosts that are currently online. What I would like to do is get a list of just their IP addresses, now it displays extra information such as Nmap scan report for 192.168.x.x' and 'Host is up (0.12s latency). What I would like is to be able to run an nmap command, get a text document of the IP addresses that are currently online. Is this at all possible? | This is a common one: nmap -n -sn 192.0.2.0/24 -oG - | awk '/Up$/{print $2}' Quick rundown of options and commands: -n turns off reverse name resolution, since you just want IP addresses. On a local LAN this is probably the slowest step, too, so you get a good speed boost. -sn means "Don't do a port scan." It's the same as the older, deprecated -sP with the mnemonic "ping scan." -oG - sends "grepable" output to stdout, which gets piped to awk . /Up$/ selects only lines which end with "Up", representing hosts that are online. {print $2} prints the second whitespace-separated field, which is the IP address. | {
"source": [
"https://unix.stackexchange.com/questions/181676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
181,937 | While running a script, I want to create a temporary file in /tmp directory. After execution of that script, that will be cleaned by that script. How to do that in shell script? | tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
: ...
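# the ": ..." above is just a placeholder for whatever commands in the script actually use "$tmpfile"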
rm "$tmpfile" You can make sure that a file is deleted when the scripts exits (including kills and crashes) by opening a file descriptor to the file and deleting it. The file keeps available (for the script; not really for other processes but /proc/$PID/fd/$FD is a work-around) as long as the file descriptor is open. When it gets closed (which the kernel does automatically when the process exits) the filesystem deletes the file. # create temporary file
tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
# create file descriptor 3 for writing to a temporary file so that
# echo ... >&3 writes to that file
exec 3>"$tmpfile"
# create file descriptor 4 for reading from the same file so that
# the file seek positions for reading and writing can be different
exec 4<"$tmpfile"
# delete temp file; the directory entry is deleted at once; the reference counter
# of the inode is decremented only after the file descriptor has been closed.
# The file content blocks are deallocated (this is the real deletion) when the
# reference counter drops to zero.
rm "$tmpfile"
# your script continues
: ...
# example of writing to file descriptor
echo foo >&3
# your script continues
: ...
# reading from that file descriptor
head -n 1 <&4
# close the file descriptor (done automatically when script exits)
# see section 2.7.6 of the POSIX definition of the Shell Command Language
exec 3>&- | {
"source": [
"https://unix.stackexchange.com/questions/181937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95074/"
]
} |
182,032 | I have a directory called folder that looks like this: folder
-> root_folder
-> some files I want to zip this directory into zipped_dir , I tried: zip -r zipped_dir.zip folder/* But this generates a ZIP that looks like this: zipped_dir
-> folder
-> root_folder
-> some files in other words, it's including the directory whose contents I want to zip. How can I exclude this parent directory from the ZIP, without moving anything? IE I would like this end result: zipped_dir
-> root_folder
-> some files | Try to use this command (you will get the idea) cd folder; zip -r ../zipped_dir.zip * Maybe there is another way, but this is the fastest and simplest for me :) | {
"source": [
"https://unix.stackexchange.com/questions/182032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89986/"
]
} |
182,077 | Hi I have many files that have been deleted but for some reason the disk space associated with the deleted files is unable to be utilized until I explicitly kill the process for the file taking the disk space $ lsof /tmp/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted) The disk space taken up by the deleted file above causes problems such as when trying to use the tab key to autocomplete a file path I get the error bash: cannot create temp file for here-document: No space left on device But after I run kill -9 1623 the space for that PID is freed and I no longer get the error. My questions are: why is this space not immediately freed when the file is first deleted? what is the best way to get back the file space associated with the deleted files? and please let me know any incorrect terminology I have used or any other relevant and pertinent info regarding this situation. | On unices, filenames are just pointers (inodes) that point to the memory where the file resides (which can be a hard drive or even a RAM-backed filesystem). Each file records the number of links to it: the links can be either the filename (plural, if there are multiple hard links to the same file), and also every time a file is opened, the process actually holds the "link" to the same space. The space is physically freed only if there are no links left (therefore, it's impossible to get to it). That's the only sensible choice: while the file is being used, it's not important if someone else can no longer access it: you are using it and until you close it, you still have control over it - you won't even notice the filename is gone or moved or whatever. That's even used for tempfiles: some implementations create a file and immediately unlink it, so it's not visible in the filesystem, but the process that created it is using it normally. Flash plugin is especially fond of this method: all the downloaded video files are held open, but the filesystem doesn't show them. So, the answer is, while the processes have the files still opened, you shouldn't expect to get the space back. It's not freed, it's being actively used. This is also one of the reasons that applications should really close the files when they finish using them. In normal usage, you shouldn't think of that space as free, and this also shouldn't be very common at all - with the exception of temporary files that are unlinked on purpose, there shouldn't really be any files that you would consider being unused, but still open. Try to review if there is a process that does this a lot and consider how you use it, or just find more space. | {
"source": [
"https://unix.stackexchange.com/questions/182077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46709/"
]
} |
182,180 | I created a new user (testuser) using the useradd command on an Ubuntu server Virtual machine. I would like to create a home directory for the user and also give them root provileges. However, when I login as the new user, it complains that there is no home directory. What am I doing wrong? | Finally I found how to do this myself: useradd -m -d /home/testuser/ -s /bin/bash -G sudo testuser -m creates the home directory if it does not exist. -d overrides the default home directory location. -s sets the login shell for the user. -G expects a comma-separated list of groups that the user should belong to. See man useradd for details. | {
"source": [
"https://unix.stackexchange.com/questions/182180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61108/"
]
} |
182,185 | I ran into an issue trying to dynamically set some variables based on output of a program, in a fish function. I narrowed my issues down to a MWE: function example
eval (echo 'set -x FOO 1;')
end calling: >example
>echo $FOO results in no output -- ie the FOO environment variable has not been set.
How should I have done this? | Finally I found how to do this myself: useradd -m -d /home/testuser/ -s /bin/bash -G sudo testuser -m creates the home directory if it does not exist. -d overrides the default home directory location. -s sets the login shell for the user. -G expects a comma-separated list of groups that the user should belong to. See man useradd for details. | {
"source": [
"https://unix.stackexchange.com/questions/182185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18637/"
]
} |
182,190 | I'm trying to install php5-auth-pam on Ubuntu 14.04, but package doesn't exist after Ubuntu 12.04. I've tried by downloading package here: http://packages.ubuntu.com/precise/php5-auth-pam I've done things like: dpkg -I php*.deb
apt-get install -f I can't match dependencies, and I don't know how to install them too. | Finally I found how to do this myself: useradd -m -d /home/testuser/ -s /bin/bash -G sudo testuser -m creates the home directory if it does not exist. -d overrides the default home directory location. -s sets the login shell for the user. -G expects a comma-separated list of groups that the user should belong to. See man useradd for details. | {
"source": [
"https://unix.stackexchange.com/questions/182190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101380/"
]
} |
182,212 | Hello I want to understand the role of the chmod g+s command in Unix. I also would like to know what it does in this particular context: cd /home/canard;
touch un;
chgrp canard .;
chmod g+s .;
touch deux ; I understand all the commands roles except for chmod g+s and I want to know the differences between the files un and deux resulting from this series of commands. | chmod g+s .; This command sets the "set group ID" (setgid) mode bit on the current directory, written as . . This means that all new files and subdirectories created within the current directory inherit the group ID of the directory, rather than the primary group ID of the user who created the file. This will also be passed on to new subdirectories created in the current directory. g+s affects the files' group ID but does not affect the owner ID. Note that this applies only to newly-created files. Files that are moved ( mv ) into the directory are unaffected by the setgid setting. Files that are copied with cp -p are also unaffected. Example touch un;
chgrp canard .;
chmod g+s .;
touch deux ; In this case, deux will belong to group canard but un will belong to the group of the user creating it, whatever that is. Minor Note on the Use of Semicolons in Shell Commands Unlike C or Perl, a shell command only needs to be followed by a semicolon if there is another shell command following it on the same command line. Thus, consider the following command line: chgrp canard .; chmod g+s .; The final semicolon is superfluous and can be removed: chgrp canard .; chmod g+s . Further, if we were to place the two commands on separate lines, then the remaining semicolon is unneeded: chgrp canard .
chmod g+s . Documentation For more information, see man chmod . Also, wikipedia has tables summarizing the chmod command options. | {
"source": [
"https://unix.stackexchange.com/questions/182212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101397/"
]
} |
182,232 | I don't understand how the data flows in the pipeline and hope someone could clarify what is going on there. I thought a pipeline of commands processes files (text, arrays of strings) in line by line manner. (If each command itself works line by line.) Each line of text passes through the pipeline, commands don't wait for the previous to finish processing whole input. But it seems it is not so. Here is a test example. There are some lines of text. I uppercase them and repeat each line twice. I do so with cat text | tr '[:lower:]' '[:upper:]' | sed 'p' . To follow the process we can run it "interactively" -- skip the input filename in cat . Each part of the pipeline runs line by line: $ cat | tr '[:lower:]' '[:upper:]'
alkjsd
ALKJSD
sdkj
SDKJ
$ cat | sed 'p'
line1
line1
line1
line 2
line 2
line 2 But the complete pipeline does wait for me to finish the input with EOF and only then prints the result: $ cat | tr '[:lower:]' '[:upper:]' | sed 'p'
I am writing...
keep writing...
now ctrl-D
I AM WRITING...
I AM WRITING...
KEEP WRITING...
KEEP WRITING...
NOW CTRL-D
NOW CTRL-D Is it supposed to be so? Why isn't it line-by-line? | There's a general buffering rule followed by the C standard I/O library ( stdio ) that most unix programs use. If output is going to a terminal, it is flushed at the end of each line; otherwise it is flushed only when the buffer (8K on my Linux/amd64 system; could be different on yours) is full. If all your utilities were following the general rule, you would see output delayed in all of your examples ( cat|sed , cat|tr , and cat|tr|sed ). But there's an exception: GNU cat never buffers its output. It either doesn't use stdio or it changes the default stdio buffering policy. I can be fairly sure you're using GNU cat and not some other unix cat because the others wouldn't behave this way. Traditional unix cat has a -u option to request unbuffered output. GNU cat ignores the -u option because its output is always unbuffered. So whenever you have a pipe with a cat on the left, in the GNU system, the passage of data through the pipe will not be delayed. The cat isn't even going line by line - your terminal is doing that. While you're typing input for cat, your terminal is in "canonical" mode - line-based, with editing keys like backspace and ctrl-U offering you the chance to edit the line you have typed before sending it with Enter . In the cat|tr|sed example, tr is still receiving data from cat as soon as you press Enter , but tr is following the stdio default policy: its output is going to a pipe, so it doesn't flush after each line. It writes to the second pipe when the buffer is full, or when an EOF is received, whichever comes first. sed is also following the stdio default policy, but its output is going to a terminal so it will write each line as soon as it has finished with it. This has an effect on how much you must type before something shows up on the other end of the pipeline - if sed was block-buffering its output, you'd have to type twice as much (to fill tr 's output buffer and sed 's output buffer). GNU sed has -u option so if you reversed the order and used cat|sed -u|tr you would see the output appear instantly again. (The sed -u option might be available elsewhere but I don't think it's an ancient unix tradition like cat -u ) As far as I can tell there's no equivalent option for tr . There is a utility called stdbuf which lets you alter the buffering mode of any command that uses the stdio defaults. It's a bit fragile since it uses LD_PRELOAD to accomplish something the C library wasn't designed to support, but in this case it seems to work: cat | stdbuf -o 0 tr '[:lower:]' '[:upper:]' | sed 'p' | {
"source": [
"https://unix.stackexchange.com/questions/182232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38371/"
]
} |
182,234 | I'm running CENTOS 6.6 on a VPS and am trying to install the ZMQ PHP extension and tried installing using the command shown in the instructions: sudo pecl install zmq-beta However, it fails, showing this as the output: root@host [/zmq]# sudo pecl install zmq-beta
downloading zmq-1.1.2.tgz ...
Starting to download zmq-1.1.2.tgz (39,573 bytes)
..........done: 39,573 bytes
could not extract the package.xml file from "/root/tmp/pear/cache/zmq-1.1.2.tgz"
Download of "pecl/zmq" succeeded, but it is not a valid package archive
Error: cannot download "pecl/zmq"
Download failed
install failed I also tried: sudo pecl install -Z zmq-beta And: sudo pecl install --nocompress zmq-beta But I get the same error. Why is this error occuring? | There's a general buffering rule followed by the C standard I/O library ( stdio ) that most unix programs use. If output is going to a terminal, it is flushed at the end of each line; otherwise it is flushed only when the buffer (8K on my Linux/amd64 system; could be different on yours) is full. If all your utilities were following the general rule, you would see output delayed in all of your examples ( cat|sed , cat|tr , and cat|tr|sed ). But there's an exception: GNU cat never buffers its output. It either doesn't use stdio or it changes the default stdio buffering policy. I can be fairly sure you're using GNU cat and not some other unix cat because the others wouldn't behave this way. Traditional unix cat has a -u option to request unbuffered output. GNU cat ignores the -u option because its output is always unbuffered. So whenever you have a pipe with a cat on the left, in the GNU system, the passage of data through the pipe will not be delayed. The cat isn't even going line by line - your terminal is doing that. While you're typing input for cat, your terminal is in "canonical" mode - line-based, with editing keys like backspace and ctrl-U offering you the chance to edit the line you have typed before sending it with Enter . In the cat|tr|sed example, tr is still receiving data from cat as soon as you press Enter , but tr is following the stdio default policy: its output is going to a pipe, so it doesn't flush after each line. It writes to the second pipe when the buffer is full, or when an EOF is received, whichever comes first. sed is also following the stdio default policy, but its output is going to a terminal so it will write each line as soon as it has finished with it. This has an effect on how much you must type before something shows up on the other end of the pipeline - if sed was block-buffering its output, you'd have to type twice as much (to fill tr 's output buffer and sed 's output buffer). GNU sed has -u option so if you reversed the order and used cat|sed -u|tr you would see the output appear instantly again. (The sed -u option might be available elsewhere but I don't think it's an ancient unix tradition like cat -u ) As far as I can tell there's no equivalent option for tr . There is a utility called stdbuf which lets you alter the buffering mode of any command that uses the stdio defaults. It's a bit fragile since it uses LD_PRELOAD to accomplish something the C library wasn't designed to support, but in this case it seems to work: cat | stdbuf -o 0 tr '[:lower:]' '[:upper:]' | sed 'p' | {
"source": [
"https://unix.stackexchange.com/questions/182234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43886/"
]
} |
182,308 | How to install pip in Debian Wheezy? I've found many advises apt-get install python-pip but the result is "Unable to locate package python-pip" Is pip available in Debian Wheezy? I'm using 7.8 | Although apt-get update might seem to help you, I recommend strongly against using pip installed from the Wheeze repository with apt-get install python-pip : that pip is at version 1.1 while the current version is > 9.0 version 1.1 of pip has known security problems when used to download packages version 1.1 doesn't restrict downloads/installs to stable versions of packages lacks a lot of new functionality (like support for the wheel format) and misses bug fixes (see the changelog ) python-pip installed via apt-get pulls in some perl modules for whatever reason Unless you are running python2.4 or so that is still supported by pip 1.1 (and which you should not use anyway) you should follow the installation instructions on the pip documentation page to securely download pip (don't use the insecure pip install --upgrade pip with the 1.1 version, and certainly don't install any packages with sudo pip ... with that version) If you already have made the mistake of installing pip version 1.1, immediately do: sudo apt-get remove python-pip After that: wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py (for any of the python versions you have installed). Python2 versions starting with 2.7.9 and Python3 versions starting with 3.4 have pip included by default. | {
"source": [
"https://unix.stackexchange.com/questions/182308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60701/"
]
} |
182,382 | I want to generate a random password, and am doing it like so: </dev/urandom tr -dc [:print:] | head -c 64 On my laptop, which runs Ubuntu, this produces only printable characters, as intended. But when I ssh into my school's server, which runs Red Hat Enterprise Linux, and run it there, I get outputs like 3!ri�b�GrӴ��1�H�<�oM����&�nMC[�Pb�|L%MP�����9��fL2q���IFmsd|l�K , which won't do at all. What might be going wrong here? | It's your locale and tr problem. Currently, GNU tr fully supports only single-byte characters. So in locales using multibyte encodings, the output can be weird: $ </dev/urandom LC_ALL=vi_VN.tcvn tr -dc '[:print:]' | head -c 64
`�pv���Z����c�ox"�O���%�YR��F�>��췔��ovȪ������^,<H ���> The shell will print multi-byte characters correctly, but GNU tr will remove bytes it think non-printable. If you want it to be stable, you must set the locale: $ </dev/urandom LC_ALL=C tr -dc '[:print:]' | head -c 64
RSmuiFH+537z+iY4ySz`{Pv6mJg::RB;/-2^{QnKkImpGuMSq92D(6N8QF?Y9Co@ | {
"source": [
"https://unix.stackexchange.com/questions/182382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19429/"
]
} |
182,537 | When trying to write the stdout from a Python script to a text file ( python script.py > log ), the text file is created when the command is started, but the actual content isn't written until the Python script finishes. For example: script.py: import time
for i in range(10):
print('bla')
time.sleep(5) prints to stdout every 5 seconds when called with python script.py , but when I call python script.py > log , the size of the log file stays zero until the script finishes. Is it possible to directly write to the log file, such that you can follow the progress of the script (e.g. using tail )? EDIT It turns out that python -u script.py does the trick, I didn't know about the buffering of stdout. | This is happening because normally when process STDOUT is redirected to something other than a terminal, then the output is buffered into some OS-specific-sized buffer (perhaps 4k or 8k in many cases). Conversely, when outputting to a terminal, STDOUT will be line-buffered or not buffered at all, so you'll see output after each \n or for each character. You can generally change the STDOUT buffering with the stdbuf utility: stdbuf -oL python script.py > log Now if you tail -F log , you should see each line output immediately as it is generated. Alternatively explicit flushing of the output stream after each print should achieve the same. It looks like sys.stdout.flush() should achieve this in Python. If you are using Python 3.3 or newer, the print function also has a flush keyword that does this: print('hello', flush=True) . | {
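As a concrete sketch (an illustration added here, reusing the script from the question and anticipating the explicit-flushing alternative mentioned just below):
import time
import sys
for i in range(10):
    print('bla')
    sys.stdout.flush()  # push each line out immediately instead of waiting for the buffer to fill
    time.sleep(5)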
"source": [
"https://unix.stackexchange.com/questions/182537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101617/"
]
} |
182,602 | I have an FFmpeg command to trim audio: ffmpeg -ss 01:43:46 -t 00:00:44.30 -i input.mp3 output.mp3 The problem I have with this command is that option -t requires a duration (in seconds) from 01:43:46 . I want to trim audio using start/stop times, e.g. between 01:43:46 and 00:01:45.02 . Is this possible? | ffmpeg seems to have a new option -to in the documentation : -to position ( input / output ) Stop writing the output or reading the input at position . position must be a time duration specification, see (ffmpeg-utils)the Time
duration section in the ffmpeg-utils(1) manual. -to and -t are mutually exclusive and -t has priority. Sample command with two time formats ffmpeg -i file.mkv -ss 20 -to 40 -c copy file-2.mkv
ffmpeg -i file.mkv -ss 00:00:20 -to 00:00:40 -c copy file-2.mkv This should create a copy (file-2.mkv) of file.mkv from the 20 second mark to the 40 second mark. | {
"source": [
"https://unix.stackexchange.com/questions/182602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77038/"
]
} |
182,696 | I want to get the current CPUPower governor. When I type cpupower frequency-info I get a lot of information. I just want to get the governor, just like "ondemand" with no more information, to use its value in a program. | The current governor can be obtained as follows: cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor Note that cpu* will give you the scaling governor of all your cores and not just e.g. cpu0. This solution might be system dependent, though. I'm not 100% sure this is portable. | {
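For example, to read a single core's governor into a shell variable for use in a program (a small illustration reusing the same sysfs path; cpu0 is picked arbitrarily):
governor=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)
echo "$governor"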
"source": [
"https://unix.stackexchange.com/questions/182696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83668/"
]
} |
182,801 | I've used many variants of Linux(mostly Debian derivatives) for over a decade now. One problem that I haven't seen solved satisfactorily is the issue of horizontal tearing, or Vsync not being properly implemented. I say this because I use used 5 different distros on 4 different computers with various monitors and Nvidia/AMD/ATI/Intel graphics cards; every time, there has been an issue with video tearing with even slight motion. This is a big problem, especially since even Windows XP doesn't have these issues on modern hardware. If anyone is going to use Linux for anything, why would they want constant defects to show up when doing anything non-CLI? I'm guessing that either few developers know about this problem or care enough to fix it. I've tried just about every compositor out there, and usually the best they can do is minimize the issue but not eliminate it. Shouldn't it be as simple as synchronizing with the refresh rate of the monitor? Is there some politics among the OSS community that's preventing anyone from committing code that fixes this? Every time I've asked for help on this issue in the past, it either gets treated as an edge case(which I find difficult to believe it is given the amount of times I've replicated the problem) or I get potential solutions that at most minimize the tearing. | This is all due to the fact that the X server is out-dated, ill-suitable for today's graphics hardware and basically all the direct video card communication is done as an extension ("patch") over the ancient bloated core. The X server provides no builtin means of synchronization between user rendering the window and the screen displaying a window, so the content changes in the middle of rendering. This is one of the well-known issues of the X server (it has many, the entire model of what the server does and is outdated - event handling in subwindows, metadata about windows, graphical primitives for direct drawing...). Widget toolkits mostly want to gloss over all this, but tearing is still a problem because there is no mechanism to handle that. Additional problems arise when you have multiple cards that require different drivers, and on top of all this, opengl library has a hard-wired dependency on xlib, so you can't really use it independently without going through X. Wayland, which is somewhat unenthusiastically trying to replace X, supports a pedantic vsync synchronization in its core, and is advertised to have every frame exactly perfect. If you quickly google "wayland video tearing" you'll find more information on everything. | {
"source": [
"https://unix.stackexchange.com/questions/182801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79611/"
]
} |
182,882 | (See Use #!/bin/sh or #!/bin/bash for Ubuntu-OSX compatibility and ease of use & POSIX ) If I want my scripts to use the bash shell, does using the .bash extension actually invoke bash or does it depend on system config / 1st shebang line. If both were in effect but different, which would have precedence? I'm not sure whether to end my scripts with .sh to just indicate "shell script" and then have the first line select the bash shell (e.g. #!/usr/bin/env bash ) or whether to just end them with .bash (as well as the line 1 setting). I want bash to be invoked. | does using the .bash extension actually invoke bash or does it depend
on system config / 1st shebang line. If you do not use an interpreter explicitly, then the interpreter being invoked is determined by the shebang used in the script. If you use an interpreter specifically then the interpreter doesn't care what extension you give for your script. However, the extension exists to make it very obvious for others what kind of script it is. [sreeraj@server ~]$ cat ./ext.py
#!/bin/bash
echo "Hi. I am a bash script" See, .py extension to the bash script does not make it a python script. [sreeraj@server ~]$ python ./ext.py
File "./ext.py", line 2
echo "Hi. I am a bash script"
^
SyntaxError: invalid syntax It's always a bash script. [sreeraj@server ~]$ ./ext.py
Hi. I am a bash script | {
"source": [
"https://unix.stackexchange.com/questions/182882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
183,063 | I'm asking this question cautiously because I don't want to get this wrong. I have a program_name.rpm file saved locally on my server (CentOS 6.5). I have installed it previously just by navigating to it and using yum install program_name.rpm which worked fine but it didn't give me any option to specify where it is installed. Is it possible to install this rpm to /opt/some_directory instead of it's default install location? | Too bad you accepted that rpm answer. That will lead to warnings from subsequent executions of yum, such as Warning: RPMDB altered outside of yum Instead you should use yum localinstall , per section 13 of the Yum and RPM Tricks page of the CentOS wiki => https://wiki.centos.org/TipsAndTricks/YumAndRPM#head-3c061f4a180e5bc90b7f599c4e0aebdb2d5fc7f6 You can use the --installroot option to specify a different installation root. | {
"source": [
"https://unix.stackexchange.com/questions/183063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
183,125 | I have the following bash script, from what I understand >> is used to append the output of a command to an existing file instead of overwrite, but what is it doing in this case? This script is calling some exe files to convert from one format to another. There are many years for each file, so it loops through each file by looking at the filename. Also when I run this script I get "ambiguous redirect" #!/bin/bash
source $HOME/.bashrc
jobout=${1}
joberr=${2}
# Set some paths and prefixes
yr_bgn=2000
yr_end=2000
yr=${yr_bgn}
pth_data='/mnt/'
pth_rst='/mnt/'
while [ ${yr} -le ${yr_end} ]
do
./executable1 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
./executable2 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
./executable3 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
let yr=${yr}+1
done | 1>> and 2>> are redirections for specific file-descriptors, in this case the standard output (file descriptor 1) and standard error (file descriptor 2). So the script is redirecting all "standard" messages to ${jobout} and all error messages to ${joberr} . Using >> in both cases means all messages are appended to the respective files. Note that ${jobout} and ${joberr} take their values from the two command-line parameters to the script ( ${1} and ${2} ), so you need to specify the files you want to use to store the messages. If the parameters aren't given the script will produce the "ambiguous redirection" error message you've seen; the script should really check whether the parameters have been provided and produce an appropriate error message otherwise, something like if [ -z "$1" -o -z "$2" ]; then
echo "Log files for standard and error messages must be specified"
echo "${0} msgfile errfile"
exit 1
fi at the start of the script. | {
"source": [
"https://unix.stackexchange.com/questions/183125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102018/"
]
} |
183,279 | In my understanding, an awk array is something like a Python dict. So I wrote the code below to explore it: awk '{my_dict[$1] = $2} END { print my_dict}' zen And I got: awk: can't read value of my_dict; it's an array name. As the first column isn't a number, how could I read the total content of the array or traverse it? | You can loop over the array's keys and extract the corresponding values: awk '{my_dict[$1] = $2} END { for (key in my_dict) { print my_dict[key] } }' zen To get output similar to that you'd get with a Python dictionary, you can print the key as well: awk '{my_dict[$1] = $2} END { for (key in my_dict) { print key ": " my_dict[key] } }' zen This works regardless of the key type. | {
"source": [
"https://unix.stackexchange.com/questions/183279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
183,452 | I received a zip file from a bank. I get the following error when I try to unzip it. unzip filename.zip
Archive: filename.zip
skipping: SOME_STUFF.pdf need PK compat. v6.1 (can do v4.6) The file command returns Zip archive data for this file. There are a fair number of threads containing this error message, but the only concrete suggestion they offer is to use 7z x or 7za x from the p7zip-full package. These fail with the error: Unsupported Method
Sub items Errors: 1 I'm using Debian wheezy amd64. I don't see significant updates of the unzip or 7za packages in testing/unstable though. I'd appreciate suggestions of how to unzip this file, and more generally, what does the error message PK compat. v6.1 (can do v4.6) mean? For a widely used utility, zip does not have much documentation available about it. The README in the Debian sources points to http://www.info-zip.org/pub/infozip/ which lists a release dated of 29th April 2009 for UnZip 6.0. Here is the version output for the unzip binary on my system. unzip -v
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.
Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip/ ;
see ftp://ftp.info-zip.org/pub/infozip/UnZip.html for other sites.
Compiled with gcc 4.7.2 for Unix (Linux ELF) on Feb 3 2015.
UnZip special compilation options:
ACORN_FTYPE_NFS
COPYRIGHT_CLEAN (PKZIP 0.9x unreducing method not supported)
SET_DIR_ATTRIB
SYMLINKS (symbolic links supported, if RTL and file system permit)
TIMESTAMP
UNIXBACKUP
USE_EF_UT_TIME
USE_UNSHRINK (PKZIP/Zip 1.x unshrinking method supported)
USE_DEFLATE64 (PKZIP 4.x Deflate64(tm) supported)
UNICODE_SUPPORT [wide-chars, char coding: UTF-8] (handle UTF-8 paths)
LARGE_FILE_SUPPORT (large files over 2 GiB supported)
ZIP64_SUPPORT (archives using Zip64 for large files supported)
USE_BZIP2 (PKZIP 4.6+, using bzip2 lib version 1.0.6, 6-Sept-2010)
VMS_TEXT_CONV
WILD_STOP_AT_DIR
[decryption, version 2.11 of 05 Jan 2007]
UnZip and ZipInfo environment options:
UNZIP: [none]
UNZIPOPT: [none]
ZIPINFO: [none]
ZIPINFOOPT: [none] dpkg reports the package version as 6.0-8+deb7u2 . The output of zipinfo is: zipinfo filename.zip
Archive: filename.zip
Zip file size: 6880 bytes, number of entries: 1
-rw-a-- 6.4 fat 10132 Bx defN 15-Feb-06 16:24 SOME_STUFF.pdf
1 file, 10132 bytes uncompressed, 6568 bytes compressed: 35.2% | Origin of the error The PK in the error stands for Phil Katz, the inventor of the original PKZIP format. The zip utility has not kept up with the capabilities of the pkzip derived commercial software, particularly the certificate storage that banks like to include in their ZIP files. Wikipedia gives an overview of the development of the format. But the Unix zip utilities don't implement the changes after the year 2002. You might have to buy the PKWARE commercial version for Linux to uncompress this. The man page for zip has the following to say for itself and unzip : A companion program (unzip(1)) unpacks zip archives. The zip and
unzip(1) programs can work with archives produced by PKZIP (supporting
most PKZIP features up to PKZIP version 4.6), and PKZIP and PKUNZIP can
work with archives produced by zip (with some exceptions, notably
streamed archives, but recent changes in the zip file standard may
facilitate better compatibility). zip version 3.0 is compatible with
PKZIP 2.04 and also supports the Zip64 extensions of PKZIP 4.5 which
allow archives as well as files to exceed the previous 2 GB limit (4 GB
in some cases). zip also now supports bzip2 compression if the bzip2
library is included when zip is compiled. Note that PKUNZIP 1.10 can‐
not extract files produced by PKZIP 2.04 or zip 3.0. You must use PKUN‐
ZIP 2.04g or unzip 5.0p1 (or later versions) to extract them. Solution Although zip cannot do the job, there are other tools that can. You mention the 7zip utility and the Linux/Unix commandline version of 7-Zip that, among others, can read and write ZIP format. It claims that if 7-Zip cannot read a zip file, then in 99% of the cases the file is broken. 7-Zip utilities should be able to read your file, so either it is broken or else yours is in the 1% (for which I found no further details). 7-zip on Linux comes in various executables with different format support. The most basic ( 7zr ) doesn't support ZIP; you should use at least 7za or the full-fledged 7z : 7za x filename.zip Different Linux distributions package 7za / 7z under different package names. The easiest (as so often) is installing on Solus: sudo eopkg install p7zip On Debian-derived Linux versions, the package p7zip only installs the base 7z , which doesn't support ZIP. This split-up has caused some problems, and installing p7zip-full doesn't give you everything its name suggests; sometimes you also have to install p7zip-rar . On my Linux Mint system I needed to do: sudo apt-get install p7zip-full p7zip-rar On RedHat/CentOS you need to have the EPEL repository enabled. E.g. on CentOS 7 I needed to do: sudo yum install epel-release
sudo yum --enablerepo=epel install p7zip | {
"source": [
"https://unix.stackexchange.com/questions/183452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4671/"
]
} |
183,504 | I would like to transfer files between two remote hosts using my local shell, but it seems rsync doesn't support synchronisation if two remotes are specified as follows: $ rsync -vuar host1:/var/www host2:/var/www
The source and destination cannot both be remote. What other workarounds/commands I could use to achieve similar results? | As you have discovered you cannot use rsync with a remote source and a remote destination. Assuming the two servers can't talk directly to each other, it is possible to use ssh to tunnel via your local machine. Instead of rsync -vuar host1:/var/www host2:/var/www you can use this ssh -R localhost:50000:host2:22 host1 'rsync -e "ssh -p 50000" -vuar /var/www localhost:/var/www' The first instance of /var/www applies to the source on host1 , the localhost:/var/www corresponds to the destination on host2 . In case you're curious, the -R option sets up a reverse channel from port 50000 on host1 that maps (via your local machine) to port 22 on host2. There is no direct connection from host1 to host2. | {
"source": [
"https://unix.stackexchange.com/questions/183504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21471/"
]
} |
183,717 | The man page of unshare says: UTS namespace setting hostname, domainname will not affect rest of the system (CLONE_NEWUTS flag) What does UTS stand for? | It means the process has a separate copy of the hostname and the (now mostly unused) NIS domain name, so it can set it to something else without affecting the rest of the system. The hostname is set via sethostname and is the nodename member of the struct returned by uname . The NIS domain name is set by setdomainname and is the domainname member of the struct returned by uname . UTS stands for UNIX Timesharing System. References: lwn.net - Namespaces in operation, part 1: namespaces overview uts namespaces: Introduction man uname(2) Meaning of UTS in UTS_RELEASE | {
"source": [
"https://unix.stackexchange.com/questions/183717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65170/"
]
} |
183,745 | As far as I know, [[ is an enhanced version of [ , but I am confused when I see [[ as a keyword and [ being shown as a builtin. [root@server ~]# type [
[ is a shell builtin
[root@server ~]# type [[
[[ is a shell keyword TLDP says A builtin may be a synonym to a system command of the same name, but
Bash reimplements it internally. For example, the Bash echo command is
not the same as /bin/echo, although their behavior is almost
identical. and A keyword is a reserved word, token or operator. Keywords have a
special meaning to the shell, and indeed are the building blocks of
the shell's syntax. As examples, for, while, do, and ! are keywords.
Similar to a builtin, a keyword is hard-coded into Bash, but unlike a
builtin, a keyword is not in itself a command, but a subunit of a
command construct. [2] Shouldn't that make both [ and [[ a keyword? Is there anything that I am missing here?
Also, this link re-affirms that both [ and [[ should belong to the same kind. | The difference between [ and [[ is quite fundamental. [ is a command. Its arguments are processed just the way any other commands arguments are processed. For example, consider: [ -z $name ] The shell will expand $name and perform both word splitting and filename generation on the result, just as it would for any other command. As an example, the following will fail: $ name="here and there"
$ [ -n $name ] && echo not empty
bash: [: too many arguments To have this work correctly, quotes are necessary: $ [ -n "$name" ] && echo not empty
not empty [[ is a shell keyword and its arguments are processed according to special rules. For example, consider: [[ -z $name ]] The shell will expand $name but, unlike any other command, it will perform neither word splitting nor filename generation on the result. For example, the following will succeed despite the spaces embedded in name : $ name="here and there"
$ [[ -n $name ]] && echo not empty
not empty Summary [ is a command and is subject to the same rules as all other commands that the shell executes. Because [[ is a keyword, not a command, however, the shell treats it specially and it operates under very different rules. | {
"source": [
"https://unix.stackexchange.com/questions/183745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68296/"
]
} |
183,910 | I'm running a dual-screen setup and have my trackpad disabled most of the time (which includes hiding the mousepointer).
When I reenable the trackpad (and display the mouse pointer again), I've lost track where the pointer was before. I'm looking for a tool to highlight the current mouse position (e.g. by a circle). Ideally this would be a single command flashing the circle for a short period of time. I'm aware that xdotool can find the current position, yet there is no highlighting; also, key-mon doesn't provide this functionality.
I've also read that cairo composition manager provides such functionality, yet I'm wondering if there is a smaller tool to achieve this. In case there is no such tool: What is the easiest way to display such a circle around the cursor using the data provided by xdotool getmouselocation ? In case this is relevant: I don't use a desktop environment, just the xmonad window manager. | While I like Mikeserv 's answer for cleverness, it has the downside that it will create a window which "steals" the focus and has to be clicked away. I also find it takes just slightly too long to start: about 0.2 to 0.3 seconds, which is just slightly too slow for a "smooth" experience. I finally got around to digging into XLib, and clobbered together a basic C program to do this. The visual effect is roughly similar to what Windows (XP) has (from memory). It's not very beautiful, but it works ;-) It doesn't "steal" focus, starts near-instantaneous, and you can click "through" it. You can compile it with cc find-cursor.c -o find-cursor -lX11 -lXext -lXfixes . There are some variables at the top you can tweak to change the size, speed, etc. I released this as a program at https://github.com/arp242/find-cursor . I recommend you use this version, as it has some improvements that the below script doesn't have (such as commandline arguments and ability to click "through" the window). I've left the below as-is due to its simplicity. /*
* https://github.com/arp242/find-cursor
* Copyright © 2015 Martin Tournoij <[email protected]>
* See below for full copyright
*/
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
// Some variables you can play with :-)
int size = 220;
int step = 40;
int speed = 400;
int line_width = 2;
char color_name[] = "black";
int main(int argc, char* argv[]) {
// Setup display and such
char *display_name = getenv("DISPLAY");
if (!display_name) {
fprintf(stderr, "%s: cannot connect to X server '%s'\n", argv[0], display_name);
exit(1);
}
Display *display = XOpenDisplay(display_name);
int screen = DefaultScreen(display);
// Get the mouse cursor position
int win_x, win_y, root_x, root_y = 0;
unsigned int mask = 0;
Window child_win, root_win;
XQueryPointer(display, XRootWindow(display, screen),
&child_win, &root_win,
&root_x, &root_y, &win_x, &win_y, &mask);
// Create a window at the mouse position
XSetWindowAttributes window_attr;
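// override_redirect asks the X server to map this window directly, bypassing the window manager (no decorations, no focus stealing)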
window_attr.override_redirect = 1;
Window window = XCreateWindow(display, XRootWindow(display, screen),
root_x - size/2, root_y - size/2, // x, y position
size, size, // width, height
0, // border width
DefaultDepth(display, screen), // depth
CopyFromParent, // class
DefaultVisual(display, screen), // visual
CWOverrideRedirect, // valuemask
&window_attr // attributes
);
XMapWindow(display, window);
XStoreName(display, window, "find-cursor");
XClassHint *class = XAllocClassHint();
class->res_name = "find-cursor";
class->res_class = "find-cursor";
XSetClassHint(display, window, class);
XFree(class);
// Keep the window on top
XEvent e;
memset(&e, 0, sizeof(e));
e.xclient.type = ClientMessage;
e.xclient.message_type = XInternAtom(display, "_NET_WM_STATE", False);
e.xclient.display = display;
e.xclient.window = window;
e.xclient.format = 32;
e.xclient.data.l[0] = 1;
e.xclient.data.l[1] = XInternAtom(display, "_NET_WM_STATE_STAYS_ON_TOP", False);
XSendEvent(display, XRootWindow(display, screen), False, SubstructureRedirectMask, &e);
XRaiseWindow(display, window);
XFlush(display);
// Prepare to draw on this window
XGCValues values;
values.graphics_exposures = False;
unsigned long valuemask = 0;
GC gc = XCreateGC(display, window, valuemask, &values);
Colormap colormap = DefaultColormap(display, screen);
XColor color;
XAllocNamedColor(display, colormap, color_name, &color, &color);
XSetForeground(display, gc, color.pixel);
XSetLineAttributes(display, gc, line_width, LineSolid, CapButt, JoinBevel);
// Draw the circles
for (int i=1; i<=size; i+=step) {
XDrawArc(display, window, gc,
size/2 - i/2, size/2 - i/2, // x, y position
i, i, // Size
0, 360 * 64); // Make it a full circle
XSync(display, False);
usleep(speed * 100);
}
XFreeGC(display, gc);
XCloseDisplay(display);
}
/*
* The MIT License (MIT)
*
* Copyright © 2015 Martin Tournoij
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* The software is provided "as is", without warranty of any kind, express or
* implied, including but not limited to the warranties of merchantability,
* fitness for a particular purpose and noninfringement. In no event shall the
* authors or copyright holders be liable for any claim, damages or other
* liability, whether in an action of contract, tort or otherwise, arising
* from, out of or in connection with the software or the use or other dealings
* in the software.
*/ | {
"source": [
"https://unix.stackexchange.com/questions/183910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102507/"
]
} |
183,918 | I installed RHEL 7 using server with GUI with no added option on my local virtualbox, but I cannot get Firefox connected to the Internet. I checked /etc/resolv.conf contains nameserver setting I can ping other servers, such as 8.8.8.8 I open the firewall config GUI, I can see connected is at the left corner, I added http , https and 80 , 443 , 8080 and 8443 in public zone, and set interface to use public zone Firefox cannot get connected, and I can't curl either. Could some RHEL experts explain what I am missing? | | {
"source": [
"https://unix.stackexchange.com/questions/183918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102515/"
]
} |
183,994 | I've never really got how chmod worked up until today. I followed a tutorial that explained a big deal to me. For example, I've read that you've got three different permission groups: owner ( u ) group ( g ) everyone ( o ) Based on these three groups, I now know that: If the file is owned by the user, the user permissions determine the access. If the group of the file is the same as the user's group, the group permission determine the access. If the user is not the file owner, and is not in the group, then the other permission is used. I've also learned that you've got the following permissions: read ( r ) write ( w ) execute ( x ) I created a directory to test my newly acquired knowledge: mkdir test Then I did some tests: chmod u+rwx test/
# drwx------
chmod g+rx test/
# drwxr-x---
chmod u-x test/
# drw-r-x--- After fooling around for some time I think I finally got the hang of chmod and the way you set permission using this command. But... I still have a few questions: What does the d at the start stand for? What's the name and use of the containing slot and what other values can it hold? How can I set and unset it? What is the value for this d ? (As you only have 7=4+2+1 7=4+2+1 7=4+2+1) Why do people sometimes use 0777 instead of 777 to set their permissions? But as I shouldn't be asking multiple questions, I'll try to ask it in one question. In UNIX based system such as all Linux distributions, concerning the permissions, what does the first part ( d ) stand for and what's the use for this part of the permissions? | I’ll answer your questions in three parts: file types, permissions, and use cases for the various forms of chmod . File types The first character in ls -l output represents the file type; d means it’s a directory. It can’t be set or unset, it depends on how the file was created. You can find the complete list of file types in the ls documentation ; those you’re likely to come across are - : “regular” file, created with any program which can write a file b : block special file, typically disk or partition devices, can be created with mknod c : character special file, can also be created with mknod (see /dev for examples) d : directory, can be created with mkdir l : symbolic link, can be created with ln -s p : named pipe, can be created with mkfifo s : socket, can be created with nc -U D : door , created by some server processes on Solaris/openindiana. Permissions chmod 0777 is used to set all the permissions in one chmod execution, rather than combining changes with u+ etc. Each of the four digits is an octal value representing a set of permissions: suid , sgid and “sticky” (see below) user permissions group permissions “other” permissions The octal value is calculated as the sum of the permissions: “read” is 4 “write” is 2 “execute” is 1 For the first digit: suid is 4; binaries with this bit set run as their owner user (commonly root ) sgid is 2; binaries with this bit set run as their owner group (this was used for games so high scores could be shared, but it’s often a security risk when combined with vulnerabilities in the games), and files created in directories with this bit set belong to the directory’s owner group by default (this is handy for creating shared folders) “sticky” (or “restricted deletion”) is 1; files in directories with this bit set can only be deleted by their owner, the directory’s owner, or root (see /tmp for a common example of this). See the chmod manpage for details. Note that in all this I’m ignoring other security features which can alter users’ permissions on files (SELinux, file ACLs...). Special bits are handled differently depending on the type of file (regular file or directory) and the underlying system. (This is mentioned in the chmod manpage.) On the system I used to test this (with coreutils 8.23 on an ext4 filesystem, running Linux kernel 3.16.7-ckt2), the behaviour is as follows. For a file, the special bits are always cleared unless explicitly set, so chmod 0777 is equivalent to chmod 777 , and both commands clear the special bits and give everyone full permissions on the file. For a directory, the special bits are never fully cleared using the four-digit numeric form, so in effect chmod 0777 is also equivalent to chmod 777 but it’s misleading since some of the special bits will remain as-is. 
(A previous version of this answer got this wrong.) To clear special bits on directories you need to use u-s , g-s and/or o-t explicitly or specify a negative numeric value, so chmod -7000 will clear all the special bits on a directory. In ls -l output, suid , sgid and “sticky” appear in place of the x entry: suid is s or S instead of the user’s x , sgid is s or S instead of the group’s x , and “sticky” is t or T instead of others’ x . A lower-case letter indicates that both the special bit and the executable bit are set; an upper-case letter indicates that only the special bit is set. The various forms of chmod Because of the behaviour described above, using the full four digits in chmod can be confusing (at least it turns out I was confused). It’s useful when you want to set special bits as well as permission bits; otherwise the bits are cleared if you’re manipulating a file, preserved if you’re manipulating a directory. So chmod 2750 ensures you’ll get at least sgid and exactly u=rwx,g=rx,o= ; but chmod 0750 won’t necessarily clear the special bits. Using numeric modes instead of text commands ( [ugo][=+-][rwxXst] ) is probably more a case of habit and the aim of the command. Once you’re used to using numeric modes, it’s often easier to just specify the full mode that way; and it’s useful to be able to think of permissions using numeric modes, since many other commands can use them ( install , mknod ...). Some text variants can come in handy: if you simply want to ensure a file can be executed by anyone, chmod a+x will do that, regardless of what the other permissions are. Likewise, +X adds the execute permission only if one of the execute permissions is already set or the file is a directory; this can be handy for restoring permissions globally without having to special-case files v. directories. Thus, chmod -R ug=rX,u+w,o= is equivalent to applying chmod -R 750 to all directories and executable files and chmod -R 640 to all other files. | {
"source": [
"https://unix.stackexchange.com/questions/183994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102590/"
]
} |
184,009 | I'm writing a script which shows the git log for a directory when I cd into it. Such a log can be overwhelming, containing hundreds of lines. So far I have been limiting that to a hard-coded 20 lines ( ... | head -n 20 ), which is fine on the screen at work, but too much on the smaller MacBook screen at home. I would prefer the log to take up about half the (vertical) screen on either terminal. And "terminal" also changes: it's Gnome terminal at work, but iTerm2 at home. And I do not use screen or tmux. How do I find the number of vertical lines available in a terminal from the command line? | Terminal parameters are stored in the $LINES and $COLUMNS variables. Additionally you can use a special terminal-control program, for example tput : tput lines # outputs the number of lines of the present terminal window.
tput cols # outputs the number of columns of the present terminal window. | {
"source": [
"https://unix.stackexchange.com/questions/184009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32775/"
]
} |
184,031 | If a user has loginShell=/sbin/nologin is it still possible to ssh user@machine [command] assuming that the user has proper ssh keys in its home directory that can be used to authenticate? My goal is to keep the user as a nologin, but still able to execute commands on a few other machines on the network (similar to its use through 'sudo -u'), and am wondering if this is a reasonable course. | Setting /sbin/nologin as the user's shell (or /bin/false or /bin/true , which are almost equivalent ) forbids the user from logging in to run any command whatsoever. SSH always invokes the user's login shell to run commands, so you need to set the login shell to one that is able to run some commands. There are several restricted shells that allow users to run only a few commands. For example rssh and scponly are both such shells that allow the user to run a few predefined commands (such as scp , sftp-server , rsync , …). See also Restrict user access in linux and Do you need a shell for SCP? | {
"source": [
"https://unix.stackexchange.com/questions/184031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67807/"
]
} |
184,038 | Here is a little script to retarget old-wrong symlinks that I want to make interactive. #!/bin/bash
# retarget (broken) symbolink links interactively
echo -n "Enter the source directory where symlinks path should be retargeted > "
read response1
if [ -n "$response1" ]; then
symlinksdirectory=$response1
fi
if [ -d $symlinksdirectory ]; then
echo -n "Okay, source directory exists. Now enter 1) the symlinks OLD-WRONG target directory > "
read response2
if [ -n "$response2" ]; then
oldtargetdir=$response2
fi
echo -n "$oldtargetdir - And 2) enter the symlinks CORRECT target directory > "
read response3
if [ -n "$response3" ]; then
goodtargetdir=$response3
fi
echo -n "Now parsing symlinks in $symlinksdirectory to retarget them from $oldtargetdir to $goodtargetdir > "
find $symlinksdirectory -type l | while read nullsymlink ;
do wrongpath=$(readlink "$nullsymlink") ;
right=$(echo "$wrongpath" | sed s'|$oldtargetdir|$goodtargetdir|') ;
ln -fs "$right" "$nullsymlink" ; done
fi It does not replace the symlinks' path. My bad syntax as it works fine when replacing variables with real paths for sed (end of the script): right=$(echo "$wrongpath" | sed s'|/mnt/docs/dir2|/mnt/docs/dir1/dir2|') ; How should I insert variables properly? | Setting /sbin/nologin as the user's shell (or /bin/false or /bin/true , which are almost equivalent ) forbids the user from logging in to run any command whatsoever. SSH always invokes the user's login shell to run commands, so you need to set the login shell to one that is able to run some commands. There are several restricted shells that allow users to run only a few commands. For example rssh and scponly are both such shells that allow the user to run a few predefined commands (such as scp , sftp-server , rsync , …). See also Restrict user access in linux and Do you need a shell for SCP? | {
"source": [
"https://unix.stackexchange.com/questions/184038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87958/"
]
} |
184,379 | I am trying to copy files from one server directly to another, bypassing my local computer. I did scp -r [email protected]:~/data/* [email protected]:~/data/
Password:
Host key verification failed.
lost connection Is this even possible? How may I fix it? | Something I use fairly often when there is no connection possible between the two servers scp -3 user@server1:/path/to/file user@server2:/path/to/file source -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly
between the two
remote hosts. Note that this option disables the progress meter. Assuming you have a good connection to both, it's not too slow. | {
"source": [
"https://unix.stackexchange.com/questions/184379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/71888/"
]
} |
184,393 | I have this code snippet: $CUSER=tim
$APPDIR=/var/www/testing
$APPVENV=/var/www/testing/ven
cat > $APPDIR/cronfile << EOF
PWD=$APPDIR/$CUSER
PATH=$APPVENV/bin:\$PATH
0 2 * * * testapp search newsite
EOF
crontab $APPDIR/cronfile It seems to work but I'm really confused about how I would try to run this manually. What does this expand to if I wanted to run it from a command from shell? I tried something like this but it didn't work :( cd /var/www/testing/ven
testapp search newsite | Something I use fairly often when there is no connection possible between the two servers scp -3 user@server1:/path/to/file user@server2:/path/to/file source -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly
between the two
remote hosts. Note that this option disables the progress meter. Assuming youu have a good connection to both, its not too slow. | {
"source": [
"https://unix.stackexchange.com/questions/184393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61825/"
]
} |
184,413 | I have a few directories inside a folder, like below - teckapp@machineA:/opt/keeper$ ls -ltrh
total 8.0K
drwxr-xr-x 10 teckapp cloudmgr 4.0K Feb 9 10:22 keeper-3.4.6
drwxr-xr-x 3 teckapp cloudmgr 4.0K Feb 12 01:44 data I have some other folders on other machines as well, for which I need to change the permissions to the above, i.e. drwxr-xr-x . Meaning: how can I change a folder's permissions to drwxr-xr-x ? I know I need to use the chmod command for this, but what value should I use? | To apply those permissions to a directory: chmod 755 directory_name To apply to all directories inside the current directory: chmod 755 */ If you want to modify all directories and subdirectories, you'll need to combine find with chmod : find . -type d -exec chmod 755 {} + | {
"source": [
"https://unix.stackexchange.com/questions/184413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102842/"
]
} |
184,493 | Am I correct to assume that when ; joins two commands on a line, Bash always waits until the first command has exited before executing the second command?
And similarly, in a shell script containing two different commands on different lines, Bash always waits until the command on the first line has exited before executing the command on the second line? If this is the case, is there a way to execute two commands in one line or in a script, so that the second command doesn't wait until the first command has finished? Also, are different lines in a shell script equivalent to separate lines joined by ; or && ? | You're correct, commands in scripts are executed sequentially by default. You can run a command in the background by suffixing it with & (a single ampersand). Commands on separate lines are equivalent to commands joined with ; by default. If you tell your shell to abort on non-zero exit codes ( set -e ), then the script will execute as though all the commands were joined with && . | {
"source": [
"https://unix.stackexchange.com/questions/184493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
184,502 | Currently I have a script.sh file with the following content: #!/bin/bash
wget -q http://exemple.com/page1.php;
wget -q http://exemple.com/page2.php;
wget -q http://exemple.com/page3.php; I want to execute the commands one by one, when the previous finishes. Am I doing it in the right way? I've never worked with Linux before and tried to search for it but found no solutions. | Yes, you're doing it the right way. Shell scripts will run each command sequentially, waiting for the first to finish before the next one starts. You can either join commands with ; or have them on separate lines: command1; command2 or command1
command2 There is no need for ; if the commands are on separate lines. You can also choose to run the second command only if the first exited successfully. To do so, join them with && : command1 && command2 or command1 &&
command2 For more information on the various control operators available to you, see here . | {
"source": [
"https://unix.stackexchange.com/questions/184502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102903/"
]
} |
184,508 | I am trying to build a script in which nvm and eventually node will get installed. I have installed nvm with cURL. I see the modifications in the .profile or .bashrc file (both work), and when typing nvm at the bash prompt, it shows the options available etc. So nvm works. Manually I can install node, but as soon as I put the nvm command in a shell script: nano test.sh
#!/bin/bash
nvm and run it with: chmod 755 test.sh
./test.sh I get: ./test.sh: line 2: nvm: command not found If it can't find nvm , I don't even have to think of nvm ls-remote or nvm install ... I got Ubuntu 14.04 installed and Bash is my shell. | The nvm command is a shell function declared in ~/.nvm/nvm.sh . You may source any of the following scripts at the start of yours to make nvm() available: . ~/.nvm/nvm.sh
. ~/.profile
. ~/.bashrc
. $(brew --prefix nvm)/nvm.sh # if installed via Brew | {
"source": [
"https://unix.stackexchange.com/questions/184508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102902/"
]
} |
184,519 | NOTE: This question is the complement of this Q&A: How to "grep" for line length *not* in a given range? I need to get only the lines from a textfile (a wordlist, separated with newlines) that are at least 3 characters long and at most 10 characters long. Example: INPUT: egyezményét
megkíván
ki
alma
kevesen
meghatározó OUTPUT: megkíván
alma
kevesen Question: How can I do this in bash ? | grep -x '.\{3,10\}' where -x (also --line-regexp with GNU grep ) match pattern to whole line . any single character \{3,10\} quantify from 3 to 10 times previous symbol (in the case any ones) | {
"source": [
"https://unix.stackexchange.com/questions/184519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
184,658 | I have a file compressed in *.txz. After unpacking it I received a *.tar file. Is there any way to unpack it twice with one command? I mean unpack the (*.tar).txz file with one command? For now I do it like this: xz -d file.txz
tar xvf file.tar But I wonder if there is nicer way. | xz -d < file.tar.xz | tar xvf - That's the same as with any compressed archive. You should never have to create an uncompressed copy of the original file. Some tar implementations like recent versions of GNU tar have builtin options to call xz by themselves. With GNU tar or bsdtar : tar Jxvf file.tar.xz Though, if you've got a version that has -J , chances are it will detect xz files automatically, so: tar xvf file.tar.xz will suffice. If your GNU or BSD tar is too old to support xz specifically, you may be able to use the --use-compress-program option: tar --use-compress-program=xz -xvf file.tar.gz One of the advantages of having tar invoke the compressor utility is that it is able to report the failure of it in its exit status. Note: if the tar.xz archive has been created with pixz , pixz may have added a tar index to it, which allows extracting files individually without having to uncompress the whole archive: pixz -x path/to/file/in/archive < file.tar.xz | tar xvf - | {
"source": [
"https://unix.stackexchange.com/questions/184658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102994/"
]
} |
184,659 | Can anyone tell me what is necessary to get elementaryOS installed in a virtualbox on a Windows7 host? I came across elementaryOS a few days back and I have been trying to install it in a VirtualBox. However, each time I have downloaded it and attempted to select the elementary iso in the storage settings of my virtualbox, my downloaded elementary iso CD file is not visible and hence not selectable. The downloaded file is named in the following format:
elementaryos-stable-amd64.20130810 | xz -d < file.tar.xz | tar xvf - That's the same as with any compressed archive. You should never have to create an uncompressed copy of the original file. Some tar implementations like recent versions of GNU tar have builtin options to call xz by themselves. With GNU tar or bsdtar : tar Jxvf file.tar.xz Though, if you've got a version that has -J , chances are it will detect xz files automatically, so: tar xvf file.tar.xz will suffice. If your GNU or BSD tar is too old to support xz specifically, you may be able to use the --use-compress-program option: tar --use-compress-program=xz -xvf file.tar.gz One of the advantages of having tar invoke the compressor utility is that it is able to report the failure of it in its exit status. Note: if the tar.xz archive has been created with pixz , pixz may have added a tar index to it, which allows extracting files individually without having to uncompress the whole archive: pixz -x path/to/file/in/archive < file.tar.xz | tar xvf - | {
"source": [
"https://unix.stackexchange.com/questions/184659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16154/"
]
} |
184,726 | I need to include the Python script below inside a bash script. If the bash script ends successfully, I need to execute the script below: #!/usr/bin/python
from smtplib import SMTP
import datetime
debuglevel = 0
smtp = SMTP()
smtp.set_debuglevel(debuglevel)
smtp.connect('192.168.75.1', 25)
smtp.login('my_mail', 'mail_passwd')
from_addr = "My Name <[email protected]>"
to_addr = "<[email protected]"
subj = "Process completed"
date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )
#print (date)
message_text = "Hai..\n\nThe process completed."
msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % ( from_addr, to_addr, subj, date, message_text )
smtp.sendmail(from_addr, to_addr, msg)
smtp.quit() | You can use heredoc if you want to keep the source of both bash and python scripts together. For example, say the following are the contents of a file called pyinbash.sh : #!/bin/bash
echo "Executing a bash statement"
export bashvar=100
cat << EOF > pyscript.py
#!/usr/bin/python
import subprocess
print 'Hello python'
subprocess.call(["echo","$bashvar"])
EOF
chmod 755 pyscript.py
./pyscript.py Now running pyinbash.sh will yield: $ chmod 755 pyinbash.sh
$ ./pyinbash.sh
Executing a bash statement
Hello python
100 | {
"source": [
"https://unix.stackexchange.com/questions/184726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102478/"
]
} |
184,804 | Consider the following (slightly silly) script named 'test1.sh': #!/bin/bash
#
sleep 10 &
echo sleep pid = $!
pkill sleep When I run it, I get not only the output of the echo, but bash's reporting of the death of sleep on stderr: $ ./test1.sh
sleep pid = 3551
./test1.sh: line 5: 3551 Terminated sleep 10 In this case, I'd like to suppress the printout to stderr. I know I can do it on the command line, as in: $ ./test1.sh 2> /dev/null ... but is there a way to suppress it from within the script? (I know I could wrap it in a second script and have the wrapper redirect it, but there must be something easier...) | You're right; pkill isn't generating the message, bash is.
You suggest that $ ./test1.sh 2> /dev/null is a possible solution.
As UVV points out, the equivalent action from within the script is exec 2> /dev/null This redirects the stderr for the script to /dev/null from this statement until it is changed back.
Clumsy ways of changing it back include exec 2> /dev/tty which redirects stderr to the terminal.
This is probably (but not necessarily) where it was originally. Or exec 2>&1 which sets stderr to be the same as stdout, and is likely to be wrong. A more reliable way is exec 3>&2
exec 2> /dev/null (do stuff where you don't want to see the stderr.) exec 2>&3 which saves the original stderr in file descriptor 3, and later restores it. Other ways to suppress just the announcement of the process death include (sleep 10 & pkill sleep) 2> /dev/null and { sleep 10 & pkill sleep;} 2> /dev/null which change the stderr for only the grouped commands. | {
"source": [
"https://unix.stackexchange.com/questions/184804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82849/"
]
} |
184,813 | I am fairly new to the system administration world. I've been working on an application recently and when I check my application server logs, I constantly get various IP addresses trying to ssh into my server by brute force. Here is an example of my server log: Feb 14 04:07:20 foodwiz3 sshd[1264]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Feb 14 04:07:21 foodwiz3 sshd[1264]: reverse mapping checking getaddrinfo for coenamor.columbiansabbatical.com [23.249.167.223] failed - POSSIBLE BREAK-IN ATTEMPT!
Feb 14 04:07:21 foodwiz3 sshd[1264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=23.249.167.223 user=root
Feb 14 04:07:23 foodwiz3 sshd[1264]: Failed password for root from 23.249.167.223 port 32997 ssh2
Feb 14 04:07:23 foodwiz3 sshd[1264]: Received disconnect from 23.249.167.223: 11: Bye Bye [preauth]
Feb 14 04:13:04 foodwiz3 sshd[1289]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Feb 14 04:13:05 foodwiz3 sshd[1289]: reverse mapping checking getaddrinfo for coenamor.columbiansabbatical.com [23.249.167.223] failed - POSSIBLE BREAK-IN ATTEMPT!
Feb 14 04:13:05 foodwiz3 sshd[1289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=23.249.167.223 user=root
Feb 14 04:13:07 foodwiz3 sshd[1289]: Failed password for root from 23.249.167.223 port 41562 ssh2 Is this something that is fairly normal, or should I be worried/doing something about it? | Welcome to the wonderful world of the Internet... Have you: put your server behind a hardware firewall? activated the software firewall ? (just in case the HW firewall glitches) hardened your server? installed rkhunter before putting it online? activated automatic daily security updates ? changed the default port of ssh ? ... But the real answer is: Yes, this is normal : the BotNet Maffia can always use a few extra badly protected servers... | {
"source": [
"https://unix.stackexchange.com/questions/184813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95321/"
]
} |
184,860 | Using ls -R / ,
I get every file and directory listed that exists, but not with the full file path. What do I need to write on the command line in order to get a list in such a format that every line contains an absolute path? | You can get the files with their full paths with this command: find / -type f or list files from the current directory down: find $(pwd) -type f | {
"source": [
"https://unix.stackexchange.com/questions/184860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
184,863 | At the beginning of a bash shell script is the following line: IFS=$'\n' What is the meaning behind this collection of symbols? | IFS stands for "internal field separator". It is used by the shell to determine how to do word splitting, i. e. how to recognize word boundaries. Try this in a shell like bash (other shells may handle this differently, for example zsh): mystring="foo:bar baz rab"
for word in $mystring; do
echo "Word: $word"
done The default value for IFS consists of whitespace characters (to be precise: space, tab and newline). Each character can be a word boundary. So, with the default value of IFS , the loop above will print: Word: foo:bar
Word: baz
Word: rab In other words, the shell thinks that whitespace is a word boundary. Now, try setting IFS=: before executing the loop. This time, the result is: Word: foo
Word: bar baz rab Now, the shell splits mystring into words as well -- but now, it only treats a colon as the word boundary. The first character of IFS is special: It is used to delimit words in the output when using the special $* variable (example taken from the Advanced Bash Scripting Guide , where you can also find more information on special variables like that one): $ bash -c 'set w x y z; IFS=":-;"; echo "$*"'
w:x:y:z Compare to: $ bash -c 'set w x y z; IFS="-:;"; echo "$*"'
w-x-y-z Note that in both examples, the shell will still treat all of the characters : , - and ; as word boundaries. The only thing that changes is the behaviour of $* . Another important thing to know is how so-called "IFS whitespace" is treated . Basically, as soon as IFS includes whitespace characters, leading and trailing whitespace is stripped from the string to be split before processing it and a sequence of consecutive whitespace characters delimits fields as well. However, this only applies to those whitespace characters which are actually present in IFS . For example, let's look at the string "a:b:: c d " (trailing space and two space characters between c and d ). With IFS=: it would be split into four fields: "a" , "b" , "" (empty string) and " c d " (again, two spaces between c and d ). Note the leading and trailing whitespace in the last field. With IFS=' :' , it would be split into five fields: "a" , "b" , "" (empty string), "c" and "d" . No leading and trailing whitespace anywhere. Note how multiple, consecutive whitespace characters delimit two fields in the second example, while multiple, consecutive colons don't (since they are not whitespace characters). As for IFS=$'\n' , that is a ksh93 syntax also supported by bash , zsh , mksh and FreeBSD sh (with variations between all shells). Quoting the bash manpage: Words of the form $'string' are treated specially. The word expands to "string", with backslash-escaped characters replaced as specified by the ANSI C standard. \n is the escape sequence for a newline, so IFS ends up being set to a single newline character. | {
"source": [
"https://unix.stackexchange.com/questions/184863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
184,877 | I'm looking for a few kernel modules to load i2c-dev and i2c-bcm2708 . But the modprobe command returns: sudo modprobe i2c-dev
modprobe: module i2c-dev not found in modules.dep How do I list all the available modules in the system? In which directory are they located? | By default modprobe loads modules from kernel subdirectories located in the /lib/modules/$(uname -r) directory. Usually all files have the extension .ko , so you can list them with find /lib/modules/$(uname -r) -type f -name '*.ko' or, taking into account compressed files: find /lib/modules/$(uname -r) -type f -name '*.ko*' Each module can also be loaded by referring to its aliases, stored in /lib/modules/$(uname -r)/modules.alias (and modules.alias.bin ). However, to load a module successfully modprobe needs its dependencies listed in the file /lib/modules/$(uname -r)/modules.dep (and a corresponding binary version modules.dep.bin ). If some module is present on the system but is not on the list, then you should run the command depmod , which will generate such dependencies and automatically add your module to modules.dep and modules.dep.bin . Additionally, if the module is successfully loaded it will be listed in the file /proc/modules (also accessible via the lsmod command). | {
"source": [
"https://unix.stackexchange.com/questions/184877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65903/"
]
} |
184,947 | I'm trying to copy my gpg key from one machine to another. I do: gpg --export ${ID} > public.key
gpg --export-secret-key ${ID} > private.key Move files to new machine, and then: gpg --import public.key
gpg: nyckel [ID]: public key [Name, e-mail] was imported
gpg: Total number of treated keys: 1
gpg: imported: 1 (RSA: 1)
gpg --allow-secret-key-import private.key
sec [?]/[ID] [Creation date] [Name, e-mail]
ssb [?]/[SUB-ID] [Creation date] All looks good to me, but then: $ gpg -d [file].gpg
gpg: encrypted with 4096-bit RSA-key, id [SUB-ID], created [Creation date]
[Name, e-mail]
gpg: decryption failed: secret key not accessible So the error message says that the file has been encrypted with [SUB-ID], which the secret key import appears to say it has imported. (The [SUB-ID] in both messages is the same). So I'm clearly doing something wrong, but I don't know what. | You need to add --import to the command line to import the private key. (You don't need to use the --allow-secret-key-import flag. According to the man page: "This is an obsolete option and is not used anywhere.") gpg --import private.key | {
"source": [
"https://unix.stackexchange.com/questions/184947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18863/"
]
} |
184,965 | I'm trying to edit a file from a remote computer connected via ssh. How can I open the remote file on my local computer to edit? | You can mount the remote directory with sshfs , after that, the file is accessible in your local directory tree. Example: sshfs user@domain:/remote/directory/ /local/directory/ It's all in the man pages. Or just copy the file over with scp/rsync , edit it, and copy it back. | {
"source": [
"https://unix.stackexchange.com/questions/184965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50683/"
]
} |
185,283 | I'm trying to write a shell script that will wait for a file to appear in the /tmp directory called sleep.txt and once it is found the program will cease, otherwise I want the program to be in a sleep (suspended) state until the file is located. Now, I'm assuming that I will use a test command. So, something like (if [ -f "/tmp/sleep.txt" ];
then stop
else sleep.) I'm brand new to writing shell script and any help is greatly appreciated! | Under Linux, you can use the inotify kernel subsystem to efficiently wait for the appearance of a file in a directory: while read i; do if [ "$i" = sleep.txt ]; then break; fi; done \
< <(inotifywait -e create,open --format '%f' --quiet /tmp --monitor)
# script execution continues ... (assuming Bash for the <() output redirection syntax) The advantage of this approach in comparison to fixed time interval polling like in while [ ! -f /tmp/sleep.txt ]; do sleep 1; done
# script execution continues ... is that the kernel sleeps more. With an inotify event specification like create,open the script is just scheduled for execution when a file under /tmp is created or opened. With the fixed time interval polling you waste CPU cycles for each time increment. I included the open event to also register touch /tmp/sleep.txt when the file already exists. | {
"source": [
"https://unix.stackexchange.com/questions/185283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103447/"
]
} |
185,359 | The man page for mandb refers to stray cats: -s, --no-straycats
Do not spend time looking for or adding information to the
databases regarding stray cats. There is no explanation of what a stray cat is. What's up? | From the Glossary in /usr/share/doc/man-db/man-db-manual.txt (source is manual/glossary.me ): cat page A formatted manual page suitable for viewing on a vt100-type terminal. stray cat page A cat page that does not have a relative manual page on the system, i.e.
only the cat page was supplied or the manual page was removed after the
cat page had been created. | {
"source": [
"https://unix.stackexchange.com/questions/185359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73309/"
]
} |
185,365 | I've had a cronjob working for about a fortnight without any problems.
Then last night I checked I didn't get the email that I usually get.
I went to the terminal to try to send myself an email, and I got the following error: mail: cannot send message: process exited with a non-zero status I haven't changed anything with my ssmtp cfg file. It just stopped working; when I check and recheck everything (the code, ssmtp), everything is perfect. I send out my emails twice a day via cronjob. The crontab hasn't been interfered with either. I really don't know why it would stop working. The system sends out emails via gmail - I've gone into the gmail account and sent out test emails, they are sent and received without any problems. Additionally I've checked throughout Google, forums, and websites, and I don't see any mistakes. This makes sense as everything was working fine 24 hours ago, and now it's just stopped. Q: Is there any way of diagnosing and troubleshooting how to solve such a problem? | I ran into the same problem on an Ubuntu 14.04 server. I found an error message in /var/log/mail.err , which said: postfix/sendmail[27115]: fatal: open /etc/postfix/main.cf: No such file or directory Then I just reconfigured postfix and that solved the problem. sudo dpkg-reconfigure postfix | {
"source": [
"https://unix.stackexchange.com/questions/185365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
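A short diagnostic sequence along the lines of the answer above; the log locations and the recipient address are assumptions and vary by distribution:
tail -n 50 /var/log/mail.err /var/log/mail.log    # look for recent errors from the mail system
echo "test body" | mail -s "test subject" you@example.com
echo "mail exit status: $?"                       # non-zero means the hand-off to the MTA failed
sudo dpkg-reconfigure postfix                     # regenerates /etc/postfix/main.cf on Debian/Ubuntu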
185,390 | If I mount a simple loop device, losetup -a gives me the devices that are open.
Is something similar possible with cryptsetup ? | dmsetup is useful for anything device-mapper related. For example: [root@localhost]~# dmsetup ls --target crypt
luks-90dc732d-e183-4948-951e-c32f3f11b305 (253, 0)
[root@localhost]~# | {
"source": [
"https://unix.stackexchange.com/questions/185390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
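To inspect a single mapping once it has been listed, something like the following should work; the mapping name is just the example from the answer above:
sudo dmsetup ls --target crypt
sudo cryptsetup status luks-90dc732d-e183-4948-951e-c32f3f11b305    # shows cipher, key size, backing device, offset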
185,544 | If I run ls I get colored output which I find pretty handy for quickly getting a glance at the kind of file. When I try to pipe it to less , even with the -r and -R flags, the coloring always gets lost. I am using zsh version 5.0.7. Any ideas? Thanks. edit: I am on OS X. | This is by design: programs that produce colored output typically do so only when their output goes to a terminal, not when it's sent to a pipe or to a regular file. The reason is that data sent on a terminal is presumably read by a human, whereas data piped to a program or written to a file is likely to be parsed by some program, so it shouldn't contain extraneous content like color-changing escape sequences. GNU ls displays colored output on a terminal when you pass the option --color (or --color=auto ). To force colored output regardless of the file type of the standard output, pass --color=always or --color=yes (they're synonyms). This convention has been followed by other commands, like GNU grep, FreeBSD grep, git diff , etc. ls --color=yes -l | less With the FreeBSD version of ls (also found on OSX, and available as the colorls port on OpenBSD and NetBSD), pass the option -G to display colors when the output is a terminal. Set the environment variable CLICOLOR_FORCE to display colors regardless of the output file type. CLICOLOR_FORCE=1 ls -l | less | {
"source": [
"https://unix.stackexchange.com/questions/185544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98453/"
]
} |
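Putting the two halves of the answer above together (forcing ls to emit colors and telling less to render them), a sketch for both flavors of ls :
ls --color=always -l | less -R         # GNU ls, i.e. Linux or GNU coreutils on OS X
CLICOLOR_FORCE=1 ls -G -l | less -R    # BSD ls as shipped with OS X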
185,764 | I tried to obtain the size of a directory (containing directories and subdirectories) by using the ls command with the option -l . It seems to work for files ( ls -l file name ), but if I try to get the size of a directory (for instance, ls -l /home ), I get only 4096 bytes, although altogether it is much bigger. | du -sh file_path Explanation The du ( disk usage ) command estimates file_path space usage The options -sh are (from man du ): -s, --summarize
display only a total for each argument
-h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G) To check more than one directory and see the total, use du -sch : -c, --total
produce a grand total | {
"source": [
"https://unix.stackexchange.com/questions/185764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
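A few usage variations of the answer above; the paths are placeholders and --max-depth is specific to GNU du :
du -sh /home                  # one human-readable total for /home
du -sch /home /var/log        # per-argument totals plus a grand total
du -h --max-depth=1 /home     # size of each immediate subdirectory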
185,829 | I'm mounting a NFS filesystem on my machine. How do I figure out what version of the NFS protocol the server uses? I don't have access to the NFS server machine, but I do have root on my client machine. Is there anything I can run on my client machine to identify what version of the NFS protocol is being used by the server, or what versions it supports? I wasn't able to find any useful information in /var/log/messages or kernel debugging output ( dmesg ). I have tried running nfsstat , but I'm not sure if it is giving me any useful information. However, when I run nfsstat -s to request information about the server, I don't see anything useful: # nfsstat -s
Server rpc stats:
calls badcalls badfmt badauth badclnt
0 0 0 0 0 When I run nfsstat -c to request information about the client, I do see some information about Client nfs v3 , but I'm not sure how to interpret this. Does this tell me anything about the protocol being used between my client machine and the NFS server? Does it mean I am currently using v3 of the NFS protocol? Does it tell me anything about what versions of the NFS protocol the server supports, e.g., NFS v4? | The nfsstat -c program will show you the NFS version actually being used. If you run rpcinfo -p {server} you will see all the versions of all the RPC programs that the server supports. On my system I get this output: $ rpcinfo -p localhost
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
...
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
... This shows me that my NFS server ( localhost in this example) offers versions 2, 3, and 4 of the NFS protocol all over UDP and TCP. | {
"source": [
"https://unix.stackexchange.com/questions/185829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
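On the client, the negotiated version can also be read from the mount options; a sketch, with the server name as a placeholder:
nfsstat -m                                      # per-mount options, including vers=
grep nfs /proc/mounts                           # the vers= field shows the protocol version in use
rpcinfo -p nfs-server.example.com | grep nfs    # versions the server offers, as in the answer above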
185,894 | I have a very high density virtualized environment with containers, so I'm trying to make each container really small. "Really small" means 87 MB on base Ubuntu 14.04 (Trusty Tahr) without breaking up the package manager compatibility. So I use LVM as a backing storage for my containers and recently I found very strange numbers. Here they are. Let's create a 100 MiB (yeah, power of 2) logical volume. sudo lvcreate -L100M -n test1 /dev/purgatory I'd like to check the size, so I issue sudo lvs --units k test1 purgatory -wi-a---- 102400.00k Sweet, this is really 100 MiB. Now let's make an ext4 filesystem. And of course, we remember -m 0 parameter, which prevents space waste. sudo mkfs.ext4 -m 0 /dev/purgatory/test1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
25688 inodes, 102400 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done Sweet and clean. Mind the block size - our logical volume is small, so mkfs.ext4 decided to make a 1 KiB sized block, not the usual 4 KiB. Now we will mount it. sudo mount /dev/purgatory/test1 /mnt/test1 And let's call df without parameters (we would like to see 1 KiB-blocks) /dev/mapper/purgatory-test1 95054 1550 91456 2% /mnt/test1 Wait, oh shi~ We have 95054 blocks total. But the device itself has 102400 blocks of 1 KiB. We have only 92.8% of our storage. Where are my blocks, man? Let's look at it on a real block device. I have a 16 GiB virtual disk, 16777216 blocks of 1K, but only 15396784 blocks are in df output. 91.7%, what is it? Now follows the investigation (spoiler: no results) The filesystem could begin not at the beginning of the device. This is strange, but possible. Luckily, ext4 has magic bytes, let's check their presence. sudo hexdump -C /dev/purgatory/test1 | grep "53 ef" This shows the superblock: 00000430 a9 10 e7 54 01 00 ff ff 53 ef 01 00 01 00 00 00 |...T....S.......| Hex 430 = Dec 1072, so somewhere after the first kilobyte. Looks reasonable; ext4 skips the first 1024 bytes for oddities like a VBR, etc. Is this the journal? No, it is not. The journal takes space from Available in df output. Oh, we have dumpe2fs and could check the sizes there! ... a lot of greps ... sudo dumpe2fs /dev/purgatory/test1 | grep "Free blocks" Ouch. Free blocks: 93504
Free blocks: 3510-8192
Free blocks: 8451-16384
Free blocks: 16385-24576
Free blocks: 24835-32768
Free blocks: 32769-40960
Free blocks: 41219-49152
Free blocks: 53249-57344
Free blocks: 57603-65536
Free blocks: 65537-73728
Free blocks: 73987-81920
Free blocks: 81921-90112
Free blocks: 90113-98304
Free blocks: 98305-102399 And we have another number. 93504 free blocks. The question is: what is going on? Block device: 102400k (lvs says) Filesystem size: 95054k (df says) Free blocks: 93504k (dumpe2fs says) Available size: 91456k (df says) | Try this: mkfs.ext4 -N 104 -m0 -O ^has_journal,^resize_inode /dev/purgatory/test1 I think this lets you understand "what is going on". -N 104 (set the number of inodes your filesystem should have) every inode "costs" usable space (128 bytes) -m 0 (no reserved blocks) -O ^has_journal,^resize_inode (deactivate the features has_journal and resize_inode ) resize_inode "costs" free space (most of the 1550 1K-Blocks/2% you see in your df - 12K are used for the "lost+found" folder) has_journal "costs" usable space (4096 1K-Blocks in your case) We get 102348 out of 102400, another 52 blocks unusable (if we have deleted the "lost+found" folder). Therefore we dive into dumpe2fs : Group 0: (Blocks 1-8192) [ITABLE_ZEROED]
Checksum 0x5ee2, unused inodes 65533
Primary superblock at 1, Group descriptors at 2-2
Block bitmap at 3 (+2), Inode bitmap at 19 (+18)
Inode table at 35-35 (+34)
8150 free blocks, 0 free inodes, 1 directories, 65533 unused inodes
Free blocks: 17-18, 32-34, 48-8192
Free inodes:
Group 1: (Blocks 8193-16384) [BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x56cf, unused inodes 5
Backup superblock at 8193, Group descriptors at 8194-8194
Block bitmap at 4 (+4294959107), Inode bitmap at 20 (+4294959123)
Inode table at 36-36 (+4294959139)
8190 free blocks, 6 free inodes, 0 directories, 5 unused inodes
Free blocks: 8193-16384
Free inodes: 11-16
Group 2: (Blocks 16385-24576) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x51eb, unused inodes 8
Block bitmap at 5 (+4294950916), Inode bitmap at 21 (+4294950932)
Inode table at 37-37 (+4294950948)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 16385-24576
Free inodes: 17-24
Group 3: (Blocks 24577-32768) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x3de1, unused inodes 8
Backup superblock at 24577, Group descriptors at 24578-24578
Block bitmap at 6 (+4294942725), Inode bitmap at 22 (+4294942741)
Inode table at 38-38 (+4294942757)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 24577-32768
Free inodes: 25-32
Group 4: (Blocks 32769-40960) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x79b9, unused inodes 8
Block bitmap at 7 (+4294934534), Inode bitmap at 23 (+4294934550)
Inode table at 39-39 (+4294934566)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 32769-40960
Free inodes: 33-40
Group 5: (Blocks 40961-49152) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x0059, unused inodes 8
Backup superblock at 40961, Group descriptors at 40962-40962
Block bitmap at 8 (+4294926343), Inode bitmap at 24 (+4294926359)
Inode table at 40-40 (+4294926375)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 40961-49152
Free inodes: 41-48
Group 6: (Blocks 49153-57344) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x3000, unused inodes 8
Block bitmap at 9 (+4294918152), Inode bitmap at 25 (+4294918168)
Inode table at 41-41 (+4294918184)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 49153-57344
Free inodes: 49-56
Group 7: (Blocks 57345-65536) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x5c0a, unused inodes 8
Backup superblock at 57345, Group descriptors at 57346-57346
Block bitmap at 10 (+4294909961), Inode bitmap at 26 (+4294909977)
Inode table at 42-42 (+4294909993)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 57345-65536
Free inodes: 57-64
Group 8: (Blocks 65537-73728) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0xf050, unused inodes 8
Block bitmap at 11 (+4294901770), Inode bitmap at 27 (+4294901786)
Inode table at 43-43 (+4294901802)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 65537-73728
Free inodes: 65-72
Group 9: (Blocks 73729-81920) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x50fd, unused inodes 8
Backup superblock at 73729, Group descriptors at 73730-73730
Block bitmap at 12 (+4294893579), Inode bitmap at 28 (+4294893595)
Inode table at 44-44 (+4294893611)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 73729-81920
Free inodes: 73-80
Group 10: (Blocks 81921-90112) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x60a4, unused inodes 8
Block bitmap at 13 (+4294885388), Inode bitmap at 29 (+4294885404)
Inode table at 45-45 (+4294885420)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 81921-90112
Free inodes: 81-88
Group 11: (Blocks 90113-98304) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x28de, unused inodes 8
Block bitmap at 14 (+4294877197), Inode bitmap at 30 (+4294877213)
Inode table at 46-46 (+4294877229)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 90113-98304
Free inodes: 89-96
Group 12: (Blocks 98305-102399) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x9223, unused inodes 8
Block bitmap at 15 (+4294869006), Inode bitmap at 31 (+4294869022)
Inode table at 47-47 (+4294869038)
4095 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 98305-102399
Free inodes: 97-104 and count the used blocks (for Backup superblock, Group descriptors, Block bitmap, Inode bitmap and Inode table) or we grep and count: LANG=C dumpe2fs /dev/mapper/vg_vms-test1 | grep ' at ' | grep -v ',' | wc -l which gives us the count of lines which have a single block (in our example) and LANG=C dumpe2fs /dev/mapper/vg_vms-test1 | grep ' at ' | grep ',' | wc -l which gives us the count of lines which have two blocks (in our example). So we have (in our example) 13 lines with one block each and 19 lines with two blocks each. 13+19*2 gives us 51 blocks which are in use by ext4 itself. Finally there is only one block left: block 0, which is the skipped 1024 bytes at the beginning for things like the boot sector. | {
"source": [
"https://unix.stackexchange.com/questions/185894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103868/"
]
} |
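The hand counting at the end of the answer above can be wrapped in a small script; this is only a sketch, and the device path is the one from the question:
#!/bin/sh
DEV=/dev/purgatory/test1
single=$(LANG=C dumpe2fs "$DEV" 2>/dev/null | grep ' at ' | grep -v ',' | wc -l)
double=$(LANG=C dumpe2fs "$DEV" 2>/dev/null | grep ' at ' | grep ',' | wc -l)
echo "filesystem metadata blocks: $((single + 2 * double))"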
185,897 | My question is: How can I find files that have been modified from a given date (specifying the exact hour) to another given date (specifying the exact hour) in a given directory, from / for example? I think that can be achieved with find . Does anyone know how to do this? Thank you in advance | Go for this one: find . -newermt "2013-01-01 00:00:00" ! -newermt "2013-01-02 00:00:00" In -newerXY , the m means the modification time of the file is compared, and the t means the reference is interpreted directly as a time. | {
"source": [
"https://unix.stackexchange.com/questions/185897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88768/"
]
} |
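A slightly fuller sketch of the same idea, restricted to regular files; the path and dates are placeholders:
find /var/log -type f -newermt "2015-02-17 08:00:00" ! -newermt "2015-02-18 17:30:00"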