264,632
I found two commands to output information about my CPU: cat /proc/cpuinfo and lscpu . /proc/cpuinfo shows that my CPU speed is 2.1 GHz, whereas lscpu says it is 3167 MHz. Which one is correct? This is my exact output from cat /proc/cpuinfo about my processor speed:
model name : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
And this is from lscpu :
CPU MHz: 3225.234
(For some reason, lscpu outputs differently every time, varying between 3100 and 3300 MHz)
To see the current speed of each core I do this: watch -n.1 "grep \"^[c]pu MHz\" /proc/cpuinfo" Notes: This does not work on server CPUs such as the Intel Xeon series. On such machines it will show the base frequency only. To show the turbo frequency, you'll need cpupower or turbostat. See @Maxim Egorushkin's answer. If your watch command does not work with intervals smaller than one second, modify the interval like so: watch -n1 "grep \"^[c]pu MHz\" /proc/cpuinfo" This displays the cpu speed of each core in real time. By running the following command, one or more times, from another terminal one can see the speed change with the above watch command, assuming SpeedStep is enabled ( Cool'n'Quiet for AMD ). echo "scale=10000; 4*a(1)" | bc -l & (This command uses bc to calculate pi to 10000 places.)
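For the server-CPU case mentioned above, the referenced tools can be used like this (a sketch; package names and availability vary by distribution):
sudo turbostat
sudo cpupower frequency-info
turbostat reports the actual per-core (turbo) frequencies, while cpupower frequency-info shows the governor limits and the current frequency as reported by the hardware.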
{ "source": [ "https://unix.stackexchange.com/questions/264632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
264,635
In bash, say you have var=a.b.c , then:
$ IFS=. printf "%s\n" $var
a.b.c
However, such a usage of IFS does take effect while creating an array:
$ IFS=. arr=($var)
$ printf "%s\n" "${arr[@]}"
a
b
c
This is very convenient, sure, but where is this documented? A quick reading of the sections on Arrays or Word Splitting in the Bash documentation does not give any indication either way. A search for IFS through the single-page documentation doesn't provide any hints about this effect either. I'm not sure when I can reliably do:
IFS=x do something
And expect that IFS will affect field splitting.
The basic idea is that VAR=VALUE some-command sets VAR to VALUE for the execution of some-command when some-command is an external command, and it doesn't get more fancy than that. If you combine this intuition with some knowledge of how a shell works, you should come up with the right answer in most cases. The POSIX reference is “Simple Commands” in the chapter “Shell Command Language” .
If some-command is an external command , VAR=VALUE some-command is equivalent to env VAR=VALUE some-command . VAR is exported in the environment of some-command , and its value (or lack of a value) in the shell doesn't change.
If some-command is a function , then VAR=VALUE some-command is equivalent to VAR=VALUE; some-command , i.e. the assignment remains in place after the function has returned, and the variable is not exported into the environment. The reason for that has to do with the design of the Bourne shell (and subsequently with backward compatibility): it had no facility to save and restore variable values around the execution of a function. Not exporting the variable makes sense since a function executes in the shell itself. However, ksh (including both ATT ksh93 and pdksh/mksh), bash and zsh implement the more useful behavior where VAR is set only during the execution of the function (it's also exported). In ksh , this is done if the function is defined with the ksh syntax function NAME … , not if it's defined with the standard syntax NAME () . In bash , this is done only in bash mode, not in POSIX mode (when run with POSIXLY_CORRECT=1 ). In zsh , this is done if the posix_builtins option is not set; this option is not set by default but is turned on by emulate sh or emulate ksh .
If some-command is a builtin, the behavior depends on the type of builtin. Special builtins behave like functions. Special built-ins are the ones that have to be implemented inside the shell because they affect the shell's state (e.g. break affects control flow, cd affects the current directory, set affects positional parameters and options…). Other builtins are built-in only for performance and convenience (mostly — e.g. the bash feature printf -v can only be implemented by a builtin), and they behave like an external command.
The assignment takes place after alias expansion, so if some-command is an alias , expand it first to find what happens. Note that in all cases, the assignment is performed after the command line is parsed, including any variable substitution on the command line itself. So var=a; var=b echo $var prints a , because $var is evaluated before the assignment takes place. And thus IFS=. printf "%s\n" $var uses the old IFS value to split $var .
I've covered all the types of commands, but there's one more case: when there is no command to execute , i.e. if the command consists only of assignments (and possibly redirections). In that case, the assignment remains in place . VAR=VALUE OTHERVAR=OTHERVALUE is equivalent to VAR=VALUE; OTHERVAR=OTHERVALUE . So after IFS=. arr=($var) , IFS remains set to . . Since you could use $IFS in the assignment to arr with the expectation that it already has its new value, it makes sense that the new value of IFS is used for the expansion of $var .
In summary, you can use IFS for temporary field splitting only: by starting a new shell or a subshell (e.g. third=$(IFS=.; set -f; set -- $var; echo "$3") is a complicated way of doing third=${var#*.*.} except that they behave differently when the value of var contains fewer than two . characters); in ksh, with IFS=.
some-function where some-function is defined with the ksh syntax function some-function … ; in bash and zsh, with IFS=. some-function as long as they are operating in native mode as opposed to compatibility mode.
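A quick way to see the function behaviour described above (a sketch, run in bash in its native mode):
$ f() { echo "inside: $VAR"; }
$ VAR=outer
$ VAR=inner f
inside: inner
$ echo "$VAR"
outer
In a strictly POSIX sh the assignment would instead remain in place after f returns, as explained above.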
{ "source": [ "https://unix.stackexchange.com/questions/264635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
264,791
Putting stty werase '^H' (on Debian 8.3) or stty werase '^?' (on Arch Linux, 2/2016) in .bashrc (for example) makes Ctrl - Backspace delete the last word in the terminal. Still, it's not the same behavior as in modern GUI applications (e.g. Firefox): it deletes the last whitespace-separated word, and not the last word separated by whitespace or characters like . : , ; " ' & / ( ) . Is it possible to make Ctrl - Backspace behave in the terminal similarly to modern GUI applications? Also, is there any way to make Ctrl - Delete delete the word immediately after the cursor?
There are two line editors at play here: the basic line editor provided by the kernel (canonical mode tty line editor), and bash's line editor (implemented via the readline library). Both of these have an erase-to-previous-word command which is bound to Ctrl + W by default. The key can be configured for the canonical mode tty line editor through stty werase ; bash imitates the key binding that it finds in the tty setting unless overridden in its own configuration. The werase action in the tty line editor cannot be configured. It always erases (ASCII) whitespace-delimited words. It's rare to interact with the tty line editor — it's what you get e.g. when you type cat with no argument. If you want fancy key bindings there, you can run the command under a tool like rlwrap which uses readline. Bash provides two commands to delete the previous word: unix-word-rubout ( Ctrl + w or as set through stty werase ), and backward-kill-word ( M-DEL , i.e. Esc Backspace ) which treats a word as a sequence of alphanumeric characters in the current locale and _ . If you want Ctrl + Backspace to erase the previous sequence of alphanumeric characters, don't set stty werase , and instead put the following line in your .inputrc :
"\C-h": backward-kill-word
Note that this assumes that your terminal sends the Ctrl+H character for Ctrl + Backspace . Unfortunately it's one of those keys with no standard binding (and Backspace in particular is a mess for historical reasons). There's also a symmetric command kill-word which is bound to M-d ( Alt + D ) by default. To bind it to Ctrl + Delete , you first need to figure out what escape sequence your terminal sends, then add a corresponding line in your .inputrc . Type Ctrl + V then Ctrl + Delete ; this will insert something like ^[[3;5~ where the initial ^[ is a visual representation of the escape character. Then the binding is
"\e[3;5~": kill-word
If you aren't happy with either definition of a word, you can provide your own in bash: see confusing behavior of emacs-style keybindings in bash
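To try the bindings above without opening a new shell, you can load the file into the running bash (a sketch):
bind -f ~/.inputrc
or bind a single key directly, e.g. bind '"\C-h": backward-kill-word' .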
{ "source": [ "https://unix.stackexchange.com/questions/264791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
264,962
How can I print all the lines between two lines starting with one pattern for the first line and ending with another pattern for the last line? Update I guess it was a mistake to mention that this document is HTML. I seem to have touched a nerve, so forget that. I'm not trying to parse HTML or do anything with it other than print a section of a text document. Consider this example: aaa bbb pattern1 aaa pattern2 bbb ccc pattern2 ddd eee pattern1 fff ggg Now, I want to print everything between the first instance of pattern1 starting at the beginning of a line and pattern2 starting at the beginning of another line. I want to include the pattern1 and pattern2 lines in my output, but I don't want anything after the pattern2 line. pattern2 is found in one of the lines of the section. I don't want to stop there, but that's easily remedied by indicating the start of the line with ^ . pattern1 appears on another line after pattern2 , but I don't want to look at that at all. I'm just looking for everything between the first instance of pattern1 and the first instance of pattern2 , inclusive. I found something that almost gets me there using sed : sed -n '/^pattern1/,/^pattern2/p' inputfile.txt ... but that starts printing again at the next instance of pattern1 I can think of a method using grep -n ... | cut -f1 -d: twice to get the two line numbers then tail and head to get the section I want, but I'm hoping for a cleaner way. Maybe awk is a better tool for this task? When I get this working, I hope to tie this into a git hook. I don't know how to do that yet, either, but I'm still reading and searching :) Thank you.
You can make sed quit at a pattern with sed '/pattern/q' , so you just need your matches and then quit at the second pattern match: sed -n '/^pattern1/,/^pattern2/{p;/^pattern2/q}' That way only the first block will be shown. The use of a subcommand ensures that ^pattern2 can cause sed to quit only after a match for ^pattern1 . The two ^pattern2 matches can be combined: sed -n '/^pattern1/,${p;/^pattern2/q}'
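For example, with a small input built to mimic the question (the pattern1 / pattern2 placeholders are from the question):
$ printf '%s\n' aaa 'pattern1 start' 'inline pattern2 here' 'pattern2 end' 'pattern1 again' zzz > inputfile.txt
$ sed -n '/^pattern1/,/^pattern2/{p;/^pattern2/q}' inputfile.txt
pattern1 start
inline pattern2 here
pattern2 end
The inline occurrence of pattern2 does not terminate the output; only the line starting with it does, and the later pattern1 block is never printed.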
{ "source": [ "https://unix.stackexchange.com/questions/264962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53544/" ] }
264,980
mknod /tmp/oracle.pipe p
sqlplus / as sysdba << _EOF
set escape on
host nohup gzip -c < /tmp/oracle.pipe > /tmp/out1.gz \&
spool /tmp/oracle.pipe
select * from employee;
spool off
_EOF
rm /tmp/oracle.pipe
I need to insert a trailer at the end of the zipped file out1.gz . I can count the lines using count=$(zcat out1.gz | wc -l) . How do I insert the trailer T5 (assuming count=5) at the end of out1.gz without unzipping it?
From man gzip you can read that gzipped files can simply be concatenated: ADVANCED USAGE Multiple compressed files can be concatenated. In this case, gunzip will extract all members at once. For example:
gzip -c file1 > foo.gz
gzip -c file2 >> foo.gz
Then gunzip -c foo is equivalent to cat file1 file2 This could also be done using cat on the gzipped files, e.g.:
seq 1 4 > A && gzip A
echo 5 > B && gzip B
# now 1 to 4 is in A.gz and 5 in B.gz, we want 1 to 5 in C.gz:
cat A.gz B.gz > C.gz && zcat C.gz
1
2
3
4
5
# or for appending B.gz to A.gz:
cat B.gz >> A.gz
To do it without an external file for the line to be appended, do as follows:
echo "this is the new line" | gzip - >> original_file.gz
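Tying this back to the question's files: count the lines, then append the trailer as a new gzip member ( T<count> is the trailer format from the question):
count=$(zcat out1.gz | wc -l)
printf 'T%s\n' "$count" | gzip >> out1.gz
zcat out1.gz | tail -n 1   # prints T5 for a 5-line file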
{ "source": [ "https://unix.stackexchange.com/questions/264980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157667/" ] }
264,984
I have a file named test which has two columns, one having an ID and the other having a status. I want to loop through the file and print the IDs where the status has one particular value (e.g. 'ACTIVE'). I tried
cat test | while read line; do templine= $($line | cut -d ' ' -f 2);echo $templine; if [ $templine = 'ACCEPTED' ]; then echo "$templine"; fi done
and some variations of the above, which obviously did not work. Any help would be appreciated.
awk handles this directly, assuming the ID is in the first column and the status in the second:
awk '$2 == "ACTIVE" { print $1 }' test
(replace ACTIVE with ACCEPTED or whatever status you need). If you do want a shell loop, let read split the fields instead of calling cut :
while read -r id status; do
    if [ "$status" = "ACTIVE" ]; then
        printf '%s\n' "$id"
    fi
done < test
The main problems in your attempt: the space after templine= means the assignment is empty and the shell then tries to run the expansion of $line as a command, and $($line | cut -d ' ' -f 2) executes the line's contents instead of printing them to cut . You would need something like templine=$(printf '%s\n' "$line" | cut -d ' ' -f 2) , but the read -based splitting above avoids the extra processes entirely.
{ "source": [ "https://unix.stackexchange.com/questions/264984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157670/" ] }
265,523
Given a set of files (several GB each) where each changes slightly every day (at random places, not only with information appended at the end), how can they be copied efficiently? I mean, in the sense that only changed parts are updated, and not the whole files. That would mean the difference between copying a few KB here and there or some GBs.
The rsync program does exactly that. From the man page: It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
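For example (a sketch; note that rsync uses its delta-transfer algorithm by default only when copying over a network, and copies whole files between two local paths unless told otherwise):
rsync -av /data/bigfiles/ backup@server:/srv/backup/
rsync -av --no-whole-file --inplace /data/bigfiles/ /mnt/backup/
The second form forces delta transfer for a local destination and updates the files in place.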
{ "source": [ "https://unix.stackexchange.com/questions/265523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46600/" ] }
265,620
Why do open() and close() exist in the Unix filesystem design? Couldn't the OS just detect the first time read() or write() was called and do whatever open() would normally do?
Dennis Ritchie mentions in «The Evolution of the Unix Time-sharing System» that open and close along with read , write and creat were present in the system right from the start. I guess a system without open and close wouldn't be inconceivable, however I believe it would complicate the design. You generally want to make multiple read and write calls, not just one, and that was probably especially true on those old computers with very limited RAM that UNIX originated on. Having a handle that maintains your current file position simplifies this. If read or write were to return the handle, they'd have to return a pair -- a handle and their own return status. The handle part of the pair would be useless for all other calls, which would make that arrangement awkward. Leaving the state of the cursor to the kernel allows it to improve efficiency not only by buffering. There's also some cost associated with path lookup -- having a handle allows you to pay it only once. Furthermore, some files in the UNIX worldview don't even have a filesystem path (or didn't -- now they do with things like /proc/self/fd ).
{ "source": [ "https://unix.stackexchange.com/questions/265620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158154/" ] }
265,740
Maybe it is a trivial question, but I didn't find anything useful about this in the man page. I am using Ubuntu and bash . The normal output for sha512sum testfile is
<hash_code> testfile
How do I suppress the filename output? I would like to obtain just
<hash_code>
There isn't a way to suppress that, but since the SHA is always a single word without spaces you can do: sha512sum testfile | cut -d " " -f 1 or e.g. < testfile sha512sum | sed 's/ -//'
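For example, to capture just the hash in a variable:
hash=$(sha512sum testfile | cut -d " " -f 1)
echo "$hash"
awk '{ print $1 }' works equally well in place of cut if you prefer it.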
{ "source": [ "https://unix.stackexchange.com/questions/265740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
265,755
I noticed that the folder referenced in the subject line is taking up 1.5 GB. Can I run the below to clear it without causing permanent damage to my system? rm -rf /var/cache/PackageKit/metadata/updates/packages/*
From the discussion in the bug linked in Daniel Bruno's answer, you can get rid of these files using the PackageKit console client pkcon :
$ sudo pkcon refresh force -c -1
It takes some time but is provided by PackageKit itself (and you may set up a cron job for it). From the man page of pkcon(1) : refresh [force] Refresh the cached information about available updates. and -c, --cache-age AGE Set the maximum acceptable age for cached metadata, in seconds. Use -1 for 'never'. So this tells PackageKit to delete cached information (refresh the cached information with a maximum acceptable age of: never). References : https://bugs.freedesktop.org/show_bug.cgi?id=80053#c6 https://bugzilla.redhat.com/show_bug.cgi?id=1306992#c10
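If you want to automate this, a minimal weekly cron job could look like the following (an untested sketch; the path /etc/cron.weekly/pkcon-refresh is an arbitrary choice):
#!/bin/sh
pkcon refresh force -c -1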
{ "source": [ "https://unix.stackexchange.com/questions/265755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158252/" ] }
265,845
XFA forms are features of a PDF file involving options to complete fields in certain documents - in many cases official documents. These options may open a calendar, for example, in order to select day, month and year, etc. Usually these forms ensure that a certain official format is used. I have seen that Okular displays a warning that XFA forms are not supported: More here . After selecting 'Show forms' in Okular, those fields can be edited and changes can be saved, but compared to what I see in Windows with Adobe Reader, only some of them are really accessible in this way: the calendar options are absent, and the separate fields of day/month/year are not present, which may raise questions on the correctness of the result. Adobe Reader 9 can still be installed in Ubuntu 14.04 but this seems like a very limited option. Is there a native PDF reader that can fully use XFA forms? (If not, is Wine a solution?) The solution for Ubuntu 14.04 works in 16.04 too. The file I tested was here (official French government website).
Master PDF Editor for Linux has a free and a commercial version, and even the free version has many advanced features, among which "Dynamic XFA form support" . Playonlinux has an option to install Adobe Acrobat Reader DC . But oddly, only letting PoL download and install the program works; selecting the latest version ( AcroRdrDC1700920044_en_US ) of the exe file previously downloaded locally makes the installation fail with an error. I have noticed this on several occasions, and also that PoL installs a different, older version: 2015.010.20056 . On Ubuntu 16.04 systems the method of installing Adobe Reader 9 for 14.04 ( link ) still works. As suggested in Chris' answer , newer versions of Evince/GNOME Document Viewer can handle XFA files better, and are good enough for the file in question (tested with version 3.24.0).
{ "source": [ "https://unix.stackexchange.com/questions/265845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
265,890
Commonly on ARM systems, device trees supply hardware information to the kernel (Linux). These device trees exist as dts (device tree source) files that are compiled and loaded into the kernel. The problem is that I do not have access to such a dts file, not even to a dtb file. I do have access to /sys and /proc on the machine, and I wanted to ask whether that would allow me to "guess the correct values" to be used in a dts. A potential answer could additionally address whether this depends on the device tree interface having been used in the first place (i.e. a dtb was created and provided to the kernel), as opposed to some more hackish "we simply diverge from vanilla and patch the kernel so as to solve the device information problem for our kernel only" solution.
/proc/device-tree or /sys/firmware/devicetree/base /proc/device-tree is a symlink to /sys/firmware/devicetree/base and the kernel documentation says userland should stick to /proc/device-tree : Userspace must not use the /sys/firmware/devicetree/base path directly, but instead should follow /proc/device-tree symlink. It is possible that the absolute path will change in the future, but the symlink is the stable ABI. You can then access dts properties from files: hexdump /sys/firmware/devicetree/base/apb-pclk/clock-frequency The output format for integers is binary, so hexdump is needed. dtc -I fs Get a full device tree from the filesystem:
sudo apt-get install device-tree-compiler
dtc -I fs -O dts /sys/firmware/devicetree/base
outputs the dts to stdout. See also: How to list the kernel Device Tree | Unix & Linux Stack Exchange dtc in Buildroot Buildroot has a BR2_PACKAGE_DTC=y config to put dtc inside the root filesystem. QEMU -machine dumpdtb If you are running Linux inside QEMU, QEMU automatically generates the DTBs if you don't give one explicitly with -dtb , and so it is also able to dump it directly with: qemu-system-aarch64 -machine virt -cpu cortex-a57 -machine dumpdtb=dtb.dtb as mentioned at: https://lists.gnu.org/archive/html/qemu-discuss/2017-02/msg00051.html Tested with this QEMU + Buildroot setup on the Linux kernel v4.19 arm64. Thanks to Harry Tsai for pointing out the kernel documentation that says that /proc/device-tree is preferred for userland .
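The dts output can also be written straight to a file instead of stdout (a sketch; the output file name is arbitrary):
dtc -I fs -O dts -o extracted.dts /proc/device-tree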
{ "source": [ "https://unix.stackexchange.com/questions/265890", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
266,179
I wrote a small C library for Linux and FreeBSD, and I'm going to write the documentation for it. I tried to learn more about creating man pages but found no instructions or descriptions of best practices for making man pages for libraries. In particular, I'm interested in which section to put the functions' man pages in: 3? Are there good examples or manuals? Is creating a man page for each function from the library a bad idea?
Manual pages for a library would go in section 3. For good examples of manual pages, bear in mind that some are written using specific details of groff and/or use specific macros which are not really portable. There are always some pitfalls in portability of man-pages, since some systems may (or may not) use special features. For instance, in documenting dialog , I have had to keep in mind (and work around) differences in various systems for displaying examples (which are not justified). Start by reading the relevant sections of man man where it mentions the standard macros, and compare those descriptions for FreeBSD and Linux. Whether you choose to write one manual page for the library, or separate manual pages for the functions (or groups of functions) depends on how complicated the descriptions of the functions would be: ncurses has a few hundred functions across several dozen manual pages. dialog has several dozen functions in one manual page. Others will be sure to show many more examples. Further reading: man -- display online manual documentation pages (FreeBSD) man-pages - conventions for writing Linux man pages groff_mdoc -- reference for groff's mdoc implementation HowTo: Create a manpage from scratch. (FreeBSD) What Is A "Bikeshed"?
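As a starting point, here is a minimal, entirely hypothetical section 3 skeleton (the function name mylib_init and all of its text are made up) that you can preview locally before installing anything; man renders a page directly when the argument contains a slash:
cat > mylib_init.3 << 'EOF'
.TH MYLIB_INIT 3 2016-03-11 "mylib 1.0" "Library Functions Manual"
.SH NAME
mylib_init \- initialize the mylib library
.SH SYNOPSIS
.B #include <mylib.h>
.br
.B int mylib_init(void);
.SH DESCRIPTION
Describe the behaviour, parameters and error handling here.
.SH RETURN VALUE
Returns 0 on success.
EOF
man ./mylib_init.3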
{ "source": [ "https://unix.stackexchange.com/questions/266179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91357/" ] }
266,545
After a recent break-in on a machine running Linux, I found an executable file in the home folder of a user with a weak password. I have cleaned up what appears to be all the damage, but am preparing a full wipe to be sure. What can malware run by a NON-sudo or unprivileged user do? Is it just looking for files marked with world-writable permissions to infect? What threatening things can a non-admin user do on most Linux systems? Can you provide some examples of real-world problems this kind of security breach can cause?
Most normal users can send mail, execute system utilities, and create network sockets listening on higher ports. This means an attacker could send spam or phishing mails, exploit any system misconfiguration only visible from within the system (think private key files with permissive read permissions), or set up a service to distribute arbitrary content (e.g. a porn torrent). What exactly this means depends on your setup. E.g. the attacker could send mail looking like it came from your company and abuse your server's mail reputation; even more so if mail authentication features like DKIM have been set up. This works until your server's reputation is tainted and other mail servers start to blacklist the IP/domain. Either way, restoring from backup is the right choice.
{ "source": [ "https://unix.stackexchange.com/questions/266545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94805/" ] }
266,728
On an Ubuntu 14.04 server I am experiencing massive hard disk activity which has no apparent justification: it comes in bursts, lasts a few minutes and then disappears. It consumes system resources and slows down the whole system. Is there a (command-line) tool which can be used to monitor the disk activity, listing the processes that are using the disk and the files involved? Something like htop for the CPU.
For checking I/O usage I usually use iotop . It's not installed by default on the distro, but you can easily get it with:
sudo apt-get install iotop
Then launch it with root privileges:
sudo iotop --only
The --only option shows only the processes currently performing I/O.
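A few more iotop flags worth knowing (from man iotop ):
sudo iotop -o -P -a
-P shows processes instead of individual threads, and -a shows I/O accumulated since iotop started rather than current bandwidth.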
{ "source": [ "https://unix.stackexchange.com/questions/266728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
266,888
I want to install an older version of package <x> , but when I use dnf it only shows the current version of package <x> . Is there any way to install older versions using dnf ?
You can install using a specific name-version as described in the man page: dnf install tito-0.5.6-1.fc22 Install package with specific version. If the package is already installed it will automatically try to downgrade or upgrade to specific version. To view all versions of a package in your enabled repositories, use: dnf --showduplicates list <package>
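If the package is already installed and you just want to step back to the previous available version, dnf also has a dedicated subcommand:
sudo dnf downgrade <package>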
{ "source": [ "https://unix.stackexchange.com/questions/266888", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148747/" ] }
266,921
I'm printing a message in a Bash script, and I want to colourise a portion of it; for example,
#!/bin/bash
normal='\e[0m'
yellow='\e[33m'
cat <<- EOF
${yellow}Warning:${normal} This script repo is currently located in:
[ more messages... ]
EOF
But when I run in the terminal ( tmux inside gnome-terminal ) the ANSI escape characters are just printed in \ form; for example, \e[33mWarning\e[0m This scr.... If I move the portion I want to colourise into a printf command outside the here-doc, it works. For example, this works:
printf "${yellow}Warning:${normal}"
cat <<- EOF
This script repo is currently located in:
[ more messages... ]
EOF
From man bash – Here Documents: No parameter and variable expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word . If any characters in word are quoted, the delimiter is the result of quote removal on word , and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \ , $ , and ` . I can't work out how this would affect ANSI escape codes. Is it possible to use ANSI escape codes in a Bash here document that is cat ted out?
In your script, these assignments
normal='\e[0m'
yellow='\e[33m'
put those characters literally into the variables, i.e., \ e [ 0 m , rather than the escape sequence. You can construct an escape character using printf (or some versions of echo ), e.g.,
normal=$(printf '\033[0m')
yellow=$(printf '\033[33m')
but you would do much better to use tput , as this will work for any correctly set up terminal:
normal=$(tput sgr0)
yellow=$(tput setaf 3)
Looking at your example, it seems that the version of printf you are using treats \e as the escape character (which may work on your system, but is not generally portable to other systems). To see this, try
yellow='\e[33m'
printf 'Yellow:%s\n' $yellow
and you would see the literal characters:
Yellow:\e[33m
rather than the escape sequence. Putting those in the printf format tells printf to interpret them (if it can). Further reading: tput, reset - initialize a terminal or query terminfo database printf - write formatted output (POSIX)
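Putting this together, the script from the question could look like the following sketch, using tput as recommended (an unindented here-document sidesteps the tab requirement of <<- ):
#!/bin/bash
normal=$(tput sgr0)
yellow=$(tput setaf 3)
cat << EOF
${yellow}Warning:${normal} This script repo is currently located in:
[ more messages... ]
EOF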
{ "source": [ "https://unix.stackexchange.com/questions/266921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
267,361
Lots of programming-oriented editors will colorize source code. Is there a command that will colorize source code for viewing in the terminal? I could open a file with emacs -nw (which opens in the terminal instead of popping up a new window), but I'm looking for something that works like less (or that works with less -R , which passes through color escape sequences in its input).
With highlight on a terminal that supports the same colour escape sequences as xterm : highlight -O xterm256 your-file | less -R With ruby-rouge : rougify your-file | less -R With python-pygments : pygmentize your-file | less -R With GNU source-highlight : source-highlight -f esc256 -i your-file | less -R You can also use vim as a pager with the help of macros/less.sh script shipped with vim (see :h less within vim for details): On my system: sh /usr/share/vim/vim74/macros/less.sh your-file Or you could use any of the syntax highlighters that support HTML output and use elinks or w3m as the pager (or elinks -dump -dump-color-mode 3 | less -R ) like with GNU source-highlight : source-highlight -o STDOUT -i your-file | elinks -dump -dump-color-mode 3 | less -R
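If you would rather have plain less colorize automatically, the LESSOPEN preprocessor hook can run one of the highlighters above (a sketch assuming python-pygments is installed; pygmentize guesses the lexer from the file name):
export LESSOPEN='| pygmentize %s'
less -R your-file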
{ "source": [ "https://unix.stackexchange.com/questions/267361", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23344/" ] }
267,367
Currently, my whole system is located at the end of my hdd. I'd like to move that data to the beginning and still have booting and other details working. dd seems to do exactly what I want (to copy my data exactly how it is placed), but I'm not sure about things like booting, grub configs and so on. Will I need to set these things later, or will dd do this job for me?
dd will copy the bytes exactly, but it knows nothing about partition tables, filesystems or boot loaders, so it will not do that job for you. If you copy a partition's contents to a different place on the disk, the partition table still points at the old location, and the boot loader still looks for its files there; you would have to fix both yourself afterwards. A safer route is to boot a live system and move/resize the partitions with GParted (or parted ), then reinstall and reconfigure the boot loader, e.g.:
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
Also check that /etc/fstab and the GRUB configuration refer to filesystems by UUID ( blkid lists them), since UUIDs survive a move while positions on the disk do not. Whatever you do, back up first: a mistake while shuffling partitions is an easy way to lose everything.
{ "source": [ "https://unix.stackexchange.com/questions/267367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119794/" ] }
267,437
How can I make the second echo to echo out test in this example as well: echo test | xargs -I {} echo {} && echo {}
Just write {} two times in your command. The following would work:
$ echo test | xargs -I {} echo {} {}
test test
Your problem is how the commands are nested . Let's look at this: echo test | xargs -I {} echo {} && echo {} bash will execute echo test | xargs -I {} echo {} . If it runs successfully, echo {} is executed. To change the nesting, you could do something like this: echo test | xargs -I {} sh -c "echo {} && echo {}" However, you could run into trouble because the approach might be prone to code injection. When "test" is substituted with shell code, it gets executed. Therefore, you should probably pass the input to the nested shell with arguments: echo test | xargs -I {} sh -c 'echo "$1" && echo "$1"' sh {}
{ "source": [ "https://unix.stackexchange.com/questions/267437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
267,506
I recently noticed the following in my cygwin profile, more precisely: /usr/local/bin:/usr/bin${PATH:+:${PATH}} What does it mean? Why is not just $PATH? Is this an 'if $PATH exists then add :$PATH'? My purpose is to swap the order and put the cygwin paths behind the windows path. In the past I would have $PATH:/usr/local/bin:/usr/bin but this confuses me. Maybe I should be doing PATH="${PATH:+${PATH}:}/usr/local/bin:/usr/bin" to append the : at the end of the $PATH?
The :+ is a form of parameter expansion : ${parameter:+[word]} : Use Alternative Value. If parameter is unset or null, null shall be substituted; otherwise, the expansion of word (or an empty string if word is omitted) shall be substituted. In other words, if the variable $var is defined, echo ${var:+foo} will print foo and, if it is not, it will print the empty string. The second : is nothing special. It is the character used as a separator in the list of directories in $PATH . So, PATH="/usr/local/bin:/usr/bin${PATH:+:${PATH}}" is a shorthand way of writing:
if [ -z "$PATH" ]; then
    PATH=/usr/local/bin:/usr/bin
else
    PATH=/usr/local/bin:/usr/bin:$PATH
fi
It's just a clever trick to avoid adding an extra : when $PATH is not set. For example:
$ PATH="/usr/bin"
$ PATH="/new/dir:$PATH"   ## Add a directory
$ echo "$PATH"
/new/dir:/usr/bin
But if PATH is unset:
$ unset PATH
$ PATH="/new/dir:$PATH"
$ echo "$PATH"
/new/dir:
A : by itself adds the current directory to the $PATH . Using PATH="/new/dir${PATH:+:$PATH}" avoids this. So sure, you can use PATH="${PATH:+${PATH}:}/usr/local/bin:/usr/bin" if you want to, or you can use PATH="$PATH:/usr/local/bin:/usr/bin" if you prefer. The only difference is that the former might add an extra : , thereby adding your current directory to your $PATH .
{ "source": [ "https://unix.stackexchange.com/questions/267506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110907/" ] }
267,704
Is there a command that exists that can simulate keypresses? I want to pipe some data to it to make it type into a GUI program for me.
Yes, it is xdotool . To simulate a key press, use: xdotool key <key> For example, to simulate pressing F2 : xdotool key F2 To simulate pressing ctrl + c : xdotool key ctrl+c To simulate pressing ctrl + c and then a Backspace : xdotool key ctrl+c BackSpace Check man xdotool for more details. You might need to install the xdotool package first to use the xdotool command.
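Since you mention piping data into a GUI program: xdotool can also type whole strings, not just single key presses (a sketch; check man xdotool for the exact options your version supports):
xdotool type --delay 100 'some text to type'
some-command | xdotool type --file -
The --file - form reads the text to type from stdin.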
{ "source": [ "https://unix.stackexchange.com/questions/267704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146386/" ] }
267,885
I have a Dell XPS 13 9343 (2015) with a resolution of 3200x1800 pixels. I am trying to use the i3 window manager on it but everything is tiny and hardly readable. I managed to scale every application (firefox, terminal, etc...) using .Xresources :
! Fonts {{{
Xft.antialias: true
Xft.hinting: true
Xft.rgba: rgb
Xft.hintstyle: hintfull
Xft.dpi: 220
! }}}
but the i3 interface still does not scale... I have understood that xrandr --dpi 220 may solve the problem, but I don't know how/where to use it. Can somebody enlighten me on this issue?
Since version 4.13 i3 reads DPI information from Xft.dpi ( source ). So, to set i3 to work with high DPI screens you'll probably need to modify two files. Add this line to ~/.Xresources with your preferred value:
Xft.dpi: 120
Make sure the settings are loaded properly when X starts in your ~/.xinitrc ( source ):
xrdb -merge ~/.Xresources
exec i3
Note that it will affect other applications (e.g. your terminal) that read DPI settings from X resources.
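If you still want to try the xrandr route from the question, ~/.xinitrc is the natural place for it, before the window manager starts (a sketch):
xrandr --dpi 220
xrdb -merge ~/.Xresources
exec i3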
{ "source": [ "https://unix.stackexchange.com/questions/267885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115216/" ] }
267,965
I have a new CentOS 7 installation, and noticed that my /var/log/messages file is full of messages like this:
Mar 6 08:40:01 myhostname systemd: Started Session 2043 of user root.
Mar 6 08:40:01 myhostname systemd: Starting Session 2043 of user root.
Mar 6 08:40:01 myhostname systemd: Created slice user-1001.slice.
Mar 6 08:40:01 myhostname systemd: Starting user-1001.slice.
Mar 6 08:40:01 myhostname systemd: Started Session 2042 of user userx.
Mar 6 08:40:01 myhostname systemd: Starting Session 2042 of user userx.
Mar 6 08:40:01 myhostname systemd: Started Session 2041 of user root.
Mar 6 08:40:01 myhostname systemd: Starting Session 2041 of user root.
Mar 6 08:40:31 myhostname systemd: Removed slice user-1001.slice.
Mar 6 08:40:31 myhostname systemd: Stopping user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Created slice user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Starting user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Started Session 2044 of user userx.
Mar 6 08:41:01 myhostname systemd: Starting Session 2044 of user userx.
Mar 6 08:41:21 myhostname systemd: Removed slice user-1001.slice.
Mar 6 08:41:21 myhostname systemd: Stopping user-1001.slice.
What do all of these mean, and why are they there? If this is normal background noise then it seems like an enormous waste of resources to log this...
(this question is also answered over on superuser here ) Those are messages pertaining to the creation and deletion of slices, which are used in systemd to group processes and manage their resources. Why they are logged by default escapes me, but I've seen two ways to disable them: The less intrusive way is to filter them out by creating /etc/rsyslog.d/ignore-systemd-session-slice.conf with the following contents:
if $programname == "systemd" and ($msg contains "Starting Session" or $msg contains "Started Session" or $msg contains "Created slice" or $msg contains "Starting user-" or $msg contains "Removed slice" or $msg contains "Stopping user-") then stop
and restart rsyslogd with systemctl restart rsyslog The broader way is to set the systemd logging level a bit higher by editing /etc/systemd/system.conf :
#LogLevel=info
LogLevel=notice
References: https://access.redhat.com/solutions/1564823 I have more but can't post more than 2 links. Hooray.
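Note that edits to /etc/systemd/system.conf only take effect once the manager re-reads its configuration, e.g. after a reboot or (a sketch) after re-executing it:
sudo systemctl daemon-reexec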
{ "source": [ "https://unix.stackexchange.com/questions/267965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23091/" ] }
268,006
Below is the process I took to create a user with bash on Linux:
$ sudo useradd Alexandra
$ sudo passwd Alexandra
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
I understand that the password shouldn't be displayed for security purposes, but what I mean is, why do asterisks (or the characters I entered) not appear?
Because that's the way we do things in *nix land. :) It gives a little bit of extra security by not displaying a bunch of asterisks. That way, someone who sees your screen can't see the length of your password. But I must admit it is a little bit scary not getting any feedback when you're entering a password, especially if you've got a bad keyboard. So most GUI password dialogs on *nix systems do give you some kind of feedback, e.g. using asterisks, or more commonly ⬤. And some even display each character as you type it, but then immediately replace it with a * or ⬤, but that's not so good if someone may be looking over your shoulder. Or if they have a device that can pick up & decode the video signal being sent from your computer to your monitor.
{ "source": [ "https://unix.stackexchange.com/questions/268006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142920/" ] }
268,386
I recently moved from GNU screen to tmux . I find it quite similar but with broader support (I switched due to a problem with escape-time in neovim; the fix was only available for tmux). Unfortunately in tmux I'm unable to find a command similar to this: screen -X eval "chdir $(some_dir)" The command above changed the default directory for new windows/screens/panes from within GNU screen, so when I pressed Ctrl + a (similar to tmux's Ctrl + b ) a new window opened in the $(some_dir) directory. Is there a similar thing in tmux? ANSWER: I have used @Lqueryvg's answer and combined it with @Vincent Nivolier's suggestion from a comment, which gave me a new binding for the command attach -c "#{pane_current_path}" which sets my current directory as the default one. Thanks.
tl;dr Ctrl + b : attach -c desired/directory/path Long Answer Start tmux as follows: (cd /aaa/bbb; tmux) Now, any new windows (or panes) you create will start in directory /aaa/bbb , regardless of the current directory of the current pane. If you want to change the default directory once tmux is up and running, use attach-session with -c . Quoting from the tmux man page for attach-session : -c will set the session working directory (used for new windows) to working-directory. For example: Ctrl + b : attach -c /ddd/eee New windows (or panes) will now start in directory /ddd/eee , regardless of the directory of the current pane.
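The binding mentioned in the question's update can go in ~/.tmux.conf like this (a sketch; the key C is an arbitrary choice, and #{pane_current_path} needs tmux 1.9 or newer):
bind C attach-session -c "#{pane_current_path}"
After that, prefix followed by C makes the current pane's directory the default for new windows and panes.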
{ "source": [ "https://unix.stackexchange.com/questions/268386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116320/" ] }
268,474
I need a file (preferably a .list file) which contains the absolute path of every file in a directory. Example dir1 :
file1.txt
file2.txt
file3.txt
listOfFiles.list :
/Users/haddad/dir1/file1.txt
/Users/haddad/dir1/file2.txt
/Users/haddad/dir1/file3.txt
How can I accomplish this on Linux/macOS?
ls -d "$PWD"/* > listOfFiles.list
{ "source": [ "https://unix.stackexchange.com/questions/268474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160195/" ] }
268,480
I would like to find the directories that do not contain a file named OSZICAR , then cd into each such directory and do something more... All I have now is:
find `pwd` -mindepth 2 -maxdepth 2 -type d -exec sh -c "echo {}; cd {}; ls; if [! -f $0/OSZICAR];echo "doing my thing";fi" \;
but there is an error, could anyone help? Thank you. My original command, without the criterion of not having OSZICAR , is:
find `pwd` -mindepth 2 -maxdepth 2 -type d -exec sh -c "echo {}; cd {}; ls; cp ../../submit_script_Stampede.sh .; ls;sed -i s/Monkhorst/Gamma/ KPOINTS; cp CONTCAR POSCAR ;sbatch submit_script_Stampede.sh" \;
ls -d "$PWD"/* > listOfFiles.list
{ "source": [ "https://unix.stackexchange.com/questions/268480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118930/" ] }
268,640
I'm trying to use sed to edit a config file. There are a few lines I'd like to change. I know that under Linux sed -i allows for in place edits but it requires you save to a backup file. However I would like to avoid having multiple backup files and make all my in place changes at once. Is there a way to do so with sed -i or is there a better alternative?
You can tell sed to carry out multiple operations by just repeating -e (or -f if your script is in a file). sed -i -e 's/a/b/g' -e 's/b/d/g' file makes both changes in the single file named file , in-place. Without a backup file. sed -ibak -e 's/a/b/g' -e 's/b/d/g' file makes both changes in the single file named file , in-place. With a single backup file named filebak .
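If you have many substitutions, the -f form keeps them in a script file; e.g. with a file changes.sed containing:
s/a/b/g
s/b/d/g
you can apply them all in place with:
sed -i -f changes.sed file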
{ "source": [ "https://unix.stackexchange.com/questions/268640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153274/" ] }
268,766
So, I thought I had a good understanding of this, but just ran a test (in response to a conversation where I disagreed with someone) and found that my understanding is flawed... In as much detail as possible what exactly happens when I execute a file in my shell? What I mean is, if I type: ./somefile some arguments into my shell and press return (and somefile exists in the cwd, and I have read+execute permissions on somefile ) then what happens under the hood? I thought the answer was: The shell make a syscall to exec , passing the path to somefile The kernel examines somefile and looks at the magic number of the file to determine if it is a format the processor can handle If the magic number indicates that the file is in a format the processor can execute, then a new process is created (with an entry in the process table) somefile is read/mapped to memory. A stack is created and execution jumps to the entry point of the code of somefile , with ARGV initialized to an array of the parameters (a char** , ["some","arguments"] ) If the magic number is a shebang then exec() spawns a new process as above, but the executable used is the interpreter referenced by the shebang (e.g. /bin/bash or /bin/perl ) and somefile is passed to STDIN If the file doesn't have a valid magic number, then an error like "invalid file (bad magic number): Exec format error" occurs However someone told me that if the file is plain text, then the shell tries to execute the commands (as if I had typed bash somefile ). I didn't believe this, but I just tried it, and it was correct. So I clearly have some misconceptions about what actually happens here, and would like to understand the mechanics. What exactly happens when I execute a file in my shell? (in as much detail is reasonable...)
The definitive answer to "how programs get run" on Linux is the pair of articles on LWN.net titled, surprisingly enough, How programs get run and How programs get run: ELF binaries . The first article addresses scripts briefly. (Strictly speaking the definitive answer is in the source code, but these articles are easier to read and provide links to the source code.) A little experimentation shows that you pretty much got it right, and that the execution of a file containing a simple list of commands, without a shebang, needs to be handled by the shell. The execve(2) manpage contains source code for a test program, execve; we'll use that to see what happens without a shell. First, write a testscript, testscr1 , containing
#!/bin/sh
pstree
and another one, testscr2 , containing only
pstree
Make them both executable, and verify that they both run from a shell:
chmod u+x testscr[12]
./testscr1 | less
./testscr2 | less
Now try again, using execve (assuming you built it in the current directory):
./execve ./testscr1
./execve ./testscr2
testscr1 still runs, but testscr2 produces
execve: Exec format error
This shows that the shell handles testscr2 differently. It doesn't process the script itself though, it still uses /bin/sh to do that; this can be verified by piping testscr2 to less :
./testscr2 | less -ppstree
On my system, I get
|-gnome-terminal--+-4*[zsh]
|                 |-zsh-+-less
|                 |     `-sh---pstree
As you can see, there's the shell I was using, zsh , which started less , and a second shell, plain sh ( dash on my system), to run the script, which ran pstree . In zsh this is handled by zexecve in Src/exec.c : the shell uses execve(2) to try to run the command, and if that fails, it reads the file to see if it has a shebang, processing it accordingly (which the kernel will also have done), and if that fails it tries to run the file with sh , as long as it didn't read any zero byte from the file:
for (t0 = 0; t0 != ct; t0++)
    if (!execvebuf[t0])
        break;
if (t0 == ct) {
    argv[-1] = "sh";
    winch_unblock();
    execve("/bin/sh", argv - 1, newenvp);
}
bash has the same behaviour, implemented in execute_cmd.c with a helpful comment (as pointed out by taliezin ): Execute a simple command that is hopefully defined in a disk file somewhere.
fork ()
connect pipes
look up the command
do redirections
execve ()
If the execve failed, see if the file has executable mode set. If so, and it isn't a directory, then execute its contents as a shell script. POSIX defines a set of functions, known as the exec(3) functions , which wrap execve(2) and provide this functionality too; see muru 's answer for details. On Linux at least these functions are implemented by the C library, not by the kernel.
{ "source": [ "https://unix.stackexchange.com/questions/268766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1974/" ] }
268,818
In Bash, suppose I visit a directory, and then another directory. I would like to copy a file from the first directory to the second directory, but without spelling out their long pathnames. Is it possible? My temporary solution is to use /tmp as a temporary place to store a copy of the file: cp myfile /tmp when I am in the first directory, and then cp /tmp/myfile . when I am in the second directory. But I have to check whether the file will overwrite anything in /tmp . Is there something similar to a clipboard for copying and pasting a file?
Using Bash, I would just visit the directories:
$ cd /path/to/source/directory
$ cd /path/to/destination/directory
Then, I would use the shortcut ~- , which points to the previous directory:
$ cp -v ~-/file1.txt .
$ cp -v ~-/file2.txt .
$ cp -v ~-/file3.txt .
If one wants to visit directories in reverse order, then:
$ cp -v fileA.txt ~-
$ cp -v fileB.txt ~-
$ cp -v fileC.txt ~-
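The ~- shorthand expands to $OLDPWD , so the equivalent in a script (or in a shell without ~- support) is:
cp -v "$OLDPWD"/file1.txt .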
{ "source": [ "https://unix.stackexchange.com/questions/268818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
268,889
When I want to easily read my PostgreSQL schema, I dump it to stdout and pipe it to vim : pg_dump -h localhost -U postgres dog_food --schema-only | vim - This opens the dump without syntax highlighting: vim cannot pick a highlighting scheme because there is no filename extension when reading from stdin. So I use the following: :set syntax=sql which highlights it correctly. Being the lazy developer I am, I would like to force vim to use SQL syntax by passing a command line argument, saving me the chore of re-typing set syntax=<whatever> every time I open vim with stdin data. Is there a way to set the vim syntax by passing a command line argument?
You can use: vim -c 'set syntax=sql' -
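Combined with the pipeline from the question:
pg_dump -h localhost -U postgres dog_food --schema-only | vim -c 'set syntax=sql' -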
{ "source": [ "https://unix.stackexchange.com/questions/268889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1079/" ] }
268,952
I'm looking for ways to use /dev/random (or /dev/urandom ) from the command line. In particular, I'd like to know how to use such a stream as stdin to write streams of random numbers to stdout (one number per line). I'm interested in random numbers for all the numeric types that the machine's architecture supports natively. E.g. for a 64-bit architecture, these would include 64-bit signed and unsigned integers, and 64-bit floating point numbers. As far as ranges go, the maximal ranges for the various numeric types will do. I know how to do all this with all-purpose interpreters like Perl, Python, etc., but I'd like to know how to do this with "simpler" tools from the shell. (By "simpler" I mean "more likely to be available even in a very minimal Unix installation".) Basically the problem reduces to that of converting binary data to their string representations on the command line. (E.g., this won't do: printf '%f\n' $(head -c8 /dev/random) .) I'm looking for shell-agnostic answers. Also, the difference between /dev/random and /dev/urandom is not important for this question. I expect that any procedure that works for one will work for the other, even when the semantics of the results may differ. I adapted EightBitTony's answer to produce the functions toints , etc. shown below. Example use:
% < /dev/urandom toprobs -n 5
0.237616281778928
0.85578479125532
0.0330049682019756
0.798812391655243
0.138499033902422
Remarks: I'm using hexdump instead of od because it gave me an easier way to format the output the way I wanted it; Annoyingly though, hexdump does not support 64-bit integers (wtf???); The functions' interface needs work (e.g. they should accept -n5 as well as -n 5 ), but given my pitiful shell programming skillz, this was the best I could put together quickly. (Comments/improvements welcome, as always.) The big surprise I got from this exercise was to discover how hard it is to program on the shell the most elementary numerical stuff (e.g. read a hexadecimal float, or get the maximum native float value)...
_tonums () {
    local FUNCTION_NAME=$1 BYTES=$2 CODE=$3
    shift 3
    local USAGE="Usage: $FUNCTION_NAME [-n <INTEGER>] [FILE...]"
    local -a PREFIX
    case $1 in
        ( -n ) if (( $# > 1 ))
               then
                   PREFIX=( head -c $(( $2 * $BYTES )) )
                   shift 2
               else
                   echo $USAGE >&2
                   return 1
               fi
               ;;
        ( -* ) echo $USAGE >&2
               return 1
               ;;
        ( * ) PREFIX=( cat ) ;;
    esac
    local FORMAT=$( printf '"%%%s\\n"' $CODE )
    $PREFIX "$@" | hexdump -ve $FORMAT
}
toints () {
    _tonums toints 4 d "$@"
}
touints () {
    _tonums touints 4 u "$@"
}
tofloats () {
    _tonums tofloats 8 g "$@"
}
toprobs () {
    _tonums toprobs 4 u "$@" | perl -lpe '$_/=4294967295'
}
You can use od to get numbers out of /dev/random and /dev/urandom . For example, 2-byte unsigned decimal integers:
$ od -vAn -N2 -tu2 < /dev/urandom
24352
1-byte signed decimal integer:
$ od -vAn -N1 -td1 < /dev/urandom
-78
4-byte unsigned decimal integers:
$ od -vAn -N4 -tu4 < /dev/urandom
3394619386
man od for more information on od .
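For the 64-bit types mentioned in the question, GNU od also accepts 8-byte sizes (a sketch; the 8-byte and floating-point type specifiers may not exist in every od implementation):
od -vAn -N8 -tu8 < /dev/urandom
od -vAn -N8 -tf8 < /dev/urandom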
{ "source": [ "https://unix.stackexchange.com/questions/268952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
269,077
I am in the process of colorizing my terminal’s PS1 . I am setting color variables using tput ; for example, here’s purple: PURPLE=$(tput setaf 125) Question: How do I find the color codes (e.g. 125 ) of other colors? Is there a color table guide/cheat sheet somewhere? I’m just not sure what 125 is … Is there some way to take a hex color and convert into a number that setaf can use?
The count of colors available to tput is given by tput colors . To see the basic 8 colors (as used by setf in urxvt terminal and setaf in xterm terminal):

$ printf '\e[%sm▒' {30..37} 0; echo           ### foreground
$ printf '\e[%sm ' {40..47} 0; echo           ### background

And usually named as this:

Color     #define          Value   RGB
black     COLOR_BLACK      0       0, 0, 0
red       COLOR_RED        1       max, 0, 0
green     COLOR_GREEN      2       0, max, 0
yellow    COLOR_YELLOW     3       max, max, 0
blue      COLOR_BLUE       4       0, 0, max
magenta   COLOR_MAGENTA    5       max, 0, max
cyan      COLOR_CYAN       6       0, max, max
white     COLOR_WHITE      7       max, max, max

To see the extended 256 colors (as used by setaf in urxvt):

$ printf '\e[48;5;%dm ' {0..255}; printf '\e[0m \n'

If you want numbers and an ordered output:

#!/bin/bash
color(){
    for c; do
        printf '\e[48;5;%dm%03d' $c $c
    done
    printf '\e[0m \n'
}
IFS=$' \t\n'
color {0..15}
for ((i=0;i<6;i++)); do
    color $(seq $((i*36+16)) $((i*36+51)))
done
color {232..255}

The 16 million colors need quite a bit of code (some consoles cannot show this). The basic escape is:

fb=3;r=255;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm▒▒▒ ' "$fb" "$r" "$g" "$b"

fb is front/back or 3/4 . A simple test of your console's capacity to present so many colors is:

for r in {200..255..5}; do fb=4;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm ' "$fb" "$r" "$g" "$b"; done; echo

It will present a red line with a very small change in tone from left to right. If that small change is visible, your console is capable of 16 million colors. Each r , g , and b is a value from 0 to 255 for RGB (Red,Green,Blue). If your console type supports this, this code will create a color table:

mode2header(){
    #### For 16 Million colors use \e[0;38;2;R;G;Bm each RGB is {0..255}
    printf '\e[mR\n'    # reset the colors.
    printf '\n\e[m%59s\n' "Some samples of colors for r;g;b. Each one may be 000..255"
    printf '\e[m%59s\n'   "for the ansi option: \e[0;38;2;r;g;bm or \e[0;48;2;r;g;bm :"
}
mode2colors(){
    # foreground or background (only 3 or 4 are accepted)
    local fb="$1"
    [[ $fb != 3 ]] && fb=4
    local samples=(0 63 127 191 255)
    for r in "${samples[@]}"; do
        for g in "${samples[@]}"; do
            for b in "${samples[@]}"; do
                printf '\e[0;%s8;2;%s;%s;%sm%03d;%03d;%03d ' "$fb" "$r" "$g" "$b" "$r" "$g" "$b"
            done; printf '\e[m\n'
        done; printf '\e[m'
    done; printf '\e[mReset\n'
}
mode2header
mode2colors 3
mode2colors 4

To convert a hex color value to a (nearest) 0-255 color index:

fromhex(){
    hex=${1#"#"}
    r=$(printf '0x%0.2s' "$hex")
    g=$(printf '0x%0.2s' ${hex#??})
    b=$(printf '0x%0.2s' ${hex#????})
    printf '%03d' "$(( (r<75?0:(r-35)/40)*6*6 + (g<75?0:(g-35)/40)*6 + (b<75?0:(b-35)/40) + 16 ))"
}

Use it as:

$ fromhex 00fc7b
048
$ fromhex #00fc7b
048

To find the color number as used in HTML colors format:

#!/bin/dash
tohex(){
    dec=$(($1%256))   ### input must be a number in range 0-255.
    if [ "$dec" -lt "16" ]; then
        bas=$(( dec%16 ))
        mul=128
        [ "$bas" -eq "7" ] && mul=192
        [ "$bas" -eq "8" ] && bas=7
        [ "$bas" -gt "8" ] && mul=255
        a="$((  (bas&1)    *mul ))"
        b="$(( ((bas&2)>>1)*mul ))"
        c="$(( ((bas&4)>>2)*mul ))"
        printf 'dec= %3s basic= #%02x%02x%02x\n' "$dec" "$a" "$b" "$c"
    elif [ "$dec" -gt 15 ] && [ "$dec" -lt 232 ]; then
        b=$(( (dec-16)%6  )); b=$(( b==0?0: b*40 + 55 ))
        g=$(( (dec-16)/6%6)); g=$(( g==0?0: g*40 + 55 ))
        r=$(( (dec-16)/36 )); r=$(( r==0?0: r*40 + 55 ))
        printf 'dec= %3s color= #%02x%02x%02x\n' "$dec" "$r" "$g" "$b"
    else
        gray=$(( (dec-232)*10+8 ))
        printf 'dec= %3s gray= #%02x%02x%02x\n' "$dec" "$gray" "$gray" "$gray"
    fi
}
for i in $(seq 0 255); do
    tohex ${i}
done

Use it as ("basic" is the first 16 colors, "color" is the main group, "gray" is the last gray colors):

$ tohex 125        ### A number in range 0-255
dec= 125 color= #af005f
$ tohex 6
dec=   6 basic= #008080
$ tohex 235
dec= 235 gray= #262626
{ "source": [ "https://unix.stackexchange.com/questions/269077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67282/" ] }
269,078
I have a script that does a number of different things, most of which do not require any special privileges. However, one specific section, which I have contained within a function, needs root privileges. I don't wish to require the entire script to run as root, and I want to be able to call this function, with root privileges, from within the script. Prompting for a password if necessary isn't an issue since it is mostly interactive anyway. However, when I try to use sudo functionx , I get: sudo: functionx: command not found As I expected, export didn't make a difference. I'd like to be able to execute the function directly in the script rather than breaking it out and executing it as a separate script for a number of reasons. Is there some way I can make my function "visible" to sudo without extracting it, finding the appropriate directory, and then executing it as a stand-alone script? The function is about a page long itself and contains multiple strings, some double-quoted and some single-quoted. It is also dependent upon a menu function defined elsewhere in the main script. I would only expect someone with sudo ANY to be able to run the function, as one of the things it does is change passwords.
I will admit that there's no simple, intuitive way to do this, and this is a bit hacky. But, you can do it like this:

function hello() {
    echo "Hello!"
}

# Test that it works.
hello

FUNC=$(declare -f hello)
sudo bash -c "$FUNC; hello"

Or more simply:

sudo bash -c "$(declare -f hello); hello"

It works for me:

$ bash --version
GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0)
$ hello
Hello!
$
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Hello!

Basically, declare -f will return the contents of the function, which you then pass to bash -c inline. If you want to export all functions from the outer instance of bash, change FUNC=$(declare -f hello) to FUNC=$(declare -f) .

Edit

To address the comments about quoting, see this example:

$ hello()
> {
>     echo "This 'is a' test."
> }
$ declare -f hello
hello ()
{
    echo "This 'is a' test."
}
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Password:
This 'is a' test.
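If the function takes arguments, they can be passed through as positional parameters of the inner bash rather than interpolated into the -c string — a sketch, with greet as a hypothetical example function:

greet() { echo "Hello, $1!"; }
sudo bash -c "$(declare -f greet); greet \"\$1\"" _ "world"

Here the _ fills $0 of the inner shell and "world" becomes its $1 ; keeping the argument out of the -c string avoids a whole class of quoting problems.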
{ "source": [ "https://unix.stackexchange.com/questions/269078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160045/" ] }
269,159
When I always try to install new package I get this message: Can't set locale; make sure $LC_* and $LANG are correct! perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_GB:en", LC_ALL = (unset), LC_CTYPE = "en_GB.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory My OS is Debian Jessie 8.3 (Mate) using English with French keyboard. When I type locale, I get this: locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory LANG=en_US.UTF-8 LANGUAGE=en_GB:en LC_CTYPE=en_GB.UTF-8 LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL=
Debian ships locales in source form. They need to be compiled explicitly. The reason for this is that compiled locales use a lot more disk space, but most people only use a few of them. Run dpkg-reconfigure locales as root, select the locales you want in the list (with your settings, you need en_GB and en_US.UTF-8 — I recommend selecting en_US and en_GB.UTF-8 as well) then press <OK> . Alternatively, edit /etc/locale.gen , uncomment the lines for the locales you want, and run locale-gen as root. (Note: on Ubuntu, this works differently: run locale-gen with the locales you want to generate as arguments, e.g. sudo locale-gen en_GB en_US en_GB.UTF-8 en_US.UTF-8 .) Alternatively, Debian now has a package locales-all which you can install instead of locales . It has all the locales pre-generated. The downside is that they use up more disk space (112MB vs 16MB).
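For unattended setups (e.g. a provisioning script) the same thing can be done non-interactively. A sketch, assuming Debian's stock /etc/locale.gen layout where disabled locales are commented out:

sudo sed -i 's/^# *\(en_US.UTF-8\)/\1/; s/^# *\(en_GB.UTF-8\)/\1/' /etc/locale.gen
sudo locale-gen
sudo update-locale LANG=en_US.UTF-8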
{ "source": [ "https://unix.stackexchange.com/questions/269159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31699/" ] }
269,170
I have 10 unix servers, and I want to log into them one by one, execute 4-5 lines of code, save the output and exit. For example, with 10 servers, starting at server xyz:

Login to server 1 --> execute 4-5 lines --> send output to xyz server --> exit
Login to server 2 --> execute 4-5 lines --> send output to xyz server --> exit
....
Login to server 10 --> execute 4-5 lines --> send output to xyz server --> exit

Finally I end up on server xyz with the output files. Let's say I want to execute some time commands — say, set the time back one hour, take the new time as output, and save the new time in a file on server xyz, in this format:

Server Name    New Time
===========    =========
Server1        Date and Time
Server2        Date and Time
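A minimal approach, sketched under the assumption that key-based ssh login works for all ten hosts and that their names are listed one per line in a file — servers.txt and /tmp/times.txt are illustrative names:

out=/tmp/times.txt
printf '%-15s %s\n' "Server Name" "New Time" > "$out"
while read -r host; do
    printf '%-15s %s\n' "$host" "$(ssh -n "$host" 'date')" >> "$out"
done < servers.txt

The -n flag stops ssh from consuming the loop's stdin (which would otherwise eat the rest of the host list); replace 'date' with your own 4-5 lines, separated by semicolons or placed in a quoted here-document.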
{ "source": [ "https://unix.stackexchange.com/questions/269170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147118/" ] }
269,180
I found a way in Windows to do such thing echo "This is just a sample line appended to create a big file. " > dummy.txt for /L %i in (1,1,21) do type dummy.txt >> dummy.txt http://www.windows-commandline.com/how-to-create-large-dummy-file/ Is there a way in UNIX to copy a file, append and then repeat the process? Something like for .. cat file1.txt > file1.txt ?
yes "Some text" | head -n 100000 > large-file With csh / tcsh : repeat 10000 echo some test > large-file With zsh : {repeat 10000 echo some test} > large-file On GNU systems, see also: seq 100000 > large-file Or: truncate -s 10T large-file (creates a 10TiB sparse file (very large but doesn't take any space on disk)) and the other alternatives discussed at "Create a test file with lots of zero bytes" . Doing cat file >> file would be a bad idea. First, it doesn't work with some cat implementations that refuse to read files that are the same as their output file. But even if you work around it by doing cat file | cat >> file , if file is larger than cat 's internal buffer, that would cause cat to run in an infinite loop as it would end up reading the data that it has written earlier. On file systems backed by a rotational hard drive, it would be pretty inefficient as well (after reaching a size greater than would possibly be cached in memory) as the drive would need to go back and forth between the location where to read the data, and that where to write it.
{ "source": [ "https://unix.stackexchange.com/questions/269180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160688/" ] }
269,587
I've been using grep -i more often and I found out that it is slower than its egrep equivalent, where I match against the upper or lower case of each letter: $ time grep -iq "thats" testfile real 0m0.041s user 0m0.038s sys 0m0.003s $ time egrep -q "[tT][hH][aA][tT][sS]" testfile real 0m0.010s user 0m0.003s sys 0m0.006s Does grep -i do additional tests that egrep doesn't?
grep -i 'a' is equivalent to grep '[Aa]' in an ASCII-only locale. In a Unicode locale, character equivalences and conversions can be complex, so grep may have to do extra work to determine which characters are equivalent. The relevant locale setting is LC_CTYPE , which determines how bytes are interpreted as characters. In my experience, GNU grep can be slow when invoked in a UTF-8 locale. If you know that you're searching for ASCII characters only, invoking it in an ASCII-only locale may be faster. I expect that time LC_ALL=C grep -iq "thats" testfile time LC_ALL=C egrep -q "[tT][hH][aA][tT][sS]" testfile would produce indistinguishable timings. That being said, I can't reproduce your finding with GNU grep on Debian jessie (but you didn't specify your test file). If I set an ASCII locale ( LC_ALL=C ), grep -i is faster. The effects depend on the exact nature of the string, for example a string with repeated characters reduces the performance ( which is to be expected ).
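To reproduce the comparison on your own machine, a small sketch that builds a throwaway test file and times both locales — GNU grep and a UTF-8 default locale are assumed, and -c (count) is used instead of -q so the whole file is actually scanned:

yes 'some text without the target word' | head -n 2000000 > testfile
time LC_ALL=en_US.UTF-8 grep -ic thats testfile
time LC_ALL=C           grep -ic thats testfile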
{ "source": [ "https://unix.stackexchange.com/questions/269587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137040/" ] }
269,593
I wanted to backup my ~/.ssh/id_rsa to id_rsa.old , and it looks like it got deleted! How is this possible? :) root@localhost:~/.ssh# ls -l total 16 -rw------- 1 root root 3326 Mar 12 11:22 id_rsa -rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub -rw------- 1 userx userx 666 Mar 8 11:09 known_hosts -rw-r--r-- 1 userx userx 666 Feb 29 10:53 known_hosts.old root@localhost:~/.ssh# mv id_rsa *.old root@localhost:~/.ssh# ls -l total 12 -rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub -rw------- 1 userx userx 666 Mar 8 11:09 known_hosts -rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old root@localhost:~/.ssh# touch p root@localhost:~/.ssh# mv p *.p root@localhost:~/.ssh# ls -l total 12 -rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub -rw------- 1 userx userx 666 Mar 8 11:09 known_hosts -rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old -rw-r--r-- 1 root root 0 Mar 12 11:28 *.p root@localhost:~/.ssh# rm *.p root@localhost:~/.ssh# ls -l total 12 -rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub -rw------- 1 userx userx 666 Mar 8 11:09 known_hosts -rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old userx@localhost:~$ uname -r 4.2.0-30-generic userx@localhost:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 15.10 Release: 15.10 Codename: wily userx@localhost:~$ bash --version GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu)
It has been renamed as known_hosts.old , hence has overwritten the previous contents of known_hosts.old . As you already have a file named known_hosts.old in there so the glob pattern *.old has been expanded to known_hosts.old . In a nutshell, the following: mv id_rsa *.old has been expanded to: mv id_rsa known_hosts.old In bash , if there was not a file named known_hosts.old present there it would expand to literal *.old (given you have not enabled nullglob ).
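A habit that would have caught this: preview how the shell expands a destructive command before running it, since echo prints the post-expansion argument list, and use mv -i so an existing target prompts instead of being silently clobbered:

echo mv id_rsa *.old        # prints: mv id_rsa known_hosts.old — the glob, expanded
mv -i id_rsa id_rsa.old     # -i asks before overwriting an existing file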
{ "source": [ "https://unix.stackexchange.com/questions/269593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160979/" ] }
269,600
I have done much research and attempted fixes on this issue, mostly involving tweaking the xstartup file. I've tried alternative VNC clients (UltraVNC and TightVNC) from a Windows 7 computer, with the same results for each client. Basically, I get either a blank grey screen with only an arrow cursor, or a failure to connect at all. I also tried a different VNC server (VNC4server) but abandoned that because, although I could connect, I got an error every time on the client window. And Tightvnc seems more widely used and user-supported. I find that, almost regardless of what I put in the ~/.vnc/xstartup file (for example, even if it has just one line (startkde &) it will work if I specify "root" as the VNC user. But then I'm logged in as root and I need instead to follow standard *nix practice of being logged in as a non-root user. So, the issue does appear to relate to privileges. However, I check for correct ownership and executable flags on files after every time I edit them. I read somewhere that the latest Tightvnc server will not allow KDE desktop to be started if there is already a desktop session running on the host (user logged in), so I start the host machine without anyone logged in. I have configured Tightvnc server as a service. My current xstartup file follows, but like I said, I have already attempted many variants of these lines, commenting out nearly everything, from suggestions gathered on the internet. #!/bin/sh # Uncomment the following two lines for normal desktop: unset SESSION_MANAGER exec /etc/X11/xinit/xinitrc & # unset DBUS_SESSION_BUS_ADDRESS [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources xsetroot -solid grey vncconfig -iconic & x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & # x-window-manager & exec startkde & Here is the service file, /lib/systemd/system/tightvncserver.service : [Unit] Description=TightVNC remote desktop server After=sshd.service [Service] Type=dbus ExecStart=/usr/bin/vncserver -geometry 1024x768 -depth 24 :1 User=vnc Type=forking [Install] WantedBy=multi-user.target Here is the log after one reboot of the host followed by one connection attempt: 14/03/16 01:37:46 Xvnc version TightVNC-1.3.9 14/03/16 01:37:46 Copyright (C) 2000-2007 TightVNC Group 14/03/16 01:37:46 Copyright (C) 1999 AT&T Laboratories Cambridge 14/03/16 01:37:46 All Rights Reserved. 14/03/16 01:37:46 See http://www.tightvnc.com/ for information on TightVNC 14/03/16 01:37:46 Desktop name 'X' (test:1) 14/03/16 01:37:46 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t 14/03/16 01:37:46 Listening for VNC connections on TCP port 5901 /home/vnc/.vnc/xstartup: 12: /home/vnc/.vnc/xstartup: vncconfig: not found x-terminal-emulator: Unknown option 'ls'. x-terminal-emulator: Use --help to get a list of available command line options. Error: cannot create directory "/tmp/ksocket-vncw1nXNU": File exists startkde: Starting up... kdeinit4: Aborting. 
bind() failed: Address already in use Could not bind to socket '/tmp/ksocket-vncGcyXe4/kdeinit4__1' 14/03/16 01:38:09 Got connection from client 192.168.10.10 14/03/16 01:38:09 Using protocol version 3.8 14/03/16 01:38:14 Full-control authentication passed by 192.168.10.10 14/03/16 01:38:14 Pixel format for client 192.168.10.10: 14/03/16 01:38:14 32 bpp, depth 24, little endian 14/03/16 01:38:14 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0 14/03/16 01:38:14 no translation needed 14/03/16 01:38:14 Using hextile encoding for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 19 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 18 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 17 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 16 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 9 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 8 14/03/16 01:38:14 Using compression level 6 for client 192.168.10.10 14/03/16 01:38:14 Enabling full-color cursor updates for client 192.168.10.10 14/03/16 01:38:14 Enabling cursor position updates for client 192.168.10.10 14/03/16 01:38:14 Using image quality level 6 for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -65530 14/03/16 01:38:14 Enabling LastRect protocol extension for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -223 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32768 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32767 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32764 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32766 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32765 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -1063131698 14/03/16 01:38:43 Client 192.168.10.10 gone 14/03/16 01:38:43 Statistics: 14/03/16 01:38:43 key events received 0, pointer events 260 14/03/16 01:38:43 framebuffer updates 2, rectangles 5, bytes 776789 14/03/16 01:38:43 cursor shape updates 2, bytes 4920 14/03/16 01:38:43 cursor position updates 1, bytes 12 14/03/16 01:38:43 hextile rectangles 2, bytes 771857 14/03/16 01:38:43 raw bytes equivalent 6291480, compression ratio 8.151095 Any ideas? [EDIT, 2014/03/14, 1409 UTC]: I forgot to mention that I had it working error-free with XFCE desktop. But I much prefer KDE, and I wish to get that working if at all possible. [EDIT, 2014/03/14, 2216 UTC]: This is a follow-up to Paul H.'s suggestion, I'm putting it here because the mini-formatting of comments doesn't seem to allow blockquotes and images. Thank you, that got me further. After I give the "startkde &" command, the client window opens with a sensible-looking desktop that is starting to load and gets this far before closing (note the error message in top left): The log is as follows: 14/03/16 21:32:11 Desktop name 'X' (test:1) 14/03/16 21:32:11 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t 14/03/16 21:32:11 Listening for VNC connections on TCP port 5901 QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. 
Application may misbehave. 14/03/16 21:32:37 Got connection from client 192.168.10.10 14/03/16 21:32:37 Using protocol version 3.8 14/03/16 21:32:47 Full-control authentication passed by 192.168.10.10 14/03/16 21:32:47 Pixel format for client 192.168.10.10: 14/03/16 21:32:47 32 bpp, depth 24, little endian 14/03/16 21:32:47 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0 14/03/16 21:32:47 no translation needed 14/03/16 21:32:47 Using hextile encoding for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 19 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 18 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 17 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 16 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 9 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 8 14/03/16 21:32:47 Using compression level 6 for client 192.168.10.10 14/03/16 21:32:47 Enabling full-color cursor updates for client 192.168.10.10 14/03/16 21:32:47 Enabling cursor position updates for client 192.168.10.10 14/03/16 21:32:47 Using image quality level 6 for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -65530 14/03/16 21:32:47 Enabling LastRect protocol extension for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -223 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32768 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32767 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32764 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32766 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32765 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -1063131698 Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. kbuildsycoca4 running... kbuildsycoca4(989) KBuildSycoca::checkTimestamps: checking file timestamps kbuildsycoca4(989) KBuildSycoca::checkTimestamps: timestamps check ok kbuildsycoca4(989) kdemain: Emitting notifyDatabaseChanged () QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. Object::connect: No such signal org::freedesktop::UPower::DeviceAdded(QString) Object::connect: No such signal org::freedesktop::UPower::DeviceRemoved(QString) QDBusConnection: name 'org.freedesktop.UDisks2' had owner '' but we thought it was ':1.11' klauncher: Exiting on signal 15 knotify4: Fatal IO error: client killed kded4: Fatal IO error: client killed konsole: Fatal IO error: client killed konsole(902) Konsole::SessionManager::~SessionManager: Konsole SessionManager destroyed with sessions still alive The first error message, ending with "application may misbehave," is supposed to be unimportant, from the bug reports I have seen. The rest, I'm not sure about..
{ "source": [ "https://unix.stackexchange.com/questions/269600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160968/" ] }
269,617
Coming from Windows administration, I want to dig deeper into Linux (Debian). One of my burning questions I could not answer searching the web (didn't find it) is: how can I achieve the so called "one-to-many" remoting like in PowerShell for Windows? To break it down to the basics I would say: My view on Linux: I can ssh into a server, type my command, and get the result. For an environment of 10 servers I would have to write a (perl/python?) script sending the command for each of them? My experience from Windows: I type my command and with "invoke-command" I can "send" this to a bunch of servers (maybe from a textfile) to execute simultaneously and get the result back (as an object for further work). I can even establish multiple sessions, the connection is held in the background, and selectively send commands to these sessions, and remote in and out like I need. (I heard of chef, puppet, etc. Is this something like that?) Update 2019: After trying a lot — I suggest Rex (see this comment below): easy setup (effectively it just needs ssh, nothing else) and easy use (if you know just a little bit of Perl it's even better, but it's optional). With Rex(ify) you can run ad-hoc commands and advance to real configuration management (meaning: it is a CM in the first place, but nice for ad-hoc tasks, too). The website seems outdated, but currently (as of 01/2019) it's in active development and the IRC channel is also active. With Windows' new openssh there are even more possibilities you can try: rex -u user -p password -H 192.168.1.3 -e 'say run "hostname"'
Summary

- Ansible is a DevOps tool that is a powerful replacement for PowerShell
- RunDeck as a graphical interface is handy
- Some people run RunDeck+Ansible together

clusterssh

For sending remote commands to several servers, for a beginner, I would recommend clusterssh. To install clusterssh in Debian:

apt-get install clusterssh

Another clusterssh tutorial: ClusterSSH is a Tk/Perl wrapper around standard Linux tools like XTerm and SSH. As such, it'll run on just about any POSIX-compliant OS where the libraries exist — I've run it on Linux, Solaris, and Mac OS X. It requires the Perl libraries Tk (perl-tk on Debian or Ubuntu) and X11::Protocol (libx11-protocol-perl on Debian or Ubuntu), in addition to xterm and OpenSSH.

Ansible

As for a remote framework for multiple systems administration, Ansible is a very interesting alternative to Puppet. It is leaner, and it does not need dedicated remote agents as it works over SSH (it also has been bought by RedHat). The Playbooks are more elaborate than the command line options. However, to start using Ansible you need a simple installation and to set up the clients list text file. Afterwards, to run a command on all servers, it is as simple as doing:

ansible all -m command -a "uptime"

The output also is very nicely formatted and separated per rule/server, and while running in the background it can be redirected to a file and consulted later. You can start with simple rules, and Ansible usage will get more interesting as you grow in Linux and your infrastructure becomes larger. As such it will do so much more than PowerShell. As an example, a very simple Playbook to upgrade Linux servers that I wrote:

---
- hosts: all
  become: yes
  gather_facts: False
  tasks:
    - name: updates a server
      apt: update_cache=yes
    - name: upgrade a server
      apt: upgrade=full

It also has many modules defined that let you easily write comprehensive policies. Module Index - Ansible Documentation

It also has an interesting official hub/"social" network of repositories to search for already-made ansible policies by the community. Ansible Galaxy

Ansible is also widely used, and you will find lots of projects on GitHub, like this one from myself for FreeRadius setup. While Ansible is a free open source framework, it also has a paid web panel interface, Ansible Tower, although the licensing is rather expensive. Nowadays, after RedHat bought it, Tower also has the open source version known as AWX.

As a bonus, Ansible also is capable of administering Windows servers, though I have never used it for that. It is also capable of administering networking equipment (routers, switches, and firewalls), which makes it very interesting as a turnkey automation solution. How to install Ansible

Rundeck

Yet again, for a remote framework easier to use, but not as potent as Ansible, I do recommend Rundeck. It is a very powerful multi-user/login graphical interface where you can automate much of your common day-to-day tasks, and even give watered-down views to sysops or helpdesk people. When running the commands, it also gives you windows with the output broken down by server/task. It can run multiple jobs in the background seamlessly, and allows you to see the report and output later on. How to install RunDeck

Please note there are people running Ansible+RunDeck as a web interface; not all cases are appropriate for that. It also goes without saying that using Ansible and/or RunDeck can be construed as a form or part of the infrastructure documentation, and over time allows you to replicate and improve the actions/recipes/Playbooks. Lastly, talking about a central command server, I would create one just for the task. Actually, the technical term is a jump box. 'Jump boxes' improve security, if you set them up right.
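As a concrete taste of the ad-hoc mode applied to the original question (collecting each server's time into one local file), a sketch — hosts.ini is an illustrative inventory file listing the ten servers, one hostname per line under a group header:

ansible all -i hosts.ini -m command -a "date" | tee /tmp/server-times.txt

Each host's name is printed above its output, so the result can be reshaped into the Server/Time table with a little awk if needed.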
{ "source": [ "https://unix.stackexchange.com/questions/269617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161003/" ] }
269,661
In Linux Mint 17.3 / 18 iwconfig says the power management of my wireless card is turned on . I want to turn it off permanently or some workaround on this issue. sudo iwconfig wlan0 power off works, until I reboot the laptop. Also, if I randomly check iwconfig , sometimes it's on, despite I did run this command. I read some articles about making the fix permanent. All of them contained the first step "Go to directory /etc/pm/power.d ", which in my case did not exist. I followed these steps: sudo mkdir -p /etc/pm/power.d sudo nano /etc/pm/power.d/wireless_power_management_off I entered these two lines into the file: #!/bin/bash /sbin/iwconfig wlan0 power off And I finished with setting proper user rights: sudo chmod 700 /etc/pm/power.d/wireless_power_management_off But after reboot the power management is back on. iwconfig after manually turning power management off eth0 no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"SSID" Mode:Managed Frequency:2.462 GHz Access Point: 00:00:00:00:00:00 Bit Rate=24 Mb/s Tx-Power=22 dBm Retry short limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=42/70 Signal level=-68 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:2 Invalid misc:18 Missed beacon:0 lo no wireless extensions. I don't think this question applies only to Linux Mint, it is a general issue of particular wireless adapters.
Open this file with your favorite text editor, I use nano here:

sudo nano /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf

By default there is:

[connection]
wifi.powersave = 3

Change the value to 2. Possible values for the wifi.powersave field are:

NM_SETTING_WIRELESS_POWERSAVE_DEFAULT (0): use the default value
NM_SETTING_WIRELESS_POWERSAVE_IGNORE  (1): don't touch existing setting
NM_SETTING_WIRELESS_POWERSAVE_DISABLE (2): disable powersave
NM_SETTING_WIRELESS_POWERSAVE_ENABLE  (3): enable powersave

(Informal source on GitHub for these values.)

To take effect, just run:

sudo systemctl restart NetworkManager
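To verify the change took effect after restarting NetworkManager, a quick sketch — iw must be installed, and wlan0 is the interface name assumed from the question:

iw dev wlan0 get power_save                    # expect: Power save: off
iwconfig wlan0 | grep -i "power management"    # expect: Power Management:off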
{ "source": [ "https://unix.stackexchange.com/questions/269661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
270,199
For any given version or installation of Linux Mint, how would I find out which version of Ubuntu it is based on? I'm sure it must be in documentation somewhere right?
You'll find the Ubuntu version in the /etc/upstream-release/lsb-release file:

$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty

To figure out which subrelease you are using, you need to know what kernel you are running, e.g. here kernel 3.19:

$ uname -r
3.19.0-32-generic

Then you compare it with the 14.04.x Ubuntu Kernel Support schedule, which says that in my case the 3.19 kernel matches 14.04.3. The mapping between Mint releases and their Ubuntu bases is also listed on the Linux Mint Wikipedia page.
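In a script, that file can simply be sourced since it is plain VAR=value shell syntax — a sketch, assuming a Mint system with the path above:

. /etc/upstream-release/lsb-release
echo "Based on Ubuntu $DISTRIB_RELEASE ($DISTRIB_CODENAME)"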
{ "source": [ "https://unix.stackexchange.com/questions/270199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106998/" ] }
270,272
In the second method proposed by this page , one gets the tty in which bash is being run with the command: ps ax | grep $$ | awk '{ print $2 }' I though to myself that surely this is a bit lazy, listing all running processes only to extract one of them. Would it not be more efficient (I am also asking if this would introduce unwanted effects) to do: ps -p $$ | tail -n 1 | awk '{ print $2 }' FYI, I came across this issue because sometimes the first command would actually yield two (or more) lines. This would happen randomly, when there would be another process running with a PID that contains $$ as a substring. In the second approach, I am avoiding such cases by requesting the PID that I know I want.
Simply by typing tty : $ tty /dev/pts/20 Too simple and obvious to be true :) Edit: The first one returns you also the pty of the process running grep as you can notice: $ ps ax | grep $$ 28295 pts/20 Ss 0:00 /bin/bash 29786 pts/20 S+ 0:00 grep --color=auto 28295 therefore you would need to filter out the grep to get only one result, which is getting ugly: ps ax | grep $$ | grep -v grep | awk '{ print $2 }' or using ps ax | grep "^$$" | awk '{ print $2 }' (a more sane variant)
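If you specifically want it out of ps without any grep post-processing, ps can print just the tty column for a single PID — -o with an empty header ( tty= ) and -p are POSIX, so this should be portable:

ps -o tty= -p "$$"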
{ "source": [ "https://unix.stackexchange.com/questions/270272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45354/" ] }
270,334
The shortcut for "move windows to another workspace" in Xfce should be Ctrl + Alt + Shift + ← / → / ↑ / ↓ . But it doesn't work, there're no such shortcuts. Why, am I missing anything?
There isn't any. By default, the action "Move window to left/right/up/down workspace" has no shortcuts set and that has not changed since Xfce 4.6 to this date. So the shortcuts might have been deprecated earlier or not adopted at all. But there should be Those 'old' shortcuts were originally found in GNOME; the original author of this answer was aware of this, because they had been using GNOME 2 before switching to Xfce. The oldest known proof is shown by the screenshot with additional highlight as follows. Source: Xfce 4.6 tour , screenshots by Jannis Pohlmann. The original screenshot was used to describe "fill operation" for xfwm4, which luckily showing the unset window shortcuts. Revive them anyway To define shortcuts for the action "Move window to left/right/up/down workspace", user can configure in xfwm4-settings or navigate from Settings Manager in Xfce. Go to Settings Manager > Window Manager - Keyboard In the tab, scroll down until "Toggle fullscreen" entry and the relevant actions "Move window to..." are listed below it with empty column on the right For the corresponding action "Move window to upper workspace", do either double-click the empty column , or select the row and click Edit A small popup window will appear, then press the shortcut keys of choice to be assigned for previously selected action: Ctrl + Alt + Shift + ↑ for "Move window to upper workspace" and then the popup window will be closed Repeat step 3 and 4 for other actions, and finally click Close to finish. Additional notes To this date, Wikipedia still note the 'old' shortcuts in the article of Table of keyboard shortcuts under "Window Management". That has changed since the introduction of GNOME 3, with most of the shortcuts have been redefined and favours combination of Super key .
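The same shortcuts can also be inspected and set from the command line with xfconf-query. The exact property paths and action names vary between Xfce versions, so treat the second command as a sketch and take the real action string from the listing produced by the first:

xfconf-query -c xfce4-keyboard-shortcuts -l | grep -i workspace
# then set one, substituting the action name found above:
xfconf-query -c xfce4-keyboard-shortcuts -n -t string \
  -p '/xfwm4/custom/<Primary><Shift><Alt>Up' -s move_window_up_workspace_key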
{ "source": [ "https://unix.stackexchange.com/questions/270334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40830/" ] }
270,390
When I compile my own kernel, basically what I do is the following: I download the sources from www.kernel.org and uncompress it. I copy my previous .config to the sources and do a make menuconfig to watch for the new options and modify the configuration according to the new policy of the kernel. Then, I compile it: make -j 4 Finally, I install it: su -c 'make modules_install && make install' . After a few tests, I remove the old kernel (from /boot and /lib/modules ) and run fully with the new one (this last step saved my life several times! It's a pro-tip !). The problem is that I always get a /boot/initrd.img-4.x.x which is huge compared to the ones from my distribution. Here the content of my current /boot/ directory as an example: # ls -alFh total 243M drwxr-xr-x 5 root root 4.0K Mar 16 21:26 ./ drwxr-xr-x 25 root root 4.0K Feb 25 09:28 ../ -rw-r--r-- 1 root root 2.9M Mar 9 07:39 System.map-4.4.0-1-amd64 -rw-r--r-- 1 root root 3.1M Mar 11 22:30 System.map-4.4.5 -rw-r--r-- 1 root root 3.2M Mar 16 21:26 System.map-4.5.0 -rw-r--r-- 1 root root 170K Mar 9 07:39 config-4.4.0-1-amd64 -rw-r--r-- 1 root root 124K Mar 11 22:30 config-4.4.5 -rw-r--r-- 1 root root 126K Mar 16 21:26 config-4.5.0 drwxr-xr-x 5 root root 512 Jan 1 1970 efi/ drwxr-xr-x 5 root root 4.0K Mar 16 21:27 grub/ -rw-r--r-- 1 root root 19M Mar 10 22:01 initrd.img-4.4.0-1-amd64 -rw-r--r-- 1 root root 101M Mar 12 13:59 initrd.img-4.4.5 -rw-r--r-- 1 root root 103M Mar 16 21:26 initrd.img-4.5.0 drwx------ 2 root root 16K Apr 8 2014 lost+found/ -rw-r--r-- 1 root root 3.5M Mar 9 07:30 vmlinuz-4.4.0-1-amd64 -rw-r--r-- 1 root root 4.1M Mar 11 22:30 vmlinuz-4.4.5 -rw-r--r-- 1 root root 4.1M Mar 16 21:26 vmlinuz-4.5.0 As you may have noticed, the size of my initrd.img files are about 10 times bigger than the ones from my distribution. So, do I do something wrong when compiling my kernel? And, how can I reduce the size of my initrd.img ?
This is because the kernel modules are not stripped. You need to strip them to bring the size down. Use this command:

SHW@SHW:/tmp# cd /lib/modules/<new_kernel>
SHW@SHW:/tmp# find . -name '*.ko' -exec strip --strip-unneeded {} +

(Note the quotes around '*.ko' , which keep the shell from expanding the glob before find sees it.) This will drastically reduce the size. After executing the above command, you can proceed to create the initramfs/initrd. From man strip :

--strip-unneeded
    Remove all symbols that are not needed for relocation processing.
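Alternatively, the kernel build system can strip the modules at install time, which avoids the manual find step — and the initramfs must be regenerated afterwards for the size change to show up. A sketch, with the kernel version taken from the question's /boot listing:

sudo make INSTALL_MOD_STRIP=1 modules_install
sudo update-initramfs -u -k 4.5.0     # Debian/Ubuntu tooling; adjust the version to yours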
{ "source": [ "https://unix.stackexchange.com/questions/270390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40768/" ] }
270,778
I love to type bash scripts, but if I prepare multiple tools, project's root directory is filled with so many shell scripts. That's why I prefer using Makefile. Makefile is good. However I want to build my makefiles just as regular bash scripts. Eg: Think that I wrote a bash script very quickly with the following content: #!/bin/bash echo "hello" cd ~ do-some-work.sh my-parameter I can run this script with $ ./my-important-task.sh . If I wanted to move that script into makefile, I should do the following: SHELL := /bin/bash my-important-task: echo "hello" ;\ cd ~ ;\ do-some-work.sh my-parameter but I want the following: my-important-task: [[copy and paste the my-important-task.sh file]] Is there anyway to accomplish this goal?
If you really want to “write exactly bash scripts into Makefiles” then you'll need to do it a bit indirectly. If you just paste the script after the target line, then you'll run into two problems that just cannot be bypassed: the command lines need to be indented with a tab, and dollar signs need to be escaped. If you use GNU make (as opposed to BSD make, Solaris make, etc.), then you can define your script as a variable using the multi-line definition syntax, and then use the value function to use the raw value of the variable, bypassing expansion. In addition, as explained by skwllsp, you need to tell make to execute the command list for each target as a single shell script rather than line by line, which you can do in GNU make by defining a .ONESHELL target.

define my_important_task =
# script goes here
endef

my-important-task: ; $(value my_important_task)

.ONESHELL:
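Putting it together with the script from the question — a sketch assuming GNU make 3.82 or later (the = after the variable name in define needs at least that version):

SHELL := /bin/bash
.ONESHELL:

define my_important_task =
echo "hello"
cd ~
do-some-work.sh my-parameter
endef

my-important-task: ; $(value my_important_task)

Because the recipe sits on the target line after ; , no tab-indented lines are needed, and $(value ...) hands the script to bash with its dollar signs intact.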
{ "source": [ "https://unix.stackexchange.com/questions/270778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65781/" ] }
270,828
I have seen constructs in scripts such as this: if somevar="$(somecommand 2>/dev/null)"; then ... fi Is this documented somewhere? How is the return status of a variable determined and how does it relate to command substitution? (For instance, would I get the same result with if echo "$(somecommand 2>/dev/null)"; then ?)
It is documented (for POSIX) in Section 2.9.1 Simple Commands of The Open Group Base Specifications.  There's a wall of text there; I direct your attention to the last paragraph: If there is a command name, execution shall continue as described in Command Search and Execution .  If there is no command name, but the command contained a command substitution, the command shall complete with the exit status of the last command substitution performed.  Otherwise, the command shall complete with a zero exit status. So, for example, Command Exit Status $ FOO=BAR 0 (but see also the note from icarus, below) $ FOO=$(bar) Exit status from "bar" $ FOO=$(bar)$(quux) Exit status from "quux" $ FOO=$(bar) baz Exit status from "baz" $ foo $(bar) Exit status from "foo" This is how bash works, too.  But see also the “not so simple” section at the end. phk , in his question Assignments are like commands with an exit status except when there’s command substitution? , suggests … it appears as if an assignment itself counts as a command … with a zero exit value, but which applies before the right side of the assignment (e.g., a command substitution call…) That’s not a terrible way of looking at it.  A crude scheme for determining the return status of a simple command (one not containing ; , & , | , && or || ) is: Scan the line from left to right until you reach the end or a command word (typically a program name). If you see a variable assignment, the return status for the line just might be 0. If you see a command substitution — i.e., $(…) — take the exit status from that command. If you reach an actual command (not in a command substitution), take the exit status from that command. The return status for the line is the last number you encountered. Command substitutions as arguments to the command, e.g., foo $(bar) , don’t count; you get the exit status from foo .  To paraphrase phk’s notation , the behavior here is temporary_variable = EXECUTE( "bar" ) overall_exit_status = EXECUTE( "foo", temporary_variable ) But this is a slight oversimplification.  The overall return status from A=$( cmd 1 ) B=$( cmd 2 ) C=$( cmd 3 ) D=$( cmd 4 ) E=mc 2 is the exit status from cmd 4 .  The E= assignment that occurs after the D= assignment does not set the overall exit status to 0. icarus , in his answer to phk’s question , raises an important point: variables can be set as readonly.  The third-to-last paragraph in Section 2.9.1 of the POSIX standard says, If any of the variable assignments attempt to assign a value to a variable for which the readonly attribute is set in the current shell environment (regardless of whether the assignment is made in that environment), a variable assignment error shall occur.  See Consequences of Shell Errors for the consequences of these errors. so if you say readonly A C=Garfield A=Felix T=Tigger the return status is 1.  It doesn’t matter if the strings Garfield , Felix , and/or Tigger are replaced with command substitution(s) — but see notes below. Section 2.8.1 Consequences of Shell Errors has another bunch of text, and a table, and ends with In all of the cases shown in the table where an interactive shell is required not to exit, the shell shall not perform any further processing of the command in which the error occurred. Some of the details make sense; some don’t: The A= assignment sometimes aborts the command line, as that last sentence seems to specify.  In the above example, C is set to Garfield , but T is not set (and, of course, neither is A ). 
Similarly, C=$( cmd 1 ) A=$( cmd 2 ) T=$( cmd 3 ) executes cmd 1 but not cmd 3 . But, in my versions of bash (which include 4.1.X and 4.3.X), it does execute cmd 2 .  (Incidentally, this further impeaches phk’s interpretation that the exit value of the assignment applies before the right side of the assignment.) But here’s a surprise: In my versions of bash, readonly A C= something A= something T= something cmd 0 does execute cmd 0 .  In particular, C=$( cmd 1 ) A=$( cmd 2 ) T=$( cmd 3 ) cmd 0 executes cmd 1 and cmd 3 , but not cmd 2 .  (Note that this is the opposite of its behavior when there is no command.)  And it sets T (as well as C ) in the environment of cmd 0 .  I wonder whether this is a bug in bash. Not so simple: The first paragraph of this answer refers to “simple commands”. The specification says, A “simple command” is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator. These are statements like the ones in my first example block: $ FOO=BAR $ FOO=$(bar) $ FOO=$(bar) baz $ foo $(bar) the first three of which include variable assignments, and the last three of which include command substitutions. But some variable assignments aren’t quite so simple. bash(1) says, Assignment statements may also appear as arguments to the alias , declare , typeset , export , readonly , and local builtin commands ( declaration commands). For export , the POSIX specification says, EXIT STATUS 0 All name operands were successfully exported. >0 At least one name could not be exported, or the -p option was specified and an error occurred. And POSIX doesn’t support local , but bash(1) says, It is an error to use local when not within a function.  The return status is 0 unless local is used outside a function, an invalid name is supplied, or name is a readonly variable. Reading between the lines, we can see that declaration commands like export FOO=$(bar) and local FOO=$(bar) are more like foo $(bar) insofar as they ignore the exit status from bar and give you an exit status based on the main command ( export , local , or foo ).  So we have weirdness like Command Exit Status $ FOO=$(bar) Exit status from "bar" (unless FOO is readonly) $ export FOO=$(bar) 0 (unless FOO is readonly, or other error from “export”) $ local FOO=$(bar) 0 (unless FOO is readonly, statement is not in a function, or other error from “local”) which we can demonstrate with $ export FRIDAY=$(date -d tomorrow) $ echo "FRIDAY = $FRIDAY, status = $?" FRIDAY = Fri, May 04, 2018 8:58:30 PM, status = 0 $ export SATURDAY=$(date -d "day after tomorrow") date: invalid date ‘day after tomorrow’ $ echo "SATURDAY = $SATURDAY, status = $?" SATURDAY = , status = 0 and myfunc() { local x=$(echo "Foo"; true); echo "x = $x -> $?" local y=$(echo "Bar"; false); echo "y = $y -> $?" echo -n "BUT! " local z; z=$(echo "Baz"; false); echo "z = $z -> $?" } $ myfunc x = Foo -> 0 y = Bar -> 0 BUT! z = Baz -> 1 Luckily ShellCheck catches the error and raises SC2155 , which advises that export foo="$(mycmd)" should be changed to foo=$(mycmd) export foo and local foo="$(mycmd)" should be changed to local foo foo=$(mycmd) Credit and Reference I got the idea of concatenating command substitutions — $(bar)$(quux) — from Gilles’s answer to How can I get bash to exit on backtick failure in a similar way to pipefail? , which contains a lot of information relevant to this question.
{ "source": [ "https://unix.stackexchange.com/questions/270828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
270,977
In Bash, when specifying command line arguments to a command, what characters are required to be escaped? Are they limited to the metacharacters of Bash: space, tab, | , & , ; , ( , ) , < , and > ?
The following characters have special meaning to the shell itself in some contexts and may need to be escaped in arguments:

Char       Unicode  Name                             Usage
`          U+0060   Grave Accent (backtick)          Command substitution
~          U+007E   Tilde                            Tilde expansion
!          U+0021   Exclamation mark                 History expansion
#          U+0023   Number sign (hash)               Comments
$          U+0024   Dollar sign                      Parameter expansion
&          U+0026   Ampersand                        Background commands
*          U+002A   Asterisk                         Filename expansion and globbing
(          U+0028   Left parenthesis                 Subshells
)          U+0029   Right parenthesis                Subshells
(tab)      U+0009   Tab ( ⇥ )                        Word splitting (whitespace)
{          U+007B   Left curly bracket (brace)       Brace expansion
[          U+005B   Left square bracket              Filename expansion and globbing
|          U+007C   Vertical line (bar)              Pipelines
\          U+005C   Reverse solidus (backslash)      Escape character
;          U+003B   Semicolon                        Separating commands
'          U+0027   Apostrophe (single quote)        String quoting
"          U+0022   Quotation mark (double quote)    String quoting with interpolation
(newline)  U+000A   Line feed                        Line break
<          U+003C   Less than                        Input redirection
>          U+003E   Greater than                     Output redirection
?          U+003F   Question mark                    Filename expansion and globbing
(space)    U+0020   Space                            Word splitting [1] (whitespace)

Some of those characters are used for more things and in more places than the one I linked. There are a few corner cases that are explicitly optional:

- ! can be disabled with set +H , which is the default in non-interactive shells.
- { can be disabled with set +B .
- * and ? can be disabled with set -f or set -o noglob .
- = (equals sign, U+003D) also needs to be escaped if set -k or set -o keyword is enabled.

Escaping a newline requires quoting — backslashes won't do the job. Any other characters listed in IFS will need similar handling. You don't need to escape ] or } , but you do need to escape ) because it's an operator. Some of these characters have tighter limits on when they truly need escaping than others. For example, a#b is ok, but a #b is a comment, while > would need escaping in both contexts. It doesn't hurt to escape them all conservatively anyway, and it's easier than remembering the fine distinctions.

If your command name itself is a shell keyword ( if , for , do ) then you'll need to escape or quote it too. The only interesting one of those is in , because it's not obvious that it's always a keyword. You don't need to do that for keywords used in arguments, only when you've (foolishly!) named a command after one of them. Shell operators ( ( , & , etc) always need quoting wherever they are.

[1] Stéphane has noted that any other single-byte blank character from your locale also needs escaping. In most common, sensible locales, at least those based on C or UTF-8, it's only the whitespace characters above. In some ISO-8859-1 locales, U+00A0 no-break space is considered blank, including Solaris, the BSDs, and OS X (I think incorrectly). If you're dealing with an arbitrary unknown locale, it could include just about anything, including letters, so good luck. Conceivably, a single byte considered blank could appear within a multi-byte character that wasn't blank, and you'd have no way to escape that other than putting the whole thing in quotes. This isn't a theoretical concern: in an ISO-8859-1 locale from above, that A0 byte which is considered a blank can appear within multibyte characters like UTF-8 encoded "à" ( C3 A0 ). To handle those characters safely you would need to quote them "à" . This behaviour depends on the locale configuration in the environment running the script, not the one where you wrote it. I think this behaviour is broken multiple ways, but we have to play the hand we're dealt. If you're working with any non-self-synchronising multibyte character set, the safest thing would be to quote everything. If you're in UTF-8 or C, you're safe (for the moment).
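When generating such arguments programmatically, bash can produce a correctly escaped form for you, which sidesteps memorizing the table — a sketch (bash-specific; the ${var@Q} form needs bash 4.4 or later):

printf '%q\n' 'a $b `c` & d'     # prints a backslash-escaped form safe to reuse
var='a $b'; echo "${var@Q}"      # prints a quoted form, e.g. 'a $b'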
{ "source": [ "https://unix.stackexchange.com/questions/270977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
271,013
From man touch : -f (ignored) But I don't get what is meant by ignored . I've tried following: $ ls -l file -rw-rw-r-- 1 pandya pandya 0 Mar 20 16:17 file $ touch -f file $ ls -l file -rw-rw-r-- 1 pandya pandya 0 Mar 20 16:18 file And noticed that it changes timestamps in spite of -f . So, I want to know what -f stands for, or what it does.
For GNU utilities, the full documentation is in the info page, where you can read:

-f
    Ignored; for compatibility with BSD versions of `touch'.

See historic BSD man pages for touch , where -f was to force the touch. If you look at the source of those old BSDs, there was no utimes() system call, so touch would open the file in read+write mode, read one byte, seek back and write it again so as to update the last access and last modification time. Obviously, you needed both read and write permissions ( touch would avoid trying to do that if access(W_OK|R_OK) returned false). -f tried to work around that by temporarily changing the permissions to 0666! 0666 means read and write permission to everybody. It had to be that as otherwise (like with a more restrictive permission such as 0600 that still would have permitted the touch ) that could mean during that short window, processes that would otherwise have read or write permission to the file couldn't any more, breaking functionality. That means however that processes that would not otherwise have access to the file now have a short opportunity to open it, breaking security. That's not a very sensible thing to do. Modern touch implementations don't do that. Since then, the utime() system call has been introduced, allowing changing modification and access time separately without having to meddle with the content of the files (which means it also works with non-regular files) and only needs write access for that. GNU touch still doesn't fail if passed the -f option, but just ignores the flag. That way, scripts written for those old versions of BSD don't fail when ported to GNU systems. Not very relevant nowadays.
{ "source": [ "https://unix.stackexchange.com/questions/271013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
271,140
Ideally I'd like a command like this rm --only-if-symlink link-to-file because I have burned myself too many times accidentally deleting the file instead of the symlink pointing to the file. This can be especially bad when sudo is involved. Now I do of course do a ls -al to make sure it's really a symlink and such but that's vulnerable to operator error (similarly named file, typo, etc) and race conditions (if somebody wanted me to delete a file for some reason). Is there some way to check if a file is a symlink and only delete it if it is in one command?
$ rm_if_link(){ [ ! -L "$1" ] || rm -v "$1"; }

# test
$ touch nonlink; ln -s link
$ rm_if_link nonlink
$ rm_if_link link
removed 'link'
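An equivalent one-liner sketch with GNU find, which likewise refuses to touch anything that is not a symlink — -delete is a GNU extension, and -maxdepth 0 restricts the test to the named path itself (find does not follow symlinks by default, so -type l matches the link, not its target):

find link-to-file -maxdepth 0 -type l -delete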
{ "source": [ "https://unix.stackexchange.com/questions/271140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38485/" ] }
271,471
I just formatted microSD card, and would like to run a dd command. Unfortunately dd command fails: $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2 dd: /dev/rdisk2: Resource busy $ Everyone on the internet says I need to unmount the disk first. Sure, can do that and move on. But I want to understand why / what exactly in OS X is making the device busy ? How do I diagnose this? So far I tried: Listing open files: $ lsof /dev/disk2 $ lsof /dev/disk2s1 $ Also: $ lsof /Volumes/UNTITLED $ Listing users working on the file: $ fuser -u /dev/disk2 /dev/disk2: $ fuser -u /dev/disk2s1 /dev/disk2s1: $ Also: $ fuser -u /Volumes/UNTITLED $ Check for system messages: $ sudo dmesg | grep disk $ Also: $ sudo dmesg | grep /Volumes/UNTITLED $ My environment Operating system: Darwin Eugenes-MacBook-Pro-2.local 15.3.0 Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64 x86_64 Information about my microSD: diskutil list disk2 /dev/disk2 (internal, physical): #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *31.9 GB disk2 1: DOS_FAT_32 UNTITLED 31.9 GB disk2s1 P.S. I'm using OS X 10.11. Update 22/3/2016 . Figured it out. I re-ran the lsof and fuser from above using sudo , and finally got to the bottom of the issue: $ sudo fuser /Volumes/UNTITLED/ /Volumes/UNTITLED/: 62 282 $ And: $ sudo lsof /Volumes/UNTITLED/ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mds 62 root 8r DIR 1,6 32768 2 /Volumes/UNTITLED mds 62 root 22r DIR 1,6 32768 2 /Volumes/UNTITLED mds 62 root 23r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds 62 root 25u REG 1,6 0 999999999 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/journalExclusion mds_store 282 root txt REG 1,6 3277 17 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexGroups mds_store 282 root txt REG 1,6 8 23 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexCompactDirectory mds_store 282 root txt REG 1,6 312 19 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexTermIds mds_store 282 root txt REG 1,6 3277 29 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexGroups mds_store 282 root txt REG 1,6 1024 35 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexCompactDirectory mds_store 282 root txt REG 1,6 312 21 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexPositionTable mds_store 282 root txt REG 1,6 8192 31 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexTermIds mds_store 282 root txt REG 1,6 2056 22 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexDirectory mds_store 282 root txt REG 1,6 8192 33 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexPositionTable mds_store 282 root txt REG 1,6 8224 34 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexDirectory mds_store 282 root txt REG 1,6 16 16 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexIds mds_store 282 root txt REG 1,6 65536 48 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/reverseDirectoryStore mds_store 282 root txt REG 1,6 704 24 
/Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexArrays mds_store 282 root txt REG 1,6 65536 26 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.directoryStoreFile mds_store 282 root txt REG 1,6 32768 28 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexIds mds_store 282 root txt REG 1,6 65536 36 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexArrays mds_store 282 root txt REG 1,6 65536 38 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.directoryStoreFile mds_store 282 root 5r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds_store 282 root 17u REG 1,6 8192 12 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/psid.db mds_store 282 root 32r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD mds_store 282 root 41u REG 1,6 28 15 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/indexState $ From the above it's easy to see that processes called mds and mds_store have created and are holding lots of files on the volume.
Apple court, Apple rules. Try diskutil : $ diskutil list ... # if mounted somewhere $ sudo diskutil unmount $device # all the partitions (there's also a "force" option, see the manual) $ sudo diskutil unmountDisk $device # remember zip drives? this would launch them. good times! $ sudo diskutil eject $device (In the case of a disk image, the hdiutil command may also be of interest. You can also click around in Disk Utility.app .)
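Putting the two steps together for the question's exact case (a sketch, assuming the card really is disk2; double-check with diskutil list first): $ sudo diskutil unmountDisk /dev/disk2 $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2 unmountDisk unmounts every volume on the device but leaves the raw node available, which is exactly what dd needs.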
{ "source": [ "https://unix.stackexchange.com/questions/271471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30894/" ] }
271,475
I'm working on a bash script that will split the contents of a text document depending on the data in the line. If the contents of the original file were along the lines of 01 line 01 line 02 line 02 line How can I insert into line 3 of this file using bash to result in 01 line 01 line text to insert 02 line 02 line I'm hoping to do this using a heredoc or something similar in my script #!/bin/bash vim -e -s ./file.txt <<- HEREDOC :3 | startinsert | "text to insert\n" :update :quit HEREDOC The above doesn't work of course but any recommendations that I could implement into this bash script?
You can use the POSIX tool ex by line number: ex a.txt <<eof 3 insert Sunday . xit eof Or string match: ex a.txt <<eof /Monday/ insert Sunday . xit eof https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ex.html
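If GNU sed happens to be available (these rely on GNU extensions, unlike the POSIX ex above), the same two insertions can also be written as one-liners; a sketch: sed -i '3i text to insert' a.txt # insert before line 3 sed -i '/Monday/i Sunday' a.txt # insert before each line matching Monday Note that -i and the one-line form of the i command are GNU extensions.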
{ "source": [ "https://unix.stackexchange.com/questions/271475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
271,659
The question is about special variables. Documentation says: !!:$ designates the last argument of the preceding command . This may be shortened to !$ . ( $_ , an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. There must be some difference I cannot catch, because: $ echo "hello" > /tmp/a.txt $ echo "!$" echo "/tmp/a.txt" /tmp/a.txt $ echo "hello" > /tmp/a.txt $ echo $_ hello What is the difference?
!$ is a word designator of history expansion; it expands to the last word of the previous command in history . In other words, the last word of the previous entry in history. This word is usually the last argument to the command, but not in the case of redirection. In: echo "hello" > /tmp/a.txt the whole command 'echo "hello" > /tmp/a.txt' appeared in history, and /tmp/a.txt is the last word of that command. _ is a shell parameter; it expands to the last argument of the previous command. Here, the redirection is not part of the arguments passed to the command, so only hello is the argument passed to echo . That's why $_ expanded to hello . _ is no longer one of the shell's standard special parameters . It works in bash , zsh , mksh and dash only when interactive, and in ksh93 only when the two commands are on separate lines: $ echo 1 && echo $_ 1 /usr/bin/ksh $ echo 1 1 $ echo $_ 1
{ "source": [ "https://unix.stackexchange.com/questions/271659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39370/" ] }
271,714
How is it possible to run multiple commands and background them using bash? For example: $ for i in {1..10}; do wait file$i &; done where wait is a custom binary. Right now I get an error: syntax error near unexpected token `;' when running the above command. Once backgrounded the commands should run in parallel.
The & , just like ; is a list terminator operator. They have the same syntax and can be used interchangeably (depending on what you want to do). This means that you don't want, or need, command1 &; command2 , all you need is command1 & command2 . So, in your example, you could just write: for i in {1..10}; do wait file$i & done and each wait command will be launched in the background and the loop will immediately move on to the next.
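If the surrounding script needs all ten jobs to finish before continuing, follow the loop with the shell's built-in wait (no arguments). A sketch; note the ./ to keep the custom binary named wait from colliding with the builtin (adjust the path as needed): for i in {1..10}; do ./wait "file$i" & done wait # builtin: blocks until every background job has exited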
{ "source": [ "https://unix.stackexchange.com/questions/271714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82521/" ] }
272,353
I am writing an installation script that will be run as /bin/sh . There is a line prompting for a file: read -p "goat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n] " REPLY I would like to break this long line into many lines so that none of them exceed 80 characters. I'm talking about the lines within the source code of the script; not about the lines that are to be actually printed on the screen when the script is executed! What I've tried: First approach: read -p "oat may try to change directory if cd fails to do so. " \ "Would you like to add this feature? [Y|n] " REPLY This doesn't work since it doesn't print Would you like to add this feature? [Y|n] . Second approach: echo "oat may try to change directory if cd fails to do so. " \ "Would you like to add this feature? [Y|n] " read REPLY That doesn't work either. It prints a newline after the prompt. Adding the -n option to echo doesn't help: it just prints: -n goat oat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n] # empty line here My current workaround is printf '%s %s ' \ "oat may try to change directory if cd fails to do so." \ "Would you like to add this feature? [Y|n] " read REPLY and I wonder if there is a better way. Remember that I am looking for a /bin/sh compatible solution.
First of all, let's decouple the read from the text line by using a variable: text="line-1 line-2" ### Just an example. read -p "$text" REPLY In this way the problem becomes: How to assign two lines to a variable. Of course, a first attempt to do that is: a="line-1 \ line-2" Written like that, the var a actually gets the value line-1 line-2 . But you do not like the lack of indentation that this creates; well, then we may try to read the lines into the var from a here-doc (be aware that the indented lines inside the here-doc need a tab, not spaces, to work correctly): a="$(cat <<-_set_a_variable_ line-1 line-2 _set_a_variable_ )" echo "test1 <$a>" But that would fail as actually two lines are written to $a . A workaround to get only one line might be: a="$( echo $(cat <<-_set_a_variable_ line 1 line 2 _set_a_variable_ ) )" echo "test2 <$a>" That is close, but creates other additional issues. Correct solution. All the attempts above will just make this problem more complex than it needs to be. A very basic and simple approach is: a="line-1" a="$a line-2" read -p "$a" REPLY The code for your specific example is (for any shell whose read supports -p ): #!/bin/dash a="goat can try change directory if cd fails to do so." a="$a Would you like to add this feature? [Y|n] " # absolute freedom to indent as you see fit. read -p "$a" REPLY For all the other shells, use: #!/bin/dash a="goat can try change directory if cd fails to do so." a="$a Would you like to add this feature? [Y|n] " # absolute freedom to indent as you see fit. printf '%s' "$a"; read REPLY
{ "source": [ "https://unix.stackexchange.com/questions/272353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128489/" ] }
272,611
I mistakenly entered chsh -s /usr/bin instead of chsh -s /bin/bash and now I can't log into a root shell, how do I start a bash shell as root manually ?
While root does not have access, a user in the sudo group can still run privileged commands - it seems the error is not in sudo, but elsewhere in the sudo chsh command (e.g. a chsh error). As such, your sudo is apparently working. The passwd file can be edited with: sudo vipw And the root shell changed manually. (first line of /etc/passwd usually) root:x:0:0:root:/root:/bin/bash From man vipw : The vipw and vigr commands edit the files /etc/passwd and /etc/group, respectively. With the -s flag, they will edit the shadow versions of those files, /etc/shadow and /etc/gshadow, respectively. The programs will set the appropriate locks to prevent file corruption.
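If you would rather not edit the file by hand, the same fix can also be done in one command; for example: sudo usermod -s /bin/bash root or, since chsh accepts a user name argument: sudo chsh -s /bin/bash root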
{ "source": [ "https://unix.stackexchange.com/questions/272611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
272,617
I am writing a program that will be sending external emails using postfix with the mail -s command and I need to verify that the email was sent to the specified address. In which case, I am curious if postfix right away reports an error that I am able to get as a return code if email failed sending or if postfix will say it was a success as long as a valid email ([email protected]) was entered and then later report in a log file or such that the email was unable to be sent? Also if the network is down, will postfix still return success as long as a valid email address is used, or will a failure be reported right away?
{ "source": [ "https://unix.stackexchange.com/questions/272617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72302/" ] }
272,850
with logical I mean everything legal in the command ip link as in, for instance: ip link add link dum0 name dum0.200 type vlan protocol 802.1Q id 200 where the logical type would be "vlan". All valid types are, to quote the man page: vlan | veth | vcan | dummy | ifb | macvlan | macvtap | can | bridge | ipoib | ip6tnl | ipip | sit | vxlan |gre | gretap | ip6gre | ip6gretap | vti Note that this clearly is not the physical device type (like ethernet, wifi, ppp etc.) as asked in this question , which does contain a gem of a reference to the physical type which led me to test for it : find /sys/class/net ! -type d | xargs --max-args=1 realpath | while read d; do b=$(basename $d) ; n=$(find $d -name type) ; echo -n $b' ' ; cat $n; done dum0.200 1 dum0.201 1 dum1.300 1 dum1.301 1 dummy0 1 ens36 1 ens33 1 lo 772 dum0 1 dum1 1 wlan0 1 But which apparently finds both dummy, vlan and wlan devices to be of type ARPHRD_ETHER. Does somebody know more? Thanks in advance. ==== Revising this in 2022 . It's from a system with two real ethernet interfaces, one wifi, docker installed but inactive, and libvirt with two networks and five virtual machines. The jq is from stedolan.github.io/jq, commonly installed with a decent package manager. $ ( sudo ip -details -j l | jq -r '.[]|"@", .ifname, .link_type, .linkinfo.info_data.type, .linkinfo.info_kind, .linkinfo.info_slave_kind' | tr '\n' ' ' | tr '@' '\n' ; echo ) | column -t lo loopback null null null enp43s0 ether null null null wlp0s20f3 ether null null null docker0 ether null bridge null virbr2 ether null bridge null virbr1 ether null bridge null enx00e04c680108 ether null null null vnet0 ether tap tun bridge vnet1 ether tap tun bridge vnet2 ether tap tun bridge vnet3 ether tap tun bridge vnet4 ether tap tun bridge vnet5 ether tap tun bridge vnet6 ether tap tun bridge vnet7 ether tap tun bridge vnet8 ether tap tun bridge vnet9 ether tap tun bridge
A simpler solution: ip -details link show For virtual devices, device type is shown on the third line.
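For example, run against the dum0.200 interface from the question, the vlan details appear on the indented detail line; treat the exact layout as illustrative, since it varies between iproute2 versions: $ ip -details link show dev dum0.200 ... vlan protocol 802.1Q id 200 ...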
{ "source": [ "https://unix.stackexchange.com/questions/272850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44864/" ] }
272,858
If I want to move a file called longfile from /longpath/ to /longpath/morepath/ can I do something like mv (/longpath)/(longfile) $1/morepath/$2 i.e. can I let bash know that it should remember a specific part of the input so I can reuse it later in the same input? (In the above example I use an imaginary remembering command that works by enclosing in parentheses and an imaginary reuse command $ that inserts the content of the groups)
You could do this: mv /longpath/longfile !#:1:h/morepath/ See https://www.gnu.org/software/bash/manual/bashref.html#History-Interaction !# is the current command :1 is the first argument in this command :h is the "head" -- think dirname /morepath/ appends that to the head and you're moving a file to a directory, so it keeps the same basename. If you want to alter the "longfile" name, say add a ".txt" extension, you could mv /longpath/longfile !#:1:h/morepath/!#:1:t.txt Personally, I would cut and paste with my mouse. In practice I never get much more complicated than !! or !$ or !!:gs/foo/bar/
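The modifiers compose, so, as another sketch, :r (strip a trailing suffix) turns the same trick into a rename-to-backup: mv /longpath/longfile.txt !#:1:r.bak # expands to: mv /longpath/longfile.txt /longpath/longfile.bak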
{ "source": [ "https://unix.stackexchange.com/questions/272858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106476/" ] }
272,868
How can I use youtube-dl to download videos from a playlist URL in mp4 format only, instead of .mkv or .webm ? I use this command to download videos: youtube-dl -itcv --yes-playlist https://www.youtube.com/playlist?list=.... The results of this command are videos with the extensions .mp4 , .mkv or .webm
To list the available formats type: youtube-dl -F url Then you can choose to download a certain format-type by entering the number for the format code (in the sample below 11 ): youtube-dl -f 11 url Example from webupd8 youtube-dl -F http://www.youtube.com/watch?v=3JZ_D3ELwOQ sample output: [youtube] Setting language [youtube] 3JZ_D3ELwOQ: Downloading webpage [youtube] 3JZ_D3ELwOQ: Downloading video info webpage [youtube] 3JZ_D3ELwOQ: Extracting video information [info] Available formats for 3JZ_D3ELwOQ: format code extension resolution note 171 webm audio only DASH webm audio , audio@ 48k (worst) 140 m4a audio only DASH audio , audio@128k 160 mp4 192p DASH video 133 mp4 240p DASH video 134 mp4 360p DASH video 135 mp4 480p DASH video 136 mp4 720p DASH video 137 mp4 1080p DASH video 17 3gp 176x144 36 3gp 320x240 5 flv 400x240 43 webm 640x360 18 mp4 640x360 22 mp4 1280x720 (best) You can choose best and type youtube-dl -f 22 http://www.youtube.com/watch?v=3JZ_D3ELwOQ To get the best video quality (1080p DASH - format "137") and best audio quality (DASH audio - format "140"), you must use the following command: youtube-dl -f 137+140 http://www.youtube.com/watch?v=3JZ_D3ELwOQ EDIT You can get more options here Video Selection: --playlist-start NUMBER Playlist video to start at (default is 1) --playlist-end NUMBER Playlist video to end at (default is last) --playlist-items ITEM_SPEC Playlist video items to download. Specify indices of the videos in the playlist separated by commas like: "--playlist-items 1,2,5,8" if you want to download videos indexed 1, 2, 5, 8 in the playlist. You can specify range: "--playlist-items 1-3,7,10-13", it will download the videos at index 1, 2, 3, 7, 10, 11, 12 and 13. --match-title REGEX Download only matching titles (regex or caseless sub-string) --reject-title REGEX Skip download for matching titles (regex or caseless sub-string) --max-downloads NUMBER Abort after downloading NUMBER files --min-filesize SIZE Do not download any videos smaller than SIZE (e.g. 50k or 44.6m) --max-filesize SIZE Do not download any videos larger than SIZE (e.g. 50k or 44.6m) --date DATE Download only videos uploaded in this date --datebefore DATE Download only videos uploaded on or before this date (i.e. inclusive) --dateafter DATE Download only videos uploaded on or after this date (i.e. inclusive) --min-views COUNT Do not download any videos with less than COUNT views --max-views COUNT Do not download any videos with more than COUNT views --match-filter FILTER Generic video filter (experimental). Specify any key (see help for -o for a list of available keys) to match if the key is present, !key to check if the key is not present,key > NUMBER (like "comment_count > 12", also works with >=, <, <=, !=, =) to compare against a number, and & to require multiple matches. Values which are not known are excluded unless you put a question mark (?) after the operator.For example, to only match videos that have been liked more than 100 times and disliked less than 50 times (or the dislike functionality is not available at the given service), but who also have a description, use --match-filter "like_count > 100 & dislike_count <? 50 & description" . --no-playlist Download only the video, if the URL refers to a video and a playlist. --yes-playlist Download the playlist, if the URL refers to a video and a playlist. --age-limit YEARS Download only videos suitable for the given age --download-archive FILE Download only videos not listed in the archive file. 
Record the IDs of all downloaded videos in it. --include-ads Download advertisements as well (experimental)
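Coming back to the original question (mp4 only, across a whole playlist): rather than hard-coding a format code, youtube-dl's format-selection expressions can filter by extension, with / as a fallback chain. A sketch (merging the separate video and audio streams requires ffmpeg or avconv to be installed): youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/mp4' --yes-playlist 'https://www.youtube.com/playlist?list=...'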
{ "source": [ "https://unix.stackexchange.com/questions/272868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150122/" ] }
272,871
I have a file that looks like this: foo03a foo02b quux01a foo01a foo02a foo01b foo03b quux01b I'd like it ordered by the last character (so a and b appear together) and then by the preceding number, and then by the prefix (though this is not essential). So that it results in: foo01a quux01a foo02a foo03a foo01b quux01b foo02b foo03b It actually doesn't particularly matter where quux01a and quux01b appear, as long as they're in the relevant group -- they can appear as shown, before foo01b , or after foo03b . Why? These are server names used in a blue/green deployment, so I want the 'A' servers together, then the 'B' servers. I found the -k switch to GNU sort, but I don't understand how to use it to specify a particular character, counting from the end of the string. I tried cat foos | rev | sort | rev , but that sorts foo10a and foo10b (when we count up that far) into the wrong place.
{ "source": [ "https://unix.stackexchange.com/questions/272871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46851/" ] }
273,118
Is there a way to pipe the output of a command and direct it to the stdout as well? So for example, fortune prints a fortune cookie to stdout and also pipes it to next command: $ fortune | tee >(?stdout?) | pbcopy "...Unix, MS-DOS, and Windows NT (also known as the Good, the Bad, and the Ugly)." (By Matt Welsh)
Your assumption: fortune | tee >(?stdout?) | pbcopy won't work because the fortune output will be written to standard out twice, so you will double the output to pbcopy . In OSX (and other systems that support /dev/std{out,err,in} ), you can check it: $ echo 1 | tee /dev/stdout | sed 's/1/2/' 2 2 It outputs 2 twice instead of 1 and 2 . tee outputs twice to stdout , and the tee process's stdout is redirected to sed by the pipe, so all these outputs run through sed and you see double 2 here. You must use other file descriptors, for example standard error through /dev/stderr : $ echo 1 | tee /dev/stderr | sed 's/1/2/' 1 2 or use tty to get the connected pseudo terminal: $ echo 1 | tee "$(tty)" | sed 's/1/2/' 1 2 With zsh and multios option set, you don't need tee at all: $ echo 1 >/dev/stderr | sed 's/1/2/' 1 2
{ "source": [ "https://unix.stackexchange.com/questions/273118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159858/" ] }
273,182
I wanted to install the command locate , which is available via sudo apt-get install mlocate . However, I first ran sudo apt-get install locate which seems to have installed something else. Typing the command locate <package> however seems to call upon mlocate . What is the package locate , and can (should) it be safely removed?
The locate package is the implementation of locate from GNU findutils . The mlocate package is another implementation of the same concept called mlocate . They implement the same basic functionality: quick lookup of file names based on an index that's (typically) rebuilt every night. They differ in some of their functionality beyond basic usage. In particular, GNU locate builds an index of world-readable files only (unless you run it from your account), whereas mlocate builds an index of all files but only lets the calling user see files that it could access. This makes mlocate more useful in most circumstances, but unusable in some unusual installations where it isn't run by the system administrator (because mlocate has to be setuid root ), and a security risk. Under Debian and derivatives, if you install both, locate will run the mlocate implementation, and you need to run locate.findutils to run the GNU implementation. This is managed through alternatives . If you have both installed, they'll both spend time rebuilding their respective index, but other than that they won't conflict with each other.
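On Debian and derivatives you can inspect which implementation currently wins with the alternatives tooling; as a sketch (the exact alternative name and output depend on the release): update-alternatives --display locate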
{ "source": [ "https://unix.stackexchange.com/questions/273182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136753/" ] }
273,424
One of my favorite tricks in Bash is when I open my command prompt in a text editor. I do this (in vi mode) by pressing ESC v . When I do this, whatever is in my command prompt is now displayed in my $EDITOR of choice. I can then edit the command as if it were a document, and when I save and exit everything in that temp file is executed. I'm surprised that none of my friends have heard of this tip, so I've been looking for docs I can share. The problem is that I haven't been able to find anything on it. Also, the search terms related to this tip are very common, so that doesn't help when Googling for the docs. Does anyone know what this technique is called so I can actually look it up?
In the bind -p listing, I can see the command is called edit-and-execute-command , and it is bound to C-x C-e in emacs mode.
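You can confirm it from a running shell; for example: $ bind -p | grep edit-and-execute-command "\C-x\C-e": edit-and-execute-command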
{ "source": [ "https://unix.stackexchange.com/questions/273424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54004/" ] }
273,437
I'm trying to run an rsync between two servers. I'm basing things off of this post: How to rsync files between two remotes? What I find missing is how to facilitate the rsync (via ssh) when a key (the same key) is required for logging into each server. Here's the closest I've got: ssh -i ~/path/to/pem/file.pem -R localhost:50000:SERVER2:22 ubuntu@SERVER1 'rsync -e "ssh -p 50000" -vur /home/ubuntu/test localhost:/home/ubuntu/test' It seems like the initial connection works properly, however I can't seem to figure out how to specify the key and username for SERVER2. Any thoughts?
{ "source": [ "https://unix.stackexchange.com/questions/273437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163655/" ] }
273,529
Bash has the PROMPT_DIRTRIM option, e.g. when I set PROMPT_DIRTRIM=3 , then a long path like: user@computer: /this/is/some/silly/path would show instead as: user@computer: .../some/silly/path Does a similar option exist for zsh ?
To get an effect similar to bash's, that is, including the ... , try using: %(4~|.../%3~|%~) in your PROMPT variable (which might also be named PS1 in your configuration) in place of %~ . This checks if the path is at least 4 elements long ( %(4~|true|false) ) and, if true, prints some dots with the last 3 elements ( .../%3~ ); otherwise the full path is printed ( %~ ). I noticed that bash seems to shorten paths in the home directory differently, for example: ~/.../some/long/path For a similar effect, you may want to use: %(5~|%-1~/…/%3~|%4~) This checks whether the path is at least 5 elements long, and in that case prints the first element ( %-1~ ), some dots ( /…/ ) and the last 3 elements. It is not exactly the same, as paths that are not in your home directory will also have the first element at the beginning, while bash just prints dots in that case. So /this/…/some/silly/path instead of .../some/silly/path But this might not necessarily be a bad thing. Instead of %~ you can also use %d (or your current PROMPT might already use %d ). The difference is that %d shows full absolute paths, while %~ shows shorthands for “named directories”: e.g. /home/youruser becomes ~ and /home/otheruser becomes ~otheruser . If you prefer to use the full path as the basis for the shortening, just replace any occurrence of ~ with d .
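Assembled into a complete prompt, that could look like the following sketch (%n, %m and %# are user, host and prompt character; adjust to taste): PROMPT='%n@%m: %(5~|%-1~/…/%3~|%4~) %# '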
{ "source": [ "https://unix.stackexchange.com/questions/273529", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58056/" ] }
273,624
I need to search for a keyword using awk, but I want to perform a case-insensitive (non case sensitive) search. I think the best approach is to capitalize both the search term ("key word") and the target line that awk is reading at the same time. From this question I learned how to use toupper to print in all uppercase, but I don't know how to use it in a match because that answer just shows printing and doesn't leave the uppercase text in a variable. Here is an example, given this input: blablabla &&&Key Word&&& I want all these text and numbers 123 and chars !"£$%& as output &&&KEY WORD&&& blablabla I'd like this output: I want all these text and numbers 123 and chars !"£$%& as output This is what I have, but I don't know how to add in toupper : awk "BEGIN {p=0}; /&&&key word&&&/ { p = ! p ; next } ; p { print }" text.txt
Replace your expression to match a pattern (i.e. /&&&key word&&&/ ) by another expression explicitly using $0 , the current line: tolower($0) ~ /&&&key word&&&/ or toupper($0) ~ /&&&KEY WORD&&&/ so you have awk 'tolower($0) ~ /&&&key word&&&/ { p = ! p ; next }; p' text.txt You need single quotes because of the $0 , the BEGIN block can be removed as variables are initialised by default to "" or 0 on first use, and {print} is the default action, as mentioned in the comments below.
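If you happen to be on GNU awk specifically (this is a gawk extension, not portable to mawk or BSD awk), the IGNORECASE variable achieves the same without rewriting the pattern: gawk 'BEGIN { IGNORECASE = 1 } /&&&key word&&&/ { p = !p; next }; p' text.txt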
{ "source": [ "https://unix.stackexchange.com/questions/273624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153322/" ] }
273,660
I assigned a var like this: MYCUSTOMTAB='    ' But using it in echo , both: echo $MYCUSTOMTAB"blah blah" or echo -e $MYCUSTOMTAB"blah blah" just return a single space and the rest of the string: blah blah How can I print the full string untouched? I want to use it to get a custom indent because \t is too wide for my taste.
Put your variable inside double quotes to prevent field splitting , which ate your spaces: $ MYCUSTOMTAB='    ' $ echo "${MYCUSTOMTAB}blah blah"     blah blah
{ "source": [ "https://unix.stackexchange.com/questions/273660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
273,861
In zsh , I want to have unlimited history. I set HISTSIZE= , which works in bash . Now I import an old history mv old_history .history which is pretty big wc -l .history 43562 .history If I now close and start zsh again, I see wc -l .history 32234 .history Can't I have unlimited history in zsh ?
There is a limit: the resources of your machine. HISTFILE="$HOME/.zsh_history" HISTSIZE=10000000 SAVEHIST=10000000 setopt BANG_HIST # Treat the '!' character specially during expansion. setopt EXTENDED_HISTORY # Write the history file in the ":start:elapsed;command" format. setopt INC_APPEND_HISTORY # Write to the history file immediately, not when the shell exits. setopt SHARE_HISTORY # Share history between all sessions. setopt HIST_EXPIRE_DUPS_FIRST # Expire duplicate entries first when trimming history. setopt HIST_IGNORE_DUPS # Don't record an entry that was just recorded again. setopt HIST_IGNORE_ALL_DUPS # Delete old recorded entry if new entry is a duplicate. setopt HIST_FIND_NO_DUPS # Do not display a line previously found. setopt HIST_IGNORE_SPACE # Don't record an entry starting with a space. setopt HIST_SAVE_NO_DUPS # Don't write duplicate entries in the history file. setopt HIST_REDUCE_BLANKS # Remove superfluous blanks before recording entry. setopt HIST_VERIFY # Don't execute immediately upon history expansion. setopt HIST_BEEP # Beep when accessing nonexistent history. From the ZSH Mailing list : You should determine how much memory you have, how much of it you can allow to be occupied by the history (AFAIK it is always fully loaded into memory) and act accordingly. Removing the limit is not wiser as it leaves you with an idea that there is no limit while it is always limited by available resources. Or if you do not think you will ever hit a problem with resource exhaustion you can just set HISTSIZE to LONG_MAX from limits.h: it is the maximum number HISTSIZE can have. Which explains the Gentoo solution: export HISTSIZE=2000 export HISTFILE="$HOME/.history" History won't be saved without the following command: export SAVEHIST=$HISTSIZE To prevent history from recording duplicated entries (such as ls -l entered many times during a single shell session), you can set the hist_ignore_all_dups option: setopt hist_ignore_all_dups A useful trick to prevent particular entries from being recorded in the history is to precede them with at least one space. setopt hist_ignore_space
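After importing the old history, a quick sanity check that nothing is trimmed on exit is to compare the file with what the shell has loaded; for example: wc -l "$HISTFILE" # entries on disk fc -l 1 | wc -l # events known to the running shell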
{ "source": [ "https://unix.stackexchange.com/questions/273861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58056/" ] }
273,876
The following message appears almost every time I shutdown my computer: A stop job is running for Session c2 of user ... (1min 30s) It waits for 1min30s then continues the shutdown process. I follow this systemd shutdown diagnosis guide and get the shutdown-log.txt (I can't paste directly the log here because it's very long). Unfortunately, I don't understand the log by myself. Could anyone help me to find out what makes my system doesn't shutdown properly? I run Arch Linux with kernel 4.4.5-1-ARCH , my systemd version is 229-3 . Addition 1: I observe that every time I logout, and then shutdown my computer from the login screen, it doesn't get the message A stop job is running... . I tried to logout before shutdown for many times, so I think it doesn't occur by chance. Hope that information could help. Addition 2: It is always session c2 that causes shutdown hanging. So as @n.st suggest, I looked at Diagnosing Shutdown Problems again and stored loginctl session-status c2 instead of dmesg , but then there is nothing on the shutdown-log.txt . I replaced loginctl session-status c2 by systemd-cgls and got the following log: Control group /: -.slice └─init.scope ├─ 1 /usr/lib/systemd/systemd-shutdown reboot --log-level 6 --log-target ... ├─1069 /usr/lib/systemd/systemd-shutdown reboot --log-level 6 --log-target ... ├─1071 /bin/sh /usr/lib/systemd/system-shutdown/debug.sh reboot └─1074 systemd-cgls Any ideas? Note: After I updated to kernel 4.6.4-1-ARCH and systemd 230-7 , the error no longer happened.
A workaround to this problem is to reduce this timeout in /etc/systemd/system.conf from 90s down to, for example, 10s: DefaultTimeoutStopSec=10s and run the following command in a terminal after making the changes: $ systemctl daemon-reload
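The directive belongs to the [Manager] section of that file. On systemd versions that read /etc/systemd/system.conf.d/ drop-ins, a variant that survives package upgrades is a small override file; a sketch: # /etc/systemd/system.conf.d/10-stop-timeout.conf [Manager] DefaultTimeoutStopSec=10s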
{ "source": [ "https://unix.stackexchange.com/questions/273876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154739/" ] }
273,971
We can get CPU information using the lscpu command; is there any command to get hard disk information in a Linux terminal, in a similar way?
If you are looking for partitioning information you can use fdisk or parted . If you are more interested in how the various partitions are associated with the mount points try lsblk which I often use as: lsblk -o "NAME,MAJ:MIN,RM,SIZE,RO,FSTYPE,MOUNTPOINT,UUID" to include UUID info. And finally smartctl -a /dev/yourdrive gives you detailed info like: === START OF INFORMATION SECTION === Device Model: WDC WD40EFRX-68WT0N0 Serial Number: WD-WCC4E4LA4965 LU WWN Device Id: 5 0014ee 261ca5a3f Firmware Version: 82.00A82 User Capacity: 4,000,787,030,016 bytes [4.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 5400 rpm Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Sun Apr 3 10:59:55 2016 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled and more. Some of these commands need to be run with sudo to get all info.
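For a compact, lscpu-style one-liner covering just the drives themselves (model, size, whether rotational), lsblk's device-only mode also works; for example: lsblk -d -o NAME,MODEL,SIZE,ROTA,TRAN (Available columns vary a little between util-linux versions; lsblk --help lists them.)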
{ "source": [ "https://unix.stackexchange.com/questions/273971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164038/" ] }
273,982
I have some nodejs projects and other files that I want to put on the CoreOS private server. What is the easiest method to take files from the workstation (Windows) and put them onto the CoreOS system? Is there anything I could do other than making a Docker container with FTP? The goal is to be able to edit with my favorite editor on my PC and then bring the files to the CoreOS server in order to build Docker files there. What is the best solution for this?
{ "source": [ "https://unix.stackexchange.com/questions/273982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112485/" ] }
274,144
I have a test.wav file.  I need to use this file to process an application, with following properties: monochannel 16 kHz sample rate 16-bit Now, I'm using the following commands to attain these properties: sox disturbence.wav -r 16000 disturbence_16000.wav sox disturbence_16000.wav -c 1 disturbence_1600_mono.wav sox disturbence_1600_mono.wav -s -b 16 disturbence_1600_mono_16bit.wav Here to get a single file, three steps are involved and two temporary files are created.  It is a time-consuming process. I thought of writing a script to do these process but I'm keeping this is a last option. In single command, can I convert a .wav file to the required format?
sox disturbence.wav -r 16000 -c 1 -b 16 disturbence_16000_mono_16bit.wav gives within one command Sample rate of 16 kHz ( -r 16000 ), one channel (mono) ( -c 1 ), 16 bits bit depth ( -b 16 ).
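To confirm the output really has all three properties, the soxi inspector that ships with sox can be used; e.g. (output abbreviated): $ soxi disturbence_16000_mono_16bit.wav Channels : 1 Sample Rate : 16000 Precision : 16-bit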
{ "source": [ "https://unix.stackexchange.com/questions/274144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95366/" ] }
274,175
I have a function for quickly making a new SVN branch which looks like so function svcp() { svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "dev branch for $2"; } Which I use to quickly make a new branch without having to look up and copy paste the addresses and some other stuff. However for the message (-m option), I'd like to have it so that if I provide a third parameter then that is used as the message, otherwise the 'default' message of "dev branch for $2" is used. Can someone explain how this is done?
function svcp() { msg=${3:-dev branch for $2} svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "$msg"; } The variable msg is set to $3 if $3 is set and non-empty; otherwise it is set to the default value of dev branch for $2 . $msg is then used as the argument for -m .
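Calling it then looks like this (hypothetical branch and feature names, purely to illustrate the two styles): svcp 1.2 my-feature # -m "dev branch for my-feature" svcp 1.2 my-feature "hotfix groundwork" # -m "hotfix groundwork"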
{ "source": [ "https://unix.stackexchange.com/questions/274175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155535/" ] }
274,229
I know I can write to a file by simply doing :w <file> . I would like to know though how can I write to a file by appending to it instead of overwriting it. Example use case: I want to take some samples out of a log file into another file. To achieve that today I can do: Open the log file Select some lines with Shift+v Write to a file: :w /tmp/samples Select some more lines with Shift+v Append to /tmp/samples with :w !cat - >> /foo/samples Unfortunately step 5 is long, ugly and error prone (missing a > makes you lose data). I hope Vim has something better here.
From :h :w : :w_a :write_a E494 :[range]w[rite][!] [++opt] >> Append the specified lines to the current file. :[range]w[rite][!] [++opt] >> {file} Append the specified lines to {file}. '!' forces the write even if file does not exist. So, if you have selected the text using visual mode, just do :w >> /foo/samples ( :'<,'> will be automatically prepended). If you miss out on a > , Vim will complain: E494: Use w or w>>
{ "source": [ "https://unix.stackexchange.com/questions/274229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72086/" ] }
274,273
I was reading the famous Unix Recovery Legend , and it occurred to me to wonder: If I had a BusyBox shell open, and the BusyBox binary were itself deleted, would I still be able to use all the commands included in the BusyBox binary? Clearly I wouldn't be able to use the BB version of those commands from another running shell such as bash , since the BusyBox file itself would be unavailable for bash to open and run. But from within the running instance of BusyBox, it appears to me there could be two methods by which BB would run a command: It could fork and exec a new instance of BusyBox, calling it using the appropriate name—and reading the BusyBox file from disk to do so. It could fork and perform some internal logic to run the specified command (for example, by running it as a function call). If (1) is the way BusyBox works, I would expect that certain BusyBox-provided commands would become unavailable from within a running instance of BB after the BB binary were deleted. If (2) is how it works, BusyBox could be used even for recovery of a system where BB itself had been deleted—provided there were still a running instance of BusyBox accessible. Is this documented anywhere? If not, is there a way to safely test it?
By default, BusyBox doesn't do anything special regarding the applets that it has built in (the commands listed with busybox --help ). However, if the FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS options are enabled at compile time, then when BusyBox sh¹ executes a command which is a known applet name, it doesn't do the normal PATH lookup, but instead runs its built-in applets through a shortcut: Applets that are declared as “noexec” in the source code are executed as function calls in a forked process. As of BusyBox 1.22, the following applets are noexec: chgrp , chmod , chown , cksum , cp , cut , dd , dos2unix , env , fold , hd , head , hexdump , ln , ls , md5sum , mkfifo , mknod , sha1sum , sha256sum , sha3sum , sha512sum , sort , tac , unix2dos . Applets that are declared as “nofork” in the source code are executed as function calls in the same process. As of BusyBox 1.22, the following applets are nofork: [[ , [ , basename , cat , dirname , echo , false , fsync , length , logname , mkdir , printenv , printf , pwd , rm , rmdir , seq , sync , test , true , usleep , whoami , yes . Other applets are really executed (with fork and execve ), but instead of doing a PATH lookup, BusyBox executes /proc/self/exe , if available (which is normally the case on Linux), and a path defined at compile time otherwise. This is documented in a bit more detail in docs/nofork_noexec.txt . The applet declarations are in include/applets.src.h in the source code. Most default configurations turn these features off, so that BusyBox executes external commands like any other shell. Debian turns these features on in both its busybox and busybox-static packages. So if you have a BusyBox executable compiled with FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS , then you can execute all BusyBox commands from a BusyBox shell even if the executable is deleted (except for the applets that are not listed above, if /proc/self/exe is not available). ¹ There are actually two implementations of "sh" in BusyBox — ash and hush — but they behave the same way in this respect.
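A disposable way to see this in action (only in a throwaway chroot or container built around such a BusyBox, never on a system you care about) is roughly: # inside a running busybox sh compiled with FEATURE_SH_STANDALONE: rm /bin/busybox # delete the binary under the shell ls / # still works via the built-in applet cat /proc/self/exe > /bin/busybox # on Linux, even recover the binary chmod +x /bin/busybox This works because cat is a nofork applet, so /proc/self/exe still points at the deleted but still-open busybox image.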
{ "source": [ "https://unix.stackexchange.com/questions/274273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
274,330
#!/bin/bash MAXCDN_ARRAY="108.161.176.0/20 94.46.144.0/20 146.88.128.0/20 198.232.124.0/22 23.111.8.0/22 217.22.28.0/22 64.125.76.64/27 64.125.76.96/27 64.125.78.96/27 64.125.78.192/27 64.125.78.224/27 64.125.102.32/27 64.125.102.64/27 64.125.102.96/27 94.31.27.64/27 94.31.33.128/27 94.31.33.160/27 94.31.33.192/27 94.31.56.160/27 177.54.148.0/24 185.18.207.65/26 50.31.249.224/27 50.31.251.32/28 119.81.42.192/27 119.81.104.96/28 119.81.67.8/29 119.81.0.104/30 119.81.1.144/30 27.50.77.226/32 27.50.79.130/32 119.81.131.130/32 119.81.131.131/32 216.12.211.59/32 216.12.211.60/32 37.58.110.67/32 37.58.110.68/32 158.85.206.228/32 158.85.206.231/32 174.36.204.195/32 174.36.204.196/32" $IP = 108.161.184.123 if [ $IP in $MAXCDN_ARRAY ]; then: echo "$IP is in MAXCDN range" else: echo "$IP is not in MAXCDN range" fi I have a list of IPs in MAXCDN_ARRAY to be used as whitelist. I want to check if a specific IP address is in range in this array. How can I structure the code so that it can compare all IPs in the array and say the specific IP in in range of this list or not?
You can use grepcidr to check if an IP address is in a list of CIDR networks. #! /bin/bash NETWORKS="108.161.176.0/20 94.46.144.0/20 146.88.128.0/20 198.232.124.0/22 23.111.8.0/22 217.22.28.0/22 64.125.76.64/27 64.125.76.96/27 64.125.78.96/27 64.125.78.192/27 64.125.78.224/27 64.125.102.32/27 64.125.102.64/27 64.125.102.96/27 94.31.27.64/27 94.31.33.128/27 94.31.33.160/27 94.31.33.192/27 94.31.56.160/27 177.54.148.0/24 185.18.207.65/26 50.31.249.224/27 50.31.251.32/28 119.81.42.192/27 119.81.104.96/28 119.81.67.8/29 119.81.0.104/30 119.81.1.144/30 27.50.77.226/32 27.50.79.130/32 119.81.131.130/32 119.81.131.131/32 216.12.211.59/32 216.12.211.60/32 37.58.110.67/32 37.58.110.68/32 158.85.206.228/32 158.85.206.231/32 174.36.204.195/32 174.36.204.196/32" for IP in 108.161.184.123 108.161.176.123 192.168.0.1 172.16.21.99; do grepcidr "$NETWORKS" <(echo "$IP") >/dev/null && \ echo "$IP is in MAXCDN range" || \ echo "$IP is not in MAXCDN range" done NOTE: grepcidr expects the IP address(es) it is matching to be in a file, not just an argument on the command line. That's why I had to use <(echo "$IP") above. Output: 108.161.184.123 is in MAXCDN range 108.161.176.123 is in MAXCDN range 192.168.0.1 is not in MAXCDN range 172.16.21.99 is not in MAXCDN range grepcidr is available pre-packaged for several distros, including Debian: Package: grepcidr Version: 2.0-1 Description-en: Filter IP addresses matching IPv4 CIDR/network specification grepcidr can be used to filter a list of IP addresses against one or more Classless Inter-Domain Routing (CIDR) specifications, or arbitrary networks specified by an address range. As with grep, there are options to invert matching and load patterns from a file. grepcidr is capable of comparing thousands or even millions of IPs to networks with little memory usage and in reasonable computation time. . grepcidr has endless uses in network software, including: mail filtering and processing, network security, log analysis, and many custom applications. Homepage: http://www.pc-tools.net/unix/grepcidr/ Otherwise, the source is available at the link above. Another alternative is to write a perl or python script using one of the many libraries/modules for manipulating and checking IPv4 addresses with those languages. For example, the perl module Data::Validate::IP has an is_innet_ipv4($ip, $network) function; Net::CIDR::Lite has a very similar $cidr->find($ip); method; and Net::IPv4Addr has an ipv4_in_network() function. python has comparable libraries, including ipy , ipaddr , and ipcalc , amongst others.
{ "source": [ "https://unix.stackexchange.com/questions/274330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81390/" ] }
274,428
I have a pdf file that contains images and I want to reduce its size in order to upload it to a site with a size limit. So, how can I reduce the size of a pdf file from the command-line?
You can use gs - GhostScript (PostScript and PDF language interpreter and previewer) as follows: Set pdfwrite as the output device with -sDEVICE=pdfwrite Use the appropriate -dPDFSETTINGS . From the Documentation : -dPDFSETTINGS= configuration Presets the "distiller parameters" to one of four predefined settings: /screen selects low-resolution output similar to the Acrobat Distiller "Screen Optimized" setting. /ebook selects medium-resolution output similar to the Acrobat Distiller "eBook" setting. /printer selects output similar to the Acrobat Distiller "Print Optimized" setting. /prepress selects output similar to Acrobat Distiller "Prepress Optimized" setting. /default selects output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file. The -o option names the output file and also sets -dNOPAUSE and -dBATCH (see Interaction-related parameters ) Example: $ du -h file.pdf 27M file.pdf $ gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -q -o output.pdf file.pdf $ du -h output.pdf 900K output.pdf Here -q suppresses normal startup messages, and also does the equivalent of -dQUIET , which suppresses routine information comments
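Since the best preset depends on the document's images, a quick way to choose is to render them all and compare sizes; a small sketch: for s in screen ebook printer prepress; do gs -sDEVICE=pdfwrite -dPDFSETTINGS=/$s -q -o "out-$s.pdf" file.pdf; done; du -h out-*.pdf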
{ "source": [ "https://unix.stackexchange.com/questions/274428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
274,658
After downloading the source code for Bash, I was browsing through the doc directory and came across the following files: bash.1 is a regular troff file used to build the man page . bash.0 is like a plain text version of the man page – only that it has the ^H backspace control character liberally distributed throughout it. These control characters are not displayed in the representation provided by the Git web interface but the actual file can be downloaded and examined in text editor such as Vim. Running the file command on bash.0 prints the following output: bash.0: ASCII text, with overstriking I’ve never come across this file format before and I was wondering what its purpose is and how it’s used. Searching the Web for the phrase “ASCII text, with overstriking” hasn’t been very enlightening.
Overstriking is a method used in nroff (see the Troff User’s Manual ) to offer more typographical possibilities than plain ASCII would allow: bold text (by overstriking the same character) underlined text (by overstriking _ ) accents and diacritics ( e.g. é produced by overstriking e with ’ ) and various other symbols, as permitted by the target output device. In bash , these .0 files are produced directly by nroff , with Makefile rules such as .1.0: $(RM) $@ -${NROFF} -man $< > $@ You can view such files using less ; it will process the overstriking sequences and replace them as appropriate: less bash.0 Originally nroff 's output targeted typewriter-style output devices, which would back up every time they received a backspace character; overstriking would produce the desired visual output. As pointed out by chirlu , striking the same character twice would usually result in a bolder appearance thanks to the inevitable misalignment of the successive strikes; the increase in the amount of ink deposited would also help. ( troff targeted typesetting machines.)
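You can reproduce and strip the effect from a shell: col -b drops the backspaces (keeping only the final character per column), while ul renders the sequences on terminals that support it. For example: printf 'N\bNa\bam\bme\be\n' | col -b # prints plain "Name" printf '_\bN_\ba_\bm_\be\n' | ul # prints "Name" underlined (terminal permitting)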
{ "source": [ "https://unix.stackexchange.com/questions/274658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22812/" ] }
275,053
For example: [root@ip-10-0-7-125 ~]# history | grep free 594 free -m 634 free -m | xargs | awk '{print "free/total memory" $17 " / " $ 8}' 635 free -m 636 free -m | xargs | awk '{print "free/total memory" $9 " / " $ 10}' 736 df -h | xargs | awk '{print "free/total disk: " $11 " / " $9}' 740 df -h | xargs | awk '{print "free/total disk: " $11 " / " $8}' 741 free -m | xargs | awk '{print "free/total memory: " $17 " / " $8 " MB"}' I'm just wondering if there is any way to execute the 636 command without typing it again, just by typing something plus the number, like history 636 or something.
In bash, just !636 will be ok.
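If you prefer not to rely on history expansion, the POSIX fc builtin can do the same by event number; for example: fc -s 636 # re-execute event 636 immediately fc 636 # open event 636 in $EDITOR, run it on save-and-quit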
{ "source": [ "https://unix.stackexchange.com/questions/275053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
275,060
I'm trying to write a shell script using bash for the following problem: Write a loop to go through three values (A B C) and displays each of these values to the screen (hint use a ‘for’ loop). I figured out it would be something like this but I'm not sure, so any advice would be much appreciated. For (( EXP1; EXP2; EXP3 )) do command1 command2 command3 done
{ "source": [ "https://unix.stackexchange.com/questions/275060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164697/" ] }
275,074
I am using openSUSE Tumbleweed. I've bought a new PCI-E Wireless card, that supports 5G WiFi (Intel 5100 AGN). It doesn't show up in lspci and even if I take old adapter out it still cannot see my new one. I have tried switching it off and on again in BIOS, but nothing helps. The driver must be installed according to firmware folder /lib/firmware/iwlwifi-100-5.ucode /lib/firmware/iwlwifi-1000-3.ucode /lib/firmware/iwlwifi-1000-5.ucode /lib/firmware/iwlwifi-105-6.ucode /lib/firmware/iwlwifi-135-6.ucode /lib/firmware/iwlwifi-2000-6.ucode /lib/firmware/iwlwifi-2030-6.ucode /lib/firmware/iwlwifi-3160-10.ucode /lib/firmware/iwlwifi-3160-12.ucode /lib/firmware/iwlwifi-3160-13.ucode /lib/firmware/iwlwifi-3160-16.ucode /lib/firmware/iwlwifi-3160-7.ucode /lib/firmware/iwlwifi-3160-8.ucode /lib/firmware/iwlwifi-3160-9.ucode /lib/firmware/iwlwifi-3945-2.ucode /lib/firmware/iwlwifi-4965-2.ucode /lib/firmware/iwlwifi-5000-1.ucode /lib/firmware/iwlwifi-5000-2.ucode /lib/firmware/iwlwifi-5000-5.ucode /lib/firmware/iwlwifi-5150-2.ucode /lib/firmware/iwlwifi-6000-4.ucode /lib/firmware/iwlwifi-6000g2a-5.ucode /lib/firmware/iwlwifi-6000g2a-6.ucode /lib/firmware/iwlwifi-6000g2b-5.ucode /lib/firmware/iwlwifi-6000g2b-6.ucode /lib/firmware/iwlwifi-6050-4.ucode /lib/firmware/iwlwifi-6050-5.ucode /lib/firmware/iwlwifi-7260-10.ucode /lib/firmware/iwlwifi-7260-12.ucode /lib/firmware/iwlwifi-7260-13.ucode /lib/firmware/iwlwifi-7260-16.ucode /lib/firmware/iwlwifi-7260-7.ucode /lib/firmware/iwlwifi-7260-8.ucode /lib/firmware/iwlwifi-7260-9.ucode /lib/firmware/iwlwifi-7265-10.ucode /lib/firmware/iwlwifi-7265-12.ucode /lib/firmware/iwlwifi-7265-13.ucode /lib/firmware/iwlwifi-7265-16.ucode /lib/firmware/iwlwifi-7265-8.ucode /lib/firmware/iwlwifi-7265-9.ucode /lib/firmware/iwlwifi-7265D-10.ucode /lib/firmware/iwlwifi-7265D-12.ucode /lib/firmware/iwlwifi-7265D-13.ucode /lib/firmware/iwlwifi-7265D-16.ucode /lib/firmware/iwlwifi-8000C-13.ucode /lib/firmware/iwlwifi-8000C-16.ucode DMESG: rextuz@linux-c84g:~$ dmesg | grep Firmware [ 0.358267] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 0.401370] acpi PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridge rextuz@linux-c84g:~$ dmesg | grep firmware [ 5.713117] psmouse serio2: trackpoint: IBM TrackPoint firmware: 0x0e, buttons: 3/3 [ 7.639514] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138 op_mode iwldvm [ 5123.606856] usb 2-1.2: device firmware changed [12107.630137] usb 2-1.2: device firmware changed [12111.314260] usb 2-1.2: device firmware changed rextuz@linux-c84g:~$ dmesg | grep Wireless [ 7.622057] Intel(R) Wireless WiFi driver for Linux [ 7.659264] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C lspci and lshw linux-c84g:/home/rextuz # lspci -vnn | grep -i net 00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection [8086:1502] (rev 04) 03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [Condor Peak] [8086:0084] linux-c84g:/home/rextuz # lshw -C network *-network description: Ethernet interface product: 82579LM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: enp0s25 version: 04 serial: f0:de:f1:6f:61:8d capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k 
firmware=0.13-3 latency=0 link=no multicast=yes port=twisted pair resources: irq:29 memory:f2500000-f251ffff memory:f252b000-f252bfff ioport:5080(size=32) *-network DISABLED description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlp3s0 version: 00 serial: 8c:a9:82:be:c0:9e width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=4.5.0-2-default firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:28 memory:f2400000-f2401fff *-network description: Ethernet interface physical id: 2 logical name: enp0s29u1u2 serial: c6:bc:a4:94:d0:53 capabilities: ethernet physical configuration: broadcast=yes driver=rndis_host driverversion=22-Aug-2005 firmware=RNDIS device ip=192.168.42.209 link=yes multicast=yes How do I make the kernel use my new adapter instead of, or together with, the old one?
In bash, just !636 will be ok.
{ "source": [ "https://unix.stackexchange.com/questions/275074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140213/" ] }
275,243
Edited: do not run this to test it unless you want to destroy data. Could someone help me understand what I got? dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: Why specifically 4096 for count? dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What exactly does this do? Warning: the above code will render some or all of the specified device/disk's data unusable!
dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: Why specifically 4096 for count? This will zero out the first 16 MiB of the drive (4096 blocks of 4096 bytes each). 16 MiB is probably more than enough to nuke any "start of disk" structures while being small enough that it won't take very long. dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What exactly does this do? blockdev --getsz gets the size of the block device in "512 byte sectors". So this command looks like it was intended to zero out the last 2 MiB of the drive. Unfortunately, this command is syntactically broken. I expect the command was originally intended to be dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr `blockdev --getsz /dev/sda` - 4096) and the backticks got lost somewhere along the line of people copy/pasting it between different environments. Old partition tables, LVM metadata, raid metadata etc. can cause problems when reusing a drive. Zeroing out sections at the start and end of the drive will generally avoid these problems while being much faster than zeroing out the whole drive.
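For reference, the repaired command can also be written without backticks, using $( ) command substitution plus shell arithmetic. This is a sketch only, to be run solely against a disk you intend to destroy (the device name is just the example from the question):

    dd if=/dev/zero of=/dev/sda bs=512 count=4096 \
        seek=$(( $(blockdev --getsz /dev/sda) - 4096 ))

Unlike backticks, $( ) nests cleanly and survives copy/pasting, which is exactly the failure mode that broke the original.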
{ "source": [ "https://unix.stackexchange.com/questions/275243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164982/" ] }
275,329
I'm still very new to scripting in bash, and just trying a few things I thought would be basic. I want to run DDNS that updates from my server running Ubuntu 14.04. Borrowing some code from dnsimple, this is what I have so far: #!/bin/bash LOGIN="email" TOKEN="token" DOMAIN_ID="domain" RECORD_ID="record" IP=`curl -s http://icanhazip.com/` OUTPUT=` curl -H "Accept: application/json" \ -H "Content-Type: application/json" \ -H "X-DNSimple-Domain-Token: $TOKEN" \ -X "PUT" \ -i "https://api.dnsimple.com/v1/domains/$DOMAIN_ID/records/$RECORD_ID" \ -d "{\"record\":{\"content\":\"$IP\"}}"` if ! echo "$OUTPUT" | grep -q "(Status:\s200)"; then echo "match" $(echo "$OUTPUT" | grep -oP '(?<="message":")(.[^"]*)' >> /home/ddns/ddns.log) $(echo "$OUTPUT"| grep -P '(Status:\s[0-9]{3}\s)' >> /home/ddns/ddns.log) fi The idea is that it runs every 5 minutes, which I have working using a cronjob. I then want to check the output of the curl call to see if the status is "200" or something else. If it is something else, I want to save the output to a file. What I can't get working is the if statement. As I understand it, the -q on the grep command will provide an exit code for the if statement. However, I can't seem to get it to work. Where have I gone wrong?
You're almost there. Just omit the exclamation mark: OUTPUT='blah blah (Status: 200)' if echo "$OUTPUT" | grep -q "(Status:\s200)"; then echo "MATCH" fi Result: MATCH The if condition is fulfilled if grep returns with exit code 0 (which means a match). The ! exclamation mark will negate this.
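For the script's larger goal of logging only when the status is not 200, the negated form was the right shape; the problem was elsewhere. A minimal sketch of that branch, with the regexes lightly adapted from the question (and assuming the status line actually appears in $OUTPUT in that form), the unnecessary $( ) wrappers around the logging lines removed, and the log path kept as in the question:

    if ! echo "$OUTPUT" | grep -q "(Status:\s200)"; then
        echo "$OUTPUT" | grep -oP '(?<="message":")[^"]*' >> /home/ddns/ddns.log
        echo "$OUTPUT" | grep -P 'Status:\s[0-9]{3}' >> /home/ddns/ddns.log
    fi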
{ "source": [ "https://unix.stackexchange.com/questions/275329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165027/" ] }
275,516
Standard Unix utilities like grep and diff use some heuristic to classify files as "text" or "binary". (E.g. grep 's output may include lines like Binary file frobozz matches .) Is there a convenient test one can apply in a zsh script to perform a similar "text/binary" classification? (Other than something like grep '' somefile | grep -q Binary .) (I realize that any such test would necessarily be heuristic, and therefore imperfect.)
If you ask file for just the mime-type you'll get many different ones like text/x-shellscript , and application/x-executable etc, but I imagine if you just check for the "text" part you should get good results. Eg ( -b for no filename in output): file -b --mime-type filename | sed 's|/.*||'
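As a sketch of how this could be wrapped for reuse in a zsh (or bash) script; the function name is my own placeholder:

    is_text() {
        # MIME types of text files start with "text/", e.g. text/plain
        # or text/x-shellscript; everything else is treated as binary.
        [[ $(file -b --mime-type "$1") == text/* ]]
    }

    is_text /etc/passwd && echo "looks like text"

Matching against the pattern text/* directly also saves the extra sed process.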
{ "source": [ "https://unix.stackexchange.com/questions/275516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
275,517
If you want to execute one command and then another one after the first one finishes, you can execute command1 & which prints the PID of the process executing command1 . You can then think of what you want to do after command1 has finished and execute: wait [PID printed by the previous command] && command2 However, this only works in the same terminal window and gets really, really messy if command1 prints output. If you open up a new terminal window and try to wait, you're shown something like this: $ wait 10668 bash: wait: pid 10668 is not a child of this shell Is there a terminal emulator which supports waiting for programs, without having to type the next command into the output of the currently running command, and without throwing the output of the first command away (like piping it to /dev/null )? It doesn't have to work via wait or something similar. Right-clicking and choosing "execute after current command returned" would be perfectly fine. I don't mean simply concatenating commands, but being able to run a command and then decide what to run right after that one has finished.
{ "source": [ "https://unix.stackexchange.com/questions/275517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
275,794
I read that it is bad to write things like for line in $(command) ; the correct way seems to be instead: command | while IFS= read -r line; do echo $line; done This works great. But what if what I want to iterate over is the contents of a variable , not the direct result of a command? For example, imagine that you create the following file quickfox : The quick brown foxjumps\ over - the lazy , dog. I would like to be able to do something like this: # This is just for the example, # I could of course stream the contents to `read` variable=$(cat quickfox); while IFS= read -r line < $variable; do echo $line; done; # this is incorrect
In modern shells like bash and zsh, you have a very useful `<<<` redirector that accepts a string as input. So you would do while IFS= read -r line ; do echo $line; done <<< "$variable" Otherwise, you can always do echo "$variable" | while IFS= read -r line ; do echo $line; done
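One caveat worth spelling out (general bash behaviour, not specific to this case): the pipe form runs the while loop in a subshell, so variables set inside the loop are lost afterwards. The here-string form keeps the loop in the current shell:

    count=0
    echo "$variable" | while IFS= read -r line; do count=$((count + 1)); done
    echo "$count"    # prints 0 - the loop ran in a subshell

    count=0
    while IFS= read -r line; do count=$((count + 1)); done <<< "$variable"
    echo "$count"    # prints the number of lines in $variable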
{ "source": [ "https://unix.stackexchange.com/questions/275794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45354/" ] }
275,824
Is there a generic way of running a bash script and seeing the commands that result, but not actually running the commands - i.e. a "dry run"/simulator of sorts? I have a database install script (actually "make install" after running ./configure and make) that I wish to run, but it's installing all sorts of stuff that I don't want. So I'd like a way to see exactly what it's going to do before I run it for real - maybe even run the commands by hand instead. Is there any utility that can perform such a task (or anything related/similar)?
GNU make has an option to do a dry-run: ‘-n’ ‘--just-print’ ‘--dry-run’ ‘--recon’ “No-op”. Causes make to print the recipes that are needed to make the targets up to date, but not actually execute them. Note that some recipes are still executed, even with this flag (see How the MAKE Variable Works). Also any recipes needed to update included makefiles are still executed. So for your situation, just run make -n install to see the commands that make would execute.
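For the shell-script half of the question there is no true dry-run mode, but two bash flags are the usual approximations. A sketch, not a substitute for reading the script first (install.sh stands for whatever script you mean to inspect):

    bash -n install.sh    # syntax check only; nothing is executed
    bash -x install.sh    # NOT a dry run: it really executes, but prints
                          # each command (after expansion) before running it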
{ "source": [ "https://unix.stackexchange.com/questions/275824", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99494/" ] }
275,827
Is there a terminal command that merges all open terminal windows into one window with tabs? Been searching all over the place, but have yet to find any solutions.
{ "source": [ "https://unix.stackexchange.com/questions/275827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165339/" ] }
275,883
I have a symbolic link for a directory, e.g. ln -s /tmp /xxx Now when I type /xx and press the Tab key, bash completes the line to /xxx If I press it again it becomes /xxx/ Now, how can I ask bash to complete /xx to /xxx/ automatically (provided that there's only one match)?
Add the following line to your ~/.inputrc file: set mark-symlinked-directories on See "Readline Init File Syntax" in the Bash Reference Manual for more on this topic.
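To try the setting in your current session without editing any file, the same readline variable can be set with the bind builtin (takes effect immediately, lasts only until the shell exits):

    bind 'set mark-symlinked-directories on'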
{ "source": [ "https://unix.stackexchange.com/questions/275883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
275,907
My computer says: $ uptime 10:20:35 up 1:46, 3 users, load average: 0,03, 0,10, 0,13 And if I check last I see: reboot system boot 3.19.0-51-generi Tue Apr 12 08:34 - 10:20 (01:45) And then I check: $ ls -l /var/log/boot.log -rw-r--r-- 1 root root 4734 Apr 12 08:34 boot.log Then I see in /var/log/syslog the first line of today being: Apr 12 08:34:39 PC... rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="820" x-info="http://www.rsyslog.com"] start So everything seems to converge on 08:34 as the time when my machine booted. However, I wonder: what is the exact time uptime uses? Is uptime a process that launches and checks some file, or is it something in the hardware? I'm running Ubuntu 14.04.
On my system it gets the uptime from /proc/uptime : $ strace -eopen uptime open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 open("/lib/libproc-3.2.8.so", O_RDONLY|O_CLOEXEC) = 3 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 open("/proc/version", O_RDONLY) = 3 open("/sys/devices/system/cpu/online", O_RDONLY|O_CLOEXEC) = 3 open("/etc/localtime", O_RDONLY|O_CLOEXEC) = 3 open("/proc/uptime", O_RDONLY) = 3 open("/var/run/utmp", O_RDONLY|O_CLOEXEC) = 4 open("/proc/loadavg", O_RDONLY) = 4 10:52:38 up 3 days, 23:38, 4 users, load average: 0.00, 0.02, 0.05 From the proc manpage : /proc/uptime This file contains two numbers: the uptime of the system (seconds), and the amount of time spent in the idle process (seconds). The proc filesystem contains a set of pseudo files. Those are not real files, they just look like files, but they contain values that are provided directly by the kernel. Every time you read a file such as /proc/uptime , its contents are regenerated on the fly. The proc filesystem is an interface to the kernel. In the Linux kernel source code of the file fs/proc/uptime.c at line 49 , you see a function call: proc_create("uptime", 0, NULL, &uptime_proc_fops); This creates a proc filesystem entry called uptime (the procfs is usually mounted under /proc ), and associates a function with it, which defines the valid file operations on that pseudo file and the functions associated with them. In the case of uptime it's just read() and open() operations. However, if you trace the functions back you will end up here , where the uptime is calculated. Internally, there is a timer interrupt which periodically updates the system's uptime (among other values). The interval at which the timer interrupt ticks is defined by the preprocessor macro HZ , whose exact value is set in the kernel config file and applied at compilation time. The idle time and the number of elapsed ticks, combined with the frequency HZ (ticks per second), can be converted into the number of seconds since the last boot. To address your question: When does “uptime” start counting from? Since the uptime is a kernel-internal value which ticks up every cycle, it starts counting when the kernel has initialized. That is, when the first cycle has ended. Even before anything is mounted, directly after the bootloader gives control to the kernel image.
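Since /proc/uptime is just two space-separated numbers, it is trivial to consume directly. A small illustrative one-liner, nothing more:

    awk '{printf "up %d days, %02d:%02d\n", $1/86400, ($1%86400)/3600, ($1%3600)/60}' /proc/uptime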
{ "source": [ "https://unix.stackexchange.com/questions/275907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40596/" ] }
276,037
A short while ago I asked this question - Does "apt-get -s upgrade" or some other apt command have an option to list the repositories the packages will be downloaded from? - about how to list the repositories packages would be upgraded from. I have now learned another command, apt-cache madison , which will list the repos a package will be installed from. Why a name like madison , which is in no way related to the task at hand?
The madison command was added in apt 0.5.20 . It produces output that's similar to a then-existing tool called madison which was used by Debian server administrators. Several of these tools had names which were common female forenames, I don't know if there's a specific history behind that. The madison tool no longer exists but there's a partial reimplementation called madison-lite (querying a local package archive, like the original), as well as a script called rmadison in devscripts which queries remote servers. apt-cache madison is not emphasized because most of what it displays is also available through apt-cache showpkg and apt-cache policy .
{ "source": [ "https://unix.stackexchange.com/questions/276037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26026/" ] }
276,168
I'm reading Wikipedia about X11 and it says that: In its standard distribution it is a complete, albeit simple, display and interface solution which delivers a standard toolkit and protocol stack for building graphical user interfaces on most Unix-like operating systems... But later it says that: X primarily defines protocol and graphics primitives - it deliberately contains no specification for application user-interface design, such as button, menu, or window title-bar styles. So, does X11 provide widgets like a button or a window panel/frame, etc or not? What is a graphic primitive? What does X11 provide exactly? It is also stated that: X does not mandate the user interface; individual client programs handle this. Programs may use X's graphical abilities with no user interface. What does this mean?
Like many words, “X11” can have multiple meanings. “X11” is, strictly speaking, a communication protocol. In the sentences “X primarily defines protocol and graphics primitives …” and “X does not mandate the user interface …”, that's what X refers to. X is a family of protocols, X11 is the 11th version and the only one that's been in use in the last 25 years or so. The first sentence in your question refers to a software distribution which is the reference implementation of the X11 protocol. The full name of this software distribution is “the X Window System”. This distribution includes programs that act as servers in the X11 protocol, programs that act as clients in the X11 protocol, code libraries that contain code that makes use of the X11 protocol, associated documentation, resources such as fonts and keyboard layouts that can be used with the aforementioned programs and libraries, etc. Historically , this software distribution was made by MIT; today it is maintained by the X.Org Foundation . The X11 protocol allows applications to create objects such as windows and use basic drawing primitives (e.g. fill a rectangle, display some text). Widgets like buttons, menus, etc. are made by client libraries. The X Window System includes a basic library (the Athena widget set ) but most applications use fancier libraries such as GTK+ , Qt , Motif , etc. Some X11 programs don't have a graphical user interface at all, for example command line tools such as xset , xsel and xdotool , key binding programs such as xbindkeys , etc. Most X11 programs do of course have a GUI.
{ "source": [ "https://unix.stackexchange.com/questions/276168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159975/" ] }
276,173
Command on AIX is: [root@hx042:/home/user1]$ lqueryvg -Atp hdiskpower13 0516-1396 lqueryvg: The physical volume hdiskpower13, was not found in the system database. Max LVs: 256 PP Size: 30 Free PPs: 0 LV count: 3 PV count: 3 Total VGDAs: 3 Conc Allowed: 0 MAX PPs per PV: 1016 MAX PVs: 32 Quorum (disk): 1 Quorum (dd): ??????? Auto Varyon ?: 1 Conc Autovaryon 0 Varied on Conc: 0 Logical: 00f62b5c00004c000000014de7f073b1.1 prekod 1 00f62b5c00004c000000014de7f073b1.2 prekre 1 00f62b5c00004c000000014de7f073b1.3 prekcf 1 Physical: 00f62b5ceb80c074 1 0 00f62b5ceb76311b 1 0 00f62b5ceb790075 1 0 Total PPs: 309 LTG size: 128 HOT SPARE: 0 AUTO SYNC: 0 VG PERMISSION: 0 SNAPSHOT VG: 0 IS_PRIMARY VG: 0 PSNFSTPP: 4352 VARYON MODE: ??????? VG Type: 0 Max PPs: 32512 Mirror Pool Str n Sys Mgt Mode: ??????? VG Reserved: ??????? PV RESTRICTION: ??????? Infinite Retry: 2 Varyon State: 0 Disk Block Size 512 I need only these values out: prekod prekre prekcf I tried: [root@hx042:/home/user1]$ lqueryvg -Atp hdiskpower13|sed -n -e '/Logical/,/Physical/ p' 0516-1396 lqueryvg: The physical volume hdiskpower13, was not found in the system database. Logical: 00f62b5c00004c000000014de7f073b1.1 prekod 1 00f62b5c00004c000000014de7f073b1.2 prekre 1 00f62b5c00004c000000014de7f073b1.3 prekcf 1 Physical: 00f62b5ceb80c074 1 0 and now I'm stuck, because Logical: is on the same line as the first value I need, and there is this unavoidable error message, which is not useful at this point and which I don't need either.
{ "source": [ "https://unix.stackexchange.com/questions/276173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165435/" ] }
276,417
For example, I have a output: Hello, this is the output. (let's say that for example hello is colored red, and the is colored green, and output is colored purple). Now, let's say that this is the output of a command named x . If I use this command, the output becomes white: x | grep hello I've read that one could use grep --color=always . However, this changes the color to highlight the result I searched for instead of keeping the original line colors. I want to keep the original line colors. How do I use grep while keeping them?
You could do this, x | grep --color=never hello To quickly test it, you can do, ls -l /etc/ --color=always | grep --color=never .
{ "source": [ "https://unix.stackexchange.com/questions/276417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165621/" ] }
276,624
A huge application needs, at one specific time, to perform a small number of writes to a file which requires root permissions. It is not really a file but a hardware interface which is exposed to Linux as a file. To avoid giving root privileges to the whole application, I wrote a bash script which does the critical tasks. For example, the following script will enable port 17 of the hardware interface as output: echo "17" > /sys/class/gpio/export echo "out" > /sys/class/gpio/gpio17/direction However, as suid is disabled for bash scripts on my system, I wonder what is the best way to achieve this. Use some workaround presented here Call the script with sudo from the main application, and edit the sudoers list accordingly, to avoid requiring a password when calling the script. I'm a little bit uncomfortable giving sudo privileges to echo . Just write a C program, with fprintf , and set it to suid root. Hardcode the strings and filenames and make sure only root can edit it. Or read the strings from a text file, similarly making sure that no one can edit the file. Some other solution which didn't occur to me and is safer or simpler than the ones presented above?
You don't need to give sudo access to echo . In fact, that's pointless because, e.g. with sudo echo foo > bar , the redirection is done as the original user, not as root. Call the small script with sudo , allowing NOPASSWD: access to ONLY that script (and any other similar scripts) by the user(s) who need access to it. This is always the best/safest way to use sudo . Isolate the small number of commands that need root privileges into their own separate script(s) and allow the un-trusted or partially-trusted user to only run that script as root. The small sudo -able script(s) should either not take args (or input) from the user (i.e. any other programs it calls should have hard-coded options and args) or it should very careful validate any arguments/input that it has to accept from the user. Be paranoid in the validation - rather than look for 'known bad' things to exclude, allow only 'known good' things and abort on any mismatch or error or anything even remotely suspicious. The validation should occur as early in the script as possible (preferably before it does anything else as root). I really should have mentioned this when I first wrote this answer, but if your script is a shell script it MUST properly quote all variables. Be especially careful to quote variables containing input supplied by the user in any way, but don't assume some variables are safe, QUOTE THEM ALL . That includes environment variables potentially controlled by the user (e.g. "$PATH" , "$HOME" , "$USER" etc. And definitely including "$QUERY_STRING" and "HTTP_USER_AGENT" etc in a CGI script). In fact, just quote them all. If you have to construct a command line with multiple arguments, use an array to build the args list and quote that - "${myarray[@]}" . Have I said "quote them all" often enough yet? remember it. do it.
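Applied to the GPIO scenario from this question, the advice above might look like the following sketch. The username, script path and allow-listed pin are placeholders; the point is that validation happens first and nothing user-supplied reaches the privileged writes unchecked:

    # /etc/sudoers.d/gpio (edit with visudo):
    #   appuser ALL=(root) NOPASSWD: /usr/local/sbin/gpio-out

    #!/bin/bash
    # /usr/local/sbin/gpio-out PIN -- export PIN and configure it as an output
    pin="$1"
    case "$pin" in
        17) ;;                                   # extend this allow-list as needed
        *)  echo "gpio-out: refusing pin '$pin'" >&2; exit 1 ;;
    esac
    echo "$pin" > /sys/class/gpio/export
    echo out > "/sys/class/gpio/gpio$pin/direction"

The application then runs sudo /usr/local/sbin/gpio-out 17 and never needs root privileges of its own.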
{ "source": [ "https://unix.stackexchange.com/questions/276624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46448/" ] }
276,741
I have a text file of this type, and I would like to find any lines containing the string Validating Classification and then obtain the unique reported errors. I do not know the types of possible errors. Input file: 201600415 10:40 Error Validating Classification: error1 201600415 10:41 Error Validating Classification: error1 201600415 10:42 Error Validating Classification: error2 201600415 10:43 Error Validating Classification: error3 201600415 10:44 Error Validating Classification: error3 Output file 201600415 10:40 Error Validating Classification: error1 201600415 10:42 Error Validating Classification: error2 201600415 10:43 Error Validating Classification: error3 Can I achieve this using grep, pipes and other commands?
You will need to discard the timestamps, but 'grep' and 'sort --unique' together can do it for you. grep --only-matching 'Validating Classification.*' | sort --unique So grep -o will only show the parts of the line that match your regex (which is why you need the .* to capture everything after the "Validating Classification" match). Then once you have just the list of errors, you can use sort -u to get the unique list of errors.
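If you would rather keep the timestamp of each error's first occurrence, which is what the sample output in the question actually shows, a small awk alternative can deduplicate on the message part alone (the filename is a placeholder for your log):

    awk -F 'Validating Classification: ' '/Validating Classification/ && !seen[$2]++' logfile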
{ "source": [ "https://unix.stackexchange.com/questions/276741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165975/" ] }
276,751
In a shebang, are one or more spaces allowed between #! and the interpreter? For example, #! /bin/bash . It seems to work, but some say that it is incorrect.
Yes, this is allowed. The Wikipedia article about the shebang includes a 1980 email from Dennis Ritchie, when he was introducing kernel support for the shebang (as part of a wider package called interpreter directives ) into Version 8 Unix (emphasis mine): The system has been changed so that if a file being executed begins with the magic characters #! , the rest of the line is understood to be the name of an interpreter for the executed file. […] To take advantage of this wonderful opportunity, put #! /bin/sh at the left margin of the first line of your shell scripts. Blanks after ! are OK. So spaces after the shebang have been around for quite a while, and indeed, Dennis Ritchie’s example is using them. Note that early versions of Unix had a limit of 16 characters in this interpreter line, so you couldn’t have an arbitrary amount of whitespace there. This restriction no longer applies in modern kernels.
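A ten-second way to confirm this on any modern system (the filename is arbitrary):

    #! /bin/sh
    # save as spacey.sh, then: chmod +x spacey.sh && ./spacey.sh
    echo "a shebang with a blank after #! runs fine"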
{ "source": [ "https://unix.stackexchange.com/questions/276751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }