Columns: source_id (int64, 1 – 74.7M), question (string, length 0 – 40.2k), response (string, length 0 – 111k), metadata (dict)
253,279
How can I split a word's letters, with each letter on a separate line? For example, given "StackOver" I would like to see

S
t
a
c
k
O
v
e
r

I'm new to bash so I have no clue where to start.
I would use grep:

$ grep -o . <<<"StackOver"
S
t
a
c
k
O
v
e
r

or sed:

$ sed 's/./&\n/g' <<<"StackOver"
S
t
a
c
k
O
v
e
r

And if the empty line at the end is an issue:

sed 's/\B/&\n/g' <<<"StackOver"

All of that assuming GNU/Linux.
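For what it's worth, a minimal alternative (not from the answer above, just standard coreutils behaviour) is fold, which wraps input at a fixed width, here one column per line; note it counts bytes, so it is not safe for multi-byte characters:

$ fold -w1 <<<"StackOver"
S
t
a
c
k
O
v
e
r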
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150030/" ] }
253,290
I am getting this error:

[ 2614.727471] ata1: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen
[ 2614.727477] ata1: irq_stat 0x00000040, connection status changed
[ 2614.727481] ata1: SError: { DevExch }
[ 2614.727488] ata1: limiting SATA link speed to 1.5 Gbps
[ 2614.727491] ata1: hard resetting link
[ 2615.450561] ata1: SATA link down (SStatus 0 SControl 310)
[ 2615.450577] ata1: EH complete

and I DO NOT HAVE ANY SATA disk drives connected. I have an IDE disk! My kernel version is recent: 4.2.8-300.fc23.x86_64, Fedora 23; motherboard: ASRock supercomputer X58. Why is it telling me I have a link if that is not true? Is there a way to diagnose this? I suppose the IDE interface on my motherboard is somehow mapped to the SATA controller, so the error I am getting does not originate from the disk, but from the controller. Then, why does it tell me that it is resetting the link to 1.5 Gbps? Maximum IDE speed is 133MB/s. Very weird. And btw, the disk is working perfectly without any problems.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150940/" ] }
253,306
if [ -z "$OPTION" ] # if option is not given (empty) then:
then
    command1 --defaultOption
else
    command1 $OPTION
fi \
    2> >( function1 "$DETAILS" ) \
    < <( command2 "$OTHER_DETAILS" )

I am seriously puzzled how directing stderr to a file and feeding a file into stdin interact with an if statement. Well known things are:

2>filename    # Redirect stderr to file "filename."
2>>filename   # Redirect and append stderr to file "filename."
command < input-file > output-file
< input-file command > output-file

My guess would be: command2 generates a file which is forwarded either to command1's stdin with --defaultOption (if $OPTION is empty, then case) or to command1's stdin with $OPTION (if $OPTION is not empty, else case). stderr of command1 is redirected to function1 (which as an example might be some sort of progress-bar display). So my questions are: Are the whitespaces between the brackets < < and > > necessary? Is it actually an append (whitespace ignored), or a "double" redirect? Am I missing an interaction between brackets and braces >( and <( ? Does it somehow influence the evaluation of the if? Or is only -z $OPTION tested? Can I understand what's going on better if I write the outputted file of command2 to the disk, then check for the option and read it again in the if statement?

command2 "$OTHER_DETAILS" --out=file.txt
if [ -z "$OPTION" ]
then
    command1 --defaultOption --in=file.txt 2>function1
else
    command1 "$OPTION" --in=file.txt 2>function1
fi

This is part of a script I found over there: http://linuxtv.org/wiki/index.php/V4L_capturing/script (lines 912 through 924)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150042/" ] }
253,321
I am currently trying to see all the files which are using the /var mount, with lsof | grep /var*, but it displays sizes in bytes. How can I display file sizes in MB? Thank you.
Starting with GNU Coreutils version 8.21 (released in Feb 2013), there is a new standard program called numfmt (= number format). It will do exactly what you want. Example:

lsof | grep /var* | numfmt --field=8 --to=iec | head

The parameter --to accepts iec (where 1K=1024B) or si (where 1K=1000). There are a few additional options; more information here: http://www.gnu.org/s/coreutils/numfmt . (disclaimer: I wrote the initial implementation of numfmt).
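A quick way to sanity-check the two scales (a minimal illustration, not part of the answer above):

$ echo 1048576 | numfmt --to=iec
1.0M
$ echo 1000000 | numfmt --to=si
1.0M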
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
253,349
Small talk as background EINTR is the error which so-called interruptible system calls may return. If a signal occurs while a system call is running, that signal is not ignored. If a signal handler was defined for it without SA_RESTART set and this handler handles that signal, then the system call will return the EINTR error code. As a side note, I got this error very often using ncurses in Python. The question Is there a rationale behind this behaviour specified by the POSIX standard? One can understand it may be not possible to resume (depending on the kernel design), however, what's the rationale for not restarting it automatically at the kernel level? Is this for legacy or technical reasons? If this is for technical reasons, are these reasons still valid nowadays? If this is for legacy reasons, then what's the history?
It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program. Reason for not restarting the system call automatically: Imagine an application which receives data from a socket by the blocking recv() system call. In our scenario, data comes very slowly and the program resides long in that system call. That program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set so that the system call restarts automatically. Imagine that the program is in recv(), which waits for data. But no data arrives. The system call blocks. The program now catches Ctrl-C from the user. The system call is interrupted and the signal handler, which just sets the flag, is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and has no opportunity to evaluate the flag and exit the program gracefully. With SA_RESTART not set: In the above scenario, when SA_RESTART is not set, recv() would receive EINTR instead of being restarted. The system call exits and thus the program can continue. Of course, the program should then (as early as possible) check the flag (set by the signal handler) and do clean-up or whatever it does.
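A rough shell analogue of this set-a-flag pattern (a sketch only: the answer is about C signal handlers, but bash's read builtin is likewise interrupted by a trapped signal, much like a syscall without SA_RESTART):

#!/bin/bash
# The trap handler only sets a flag; the real handling happens in the loop.
interrupted=0
trap 'interrupted=1' INT

while :; do
    # Blocking read; a trapped SIGINT makes it return early (the EINTR analogue).
    read -r line
    if [ "$interrupted" -eq 1 ]; then
        echo "cleaning up and exiting gracefully" >&2
        break
    fi
    printf 'got: %s\n' "$line"
done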
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36670/" ] }
253,376
Why can't I run this command in my terminal:

open index.html

Wasn't it supposed to open this file in my browser? I also can't run this command: open index.html -a "Sublime Text" . The result of these commands is:

$ open index.html
Couldn't get a file descriptor referring to the console
$ open index.html -a "Sublime Text"
open: invalid option -- 'a'
Usage: open [OPTIONS] -- command
The primary purpose of OS X's open command is to open a file in the associated application. The equivalent of that on modern non-OSX unices is xdg-open:

xdg-open index.html

xdg-open doesn't have an equivalent of OSX's open -a to open a file in a specific application. That's because the normal way to open a file in an application is to simply type the name of the application followed by the name of the file. More precisely, you need to type the name of the executable program that implements the application.

sublime_text index.html

Linux, like other Unix systems (but not, as far as I know, the non-Unixy parts of OS X) manages software by tracking it with a package manager, and puts individual files where they are used. For example, all executable programs are in a small set of directories and all those directories are listed in the PATH variable; running sublime_text looks up a file called sublime_text in the directories listed in PATH. OS X needs an extra level of indirection, through open -a, to handle applications which are unpacked in a single directory tree and registered in an application database. Linux doesn't have any application database, but it's organized in such a way that it doesn't need one. If running the command sublime_text in a shell doesn't work for you, then Sublime Text hasn't been installed properly. I've never used it, and apparently it comes as a tar archive, not as a distribution package (e.g. deb or rpm), so it's possible that you need to do an extra installation step. It's really the job of the makers of Sublime Text to make this automatic, but if they haven't done it, you can probably do it yourself by running the command

sudo ln -s …/sublime_text /usr/local/bin

Replace … by the path where the sublime_text executable is, of course. The open command you encountered is an older name for the openvt command (some Linux distributions only include it under the name openvt). The openvt command creates a new virtual console, which can only be done by root and isn't used very often in this century since most people only ever work in a graphical window environment.
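A short way to check whether that extra installation step is needed (the /opt path below is only an illustrative guess at where an unpacked tarball might live):

# Does the shell find the executable at all?
$ command -v sublime_text || echo "not in PATH"
# If not, expose it via a symlink (adjust the source path to your unpack location):
$ sudo ln -s /opt/sublime_text/sublime_text /usr/local/bin/sublime_text
# And the browser case from the question:
$ xdg-open index.html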
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150122/" ] }
253,422
For example, while this works:

$ echo foo
foo

This doesn't:

$ /bin/sh -c echo foo

Whereas this does:

$ /bin/sh -c 'echo foo; echo bar'
foo
bar

Is there an explanation?
From man sh:

-c string   If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0

It means your command should be like this:

$ sh -c 'echo "$0"' foo
foo

Similarly:

$ sh -c 'echo "$0 $1"' foo bar
foo bar

That was the first part to understand; the second case is simple and doesn't need explanation, I guess.
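A hedged aside, not from the answer itself: because the first argument after the string becomes $0 (conventionally the script name), a common idiom is to pass a throwaway name so the "real" arguments start at $1:

$ sh -c 'echo "$@"' sh foo bar
foo bar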
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67045/" ] }
253,426
I'm happy with the default usage of detox for sanitizing filenames, except I don't always want to replace whitespace with a single underscore. Would like to replace whitespace with a single space. Know how to do this?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150158/" ] }
253,467
I have the 64-bit version of Cygwin installed and have installed apt-cyg; using this, I have installed openssh and vim just fine, but I am not able to install pdftk. Does it have some other name? Or is there some other tool I can use to split and merge pdf files in Cygwin?
I got PDFtk commands working with Cygwin by installing the server version of PDFtk ( https://www.pdflabs.com/tools/pdftk-server/ ) in Windows 7 64-bit. While installing PDFtk Server, make sure to check the box saying to add PDFtk to the system environment. Once the installation is done, type "pdftk" in Cygwin and it should give the expected results.
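Once pdftk is visible in Cygwin, splitting and merging use its standard cat syntax (file names below are illustrative):

# merge two PDFs
$ pdftk a.pdf b.pdf cat output merged.pdf
# extract pages 1-3 into their own file
$ pdftk merged.pdf cat 1-3 output first3.pdf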
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18538/" ] }
253,489
I have configured my keyboard layout by adding a call to setxkbmap to my .xinitrc . This works for my laptop's internal keyboard and for any external keyboard that is plugged in when the X server starts. If I plug in an external keyboard later, it uses the default US keymap. How can I make sure that any keyboard that I plug in has my preferred layout?
As Gilles commented on Dominik R's answer yesterday, the udev approach only works for the root user and doesn't work well as a general, unprivileged solution. I'd suggest considering inputplug(1) by Andrew Shadura, available in Debian as the package inputplug as well as at the project site. inputplug(1) is rather straightforward: an XInput event-loop listener which will invoke a script with decoded event parameters as arguments. Since you're using .xinitrc, I imagine you're using a modest window manager / environment, and a background listener of this sort should be pretty straightforward for you. Another possibility is using udev in a less traditional way, by writing a script parsing the output from "udevadm monitor" and invoking setxkbmap upon recognizing a matching device being connected. Good luck!
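A minimal sketch of the inputplug route (the argument order follows my reading of its manual page; treat the details as assumptions and verify against inputplug(1) on your system):

#!/bin/sh
# on-new-keyboard.sh -- inputplug runs this as: script <event> <device-id> <device-type> ...
event="$1" device="$2" type="$3"
if [ "$event" = XIDeviceEnabled ] && [ "$type" = XISlaveKeyboard ]; then
    setxkbmap -device "$device" de   # replace "de" with your preferred layout
fi

started from .xinitrc with something like:

inputplug -c "$HOME/bin/on-new-keyboard.sh" &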
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18110/" ] }
253,519
I'm unable to understand this example given in the manpage of dc:

$ dc
1 0:a 0Sa 2 0:a La 0;ap
1

To me the answer should be 2 because:

1 0:a
Here we store 1 at the 0th position of array a.

0Sa
Now we push 0 to the stack of register a.

2 0:a
Now here again we store 2 at the 0th position of array a, thereby overwriting the previous 1 stored at that location.

La
Now we pop the 0 stored on the stack of register a and push it to the main stack.

0;a
Now we again push 0 to the main stack and then pop it to use as an array index, and so we push the 2 stored at the 0th location of array a to the main stack.

p
Now we print the top of the main stack, which is 2.

So the answer should be 2. What am I missing? PS: I wanted to use dc as a tag but it looks like it doesn't exist and it's compulsory to use at least one tag, so I used debian (my workstation OS).
This is about intermixing arrays and stacks. In the example, register a is used both as an array and as a stack.

1 0:a
0 Sa
2 0:a
La
0;a
p

First :a - register a is treated as an array. Then Sa - register a is treated as a stack, in effect pushing the array from pt 1 down and creating a new array. As per man:

Note that each stacked instance of a register has its own array associated with it.

Then :a - register a is treated as an array, pushing the prior value and the first array value down. Then La - register a is treated as a stack; getting to the first stacked value throws away the a[0]=2, as it belongs to the top array. Then ;a - register a is treated as an array. Now there is only one value left, the first array value added to a, which is 1. See the bottom of the answer for some more examples.

As per comment: «How many arrays and stacks are there per register? I thought one register has one stack and one separate array.»

A Stack:

The stack in dc is a push-down stack, or LIFO (Last In First Out). Same as plates in a restaurant.

      ,------ pop / push - take / leave
     /
    |
    v
[-----] top   [-----]        [-----]
  ...           ...            ...
[-----]       [-----]        [-----]
(main)      (register-a)   (register-b)  ...

We have a main stack or work stack, which is the one used unless an operation demanding a register is specified. Each register has its own stack.

Basic register operations:

Sr : pop one value from the main stack and push it onto the stack specified by register r. Both stacks modified.
Lr : pop one value from the register stack specified by r and push it onto the main stack. Both stacks modified.
sr : pop one value from the main stack and write it to register r. In effect, change the topmost value in the stack specified by register r. If there is no value in that stack, add it. main stack modified. Register stack preserved beside the changed value.
lr : read the value from register r. In effect the topmost value, if there are several. main changed. Register stack preserved.
:r : pop two values from the main stack and use the first, topmost, as index for where to store the second value in the array at register r. main changed. Register stack preserved.
;r : pop one value from the main stack and use it as index from where to read from the current array in the register specified by r. main changed. Register stack preserved.

Stack and arrays intermingled

One way to look at it is in pairs. When you start out, all the registers are empty. When you add an element to the stack by Sr, you hide any underlying elements in that stack. Say you do:

1 Sx
2 Sx
4 Sx

x =
(S) 4   VISIBLE
(S) 2   HIDDEN
(S) 1   HIDDEN

Now you can change the value in register x, that is, change the topmost element, by sx, and you can read by lx, without altering the number of elements in the stack:

lx p   # Read value in register x - in effect read topmost element from stack.
4      # Printed value by p.
3 sx   # Change value in register x - in effect change topmost element in stack.

x =
(S) 3   VISIBLE
(S) 2   HIDDEN
(S) 1   HIDDEN

If you decide to add array elements, things start to go in a more complex direction.

4 1:x
5 2:x

x =
[A] [2]=5   VISIBLE
    [1]=4   VISIBLE
(S) 3       VISIBLE
(S) 2       HIDDEN
(S) 1       HIDDEN

Now we have added values to the current array in the stack. We can read and modify any VISIBLE elements.

44 1:x
55 2:x
33 sx
1;x p   # Read and print index 1
44
lx p    # Read and print stack top.
33

x =
[A] [2]=55   VISIBLE
    [1]=44   VISIBLE
(S) 33       VISIBLE
(S) 2        HIDDEN
(S) 1        HIDDEN

If we then add a stack element, one can say that the stack frame is non-expandable, as we have added values to the array above it. Thus a new stack frame is added.

6 Sx
7 Sx

x =
(S) 7        VISIBLE
(S) 6        HIDDEN
[A] [2]=55   HIDDEN
    [1]=44   HIDDEN
(S) 33       HIDDEN
(S) 2        HIDDEN
(S) 1        HIDDEN

If we now try to access the last array, it is hidden. In effect we read from an empty array, and the result is the default value 0. We can modify the register value 7 by sr, but not access the array two levels down unless we get rid of the two stack elements above. If we now decide to add some array elements, they are added to a new array positioned (as a paired array) with the top stack element.

8 1:x
9 2:x

x =
[A] [2]=9    VISIBLE
    [1]=8    VISIBLE
(S) 7        VISIBLE
(S) 6        HIDDEN
[A] [2]=55   HIDDEN
    [1]=44   HIDDEN
(S) 33       HIDDEN
(S) 2        HIDDEN
(S) 1        HIDDEN

Now if we do a pop of the stack, we pop off 7, but as there is an array in between (so to speak) it is also removed.

Lx p   # Pop + print top stack element.
7      # Value printed.

x =
(S) 6        VISIBLE
[A] [2]=55   HIDDEN
    [1]=44   HIDDEN
(S) 33       HIDDEN
(S) 2        HIDDEN
(S) 1        HIDDEN

The array with 8 and 9 is gone. The stack element with value 6 is visible, but the underlying array is blocked. A read by 1;x p yields 0. In a way we can say the stack elements are blocking, whilst the arrays are opaque. The arrays are sort of hanging on to the stack elements. We need to do yet another pop from the stack to reveal the underlying stack element + array.

Lx p   # Pop + print top stack element.
6      # Value printed.

x =
[A] [2]=55   VISIBLE
    [1]=44   VISIBLE
(S) 33       VISIBLE
(S) 2        HIDDEN
(S) 1        HIDDEN

In conclusion, one can say that the number of arrays and stacks is not limited to one per register.

- How many arrays and stacks are there per register?
- It depends on how many alternating Sr and :r operations you do on that register.

Another way of looking at it is that there is only one stack but multiple arrays, if we add stack elements between adding array elements ... Yet another way is to say that at any given moment the current array is not [register][array] but [register][stack-element][array], which gives:

[register][stack-element][array][...]
[register][stack-element][array][1]
[register][stack-element][array][0]

and that the stack-element part is opaque, read only, etc. Though in that case we also have to remember that we do not need a value for the stack element. It is OK to only add array values to a register. Or: each stack element is paired with a zero-filled array that we can modify:

1 Sx
2 Sx
3 Sx
4 1:x
5 2:x
6 Sx
7 Sx
8 1:x
9 2:x

x =
(S) 7  + A[0]=0 A[1]=8 A[2]=9 A[3]=0 ... A[2048]=0
(S) 6  + A[0]=0 ... A[2048]=0
(S) 3  + A[0]=0 A[1]=4 A[2]=5 A[3]=0 ... A[2048]=0
(S) 2  + A[0]=0 ... A[2048]=0
(S) 1  + A[0]=0 ... A[2048]=0

Hope it made it a bit more clear.

Some examples

$ dc
[ein] 0:a
[zwei] Sa
[drei] 0:a
0;ap    # Copy + print index 0 of topmost array in register a
drei
Lap     # Throws away [drei] and pops+prints first element in stack
zwei
0;ap    # Copy + print index 0 of first array
ein

$ dc
[uno] 0:a      # Array a(1)
[dos] 1:a      # Array a(1)
[tres] 2:a     # Array a(1)
[cuatro] Sa    # Array a(2)
[cinco] Sa     # Array a(2)
[seis] Sa      # Array a(2)
[siete] 0:a    # Array a(3)
[ocho] 1:a     # Array a(3)
[nueve] 2:a    # Array a(3)
Laf            # Throws away Array 3 to get to first stack array, Array 2.
seis
Laf
cinco
seis
Laf
cuatro
cinco
seis
2;af           # Now we're at the first array, Array 1.
tres
cuatro
cinco
seis
1;af
dos
tres
cuatro
cinco
seis
0;af
uno
dos
tres
cuatro
cinco
seis
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26534/" ] }
253,524
Is there any objective reason to prefer one form to the other? Performance, reliability, portability?

filename=/some/long/path/to/a_file
parentdir_v1="${filename%/*}"
parentdir_v2="$(dirname "$filename")"
basename_v1="${filename##*/}"
basename_v2="$(basename "$filename")"
echo "$parentdir_v1"
echo "$parentdir_v2"
echo "$basename_v1"
echo "$basename_v2"

Produces:

/some/long/path/to
/some/long/path/to
a_file
a_file

(v1 uses shell parameter expansion, v2 uses external binaries.)
Both have their quirks, unfortunately. Both are required by POSIX, so the difference between them isn't a portability concern¹. The plain way to use the utilities is

base=$(basename -- "$filename")
dir=$(dirname -- "$filename")

Note the double quotes around variable substitutions, as always, and also the -- after the command, in case the file name begins with a dash (otherwise the commands would interpret the file name as an option). This still fails in one edge case, which is rare but might be forced by a malicious user²: command substitution removes trailing newlines. So if a filename is called foo/bar␤ then base will be set to bar instead of bar␤. A workaround is to add a non-newline character and strip it after the command substitution:

base=$(basename -- "$filename"; echo .); base=${base%.}
dir=$(dirname -- "$filename"; echo .); dir=${dir%.}

With parameter substitution, you don't run into edge cases related to expansion of weird characters, but there are a number of difficulties with the slash character. One thing that is not an edge case at all is that computing the directory part requires different code for the case where there is no /.

base="${filename##*/}"
case "$filename" in
  */*) dirname="${filename%/*}";;
  *) dirname=".";;
esac

The edge case is when there's a trailing slash (including the case of the root directory, which is all slashes). The basename and dirname commands strip off trailing slashes before they do their job. There's no way to strip the trailing slashes in one go if you stick to POSIX constructs, but you can do it in two steps. You need to take care of the case when the input consists of nothing but slashes.

case "$filename" in
  */*[!/]*)
    trail=${filename##*[!/]}; filename=${filename%%"$trail"}
    base=${filename##*/}
    dir=${filename%/*};;
  *[!/]*)
    trail=${filename##*[!/]}
    base=${filename%%"$trail"}
    dir=".";;
  *) base="/"; dir="/";;
esac

If you happen to know that you aren't in an edge case (e.g. a find result other than the starting point always contains a directory part and has no trailing /) then parameter expansion string manipulation is straightforward. If you need to cope with all the edge cases, the utilities are easier to use (but slower). Sometimes, you may want to treat foo/ like foo/. rather than like foo. If you're acting on a directory entry then foo/ is supposed to be equivalent to foo/., not foo; this makes a difference when foo is a symbolic link to a directory: foo means the symbolic link, foo/ means the target directory. In that case, the basename of a path with a trailing slash is advantageously ., and the path can be its own dirname.

case "$filename" in
  */) base="."; dir="$filename";;
  */*) base="${filename##*/}"; dir="${filename%"$base"}";;
  *) base="$filename"; dir=".";;
esac

The fast and reliable method is to use zsh with its history modifiers (this first strips trailing slashes, like the utilities):

dir=$filename:h base=$filename:t

¹ Unless you're using pre-POSIX shells like Solaris 10 and older's /bin/sh (which lacked parameter expansion string manipulation features on machines still in production; but there's always a POSIX shell called sh in the installation, only it's /usr/xpg4/bin/sh, not /bin/sh).

² For example: submit a file called foo␤ to a file upload service that doesn't protect against this, then delete it and cause foo to be deleted instead.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
253,591
I am using a RedHat OpenShift server to deploy my WebApps. To access the contents of my application I have to SSH into their server, e.g.: ssh [email protected] But the ssh fails, because outgoing port 22 is blocked. Also, I don't have any system with a public IP assigned for port forwarding. Is there any way to make the ssh work?
You must contact whoever is in charge of the network, and convince them that your access request is legitimate. Regardless of the sanity of the access restrictions, circumventing them will at the very least land you in hot water with your boss, and could even be taken as "hacking" and get you prosecuted.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147035/" ] }
253,596
When I run tar -cvzf archive.tgz file1 file2 ; rm file1 file2 it normally creates a compressed tarball, but how is it possible that tar -xvf archive.tgz gives me back the uncompressed files? I've always thought that the -z flag would be required.
Your tar implementation, likely the GNU one, is detecting that the file passed as a parameter is compressed. The most widely used tar implementations these days, the GNU tar and busybox ones, look at the first bytes of the file, a.k.a. the magic number, to figure out whether it is compressed and which compression algorithm to use. The tar implementations found on commercial Unixes that are based on the original AT&T code do not support the -z flag in the first place. One notable exception is Solaris 11 tar, where this extension has been added, including the ability to detect the file format.
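A quick way to see the detection at work (a minimal illustration; the exact file output varies by version):

$ tar -czf archive.tgz file1
$ file archive.tgz
archive.tgz: gzip compressed data
$ tar -xvf archive.tgz    # no -z needed; the format is auto-detected
file1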
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150269/" ] }
253,606
I'm trying to use the Bash parameter expansions to modify the output of a command substitution or another parameter expansion. The following nested expansions work quite well in Zsh, but result in a "bad substitution" error in Bash:

${${PWD##*/}//trunk/latest}

or

${$(basename $PWD)//trunk/latest}

The output should be the last folder of the $PWD, replaced by latest when my current directory is trunk; so /home/user/trunk should become latest. Is there a Bash equivalent allowing to chain expansions without relying on variables or pipes? Or do Bash expansions only allow the input to be a string or a plain variable?
No, that nesting of substitution operators is unique to zsh. Note that with zsh, like with (t)csh, you can also do ${PWD:t:s/trunk/latest/}. Though bash also supports those csh history modifiers for history expansion, it doesn't support them for its parameter expansions. Here with bash, use a temporary variable:

var=${PWD##*/}
var=${var//trunk/latest}
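If the two-step feels clumsy, the usual workaround is to wrap it in a function (plain bash semantics, shown here only as an illustration):

# prints the last path component, with trunk replaced by latest
last_dir() {
    local d=${PWD##*/}
    printf '%s\n' "${d//trunk/latest}"
}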
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150286/" ] }
253,611
I'm using CentOS 6.5 on Oracle VirtualBox. In order to gain access to the internet, I've added a second network card connected to the physical network card of my PC. I rebooted the virtual server, but the network interface shows up neither in ifconfig nor in /etc/sysconfig/network-scripts/. How can I add the interface? Is there another way to gain access to the internet by assigning a default gateway somewhere?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150289/" ] }
253,634
When installing RVM one gets the following message:

* WARNING: You have '~/.profile' file, you might want to load it,
  to do that add the following line to '/home/dotancohen/.bash_profile':

    source ~/.profile

I'm concerned because my ~/.profile file contains xmodmap ~/.Xmodmap, which I obviously don't want to run (swapping my CapsLock and ESC keys) every time I open a new shell. Why might the wise RVM devs suggest sourcing .profile in .bash_profile?
.profile and .bash_profile are identical in terms of when they're meant to be executed: they're executed when you log in. The difference is that only bash runs .bash_profile; Bourne-style shells (dash, ksh, etc.) run .profile. Bash itself runs .profile if .bash_profile doesn't exist. Even if you have bash as your login shell, .profile is often the one that's executed when you log in in graphical mode: many distributions set up the X session startup script to run under sh and load .profile. Hence the advice to use .profile instead of .bash_profile to do things like defining environment variables. Unless you absolutely need bash-specific features, just put everything in .profile. But even if you do, there's a reason to keep a .bash_profile, which is that when bash loads it, it doesn't load .bashrc, even if it's interactive. Hence, for most people, ~/.bash_profile should consist of these two lines:

. ~/.profile
case $- in *i*) . ~/.bashrc;; esac

You should not run xmodmap from .profile. This isn't executed when you open a new shell, but it is executed, for example, when you log in remotely with SSH with X11 forwarding. Unfortunately, there's no standard file that's loaded when you log in in graphical mode. Debian loads ~/.xsessionrc (I think this applies to all display managers, except Gdm which loads ~/.xprofile instead); other distributions have different setups. If you need cross-distribution portability, it may be easier to configure your desktop environment to execute xmodmap when it starts. If all you're doing is swapping CapsLock and Ctrl, this can be done with XKB settings that most modern desktop environments provide an interface to.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
253,636
Everyday I am receiving an email with the following content: /etc/cron.daily/man-db.cron:mandb: can't set the locale; make sure $LC_* and $LANG are correct When I check the results of of /etc/locale.conf , I see the result is set to LANG=en_EN.UTF-8 When I run the command locale I see the following output: -sh-4.2$ localeLANG=nl_NL.UTF-8LC_CTYPE="nl_NL.UTF-8"LC_NUMERIC="nl_NL.UTF-8"LC_TIME="nl_NL.UTF-8"LC_COLLATE="nl_NL.UTF-8"LC_MONETARY="nl_NL.UTF-8"LC_MESSAGES="nl_NL.UTF-8"LC_PAPER="nl_NL.UTF-8"LC_NAME="nl_NL.UTF-8"LC_ADDRESS="nl_NL.UTF-8"LC_TELEPHONE="nl_NL.UTF-8"LC_MEASUREMENT="nl_NL.UTF-8"LC_IDENTIFICATION="nl_NL.UTF-8"LC_ALL= Now I see that the LC_ALL is not set but when I set it using the following command: -sh-4.2$ export LC_ALL=nl_NL.UTF-8 and then run the command locale again LC_ALL=nl_NL.UTF-8 you will see that it is set but somehow when I go out of SSH and check sometime later, I will see again that it is not set and I keep receiving the email. My question is, how can I solve the locale issue so I don't keep receiving the emails from man-db.cron . I am using CentOS Linux release 7.1.1503 (Core).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150312/" ] }
253,637
I was playing around in my terminal, which runs zsh. I typed the following:

$ `
bquote> yes
bquote> `

And then, yes started to run in the background, I think. Neither Ctrl+C nor Ctrl+\ worked to kill the process. I closed the terminal, but the process just seems to continue. All I can say is that I can confirm that my fan still works. I've run the following commands, which don't work either:

pkill yes
pkill yes\ \<defunct\>   (showed up when using Tab completion)
killall -9 yes
pkill zsh
killall -9 zsh

I can't reboot my computer because there's a large file being copied to another computer and I don't want to restart that process. Here's my top output:

top - 16:06:16 up  7:41,  3 users,  load average: 1,49, 1,33, 1,02
Tasks: 305 total,   3 running, 301 sleeping,   0 stopped,   1 zombie
%Cpu(s): 53,8 us,  2,5 sy,  0,0 ni, 43,5 id,  0,0 wa,  0,0 hi,  0,2 si,  0,0 st
KiB Mem:   6009896 total,  5897432 used,   112464 free,    17152 buffers
KiB Swap:  7811068 total,      280 used,  7810788 free.  2225944 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
24814 john      20   0 2367448 2,219g   3896 R  98,9 38,7  12:29.00 zsh
 2134 john      20   0 1576256 117104  69868 S   2,7  1,9   2:44.03 compiz
 1163 root      20   0  311032  66020  25016 S   2,3  1,1   9:28.03 Xorg
25428 john      20   0   30220   3800   3008 S   2,0  0,1   0:08.48 htop
  408 root     -51   0       0      0      0 S   1,3  0,0   4:25.59 irq/32-iwl+
25359 john      20   0  581928  31888  25080 S   1,3  0,5   0:00.92 gnome-term+
 2051 john      20   0  653056  32296  23640 S   1,0  0,5   0:05.72 unity-pane+
25479 john      20   0   29276   3164   2544 R   0,7  0,1   0:00.04 top
  819 message+  20   0   40748   4044   2372 S   0,3  0,1   0:04.27 dbus-daemon
 1995 john      20   0  363388  10984   5828 S   0,3  0,2   0:20.36 ibus-daemon
 2049 john      20   0   39252   3544   3016 S   0,3  0,1   0:00.27 dbus-daemon
 2103 john      20   0  205408   6516   5936 S   0,3  0,1   0:05.65 ibus-engin+
 2157 john      20   0  551436  10652   8376 S   0,3  0,2   0:01.35 indicator-+
24009 nobody    20   0  275852  14904  12260 S   0,3  0,2   0:23.73 smbd
24536 root      20   0       0      0      0 S   0,3  0,0   0:00.33 kworker/u8+
    1 root      20   0   33888   4452   2720 S   0,0  0,1   0:01.67 init
    2 root      20   0       0      0      0 S   0,0  0,0   0:00.00 kthreadd

Here is my ps aux | grep yes output:

$ ps aux | grep yes
john     25004  0.1  0.0      0     0 ?      Z    15:53   0:01 [yes] <defunct>
john     25603  0.0  0.0  15976  2220 pts/25 S+   16:13   0:00 grep --color=auto yes
This answer on stackoverflow by Bill Karwin is exactly what you are looking for: You have killed the process, but a dead process doesn't disappear from the process table until its parent process performs a task called "reaping" (essentially calling wait(3) for that process to read its exit status). Dead processes that haven't been reaped are called "zombie processes." The parent process id you see for 31756 is process id 1, which always belongs to init. That process should reap its zombie processes periodically, but if it can't, they will remain zombies in the process table until you reboot. Except in this case, the parent process is zsh. kill -9 the zsh process and the defunct yes will be gone. Check out htop to get a better grasp for process ownership hierarchies (toggle flat/hierarchical view with t ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45006/" ] }
253,767
To restart or shut off Linux from the terminal, one can use reboot and poweroff , respectively. However, both of these commands require root privileges. Why is this so? What security risk is posed by not requiring this to have root privileges? The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root? Speaking of the options from the GUI, if the terminal requires root privileges to shut off or restart the Linux computer, how is the GUI able to present an option that does the same without requiring the entering of a password?
Warning: by the end of this answer you'll probably know more about linux than you wanted to

Why reboot and poweroff require root privileges

GNU/Linux operating systems are multi-user, as were their UNIX predecessors. The system is a shared resource, and multiple users can use it simultaneously. In the past this usually happened on computer terminals connected to a minicomputer or a mainframe.

[Image: The popular PDP-11 minicomputer. A bit large, by today's standards :)]

In modern days, this can happen either remotely over the network (usually via SSH), on thin clients, or on a multiseat configuration, where there are several local users with hardware attached to the same computer.

[Image: A multi-seat configuration. Photo by Tiago Vignatti]

In practice, there can be hundreds or thousands of users using the same computer simultaneously. It wouldn't make much sense if any user could power off the computer and prevent everyone else from using it.

What security risk is posed by not requiring this to have root privileges?

On a multi-user system, this prevents what is effectively a denial-of-service attack.

The GUI provides a way for any user to shut off or restart, so why do the terminal commands need to be run as root?

Many Linux distributions do not provide a GUI. The desktop Linux distributions that do are usually oriented to a single-user pattern, so it makes sense to allow this from the GUI. Possible reasons why the commands still require root privileges:

- Most users of a desktop-oriented distro will use the GUI, not the command line, so it's not worth the trouble
- Consistency with accepted UNIX conventions
- (Arguably misguided) security, as it prevents naive programs or scripts from powering off the system

How is the GUI able to present shutdown without root privileges?

The actual mechanism will vary depending on the specific desktop manager (GUI). Generally speaking, there are several mechanisms available for this type of task (see the sketch after this list):

- Running the GUI itself as root (hopefully that shouldn't happen on any proper implementation...)
- setuid
- sudo with NOPASSWD
- Communicating the command to another process that has those privileges, usually done with D-Bus. On popular GUIs, this is usually managed by polkit.

In summary

Linux is used in very diverse environments: from mainframes, servers and desktops to supercomputers, mobile phones, and microwave ovens. It's hard to keep everyone happy all the time! :)
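A sketch of what the last, D-Bus-based mechanism looks like from a shell (the logind call shown exists on systemd systems, but whether it succeeds without a password is entirely up to the local polkit policy):

# High level: let logind/polkit decide whether this user may reboot
$ systemctl reboot

# Roughly the same request, spelled out as a raw D-Bus call
$ dbus-send --system --print-reply \
    --dest=org.freedesktop.login1 /org/freedesktop/login1 \
    org.freedesktop.login1.Manager.Reboot boolean:false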
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/253767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
253,783
I have written a Linux (bash) shell script using a while loop, with a sleep command so the loop body executes every 60 seconds, and the output is redirected to another file. After a few hours I stopped the script, but it doesn't stop executing; I deleted the script and still it is running, and the output file is updated every 60 seconds. I can see the sleep command among the running processes. I tried to kill the PID of sleep using the kill -9 PID command. No use. It is on my production server. Can someone help out? How should we stop the execution of the script?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/253783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150439/" ] }
253,787
I think about doing something like: sudo ln -s ~/myCustomCrontab /var/spool/cron/crontabs/username because I'd like to have all customized files in my home directory. Possible risks I can think of are security (How should the permissions be to still be secure?) system failure Or is there a better way to "keep track of the file"?
This does not work, at least in Debian-like systems (symlinked or hardlinked crontab files for (non-system) users are ignored entirely). It also fails if you use crontab to change your crontab file. If the cron version still accepts symlinked crontab files, it creates possible security holes, as the crontab file is not checked anymore for consistency. With your symlink solution, crontab -e crashes if you (or some install script) change the crontab file:

crontab: crontabs/username: rename: Operation not permitted

as it moves a temporary file to /var/spool/cron/crontabs/username to replace the old file instantaneously. crontab has a lot of additional security checks built in, such that the cron system cannot be used to compromise the system. It checks, for example, the content of the cron file before installing or changing it. An invalid cron file may crash the cron daemon or could (at least theoretically) be misused to gain more privileges on the system. With your solution, there is no check anymore for the crontab file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104470/" ] }
253,810
I am running Wine on a Linux server so as to run some old Windows applications. I now need to write a script to make sure they are running. Is it possible to create an ssh connection to the server and start the application? E.g. if I am on the desktop, open a terminal window and run wine "Z:\home\user\Desktop\application", the application opens. But if I connect by SSH and run wine "Z:\home\user\Desktop\application", I get:

Application tried to create a window, but no driver could be loaded.
Make sure that your X server is running and that $DISPLAY is set correctly.
err:systray:initialize_systray Could not create tray window
Application tried to create a window, but no driver could be loaded.
Make sure that your X server is running and that $DISPLAY is set correctly.

I'm assuming I need to tell it where to start the application rather than just starting it, but can't see how to do this? ADDITIONAL INFO: I am currently working on a Windows PC, and connecting with Putty to the Linux/Wine server. (I also have an RDP connection so I can see the desktop.) Long term I will be running the script on another Linux server (MgmtSrv) that will make an ssh connection to the Linux/Wine server to manage it. The MgmtSrv does not have Wine installed, and does not have an X display set up.
As you surmise, you need to tell Wine where to display its applications. Since your Wine server has an X display, it's probably :0 : DISPLAY=:0 wine ... should do the trick (assuming your X authentication cookies are OK; if they're not you'll get an Invalid MIT-MAGIC-COOKIE error).
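Put together with the management-server scenario from the question, the invocation might look like this (hostname and application path are placeholders):

$ ssh user@wine-server 'DISPLAY=:0 wine "Z:\home\user\Desktop\application"'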
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
253,816
Is there a way to tell the Linux kernel to only use a certain percentage of memory for the buffer cache? I know /proc/sys/vm/drop_caches can be used to clear the cache temporarily, but is there any permanent setting that prevents it from growing to more than e.g. 50% of main memory? The reason I want to do this, is that I have a server running a Ceph OSD which constantly serves data from disk and manages to use up the entire physical memory as buffer cache within a few hours. At the same time, I need to run applications that will allocate a large amount (several 10s of GB) of physical memory. Contrary to popular belief (see the advice given on nearly all questions concerning the buffer cache), the automatic freeing up the memory by discarding clean cache entries is not instantaneous: starting my application can take up to a minute when the buffer cache is full (*), while after clearing the cache (using echo 3 > /proc/sys/vm/drop_caches ) the same application starts nearly instantaneously. (*) During this minute of startup time, the application is faulting in new memory but spends 100% of its time in the kernel, according to Vtune in a function called pageblock_pfn_to_page . This function seems to be related to memory compaction needed to find huge pages, which leads me to believe that actually fragmentation is the problem.
If you do not want an absolute limit but just to pressure the kernel to flush out the buffers faster, you should look at vm.vfs_cache_pressure:

This variable controls the tendency of the kernel to reclaim the memory which is used for caching of VFS caches, versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed. Ranges from 0 to 200.

Move it towards 200 for higher pressure. The default is set at 100. You can also analyze your memory usage using the slabtop command. In your case, the dentry and *_inode_cache values must be high. If you want an absolute limit, you should look up cgroups. Place the Ceph OSD server within a cgroup and limit the maximum memory it can use by setting the memory.limit_in_bytes parameter for the cgroup.

memory.memsw.limit_in_bytes sets the maximum amount for the sum of memory and swap usage. If no units are specified, the value is interpreted as bytes. However, it is possible to use suffixes to represent larger units - k or K for kilobytes, m or M for Megabytes, and g or G for Gigabytes.

References:
[1] - GlusterFS Linux Kernel Tuning
[2] - RHEL 6 Resource Management Guide
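A rough sketch of the cgroup route on the v1 memory controller (the group name, the 4G cap, and the $OSD_PID placeholder are all illustrative):

# create a group and cap it at 4 GiB
$ sudo mkdir /sys/fs/cgroup/memory/ceph-osd
$ echo 4G | sudo tee /sys/fs/cgroup/memory/ceph-osd/memory.limit_in_bytes
# move the OSD process into the group
$ echo "$OSD_PID" | sudo tee /sys/fs/cgroup/memory/ceph-osd/tasks

# and/or raise the cache-reclaim pressure system-wide
$ sudo sysctl vm.vfs_cache_pressure=150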
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150451/" ] }
253,836
I want to clean up some files, and make make the way in which they are written more uniform. So, my input looks something like this: $a$h$l )r^9 ^5 l\ urd The thing is, some spaces are "unnecessary" and make comparing the files difficult. For this reason, I want to remove all spaces, unless they follow directly after one of the following characters: $ ^ T iN (N being a variable, any character 1 byte long) oN (N being a variable, as above) s sN (N being a variable, as above) @ ! / ( ) =N (N being a variable, as above) %N (N being a variable, as above) So, an example-input might be: :$ $ $N$ $ $asa s l r*56 l ro1 o 2%%x v Where the wanted output would be: :$ $ $N$ $ $asa s lr*56lro1 o 2%%xv For the %%x v case, the space is removed because it's the third character following the initial % , where the second % acts as the variable. I'm using a GNU/Linux operating system.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/253836", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147807/" ] }
253,870
Can I use bash variable substitution to extract a piece of a variable based on a delimiter? I'm trying to get the immediate directory name of a filename (in this case, foo).

$ filename=./foo/bar/baz.xml

I know I could do something like

echo $filename | cut -d '/' -f 2

or

echo $filename | awk -F '/' '{print $2}'

but it's getting slow to fork awk / cut for multiple filenames. I did a little profiling of the various solutions, using my real files:

echo | cut:

real    2m56.805s
user    0m37.009s
sys     1m26.067s

echo | awk:

real    2m56.282s
user    0m38.157s
sys     1m31.016s

@steeldriver's variable substitution/shell parameter expansion:

real    0m0.660s
user    0m0.421s
sys     0m0.235s

@jai_s's IFS-wrangling:

real    1m26.243s
user    0m13.751s
sys     0m28.969s

Both suggestions were a huge improvement over my existing ideas, but the variable substitution is fastest because it doesn't require forking any new processes.
You can remove the shortest leading substring that matches */:

tmp="${filename#*/}"

and then remove the longest trailing substring that matches /*:

echo "${tmp%%/*}"
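Tracing it on the question's example (pure parameter expansion, no forks):

$ filename=./foo/bar/baz.xml
$ tmp="${filename#*/}"     # strips the leading "./"  ->  foo/bar/baz.xml
$ echo "${tmp%%/*}"
foo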
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32038/" ] }
253,892
I have two (Debian) Linux servers, and I am creating a shell script. On the first one I create an array thus:

#!/bin/bash
target_array=(
"/home/user/direct/filename -p123 -r"
)

That works fine. But when I run this on the other server I get:

Syntax error: "(" unexpected

As far as I can tell both servers are the same. Can anyone shed some light on why this doesn't work? If I type it into the terminal directly it is fine?? It would appear that when I run it as sh scriptname.sh I get the error, but if I run it as ./scriptname.sh it seems to be OK. What's the difference?
When you use ./scriptname.sh, the script executes with /bin/bash, as specified in the #! first line. But when you use sh scriptname.sh, it executes sh, not bash. The sh shell has no syntax to create arrays, but bash has the syntax you used.
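A side-by-side illustration (the error text shown is typical of Debian, where sh points at dash; exact wording may differ on your system):

$ cat scriptname.sh
#!/bin/bash
target_array=( "/home/user/direct/filename -p123 -r" )
echo "${target_array[0]}"

$ ./scriptname.sh        # the #! line picks bash -- works
/home/user/direct/filename -p123 -r

$ sh scriptname.sh       # runs sh (dash), which has no arrays
scriptname.sh: 2: Syntax error: "(" unexpected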
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/253892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
253,903
I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message is different depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable. See the end of this post for some example error messages. Now there are plenty of people who have had this error message, and plenty of responses to them. What's really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present. I have collected these 5 possible causes for the error and how to verify that they are not present on my system:

1. There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max (source). In my case this is set to 60613.
2. Every thread takes some space in the stack. The stack size limit is configured using ulimit -s (source). The limit for my shell used to be 8192, but I have increased it by putting * soft stack 32768 into /etc/security/limits.conf, so ulimit -s now returns 32768. I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service (source), and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container.
3. Every thread takes some memory. A virtual memory limit is configured using ulimit -v. On my system it is set to unlimited, and 80% of my 3 GB of memory are free.
4. There is a limit on the number of processes using ulimit -u. Threads count as processes in this case (source). On my system, the limit is set to 30306, and for the docker daemon and inside docker containers, the limit is 1048576. The number of currently running threads can be found out by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l (source). On my system they are between 700 and 800.
5. There is a limit on the number of open files, which according to some sources is also relevant when creating threads. The limit is configured using ulimit -n. On my system and inside docker, the limit is set to 1048576. The number of open files can be found out using lsof | wc -l (source); on my system it is about 30000.

It looks like before the last reboot I was running kernel 4.2.5-1, now I'm running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this. I have opened a bug report for Arch Linux. What has changed in the kernel that could be causing this?
Here are some example error messages:

Crash dump was written to: erl_crash.dump
Failed to create aux thread

Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable

dpkg: unrecoverable fatal error, aborting: fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2)

test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254

Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable"

[Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in the Linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request and the default values were set by this pull request . After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line:

# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
     Docs: https://docs.docker.com
 Main PID: 2770 (docker)
    Tasks: 502 (limit: 512)
   CGroup: /system.slice/docker.service

Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/lib/systemd/system , but it can also be put/copied into /etc/systemd/system to avoid it being overridden by the package manager. A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc . DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax . Systemd also applies a limit for programs run from a login shell. These default to 4096 per user (will be increased to 12288 ) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf .
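For completeness, a hedged sketch of the same fix as a systemd drop-in override, which survives package upgrades without copying the whole unit file (the drop-in file name is arbitrary):

# /etc/systemd/system/docker.service.d/tasksmax.conf
[Service]
TasksMax=infinity

Then apply it:

# systemctl daemon-reload
# systemctl restart docker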
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/253903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59955/" ] }
253,930
I'm launching a script from a Jenkins server (on RHEL6) that, among other things, uses SCP with "BatchMode yes" to copy a file from a remote machine. The script runs properly outside of Jenkins, but fails inside. The verbose SCP log shows:

debug1: Next authentication method: publickey
debug1: Offering public key: /var/lib/jenkins/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: Trying private key: /var/lib/jenkins/.ssh/id_dsa

So there's something missing from the Jenkins user that is required to log in. It's not a known_hosts entry, or at least the correct host is listed in /var/lib/jenkins/.ssh/known_hosts. What else could it be?

Edit : Per request, the command was scp -vvv -o "BatchMode yes" [email protected]:myfile.txt . Here is a more extensive log snippet:

Executing: program /usr/bin/ssh host myserver.com, user myuser, command scp -v -f myfile.txt
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to myserver.com [xxx.xxx.xxx.xxx] port 22.
debug1: Connection established.
debug1: identity file /var/lib/jenkins/.ssh/identity type -1
debug1: identity file /var/lib/jenkins/.ssh/identity-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_rsa type 1
debug1: identity file /var/lib/jenkins/.ssh/id_rsa-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_dsa type -1
debug1: identity file /var/lib/jenkins/.ssh/id_dsa-cert type -1
debug1: identity file /var/lib/jenkins/.ssh/id_ecdsa type -1
debug1: identity file /var/lib/jenkins/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
debug1: match: OpenSSH_6.2 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug3: Wrote 960 bytes for a total of 981
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug3: Wrote 24 bytes for a total of 1005
debug2: dh_gen_key: priv key bits set: 131/256
debug2: bits set: 773/1536
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug3: Wrote 208 bytes for a total of 1213
debug3: check_host_in_hostfile: host myserver.com filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: host myserver.com filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 2
debug3: check_host_in_hostfile: host xxx.xxx.xxx.xxx filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: host xxx.xxx.xxx.xxx filename /var/lib/jenkins/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 3
debug1: Host 'myserver.com' is known and matches the RSA host key.
debug1: Found key in /var/lib/jenkins/.ssh/known_hosts:2
debug2: bits set: 759/1536
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: Wrote 16 bytes for a total of 1229
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug3: Wrote 48 bytes for a total of 1277
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /var/lib/jenkins/.ssh/identity ((nil))
debug2: key: /var/lib/jenkins/.ssh/id_rsa (0x7f38a83ee310)
debug2: key: /var/lib/jenkins/.ssh/id_dsa ((nil))
debug2: key: /var/lib/jenkins/.ssh/id_ecdsa ((nil))
debug3: Wrote 64 bytes for a total of 1341
debug1: Authentications that can continue: publickey,keyboard-interactive
debug3: start over, passed a different list publickey,keyboard-interactive
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey
debug3: authmethod_lookup publickey
debug3: remaining preferred: ,gssapi-with-mic,publickey
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /var/lib/jenkins/.ssh/identity
debug3: no such identity: /var/lib/jenkins/.ssh/identity
debug1: Offering public key: /var/lib/jenkins/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug3: Wrote 368 bytes for a total of 1709
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug2: input_userauth_pk_ok: SHA1 fp 72:a5:45:d3:f2:6d:15:c4:2e:f9:37:34:44:10:2b:b9:59:ee:18:c0
debug3: sign_and_send_pubkey: RSA 72:a5:45:d3:f2:6d:15:c4:2e:f9:37:34:44:10:2b:b9:59:ee:18:c0
debug1: Trying private key: /var/lib/jenkins/.ssh/id_dsa
debug3: no such identity: /var/lib/jenkins/.ssh/id_dsa
debug1: Trying private key: /var/lib/jenkins/.ssh/id_ecdsa
debug3: no such identity: /var/lib/jenkins/.ssh/id_ecdsa
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey,keyboard-interactive).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/253930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77505/" ] }
253,933
In the process of trying to diagnose WiFi dropouts, I discovered that the regulatory domain on my WiFi interface is set to "world" (00), and changing it to my region (US) should help fix the issue. However, every attempt I've made to do so has been ignored. Running iw reg set US has no evident effect:

$ iw reg get
country 00: DFS-UNSET
        (2402 - 2472 @ 40), (6, 20), (N/A)
        (2457 - 2482 @ 40), (6, 20), (N/A), PASSIVE-SCAN
        (2474 - 2494 @ 20), (6, 20), (N/A), NO-OFDM, PASSIVE-SCAN
        (5170 - 5250 @ 160), (6, 20), (N/A), PASSIVE-SCAN
        (5250 - 5330 @ 160), (6, 20), (0 ms), DFS, PASSIVE-SCAN
        (5490 - 5730 @ 160), (6, 20), (0 ms), DFS, PASSIVE-SCAN
        (5735 - 5835 @ 80), (6, 20), (N/A), PASSIVE-SCAN
        (57240 - 63720 @ 2160), (N/A, 0), (N/A)
$ sudo iw reg set US
$ iw reg get
country 00: DFS-UNSET
        (2402 - 2472 @ 40), (6, 20), (N/A)
        (2457 - 2482 @ 40), (6, 20), (N/A), PASSIVE-SCAN
        (2474 - 2494 @ 20), (6, 20), (N/A), NO-OFDM, PASSIVE-SCAN
        (5170 - 5250 @ 160), (6, 20), (N/A), PASSIVE-SCAN
        (5250 - 5330 @ 160), (6, 20), (0 ms), DFS, PASSIVE-SCAN
        (5490 - 5730 @ 160), (6, 20), (0 ms), DFS, PASSIVE-SCAN
        (5735 - 5835 @ 80), (6, 20), (N/A), PASSIVE-SCAN
        (57240 - 63720 @ 2160), (N/A, 0), (N/A)

After extensive Googling on the subject, it seems that what's supposed to happen is that iw reg set causes the kernel to emit a udev event, which causes crda to get executed and cough up the relevant regulatory info. However, near as I can tell with udevadm , this event is never emitted. This event's absence is corroborated by the following kluge not working:

$ sudo iw reg set US; sudo COUNTRY=US crda
Failed to set regulatory domain: -7

The error message is from crda . The kernel will accept WiFi regulatory changes only if it has emitted a udev event/request for them and is expecting a response. Since crda fails, the kernel clearly wasn't expecting it, suggesting no udev event was emitted. The WiFi interface is an Intel 7265D, whose kernel driver is iwlmvm . I have crda and wireless-regdb installed, and /etc/default/crda contains REGDOMAIN=US . Removing and reloading the iwlmvm driver has no effect. Any suggestions of what more to check?
I tried revisiting this issue yesterday, and still have the problem even with kernel 4.6.3. Manually installing the latest firmware image also didn't help. However, trying iw reg set US on a second laptop running the same kernel worked fine. The problem machine is a Thinkpad X1 Carbon (Gen. 3), which has an Intel 7265D WiFi card; the working machine is a Thinkpad T440p, which has an Intel 7260. I therefore conclude that there's a bug in the 7265D driver or firmware.

Workaround

I also discovered a workaround for the 7265D. Be aware this is a workaround, and may cause conflicts if/when an actual fix is released:

Remove all WiFi kernel drivers and dependent modules:

sudo modprobe -r iwlmvm

Install the cfg80211 kernel module, using a kernel parameter to force the regulatory domain (in this case, 'US'):

sudo modprobe cfg80211 ieee80211_regdom=US

Re-install the WiFi kernel drivers:

sudo modprobe iwlmvm

You should now see the WiFi interface configured for the US (or whatever) regulatory domain:

$ iw reg get
country US: DFS-FCC
        (2402 - 2472 @ 40), (N/A, 30), (N/A)
        (5170 - 5250 @ 80), (N/A, 17), (N/A)
        (5250 - 5330 @ 80), (N/A, 23), (0 ms), DFS
        (5490 - 5730 @ 160), (N/A, 23), (0 ms), DFS
        (5735 - 5835 @ 80), (N/A, 30), (N/A)
        (57240 - 63720 @ 2160), (N/A, 40), (N/A)

Update 2016.11.17: Fixed in Kernel 4.8 Series

I checked this issue again today for the first time after updating a couple weeks ago to a 4.8.x kernel, and discovered that the WiFi interface now seems to be properly accepting the regulatory domain. This happened in or prior to kernel rev 4.8.5.

$ iw reg get
global
country 00: DFS-UNSET
        (2402 - 2472 @ 40), (6, 20), (N/A)
        (2457 - 2482 @ 20), (6, 20), (N/A), AUTO-BW, PASSIVE-SCAN
        (2474 - 2494 @ 20), (6, 20), (N/A), NO-OFDM, PASSIVE-SCAN
        (5170 - 5250 @ 80), (6, 20), (N/A), AUTO-BW, PASSIVE-SCAN
        (5250 - 5330 @ 80), (6, 20), (0 ms), DFS, AUTO-BW, PASSIVE-SCAN
        (5490 - 5730 @ 160), (6, 20), (0 ms), DFS, PASSIVE-SCAN
        (5735 - 5835 @ 80), (6, 20), (N/A), PASSIVE-SCAN
        (57240 - 63720 @ 2160), (N/A, 0), (N/A)
phy#0 (self-managed)
country US: DFS-UNSET
        (2402 - 2482 @ 40), (6, 22), (N/A), AUTO-BW, NO-HT40PLUS, NO-80MHZ, NO-160MHZ
        (5170 - 5250 @ 80), (6, 22), (N/A), NO-OUTDOOR, AUTO-BW, IR-CONCURRENT, NO-HT40PLUS, NO-160MHZ, PASSIVE-SCAN
        (5250 - 5330 @ 80), (6, 22), (0 ms), DFS, AUTO-BW, NO-HT40PLUS, NO-160MHZ, PASSIVE-SCAN
        (5490 - 5730 @ 80), (6, 22), (0 ms), DFS, AUTO-BW, NO-HT40PLUS, NO-160MHZ, PASSIVE-SCAN
        (5735 - 5815 @ 80), (6, 22), (N/A), AUTO-BW, IR-CONCURRENT, NO-HT40PLUS, NO-160MHZ, PASSIVE-SCAN
        (5815 - 5835 @ 20), (6, 22), (N/A), AUTO-BW, IR-CONCURRENT, NO-HT40MINUS, NO-HT40PLUS, NO-80MHZ, NO-160MHZ, PASSIVE-SCAN
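If the workaround helps, a hedged sketch of making the module option persist across reboots (the file name under /etc/modprobe.d is arbitrary; the parameter is the same ieee80211_regdom used above):

# /etc/modprobe.d/cfg80211.conf
options cfg80211 ieee80211_regdom=US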
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/253933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49524/" ] }
253,938
I have a folder with a complicated folder structure:

├── folder1
│   ├── 0001.jpg
│   └── 0002.jpg
├── folder2
│   ├── 0001.jpg
│   └── 0002.jpg
├── folder3
│   └── folder4
│       ├── 0001.jpg
│       └── 0002.jpg
└── folder5
    └── folder6
        └── folder7
            ├── 0001.jpg
            └── 0002.jpg

I would like to flatten the folder structure such that all the files reside in the parent directory with unique names such as folder1_0001.jpg, folder1_0002.jpg ... folder5_folder6_folder7_0001.jpg etc. I have attempted to use the code suggested in "Flattening folder structure":

$ find */ -type f -exec bash -c 'file=${1#./}; echo mv "$file" "${file//\//_}"' _ '{}' \;

The echo demonstrates that it is working:

mv folder3/folder4/000098.jpg folder3_folder4_000098.jpg

But the output files are not placed in the parent directory. I have searched the entire drive and cannot find the output files. I have also attempted "Flatten a folder structure to a file name in Bash":

$ find . -type f -name "*.jpg" | sed 'h;y/\//_/;H;g;s/\n/ /g;s/^/cp -v /' | sh

The -v demonstrates that it is working:

‘./folder3/folder4/000098.jpg’ -> ‘._folder3_folder4_000098.jpg’

However the output creates hidden files in the parent directory; this complicates my workflow. I am able to view these hidden files in the parent directory using ls -a . I have also tried the suggested code below from "Renaming Duplicate Files with Flatten Folders Command":

find . -mindepth 2 -type f | xargs mv --backup=numbered -t . && find . -type d -empty -delete

But the command overwrites files with similar file names. Any suggestions on how to flatten a complicated folder structure without overwriting files with similar names? The current solutions seem to only work on folder structures one layer deep. My ultimate goal is to convert the unique names into sequential numbers as described in "Renaming files in a folder to sequential numbers":

a=1
for i in *.jpg; do
  new=$(printf "%04d.jpg" "$a") # 04 pad to length of 4
  mv -- "$i" "$new"
  let a=a+1
done
I have no idea why the first solution in your question wouldn't work. I can only assume you forgot to remove the echo . Be that as it may, here's another approach that should also do what you need, assuming you're running bash :

shopt -s globstar
for i in **/*jpg; do mv "$i" "${i//\//_}"; done

Explanation

shopt -s globstar turns on bash's globstar feature, which makes ** recursively match any number of directories or files.
for i in **/*jpg; will iterate over all files (or directories) whose name ends in jpg , saving each as $i .
"${i//\//_}" is the name of the current file (or directory) with all instances of / replaced with _ .

If you can also have directories with names ending in .jpg and want to skip them, do this instead (the echo makes it a dry run; remove it to actually move the files):

shopt -s globstar
for i in **/*jpg; do [ -f "$i" ] && echo mv "$i" "${i//\//_}"; done

And for all files, irrespective of extension:

shopt -s globstar
for i in **/*; do [ -f "$i" ] && echo mv "$i" "${i//\//_}"; done
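Since the stated end goal is sequential numbering, here is a hedged sketch that merges the flattening pass with the numbering loop from the question (assumes bash; collisions cannot occur because every file receives a fresh number):

shopt -s globstar
a=1
for i in **/*.jpg; do
    [ -f "$i" ] || continue            # skip any directories ending in .jpg
    printf -v new '%04d.jpg' "$a"      # zero-padded sequential name
    mv -- "$i" "$new"
    a=$((a+1))
done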
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150389/" ] }
253,940
I have a script that I use for iperf to be able to start, stop, and check the status of it like a typical service. If I issue a "service iperf start" or a "service iperf stop" it works correctly. When I try to issue a "service iperf status" it always returns it as running even though it's not.

When running:

# service iperf status
/usr/bin/iperf pid 37567 is running!!!

When not running:

# service iperf status
/usr/bin/iperf pid is running!!

Here is my entire script:

#!/bin/bash
# chkconfig: - 50 50
# description: iperf

DAEMON=/usr/bin/iperf
service=iperf
info=$(pidof /usr/bin/iperf)
is_running=`ps aux | grep -v grep | grep iperf | wc -l | awk '{print $1}'`

case "$1" in
  start)
    $DAEMON -s -D
    ;;
  stop)
    pidof $DAEMON | xargs kill -9
    ;;
  status)
    if [ $is_running != "1" ]; then
      echo $DAEMON pid $info is running!!!
    else
      echo $DAEMON is NOT running!!!
    fi
    ;;
  restart)
    $DAEMON stop
    $DAEMON start
    ;;
  *)
    echo "Usage: $0 {start|stop|status|restart}"
    exit 1
    ;;
esac

It seems like it's not able to read past the first echo statement in the status section of the script. I have run the ps command outside of the script from the command line to verify I am getting the correct output. If the service is running it returns a "1" just like the script checks for. If it's not running it returns a "0", so it should move on to the next echo statement, but it's not. Can anyone tell me how to fix this? I am on RHEL 6.7, bash version 4.1.2.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81926/" ] }
253,954
I want to know if there is an easy way to split a line in a file into multiple lines, such as this. I have:

A B C
1 2 3 4

And I want to get something like this, based on the first string of the line:

A B
A C
1 2
1 3
1 4

Basically, based on the first string in the line, create multiple lines with the second and third and fourth string and so on.
Here goes:

awk '{for (i=2; i<=NF; ++i) print $1, $i}' file
A B
A C
1 2
1 3
1 4
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/253954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148472/" ] }
254,154
How can I change folder and file permissions for all files and folders recursively inside the current directory? I am not sure why, but my command fails with this output:

chmod: missing operand after '644./components/path/path/path'

My command is:

find . * -type d -exec chmod 755{} \;

As user pdo pointed out, I want to change folder permissions to 755.
You're missing a space between the mode and {} - as written, chmod sees a single operand like 644./components/... and no file to operate on. Also, 644 is probably not what you want on a directory. You probably want 755.

Edit to include answer from comments below:

For directories:

find . -type d -exec chmod 755 {} \;

For files:

find . -type f -exec chmod 644 {} \;

There are very likely other (maybe shorter) ways to do this, but this will work.
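Two hedged refinements on the same idea: terminating -exec with + batches many paths into each chmod invocation instead of forking once per file, and a single chmod -R with the symbolic X bit can often replace both find runs (note that X also keeps the execute bit on files that already had one):

find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +
# or, in one pass:
chmod -R u=rwX,go=rX .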
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254154", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106176/" ] }
254,280
Running, either in promiscuous mode or not:

tcpdump -i "$INTERFACE" -vvv -n -XX -S -s0 -e

I got a bunch of lines and this conclusion when I stopped it:

601 packets captured
938 packets received by filter
230 packets dropped by kernel

Why the difference? Where are the 107 packets missing? And is it possible at all to get/capture 100% of the packets on the local network - it's just me behind a router?
When tcpdump "drops" packets, is because it has not enough buffer space to keep up with the packets arriving from the network. The difference between packets captured and received can be due to implementations of the OS or tcpdump, or more commonly due to aborting the process with ^C. Setting the buffer size per packet with "s0" has the consequence of setting it as 64KB per man tcpdump ; normally at most I set it up as 1500 if using -X to see the whole packet, and if only using tcpdump to watch headers even less than that is needed - 160 bytes which is the size of IPv4 headers. Normally working with the screen is also slower, if needing speed I would direct the output to a file if you have no need to watch it in true realtime. From man "tcpdump": "Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. You should limit snaplen to the smallest number that will capture the protocol information you're interested in."
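A hedged example combining those suggestions - a small snaplen, writing to a file instead of the screen, and (where the tcpdump build supports it) -B to enlarge the capture buffer, in KiB:

tcpdump -i "$INTERFACE" -s 160 -B 4096 -w capture.pcap
# inspect later, offline:
tcpdump -r capture.pcap -vvv -n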
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50019/" ] }
254,286
This is OpenSUSE Leap 42. I have a computer with 2x 500 GB SATA HDD drives and, to speed it up, I put in a small 30 GB SSD drive for the system. During installation the HDDs were disconnected as they confused the installer (and me). Once the system was up I quite easily exchanged the /home directory for an XFS logical volume (I use LVM primarily to add space easily). Then /opt filled up (chrome and botanicula) and I wanted to put that on a volume on HDD. So I created a volume and formatted it with BTRFS. After some head scratching - the @ subvolumes in fstab made me read up on BTRFS - I did what I needed: /opt now is 100 GB in size. But the question is: does it make sense to format an LVM volume with btrfs? Essentially they both are volume handling systems. For illustration I paste my fstab (# comments show my edits) and vgscan + lvscan output:

~> cat /etc/fstab
UUID=1b511986-9c20-4885-8385-1cc03663201b swap                    swap  defaults 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /                       btrfs defaults 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /boot/grub2/i386-pc     btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /boot/grub2/x86_64-efi  btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=3e103686-52e9-44ac-963f-5a76177af56b /opt                    btrfs defaults 0 0
#UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /opt                   btrfs subvol=@/opt 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /srv                    btrfs subvol=@/srv 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /tmp                    btrfs subvol=@/tmp 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /usr/local              btrfs subvol=@/usr/local 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/crash              btrfs subvol=@/var/crash 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mailman        btrfs subvol=@/var/lib/mailman 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mariadb        btrfs subvol=@/var/lib/mariadb 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mysql          btrfs subvol=@/var/lib/mysql 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/named          btrfs subvol=@/var/lib/named 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/pgsql          btrfs subvol=@/var/lib/pgsql 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/log                btrfs subvol=@/var/log 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/opt                btrfs subvol=@/var/opt 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/spool              btrfs subvol=@/var/spool 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/tmp                btrfs subvol=@/var/tmp 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /.snapshots             btrfs subvol=@/.snapshots 0 0
UUID=c4c4f819-a548-4881-b854-a0ed62e7952e /home                   xfs   defaults 1 2
#UUID=e14edbfa-ddc2-4f6d-9cba-245d828ba8aa /home                  xfs   defaults 1 2

# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "r0data" using metadata type lvm2
  Found volume group "r0sys" using metadata type lvm2
# lvscan
  ACTIVE '/dev/r0data/homer' [699.53 GiB] inherit
  ACTIVE '/dev/r0sys/optr' [100.00 GiB] inherit

After the answer: Thanks, I understand the key differences now. To me LVM is indeed better for managing space with whatever filesystems on top of it, but BTRFS should be used for features specific to it - mainly snapshots. In simple home network use it is probably better to stay away from it. I've had too much grief managing space on a small drive, but I'd imagine space would be eaten away also on big drives.
Maybe this explains (from the btrfs wiki, by the way):

A subvolume in btrfs is not the same as an LVM logical volume or a ZFS subvolume. With LVM, a logical volume is a block device in its own right (which could for example contain any other filesystem or container like dm-crypt, MD RAID, etc.) - this is not the case with btrfs. A btrfs subvolume is not a block device (and cannot be treated as one); instead, a btrfs subvolume can be thought of as a POSIX file namespace. This namespace can be accessed via the top-level subvolume of the filesystem, or it can be mounted in its own right.

See also https://btrfs.wiki.kernel.org/index.php/FAQ

Interaction with partitions, device managers and logical volumes

Btrfs has subvolumes, does this mean I don't need a logical volume manager and I can create a big Btrfs filesystem on a raw partition? There is not a single answer to this question. Here are the issues to think about when you choose raw partitions or LVM:

Performance
- raw partitions are slightly faster than logical volumes
- btrfs does write optimisation (sequential writes) across a filesystem
- subvolume write performance will benefit from this algorithm
- creating multiple btrfs filesystems, each on a different LV, means that the algorithm can be ineffective (although the kernel will still perform some optimization at the block device level)

Online resizing and relocating the filesystem across devices
- the pvmove command from LVM allows filesystems to move between devices while online
- raw partitions can only be moved to a different starting cylinder while offline
- raw partitions can only be made bigger if there is free space after the partition, while LVM can expand an LV onto free space anywhere in the volume group - and it can do the resize online

subvolume/logical volume size constraints
- LVM is convenient for creating fixed size logical volumes (e.g. 10MB for each user, 20GB for each virtual machine image, etc)
- subvolumes don't currently enforce such rigid size constraints, although the upcoming qgroups feature will address this issue

.... the FAQ continues to explain the scenarios in which LVM+BTRFS makes sense
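To make the distinction concrete, a hedged side-by-side sketch (names are illustrative): an LVM logical volume is a new block device that you then format, while a btrfs subvolume is created inside an already-mounted btrfs filesystem:

# LVM: carve a block device out of a volume group, then format it
lvcreate -L 100G -n optr r0sys
mkfs.btrfs /dev/r0sys/optr

# btrfs: create a subvolume (and a snapshot of it) inside /opt
btrfs subvolume create /opt/@data
btrfs subvolume snapshot /opt/@data /opt/@data-snap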
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54498/" ] }
254,308
Say I start off from a PDF document, say of 12 pages, viewed with evince . To produce another PDF of 6 sheets, with a page setup of two pages per side, I normally use the "Print to File" device listed in the ^P dialogue window. This works out pretty neatly. I would like to translate this operation to the command line. To my understanding, this is not an operation that pdftk can do. Please cross check. The command lp , which would accept the option -o number-up=2 , does not recognize any device called "Print to File", which indeed does not show up in lpstat -p -d . I am aware of the post What is "Print to File" and can it be used from command line? . I have installed cups-pdf, whereby a new printer named PDF is acknowledged. However, the print quality of a simple text file is way too raw (for example, no print margins to start with). Moreover, if I reprint an existing PDF file on this device, say lp -p PDF existing.pdf , evince can't even manage to open that copycatted output, while this is not the case with the "Print to File" way. I had a look at man evince . At the bottom, it touches upon a few print preview options and redirects to a GNOME-developer project page . Admittedly I am not able to make sense and use of it. Is there actually a way to combine the flexibility of the command line with the print quality that I obtain from that "Print to File" option in the GUI evince? My test case, again, would be to create from the command line a PDF out of a source document printed with two pages per sheet. Thanks for thinking along.
There is the pdfnup (or pdfjam ) command line tool. You can install it from the repositories of your distribution ( sudo apt-get install pdfjam for Debian-based distributions, yaourt -S pdfnup on Arch etc). The default options will take the input PDF file and produce an output PDF with two input pages per page:

pdfnup -o output.pdf input.pdf
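If you want to control the layout explicitly rather than rely on the defaults, pdfjam (which pdfnup wraps) takes the options spelled out; a hedged example:

pdfjam --nup 2x1 --landscape --outfile output.pdf input.pdf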
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132913/" ] }
254,316
While creating a swapfile I issue vi /etc/fstab as root. The fstab file comes up, great. I issue G $ to jump to the end of the line and i to enter INSERT mode. I press the right arrow to move the cursor over by one character to the right. vi inserts a capital C on a new line. Confused, I press the left arrow key. vi inserts a capital B . I'm pretty confused. Where do I even start to figure out what's happening here? I need to be able to edit files with vi .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150831/" ] }
254,328
This question: "How can I get my external IP address in bash?" solves the problem with a call to dig:

dig +short myip.opendns.com @resolver1.opendns.com

That is the fastest resolution possible, as it involves a single UDP packet. However, that is only one site: OpenDNS. Is there any alternative? And it makes use of dig, which is not available by default. Again, is there an alternative?

Note: I needed to solve this problem as the OpenDNS service didn't work locally due to a (also local) redirect of port 53. I finally found out the reason (I had forgotten about the redirect). First, by using this command to find out if dnssec is working (locally missing ad flag):

dig pir.org +dnssec +multi

And also using this command to find out if your ISP is redirecting OpenDNS resolutions:

host -t txt which.opendns.com 208.67.220.220

If being redirected, you will get an answer of:

"I am not an OpenDNS resolver."
An alternative is Google DNS:

myip(){ dig @8.8.8.8 -t txt o-o.myaddr.l.google.com | grep "client-subnet" | grep -o "\([0-9]\{1,3\}\.\)\{3\}\([0-9]\{1,3\}\)" ; }

If your system does not have dig, then host will be exactly equivalent:

myip(){ host -t txt o-o.myaddr.l.google.com 8.8.8.8 | grep -oP "client-subnet \K(\d{1,3}\.){3}\d{1,3}"; }

Both GNU grep (Perl-like -P ) and basic (BRE) regex versions are shown. And, of course, the original site will also work. With dig:

myip(){ dig myip.opendns.com @208.67.220.222 | grep "^myip\.opendns\.com\." | grep -o "\([0-9]\{1,3\}\.\)\{3\}\([0-9]\{1,3\}\)" ; }

And with host:

myip(){ host myip.opendns.com 208.67.220.222 | grep -oP "^myip\.opendns\.com.* \K(\d{1,3}\.){3}(\d{1,3})" ; }

The two snippets above will work with any of these four OpenDNS resolver addresses:

echo 208.67.22{0,2}.22{0,2}

If the direct DNS solution fails in the future, use curl to one of these sites (urls):

IFS=$'\n' read -d '' -a urls <<-'_end_of_text_'
api.ipify.org
bot.whatismyipaddress.com/
canhazip.com/
checkip.dyndns.com/
corz.org/ip
curlmyip.com/
eth0.me/
icanhazip.com/
ident.me/
ifcfg.me/
ifconfig.me/
ip.appspot.com/
ipecho.net/plain
ipof.in/txt
ip.tyk.nu/
l2.io/ip
tnx.nl/ip
wgetip.com/
whatismyip.akamai.com/
_end_of_text_

Call an array address like this:

$ i=5; curl -m10 -L "http://${urls[i]}"
116.132.27.203

On some sites https may also work.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254328", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
254,340
How to create an alias that actually extends another alias of the same name in Bash? Why: I used to have GREP_OPTIONS set in .bashrc to something like this:

GREP_OPTIONS="-I --exclude=\*~"

I also had a script (let us say, setup-java.sh ) which I would call before working on some Java projects. It would contain the line:

GREP_OPTIONS="$GREP_OPTIONS --exclude-dir=classes"

If I also use Sass, then I would call setup-sass.sh which contains the line:

GREP_OPTIONS="$GREP_OPTIONS --exclude-dir=\*/.sass-cache"

But GREP_OPTIONS was deprecated and apparently the standard solution is either to create an alias or some script...
Bash stores the values of aliases in an array called BASH_ALIASES :

$ alias foo=bar
$ echo ${BASH_ALIASES[foo]}
bar

With parameter expansion we can get either the last set alias (if it exists) or the default value:

alias grep="${BASH_ALIASES[grep]:-grep} -I --exclude=\*~"

Now just do it in setup-java.sh :

alias grep="${BASH_ALIASES[grep]:-grep} -I --exclude=\*~ --exclude-dir=classes"

...and finally in setup-sass.sh :

alias grep="${BASH_ALIASES[grep]:-grep} -I --exclude=\*~ --exclude-dir=\*/.sass-cache"

If the three lines are called, we get what we want:

$ echo ${BASH_ALIASES[grep]:-grep}
grep -I --exclude=\*~ -I --exclude=\*~ --exclude-dir=classes -I --exclude=\*~ --exclude-dir=\*/.sass-cache
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3084/" ] }
254,352
We were taught Windows when I was in school. I then opted for a Bachelor course in Computer Engineering where we were made to use Linux instead of Windows. They never taught the basics of Linux and directly started teaching programming. It was not that difficult for me because I was somewhat familiar with Linux. But I still have some doubts about the administrator or root or superuser system in Linux. My question comprises several different sub-questions. Here they are:

1. I know that # (hash) in the terminal prompt means that one is operating as a superuser and $ (dollar) means that one is not. But even though my account is an Administrator account I don't see # on my terminal prompt. Instead I have to log in using the su command to have administrative rights. Why is that?

2. Are the terms Administrator, root and superuser the same? I'm confused because in Windows there is just the one term Administrator, and if your account is an Admin account, then you are basically logged in with Admin privileges all the time, i.e. one doesn't have to explicitly log in as an Admin, unlike in Linux.

3. We have Ubuntu installed on our college computers, where if you have to log in as admin, you type in su and it prompts for a password, where you have to enter your current account password to become the superuser. But I didn't like the Ubuntu design so I switched to and installed Fedora on my laptop, where the installation asked me for two passwords: one for my normal account (which has admin rights) and the other for the 'root' user. So does that mean whenever I have to log in as an admin using my normal account I have to use my 'root' password instead of my normal password? And if that's the case, why did the installer ask for my normal password if it won't give me admin rights directly?

EDIT: Can someone explain the admin system in Linux to me? What is root? Why do I not have admin rights despite being an admin?
Every account on a Unix/Linux system has a numeric identifier, the "user ID" or UID. By convention, UID 0 (zero) is named "root", and is given special privileges (generally, the permission to access anything on the system). You could just log in as the root user directly, if you have the root password. However, it's generally considered bad practice to do so. For one thing, it's often the case that Unix/Linux gives you plenty of room to shoot yourself in the foot with no safety — there are many typos and accidents from which the easiest recovery is to do a complete reinstall and/or restore from backup. So, having to actually switch to root when you need to keeps you from accidentally doing something you didn't mean to. It also helps limit the spread of malware. If your web browser is running under UID 0 — "as root", we say — then a programming bug could be exploited by remote websites to take complete control over your computer. Keeping to a "regular" user account limits that damage. Both of these follow a general best practice called "the principle of least privilege" — which, honestly, is a good thing to follow in system design in general. You can read more in specific about reasons to not always run as root under Concern about logging in as root overrated? Now, that leaves the question of how you can get access to protected things as a non-root user. There are two basic ways — su and sudo . The first requires the root password, and the second, in usual configuration, requires your password. It's often the case that you use sudo to run a single command "as root", rather than switching to the root account for a whole session. (You can also do this with su -c , something you will often see in documentation.) For a long discussion of the relative merits of these, see Which is the safest way to get root privileges: sudo, su or login? . (And, for completeness, there are other mechanisms which aren't sudo but work in the same way, like PackageKit, usually used for GUI applications.) You ask whether the terms "root", "superuser", and "administrator" are the same. "Root" and "superuser" basically are. To be precise, one might say: "The root account is the superuser, because it has UID 0." "Administrator" could mean the same thing, but in Fedora, we* use it in a slightly different way. Not every user on the system has the power to get root privileges via sudo. In Fedora in the default setup, members of the group wheel can do this. And, in the installer and in the documentation and other places, we call this an "administrator account". One that isn't root, but has the power to access root privileges. (Oh, and one final thing: that # vs $ in your prompt is just a visual convention and isn't definitive. You can change the environment variable PS1 to make the prompt do all sorts of things.) * I work on Fedora.
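A short illustrative session of the setup described above (the user name and output are hypothetical):

$ whoami
alice
$ groups
alice wheel            # wheel membership = "administrator account" in Fedora's sense
$ sudo whoami          # asks for alice's own password
root
$ su -                 # asks for the root account's password instead
Password:
#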
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150849/" ] }
254,359
I am trying to clarify my understanding of terminals here. A terminal is actually a device (keyboard + monitor). When in CLI mode, the input from your keyboard goes directly to the shell and is also displayed on the monitor. Meanwhile, when using GUI mode, you have to open a terminal emulator program to interact with the shell. The input from your keyboard goes to the terminal emulator program and is also displayed in the terminal emulator window on the monitor. The input does not go directly to the shell. The terminal emulator program relays the input from your keyboard to the shell. The terminal emulator program communicates with the shell using a pseudo-terminal. There is no terminal emulator program involved when you go straight to CLI from boot. Please comment and correct me if anything is wrong with my understanding.

Update: I read back TTY demystified . I think what I should ask is the difference between a text terminal (boot straight to text mode) and a GUI terminal, because I thought terminal = text terminal and terminal emulator = GUI terminal, e.g. Gnome Terminal, which is wrong. From the answers before this update, a user in text mode is actually using a terminal emulator program (user space) too, like in GUI mode. May I know whether it is the TTY program, because I found a TTY process when running 'ps aux'? I never knew there was a terminal emulator program involved in text mode too (not referring to the terminal emulator in kernel space).

Update2: I read Linux console . According to it, text mode is the console, while the terminal software in GUI mode is a terminal emulator. Well, it makes sense and it is the same as my understanding before. However, according to the diagram from TTY demystified , the terminal emulator is in kernel space instead of user space. Interestingly, the diagram refers to text mode.
There are a few separate terms here that need to be independently defined:

Terminal: A real keyboard/monitor interface, like the VT100: https://en.wikipedia.org/wiki/VT100

Terminal emulator (TTY): Emulates a terminal, providing input and output. Press ctrl+alt+F2 in most Linux distros, and you'll be in one. Type "w" in the terminal, and you'll see your w command run by a "tty".

Pseudo-terminal (PTY): A master/slave pair (ptmx) in which some other piece of software like ssh or a GUI terminal provides a terminal-like interface through a slave (pts). http://linux.die.net/man/4/ptmx Type "w" in a GUI terminal, and you'll see the w command listed as coming from a pts.

Shell: A command line interpreter which is run on login, and interprets your input. Examples of this are bash/zsh.

However, please keep in mind that these terms have all become interchangeable in conversation. When someone refers to the "terminal", "terminal emulator", "console", "command line", or "shell", unless context specifies otherwise, they are probably referring more generally to: "That text-based thing with which I control some computer."

Edit for question update

See below for all processes matching pts or tty:

[root@localhost tests]# ps fauxww | grep -P [pt]t[ys]
root      2604  2.3  0.8  50728 34576 tty1   Ss+  07:09  1:15  \_ /usr/bin/Xorg :0 -br -verbose -audit 4 -auth /var/run/gdm/auth-for-gdm-VRHaoJ/database -nolisten tcp vt1
root      2569  0.0  0.0   2008   500 tty2   Ss+  07:09  0:00 /sbin/mingetty /dev/tty2
root      2571  0.0  0.0   2008   500 tty3   Ss+  07:09  0:00 /sbin/mingetty /dev/tty3
root      2573  0.0  0.0   2008   504 tty4   Ss+  07:09  0:00 /sbin/mingetty /dev/tty4
root      2575  0.0  0.0   2008   500 tty5   Ss+  07:09  0:00 /sbin/mingetty /dev/tty5
root      2577  0.0  0.0   2008   504 tty6   Ss+  07:09  0:00 /sbin/mingetty /dev/tty6
sin       3374  0.2  0.7  90668 28564 ?      Sl   07:13  0:09 /usr/bin/python /usr/bin/terminator   <<< Added this parent of 3377 manually to see the pts source
sin       3377  0.0  0.0   2076   620 ?      S    07:13  0:00  \_ gnome-pty-helper
sin       3378  0.0  0.0   5236  1712 pts/0  Ss   07:13  0:00  \_ /bin/bash
root      4054  0.0  0.0   5124  1676 pts/0  S    07:23  0:00  |   \_ bash
root      5034  0.0  0.0   5056  1092 pts/0  R+   08:03  0:00  |       \_ ps fauxww
root      5035  0.0  0.0   4416   740 pts/0  S+   08:03  0:00  |       \_ grep -P [pt]t[ys]
sin       4154  0.0  0.0   5236  1708 pts/1  Ss   07:23  0:00  \_ /bin/bash
sin       4485  0.0  0.0   7252  3500 pts/1  S+   07:41  0:00      \_ python

You'll see both pts and tty related processes. You assume that because you see tty in ps, this is the thing your GUI terminal is using, but it's not. In this case, the mingetty TTY processes are all the ones I can use with ctrl+shift+F2-6, and the pty are slaves related to my GUI terminal process. To see for sure, check lsof of your GUI terminal's process:

[root@localhost tests]# ps fauxww | grep terminator
sin       3374  0.2  0.7  90668 28564 ?      Sl   07:13  0:08 /usr/bin/python /usr/bin/terminator
[root@localhost tests]# lsof -p 3374 | grep '[pt]t[ys]'
/usr/bin/ 3374  sin  17u  CHR  136,0  0t0  3  /dev/pts/0
/usr/bin/ 3374  sin  25u  CHR  136,1  0t0  4  /dev/pts/1

When you boot into text mode, you're going into a TTY just like if you press ctrl+alt+f2 from the desktop. When you use SSH/GUI terminals, you're using a pseudo-terminal.
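A quick hedged check you can run on your own machine: the tty command prints the terminal device your shell is attached to (device numbers will differ):

$ tty        # inside a GUI terminal emulator
/dev/pts/0
$ tty        # on a text console reached with ctrl+alt+F2
/dev/tty2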
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63649/" ] }
254,367
In bash scripting we can create a variable by just naming it:

abc=ok

or we can use declare :

declare abc=ok

What's the difference? And why does bash provide so many ways to create a variable?
From help -m declare :

NAME
    declare - Set variable values and attributes.

SYNOPSIS
    declare [-aAfFgilnrtux] [-p] [name[=value] ...]

DESCRIPTION
    Set variable values and attributes.

    Declare variables and give them attributes. If no NAMEs are given, display the attributes and values of all variables.

    Options:
      -f    restrict action or display to function names and definitions
      -F    restrict display to function names only (plus line number and source file when debugging)
      -g    create global variables when used in a shell function; otherwise ignored
      -p    display the attributes and value of each NAME

    Options which set attributes:
      -a    to make NAMEs indexed arrays (if supported)
      -A    to make NAMEs associative arrays (if supported)
      -i    to make NAMEs have the 'integer' attribute
      -l    to convert NAMEs to lower case on assignment
      -n    make NAME a reference to the variable named by its value
      -r    to make NAMEs readonly
      -t    to make NAMEs have the 'trace' attribute
      -u    to convert NAMEs to upper case on assignment
      -x    to make NAMEs export

    Using '+' instead of '-' turns off the given attribute.

    Variables with the integer attribute have arithmetic evaluation (see the let command) performed when the variable is assigned a value.

    When used in a function, declare makes NAMEs local, as with the local command. The '-g' option suppresses this behavior.

    Exit Status:
    Returns success unless an invalid option is supplied or a variable assignment error occurs.

SEE ALSO
    bash(1)

IMPLEMENTATION
    GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu)
    Copyright (C) 2013 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

So, declare is used for setting variable values and attributes . Let me show the use of two attributes with a very simple example:

$ # First Example:
$ declare -r abc=ok
$ echo $abc
ok
$ abc=not-ok
bash: abc: readonly variable

$ # Second Example:
$ declare -i x=10
$ echo $x
10
$ x=ok
$ echo $x
0
$ x=15
$ echo $x
15
$ x=15+5
$ echo $x
20

From the above examples, I think you should understand the usefulness of a declare d variable over a normal variable. This type of declaration is useful in functions and loops when scripting. Also visit Typing variables: declare or typeset
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/254367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106512/" ] }
254,389
I'm trying to use $(patsubst %-%,%:%,$(MAKECMDGOALS)) to replace dashes with colons in the make target but it has no effect. How could I achieve this?
You can only have a single wildcard in patsubst . To replace all dashes with colons, you can use subst :

$(subst -,:,$(MAKECMDGOALS))
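A tiny hedged Makefile demonstration (target name is arbitrary; recipe lines must be tab-indented). Only the first % in a patsubst pattern acts as a wildcard, which is why the two-wildcard pattern had no effect:

demo:
	@echo '$(subst -,:,foo-bar-baz)'    # prints foo:bar:baz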
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13265/" ] }
254,410
From Bash's manual:

6.4 Bash Conditional Expressions

Conditional expressions are used by the [[ compound command and the test and [ builtin commands. Expressions may be unary or binary. Unary expressions are often used to examine the status of a file. There are string operators and numeric comparison operators as well. If the file argument to one of the primaries is of the form /dev/fd/N, then file descriptor N is checked. If the file argument to one of the primaries is one of /dev/stdin, /dev/stdout, or /dev/stderr, file descriptor 0, 1, or 2, respectively, is checked. When used with [[, the '<' and '>' operators sort lexicographically using the current locale. The test command uses ASCII ordering. Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself.

What is the definition of a primary? What is the difference between a primary and an operator or operation?
As noted, this is jargon. The bash reference manual does not define the term; it is assumed that the reader knows about it. You can easily find it used for operands in an arithmetic expression. See for example Arithmetic Expressions in a Fortran 77 Language Reference Manual , which says:

A primary is the basic component in an arithmetic expression. The forms of a primary are the following:

- an unsigned arithmetic constant
- a symbolic name of an arithmetic constant
- an arithmetic variable reference
- an arithmetic array element reference
- an arithmetic function reference
- an arithmetic expression enclosed in parentheses

In POSIX, it is (still) used mostly relying upon the reader's prior knowledge of the term. For instance, in the shell command language, it refers to primaries of the find command:

(such as in the argument to the find -name primary when find is being called using one of the exec functions as defined in the System Interfaces volume of POSIX.1-2008, or in the pattern argument to the fnmatch() function)

and on reading that section it is apparent that primaries means the same as operands. That is, at each level of command-parsing, the command (or primary) has some further primaries to consider until all that are left are constants or variables: aka "operand".
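A concrete hedged illustration with find, whose POSIX description uses the same jargon - each test below is a "primary", and adjacent primaries are joined by an implied -a (AND) operator:

find . -type f -name '*.txt'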
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
254,418
When doing a binary diff between compiled Linux kernels for x86_64, the difference from version to version is relatively large (much more than 25%). The difference in size between the source archives, from version to version, is smaller (around 8% or less). Would the difference in size between binary kernel images for ARM be smaller than for x86_64? I read somewhere that the binary difference between ARM executables is smaller than for x86_64 executables, since the compiled code is placed in more predictable locations, but I can't remember where I found it. Is the difference between versions of binary ARM Linux kernel images smaller than for x86_64?
As for the kernel code: besides the architecture-specific code, which is a very small portion (1% to 5%?), all the kernel source code is common to all architectures.

About the binaries: actually, in most Linux distributions, vmlinuz is a symbolic link that points to the actual gzipped kernel code, like vmlinuz-3.16.0-4-amd64 . I am sure the OP is talking about the latter; however, I mention the former for the benefit of the reader. https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/anatomy_of_the_initrd_and_vmlinuz

While it is indeed true that ARM code is smaller, even if the kernels were not compressed, ARM kernels are often custom made and have far less code activated than their Intel counterparts (e.g. Intel has a lot of video cards, even if just the module stubs, while usually a custom ARM kernel only has to deal with the one present in the SoC). In addition, comparing already-compressed random binary blobs may not always yield the true results, as by some strange coincidence a bigger binary might become smaller due to some compression optimisation.

So in reality, to compare the binary kernels effectively, you have to compile them with identical options and keep them uncompressed (or uncompress the resulting vmlinuzxxx file).

A fair match would be comparing other, non-compressed binaries, for instance /bin/ls or /usr/sbin/tcpdump , and furthermore on an architecture similar to the one we are trying to match (ARM machines are still largely 32-bit, however there are already a few 64-bit ones).

Needless to say, the same code compiled for ARM will always be (far) smaller, because ARM machine code is RISC platform code. It also has a smaller subset of machine code instructions, which results in smaller code. On the other hand, Intel has a bigger set of instructions, also due to the retro-compatibility inheritance of multiple generations of microprocessors.

From http://www.decryptedtech.com/editorials/intel-vs-arm-risc-against-cisc-all-over-again

The concept of the RISC CPU is an old one, and it is also a very efficient one. In RISC CPUs you have smaller workloads being run by each instruction, these instructions are also very often split into I/O and Memory to further eliminate overhead. This makes for very efficient use of CPU and memory time, but can also require some bulky code on the software side to make everything work. When RISC first was developed, it was the way to go simply because memory and HDD access was slow. The bulky and complex instructions found in an x86 CPU were cumbersome on older CPUs and could not keep up with RISC systems.

Nevertheless, the conversation is not that straightforward, as Intel chips are a complex beast nowadays, and deep down the pseudo-CISC layer they have RISC strategies and designs that decode and emulate the Intel opcodes as we know them. The ARM opcodes are also bulky, compared say to MIPS, since ARM is a cheap processor with specialised instructions dedicated to video decoding (around 30% of the processor die is dedicated to them).

As a short exercise, take the tcpdump binary on the four Linux architectures I have access to:

MIPS 32 bits -> 502.4K
ARM 32 bits -> 718K
Intel 32 bits (i386) -> 983K
Intel 64 bits (x86_64) -> 1.1M

So coming back to your original question: ARM kernels "grow" less in size because the architecture has less diversity in the base hardware for a specific distribution.
However, and much more significant,they grow less in size because the code produced is more efficientand compact.
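As a hedged illustration of that "compare uncompressed, identically configured kernels" recipe: the kernel source tree ships a scripts/extract-vmlinux helper for decompressing images. The version numbers and paths below are only examples:

    ./scripts/extract-vmlinux /boot/vmlinuz-4.2.0 > vmlinux-4.2.0
    ./scripts/extract-vmlinux /boot/vmlinuz-4.3.0 > vmlinux-4.3.0
    cmp -l vmlinux-4.2.0 vmlinux-4.3.0 | wc -l   # rough count of differing bytes

Since two versions will normally differ in length anyway, treat the byte count only as a rough signal, not a precise measure.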
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3920/" ] }
254,421
Files are created by default with permission 666, and the umask (in permission-bit form) is subtracted bit-wise from that permission. Given that, can we do something to give execute permission without using the permission characters (r, w, x)? I am referring to using a bit-wise mask, e.g.

    umask 002

not setting permission characters such as

    umask u+x
    umask u=rwx
This is not possible. umask only removes permissions; it never adds them. Thus you get execute permission only if the creating open() syscall itself includes it. That is the case when, for example, a compiler creates an executable file.
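A quick demonstration that even an empty mask cannot add the execute bit (the file name is arbitrary):

    $ umask 000      # mask away nothing
    $ touch f        # touch creates the file via open(..., 0666)
    $ ls -l f
    -rw-rw-rw- 1 user user 0 Jan 15 12:00 f

The resulting bits never exceed the 666 that open() asked for.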
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144104/" ] }
254,427
On Linux,

    cd /tmp
    mkdir foo; cd foo

Now, running

    find . -name 'foo'

gives no output. Whereas running

    find /tmp/foo -name 'foo'

gives the output /tmp/foo, which doesn't make sense to me. Can somebody explain why?
find traverses the specified directory tree(s), and evaluates the given expression for each file that it finds. The traversal starts at the given path. Here's a summary of how find . -name foo operates: First path on the command line: . Does the base name ( . ) match the pattern foo ? No, so do nothing. It so happens that /tmp/foo is another name for the same directory. But find doesn't know that (and it isn't supposed to try to find out). Is the path a directory? Yes, so traverse it. Enumerate the entries in . , and for each entry, perform the traversal process. The directory is empty: it contains no entry other than . and .. , which find does not traverse recursively. So the job is finished. And find /tmp/foo : First path on the command line: /tmp/foo Does the base name ( foo ) match the pattern foo ? Yes, so the condition matches. There is no action associated with this condition, so perform the default action, which is to print the path. Is the path a directory? Yes, so traverse it. Enumerate the entries in /tmp/foo , and for each entry, perform the traversal process. The directory is empty: it contains no entry other than . and .. , which find does not traverse recursively. So the job is finished. It so happens that . and /tmp/foo are the same directory, but that's not enough to guarantee that find has the same behavior on both. The find command has ways to distinguish between paths to the same file; the -name predicate is one of them. find /tmp/foo -name foo matches the starting directory as well as any file underneath it that's called foo . find . -name . matches the starting directory only ( . can never be found during a recursive traversal).
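A minimal reproduction of the behaviour just described (paths as in the question):

    $ mkdir -p /tmp/foo && cd /tmp/foo
    $ find . -name foo          # base name "." doesn't match, and the dir is empty
    $ find /tmp/foo -name foo   # base name "foo" matches the start point itself
    /tmp/foo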
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77414/" ] }
254,440
Say I have these lines in a file:

    {218394 ted 'y' ted}
    {131241 john 'n' ted}

and I want to check: if it's y, continue; if it's n, don't! I have this code so far:

    read -p "Enter your answer : "
    echo "your answer is: $answer"
    if grep -q "$answer" "$sin"
    then
        echo "y"
    else
        echo "n"
    fi

What I want is to check whether, inside the 'sin' file, a specific column of a line, for example $3, is y or n.
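A hedged sketch of one way to test a specific column with awk ($sin as in the question; the y is assumed to sit, possibly quoted, in field 3):

    # exit status 0 if the 3rd field of any line contains y, 1 otherwise
    if awk '$3 ~ /y/ { found = 1 } END { exit !found }' "$sin"; then
        echo "continue"
    else
        echo "stop"
    fi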
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150818/" ] }
254,449
What is the difference between running startx and starting your display manager with sudo service (display_manager) start? Two different things happen, so I am curious to know the difference behind the scenes.
The graphical user interface on traditional Unix systems, as well as most modern Unix systems other than Mac OS X, is built on the X Window System . One component, the X server, communicates with the hardware (display and input peripherals) and offers basic primitives to display windows and route user input. Other programs, said to be X clients, display windows and listen to user input by communicating with the X server. In order to talk with the hardware, an X server may require special privileges; for example, on some systems, the X server is setuid root. Recent systems try to avoid having the X server run as root in order to improve security. Depending on the system, running an X server on the system console may be restricted to certain users, or to the user with physical access to the console. The X server alone does nothing but display a hard-coded background pattern and a mouse cursor. In order to do anything useful, some clients need to be started, typically including a window manager . The normal way to run a GUI session is to run a session manager program, which takes care of launching all desired clients (window manager, desktop widgets, clipboard manager, restored programs from the user's previous session, etc.). The session manager needs to be started after the X server since it will interact with it. Each desktop environment comes with its own session manager; just about any window manager can also be used as a session manager, and in a pinch a terminal running a shell can be seen as a minimalistic session manager — what matters is that the user has some way to launch the programs they want to run. There are two traditional ways to launch a GUI session: If a user is already logged in, but they don't have a GUI yet, they can run the xinit command. This command starts an X server, then starts a session manager, and waits for the session manager to exit; then it kills the X server. This way, the client side of the session and the X server have the same lifetime. The startx program is a small wrapper around xinit . It's also possible to start a GUI before any user is logged in. In that case, the only client is a display manager , which provides a login prompt. Once a user has logged in, the display manager invokes their session manager. When the session manager exits, the display manager ensures that no more programs are running in that session, and shows a new login prompt. Another way to see this is that in order to have a graphical login session, there needs to be a graphical interface and the user needs to log in. These two steps can be performed in either order: login then start the GUI ( startx method), or start the GUI then login (display manager method). Other setups are uncommon but possible. For example, in a kiosk setup, the system startup scripts start an X server and a single full-screen client. In an autologin setup, the display manager runs a session manager for the default user at boot time.
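For concreteness, a minimal sketch of the startx path; the window manager named here is only an example, substitute your own:

    # ~/.xinitrc, run by startx/xinit as the client side of the session
    xsetroot -solid grey &    # a trivial client: set a background
    exec openbox-session      # exec ties the session's lifetime to this process

When openbox-session exits, xinit kills the X server, matching the lifetime rule described above.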
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
254,462
In Gnome, while I can connect my bluetooth headphones in HFP/HSP mode, I can't get them to connect in A2DP mode, which is what I need. Surprisingly, I can connect in A2DP mode in KDE in just one click. I am using Arch Linux with Gnome 3.18. Update:

    $ pactl list short | grep bluetooth
    8       module-bluetooth-policy
    9       module-bluetooth-discover
Same issue here, Ubuntu 15.10, Gnome Shell 3.18.2. Unfortunately your workaround didn't work for me; I found the workaround/fix here, based on the same Arch wiki page you provided. Here is what I did:

(1) run the following command in a terminal:

    sudo setfacl -m u:gdm:r /usr/bin/pulseaudio

(2) reboot Ubuntu, or restart pulseaudio by running the following command in a terminal:

    sudo pkill pulseaudio
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
254,494
I've noticed that { can be used in brace expansion: echo {1..8} or in command grouping: {ls;echo hi} How does bash know the difference?
A simplified reason is the existence of one character: space. Brace expansions do not process (un-quoted) spaces. A {...} list needs (un-quoted) spaces.

The more detailed answer is how the shell parses a command line. The first step in parsing (understanding) a command line is to divide it into parts. These parts (usually called words or tokens) result from dividing a command line at each meta-character (from the link):

Splits the command into tokens that are separated by the fixed set of meta-characters: SPACE, TAB, NEWLINE, ;, (, ), <, >, |, and &. Types of tokens include words, keywords, I/O redirectors, and semicolons.

Meta-characters: space, tab, enter, ;, <, >, | and &.

After splitting, words may be of a type (as understood by the shell):

Command pre-assignments: LC=ALL ...
Command: LC=ALL echo
Arguments: LC=ALL echo "hello"
Redirection: LC=ALL echo "hello" >&2

Brace expansion

Only if a "brace string" (without spaces or meta-characters) is a single word (as described above) and is not quoted, is it a candidate for "brace expansion". More checks are performed on its internal structure later. Thus, {ls,-l} qualifies as "brace expansion" and becomes ls -l, either as a first word or as an argument (in bash; zsh is different).

    $ {ls,-l}        ### executes `ls -l`
    $ echo {ls,-l}   ### prints `ls -l`

But this will not: {ls ,-l}. Bash will split on the space and parse the line as two words, {ls and ,-l}, which triggers a command not found (the argument ,-l} is lost):

    $ {ls ,-l}
    bash: {ls: command not found

Your line {ls;echo hi} will not become a "brace expansion" because of the two meta-characters ; and space. It will be broken into three parts: {ls, then a new command: echo hi}. Understand that the ; triggers the start of a new command. The command {ls will not be found, and the next command will print hi}:

    $ {ls;echo hi}
    bash: {ls: command not found
    hi}

If it is placed after some other command, it will anyway start a new command after the ;:

    $ echo {ls;echo hi}
    {ls
    hi}

List

One of the "compound commands" is a "brace list" (my words): { list; }. As you can see, it is defined with spaces and a closing ;. The spaces and the ; are needed because both { and } are "reserved words", and therefore, to be recognized as words, they must be surrounded by meta-characters (almost always: space). As described in point 2 of the linked page:

Checks the first token of each command to see if it is ...., {, or (, then the command is actually a compound command.

Your example {ls;echo hi} is not a list. It needs a closing ; and at least one space after {. The last } is defined by the closing ;. This is a list: { ls;echo hi; }. And { ls;echo hi;} is also valid (less commonly used, but valid) (thanks @choroba for the help).

    $ { ls;echo hi; }
    A-list-of-files
    hi

But as an argument (the shell knows the difference) to a command, it triggers an error:

    $ echo { ls;echo hi; }
    bash: syntax error near unexpected token `}'

But be careful about what you believe the shell is parsing:

    $ echo { ls;echo hi;
    { ls
    hi
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/254494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106512/" ] }
254,525
I have files named in the style 02.04.11 DJ Kilbot.mp3 (for several dates) and I want to reformat the name in this manner: DJ Kilbot 2011-02-04.mp3 . In other words, the current format is MM.DD.YY DJ-NAME.mp3 and I want to change it to DJ-NAME YYYY-MM-DD.mp3 . What's the easiest way to do this, for several year's worth of files?
cd to the directory, then run the following (using perl-rename). This is a "dry-run" first:

    rename -n 's/^([0-9]{2})\.([0-9]{2})\.([0-9]{2}) (.*)\.mp3$/$4 20$3-$1-$2.mp3/' *
    02.04.11 DJ Kilbot.mp3 -> DJ Kilbot 2011-02-04.mp3

If you are happy with the output, then run it for real:

    rename 's/^([0-9]{2})\.([0-9]{2})\.([0-9]{2}) (.*)\.mp3$/$4 20$3-$1-$2.mp3/' *

Explanation

rename -n: run a test "dry-run".
's/FOO/BAR/': substitute the regex FOO and replace with BAR.
^([0-9]{2})\.([0-9]{2})\.([0-9]{2}) (.*)\.mp3$: the regex to capture. Match the start of the string ^, then three lots of [0-9]{2} (i.e. two consecutive digits each) separated by a dot (\. when escaped). Then a space and (.*)\.mp3$. Parens () capture their contents for use in the replacement.
$4 20$3-$1-$2.mp3: replace with the DJ name, i.e. the fourth capturing group ($4), or (.*) above, then the rest of the string as specified (the third, first and second groups).
*: act on all files in the directory.

Simplify

This regex has a bit of error checking built in. If you are sure that all files are named consistently, you can simplify it slightly to the following:

    rename 's/^(..)\.(..)\.(..) (.*)\.mp3$/$4 20$3-$1-$2.mp3/' *
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
254,599
All the results of my searches end up having something to do with hostname or uname -n . I looked up the manual for both, looking for sneaky options, but no luck. I am trying to find an equivalent of OSX's scutil --get ComputerName on Linux systems. On Mac OS X, the computer name is used as a human-readable identifier for the computer; it's shown in various management screens ( e.g. on inventory management, Bonjour-based remote access, ...) and serves as the default hostname (after filtering to handle spaces etc.).
The closest equivalent to a human-readable (and human-chosen) name for any computer running Linux is the default hostname stored in /etc/hostname. On some (not all) Linux distributions, this name is entered during installation as the computer's name (but with network hostname constraints, unlike macOS's computer name). This can be namespaced, i.e. each UTS namespace can have a different hostname. Systems running systemd distinguish three different hostnames, including a "pretty" human-readable name which is supposed to be descriptive in a similar fashion to macOS's computer name; this can be set and retrieved using hostnamectl's --pretty option. The other two hostnames are the static hostname, which is the default hostname described above, and the transient hostname, which reflects the current network configuration. Systemd also supports a chassis type (e.g. "tablet") and an icon for the host; see systemd-hostnamed.service.
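For example, on a systemd system (the name here is arbitrary):

    $ sudo hostnamectl set-hostname --pretty "Jay's Workstation"
    $ hostnamectl --pretty
    Jay's Workstation

Unlike the static hostname, the pretty name may contain spaces and punctuation.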
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/254599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45354/" ] }
254,625
I have the following in a file:

    description: '''
        This rule forbids throwing string literals or interpolations. While
        JavaScript (and CoffeeScript by extension) allow any expression to
        be thrown, it is best to only throw <a href="https://developer.mozilla.org
        /en/JavaScript/Reference/Global_Objects/Error"> Error</a> objects,
        because they contain valuable debugging information like the stack
        trace. Because of JavaScript's dynamic nature, CoffeeLint cannot
        ensure you are always throwing instances of <tt>Error</tt>. It will
        only catch the simple but real case of throwing literal strings.
        <pre>
        <code># CoffeeLint will catch this:
        throw "i made a boo boo"

        # ... but not this:
        throw getSomeString()
        </code>
        </pre>
        This rule is enabled by default.
        '''

along with several other things in this file. I extract this part in my shell script via

    sed -n "/'''/,/'''/p" $1

(where $1 is the file). This gives me a variable with the content as a one-liner:

    description: ''' This rule forbids throwing string literals or interpolations. While JavaScript (and CoffeeScript by extension) allow any expression to be thrown, it is best to only throw <a href="https://developer.mozilla.org /en/JavaScript/Reference/Global_Objects/Error"> Error</a> objects, because they contain valuable debugging information like the stack trace. Because of JavaScript's dynamic nature, CoffeeLint cannot ensure you are always throwing instances of <tt>Error</tt>. It will only catch the simple but real case of throwing literal strings. <pre> <code># CoffeeLint will catch this: throw "i made a boo boo" # ... but not this: throw getSomeString() </code> </pre> This rule is enabled by default. '''

How can I now extract the part between the '''? Or is there even a better way to retrieve it from the multiline file? I'm on Mac El Capitan 10.11.2 and GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15).
perl -l -0777 -ne "print for /'''(.*?)'''/gs" file would extract (and print followed by a newline) the part between each pair of '''. Beware that perl slurps the whole file in memory before starting processing it so that solution may not be appropriate for very large files.
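To capture it into a shell variable, as the question does with sed ($1 being the file as before), a short sketch:

    description=$(perl -l -0777 -ne "print for /'''(.*?)'''/gs" "$1")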
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151021/" ] }
254,644
Suppose I have a file called file:

    $ cat file
    Hello
    Welcome to
    Unix

I want to add "and Linux" at the end of the last line of the file. If I do

    echo " and Linux" >> file

the text will be added on a new line. But I want the last line to read

    Unix and Linux

So, to work around this, I want to remove the newline character at the end of the file. Therefore: how do I remove the newline character at the end of a file, in order to add text to its last line?
If all you want to do is add text to the last line, it's very easy with sed. Replace $ (the pattern matching at the end of the line) by the text you want to add, only on lines in the range $ (which means the last line).

    sed '$ s/$/ and Linux/' <file >file.new &&
    mv file.new file

which on Linux can be shortened to

    sed -i '$ s/$/ and Linux/' file

If you want to remove the last byte in a file, Linux (more precisely GNU coreutils) offers the truncate command, which makes this very easy.

    truncate -s -1 file

A POSIX way to do it is with dd. First determine the file length, then truncate it to one byte less.

    length=$(wc -c <file)
    dd if=/dev/null of=file obs="$((length-1))" seek=1

Note that both of these unconditionally truncate the last byte of the file. You may want to check that it's a newline first:

    length=$(wc -c <file)
    if [ "$length" -ne 0 ] && [ -z "$(tail -c -1 <file)" ]; then
      # The file ends with a newline or null
      dd if=/dev/null of=file obs="$((length-1))" seek=1
    fi
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/254644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
254,653
What is the default for count in the dd command if it is not specified?

    dd if=/dev/mem bs=1k skip=768

instead of the full form like

    dd if=/dev/mem bs=1k skip=768 count=50

I did not find an answer with Google.
The default is no limit: dd keeps copying until it reaches the end of its input (or until the output can no longer be written, e.g. the device fills up).
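A quick illustration: piping five bytes through dd with bs=2 and no count= copies everything until EOF, two full records plus one partial (the trailing statistics line varies by implementation):

    $ printf hello | dd bs=2 of=/dev/null
    2+1 records in
    2+1 records out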
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144104/" ] }
254,657
I remembered reading one question: how would you back up the MBR of a disk. Two of the choices are

    dd if=/dev/sda of=/dev/sdb bs=512 count=1
    dd if=/dev/sda of=/dev/sdb bs=440 count=1

and the correct answer is

    dd if=/dev/sda of=/dev/sdb bs=440 count=1

I am confused. Is the MBR size 440B or 512B?
The MBR IS 512 bytes, so the first example is how you would back it up. Only the first 440 bytes are bootstrap code; the area after that holds the disk signature and, from byte 446, the four 16-byte partition table entries, ending with the 0x55AA boot signature in the last two bytes. So, if you wanted to back up the MBR WITHOUT the partition table, you could use the second example (why you'd want that, I don't know).
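For illustration, the more common form backs up to a file rather than straight onto another disk (device names are examples):

    # full MBR: bootstrap + disk signature + partition table + 0x55AA
    dd if=/dev/sda of=mbr.img bs=512 count=1
    # restore only the bootstrap code, keeping the current partition table
    dd if=mbr.img of=/dev/sda bs=440 count=1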
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144104/" ] }
254,670
I mean: if I see core.<pid> in the filesystem, does that mean core file generation has finished and I can use the file on my own? My question was answered, but I decided to explain it a little. I supposed that the core is first generated in some hidden file, e.g. .core.<pid>~, and only after generation finishes is it moved (renamed) to the target path. In that case the operation could be fast and atomic.
I wouldn't bet on it, especially on a busy multithreaded system, or if the dump location is on a network share (memorably, a professor would generate 8 GB core files that had to be spooled over 10Mbit ethernet via NFS). File system atomicity usually requires locking, or the write-to-a-temporary-file-and-then-rename(2) trick. Some delving around in fs/coredump.c for the Linux 4.3.3 kernel indicates no such locking or rename tricks, as the kernel figures out the filename to use (with an unlinking race condition!) and then spools out the file:

    file_start_write(cprm.file);
    core_dumped = binfmt->core_dump(&cprm);
    file_end_write(cprm.file);

Since there's probably no giant kernel lock to prevent other user-land things from running while the above is doing its business (this could be tested by slowing down the generation of a large core file, then seeing how the system behaves), I don't see anything atomic about this process.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17424/" ] }
254,693
I have a data file ($file1) which contains two lines of data per individual. I need to intersperse a third line of data from another data file ($file2). So my input looks like:

    >cat $file1
    bob 1 1 0
    bob 1 0 1
    alan 0 0 1
    alan 0 1 1
    >cat $file2
    bob a a b
    alan a c a

So the desired result would be:

    >cat $file3
    bob 1 1 0
    bob 1 0 1
    bob a a b
    alan 0 0 1
    alan 0 1 1
    alan a c a

If I just needed to intersperse every other line I would have used paste like so:

    >paste '-d\n' $file1 $file2

What would be the best tool to use to achieve this? I am using zsh.
Just:

    paste -d '\n' -- - - "$file2" < "$file1"

(provided $file2 is not -). Or with GNU sed, provided $file2 (the variable content, i.e. the file name) doesn't contain newline characters and doesn't start with a space or tab character:

    sed "2~2R$file2" "$file1"

With awk (provided $file1 doesn't contain = characters, or at least that if it does, the part before it is not an acceptable awk variable name):

    export file2
    awk '{print}
         NR % 2 == 0 {if ((getline l < ENVIRON["file2"]) > 0) print l}
        ' "$file1"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93135/" ] }
254,698
I am wondering if there is a way to simply output to the terminal the value of some global/standard definitions of C/GCC, e.g. using the echo command, without writing C code and using printf? I mean things like __GNUC__, __UINT64_MAX__, _POSIX_C_SOURCE ...
You can view the value of any defined constant as follows:

    echo __GNUC__ | gcc -E -

If you need to add an include file:

    echo O_APPEND | gcc -include fcntl.h -E -
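A hedged refinement: the -E output starts with linemarker lines; adding -P suppresses them, leaving just the expanded value (which of course depends on your compiler and headers):

    $ echo __GNUC__ | gcc -E -P -
    5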
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122146/" ] }
254,726
Following up on https://unix.stackexchange.com/a/254637/18887 , I have the following string in a bash variable:

    This rule forbids throwing string literals or interpolations. While JavaScript (and CoffeeScript by extension) allow any expression to be thrown, it is best to only throw <a href=\"https://developer.mozilla.org /en/JavaScript/Reference/Global_Objects/Error\"> Error</a> objects, because they contain valuable debugging information like the stack trace. Because of JavaScript's dynamic nature, CoffeeLint cannot ensure you are always throwing instances of <tt>Error</tt>. It will only catch the simple but real case of throwing literal strings. <pre> <code># CoffeeLint will catch this: throw \"i made a boo boo\" # ... but not this: throw getSomeString() </code> </pre> This rule is enabled by default.

I'd like to replace a placeholder in a file with this variable. Currently I do this via

    perl -i -pe "s/_PLACEHOLDER_/$text/g" $file

but with the structure of the text, with " etc., I get:

    Backslash found where operator expected at -e line 6, near "Error\"
    (Might be a runaway multi-line // string starting on line 1)
    syntax error at -e line 6, near "Error\"
    Can't find string terminator '"' anywhere before EOF at -e line 6.

How can I replace the text in the file?
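A hedged sketch of one common way around this: pass the text through the environment, so the shell never interpolates it into the perl program and perl treats it as data ($text and $file as in the question):

    export text
    perl -i -pe 's/_PLACEHOLDER_/$ENV{text}/g' -- "$file"

Because the program is single-quoted, the quotes and backslashes in the payload never reach perl's parser.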
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151021/" ] }
254,811
Alternate title: What is a "collating sequence" or "collating element" in a POSIX-compliant regex? I found the exact technical definition in Section 9.3.5 of the POSIX specs, as item #4 in the list, but it's not really clear to me. I googled around on the web for examples and explanations and came up not completely empty-handed, but definitely not enlightened. The only thing I've sort of gotten is that in certain circumstances, you can make your regex treat multiple characters as though they were a single character for purposes of length comparison and determining what the "longest match" is (since regexes are greedy and return the longest possible match). Is that all, though? I'm having trouble seeing a use for it, but I suspect my understanding is incomplete. What actually is "collating" for a regex? And how does [[.ch.]], the example in the POSIX specs, relate to this?
Collating elements are usually referenced in the context of sorting. In many languages, collation (sorting as in a dictionary) is not done only per-character. For instance, in Czech, ch doesn't sort between cg and ci like it would in English, but is considered as a whole for sorting. It is a collating element (we can't refer to a character here; characters are a subset of collating elements) that sorts between h and i.

Now you may ask: what has that to do with regular expressions? Why would I want to refer to a collating element in a bracket expression? Well, inside bracket expressions, one does use order. For instance in [c-j], you want the characters in between c and j. Well, do you? You'd rather want the collating elements there. [h-i] in a Czech locale matches ch:

    $ echo cho | LC_ALL=cs_CZ.UTF-8 grep '^[h-i]o'
    cho

So, if you're able to list a range of collating elements in a bracket expression, then you'd expect to be able to list them individually as well. [a-cch] would match the collating elements in between a and c, plus the c and h characters. To have a-c and the ch collating element, we need a new syntax:

    $ echo cho | LC_ALL=cs_CZ.UTF-8 grep '^[a-c[.ch.]]o'
    cho

(the ones in between a and c, and the ch one).

Now, the world is not perfect yet and probably never will be. The example above was on a GNU system and worked. Another example of a collating element could be e with a combining acute accent in UTF-8 ($'e\u0301', rendered like $'\u00e9', as é). é and é are the same character except that one is represented with one character and the other one with two.

    $ echo $'e\u301t\ue9' | grep '^[d-f]t'

will work properly on some systems but not others (not GNU ones, for instance). And it's unclear whether $'[[.\ue9.]]' should match only $'\ue9' or both $'\ue9' and $'e\u301'. Not to mention non-alphabetic scripts, or scripts with different, regional sorting orders, and things like ﬃ (ffi in one character), which become tricky to handle with such a simple API.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
254,820
xdotool lets you search for windows using its search subcommand. I need to locate a window that has class 'gvim' and a title containing the word 'TODO'. How do I do this? What I've tried: you can do xdotool search --name --class, but it will only accept one pattern for both name and title. xdotool supports command chaining, but I could not find a way to chain two search calls; the second one simply overrides the first.
My xdotool help informs me that your two switches are the same (xdotool version 3.20150503.1):

    --name      check regexp_pattern against the window name
    --title     DEPRECATED. Same as --name.

and as such combining them doesn't do anything. My xdotool does the same as yours, replacing the window stack, so I did it with a shell script. A shell script doing what you want is given below:

    pids=$(xdotool search --class "gvim")
    for pid in $pids; do
        name=$(xdotool getwindowname $pid)
        if [[ $name == *"TODO"* ]]; then
            echo "$pid"   # placeholder: do what you want; $pid is the ID you
                          # sought, matching both class gvim and TODO in title
        fi
    done

The asterisks in the if statement are there to do a substring match for TODO, so that it can occur anywhere in the title.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74069/" ] }
254,827
I have a not really permissive program to work with. Unfortunately, this program doesn't allow either of the following:

    command -a -b -c
    command -abc

So I always have to type

    command -a && command -b && command -c

I am pretty sure there is a more efficient way to type that, but I can't figure it out.
You could do:

    echo -a -b -c | xargs -n 1 command

Or:

    xargs -n1 command <<< '-a -b -c'

with some shells. But beware that it affects the stdin of command. With zsh:

    autoload zargs   # best in ~/.zshrc
    zargs -n 1 -- -a -b -c -- command

Or simply:

    for o (-a -b -c) command $o

None of those would abort if any of the command invocations failed (except if one fails with the 255 exit status). For that, you'd want:

    for o (-a -b -c) command $o || break

That still fails to set $? to the exit status of the failed command invocation. You could change that to:

    (){for o (-a -b -c) command $o || return}

But by that time, it's already longer than:

    command -a && command -b && command -c
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137423/" ] }
254,866
I have a challenge to take an output file ("inventory.list") in this format:

    hostname1.env1.domain | abc | environment1
    hostname2.env1.domain | abc | environment1
    hostname3.env2.domain | abc | environment2
    hostname4.env2.domain | abc | environment2
    hostname5.env1.domain | def | environment1
    hostname6.env2.domain | def | environment2
    (6 rows)

and write it out into another file in a different format:

    [abc.environment1]
    hostname1.env1.domain
    hostname2.env1.domain
    [abc.environment2]
    hostname3.env2.domain
    hostname4.env2.domain
    [def.environment1]
    hostname5.env1.domain
    [def.environment2]
    hostname6.env2.domain

abc and def are roles assigned to servers, and there can be multiple servers for each role, as well as same-named roles in different environments. I have to break out each hostname into unique groups of [role.environment] and, additionally, completely delete the final line of the file, which is a row count (this file is the output of an sql query). I can read the file, strip out the pipes and whitespace and assign/output the role/environment groupings, no problem:

    #! /bin/bash
    while IFS='| ' read -r certname role env; do
        printf '%s\n' "[""$role"".""$env""]"
    done < "/tmp/inventory.list"

...which neatly gives me the role/environment group names:

    [abc.environment1]
    [abc.environment2]
    [def.environment1]
    [def.environment2]

but I cannot figure out how to print out the hostnames linked to each role/environment group underneath each group name, nor can I work out how to get my script to ignore the last row-count line. I'm guessing I have to further assign my role and environment fields (second and third fields) to their own array, to then refer to it to grab out the hostnames linked to each unique grouping, but I have no idea how to achieve this. Can anyone advise, please?
Use an associative array to store the certificate names per role and environment.

    #! /bin/bash
    unset -v envs
    declare -A envs
    while IFS='| ' read -r certname role env; do
        envs["$role.$env"]+="$certname"$'\n'
    done < /tmp/inventory.list
    for e in "${!envs[@]}" ; do
        printf '%s\n' "[$e]" "${envs[$e]}"
    done

To sort the sections, you can print the keys, sort them, and then read them back and output the associated values:

    for e in "${!envs[@]}" ; do
        printf '%s\n' "$e"
    done | sort | while read -r e ; do
        printf '%s\n' "[$e]" "${envs[$e]}"
    done
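The question also wants the trailing "(6 rows)" count line dropped; a hedged tweak to the read loop above, skipping any line that doesn't yield three fields:

    while IFS='| ' read -r certname role env; do
        [ -n "$env" ] || continue   # skips the "(6 rows)" footer and blank lines
        envs["$role.$env"]+="$certname"$'\n'
    done < /tmp/inventory.list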
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151159/" ] }
254,887
The ABI armhf stands for ARM Hard-Float, but what is el an abbreviation for in armel?
el stands for little-endian; see the email explaining (briefly) the decision; the follow-up emails contain more information. Endianness isn't the most distinguishing feature of armel , but that's the name that was chosen... Further evidence is in Wookey's Debconf7 talk introducing the armel architecture; in the video at 26:47 he explicitly says " armel , basically little-endian ARM". Other architectures using el in the same way include mipsel and ppc64el . As pointed out by Kurt , el is the initials of "little-endian" read in little-endian order.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36718/" ] }
254,906
If I have a string such as /home/user/a/directory/myapp.app or just /home/user/myapp.app, how can I split this so that I end up with two variables (the path and the application), e.g.

    path="/home/user/"
    appl="myapp.app"

I've seen numerous examples of splitting strings, but how can I get just the last part, and combine all the rest?
The commands basename and dirname can be used for that, for example:

    $ basename /home/user/a/directory/myapp.app
    myapp.app
    $ dirname /home/user/a/directory/myapp.app
    /home/user/a/directory

For more information, do not hesitate to run man basename and man dirname.
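If you'd rather avoid the external commands, a parameter-expansion sketch (works in bash and other POSIX shells; assumes the string contains at least one /):

    full=/home/user/a/directory/myapp.app
    appl=${full##*/}   # strip everything up to the last /  -> myapp.app
    path=${full%/*}    # strip everything after the last /  -> /home/user/a/directory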
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
254,908
I know the -v switch can be passed to awk on the command line to set the value of a variable. Is there any way to set values for awk associative array elements on the command line? Something like:

    awk -v myarray[index]=value -v myarray["index two"]="value two" 'BEGIN {...}'
No. It is not possible to assign non-scalar variables like this on the command line. But it is not too hard to work around. If you can accept a fixed array name:

    awk -F= '
        FNR==NR { a[$1]=$2; next }
        { print a[$1] }
    ' <(echo $'index=value\nindex two=value two') <(echo index two)

If you have a file containing the awk syntax for array definitions, you can include it:

    $ cat <<EOF >file.awk
    ar["index"] = "value"
    ar["index two"] = "value two"
    EOF
    $ gawk '@include "file.awk"
            BEGIN{for (i in ar) print i, ar[i]}'

or

    $ gawk --include file.awk 'BEGIN{for (i in ar) print i, ar[i]}'

If you really want, you can run gawk with -E rather than -f, which leaves you with an uninterpreted command line. You can then process those command line options yourself (if one looks like a variable assignment, assign the variable). Should you want to go that route, it might be helpful to look at ngetopt.awk.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
254,956
What is the difference between Docker, LXD, and LXC? Do they offer the same services, or different ones?
No, LXC, Docker, and LXD are not quite the same. In short:

LXC

LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host). https://wiki.archlinux.org/index.php/Linux_Containers — low level ... https://linuxcontainers.org/

Docker, by Docker, Inc

a container system making use of LXC containers so you can: Build, Ship, and Run Any App, Anywhere. http://www.docker.com

LXD, by Canonical, Ltd

a container system making use of LXC containers so that you can: run LXD on Ubuntu and spin up instances of RHEL, CentOS, SUSE, Debian, Ubuntu and just about any other Linux too, instantly, ... http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-replacement-a-docker-enhancement/

Docker vs LXD

Docker specializes in deploying apps; LXD specializes in deploying (Linux) virtual machines. Source: http://linux.softpedia.com/blog/infographic-lxd-machine-containers-from-ubuntu-linux-492602.shtml (originally: https://insights.ubuntu.com/2015/09/23/infographic-lxd-machine-containers-from-ubuntu/ )

Minor technical note: installing LXD includes a command line program coincidentally named lxc. http://blog.scottlowe.org/2015/05/06/quick-intro-lxd/
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/254956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148778/" ] }
254,962
The special variable $RANDOM has a new value every time it's accessed. In this respect, it is reminiscent of the "generator" objects found in some languages. Is there a way to implement something like this in zsh? I tried to do this with named pipes, but I did not find a way to extract items from the fifo in a controlled manner without killing the "generator" process. For example:

    % mkfifo /tmp/ints
    % (index=0
       while ( true ) do
           echo $index
           index=$(( index + 1 ))
       done) > /tmp/ints &
    [1] 16309
    % head -1 /tmp/ints
    0
    [1]  + broken pipe  ( index=0 ; while ( true; ); do; echo $index; index=$(( ...

Is there some other way to implement such a generator-type object in zsh?

EDIT: This does not work:

    #!/usr/bin/env zsh
    FIFO=/tmp/fifo-$$
    mkfifo $FIFO
    INDEX=0
    while true; do echo $(( ++INDEX )) > $FIFO; done &
    cat $FIFO

If I put the above in a script and run it, the output is rarely the expected single line

    1

Rather, it usually consists of several integers, e.g.

    1
    2
    3
    4
    5

The number of lines produced varies from one run to the next.

EDIT2: As jimmij pointed out, changing echo to /bin/echo takes care of the problem.
ksh93 has disciplines which are typically used for this kind of thing. With zsh, you could hijack the dynamic named directory feature.

Define for instance:

    zsh_directory_name() {
      case $1 in
        (n)
          case $2 in
            (incr) reply=($((++incr)))
          esac
      esac
    }

And then you can use ~[incr] to get an incremented $incr each time:

    $ echo ~[incr]
    1
    $ echo ~[incr] ~[incr]
    2 3

Your approach fails because in head -1 /tmp/ints, head opens the fifo, reads a full buffer, prints one line, and then closes it. Once closed, the writing end sees a broken pipe.

Instead, you could either do:

    $ fifo=~/.generators/incr
    $ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
    $ seq infinity > $fifo &
    $ exec 3< $fifo
    $ IFS= read -rneu3
    1
    $ IFS= read -rneu3
    2

There, we leave the reading end open on fd 3, and read reads one byte at a time, not a full buffer, to be sure to read exactly one line (up to the newline character).

Or you could do:

    $ fifo=~/.generators/incr
    $ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
    $ while true; do echo $((++incr)) > $fifo; done &
    $ cat $fifo
    1
    $ cat $fifo
    2

That time, we instantiate a pipe for every value. That allows returning data containing any arbitrary number of lines.

However, in that case, as soon as cat opens the fifo, the echo and the loop are unblocked, so more echos could be run by the time cat reads the content and closes the pipe (causing the next echo to instantiate a new pipe). A work-around could be to add some delay, for instance by running an external echo as suggested by @jimmij, or to add some sleep, but that would still not be very robust. Or you could recreate the named pipe after each echo:

    while
      mkfifo $fifo &&
      echo $((++incr)) > $fifo &&
      rm -f $fifo
    do
      : nothing
    done &

That still leaves short windows where the pipe doesn't exist (between the unlink() done by rm and the mknod() done by mkfifo), causing cat to fail; very short windows where the pipe has been instantiated but no process will ever write to it again (between the write() and the close() done by echo), causing cat to return nothing; and short windows where the named pipe still exists but nothing will ever open it for writing (between the close() done by echo and the unlink() done by rm), where cat will hang.

You could remove some of those windows by doing it like:

    fifo=~/.generators/incr
    (
      umask 077
      mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo &&
      while
        mkfifo $fifo.new &&
        {
          mv $fifo.new $fifo &&
          echo $((++incr))
        } > $fifo
      do
        : nothing
      done
    ) &

That way, the only problem is if you run several cats at the same time (they all open the fifo before our writing loop is ready to open it for writing), in which case they will share the echo output.

I would also advise against creating fixed-name, world-readable fifos (or any file, for that matter) in world-writable directories like /tmp, unless it's a service to be exposed to all users on the system.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/254962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
254,985
I'm using LXDE. Is there a utility, menu item or other GUI-ish way to rotate the display on my monitor (e.g. by 90 degrees)? lxrandr doesn't seem to offer me that option. My system is Debian Stretch 64bit, it's some kind of Intel Graphics on-board chip, and the monitor is a Dell U2312. It works on Windows.
You can use arandr, a GUI front end for xrandr. It is also capable of rotating the screen, and it is in the Debian repositories. Arandr's web page also mentions alternative GUI tools; no idea how up to date that list is.
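If you'd rather script the rotation than click through a GUI, the underlying xrandr call looks like this (the output name varies per machine, so read it off xrandr's listing first):

    $ xrandr                                  # lists outputs, e.g. HDMI-1, VGA-1
    $ xrandr --output HDMI-1 --rotate left    # also: right, inverted, normal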
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/254985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
255,003
I am writing a script to automate the configuration of software. I want to begin by checking whether the script needs to install the software first, then configure it. If I were to check

    $ software --version

and I get bash: command: command not found, then I know that I will want to install it first. Does bash: command: command not found return false?

Edit: for any answers, could the answer be explained?
Yes. A command that isn't found makes the shell return status 127, and any non-zero status counts as "false" in shell logic:

    $ spamegg
    spamegg: command not found
    $ echo $?
    127

You could just do:

    if software --version &>/dev/null; then
        : ## True, do something
    else
        : ## False, do something
    fi
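For an install check specifically, testing for the command's existence with the shell builtin command -v is a common, side-effect-free alternative (sketch):

    if command -v software >/dev/null 2>&1; then
        echo "already installed"
    else
        echo "installing..."   # e.g. invoke your package manager here
    fi

Unlike running software --version, this never executes the program itself.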
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255003", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110689/" ] }
255,017
I have a number of header.php files that have a malicious script tag contained within them (don't ask). I've written a not-so-elegant shell script to replace these with blank space. I had initially tried to subtract the payload from the header.php, but this didn't seem possible as the file was not a sorted list. Below is my code:

    echo 'Find all header.php files'
    find -name header.php -print0 > tempheader
    echo 'Remove malware script from headers'
    cat tempheader | xargs -0 sed -i 's/\<script\>var a=''; setTimeout(10); var default_keyword = encodeURIComponent(document.title); var se_referrer = encodeURIComponent(document.referrer); var host = encodeURIComponent(window.location.host); var base = "http:\/\/someplacedodgy.kr\/js\/jquery.min.php"; var n_url = base + "?default_keyword=" + default_keyword + "\&se_referrer=" + se_referrer + "\&source=" + host; var f_url = base + "?c_utt=snt2014\&c_utm=" + encodeURIComponent(n_url); if (default_keyword !== null \&\& default_keyword !== '' \&\& se_referrer !== null \&\& se_referrer !== ''){document.write('\<script type="text\/javascript" src="' + f_url + '"\>' + '\<' + '\/script\>');}\<\/script\>/ /g'

The issue is that this code fails to execute, with the error:

    sed: -e expression #1, char 578: unterminated `s' command

My assumption is that there are unescaped characters causing this issue; I have tried escaping all <> and {}'s, but this didn't seem to help (note the <> are still escaped above). If there is a way to feed sed a file containing the string, like

    sed -i 's/$payload/ /g'

I have not been able to work that out yet.
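A hedged sketch of one way out: let perl quote the payload as a literal string instead of fighting sed's delimiters and quoting. Here payload.txt is a hypothetical file holding the exact script tag, and the perl invocation slurps each header.php whole:

    find . -name header.php -print0 |
      xargs -0 perl -0777 -i -pe '
          BEGIN { local $/; open F, "<", "payload.txt" or die $!; $pat = <F>; chomp $pat }
          s/\Q$pat\E/ /g'

\Q...\E disables all regex metacharacters in $pat, so the slashes, quotes and braces in the payload need no escaping.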
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151298/" ] }
255,035
I have a huge csv file with 10 fields separated by commas. Unfortunately, some lines are malformed and do not contain exactly 10 commas (which causes some problems when I want to read the file into R). How can I filter out only the lines that contain exactly 10 commas?
Another POSIX one:

    awk -F , 'NF == 11' <file

If the line has 10 commas, then there will be 11 fields in this line. So we simply make awk use , as the field delimiter. If the number of fields is 11, the condition NF == 11 is true, and awk then performs the default action, print $0.
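For comparison, an equivalent filter with grep (POSIX ERE: ten "field plus comma" groups followed by a final comma-free field):

    grep -E '^([^,]*,){10}[^,]*$' file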
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/255035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31337/" ] }
255,064
I have a concern whereby, in a script, I will be calling parallel invocations of a function to run in the background and then issuing a wait:

    a () {
        sleep $1
    }
    a 10 &
    PIDs="$PIDs $!"
    a 50000 &
    PIDs="$PIDs $!"
    wait $PIDs

The concern is that the first function call takes 10 seconds (sleep 10), but the second takes nearly 14 hours (sleep 50000). I am concerned the PID from the first invocation will be recycled, and that 14 hours later, when the second invocation terminates, that PID will be in use by another process and will prevent the script from continuing. Or does wait remove the first PID from the list as soon as that invocation completes, and then simply wait for the second process to complete, rather than waiting for both of them at the end?
See for yourself that the wait builtin will not wait for a random process -- only children of the current shell.

    #!/bin/bash
    sleep 2 &
    P=$!
    echo sleeping 3 seconds...
    sleep 3
    echo waiting for $P ...
    wait $P
    R=$RANDOM
    echo waiting for $R ...
    wait $R
    echo done

    $ ./t.sh
    sleeping 3 seconds...
    waiting for 93208 ...   ## this returns immediately, since that PID is gone
    waiting for 31941 ...
    ./t.sh: line 10: wait: pid 31941 is not a child of this shell
    done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151321/" ] }
255,068
Each file has an inode. Is there an inode for every directory? If not, how does Linux manage directories?
Directories are special files, hence they have inodes. You can test that with ls:

    ls -li

or using stat:

    stat -c '%F : %i : %n' *

Example:

    % stat -c '%F : %i : %n' *
    regular file : 670637 : bar.csv
    regular file : 656301 : file.txt
    directory : 729178 : foobar

The number in the middle is the inode number.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144104/" ] }
255,079
We are using an am335x-based custom board; we have eMMC as the secondary storage device. To list partitions we are using the parted utility, but parted prints partition sizes in MB instead of MiB. Is there any way to ask parted to print partition sizes in MiB units instead of MB units? You can refer to the output below, which shows that parted prints sizes in kB or MB but not in KiB or MiB.

    # parted --list
    Model: MMC MMC04G (sd/mmc)
    Disk /dev/mmcblk0: 3842MB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    Number  Start   End     Size    File system  Name  Flags
     1      131kB   262kB   131kB
     2      262kB   393kB   131kB
     3      393kB   524kB   131kB
     4      524kB   1573kB  1049kB
     5      1573kB  2621kB  1049kB
     6      2621kB  3146kB  524kB
     7      3146kB  3277kB  131kB
     8      3277kB  8520kB  5243kB
     9      8520kB  13.8MB  5243kB
    10      13.8MB  19.0MB  5243kB
    11      19.0MB  19.3MB  262kB
    12      19.3MB  19.5MB  262kB
    13      19.5MB  19.8MB  262kB
    14      21.0MB  32.5MB  11.5MB
    15      33.6MB  243MB   210MB   ext4
    16      243MB   453MB   210MB   ext4
    17      453MB   558MB   105MB   ext4
    18      558MB   621MB   62.9MB  ext4
    19      621MB   830MB   210MB   ext4
    20      830MB   867MB   36.7MB  ext4
    21      867MB   3827MB  2960MB  ext4
Is there any way to ask parted to print partition sizes in MiB units instead of MB units? Yes:

    parted <<<'unit MiB print all'

or

    printf %s\\n 'unit MiB print list' | parted

or

    parted <<\IN
    unit MiB print list
    IN

Same in interactive mode: launch parted and then enter

    unit MiB print list
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60966/" ] }
255,107
I'd like to know how it's possible, using the lvm CLI tool, to clone an existing lvm thin volume, creating another thin volume with the same contents (but possibly a larger size) as the original one. Something like LXC does when you execute lxc-clone. The only information I could find about creating a thin volume with another as its origin was about creating snapshots.
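A hedged sketch of the usual route: a thin snapshot is itself a fully-fledged thin volume sharing blocks with its origin, which is effectively a clone. Volume and VG names here are examples; check your lvm2 version's man pages for the exact flags:

    lvcreate -s --name myclone vg/origin-thin   # thin snapshot: no size needed
    lvchange -ay -K vg/myclone                  # thin snapshots default to activation-skip
    lvextend -L 20G vg/myclone                  # optionally grow the clone's virtual size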
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9715/" ] }
255,108
This should be possible, given that the latest Qmmp player has a CD plugin with options regarding CD tracks info - both CD-Text and CDDB. But I see no menu option like 'Open CD' etc. Is it there? When trying to use 'Open files' to select Audio CD in the File browser, there is a message 'You can only select local files'.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
255,139
In a file that looks like:

    549.432086168
    0.0000E+00 9.6988e-04 2.0580E-02
    1.0000E+01 9.6988e-04 2.0580E-02
    2.0000E+01 9.6988e-04 2.0580E-02
    ...
    5.6000E+02 7.0997e-06 -3.7538E-04

delete the last line if the difference between the number in the first line and the first column of the last line is greater than 10. So, in this case the last line would get deleted, since 560 - 549.432086168 is greater than 10. Any suggestions as to how this can be done efficiently?
Typical job for awk : awk 'NR == 1 {first = $1}; $1 - first <= 10' < file Or to do it only for the last line: awk 'NR == 1 {first = last = $0; next} {print last; last = $0} END {if (NR && last - first <= 10) print last}' < file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8141/" ] }
255,140
Using just shell script, how do I search a text file and list all whole blocks of lines that contain some text (a simple grep criterion)? The text file has blocks of lines separated by "-----------------" (precisely, each block starts with "\n\n\n--------------------" ... about 50 "-" chars). A sample could be:

    -------------------------------
    Abracadabra, blablablalbalba
    blablablabla, banana
    -------------------------------
    Text, sample text, sample text, sample text
    Text, sample text, sample text, sample text
    Text, sample text, sample text, sample text
    Text, sample text, sample text, sample text
    -------------------------------
    Text, sample text, sample text, sample text
    banana. Sample text, sample text, sample text, sample text
    Text, sample text, sample text, sample text

Let's consider the word "banana" the search criterion. So, the blocks listed would be:

    -------------------------------
    Abracadabra, blablablalbalba
    blablablabla, banana
    -------------------------------
    Text, sample text, sample text, sample text
    banana. Sample text, sample text, sample text, sample text
    Text, sample text, sample text, sample text

EDIT: Testing answers to try awk, like:

    awk 'BEGIN{RS="\n------------"}/INFO/{print}'

where INFO is what was searched for. I cannot get the whole block. So, a real sample follows, and the result. A REAL SAMPLE (including the first 3 new lines):

    -------------------------------------------------
    Diretório separado do nome o arquivo: adis, IWZLM (/home/interx/adis/src/IWZLM.SRC)
    Gerando rotina em linguagem C:
    (yla5 adis IWZLM -if)
    .INFO =>Rotina BLOQUEADA (status 'M'): Geracao ignorada (use -is para ignorar checagem do status)
    [ OK-I ] IWZLM (adis) - Lista lay: Geracao ignorada do codigo em C.
    -------------------------------------------------
    Diretório separado do nome d arquivo: adis, ADISA (/home/interx/adis/src/ADISA.SRC)
    Gerando rotina em linguagem C:
    (yla5 adis ADISA -if)
    .ERRO: Falha inesperada
    Compilando o programa:
    (ycomp adis ADISA -exe adis/exe/ADISA.temp.exe )
    adis/exe/ADISA.temp.exe => adis/exe/ADISA
    [ OK ] ADISA (adis) - Menu A : Gerada e compilada com sucesso.
    -------------------------------------------------
    Diretório separado do nome o arquivo: adis, ADISD1 (/home/interx/adis/src/ADISD1.SRC)
    Gerando rotina em linguagem C:
    (yla5 adis ADISD1 -if)
    .ATENCAO: Definicao nao localizada
    Compilando o programa:
    (ycomp adis ADISD1 -exe adis/exe/ADISD1.temp.exe )
    adis/exe/ADISD1.temp.exe => adis/exe/ADISD1
    [ OK ] ADISD1 (adis) - Menu : Gerada e compilada com sucesso.

I cannot get the whole block, just the line containing "INFO", like an ordinary grep, whether or not I set ORS:

    $ cat file | awk 'BEGIN{RS="\n------------"}/INFO/{print}'
    .INFO =>Rotina BLOQUEADA (status 'M'): Geracao ignorada (use -is para ignorar checagem do status)

NOTES: It is the awk from AIX 7.1, not gawk.
    awk '{
        if (/-------------------------------------------------/) {
            if (hold ~ /INFO/) {
                print hold;
            }
            hold="";
        } else {
            hold=hold "\n" $0
        }
    }
    END {
        if (hold ~ /INFO/) {
            print hold;
        }
    }' file

This uses a 'hold'ing variable (à la sed) to accumulate lines between separated blocks; once a new block (or EOF) is encountered, the held value is printed only if it matches the /INFO/ pattern. (re: the older comments, I deleted my previous inadequate awk and perl answers to clean up this answer)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114939/" ] }
255,190
Using rm -rf LargeDirectory to delete a large directory can take a large amount of time to complete depending on the size of the directory. Is it possible to get a status update or somehow monitor the progress of this deletion to give a rough estimate as to where along in the process the command is?
From man rm, use the -v option:

    -v, --verbose
           explain what is being done
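If you want an actual progress estimate rather than a scrolling file list, a hedged trick is to count entries first and feed rm's verbose output through pv (GNU pv required; the count is approximate):

    total=$(find LargeDirectory | wc -l)
    rm -rv LargeDirectory | pv -l -s "$total" > /dev/null

pv -l counts lines (one per removed entry) and shows a percentage against the precomputed total.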
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41184/" ] }
255,197
I am running Kali 2.0 64-bit, and I recently noticed that avahi-daemon is starting at boot time, listening on several UDP ports. How do I disable it completely, without purging the package itself? I have tried

    sudo rcconf --off avahi-daemon

but there is a warning:

    Service 'avahi-daemon' is already off. Skipping...

I then tried

    sudo update-rc.d -f avahi-daemon remove

It doesn't produce any errors or warnings, but avahi-daemon still persists at boot time. I then tried editing the /etc/default/avahi-daemon file by adding

    AVAHI_DAEMON_START = 0

But that doesn't work either. I finally used the upstart manual override:

    echo manual | sudo tee /etc/init/avahi-daemon.override

And still no go. Please help, I am at my wits' end! Thank you.
sudo systemctl disable avahi-daemon

to disable boot-time startup. A few other useful options: systemctl list-units for a list of all known units, systemctl enable to enable boot-time startup, systemctl start to start a service from the terminal without enabling boot-time loading, and systemctl stop to stop a service which has been started. man systemctl and man systemd provide the complete set of options. Most (though not all) modern Linux distributions have switched, or are switching, to systemd from the traditional SysV init scripts. Also, http://blog.jorgenschaefer.de/2014/07/why-systemd.html covers some of the basics of systemd.
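One caveat: on systems where avahi also ships a socket unit, the daemon can be socket-activated again even with the service disabled. Assuming the usual unit names (check yours with systemctl list-units), the thorough version would be:

sudo systemctl stop avahi-daemon.service avahi-daemon.socket
sudo systemctl disable avahi-daemon.service avahi-daemon.socket
# optional: prevent any other unit from pulling it back in
sudo systemctl mask avahi-daemon.service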
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151403/" ] }
255,292
I am using a terminal emulator on Android called "Termux". It comes with the APT command installed. However, the repository is very limited. What I want to do, is to see if larger repositories like that of Ubuntu and Debian can be used with it. Since I only want to use and install CLI programs (like cowsay ), hopefully it shouldn't be a problem. Is this a good idea? If yes, how can it be done? If not, can individual packages be downloaded and installed any other easy way?
In case you have not noticed already, Android is a Linux kernel with a completely different userland, so if you want to (re)use regular Linux binaries on it, you're probably out of luck. Android has a different filesystem layout and lacks the standard Unix files — it doesn't even have a /tmp directory. You can try to set up a chroot on it. That may become complicated, especially if it requires mounting custom filesystems inside it. Even then, many programs will still be unable to run on Android: X11 applications, for example, require a running X11 server, and Android does not run one.
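To make the chroot idea concrete, here is a minimal sketch. It assumes a rooted device, a busybox-style mount and chroot, and a Debian rootfs already unpacked under /data/debian — all of those paths and tools are assumptions, not something Termux provides out of the box:

# bind the pseudo-filesystems the rootfs expects, then enter it
mount -o bind /dev /data/debian/dev
mount -t proc proc /data/debian/proc
chroot /data/debian /bin/bash

Inside that shell, the binaries see a normal Debian layout, so CLI packages from the regular repositories should behave as usual.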
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147811/" ] }
255,338
I have a bash terminal open. I want to find out whether the option extglob is enabled or disabled in this session, before I change its value. How can I do this?
Just run:

$ shopt extglob

It will return the current status:

$ shopt extglob
extglob         on
$ shopt -u extglob
$ shopt extglob
extglob         off

To show all options, just run:

$ shopt
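In a script, the quiet variant is often more convenient: shopt -q prints nothing and reports the state through its exit status alone. For example:

# exit status 0 if extglob is set, non-zero otherwise
if shopt -q extglob; then
    echo "extglob is enabled"
else
    echo "extglob is disabled"
fi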
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56755/" ] }
255,344
I am trying to set up my computer (running Crunchbang Linux Waldorf and i3) so that it is always, by default, configured so that pressing Ctrl + Shift and arrow keys resizes the window according to the direction of the arrows. The i3 user guide provides this example which I think is very close to what I want: mode "resize" { # These bindings trigger as soon as you enter the resize mode # Pressing left will shrink the window’s width. # Pressing right will grow the window’s width. # Pressing up will shrink the window’s height. # Pressing down will grow the window’s height. bindsym j resize shrink width 10 px or 10 ppt bindsym k resize grow height 10 px or 10 ppt bindsym l resize shrink height 10 px or 10 ppt bindsym semicolon resize grow width 10 px or 10 ppt # same bindings, but for the arrow keys bindsym Left resize shrink width 10 px or 10 ppt bindsym Down resize grow height 10 px or 10 ppt bindsym Up resize shrink height 10 px or 10 ppt bindsym Right resize grow width 10 px or 10 ppt # back to normal: Enter or Escape bindsym Return mode "default" bindsym Escape mode "default"}# Enter resize modebindsym $mod+r mode "resize" But I want to build it in natively, without having to enter and exit resize modes. I just want to use arrow keys, not J , K , L and ; keys. Any thoughts on how I would do that?
Best solution that I have figured out myself: Go to ~/.i3/config and open the file. Paste the following code at the end:

bindsym $mod+Ctrl+Right resize shrink width 1 px or 1 ppt
bindsym $mod+Ctrl+Up resize grow height 1 px or 1 ppt
bindsym $mod+Ctrl+Down resize shrink height 1 px or 1 ppt
bindsym $mod+Ctrl+Left resize grow width 1 px or 1 ppt

Save it and run i3-msg reload .
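If you want the exact Ctrl+Shift+arrows combination from the question rather than $mod+Ctrl, dropping $mod should work the same way — an untested sketch, assuming no other binding already claims those chords:

bindsym Ctrl+Shift+Right resize shrink width 1 px or 1 ppt
bindsym Ctrl+Shift+Up resize grow height 1 px or 1 ppt
bindsym Ctrl+Shift+Down resize shrink height 1 px or 1 ppt
bindsym Ctrl+Shift+Left resize grow width 1 px or 1 ppt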
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149766/" ] }
255,350
After upgrading to OpenRC 0.20 the system fails to boot properly: mounted into runlevel unknown (kernel 3.17.1). The / partition is mounted read-only:

/dev/sda3 on / type ext4 (ro, relatime, data=ordered)

so I did the following:

# mount / -o remount,rw

.. which worked. After that I did

# mount -a

which mounted my /dev/sda4 (/home). But any service I try to start gets me a segfault, e.g.

# service root start
Segmentation fault

I am running OpenRC 0.20, which seems to have been installed yesterday in my latest emerge world.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120813/" ] }
255,353
What I would like is to use avahi-daemon to multicast more than one name, so that I could connect to the machine as domainA.local and domainB.local. I could then reroute these addresses to the different web interfaces of different applications with nginx. Is it possible to configure avahi-daemon in such a way that it multicasts multiple names? P.S. Using avahi-daemon is not a requirement. If there is another program that has this functionality I would gladly switch.

Research and results

So as suggested by gollum, I tried avahi-aliases first. It is in the repositories, but it did not appear to install correctly on my system. According to the instructions it should have installed a script in /etc/init.d/, but there was none. I then gave the other link that gollum suggested a try and this worked straight away. It depends on python-avahi and is just an example of a python script that needs to run in the background. I am now able to broadcast domainA.local, domainB.local and domainC.local, and in combination with nginx that leads to different web interfaces on the machine, all accessible on port 80.

Update

After some more fiddling with the two, I also discovered that avahi-aliases can only broadcast subdomains. So if your computer name were elvispc, then avahi-aliases could only broadcast subdomainA.elvispc.local and subdomainB.elvispc.local, whereas the python script will broadcast any name.
A cumbersome solution would be running several instances of the following command in the background:

avahi-publish -a -R whatever.local 192.168.123.1

A better solution is probably publishing CNAMEs using python-avahi. See e.g. https://github.com/airtonix/avahi-aliases or http://www.avahi.org/wiki/Examples/PythonPublishAlias

Update: The avahi wiki seems to be gone. Here is the archived page of the link I've posted: https://web.archive.org/web/20151016190620/http://www.avahi.org:80/wiki/Examples/PythonPublishAlias
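For the record, the cumbersome variant is trivial to script — a sketch with placeholder names and a placeholder address:

# one background avahi-publish process per extra name
avahi-publish -a -R domainA.local 192.168.123.1 &
avahi-publish -a -R domainB.local 192.168.123.1 &
avahi-publish -a -R domainC.local 192.168.123.1 &

Each process keeps its record registered only for as long as it runs, so something (a shell script, a service unit, ...) has to keep them alive across reboots.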
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138422/" ] }
255,373
I have 25GB text file that needs a string replaced on only a few lines. I can use sed successfully but it takes a really long time to run. sed -i 's|old text|new text|g' gigantic_file.sql Is there a quicker way to do this?
You can try:

sed -i '/old text/ s//new text/g' gigantic_file.sql

From this ref :

OPTIMIZING FOR SPEED: If execution speed needs to be increased (due to large input files or slow processors or hard disks), substitution will be executed more quickly if the "find" expression is specified before giving the "s/.../.../" instruction.

Here is a comparison over a 10G file. Before:

$ time sed -i 's/original/ketan/g' wiki10gb

real    5m14.823s
user    1m42.732s
sys     1m51.123s

After:

$ time sed -i '/ketan/ s//original/g' wiki10gb

real    4m33.141s
user    1m20.940s
sys     1m44.451s
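Along the same lines, if you happen to know roughly where the few affected lines live, a line-range address skips the regex work everywhere else. The range below is a placeholder, and note the whole file is still rewritten because of -i:

# only lines 1000-2000 are even tested for the pattern
sed -i '1000,2000s/old text/new text/g' gigantic_file.sql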
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/255373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151547/" ] }
255,379
Was reading this article about automatically logging into a Raspberry Pi, and they say to use this command:

1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1

After going through the manual I see that -f means no auth and that pi is the user, but what does tty1 </dev/tty1 >/dev/tty1 2>&1 do? I assume tty1 is the terminal to log into or something, but then the following arguments are confusing as well. Why are there angle brackets, as in </dev/tty1 ? Are they doing some weird redirection? I would really appreciate it if someone could break it down. I'm not a fan of using commands I'm unfamiliar with.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/255379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102764/" ] }
255,380
I have data like this:

chr1 134901 139379 - "ENSG00000237683.5";
chr1 860260 879955 + "ENSG00000187634.6";
chr1 861264 866445 - "ENSG00000268179.1";
chr1 879584 894689 - "ENSG00000188976.6";
chr1 895967 901095 + "ENSG00000187961.9";

which I generated by parsing a GTF file. I want to remove the " 's and ; 's from column 5, using awk or sed if possible. The result would look like this:

chr1 134901 139379 - ENSG00000237683.5
chr1 860260 879955 + ENSG00000187634.6
chr1 861264 866445 - ENSG00000268179.1
chr1 879584 894689 - ENSG00000188976.6
chr1 895967 901095 + ENSG00000187961.9
Using gsub :

awk '{gsub(/\"|\;/,"")}1' file
chr1 134901 139379 - ENSG00000237683.5
chr1 860260 879955 + ENSG00000187634.6
chr1 861264 866445 - ENSG00000268179.1
chr1 879584 894689 - ENSG00000188976.6
chr1 895967 901095 + ENSG00000187961.9

If you want to operate only on the fifth field and preserve any quotes or semicolons in other fields:

awk '{gsub(/\"|\;/,"",$5)}1' file
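Since the two characters never appear outside column 5 in the sample data, plain tr or sed would do the same job and may be simpler:

# delete every double quote and semicolon
tr -d '";' < file
# or equivalently with sed
sed 's/[";]//g' file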
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139639/" ] }
255,385
When I looked in the manual for agetty all I saw was alternative getty
There was a program named getty in 1st Edition Unix. The BSDs usually have a program named getty that is a (fairly) direct descendant of this. It (nowadays) reads /etc/ttys for the database of configured terminal devices and /etc/gettytab for the database of terminal line types (a line type being passed as an argument to the getty program). The Linux world has a collection of clones and reimplementations, as did Minix before it.

agetty was written by Wietse Venema, as an "alternative" to AT&T System 5 and SunOS getty , and ported to Linux by Peter Orbaek (who also provided simpleinit alongside it). It is suitable for use with serial devices, with either modems or directly connected terminals, as well as with virtual terminal devices.

Paul Sutcliffe, Jr.'s getty and uugetty are hard to find nowadays, but were an alternative to agetty . (The getty-ps package containing them both can still be found in Slackware.)

Fred van Kempen wrote an "improved" getty and init for Minix in 1990.

Gert Doering's mgetty is another getty that is suitable for use with actual serial devices, and was designed to support "smart" modems such as fax-modems and voice-modems, not just "dumb" terminal-only modems.

Florian La Roche's mingetty was designed to support neither serial devices nor generic getty functionality on any kind of terminal device. Rather, it is specific to virtual terminal devices and cuts out all of the traditional getty hooplah that is associated with modems and serial devices.

Felix von Leitner's fgetty was derived from mingetty , adjusted to use a C library with a smaller footprint than the GNU C library, and tweaked to include things like the checkpasswd mechanism.

Nikola Vladov's ngetty was a rearchitecture of the whole getty mechanism. Instead of init (directly or indirectly) knowing about the TTYs database and spawning multiple instances of getty, each to respond on one terminal, init spawns one ngetty process that monitors all of the terminals.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102764/" ] }
255,438
I made a small systemd service to manage an HLTV server (it records demos from a game and stores them on disk):

[Unit]
Description=HLTV server
Requires=cs16.service
After=cs16.service

[Service]
Type=simple
User=cs16
Group=cs16
UMask=007
ExecStart=/home/cs16/server/hltv_start.sh
Restart=on-failure
# Configures the time to wait before service is stopped forcefully.
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target

It works great, but if I shut down or restart the system, it kills the process, which corrupts the demo that is currently being written. To properly save the demo, I need to type "quit" or "stop" in the hltv command tool. Is there a way to make systemd send one of those commands to the program before closing it?
By default, if you systemctl stop a process, systemd sends SIGTERM to the process. That process is responsible for handling the signal and shutting down gracefully. If the process does not shut down within 90 seconds, SIGKILL will forcefully stop the process. There are several ways to change this default behaviour:

ExecStop= : If your process would normally shut down via an external signal from a socket, shared memory, or a touched file, you can execute a script that sends that to your process using ExecStop= .

KillSignal= : If your process doesn't respond well to SIGTERM, you can change which signal is sent to the process using KillSignal= . Maybe your process responds better to SIGINT, SIGABRT, or SIGQUIT. Note that this signal should initiate the save that your program will perform.

TimeoutStopSec= : This defaults to 90s, giving that much time between the first KillSignal and FinalKillSignal . I see you've already increased this to 300s. If your saves normally take longer than this, increase it. If that's more than you'd expect for a save, then your process is probably not responding to your current KillSignal= .

SendSIGKILL= : If your process does start the save, but you want to wait indefinitely, you can set this to no to let the service hang forever. (Play with TimeoutStopSec= first.)

FinalKillSignal= : If your kill signal didn't work well and you want to follow up after TimeoutStopSec= with something other than SIGKILL, you can set that signal here.

Think about how you would normally stop the process if you weren't running it as a service. Then emulate that. Do you use CTRL+D in a terminal? Do you kill the process? Since your process was started with ExecStart=/home/cs16/server/hltv_start.sh , maybe you want ExecStop=/home/cs16/server/hltv_stop.sh .

To properly save the demo, I need to type "quit" or "stop" in the hltv command tool.

Is this command tool a command-line application? If so, maybe you just need:

ExecStop=/home/cs16/server/hltv_command_tool.sh -c stop

If this tool is part of the HLTV GUI, you might be able to send the quit or stop text via your window manager. The following command runs a bash shell, then uses xdotool to search for a window with hltv in the title. Then it will type quit into that window and then it will press Enter :

ExecStop=/bin/bash -c 'xdotool type --window "$(xdotool search hltv | head -1)" quit && xdotool key --window "$(xdotool search hltv | head -1)" Enter'
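Pulling a few of these together, a hedged sketch of what the [Service] section might gain — the signal choice is an assumption, so check which signal the hltv process actually honours before relying on it:

[Service]
...
# ask the process to shut down the way an interactive Ctrl+C would
KillSignal=SIGINT
# give the demo save up to five minutes before escalating
TimeoutStopSec=300
# after that, still fall back to SIGKILL rather than hanging
SendSIGKILL=yes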
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88034/" ] }
255,480
I get what I expected when doing this in bash :

[ "a" == "a" ] && echo yes

It gave me yes. But when I do this in zsh , I get the following:

zsh: = not found

Why does the same command ( /usr/bin/[ ) behave differently in different shells?
It's not /usr/bin/[ in either of the shells. In Bash, you're using the built-in test / [ command , and similarly in zsh . The difference is that zsh also has an = expansion : =foo expands to the path to the foo executable. That means == is treated as trying to find a command called = in your PATH . Since that command doesn't exist, you get the error

zsh: = not found

that you saw (and in fact, this same thing would happen even if you actually were using /usr/bin/[ ). You can use == here if you really want. This works as you expected in zsh:

[ "a" "==" "a" ] && echo yes

because the quoting prevents =word expansion running. You could also disable the equals option with setopt noequals . However, you'd be better off either:

Using single = , the POSIX-compatible equality test ; or

Better still, using the [[ conditionals with == in both Bash and zsh . In general, [[ is just better and safer all around, including avoiding this kind of issue (and others) by having special parsing rules inside.
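The expansion is easy to see interactively (the path printed depends on your system; the output below is illustrative):

$ echo =ls
/usr/bin/ls
$ echo ==
zsh: = not found

The second command fails exactly like the original [ "a" == "a" ] , because == is parsed as =word with word being = , and no command named = exists.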
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/255480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141443/" ] }
255,484
I know how to create a bridge using brctl , but I have been advised not to use this anymore, and to use iproute2 or ip instead(since brctl is deprecated presumably). Assuming this is good advice, how do I create a bridge using ip ? For instance, say I wanted to bridge eth0 and eth1 .
You can use the bridge object of the ip command, or the bridge command that is part of the iproute2 package.

Basic link manipulation

To create a bridge named br0 , with eth0 and eth1 as members:

ip link add name br0 type bridge
ip link set dev br0 up
ip link set dev eth0 master br0
ip link set dev eth1 master br0

To remove an interface from the bridge:

ip link set dev eth0 nomaster

And finally, to destroy the bridge once no interface is a member:

ip link del br0

Forwarding manipulation

To manipulate other aspects of the bridge like the FDB (Forwarding Database), I suggest you take a look at the bridge(8) command . Examples:

Show the forwarding database on br0 :

bridge fdb show dev br0

Disable a port ( eth0 ) from processing BPDUs. This will make the interface filter any incoming BPDU:

bridge link set dev eth0 guard on

Setting the STP cost of a port ( eth1 for example):

bridge link set dev eth1 cost 4

To set root guard on eth1 :

bridge link set dev eth1 root_block on

Cost is calculated using several factors, and link speed is one of them. Using a fixed cost, disabling the processing of BPDUs and enabling root_block is somewhat similar to the root guard feature of managed switches. Other features like vepa, veb and hairpin mode can be found in the bridge link sub-command list.

VLAN rules manipulation

The vlan object of the bridge command will allow you to create ingress/egress filters on bridges.

To show whether there are any vlan ingress/egress filters:

bridge vlan show

To add rules to a given interface:

bridge vlan add dev eth1 <vid, pvid, untagged, self, master>

To remove rules, use the same parameters as vlan add at the end of the command to delete a specific rule:

bridge vlan delete dev eth1

Related stuff:

bridge(8) manpage
How to create a bridge interface
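One practical note: once ports are enslaved, any layer-3 address belongs on the bridge itself, not on the member interfaces — for instance (the address below is a placeholder):

# the bridge, not eth0/eth1, carries the IP configuration
ip addr add 192.168.1.1/24 dev br0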
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/255484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92090/" ] }
255,504
I have a long psql command to execute within a bash shell script, and I am wondering what the correct way is to split it across several lines. The standard Unix script 'split line' backslash character doesn't work. Within the postgres environment of course you could keep going line after line and the command doesn't get processed until you enter the closing semicolon, but you don't use that within a shell script. My command is this:

sudo -u postgres /opt/puppet/bin/psql puppetdb -t -c "select certname, r.value as role, e.value as env from certname_facts r join certname_facts e using (certname) where r.name = 'role' and e.name = 'env' order by role,env,certname" | grep -v "^$" > /home/ansible/inventory_list

I don't want to change the command, it all works perfectly when entered manually, but I need to know the correct way to make this into a split-line entry, something like:

sudo -u postgres /opt/puppet/bin/psql puppetdb -t
-c "select certname, r.value as role, e.value as env
from certname_facts r join certname_facts e using (certname)
where r.name = 'role' and e.name = 'env'
order by role,env,certname" | grep -v "^$" > /home/ansible/inventory_list

Any suggestions, please?
Nothing wrong with splitting it up with backslashes as you showed. However, it's generally better to send the SQL via stdin. For postgres, this is especially true, since the -c option is fixed to return output from only one command, whereas accepting commands from stdin, you can stack as many commands together as you like. So, you would do something like:

sudo -u postgres /opt/puppet/bin/psql puppetdb -t <<SQL | ...
    select certname, r.value as role, e.value as env
    from certname_facts r join certname_facts e using (certname)
    where r.name = 'role' and e.name = 'env'
    order by role,env,certname
SQL

Bash variables might get interpolated here. To avoid that, quote the first instance of SQL :

sudo -u postgres /opt/puppet/bin/psql puppetdb -t <<'SQL' | ...
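It may also help to know why the backslashes felt unnecessary: inside double quotes, a newline is just part of the string, so the original command can simply be broken inside the quoted SQL with no continuation characters at all — a sketch, untested against the real database:

sudo -u postgres /opt/puppet/bin/psql puppetdb -t -c "
    select certname, r.value as role, e.value as env
    from certname_facts r join certname_facts e using (certname)
    where r.name = 'role' and e.name = 'env'
    order by role,env,certname" | grep -v "^$" > /home/ansible/inventory_list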
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/255504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151159/" ] }
255,509
When dual booting Windows 7/10 and Linux Mint/Ubuntu, you may find yourself having to re-pair your Bluetooth devices again and again. This will happen every time you switch OS. Now, how do you prevent this? I'm answering my own question with the following guide, which has been tested on Ubuntu 14.04 and Linux Mint 17.2, 17.3 and now Linux Mint 18.x.
Why does this happen? Basically, when you pair your device, your Bluetooth service generates a unique set of pairing keys. First, your computer stores the Bluetooth device's MAC address and pairing key. Second, your Bluetooth device stores your computer's MAC address and the matching key. This usually works fine, but the MAC address for your Bluetooth port will be the same on both Linux and Windows (it is set on the hardware level). Thus, when you re-pair the device in Windows or Linux and it generates a new key, that key overwrites the previously stored key on the Bluetooth device. Windows overwrites the Linux key and vice versa.

Bluetooth LE devices: These may pair differently. I haven't investigated myself, but this may help: Dual Boot Bluetooth LE (low energy) device pairing

How to fix

Using the instructions below, we'll first pair your Bluetooth devices with Ubuntu/Linux Mint, and then we'll pair Windows. Then we'll go back into our Linux system and copy the Windows-generated pairing key(s) into our Linux system.

1. Pair all devices with Mint/Ubuntu
2. Pair all devices with Windows
3. Copy your Windows pairing keys in one of two ways:

Use psexec -s -i regedit.exe from Windows (harder). You need psexec because normal regedit doesn't have enough permissions to show these values.

Go to "Devices & Printers" in Control Panel and go to your Bluetooth device's properties. Then, in the Bluetooth section, you can find the unique identifier. Copy that (you will need it later). Note: on newer versions of Windows the route to the device's properties is to go through Settings -> Bluetooth & devices -> Devices -> More devices and printer settings

Download PsExec from http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx . Unzip the zip you downloaded and open a cmd window with elevated privileges. (Click the Start menu, search for cmd , then right-click the CMD and click "Run as Administrator".) cd into the folder where you unzipped your download. Run:

psexec -s -i regedit.exe

Navigate to find the keys at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BTHPORT\Parameters\Keys . If there is no CurrentControlSet , try ControlSet001 . You should see a few keys labelled with MAC addresses - write down the MAC address associated with the unique identifier you copied before. Note: If there are no keys visible after pairing, you likely need to add permissions to read Keys\

Use chntpw from your Linux distro (easier). Start in a terminal, then:

sudo apt-get install chntpw

Mount your Windows system drive in read-write mode, then:

cd /[WindowsSystemDrive]/Windows/System32/config
chntpw -e SYSTEM

opens a console. Run these commands in that console:

> cd CurrentControlSet\Services\BTHPORT\Parameters\Keys
> # if there is no CurrentControlSet, then try ControlSet001
> # on Windows 7, "services" above is lowercased.
> ls
# shows you your Bluetooth port's MAC address
Node has 1 subkeys and 0 values
  key name
  <aa1122334455>
> cd aa1122334455   # cd into the folder
> ls                # lists the existing devices' MAC addresses
Node has 0 subkeys and 1 values
  size  type         value name      [value if type DWORD]
    16  REG_BINARY   <001f20eb4c9a>
> hex 001f20eb4c9a
=> :00000  XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX ...ignore..chars..
# ^ the XXs are the pairing key

Make a note of which Bluetooth device MAC address matches which pairing key. The Mint/Ubuntu one won't need the spaces in-between. Ignore the :00000 .

4. Go back to Linux (if not in Linux) and add our Windows key to our Linux config entries.
Just note that the Bluetooth port's MAC address is formatted differently when moving from Windows to Linux - referenced as aa1122334455 in Windows in my example above. The Linux version will be in all caps and punctuated by ':' after every two characters - for example AA:11:22:33:44:55.

Based on your version of Linux, you can do one of these:

Before Mint 18/16.04 you could do this: edit (with sudo) /var/lib/bluetooth/[MAC address of Bluetooth]/linkkeys - [the MAC address of Bluetooth] should be the only folder in that Bluetooth folder. This file should look something like this:

[Bluetooth MAC]     [Pairing key]                     [digits in pin] [0]
AA:11:22:33:44:55   XXXXXXXXxxXXxXxXXXXXXxxXXXXXxXxX  5 0
00:1D:D8:3A:33:83   XXXXXXXXxxXXxXxXXXXXXxxXXXXXxXxX  4 0

Change the Linux pairing key to the Windows one, minus the spaces.

In Mint 18 (and Ubuntu 16.04) you may have to do this:

Switch to root: su - (in more modern versions of Ubuntu, sudo -i )
cd to your Bluetooth config location /var/lib/bluetooth/[bth port MAC addresses]
Here you'll find folders for each device you've paired with, the folder names being the Bluetooth devices' MAC addresses; each contains a single file, info . In these files, you'll see the link key you need to replace with your Windows one, like so:

[LinkKey]
Key=B99999999FFFFFFFFF999999999FFFFF

Once updated, restart your Bluetooth service in one of the following ways, and then it works!

Ubuntu, Mint, Arch: sudo systemctl restart bluetooth
Alternatively, reboot your machine into Linux.

Reboot into Windows - it works!
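A small hedged helper for the format conversion: chntpw/regedit show the key as spaced, lowercase hex bytes, while the BlueZ files above want one uppercase run. Assuming the bytes have been copied into a shell variable (the value here is made up):

# "a1 b2 c3 ..." -> "A1B2C3..."
key='a1 b2 c3 d4 e5 f6 a7 b8 c9 d0 e1 f2 a3 b4 c5 d6'
echo "$key" | tr -d ' ' | tr 'a-f' 'A-F'

Remember to drop the leading :00000 column first if you copied the whole hex dump line.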
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/255509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151465/" ] }
255,519
I installed VirtualBox; on the first run I got this warning:

WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (3.10.0-327.4.4.el7.x86_64) or it failed to load. Please recompile the kernel module and install it by sudo /sbin/rcvboxdrv setup

After launching this command, I got:

sudo /sbin/rcvboxdrv setup
Bad argument setup

I tried another way, which is the following commands:

/etc/init.d/vboxdrv setup
bash: /etc/init.d/vboxdrv: No such file or directory

Last way that I tried:

/usr/lib/virtualbox/vboxdrv.sh setup
Stopping VirtualBox kernel modules                          [  OK  ]
Uninstalling old VirtualBox DKMS kernel modules             [  OK  ]
Removing old VirtualBox pci kernel module                   [  OK  ]
Removing old VirtualBox netadp kernel module                [  OK  ]
Removing old VirtualBox netflt kernel module                [  OK  ]
Removing old VirtualBox kernel module                       [  OK  ]
Trying to register the VirtualBox kernel modules using DKMS
depmod: ... Some Warnings ...                               [  OK  ]
Starting VirtualBox kernel modules                          [FAILED]

What can I try?
I hit the same issue today after upgrading to 5.0.14 (I am on Ubuntu 15.10, using the official VirtualBox apt repo). I fixed it with:

sudo /usr/lib/virtualbox/vboxdrv.sh setup
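As a quick sanity check afterwards, you can confirm the module really loaded before starting a VM:

# should print a line for vboxdrv (and usually vboxnetflt/vboxnetadp too)
lsmod | grep vboxdrv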
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151669/" ] }
255,524
I am trying to obtain the full date (created or modified) of a particular file for passing to another program. I have tried variations of options with the ls command, but none provide a full date for files less than 6 months old, and I have limited options available. When I try certain options I have seen while researching this, I get the following message:

usage: ls [-1ACFHLNRabcdefgilmnopqrstuxEUX] [File...]

None of these seems to provide what I need as far as I can tell, so I tried to use the stat command, but it is not available to me. I am using the Korn shell on AIX 5.3, which has limited commands available. Can anyone suggest another way that I might be able to obtain a file's created or modified date as a full date (either dd/mm/yyyy or yyyy/mm/dd)?
With ls , though you may not always be able to get the time , you should be able to derive the date (year, month and day of the month). In the C locale, the date output in ls -l should either be Mmm dd HH:MM for recent files (from which you can derive the year: either this year or the previous one) or Mmm dd YYYY for older files or files with a modification time in the future. So you should always be able to get the date (YYYY-mm-dd) out of that:

eval "$(date +'year=%Y month=%m')"
LC_ALL=C ls -dn file | awk -v y="$year" -v m="$month" '{
    month = index("--JanFebMarAprMayJunJulAugSepOctNovDec", $6) / 3
    day = $7
    if ($8 ~ /:/)
        year = y - (month > m)
    else
        year = $8
    printf "%04d-%02d-%02d\n", year, month, day
    exit
}'

Now, if you want the full modification time with maximum precision, I'm afraid there is no standard command for that. You'll find some ls implementations that have options for that ( ls --full-time with GNU ls or -D <format> with FreeBSD ls for instance). There exist a number of different and incompatible implementations of a stat command (IRIX, zsh builtin, GNU, BSD) that can give you that. Or you could use the -printf predicate of GNU find . Or the -r option of GNU date . Not all implementations will give you sub-second granularity.

And beware of time zones and DST: depending on the format you choose and the timezone you're in, a given output may be ambiguous and refer to more than one possible date. For symlinks, you may also want to ask yourself whether it's the modification time of the link or its target you're after. Some of the options mentioned here will do one or the other by default and some of them can be told to do one or the other on demand.

zsh stat:
    stat -F '%Y-%m-%d %T.%N %z' +mtime file
    ↳ 1992-05-13 14:57:00.123368710 +0100
GNU stat:
    stat -c %y file
    ↳ 1992-05-13 14:57:00.123368710 +0100
BSD stat:
    stat -t '%F %T %z' -f %Sm file
    ↳ 1992-05-13 14:57:00 +0100
IRIX stat:
    stat -m file
GNU find:
    find file -prune -printf '%TF %TT %Tz\n'
    ↳ 1992-05-13 14:57:00.1233687100 +0100
GNU date:
    date -r file '+%F %T.%N %z'
    ↳ 1992-05-13 14:57:00.123368710 +0100
FreeBSD ls:
    ls -D '[%F %T %z]' -l file
    ↳ -r-xr-xr-x 2 bin bin 372298 [1992-05-13 14:57:00 +0100] file
GNU ls:
    ls --full-time -l file
    ↳ -r-xr-xr-x 2 bin bin 372298 1992-05-13 14:57:00.123368710 +0100 file
ast-open ls:
    ls -Z '%(mtime:time=%F %T.%N %z)s'
    ↳ 1992-05-13 14:57:00.123368710 +0100

AIX, which your ls synopsis suggests you may be using, has an istat command ( AIX 5.3 man page ) that displays dates in full (without sub-second granularity, and ambiguous unless you force TZ to UTC0 ), though it is not that easy to parse:

$ LC_ALL=C TZ=UTC0 istat file
Inode 10360 on device 10/6      File
Protection: r-xr-xr-x
Owner: 2(bin)           Group: 2(bin)
Link count: 2           Length 372298 bytes

Last updated:   Wed May 13 14:08:13 1992
Last modified:  Wed May 13 13:57:00 1992
Last accessed:  Sun Jan 31 15:49:23 1993

Also note that for symlinks, you'll get the date of the target of the symlink, not the symlink itself.

If you don't have access to any of those, your best bet for portability may be perl :

$ perl -MPOSIX -le 'print strftime("%Y-%m-%d %T %z", localtime((lstat(shift))[9]))' file
1992-05-13 14:57:00 +0100

Note that few systems have a creation time for files (sometimes called birth time ), and there's no standard API, let alone a command, to query it, so the situation is even worse than for the modification time.
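On AIX specifically, if istat turns out to be the only tool at hand, its output can at least be trimmed down with standard utilities — a sketch based on the sample output above (the result is still in ctime(3) form, not yyyy/mm/dd, so further reformatting would be needed):

LC_ALL=C TZ=UTC0 istat file | sed -n 's/^Last modified:[ ]*//p'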
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/255524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140241/" ] }