412,952
An ancient version of ipconfig (inside initramfs) accepts at most 7 colon-separated elements in its user input, like: ip=client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf — supplying more than 7 elements results in an ipconfig error. Therefore the extra elements (2 DNS resolvers) should be chopped off. That can be done inside a subshell with cut, like: validated_input=$(echo ${user_input} | cut -f1,2,3,4,5,6,7 -d:) How can such a cut be written using (b)ash parameter expansion/substitution, without launching subshell(s)/subprocess(es) (piping) and without IFS-wrangling/mangling? The motivation is (1) speed, see "Using bash variable substitution instead of cut/awk", and (2) learning. In other words: how do I find the n-th (here 7th) occurrence of a character and remove/trim everything from there until the end of the string?
This uses only parameter expansion:

${var%:"${var#*:*:*:*:*:*:*:}"}

Example:

$ var=client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf:morefields:another:youwantanother:haveanother:
$ echo "${var%:"${var#*:*:*:*:*:*:*:}"}"
client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf

Thanks ilkkachu for coming up with a fix to the trailing :!

From the Bash Reference Manual (3.5.3), on ${parameter#word} and ${parameter##word}: "The word is expanded to produce a pattern just as in filename expansion (see Filename Expansion). If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted. If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list." This will attempt to match the beginning of your parameter, and if it does it will strip it. Example:

$ var=a:b:c:d:e:f:g:h:i
$ echo "${var#a}"
:b:c:d:e:f:g:h:i
$ echo "${var#a:b:}"
c:d:e:f:g:h:i
$ echo "${var#*:*:}"
c:d:e:f:g:h:i
$ echo "${var##*:}"   # Two hashes make it greedy
i

On ${parameter%word} and ${parameter%%word}: "The word is expanded to produce a pattern just as in filename expansion. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the value of parameter with the shortest matching pattern (the '%' case) or the longest matching pattern (the '%%' case) deleted. If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list." This will attempt to match the end of your parameter, and if it does it will strip it. Example:

$ var=a:b:c:d:e:f:g:h:i
$ echo "${var%i}"
a:b:c:d:e:f:g:h:
$ echo "${var%:h:i}"
a:b:c:d:e:f:g
$ echo "${var%:*:*}"
a:b:c:d:e:f:g
$ echo "${var%%:*}"   # Two %s make it greedy
a

So, in the answer: ${var%:"${var#*:*:*:*:*:*:*:}"} (note the quotes around ${var#...} so that it is treated as a literal string, not a pattern, to be stripped off the end of $var). When applied to

var=client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf:morefields:another:youwantanother:haveanother:

the inner expansion gives

${var#*:*:*:*:*:*:*:} = morefields:another:youwantanother:haveanother:

That is expanded inside ${var%: ... } like so:

${var%:morefields:another:youwantanother:haveanother:}

So you are saying: give me client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf:morefields:another:youwantanother:haveanother:, but trim :morefields:another:youwantanother:haveanother: off the end.
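As a generalization (a minimal sketch, not from the original answer): the same idea can be wrapped in a small bash function that keeps the first N colon-separated fields of a string, building the *:*:...: pattern in a loop. The function name keep_fields is made up for illustration.

keep_fields() {
    # keep the first $2 colon-separated fields of $1, using only parameter expansion
    local s=$1 n=$2 pat= i
    for (( i = 0; i < n; i++ )); do pat+='*:'; done
    # $pat must stay unquoted here so it is treated as a pattern, not a literal
    printf '%s\n' "${s%:"${s#$pat}"}"
}
keep_fields "$var" 7   # -> client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf

If the string has fewer than N fields, neither expansion matches and the string is returned unchanged.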
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/412952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17560/" ] }
412,980
I have the following file with variables and values:

# more file.txt
export worker01="sdg sdh sdi sdj sdk"
export worker02="sdg sdh sdi sdj sdm"
export worker03="sdg sdh sdi sdj sdf"

I source it in order to read the variables:

# source file.txt
# echo $worker01
sdg sdh sdi sdj sdk

Up to now everything is perfect, but now I want to read the variables from the file and print their values with a simple bash loop. I read the second field and try to print the value of the variable:

# for i in `sed s'/=/ /g' /tmp/file.txt | awk '{print $2}'`
do
echo $i
declare var="$i"
echo $var
done

but it prints only the variable names and not the values:

worker01
worker01
worker02
worker02
worker03
worker03

Expected output:

worker01
sdg sdh sdi sdj sdk
worker02
sdg sdh sdi sdj sdm
worker03
sdg sdh sdi sdj sdf
You have export worker01="sdg sdh sdi sdj sdk", then you replace = with a space to get export worker01 "sdg sdh sdi sdj sdk". The space-separated fields in that are export, worker01, "sdg, sdh, etc. It's probably better to split on =, and remove the quotes, so with just the shell:

$ while IFS== read -r key val ; do
      val=${val%\"}; val=${val#\"}; key=${key#export };
      echo "$key = $val";
  done < vars
worker01 = sdg sdh sdi sdj sdk
worker02 = sdg sdh sdi sdj sdm
worker03 = sdg sdh sdi sdj sdf

key contains the variable name, val the value. Of course this doesn't actually parse the input, it just removes the double quotes if they happen to be there.
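If the goal is to actually recreate the variables in the current shell rather than just print them, a sketch (assuming the file keeps the export NAME="..." form shown above):

while IFS== read -r key val; do
    key=${key#export }; val=${val%\"}; val=${val#\"}
    declare "$key=$val"      # creates worker01, worker02, ... in this shell
done < file.txt
echo "$worker01"             # -> sdg sdh sdi sdj sdk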
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/412980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
412,984
I run Debian 9.3. I went to the NodeJS website to see how to install NodeJS v9.X on my machine and ran the code provided:

curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
sudo apt-get install -y nodejs

But the terminal spit out this message:

Reading package lists... Done
Building dependency tree
Reading state information... Done
nodejs is already the newest version (4.8.2~dfsg-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

My machine is stuck with NodeJS v4.8.2 and NPM v1.4.21. How do I upgrade to the latest NodeJS and NPM?

UPDATE: I followed @GAD3R's instructions. It still installs v4.8.2. Here's what I get after running GAD3R's commands and then running sudo apt install nodejs:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libuv1
The following NEW packages will be installed:
  libuv1 nodejs
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/3,524 kB of archives.
After this operation, 14.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Selecting previously unselected package libuv1:amd64.
(Reading database ... 141225 files and directories currently installed.)
Preparing to unpack .../libuv1_1.9.1-3_amd64.deb ...
Unpacking libuv1:amd64 (1.9.1-3) ...
Selecting previously unselected package nodejs.
Preparing to unpack .../nodejs_4.8.2~dfsg-1_amd64.deb ...
Unpacking nodejs (4.8.2~dfsg-1) ...
Setting up libuv1:amd64 (1.9.1-3) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up nodejs (4.8.2~dfsg-1) ...
update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode

When I run update-alternatives --config nodejs, the terminal prints:

update-alternatives: error: no alternatives for nodejs

When I run apt-cache policy nodejs, I get this:

nodejs:
  Installed: 4.8.2~dfsg-1
  Candidate: 4.8.2~dfsg-1
  Version table:
     9.3.0-1nodesource1 500
        500 https://deb.nodesource.com/node_9.x stretch/main amd64 Packages
     8.9.3~dfsg-2 1
          1 http://ftp.us.debian.org/debian experimental/main amd64 Packages
     6.12.0~dfsg-2 500
        500 http://ftp.us.debian.org/debian unstable/main amd64 Packages
 *** 4.8.2~dfsg-1 990
        990 http://ftp.us.debian.org/debian stretch/main amd64 Packages
        100 /var/lib/dpkg/status

I edited /etc/apt/preferences with sudo (the file did not exist until now) and wrote this in it:

Package: *
Pin: release n=experimental
Pin-Priority: 100

Package: *
Pin: release n=unstable
Pin-Priority: 100

Package: *
Pin: release n=stable
Pin-Priority: 500

I re-ran the commands from GAD3R's post, but Debian still installed v4.8.2 of the nodejs package.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/412984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148559/" ] }
413,012
When I run date +"%Y%m%d%H%M%S" I receive 20171225203309 here in the CET time zone. Can I use date to obtain the current time in the same format, but for the GMT time zone?
You can use date -u (universal time), which is equivalent to GMT. Quoting the date manual:

'-u'
'--utc'
'--universal'

Use Universal Time by operating as if the 'TZ' environment variable were set to the string 'UTC0'. UTC stands for Coordinated Universal Time, established in 1960. Universal Time is often called "Greenwich Mean Time" (GMT) for historical reasons. Typically, systems ignore leap seconds and thus implement an approximation to UTC rather than true UTC.
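For example (illustrative output, assuming a CET clock one hour ahead of UTC):

$ date +"%Y%m%d%H%M%S"           # local time (CET)
20171225203309
$ date -u +"%Y%m%d%H%M%S"        # UTC/GMT
20171225193309
$ TZ=UTC0 date +"%Y%m%d%H%M%S"   # equivalent, via the TZ variable
20171225193309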
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/413012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105620/" ] }
413,144
Background: I'm running a larger one-line command. It is unexpectedly outputting (twice per iteration) the following:

__bp_preexec_invoke_exec "$_"

Here is the pared-down command (other activity in the loop removed):

for i in `seq 1 3`; do sleep .1 ; done

Note: after I have played with this a few times, it inexplicably stops printing the unexpected output.

What I've tried: If I remove sleep .5 I do not get the unexpected output. If I simply run sleep .5 the prompt returns but there is no output. I have googled around for __bp_preexec_invoke_exec, but I am unable to determine how it applies to what I'm doing.

Question: What is __bp_preexec_invoke_exec "$_"? How can I run this without the unwanted output?

More info on the solution, thanks to @gina2x. Here is (part of) the output of declare -f | grep preexec:

preexec_functions+=(preexec);
__bp_preexec_interactive_mode="on"
__bp_preexec_invoke_exec ()
    if [[ -z "$__bp_preexec_interactive_mode" ]]; then
        __bp_preexec_interactive_mode="";
        __bp_preexec_interactive_mode="";
    local preexec_function;
    local preexec_ret_value=0;
    for preexec_function in "${preexec_functions[@]}";
        if type -t "$preexec_function" > /dev/null; then
            $preexec_function "$this_command";
            preexec_ret_value="$?";
    __bp_set_ret_value "$preexec_ret_value" "$__bp_last_argument_prev_command"
    if [[ -z "${iterm2_ran_preexec:-}" ]]; then
        __iterm2_preexec "";
    iterm2_ran_preexec="";
__iterm2_preexec ()
    iterm2_ran_preexec="yes";

I see a lot of "iterm2" in there (I'm on a Mac and using iTerm2.app). In fact, when I try to reproduce using Terminal.app, I am unable to reproduce the unexpected output. Excellent sleuthing with declare -f, thank you!
It seems that __bp_preexec_invoke_exec is part of https://github.com/rcaloras/bash-preexec/blob/master/bash-preexec.sh, and that there is a bug in that script. The project adds 'preexec' functionality to bash by adding a DEBUG trap; I did not test it, but I can imagine that it might not work properly, in the way you are seeing. Check if it is installed in your environment; you can do so with declare -f. With newer bash you can use PS0 instead of that project, which would probably do the same without the problems you see.
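A minimal sketch of the PS0 approach suggested above (bash 4.4 or newer):

# PS0 is expanded after a command is read and before it is executed
PS0='command started at \t\n'

Putting that in ~/.bashrc prints a timestamp before each command runs, without any DEBUG-trap machinery.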
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206424/" ] }
413,179
When the status code is useless, is there any way to construct a pipeline based on output from stdout? I'd prefer the answer not address the use case but the question, in the scope of shell scripting. What I'm trying to do is find the most specific package available in the repository by guessing the name based on country and language codes. Take for instance these:

PACKAGE1=hunspell-en-zz
PACKAGE2=hunspell-en

The first guess is more appropriate, but it may not exist. In that case, I want to return hunspell-en ($PACKAGE2) because the first option, hunspell-en-zz ($PACKAGE1), does not exist.

Pipelines of apt-cache: The command apt-cache returns success (which is defined by the shell as exit code zero) whenever the command is able to run. From the docs of apt-cache: "apt-cache returns zero on normal operation, decimal 100 on error." That makes using the command in a pipeline more difficult. Normally, I expect the package-search equivalent of a 404 to result in an error (as would happen with curl or wget). I want to search to see if a package exists, and if not, fall back to another package if it exists. This returns nothing, as the first command returns success (so the rhs of the || never runs):

apt-cache search hunspell-en-zz || apt-cache search hunspell-en

apt-cache search with two arguments: This returns nothing, as apt-cache ANDs its arguments:

apt-cache search hunspell-en-zz hunspell-en

From the docs of apt-cache: "Separate arguments can be used to specify multiple search patterns that are and'ed together." So, as one of those arguments clearly doesn't exist, this returns nothing.

The question: What is the shell idiom to handle conventions like those found in apt-cache, where the return code is useless for the task and success is determined only by the presence of output on STDOUT? This is similar to "make find fail when nothing was found", both stemming from the same problem. The chosen answer there mentions find -z, which sadly isn't an applicable solution here and is use-case specific. There is no mention of an idiom for constructing a pipeline without using null-termination (not an option with apt-cache).
Create a function that takes a command and returns true iff it has some output.

r() { local x=$("$@"); [ -n "$x" ] && echo "$x"; }
( ( r echo -n ) || echo 'nada' ) | cat     # Prints 'nada'
( ( r echo -n foo ) || echo 'nada' ) | cat # Prints 'foo'

So for this use case it'll work like this:

r apt-cache search hunspell-en-zz || r apt-cache search hunspell-en
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
413,182
I am learning the umask command, and I have a few questions: 1. For files and directories the default permissions are 666 and 777; how can I configure the default permissions, and specifically, where is the configuration file for default permissions? 2. The umask command reduces permissions; how do I ADD permissions?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258037/" ] }
413,204
Say I log into a shell on a Unix system and begin tapping away commands. I initially begin in my user's home directory ~. From there I might cd down to the directory Documents. The command to change the working directory is intuitively very simple to understand: the parent node has a list of child nodes that it can access, and presumably it uses an (optimised) variant of a search to locate the existence of a child node with the name the user entered, and the working directory is then "altered" to match this — correct me if I'm wrong there. It may even be simpler: the shell "naively" attempts to access the directory exactly as per the user's wishes, and when the file system returns some type of error, the shell displays a response accordingly. What I am interested in, however, is how the same process works when I navigate up a directory, i.e. to a parent, or a parent's parent. Given my unknown, presumably "blind" location of Documents, one of possibly many directories in the entire file system tree with that name, how does Unix determine where I should be placed next? Does it make a reference to pwd and examine that? If yes, how does pwd track the current navigational state?
The other answers are oversimplifications, each presenting only parts of the story, and are wrong on a couple of points. There are two ways in which the working directory is tracked:

For every process, in the kernel-space data structure that represents that process, the kernel stores two vnode references to the vnodes of the working directory and the root directory for that process. The former reference is set by the chdir() and fchdir() system calls, the latter by chroot(). One can see them indirectly in /proc on Linux operating systems or via the fstat command on FreeBSD and the like:

% fstat -p $$ | head -n 5
USER  CMD   PID  FD   MOUNT            INUM   MODE        SZ|DV  R/W
JdeBP zsh 92648 text /                 24958  -r-xr-xr-x  702360 r
JdeBP zsh 92648 ctty /dev              148    crw--w----  pts/4  rw
JdeBP zsh 92648 wd   /usr/home/JdeBP   4      drwxr-xr-x  124    r
JdeBP zsh 92648 root /                 4      drwxr-xr-x  35     r
%

When pathname resolution operates, it begins at one or the other of those referenced vnodes, according to whether the path is relative or absolute. (There is a family of …at() system calls that allow pathname resolution to begin at the vnode referenced by an open (directory) file descriptor as a third option.) In microkernel Unices the data structure is in application space, but the principle of holding open references to these directories remains the same.

Internally, within shells such as the Z, Korn, Bourne Again, C, and Almquist shells, the shell additionally keeps track of the working directory using string manipulation of an internal string variable. It does this whenever it has cause to call chdir(). If one changes to a relative pathname, it manipulates the string to append that name. If one changes to an absolute pathname, it replaces the string with the new name. In both cases, it adjusts the string to remove . and .. components and to chase down symbolic links, replacing them with their linked-to names. (Here is the Z shell's code for that, for example.) The name in the internal string variable is tracked by a shell variable named PWD (or cwd in the C shells). This is conventionally exported as an environment variable (named PWD) to programs spawned by the shell.

These two methods of tracking things are revealed by the -P and -L options to the cd and pwd shell built-in commands, and by the differences between the shells' built-in pwd commands and both the /bin/pwd command and the built-in pwd commands of things like (amongst others) VIM and NeoVIM.

% mkdir a ; ln -s a b
% (cd b; pwd; /bin/pwd; printenv PWD)
/usr/home/JdeBP/b
/usr/home/JdeBP/a
/usr/home/JdeBP/b
% (cd b; pwd -P; /bin/pwd -P)
/usr/home/JdeBP/a
/usr/home/JdeBP/a
% (cd b; pwd -L; /bin/pwd -L)
/usr/home/JdeBP/b
/usr/home/JdeBP/b
% (cd -P b; pwd; /bin/pwd; printenv PWD)
/usr/home/JdeBP/a
/usr/home/JdeBP/a
/usr/home/JdeBP/a
% (cd b; PWD=/hello/there /bin/pwd -L)
/usr/home/JdeBP/a
%

As you can see: obtaining the "logical" working directory is a matter of looking at the PWD shell variable (or environment variable if one is not the shell program); whereas obtaining the "physical" working directory is a matter of calling the getcwd() library function.

The operation of the /bin/pwd program when the -L option is used is somewhat subtle. It cannot trust the value of the PWD environment variable that it has inherited. After all, it need not have been invoked by a shell, and intervening programs may not have implemented the shell's mechanism of making the PWD environment variable always track the name of the working directory. Or someone may do what I did just there. So what it does is (as the POSIX standard says) check that the name given in PWD yields the same thing as the name ., as can be seen with a system call trace:

% ln -s a c
% (cd b; truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/home/JdeBP/b",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
/usr/home/JdeBP/b
% (cd b; PWD=/usr/local/etc truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/local/etc",{ mode=drwxr-xr-x ,inode=14835,size=158,blksize=10240 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
__getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0)
/usr/home/JdeBP/a
% (cd b; PWD=/hello/there truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/hello/there",0x7fffffffe730) ERR#2 'No such file or directory'
__getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0)
/usr/home/JdeBP/a
% (cd b; PWD=/usr/home/JdeBP/c truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd')
stat("/usr/home/JdeBP/c",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0)
/usr/home/JdeBP/c
%

As you can see: it only calls getcwd() if it detects a mismatch; and it can be fooled by setting PWD to a string that does indeed name the same directory, but by a different route.

The getcwd() library function is a subject in its own right. But to précis: Originally it was purely a library function, that built up a pathname from the working directory back up to the root by repeatedly trying to look up the working directory in the .. directory. It stopped when it reached a loop where .. was the same as its working directory, or when there was an error trying to open the next .. up. This would be a lot of system calls under the covers. Nowadays the situation is slightly more complex. On FreeBSD, for example (this being true for other operating systems as well), it is a true system call, as you can see in the system call trace given earlier. All of the traversal from the working directory vnode up to the root is done in a single system call, which takes advantage of things like kernel-mode code's direct access to the directory entry cache to do the pathname component lookups much more efficiently. However, note that even on FreeBSD and those other operating systems the kernel does not keep track of the working directory with a string.

Navigating to .. is again a subject in its own right. Another précis: Although directories conventionally (albeit, as already alluded to, this is not required) contain an actual .. in the directory data structure on disc, the kernel tracks the parent directory of each directory vnode itself and can thus navigate to the .. vnode of any working directory. This is somewhat complicated by the mountpoint and changed-root mechanisms, which are beyond the scope of this answer.

Aside: Windows NT in fact does a similar thing. There is a single working directory per process, set by the SetCurrentDirectory() API call and tracked per process by the kernel via an (internal) open file handle to that directory; and there is a set of environment variables that Win32 programs (not just the command interpreters, but all Win32 programs) use to track the names of multiple working directories (one per drive), appending to or overwriting them whenever they change directory. Conventionally, unlike the case with Unix and Linux operating systems, Win32 programs do not display these environment variables to users. One can sometimes see them in Unix-like subsystems running on Windows NT, though, as well as by using the command interpreters' SET commands in a particular way.

Further reading:
"pwd". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016.
"Pathname Resolution". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016.
https://askubuntu.com/a/636001/43344
How are files opened in unix?
What is inode for, in FreeBSD or Solaris
Strange environment variable !::=::\ in Cygwin
Why does CDPATH not work as documented in the manuals?
How can I set zsh to use physical paths?
Going into a directory linked by a link
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/413204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267637/" ] }
413,229
What does this IP address notation mean: [::]:[4443]?
[::] indicates all IPv6 addresses, and 4443 is a port number. So if a service is bound to [::]:4443, it will be listening on all IPv6 addresses available on your system. It's similar to listening on 0.0.0.0 for IPv4. Some services bind to all IPs available (including IPv4) when binding to [::], though strictly speaking [::] indicates IPv6 only. As per the IPv6 writing convention, one consecutive block of 0's in an IPv6 address can be replaced with ::. Considering IPv6 is 128 bits, the address :: is 0000:0000:0000:0000:0000:0000:0000:0000 in expanded hexadecimal form.
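You can check what a service is actually bound to with ss (illustrative output):

$ ss -tln | grep 4443
LISTEN  0  128  [::]:4443  [::]:*

On Linux, whether a [::] listener also accepts IPv4 connections (as IPv4-mapped addresses) is controlled by the net.ipv6.bindv6only sysctl and the per-socket IPV6_V6ONLY option.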
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262519/" ] }
413,299
Basically, I want to delete all subfolders, but leave all the files intact. For example:

Folder1/
  randomStuff/
    nope.txt
  installer.jar
  build.sh

I want randomStuff and its files deleted, but keep installer.jar and build.sh intact.
Use the fact that a filename that ends in a slash always refers to a directory, and never a regular file. The command rm -r -- ./*/ will accomplish what you describe.
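Note that ./*/ does not match hidden subdirectories (names starting with a dot). A sketch that also removes those, using find instead of globbing:

find . -mindepth 1 -maxdepth 1 -type d -exec rm -r -- {} +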
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267731/" ] }
413,312
I would like to convert hostnames to hex IP addresses, and vice versa. I have installed syslinux-utils on Debian Stretch, which provides gethostip:

gethostip -x google.com
D83ACD2E

How can I turn D83ACD2E back into a hostname? In the older Debian Wheezy, I could use the commands getaddrinfo and getnameinfo:

# getaddrinfo google.com
D83ACD4E
# getnameinfo D83ACD4E
mil04s25-in-f14.1e100.net

I was unable to find these tools in Debian Stretch. Were they replaced by others?
You could hexify D83ACD2E, pack it into a (network byte order!) 32-bit integer, then print the (unsigned!) character components of that integer joined by dots. (This is also possible, if somewhat more verbose, in assembly.)

$ perl -e 'printf "%v*d\n", ".", pack "N", hex shift' D83ACD2E
216.58.205.46
$

With fewer complications, the decimal flag to gethostip gives that value directly, which can then be fed to host or nslookup or getent hosts:

$ gethostip -d google.com
172.217.3.206
$ host `gethostip -d google.com`
206.3.217.172.in-addr.arpa domain name pointer sea15s12-in-f206.1e100.net.
206.3.217.172.in-addr.arpa domain name pointer sea15s12-in-f14.1e100.net.
$ getent hosts `gethostip -d google.com`
172.217.3.206 sea15s12-in-f206.1e100.net
$

That's the DNS PTR record associated with the given IP address, which may or may not be set, or may or may not be the hostname you are looking for. Or, if you search around with apt-file:

$ sudo apt-file search getaddrinfo | grep 'getaddrinfo$'
gnulib: /usr/share/gnulib/modules/getaddrinfo
libruli-bin: /usr/bin/ruli-getaddrinfo
libsocket-getaddrinfo-perl: /usr/bin/socket_getaddrinfo
$ sudo apt-file search getnameinfo | grep 'getnameinfo$'
libsocket-getaddrinfo-perl: /usr/bin/socket_getnameinfo
$ sudo apt-get install libsocket-getaddrinfo-perl
...

but that version does not appear to support your notation:

$ socket_getnameinfo D83ACD4E
Unrecognised address or port format - Name or service not known
$

but does if the conventional 0x hex prefix is used:

$ socket_getnameinfo 0xD83ACD4E
Resolved address '0xD83ACD4E'
    mil04s25-in-f78.1e100.net
$

(According to the man page Debian did rename the program, which I now recall LeoNerd mentioning on IRC a while ago...) If you're dead set on accepting D83ACD4E this can be done with the above hex to numify that value, packing it, and punching that blindly through Socket module functions. But this really should be a script with error checking, input validation, tests, etc.

$ perl -MSocket=:addrinfo,pack_sockaddr_in \
    -E '($e,$h)=getnameinfo pack_sockaddr_in(0, pack("N", hex shift));' \
    -E 'say $h' D83ACD2E
mil04s24-in-f46.1e100.net
$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
413,349
What is the meaning of the following location block in Nginx?

location ~ /\.ht {
    deny all;
}

I ask since I have a small WordPress site; I removed this block from its configuration and restarted the server, but the site seemingly kept working fine.
location ~ /\.ht {
    deny all;
}

This directive tells the webserver to deny all incoming requests whose URI contains a path component beginning with .ht. The tilde ~ tells nginx to treat the location as a regular expression, and since the pattern is unanchored, /\.ht matches anywhere in the URI, not only at the root. Thus, files like .htaccess, .htpasswd, etc., will not be served. Note: the backslash (\) before the dot just escapes the dot (the one that comes before htaccess, htpasswd, etc.).
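A quick way to verify the rule is active (illustrative; replace example.com with your own server):

$ curl -s -o /dev/null -w '%{http_code}\n' http://example.com/.htaccess
403

With the block in place nginx answers 403 Forbidden; without it you would typically get 404, or the file contents if such a file exists and is served from that path.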
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
413,370
For the last few weeks, there has been weird activity on my Ubuntu test server. Please check the below screenshot from htop. Every day this weird service (which seems like a cryptocurrency mining service) is running and taking 100% of the CPU. My server is only accessible through an SSH key, and password login has been disabled. I have tried to find any file with this name, but couldn't find any. Can you please help me with the below issues: 1. How do I find the process location from the process ID? 2. How do I completely remove this? 3. Any idea how this may have gotten onto my server? The server mainly runs test versions of a few Django deployments.
As explained by other answers, it's malware that uses your computer to mine cryptocoins. The good news is that it's unlikely to be doing anything other than using your CPU and electricity. Here is a bit more information, and what you can do to fight back once you've got rid of it. The malware is mining an altcoin called Monero for one of the largest Monero pools, crypto-pool.fr. That pool is legitimate, and they are unlikely to be the source of the malware; that's not how they make money. If you want to annoy whoever wrote that malware, you could contact the administrator of the pool (there is an email on the support page of their site). They don't like botnets, so if you report the address used by the malware (the long string that starts with 42Hr...), they will probably decide to suspend payments to that address, which will make the life of the hacker who wrote that piece of sh.. a bit more difficult. This may help too: How can I kill minerd malware on an AWS EC2 instance? (compromised server)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10207/" ] }
413,449
Simple question. Does the bash shell have any support for using pointers when writing a shell script? I am familiar with expansion notation, ${var[@]} when iterating over the array $var , but it is not clear this is utilizing pointers to iterate over the array indices. Does bash provide access to memory addresses like other languages? If bash does not support using pointers, what other shells do?
A pointer (to a location of memory) is not really a useful concept in anything higher-level than C, be it something like Python or the shell. References to objects are of course useful in high-level languages, perhaps even necessary for building complex data structures. But in most cases thinking in terms of memory addresses is too low-level to be very useful.

In Bash (and other shells), you can get the values of array elements with the ${array[index]} notation, assign them with array[index]=... and get the number of elements in the array with ${#array[@]}. The expression inside the brackets is an arithmetic expression. As a made-up example, we could add a constant prefix to all array members:

for ((i=0 ; i < ${#array[@]} ; i++ )) ; do
    array[i]="foo-${array[i]}"
done

(If we only cared about the values, and not the indexes, just for x in "${array[@]}" ; do... would be fine.)

With associative or sparse arrays, a numerical loop doesn't make much sense; instead we'd need to fetch the array keys/indexes with ${!array[@]}. E.g.

declare -A assoc=([foo]="123" [bar]="456")
for i in "${!assoc[@]}" ; do
    echo "${assoc[$i]}"
done

In addition to that, Bash has two ways to point indirectly to another variable: indirect expansion, using the ${!var} syntax, which uses the value of the variable whose name is in var; and namerefs, which need to be created with the declare builtin (or the ksh-compatible synonym, typeset). declare -n ref=var makes ref a reference to the variable var. Namerefs also support indexing, in that if we have arr=(a b c); declare -n ref=arr; then ${ref[1]} will expand to b. Using ${!p[1]} would instead take p as an array, and refer to the variable named by its second element.

In Bash, namerefs are literally that, references by name, and using a nameref from inside a function will use the local value of the named variable. This will print "local value of var":

#!/bin/bash
fun() {
    local var="local value of var"
    echo "$ref"
}
var="global var"
declare -n ref=var
fun

BashFAQ has a longer article on indirection, too.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/413449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42920/" ] }
413,484
Both Apt and DNF/Yum, the two most popular package management schemes for Linux distributions to my knowledge, only support system-wide installation of packages: Files owned by root, binaries go in (/usr)?/s?bin , settings go in /etc and so on. However, on systems in which there are multiple individual users who don't have root privileges, it very often - if not always - happens that a user wants to install some apps or utilities which are available for that distribution; and s/he is fine with an installation that's personal and not common to many/all users. Now, it does not seem a far-fetched or even incredibly complicated idea for packages to be adaptable, at installation time, with a different root directory or set of root directories, so that users can do this. Nor is it much of an issue to manage a user-specific registry of installed packages (whether or not an individual user has his/her own package DB). So what's the reason that this functionality has not been added to those common package management systems/schemes? Note: This is an informative question, i.e. I'm asking about what people know about the past , not what people think about this feature.
While common package managers don't address this use case, there are several projects that do:

- Zero Install
- Linuxbrew - a port of Homebrew for Linux
- Gentoo Prefix
- Nix
- pkgsrc - can be used to install packages as an unprivileged user, according to somebody's blog post

My best guess as to why traditional package managers don't address this use case is that it greatly complicates the package building and installation process: package maintainers would need to be very careful to ensure that their packages correctly support a dynamic installation directory. In fact, many common package formats such as RPM support a dynamic installation directory, but hardly any maintainers take advantage of this feature when building packages, due to the high additional overhead.
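As an illustration of the Nix option (a sketch, assuming a single-user Nix install with the default nixpkgs channel):

$ nix-env -iA nixpkgs.hello      # installs into the per-user profile
$ ~/.nix-profile/bin/hello
Hello, world!

Everything lands under the user's own profile; no root privileges are required.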
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
413,515
I would like to know a command or function in Linux bash to get the timestamp (and maybe format its output) of the last modified file inside a directory. Let's say I have /path/mydir and this directory has a big bunch of files inside. Given its path, I want to output the timestamp of the most recently modified file. I guess the procedure could be: recursively go over each file, check its timestamp, and update a variable whenever a more recent one is found. Edit: Sorry for the confusion, but I meant the epoch timestamp :)
One option: use GNU find to recurse through all files, print each timestamp with its filepath, and sort by date:

find /path/mydir -type f -printf "%T+\t%p\n" | sort | tail -1

For just the epoch timestamp:

find /path/mydir -type f -printf "%T@\n" | sort | tail -1
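Two small refinements (sketches): sort numerically when comparing epoch values, and format the result with GNU date:

find /path/mydir -type f -printf '%T@\n' | sort -n | tail -1

ts=$(find /path/mydir -type f -printf '%T@\n' | sort -n | tail -1)
date -d "@${ts%.*}" '+%Y-%m-%d %H:%M:%S'   # strip the fractional part, then format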
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
413,527
I've just upgraded my Debian system from Debian stretch to Debian sid, which is not the stable version. After I finished upgrading and tried to reboot, my system hangs at "Started Update UTMP about System Runlevel Changes". I searched the internet for a solution; most results were about video card problems, so I've tried every such solution, but none of them is working. I also tried to install the NVIDIA driver manually using the .run file, but I get an error saying failed to run /usr/sbin/dkms build. I am using an NVIDIA G 210 VGA card.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217078/" ] }
413,532
I have read that $@ is an array that holds the positional parameters, so I tried to output an element of the $@ array:

echo ${@[1]}

But bash gives me this error:

test.sh: line 1: ${@[1]}: bad substitution
$@ is a "special parameter", not an array; therefore, you cannot access it as an array. You can access the parameters directly, using their position: ${1} ... ${n}.

$ set -- a b c d e f g h i j k l m n
$ echo "$#"
14
$ echo "${10}"
j

Because I got curious about the brace behavior for parameters 10+, I ran a test against various shells:

for shell in ash bash dash fish ksh mksh posh rc sash yash zsh
do
    printf "The %s shell outputs: %s\n" "$shell" "$($shell -c 'set -- a b c d e f g h i j k l m n; echo $10')"
done

With these results:

The ash shell outputs: j
The bash shell outputs: a0
The dash shell outputs: j
The fish shell outputs:
The ksh shell outputs: a0
The mksh shell outputs: a0
The posh shell outputs: a0
rc: cannot find `set'
The rc shell outputs:
The sash shell outputs: j
The yash shell outputs: a0
The zsh shell outputs: j

The curly-brace behavior for shell parameters is explained in the Shell Command Language section on Shell Parameter Expansion: "The parameter name or symbol can be enclosed in braces, which are optional except for positional parameters with more than one digit ..." The $@ special parameter itself is described on the same page in the Special Parameters section.
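If the goal is to process every positional parameter rather than index one of them, quoting $@ is the safe idiom:

for arg in "$@"; do
    printf '%s\n' "$arg"     # each parameter preserved as one word
done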
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267935/" ] }
413,542
I want to create a key binding for the common task: open a new terminal window and open the program ranger in it. The obvious command would be something like this:

urxvt -e ranger

The important things work right out of the box. But in ranger I want to use a different program called fzf, and this program is not found; the error message is zsh:1: command not found: fzf. Same result with

urxvt -e zsh -c ranger

When I already have an open terminal and call ranger in it, then fzf can be called without any problems. I took a look, and the path to the fzf binary gets added to $PATH in my .zshrc. So my assumption is that this never sources my .zshrc, and it never gets added to the path. There is an obvious fix for this (call fzf inside ranger using the full path ~/.fzf/bin/fzf), but this problem has annoyed me quite a few times already, and I want a nice solution. How can I open a new terminal that sources .zshrc and opens the program ranger?

One more observation that I don't understand: I created a script myranger.sh:

#!/usr/bin/zsh
source ~/.zshrc
ranger

and created the new terminal with:

urxvt -e myranger.sh

The terminal with ranger opens, but fzf is still not in $PATH. What did I miss here?

Btw, this is not zsh- or urxvt-specific; I also tested this with bash and/or gnome-terminal.
urxvt -e zsh -c ranger is pretty much equivalent to urxvt -e ranger . You're telling urxvt to run zsh, and zsh to run ranger, and that's it. urxvt -e zsh -c ranger does not load .zshrc : zsh only loads it when starting an interactive shell, i.e. a shell that reads user commands, not when starting a shell that runs a script (whether this script is in a file, or passed on the command line with -c ). You can load .zshrc explicitly ( urxvt -e zsh -c '. ~/.zshrc; ranger' , or use a wrapper script as you did). This isn't a good idea though, because .zshrc is for interactive settings of zsh — key bindings, aliases, etc. Environment variable settings (e.g. PATH) apply to all programs, so they should be done at login time, usually in ~/.profile . Move your PATH setting from .zshrc to .profile where it belongs.
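A minimal sketch of the move (assuming the fzf install path from the question):

# in ~/.profile (read at login), instead of ~/.zshrc:
export PATH="$HOME/.fzf/bin:$PATH"

For zsh specifically, ~/.zshenv is read by every zsh invocation, including non-interactive ones like zsh -c ranger, so it is another workable place for PATH.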
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196289/" ] }
413,545
Out of curiosity I'm reading some tutorials about transparent TOR proxies, as it's quite an interesting topic from a networking standpoint. As opposed to VPN gateways, which just use tun/tap interfaces and are totally clear to me, a TOR proxy uses a single port. All tutorials repeat the magic line:

iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040

where eth0 is the input (LAN) interface and 9040 is some TOR port. The thing is, I completely don't get why such a thing makes sense at all from a networking standpoint. According to my understanding of redirect/dst-nat chains, and how it seems to work in physical routers, the dst-nat chain takes dst-port and dst-addr BEFORE the routing decision is taken and changes them to something else. So for example:

before dst-nat: 192.168.1.2:46364 -> 88.88.88.88:80
after dst-nat:  192.168.1.2:46364 -> 99.99.99.99:8080

And 99.99.99.99:8080 is what further chains in the IP packet flow lane see (for example the filter table), and this is how the packet looks from then on, for example after leaving the device. Now, many people around the internet (including on this Stack Exchange) have claimed that redirect is basically the same as dst-nat with dst-addr set to the local address of the interface. In that light, the rule

iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040

clearly doesn't make sense. If that were how it works, then TOR would get all packets with destination 127.0.0.1:9040. For typical applications where the app takes a packet and responds to it somehow (for example web servers) it totally makes sense, because such a server process is the final destination of the packet anyway, so it's okay that the destination address is localhost. But a TOR router is, well... a router, so it has to know the original destination of the packet. Am I missing something? Does DNAT not affect what local applications receive? Or is this specific behavior of the REDIRECT directive?
Take a look at this answer: How does a transparent SOCKS proxy know which destination IP to use? Quotation: iptables overwrites the original destination address but it remembers the old one. The application code can then fetch it by asking for a special socket option, SO_ORIGINAL_DST.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78925/" ] }
413,552
Context I have a set-top-box Octagon SF4008 , which is designed to run OpenEmbedded -based Linux distributions. I currently have openATV installed on it. Typically, users want to connect the video output of such set-top-boxes to a display and then they want to watch the channels via a front-end GUI like Enigma2 . My use case is different. I would like to stream the channels over the computer network. I could use Enigma2 for that, but I consider Tvheadend to be more user-friendly and more feature-rich. Question I would like to run Tvheadend directly on the set-top-box and I am looking for a simple way to install it there. Options The package manager used by openATV is opkg . The preconfigured repositories contain many Enigma2-specific packages, but only very few generic ones like perl , python , vim and similar. There is no Tvheadend package in there, nor in any other opkg-compatible repository for the compatible architecture (armv7l/armhf) that I am aware of. The preconfigured repositories contain no build tools like make , no compilers and no development versions of the basic libraries. So, compiling Tvheadend directly on the set-top-box would require quite a complex setup. It is definitely possible and perhaps easier to cross-compile it elsewhere. However, I would prefer to use precompiled binaries. I know that Tvheadend provides APT repositories with Debian packages for the compatible armhf architecture. I also found out that opkg can handle installing .deb files . However, because of the runtime dependencies, foreign packages would only work properly when all of their native dependencies are installed as well. Perhaps I could install Debian on the set-top-box directly. There is a flashing procedure which includes rewriting the kernel image and then extracting an archive of the root file system. I am not familiar with the bootloader and I do not know whether or how it needs to be modified in order to properly boot a standard Linux kernel. Moreover, the custom hardware drivers may at first need to be extracted from the currently running Linux kernel. Problem The above mentioned options may all work, but I consider them to be unnecessarily complex. I believe that there should be a simpler way. Perhaps the already mentioned options can be simplified. Or maybe there is a much simpler way of which I am just not aware.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21000/" ] }
413,576
I have been told that spaces are important in bash and other shell scripts, and that I should not change the existence of spaces unless I know what I am doing. By "changing the existence" I mean either inserting a space between two non-space characters or removing a space between two non-space characters, e.g. changing var="$val" to var ="$val" or vice versa. I want to ask: are there any cases in which using a single space versus using multiple consecutive spaces in a shell script makes a difference? (Of course, inserting/deleting a space inside quotes makes a difference, like changing echo "a b" to echo "a  b" or vice versa. I am looking for examples other than this trivial one.) I have come across this question, but that one is about adding and removing spaces between two non-space characters, for which I know many examples where it makes a difference. Any help would be appreciated. Include more varieties of shells if possible.
Outside of quotes, the shell uses whitespace (spaces, tabs, newlines, carriage returns, etc.) as a word/token separator. That means:

- Things not separated by whitespace are considered to be one "word".
- Things separated by one or more whitespace characters are considered to be two (or more) words.

The actual number of whitespace characters between each "thing" doesn't matter, as long as there is at least one.
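A quick demonstration (a sketch using a throwaway function to count arguments):

args() { echo "$#"; }
args foo    bar        # several spaces between words: still 2 arguments
args "foo    bar"      # quoted: 1 argument, inner spaces preserved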
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/413576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
413,614
I have a collection of text files containing more data than I need. Each file's first line contains a comma-separated string that looks like this:

stop_id,stop_code,stop_name,stop_desc,stop_lat,stop_lon,location_type,parent_station,zone_id

Below that line is all the data. I need to extract a subset of that data into a new text file so I can work with the subset (I don't need all the data; it's too much). I'm using this command to extract the first line:

sed -n '1p' source.txt > destination.txt

I'm also using this command to extract the specific lines I need:

grep "string" source.txt > destination.txt

The challenge is that when I run the two commands in the same script (pretty much as-is, separated by a newline or &&), the grep output overwrites the sed output. How can I run both in sequence and keep the combined output of both? I noticed a question that seems similar; it involves using a more complex grep command to locate one line followed by a range of lines. That won't work here, because the first line of each of the files I need to extract data from is different. Ideally, I want to write a function that I can run against each of the files I need to work with, but first I need to chain these commands and combine their outputs.
Just change the grep output to append:

grep "string" source.txt >> destination.txt
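An alternative sketch that avoids appending: group the two commands and redirect their combined output once:

{ sed -n '1p' source.txt; grep "string" source.txt; } > destination.txt

This also wraps naturally into the per-file function mentioned in the question.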
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262564/" ] }
413,664
I have a huge (70GB), one-line text file and I want to replace a string (token) in it. I want to replace the token <unk> with another dummy token <raw_unk> (glove issue). I tried sed:

sed 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new

but the output file corpus.txt.new has zero bytes! I also tried using perl:

perl -pe 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new

but I got an out-of-memory error. For smaller files, both of the above commands work. How can I replace a string in such a file? This is a related question, but none of the answers worked for me. Edit: What about splitting the file into chunks of 10GB (or whatever) each, applying sed to each one of them, and then merging them with cat? Does that make sense? Is there a more elegant solution?
The usual text processing tools are not designed to handle lines that don't fit in RAM. They tend to work by reading one record (one line), manipulating it, and outputting the result, then proceeding to the next record (line). If there's an ASCII character that appears frequently in the file and doesn't appear in <unk> or <raw_unk>, then you can use that as the record separator. Since most tools don't allow custom record separators, swap between that character and newlines. tr processes bytes, not lines, so it doesn't care about any record size. Supposing that ; works:

<corpus.txt tr '\n;' ';\n' |
sed 's/<unk>/<raw_unk>/g' |
tr '\n;' ';\n' >corpus.txt.new

You could also anchor on the first character of the text you're searching for, assuming that it isn't repeated in the search text and it appears frequently enough. If the file may start with unk>, change the sed command to sed '2,$ s/… to avoid a spurious match.

<corpus.txt tr '\n<' '<\n' |
sed 's/^unk>/raw_unk>/g' |
tr '\n<' '<\n' >corpus.txt.new

Alternatively, use the last character.

<corpus.txt tr '\n>' '>\n' |
sed 's/<unk$/<raw_unk/g' |
tr '\n>' '>\n' >corpus.txt.new

Note that this technique assumes that sed operates seamlessly on a file that doesn't end with a newline, i.e. that it processes the last partial line without truncating it and without appending a final newline. It works with GNU sed. If you can pick the last character of the file as the record separator, you'll avoid any portability trouble.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/413664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47587/" ] }
413,671
How can I force pcmanfm to refresh its thumbnails? I have a directory of photos in JPG format (taken with an iPhone). I have rotated some of these using Ubuntu Image Viewer. When I rotate an image, the thumbnail does not update. How can I force it to update? I have tried deleting all thumbnails from ~/.cache/thumbnails and selecting "reload folder" in pcmanfm, but no joy. Any suggestions? Where are the thumbnails actually stored? Using pcmanfm 1.2.4 on Ubuntu 16.04.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/413671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268027/" ] }
413,681
I am trying to understand the output of this command (the variable abc is not set): echo $abc? . The output I see is a single | character.
Because ? is a special wildcard character for the shell. $abc is not present, so it's expanded to an empty string, and ? is replaced by any one-character file or directory existing in the current directory. So there probably is a file/directory named | in your current directory. On my system, the output is different:

$ echo $abc?
_ 1

If there's no one-character file/directory, the ? comes out unexpanded. And, indeed, there are directories _ and 1 here.
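Two ways to see the difference (illustrative, run in a directory with no one-character names):

$ echo "$abc?"        # quoting stops both the expansion of $abc and the globbing of ?
?
$ shopt -s nullglob   # bash: make non-matching globs expand to nothing
$ echo $abc?

(the second echo prints an empty line: with nullglob, the unmatched glob disappears entirely)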
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194688/" ] }
413,730
I'm extracting a subset of each file via grep and then concatenating the resulting files with cat. However, I'm still a bit confused as to how I should use the for statement, e.g. for ((i=1;i<23;i+=1)). Given my file file1.txt, I would like to grep sample1, and so on, as follows:

grep -w '^sample1' file1.txt > sample1_file.txt
grep -w '^sample2' file2.txt > sample2_file.txt
grep -w '^sample3' file3.txt > sample3_file.txt
...
grep -w '^sample22' file22.txt > sample22_file.txt

And then concatenate these:

cat sample1_file.txt sample2_file.txt sample3_file.txt ... sample22_file.txt > final_output.txt
Try:

for i in {1..22}
do
    grep -w "^sample$i" "file$i.txt"
done >final_output.txt

Notes:

- {1..22} runs through all the integers from 1 to 22. For people not familiar with C, it is probably more intuitive (but less flexible) than ((i=1;i<23;i+=1)).
- It is important that the expression ^sample$i be inside double-quotes rather than single-quotes so that the shell will expand $i.
- If all you want is final_output.txt, there is no need to create the intermediate files.
- Notice that it is efficient to place the redirection to final_output.txt after the done statement: in this way, the shell needs to open and close this file only once.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268073/" ] }
413,794
If I do something like this: IFS=,x=hello,hi,worldecho $x Then three arguments will be extracted (which are hello and hi and world ), and these three arguments will be passed to echo . But when I do not use a variable: IFS=,echo hello,hi,world bye Then word splitting will happen using the space delimiter and not the comma delimiter, and so the two arguments generated and passed to echo will be hello,hi,world and bye . Is there a way to make word splitting work with a non-space delimiter when not using a variable?
No, word splitting happens only after expansions, not on stuff given directly on the command line (on modern shells, that is). The text in POSIX says: 2.6.5 Field Splitting After parameter expansion (Parameter Expansion), command substitution (Command Substitution), and arithmetic expansion (Arithmetic Expansion), the shell shall scan the results of expansions and substitutions that did not occur in double-quotes for field splitting and multiple fields can result. (emphasis mine) And Bash : The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting. I'm not sure that's much of a problem, since you could just replace the commas with spaces if the string is directly in the script. And if it comes from the outside, then splitting usually happens naturally, in a command substitution or when using read , etc. In the original Bourne shell, the behaviour was a bit different, @Stéphane Chazelas discussed this in an answer to another question a while back
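If the goal is just to split a literal string on commas, a common idiom is to let read do the splitting, with IFS set only for that one command. A small sketch:

$ IFS=, read -r a b c <<< 'hello,hi,world'
$ printf '%s\n' "$a" "$b" "$c"
hello
hi
world

Here IFS=, applies only to the read command, so the global IFS is left untouched. Note that <<< is a bash/ksh/zsh feature, not POSIX.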
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267935/" ] }
413,798
I downloaded Minix 3 from here and wrote it on a USB flash drive using Rufus (in Windows OS). When it boots it says: ->NETBSD MBR boot->Error No active partition I searched for "active partition" in Google and found 2 ways to solve this problem using a live Linux. One of them is this: ->sudo fdisk /dev/sdxy ->use the "a" option->and then "w" it The other way is using GParted in Ubuntu (in "Ubuntu live") and then: ->Right-click the Primary partition you wish to make Active and select Manage Flags.->In Manage Flags on ..., tick (to enable) the boot check box to make the partition Active. But none of these ways worked and I still have the problem. My laptop is an LGX13. (However, this does not make any difference, because I booted Minix on my other laptops and the problem was still there.) Has anyone else had this problem? How did you solve it? Is there any other way to activate a partition?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268128/" ] }
413,818
Let's say I have this text: My name is #1#! . I want to replace the #1# with something that depends on the contents between the # s, like:

if [ $thing_between_hash -eq 1 ]; then
    subs=John
else
    subs=Mary
fi

Then the output would be: My name is John! Can I do it with a single sed substitution? How?
With a sed that supports -r : sed -r -e 's/#1#/John/g; s/#[^#]+#/Mary/g' <<< 'My name is #1#, not #5#!' otherwise: sed -e 's/#1#/John/g; s/#[^#][^#]*#/Mary/g' <<< 'My name is #1#, not #5#!'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254959/" ] }
413,840
I have a LINUX machine (remote), and a MAC machine (local). Our system administrator set up an "SSH" method, whereby I can ssh from my MAC, to my LINUX machine, via this command on my MAC: ssh [email protected] -p 12345 When I do this, I am prompted to put in the password for my LINUX machine, and when I do, I have access, which is great. What I want to do now though, is be able to scp from my MAC machine, to my LINUX machine, so that I can transfer files over. How do I do that? I have googled around but I am not sure what to do. Thank you
To copy from REMOTE to LOCAL : scp -P 12345 user@server:/path/to/remote/file /path/to/local/file To copy from LOCAL to REMOTE : scp -P 12345 /path/to/local/file user@server:/path/to/remote/file Note: The switch to specify port for scp is -P instead of -p If you want to copy all files in a directory you can use wildcards like below: scp -P 12345 user@server:/path/to/remote/dir/* /path/to/local/dir/ or even scp -P 12345 user@server:/path/to/remote/dir/*.txt /path/to/local/dir/
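If you need to copy a whole directory tree rather than individual files, scp has a recursive flag. For example (paths are placeholders):

scp -r -P 12345 /path/to/local/dir user@server:/path/to/remote/

The -r option copies directories recursively, and -P again specifies the non-standard port.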
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/413840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54970/" ] }
413,844
I use the source command in my bash script in order to read/print the variables' values:

more linuxmachines_mount_point.txt
export linuxmachine01="sdb sdc sdf sdd sde sdg"
export linuxmachine02="sde sdd sdb sdf sdc"
export linuxmachine03="sdb sdd sdc sde sdf"
export linuxmachine06="sdb sde sdf sdd"

source linuxmachines_mount_point.txt
echo $linuxmachine01
sdb sdc sdf sdd sde sdg

What is the opposite of source in order to unset the variables? Expected results:

echo $linuxmachine01
< no output >
Using a subshell (Recommended) Run the source command in a subshell:

(
source linuxmachines_mount_point.txt
cmd1 $linuxmachine02
other_commands_using_variables
etc
)
echo $linuxmachine01    # Will return nothing

Subshells are defined by parens: (...) . Any shell variables set within the subshell are forgotten when the subshell ends. Using unset This unsets any variable exported by linuxmachines_mount_point.txt :

unset $(awk -F'[ =]+' '/^export/{print $2}' linuxmachines_mount_point.txt)

-F'[ =]+' tells awk to use any combination of spaces and equal signs as the field separator. /^export/{print $2} This tells awk to select lines that begin with export and then print the second field. unset $(...) This runs the command inside $(...) , captures its stdout, and unsets any variables named by its output.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/413844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
413,856
I installed the python-certbot-apache package per the instructions on certbot.eff.org but can't find any entry for the cron job it's supposed to set up. The Certbot packages on your system come with a cron job that will renew your certificates automatically before they expire. Since Let's Encrypt certificates last for 90 days, it's highly advisable to take advantage of this feature. From: https://certbot.eff.org/#debianjessie-apache Where do I find this cron job? I've tried 'crontab -l', both with and without sudo, with no luck. I understand how to run the cron job to renew the cert; my question is: where is the cron job that this package installed? Did it install?
In any Debian derivative, to list the files installed by a package you usually do dpkg -L. So in your case:

dpkg -L python-certbot-apache

This gives you the list of all files installed, and where. You can also request the list of files from packages.debian.org. From https://packages.debian.org/stretch/all/python-certbot-apache/filelist :

/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/PKG-INFO
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/dependency_links.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/entry_points.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/requires.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/top_level.txt
/usr/lib/python2.7/dist-packages/certbot_apache/__init__.py
/usr/lib/python2.7/dist-packages/certbot_apache/augeas_configurator.py
/usr/lib/python2.7/dist-packages/certbot_apache/augeas_lens/httpd.aug
/usr/lib/python2.7/dist-packages/certbot_apache/centos-options-ssl-apache.conf
/usr/lib/python2.7/dist-packages/certbot_apache/configurator.py
/usr/lib/python2.7/dist-packages/certbot_apache/constants.py
/usr/lib/python2.7/dist-packages/certbot_apache/display_ops.py
/usr/lib/python2.7/dist-packages/certbot_apache/obj.py
/usr/lib/python2.7/dist-packages/certbot_apache/options-ssl-apache.conf
/usr/lib/python2.7/dist-packages/certbot_apache/parser.py
/usr/lib/python2.7/dist-packages/certbot_apache/tls_sni_01.py
/usr/share/doc/python-certbot-apache/changelog.Debian.gz
/usr/share/doc/python-certbot-apache/copyright

It appears there is no cron job automatically added by this package. You also need to install the package certbot:

sudo apt-get install certbot

List of files:

/etc/cron.d/certbot
/lib/systemd/system/certbot.service
/lib/systemd/system/certbot.timer
/usr/bin/certbot
/usr/bin/letsencrypt
/usr/share/doc/certbot/README.rst.gz
/usr/share/doc/certbot/changelog.Debian.gz
/usr/share/doc/certbot/changelog.gz
/usr/share/doc/certbot/copyright
/usr/share/man/man1/certbot.1.gz
/usr/share/man/man1/letsencrypt.1.gz

So from this last package, the crontab file installed is /etc/cron.d/certbot, and you have /lib/systemd/system/certbot.service + /lib/systemd/system/certbot.timer for systemd.
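To confirm how renewal is actually scheduled on your system, you can inspect both mechanisms directly. For example (assuming the certbot package is installed):

cat /etc/cron.d/certbot
systemctl list-timers certbot.timer

On systemd systems the shipped cron job typically steps aside in favour of the systemd timer, so list-timers is usually the more relevant check there.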
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111977/" ] }
413,878
I've got a JSON object like so: { "SITE_DATA": { "URL": "example.com", "AUTHOR": "John Doe", "CREATED": "10/22/2017" }} I'm looking to iterate over this object using jq so I can set the key of each item as the variable name and the value as its value. Example: URL="example.com" AUTHOR="John Doe" CREATED="10/22/2017" What I've got so far iterates over the object but creates a string: constants=$(cat ${1} | jq '.SITE_DATA' | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]") Which outputs:

URL=example.com
AUTHOR=John Doe
CREATED=10/22/2017

I am looking to use these variables further down in the script: echo ${URL} But this echoes an empty output at the moment. I'm guessing I need an eval or something in there but can't seem to put my finger on it.
Your original version isn't going to be eval able because the author name has spaces in it - it would be interpreted as running a command Doe with the environment variable AUTHOR set to John . There's also virtually never a need to pipe jq to itself - the internal piping & dataflow can connect different filters together. All of this is only sensible if you completely trust the input data (e.g. it's generated by a tool you control). There are several possible problems otherwise detailed below, but let's assume the data itself is certain to be in the format you expect for the moment. You can make a much simpler version of your jq program: jq -r '.SITE_DATA | to_entries | .[] | .key + "=" + (.value | @sh)' which outputs: URL='example.com'AUTHOR='John Doe'CREATED='10/22/2017' There's no need for a map : .[] deals with taking each object in the array through the rest of the pipeline as a separate item , so everything after the last | is applied to each one separately. At the end, we just assemble a valid shell assignment string with ordinary + concatenation, including appropriate quotes & escaping around the value with @sh . All the pipes matter here - without them you get fairly unhelpful error messages, where parts of the program are evaluated in subtly different contexts. This string is eval able if you completely trust the input data and has the effect you want: eval "$(jq -r '.SITE_DATA | to_entries | .[] | .key + "=" + (.value | @sh)' < data.json)"echo "$AUTHOR" As ever when using eval , be careful that you trust the data you're getting, since if it's malicious or just in an unexpected format things could go very wrong. In particular, if the key contains shell metacharacters like $ or whitespace, this could create a running command. It could also overwrite, for example, the PATH environment variable unexpectedly. If you don't trust the data, either don't do this at all or filter the object to contain just the keys you want first: jq '.SITE_DATA | { AUTHOR, URL, CREATED } | ...' You could also have a problem in the case that the value is an array, so .value | tostring | @sh will be better - but this list of caveats may be a good reason not to do any of this in the first place. It's also possible to build up an associative array instead where both keys and values are quoted: eval "declare -A data=($(jq -r '.SITE_DATA | to_entries | .[] | @sh "[\(.key)]=\(.value)"' < test.json))" After this, ${data[CREATED]} contains the creation date, and so on, regardless of what the content of the keys or values are. This is the safest option, but doesn't result in top-level variables that could be exported. It may still produce a Bash syntax error when a value is an array, or a jq error if it is an object, but won't execute code or overwrite anything.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/413878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227429/" ] }
413,948
I am trying to have a local Unix domain socket, say ~/docker.sock , that proxies everything to a remote Unix domain socket running elsewhere, over SSH. OpenSSH supports this ( an example here). For instance, this command will proxy MySQL client connections on a remote server to my local instance: ssh -R/var/run/mysql.sock:/var/run/mysql.sock -R127.0.0.1:3306:/var/run/mysql.sock somehost But this is not how I want it to work: it forwards the traffic that comes to the remote socket to my local socket, and I want it the other way around.
The man page for ssh offers two complementary options: -R for remote forwarding to local, and -L for local forwarding to remote. In your case just use -L instead of -R .
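For the Unix-socket case specifically, newer OpenSSH versions (6.7 and later) allow -L to forward a local socket path to a remote socket path. A sketch for the docker.sock example (paths assumed from the question):

rm -f ~/docker.sock
ssh -nNT -L ~/docker.sock:/var/run/docker.sock user@remotehost

The rm is there because ssh will refuse to bind if a stale local socket file already exists; -N keeps the tunnel open without running a remote command.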
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/413948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12539/" ] }
413,976
We want to create 6 mount point folders, for example:

/data/sdb
/data/sdc
/data/sdd
/data/sde
/data/sdf
/data/sdg

so we wrote this simple bash script using an array:

folder_mount_point_list="sdb sdc sdd sde sdf sdg"
folderArray=( $folder_mount_point_list )
counter=0
for i in disk1 disk2 disk3 disk4 disk4 disk5 disk6
do
folder_name=${folderArray[counter]}
mkdir /data/$folder_name
let counter=$counter+1
done

Now we want to change the code to avoid the counter and the let counter=$counter+1 line. Is it possible to shift the array on each loop iteration in order to get the next array value? Something like ${folderArray[++]} .
A general remark. It does not make sense to define an array like this:

folder_mount_point_list="sdb sdc sdd sde sdf sdg"
folderArray=( $folder_mount_point_list )

You would do this instead:

folderArray=(sdb sdc sdd sde sdf sdg)

Now to your question:

set -- sdb sdc sdd sde sdf sdg
for folder_name; do
    mkdir "/data/$folder_name"
done

or

set -- sdb sdc sdd sde sdf sdg
while [ $# -gt 0 ]; do
    mkdir "/data/$1"
    shift
done
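In this specific case, where the names form a contiguous range, you could even skip the loop entirely and let brace expansion generate the paths (bash/ksh/zsh, not plain sh):

mkdir /data/sd{b..g}

This expands to mkdir /data/sdb /data/sdc ... /data/sdg before mkdir runs, assuming /data itself already exists.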
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/413976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
414,004
This has probably been asked before, but I'm unsure how to word it. I want to type out a series of strings, which are basically string1, string2, string3; but without the redundant typing. So is there a way to type: cat 'expr "/dev/input/event" "[1 2 3 4]" " "'*4 and have it resolve to: cat /dev/input/event1 /dev/input/event2 /dev/input/event3 /dev/input/event4 so I don't have to type each device individually? I also apologize for what's obviously a horrible misunderstanding of how the expr command works.
For expanding filenames (or device nodes) that exist already , then filename globbing is usually what you want: The first would expand to event1 to event4 , the second to any and all eventXX that exist: cat /dev/input/event[1-4]cat /dev/input/event* If you don't care about existing files but want just strings, then brace expansion . Two ways to generate all of event1 to event4 , the first one just takes a list, the second a range: cat /dev/input/event{1,2,3,4}cat /dev/input/event{1..4} (The links are to BashGuide , which is a useful resource in itself.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205005/" ] }
414,031
The following configuration file (example 1) isn't configured as it should be. Each line in the file should end with a /grid/sdX (a to z) mount point, as described in example 2. I need to find a way to write a bash script for this task. How do I append the missing /grid/sdX at the end of the lines?

example 1:

more dfs_data_dir_mount.hist
/grid/sdk/hadoop/hdfs/data,/
/grid/sdi/hadoop/hdfs/data,/
/grid/sdh/hadoop/hdfs/data,/
/grid/sdc/hadoop/hdfs/data,/grid/sdc
/grid/sdj/hadoop/hdfs/data,/
/grid/sde/hadoop/hdfs/data,/grid/sde
/grid/sdd/hadoop/hdfs/data,/grid/sdd
/grid/sdb/hadoop/hdfs/data,/grid/sdb
/grid/sdf/hadoop/hdfs/data,/grid/sdf
/grid/sdg/hadoop/hdfs/data,/

expected results (example 2):

/grid/sdk/hadoop/hdfs/data,/grid/sdk
/grid/sdi/hadoop/hdfs/data,/grid/sdi
/grid/sdh/hadoop/hdfs/data,/grid/sdh
/grid/sdc/hadoop/hdfs/data,/grid/sdc
/grid/sdj/hadoop/hdfs/data,/grid/sdj
/grid/sde/hadoop/hdfs/data,/grid/sde
/grid/sdd/hadoop/hdfs/data,/grid/sdd
/grid/sdb/hadoop/hdfs/data,/grid/sdb
/grid/sdf/hadoop/hdfs/data,/grid/sdf
/grid/sdg/hadoop/hdfs/data,/grid/sdg
sed solution: sed -Ei 's~^(/[^/]+/[^/]+)(.*,)/$~\1\2\1~' dfs_data_dir_mount.hist ~ - treated as sed subcommand separator [^/]+ - match one or more character(s) except slash / ^ $ - are the start and end of the line respectively
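An awk alternative, in case you find it more readable (a sketch that writes to stdout instead of editing in place; checked against the sample lines above, but test on a copy first):

awk -F, -v OFS=, '$2 == "/" { split($1, p, "/"); $2 = "/" p[2] "/" p[3] } 1' dfs_data_dir_mount.hist

For a line like /grid/sdk/hadoop/hdfs/data,/ the second comma-separated field is just /, so the script rebuilds it as /grid/sdk from the first two path components of field one; lines that already have a mount point are printed unchanged.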
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
414,042
I have a script mycommand.sh that I can't run twice. I want to split its output into two different files: one file containing the lines that match a regex and one file containing the lines that don't match it. What I wish to have is basically something like this: ./mycommand.sh | grep -E 'some|very*|cool[regex].here;)' --match file1.txt --not-match file2.txt I know I can just redirect the output to a file and then use two different greps with and without the -v option and redirect their output to two different files. But I was just wondering if it is possible to do it with one grep. So, is it possible to achieve what I want in a single line?
There are many ways to accomplish this. Using awk The following sends any lines matching coolregex to file1. All other lines go to file2: ./mycommand.sh | awk '/[coolregex]/{print>"file1";next} 1' >file2 How it works: /[coolregex]/{print>"file1";next} Any lines matching the regular expression coolregex are printed to file1 . Then, we skip all remaining commands and jump to start over on the next line. 1 All other lines are sent to stdout. 1 is awk's cryptic shorthand for print-the-line. Splitting into multiple streams is also possible: ./mycommand.sh | awk '/regex1/{print>"file1"} /regex2/{print>"file2"} /regex3/{print>"file3"}' Using process substitution This is not as elegant as the awk solution but, for completeness, we can also use multiple greps combined with process substitution: ./mycommand.sh | tee >(grep 'coolregex' >File1) | grep -v 'coolregex' >File2 We can also split up into multiple streams: ./mycommand.sh | tee >(grep 'coolregex' >File1) >(grep 'otherregex' >File3) >(grep 'anotherregex' >File4) | grep -v 'coolregex' >File2
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/414042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231067/" ] }
414,099
Say I created the following variables: s=Johni=12345f=3.14 Are all of these variables stored in memory as string, or does bash have other data types?
Bash variables are untyped . Unlike many other programming languages, Bash does not segregate its variables by "type." Essentially, Bash variables are character strings, but, depending on context, Bash permits arithmetic operations and comparisons on variables. The determining factor is whether the value of a variable contains only digits. As another answer says , there is a kind of weak form of typing with declare . This is a very weak form of the typing [1] available in certain programming languages. See an example:

declare -i number
# The script will treat subsequent occurrences of "number" as an integer.
number=3
echo "Number = $number"     # Number = 3
number=three
echo "Number = $number"     # Number = 0
# Tries to evaluate the string "three" as an integer.

References: http://tldp.org/LDP/abs/html/untyped.html http://tldp.org/LDP/abs/html/declareref.html
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414099", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268325/" ] }
414,159
This is about the behaviour of the backspace ( \b ) character. I have the following C program: int main() { printf("Hello\b\b"); sleep(5); printf("h\n"); return 0;} The output on my terminal is Helho with the cursor advancing to the first position of the following line. First, the entire thing prints only after the 5 second sleep, so from that I deduced the output from the kernel to the terminal is line buffered. So now, my questions are: Since the \b\b goes back two spaces, to the position of the (second) l , then similar to how l was replaced by h , the o should have been replaced by \n . Why wasn't it? If I remove the line printf("h\n"); , it prints Hello and goes back two characters, without erasing. This I got from other answers is because of a non-destructive backspace. Why is this behaviour different for input and output? That is, if I input something into the terminal (even the very same program) and press Backspace, it erases the last character, but not for the output. Why? I'm on an Ubuntu system on the xterm terminal using bash, if that helps.
First, the entire thing prints only after the 5 second sleep so from that I deduced the output from the kernel to the terminal is line buffered. No, the output from your program to the kernel is line-buffered. That's the default behaviour for stdio when stdout is a terminal. Add the call setbuf(stdout, NULL) to turn output buffering off for stdout . See setbuf(3) . Since the \b\b goes back two spaces, to the position of l then similar to how l was replaced by h , the o should have been replaced by \n . Why wasn't it? Because the newline character just moves the cursor (and scrolls the screen), it doesn't print as a visible character that would take up the place of a glyph on the terminal. If we assume it would take the place of a glyph, what would it look like? if I input something into the very same program, and press Backspace, it erases the last character, but not for the output. Why? Well, what happens when you type depends on what mode the terminal is in. Roughly, it can be in the usual "cooked" mode where the terminal itself provides elementary line editing (handles backspaces); or in a "raw" mode, where all keypresses go to the application, and it's up to the application to decide what to do with them, and what to output in response. Cooked mode usually goes along with "local echo" where the terminal (local to the user) prints out the characters as they are typed. In raw mode, the application usually takes care of echoing the typed characters, to have full control over what's visible. See e.g. this question for discussion on the terminal modes: What’s the difference between a “raw” and a “cooked” device driver? If you run e.g. cat , the terminal will be in cooked mode (the default) and handle the line editing. Hitting for example x Backspace Ctrl-D will result in cat just reading the empty input, signalling the end of input . You can check this with strace . But if you run an interactive Bash shell instead, it will handle the backspace by itself, and output what it considers appropriate to do what the user expects, i.e. wipe one character. Here's part of the output for strace -etrace=read,write -obash.trace bash , after entering the mentioned sequence x Backspace Ctrl-D :

read(0, "x", 1)         = 1
write(2, "x", 1)        = 1
read(0, "\177", 1)      = 1
write(2, "\10\33[K", 4) = 4
read(0, "\4", 1)        = 1

First, Bash read s and write s the x , outputting it to the terminal. Then it reads the backspace (character code 0177 in octal or 127 in decimal), and outputs the backspace character (octal 010, decimal 8 (*) ) which moves the cursor back and outputs the control sequence for clearing the end of the line, <ESC>[K . The last \4 is the Ctrl-D , which is used by Bash to exit the program. (* in input, Ctrl-H would have the decimal character code 8. Backspace is either the same or 127 as here, depending again on how the terminal is set up.) In comparison, the same experiment with cat only shows a single read of zero bytes, the "end of file" condition. End of file can mean either a connected pipe or socket being closed, an actual end of file, or Ctrl-D being received from a terminal in cooked mode:

read(0, "", 131072)     = 0

In particular, cat doesn't see the x , nor the backspace, nor the actual code for Ctrl-D : they are handled by the terminal. Which might be the virtual terminal driver in the kernel; an actual physical terminal over a serial connection or such; or a terminal emulator like xterm either running on the same machine or at the remote end of an SSH connection.
It doesn't matter for the userspace software.
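You can see the non-destructive behaviour of the output backspace from the shell as well, along with the classic backspace-space-backspace sequence that terminals (and line editors) use to actually erase a character. A small demo:

printf 'abcd\b'; sleep 2; echo        # cursor moves back, but the d stays visible
printf 'abcd\b \b'; sleep 2; echo     # overwrite d with a space, then move back: shows abc

The second line is essentially what Bash's \10\33[K output above does, just with a space instead of the erase-to-end-of-line control sequence.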
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268145/" ] }
414,226
In the Wikipedia article on Regular expressions , it seems that [[:digit:]] = [0-9] = \d . What are the circumstances where they do not equal? What is the difference? After some research, I think one difference is that bracket expression [:expr:] is locale dependent.
Yes, it is [[:digit:]] ~ [0-9] ~ \d (where ~ means approximately equal). In most programming languages (where it is supported), \d ≡ [[:digit:]] (it is a shorthand for it). \d exists in fewer tools than [[:digit:]] (it is available in grep -P but not in POSIX). Unicode digits There are many digits in UNICODE , for example: 123456789 # Hindu-Arabic Arabic numerals ٠١٢٣٤٥٦٧٨٩ # ARABIC-INDIC ۰۱۲۳۴۵۶۷۸۹ # EXTENDED ARABIC-INDIC/PERSIAN ߀߁߂߃߄߅߆߇߈߉ # NKO DIGIT ०१२३४५६७८९ # DEVANAGARI All of these may be included in [[:digit:]] or \d , and in some cases even in [0-9] . POSIX For the specific POSIX BRE or ERE: \d is not supported (not in POSIX, but it is in GNU grep -P ). [[:digit:]] is required by POSIX to correspond to the digit character class, which in turn is required by ISO C to be the characters 0 through 9 and nothing else. So only in the C locale do [0-9] , [0123456789] , \d and [[:digit:]] all mean exactly the same. [0123456789] has no possible misinterpretations, [[:digit:]] is available in more utilities, and in some cases it means only [0123456789] . \d is supported by few utilities. As for [0-9] , the meaning of range expressions is only defined by POSIX in the C locale; in other locales it might be different (it might be codepoint order or collation order or something else). [0123456789] The most basic option for all ASCII digits. Always valid; (AFAICT) there is no known instance where it fails. It matches only the English digits: 0123456789 . [0-9] It is generally believed that [0-9] matches only the ASCII digits 0123456789 . That is painfully false in some instances, for example on Linux systems in a locale that is not "C" (as of June 2020). Assume: str='0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९' Try grep to discover that it allows most of them:

$ echo "$str" | grep -o '[0-9]\+'
0123456789
٠١٢٣٤٥٦٧٨
۰۱۲۳۴۵۶۷۸
߀߁߂߃߄߅߆߇߈
०१२३४५६७८

sed has some trouble: it should remove only 0123456789 but it removes almost all digits. That means that it accepts most digits, but not some of the nines (???):

$ echo "$str" | sed 's/[0-9]\{1,\}//g'
 ٩ ۹ ߉ ९

Even expr suffers from the same issues as sed:

expr "$str" : '\([0-9 ]*\)' # also matching spaces.
0123456789 ٠١٢٣٤٥٦٧٨

And so does ed:

printf '%s\n' 's/[0-9]/x/g' '1,p' Q | ed -v <(echo "$str")
105
xxxxxxxxxx xxxxxxxxx٩ xxxxxxxxx۹ xxxxxxxxx߉ xxxxxxxxx९

[[:digit:]] There are many languages (Perl, Java, Python, C) in which [[:digit:]] (and \d ) calls for an extended meaning.
For example, this perl code will match all the digits from above:

$ str='0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९'
$ echo "$str" | perl -C -pe 's/[^\d]//g;' ; echo
0123456789٠١٢٣٤٥٦٧٨٩۰۱۲۳۴۵۶۷۸۹߀߁߂߃߄߅߆߇߈߉०१२३४५६७८९

Which is equivalent to selecting all characters that have the Unicode properties Numeric and digit:

$ echo "$str" | perl -C -pe 's/[^\p{Nd}]//g;' ; echo
0123456789٠١٢٣٤٥٦٧٨٩۰۱۲۳۴۵۶۷۸۹߀߁߂߃߄߅߆߇߈߉०१२३४५६७८९

Which grep could reproduce (the specific version of pcre may have a different internal list of numeric code points than Perl):

$ echo "$str" | grep -oP '\p{Nd}+'
0123456789
٠١٢٣٤٥٦٧٨٩
۰۱۲۳۴۵۶۷۸۹
߀߁߂߃߄߅߆߇߈߉
०१२३४५६७८९

shells Some implementations may understand a range to be something different from plain ASCII order (ksh93, for example), when tested on a May 2018 version (AT&T Research) 93u+ 2012-08-01:

$ LC_ALL=en_US.utf8 ksh -c 'echo "${1//[0-9]}"' sh "$str"
 ۹ ߀߁߂߃߄߅߆߇߈߉ ९

Now (June 2020), the same package ksh93 from Debian (same version, sh (AT&T Research) 93u+ 2012-08-01):

$ LC_ALL=en_US.utf8 ksh -c 'echo "${1//[0-9]}"' sh "$str"
 ٩ ۹ ߉ ९

And that seems to me a sure source of bugs waiting to happen.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/414226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267519/" ] }
414,305
I want to capture only the real disks from lsblk . As shown here, fd0 also appears even though it's not really a disk we can use. In this case we can just do lsblk | grep disk | grep -v fd0 , but maybe we'd miss some other devices that need to be filtered out with grep -v . What other devices could appear in lsblk | grep disk output that are not really disks?

lsblk | grep disk
fd0              2:0    1     4K  0 disk
sda              8:0    0   100G  0 disk
sdb              8:16   0     2G  0 disk /Kol
sdc              8:32   0     2G  0 disk
sdd              8:48   0     2G  0 disk
sde              8:64   0     2G  0 disk
sdf              8:80   0     2G  0 disk

lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                2:0    1     4K  0 disk
sda                8:0    0   150G  0 disk
├─sda1             8:1    0   500M  0 part /boot
└─sda2             8:2    0 149.5G  0 part
  ├─vg00-yv_root 253:0    0  19.6G  0 lvm  /
  ├─vg00-yv_swap 253:1    0  15.6G  0 lvm  [SWAP]
  └─vg00-yv_var  253:2    0   100G  0 lvm  /var
sdb                8:16   0     2G  0 disk /Kol
sdc                8:32   0     2G  0 disk
sdd                8:48   0     2G  0 disk
sde                8:64   0     2G  0 disk
sdf                8:80   0     2G  0 disk
sr0               11:0    1  1024M  0 rom
If you want only disks identified as SCSI by the device major number 8 , without device partitions, you could search on device major rather than the string "disk": lsblk -d | awk '/ 8:/' where the -d (or --no-deps ) option indicates to not include device partitions. For reasonably recent linux systems, the simpler lsblk -I 8 -d should suffice, as noted by user Nick.
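Another option is to exclude the unwanted device types by their major numbers (floppy is major 2, sr CD-ROM devices are major 11). With a reasonably recent lsblk:

lsblk -d -e 2,11 -o NAME,SIZE,TYPE

The -e (--exclude) flag takes a comma-separated list of majors to skip; note that RAM disks (major 1) are already excluded by default.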
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
414,393
Originally posted this on the apple stackexchange , but I suspect the solution may be Linux-ey, e.g. adding something to my .bashrc . Currently, when I SSH onto a Linux machine, the ls output colors and syntax coloring in VIM are different from the colors on my local machine. The colors shown are not defined in my Profile...Colors...ANSI Colors, and include an ugly dark brown color for "yellow." How can I force the text from a remote session to match my ANSI colors, so the coloring is always consistent? Here's an example of what I'm talking about: left is VIM session on my local computer, right is VIM session within an SSH session. Notice the hideous brown. And here's an example of the ls problem -- the colors are different.
Terminal vim uses colors that your terminal makes available (the ANSI colors you pick, presumably—unless your terminal offers 256 color mode or settable colors), but which of those colors it uses is controlled by the vim color scheme and if it believes the background is light or dark. You can check if the background is set to light or dark by :set background? . You can change it the ordinary way (e.g., :set background=dark ). You can check the current color scheme by running :colorscheme and set it by running :colorscheme «NAME» . At least here, vim will tab-complete name letting you see all the available ones. Once you've found the settings you like, you can add them to your ~/.vimrc . EDIT: ls colors (with GNU coreutils) are set by the LS_COLORS environment variable; see info dircolors or (if that doesn't work) man dircolors . Although this might be a little harder, as your Mac OS X ls and GNU coreutils ls (as typically used on Linux) are entirely separate implementations.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112647/" ] }
414,398
I've been trying to better understand how the linux filesystem works, looking at journaling, inodes, and access control list. In looking into this, I came across filesystems which don't seem to act how I would expect a filesystem to act, such as glusterfs and mergerfs. Instead of getting written to a hard drive similar to how mkfs.ext3 or mkfs.xfs would, they are run on top of the other filesystems. So both ext3 and mergerfs (or glusterfs) could be used with the same drive, which seems strange since I as far as I know, two filesystems can't be defined on the same partition. Is my understanding of filesystems wrong, or is there something special about the mergerfs/glusterfs systems which distinguish them from ext3 or xfs?
The word "filesystem" is somewhat overloaded, and I think that might be confusing you. In one sense a "filesystem" is the format in which files are written to some medium (e.g., a partition on a disk). In another sense, a "filesystem" (or more specifically, a "Virtual Filesystem") is an abstraction provided by the OS that presents a set of files (regular files, directories, etc). An OS can read the on-disk filesystem and present a filesystem abstraction. The files presented in the filesystem abstraction can be stored on disk (e.g., ext4), on some other host across the network (e.g., cifs, nfs), or elsewhere. Something like mergerfs takes multiple sources of files and presents them as if it was a single source. From their website "mergerfs logically merges multiple paths together. Think a union of sets." Take a look at the mergerfs website , they have a nice description of what that does.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166282/" ] }
414,413
I like Midnight Commander. It might have to do with starting with DOS machines in the early 90s in Russia, but now I just really like the integration of the command line with a two-panel file list. And a key feature is that Ctrl+Enter copies the name of the currently selected file or directory into the command line, without running it. Unfortunately, on Fedora (26 and 27) this fails in Konsole and, apparently, in all other X-based terminals too. It does work in the virtual console I get by pressing Ctrl-Alt-F3. On OpenSuse Leap (42.1, 42.2, 42.3) the Ctrl+Enter functionality works perfectly. And I could not work out any difference. (I use KDE on both, which, as far as I understand, means that on Fedora I have X.org, not Wayland.) How can I make Ctrl+Enter work on Fedora? Alternatively, if this is impossible, is there a way to reassign the very useful functionality to some other key combination in Midnight Commander? (I would also consider alternatives to Midnight Commander itself, but only those running in a console window, and it seems there aren't any. I don't need a graphical two-panel file manager, as I use MC to assist in crafting commands quickly.)
tl;dr: Get used to Alt + Enter (a.k.a. ESC followed by Enter ) instead. Ctrl + Enter generates the exact same sequence in terminal emulators as Enter , so there's no way for an app to distinguish these two. Well, no way by looking at the input stream it receives from the terminal emulator. mc has an interesting feature called "X11 support". It does not only look at the bytes it receives from the terminal emulator, but (if this support is compiled in, and if X11 connection is available runtime) queries the X11 server for the state of the modifier keys. So basically it goes like: "Wow, I received an Enter from the terminal emulator. Hey, X11 server, is Ctrl pressed now?" There are multiple ways this might not work for you. Fedora's mc may have been compiled without X11 support, I don't know. Check the output of mc --version , does it contain "With support for X11 events"? su , sudo , screen , tmux , ssh or similar tools can also break this functionality in case the X11 connection isn't available inside them (e.g. credentials not properly set up / forwarded by su or sudo ; screen or tmux being detached and reattached from another X server; display not forwarded by ssh ). The feature doesn't work on Wayland either. I suspect it cannot be implemented in Wayland due to its security model, or at least not without some plugin/extension to some core Wayland component. But even if the state of modifiers could be detected, it's not yet done in mc .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237283/" ] }
414,499
If I am wiping my /dev/sda harddrive, do I need to first repartition it (for example, with GParted), or will shred /dev/sda wipe the partition table too?
Yes, wiping /dev/sda wipes the partition table too. It also wipes any area of the drive unallocated to any partition. Although the kernel will typically keep partition tables in memory. partprobe can be used to tell the kernel to update them.
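For example, a single-pass wipe of the whole device, followed by telling the kernel the old partition table is gone (run as root, and double-check the device name first):

shred -v -n 1 /dev/sda
partprobe /dev/sda

-n 1 does one random-data pass instead of shred's default of three, and -v shows progress.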
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
414,560
How do I find the last word in a file, even when there are empty lines after the word? I tried this to find the last word:

tail -1 /tmp/requests.txt

but got no output. I also tried the following approach:

awk 'END {print $NF}' /tmp/requests.txt

but again no output. I could capture the word only with tail -6 (because the word happened to be 6 lines before the end of the file):

tail -6 /tmp/requests.txt
"IN_PROGRESS"

So what is the way to capture the last word whether or not there is empty space after it? Expected results:

echo $Last_word
IN_PROGRESS
Just start reading from the bottom and print the last word of the first line containing at least "something": tac file | awk 'NF{print $NF; exit}' For example: $ cat -vet file # note the spaces and tabshello$bla ble bli$ $^I$$ tac file | awk 'NF{print $NF; exit}'bli If you happen to not have tac , just use the same logic when reading the file normally: awk 'NF{last=$NF} END{print last}' file That is, store the last word whenever there is "something" in a line. Finally, print the stored value.
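To put the result in a variable, as in your expected output, you can wrap either version in a command substitution:

Last_word=$(tac /tmp/requests.txt | awk 'NF{print $NF; exit}')
echo "$Last_word"

If tac is unavailable, a rough pipeline alternative is awk 'NF' /tmp/requests.txt | tail -n 1 | awk '{print $NF}' , which first drops the blank lines, then takes the last remaining line.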
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
414,636
I've seen the questions and answers about needing to double-escape the arguments to remote ssh commands. My question is: Exactly where and when does the second parsing get done? If I run the following: $ ssh otherhost pstree -a -p I see the following in the output: |-sshd,3736 | `-sshd,1102 | `-sshd,1109 | `-pstree,1112 -a -p The parent process for the remote command ( pstree ) is sshd , there doesn't appear to be any shell there that would be parsing the command line arguments to the remote command, so it doesn't seem as if double quoting or escaping would be necessary (but it definitely is). If instead I ssh there first and get a login shell, and then run pstree -a -p I see the following in the output: ├─sshd,3736 │ └─sshd,3733 │ └─sshd,3735 │ └─bash,3737 │ └─pstree,4130 -a -p So clearly there's a bash shell there that would do command line parsing in that case. But the case where I use a remote command directly, there doesn't seem to be a shell, so why is double quoting necessary?
There is always a remote shell. In the SSH protocol, the client sends the server a string to execute. The SSH command line client takes its command line arguments and concatenates them with a space between the arguments. The server takes that string, runs the user's login shell and passes it that string. (More precisely: the server runs the program that is registered as the user's shell in the user database, passing it two command line arguments: -c and the string sent by the client. The shell is not invoked as a login shell: the server does not set the zeroth argument to a string beginning with - .) It is impossible to bypass the remote shell. The protocol doesn't have anything like sending an array of strings that could be parsed as an argv array on the server. And the SSH server will not bypass the remote shell because that could be a security restriction: using a restricted program as the user's shell is a way to provide a restricted account that is only allowed to run certain commands (e.g. an rsync-only account or a git-only account). You may not see the shell in pstree because it may be already gone. Many shells have an optimization where if they detect that they are about to do “run this external command, wait for it to complete, and exit with the command's status”, then the shell runs “ execve of this external command” instead. This is what's happening in your first example. Contrast the following three commands: ssh otherhost pstree -a -pssh otherhost 'pstree -a -p'ssh otherhost 'pstree -a -p; true' The first two are identical: the client sends exactly the same data to the server. The third one sends a shell command which defeats the shell's exec optimization.
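You can observe this concatenate-then-reparse behaviour directly. For example:

ssh otherhost echo '$HOME'     # prints the remote home directory: the remote shell expands it
ssh otherhost echo 'a    b'    # prints 'a b': the remote shell re-splits the string on whitespace

In the first command the single quotes only protect $HOME from your local shell; the string still reaches the remote login shell, which expands it. That is exactly why arguments to remote commands need a second layer of quoting.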
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/414636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52104/" ] }
414,639
I don't know how vlc is able to do it; I guess it takes a sort of time-stamp of the movie and puts it in a cache or somewhere like that. This is the way it works in vlc: a. You open a media file, say it runs 1.5 hours. b. At some point, say after 15-30 minutes or whenever you feel like it, you stop because you have some other work, a call comes in, or anything else disrupts your viewing. c. After some time you start the media file again. In vlc, a small button appears in the top-right corner offering to continue from where you left off. d. If you select that button/option, it starts playing the media file from where you last left off. I have also seen it remember the position across 2-3 media files used in succession. Is it possible to have similar functionality in mpv? Is there a way this already works, or would this be a feature request I would need to make at the mpv GitHub?
You can run mpv with the --save-position-on-quit option. e.g. mpv --save-position-on-quit /path/to/video.mkv Alternatively, if you want mpv to do that by default, you can add that option to its config file. For example: echo "save-position-on-quit" >> ~/.config/mpv/mpv.conf Or use your favourite text editor to add the same line. The -- option prefix is not needed in the config file. If you want this option to be the default for all users on the system rather than just your own user, the config file to edit (as root) is /etc/mpv/mpv.conf if mpv was installed as a package. And probably /usr/local/etc/mpv/mpv.conf if installed by compiling the source.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/414639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
414,655
I had Windows 10 and Manjaro on my laptop and everything was OK. The other day, I installed Kali Linux on another partition. It installed correctly and it works fine. But the problem is when I want to boot my Manjaro: I select Manjaro in the GRUB menu, but this is the screen I see.

wn-block(0,0)
[ 0.667378] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.9.47-1-MANJARO #1
[ 0.667435] Hardware name: Acer Aspire E5-575G/Ironman_SK , BIOS V1.04 04/26/2016
[ 0.667493] ffffc90000c8bde0 ffffffff813151d2 ffff880276a77000 ffffffff8190b950
[ 0.667717] ffffc90000c8be68 ffffffff8117ecd4 ffffffff00000010 ffffc90000c8be78
[ 0.667940] ffffc90000c8be10 327c3b64ed88e616 327c3b64ed88e616 ffffc90000c8be80
[ 0.668162] Call Trace:
[ 0.668213] [<ffffffff813151d2>] dump_stack+0x63/0x81
[ 0.668267] [<ffffffff8117ecd4>] panic+0xe4/0x22d
[ 0.668321] [<ffffffff81v2a590>] mount_block_root+0x27c/0x2c7
[ 0.668377] [<ffffffff81b298be>] ? set_debug_rodata+0x12/0x12
[ 0.668432] [<ffffffff81b2a640>] mount_root+0x65/0x68
[ 0.668486] [<ffffffff81b2a772>] prepare_namespace+0x12f/0x167
[ 0.668542] [<ffffffff81b2a1ca>] kernel_init_freeable+0x1ec/0x205
[ 0.668598] [<ffffffff81610b30>] ? rest_init+0x90/0x90
[ 0.668652] [<ffffffff81610b3e>] kernel_init+0xe/0x100
[ 0.668706] [<ffffffff8161dfd5>] ret_from_fork+0x25/0x30
[ 0.668786] Kernel Offset: disabled
[ 0.668893] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)_

How can I fix the problem?
VFS: unable to mount root fs on unknown-block(0 0) means the kernel was unable to mount the root filesystem. There are two common causes for this: The kernel doesn't support the filesystem on the device. If you compiled your own kernel, this is usually because you specified the filesystem driver should be built as a module rather than a native part of the kernel; if you're using the distro's kernel, this is usually because you picked an exotic format for your root filesystem. In either case, don't do that. The name of the root device passed to the kernel is wrong. This one can be tricky to fix: the best method I've found is to modify the kernel command line from the bootloader, making educated guesses about what the root= parameter should look like until I find something that works.
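For the second case, a practical approach is to boot a live system (or your working Kali install), look up the root filesystem's UUID, and put that on the kernel line. A sketch (the UUID below is made up, and the partition is a guess):

sudo blkid /dev/sda5
# /dev/sda5: UUID="2f9a6c1e-..." TYPE="ext4" ...

Then at the GRUB menu press e on the Manjaro entry and make the linux line say root=UUID=2f9a6c1e-... instead of a device name; if that boots, make the change permanent in the GRUB configuration.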
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268705/" ] }
414,656
I am writing this question since I had no problem in years of using OpenCL with nVidia graphics in Fedora and testing Darktable with it. But now, on Fedora 27, I am trying to use Darktable with the Intel graphics OpenCL capability, and when I do darktable -d opencl I get this response, as it doesn't recognize the device: Beignet: self-test failed: (3, 7, 5) + (5, 7, 3) returned (6, 7, 5) And when I do clinfo it finds three devices. I thought it should find only two, my CPU and my GPU. My CPU is an Intel Core i7-7500U and that's it. I have installed these packages: ocl-icd , opencl-filesystem , opencl-utils-devel and beignet . I think these cover all the necessary dependencies. The question is: is it possible to use Darktable's OpenCL capability with this GPU or not? And how can I do it using beignet on Fedora 27?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106195/" ] }
414,718
I want to merge two files based on the common header lines present in them. Following is an example. File 1:

>Feature scaffold1
1 100 g
101 200 g
201 300 g
>Feature scaffold2
1 100 g
01 500 g
>Feature scaffold3
10 500 g
>Feature scaffold4
10 300 g

File 2:

>Feature scaffold1
500 500 r
900 1000 r
>Feature scaffold2
200 300 r
>Feature scaffold3
100 200 r
>Feature scaffold4
500 600 r
>Feature scaffold5
1 1000 r

And here's the kind of output I want:

>Feature scaffold1
1 100 g
101 200 g
201 300 g
500 500 r
900 1000 r
>Feature scaffold2
1 100 g
01 500 g
200 300 r
>Feature scaffold3
10 500 g
100 200 r
>Feature scaffold4
10 300 g
500 600 r
>Feature scaffold5
1 1000 r

I have tried some awk and sed but clearly have not been successful. How can I do this?
Awk solution:

awk '/^>/{ k=$1 FS $2 }
     NR==FNR{ if (!/^>/) a[k]=(a[k]!="")? a[k] ORS $0: $0; next }
     k in a{ print $0 ORS a[k]; delete a[k]; next }1' file1 file2

/^>/{ k=$1 FS $2 } - on encountering a header line (i.e. >Feature ... ) - compose a key k from the 1st $1 and 2nd $2 fields. NR==FNR{ ... } - processing the 1st input file ( file1 ): if (!/^>/) a[k]=(a[k]!="")? a[k] ORS $0: $0 - accumulate non-header lines into array a using current key k ; next - jump to next record. k in a - if the current key based on a file2 record is in array a (based on file1 records): print $0 ORS a[k] - print related records; delete a[k] - delete processed item(s). The output:

>Feature scaffold1
1 100 g
101 200 g
201 300 g
500 500 r
900 1000 r
>Feature scaffold2
1 100 g
01 500 g
200 300 r
>Feature scaffold3
10 500 g
100 200 r
>Feature scaffold4
10 300 g
500 600 r
>Feature scaffold5
1 1000 r
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244634/" ] }
414,740
The bash manual states: eval [arg ...] The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0. I try:

eval `nonsense`
echo $?

The result is 0. Whereas when I execute the back-quoted command separately:

`nonsense`
echo $?

The result is 127. From what is written in the bash manual I would expect eval to return 127 when taking the back-quoted nonsense as argument. How to obtain the exit status of the argument of eval ?
When you do the following - `nonsense`echo $? You basically are asking "Tell me the exit status when I try to get the output of the command nonsense" the answer to that is "command not found" or 127 But when you do the following eval `nonsense`echo $? You are asking "tell me the exit status of eval when I evaluate an empty string" (the output of command nonsense) which is equal to running eval without arguments. eval has no problems in running without arguments and its exit status becomes 0
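You can see the difference by making nonsense the argument itself instead of a command substitution:

$ eval 'nonsense'; echo $?
bash: nonsense: command not found
127

Here eval actually tries to run the command and reports its status. If what you want is the exit status of the substituted command, test it separately; e.g. out=$(nonsense); echo $? also gives 127, because an assignment whose whole value is a command substitution takes on the substitution's exit status.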
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226971/" ] }
414,783
I have a file that contains more than 4000 characters and I want to grep the string between positions 148 and 1824. How can I do this?
You don't use grep. There is a tool that has been designed for precisely this sort of thing: cut . To get characters 148 to 1824, do:

cut -c 148-1824 file

The -c flag means select characters. Use -b if you want to work on bytes. If you insist on using grep , you would have to do something like this (assuming GNU grep):

grep -Po '^.{147}\K.{1677}' file

This matches the first 147 characters ( ^.{147} ) and discards them ( \K ). Then it matches the next 1677 characters (positions 148 through 1824 inclusive). The -o flag tells grep to only print the matching section of a line and the -P flag turns on perl-compatible regular expressions which let us use \K .
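Another common alternative is awk's substr function; 148 to 1824 inclusive is 1824 - 148 + 1 = 1677 characters:

awk '{ print substr($0, 148, 1677) }' file

Like cut, this operates per line, so on a multi-line file it extracts that character range from every line.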
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268827/" ] }
414,786
Security researchers have published on Project Zero a new vulnerability class called Spectre and Meltdown allowing a program to steal information from the memory of other programs. It affects Intel, AMD and ARM architectures. This flaw can be exploited remotely by visiting a website running malicious JavaScript. Technical details can be found on the Red Hat website and from the Ubuntu security team. Information Leak via speculative execution side channel attacks (CVE-2017-5715, CVE-2017-5753, CVE-2017-5754 a.k.a. Spectre and Meltdown) It was discovered that a new class of side channel attacks impact most processors, including processors from Intel, AMD, and ARM. The attack allows malicious userspace processes to read kernel memory and malicious code in guests to read hypervisor memory. To address the issue, updates to the Ubuntu kernel and processor microcode will be needed. These updates will be announced in future Ubuntu Security Notices once they are available. Example Implementation in JavaScript As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs. My system seems to be affected by the Spectre vulnerability. I have compiled and executed this proof-of-concept ( spectre.c ). System information:

$ uname -a
4.13.0-0.bpo.1-amd64 #1 SMP Debian 4.13.13-1~bpo9+1 (2017-11-22) x86_64 GNU/Linux
$ cat /proc/cpuinfo
model name : Intel(R) Core(TM) i3-3217U CPU @ 1.80GHz
$ gcc --version
gcc (Debian 6.3.0-18) 6.3.0 20170516

How do I mitigate the Spectre and Meltdown vulnerabilities on Linux systems? Further reading: Using Meltdown to steal passwords in real time . Update: using the Spectre & Meltdown Checker after switching to the 4.9.0-5 kernel version following @Carlos Pasqualini's answer, because a security update is available to mitigate CVE-2017-5754 on Debian Stretch:

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel: NO (only 31 opcodes found, should be >= 70)
> STATUS: VULNERABLE (heuristic to be improved when official patches become available)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Hardware (CPU microcode) support for mitigation: NO
* Kernel support for IBRS: NO
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* Mitigation 2
* Kernel compiled with retpoline option: NO
* Kernel compiled with a retpoline-aware compiler: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)

Update Jan 25, 2018: the spectre-meltdown-checker script is officially packaged by Debian; it is available for Debian Stretch through the backports repository, and in Buster and Sid. Update May 22, 2018: Speculative Store Bypass (SSB) – also known as Variant 4. Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis.
Rogue System Register Read (RSRE) – also known as Variant 3a Systems with microprocessors utilizing speculative execution and that perform speculative reads of system registers may allow unauthorized disclosure of system parameters to an attacker with local user access via a side-channel analysis. Edit July 27 , 2018 NetSpectre: Read Arbitrary Memory over Network In this paper, we present NetSpectre, a new attack based on Spectre variant 1, requiring no attacker-controlled code on the target device, thus affecting billions of devices. Similar to a local Spectre attack, our remote attack requires the presence of a Spectre gadget in the code of the target. We show that systems containing the required Spectre gadgets in an exposed network interface or API can be attacked with our generic remote Spectre attack, allowing to read arbitrary memory over the network. The attacker only sends a series of crafted requests to the victim and measures the response time to leak a secret value from the victim’s memory.
Alan Cox shared a link from AMD's blog: https://www.amd.com/en/corporate/speculative-execution

Variant One: Bounds Check Bypass
Resolved by software / OS updates to be made available by system vendors and manufacturers. Negligible performance impact expected.

Variant Two: Branch Target Injection
Differences in AMD architecture mean there is a near zero risk of exploitation of this variant. Vulnerability to Variant 2 has not been demonstrated on AMD processors to date.

Variant Three: Rogue Data Cache Load
Zero AMD vulnerability due to AMD architecture differences.

It would be good to have third-party confirmation of these AMD statements, though. The 'mitigation' on affected systems requires a new kernel and a reboot, but many distributions have not yet released packages with the fixes:

https://www.cyberciti.biz/faq/patch-meltdown-cpu-vulnerability-cve-2017-5754-linux/

Debian:
https://security-tracker.debian.org/tracker/CVE-2017-5715
https://security-tracker.debian.org/tracker/CVE-2017-5753
https://security-tracker.debian.org/tracker/CVE-2017-5754

Other sources of information I found:
https://lists.bufferbloat.net/pipermail/cerowrt-devel/2018-January/011108.html
https://www.reddit.com/r/Amd/comments/7o2i91/technical_analysis_of_spectre_meltdown/
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
414,799
What are entries in /sys/firmware/efi/efivars/? I see that they are small binary files. Are these addresses and the contents of the address? For example /sys/firmware/efi/efivars/BootFromUSB-ec87d643-eba4-4bb5-a1e5-3f3e36b20da9 in hexadecimal shows 000000000: 0700 0000 00 .... What does it mean?
These are files in the efivars file system, which give you access to UEFI variables. For each UEFI variable there is one file in /sys/firmware/efi/efivars/ . Your example BootFromUSB-ec87d643-eba4-4bb5-a1e5-3f3e36b20da9 has the Name BootFromUSB and the VendorGuid ec87d643-eba4-4bb5-a1e5-3f3e36b20da9 . The GUID makes sure that variables with the same name but from different vendors don't interfere. Some variables are defined in the UEFI specification, but not this one. The first four bytes of the contents are attributes, which are also defined in the UEFI specification. The most important are

#define EFI_VARIABLE_NON_VOLATILE         0x00000001
#define EFI_VARIABLE_BOOTSERVICE_ACCESS   0x00000002
#define EFI_VARIABLE_RUNTIME_ACCESS       0x00000004

so your variable is non-volatile and can be accessed both at boot and at runtime. Any remaining bytes are the value of the variable. In this case there is a single byte with the value 0. You can use UEFI variables to influence the boot process. For example, we have used such a variable to switch the next boot to an alternative recovery firmware, when the standard firmware is not functioning. Note that the efivars file system allows you to write to EFI variables by writing to the files. Be careful when you do this, as overwriting some variables may brick your system.
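As a sketch, you can inspect the raw bytes with hexdump and read the attribute word off the front (the variable path is the one from the question; the output formatting is approximate):

$ hexdump -C /sys/firmware/efi/efivars/BootFromUSB-ec87d643-eba4-4bb5-a1e5-3f3e36b20da9
00000000  07 00 00 00 00                                    |.....|
00000005

The first four bytes are a little-endian attribute bitmask, here 0x00000007 = EFI_VARIABLE_NON_VOLATILE | EFI_VARIABLE_BOOTSERVICE_ACCESS | EFI_VARIABLE_RUNTIME_ACCESS, matching the interpretation above; the single remaining byte, 0x00, is the variable's value.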
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119007/" ] }
414,836
Here is a picture of the problem: Notice that all lines of text have horizontal lines similar to underlining. However, this is a plain text editor (Kate) and it does not do underlining. There is no formatting applied to this text. I selected the text so the lines would show up better in a photo, but the lines exist even without selecting the text. Sometimes they are much thicker and darker. Sometimes they are light. Sometimes they won't be there at all, only to come back at random.

Konsole has the same issue. With white text on a black background, Konsole sometimes shows multi-colored horizontal lines. Sometimes every line in Konsole has this ugly and distracting underlining. Sometimes only a portion of the lines have it. Sometimes the lines are so dense and overwhelming that it is hard to read the text. Other times the lines are mild, as in the attached photograph.

I first saw this issue about ten months ago on a desktop computer. I thought the user had just done something really crazy in the font settings. But now I am seeing the issue on a new laptop without any significant settings changes from the defaults. Both systems run a fully updated Arch Linux KDE. On this laptop, I reset all font settings (in System Settings) to default values. I also reset the Konsole profile appearance to default settings (even though the settings were already at default values). However, the horizontal lines will not go away.

The applications work correctly (other than the text sometimes being hard to read). Copied text does not include the horizontal lines. Commands in Konsole are not affected by the appearance of horizontal lines. It seems to be a display glitch, but it is not specific to any GPU (it affects both Intel and nvidia), to any display screen (I tested different monitors on the desktop), or to anything else I can determine. I tried various fixes on the affected desktop over the last ten months and I have not resolved it on that machine either. I have multiple other Arch KDE computers that do not have the problem. Does anyone have a clue as to what might cause this? Has anyone else seen it?

Edit: Please see the KDE bug report for Konsole: 373232 – Horizontal lines with fractional HiDPI scaling
This is reported to have been resolved in QTBUG-66036 with version 5.12. As of the time I am writing this, Qt on Arch Linux is version 5.11.2-1. Other common distros have also not released packages with Qt 5.12. However, when Qt 5.12 is released, the developers expect this issue to be resolved. To check your Qt version, you can open a terminal and type:

qmake --version

The output will look similar to this:

QMake version 3.1
Using Qt version 5.11.1 in /usr/lib

When you see Qt version 5.12, then you can expect a resolution. If not, let the developers know at QTBUG-66036 . In the meantime, there is a workaround, as described in the bug report:

Steps to reproduce:
Displays -> Scale -> Scale Factor: 1.3 (or 1.4, etc.)
Restart
Open Konsole or Kate, type stuff

Workaround: set Scale Factor back to 1.0 (or to an integer such as 2 or 3).

There is a similar bug report for Konsole here: 373232 – Horizontal lines with fractional HiDPI scaling https://bugs.kde.org/show_bug.cgi?id=373232
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414836", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
414,892
I have a bash script that runs as long as the Linux machine is powered on. I start it as shown below:

( /mnt/apps/start.sh 2>&1 | tee /tmp/nginx/debug_log.log ) &

After it launches, I can see the tee command in my ps output as shown below:

$ ps | grep tee
  418 root      0:02 tee /tmp/nginx/debug_log.log
 3557 root      0:00 grep tee

I have a function that monitors the size of the log that tee produces and kills the tee command when the log reaches a certain size:

monitor_debug_log_size() {
    ## Monitor the file size of the debug log to make sure it does not get too big
    while true; do
        cecho r "CHECKING DEBUG LOG SIZE... "
        debugLogSizeBytes=$(stat -c%s "/tmp/nginx/debug_log.log")
        cecho r "DEBUG LOG SIZE: $debugLogSizeBytes"
        if [ $((debugLogSizeBytes)) -gt 100000 ]; then
            cecho r "DEBUG LOG HAS GROWN TOO LARGE... "
            sleep 3
            #rm -rf /tmp/nginx/debug_log.log 1>/dev/null 2>/dev/null
            kill -9 `pgrep -f tee`
        fi
        sleep 30
    done
}

To my surprise, killing the tee command also kills my start.sh instance. Why is this? How can I end the tee command but have my start.sh continue to run? Thanks.
When tee terminates, the command feeding it will continue to run, until it attempts to write more output. Then it will get a SIGPIPE (13 on most systems) for trying to write to a pipe with no readers. If you modify your script to trap SIGPIPE and take some appropriate action (like, stop writing output), then you should be able to have it continue after tee is terminated. Better yet, rather than killing tee at all, use logrotate with the copytruncate option for simplicity. To quote logrotate(8) : copytruncate Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
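For instance, a minimal logrotate sketch for the setup in the question might look like this (the path and the 100k threshold are taken from the question's script; the rotation count is an arbitrary choice):

# /etc/logrotate.d/debug_log -- hypothetical config
/tmp/nginx/debug_log.log {
    size 100k
    rotate 3
    copytruncate
    missingok
}

With that in place, logrotate truncates the file in place and tee keeps writing to the same open file descriptor, so neither tee nor start.sh needs to be killed.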
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/414892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63801/" ] }
414,926
Currently running Antergos Linux. The set-up I would like to have is the following. Pressing and releasing CAPS sends ESC. The combinations of CAPS and any of CAPS+h, CAPS+j, CAPS+k, CAPS+l send Left, Down, Up, and Right respectively. Upon release of CAPS, ESC is not sent. Of course, the goal here is to get some VIM-style bindings in programs which do not have them. It seems that xcape would be useful here: https://github.com/alols/xcape But the examples do not get me quite as far as I would like. Any help is appreciated. EDIT: I came across a very useful answer here: https://unix.stackexchange.com/a/163675/267068 Can anybody help me figure how to modify the procedure so that I get CAPS+hjkl as needed. Could I use Hyper_L, instead of the Super_L in that answer, and then map Hyper_L + hjkl to left, down, up, right?
I wanted to do the exact same thing, and after some searching and experimenting, finally got it working.

Solution 1

See solution 2 below, which is potentially better.

Mapping Caps_lock + h j k l : Follow this answer and add the config. You should add to the us file if you are using the US keyboard layout, and skip the other keybindings that you're not interested in. Then run setxkbmap -layout us .

Caps_lock as Esc : Run xcape -e 'ISO_Level3_Shift=Escape' . You can add this line to your /etc/profile so you don't have to run it manually after reboot.

Solution 2 (probably better)

I was happy with solution 1, until I realized I couldn't use the key bindings in IntelliJ, which is a big bummer. Eventually I figured out that I could just use xmodmap and xcape to do the job, while still being able to use them in IntelliJ!

Mapping Caps_lock + h j k l : Create a file (say ~/.xmodmap ) with the following content:

keycode 66 = Mode_switch
keysym h = h H Left
keysym l = l L Right
keysym k = k K Up
keysym j = j J Down
keysym u = u U Prior
keysym i = i I Home
keysym o = o O End
keysym p = p P Next

Feel free to skip the last 4 lines. I pasted them because they might be useful to you as well. In fact I'm really hoping to get the caps_lock enhancement working in Linux. Then, run xmodmap ~/.xmodmap .

Caps_lock as Esc : Run xcape -e 'Mode_switch=Escape' .

Optional: To avoid manually applying the keybindings, put the above 2 commands into your /etc/profile .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/414926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267068/" ] }
414,991
(N.B. in the below, I have replaced the domain of my VPS hosting provider with <my_hosting_provider> , for privacy.) My Debian 9.3 "Stretch" instance is showing a kernel update as being available: # apt list --upgradable -aListing... Donelinux-image-amd64/stable 4.9+80+deb9u3 amd64 [upgradable from: 4.9+80+deb9u2]linux-image-amd64/stable,now 4.9+80+deb9u2 amd64 [installed,upgradable to: 4.9+80+deb9u3] I believe 4.9+80+deb9u3 is the same as 4.9.65-3+deb9u2 , a recent kernel security update intended to address CVE-2017-5754 , aka Meltdown . Default config fails to install kernel security update The default contents of Unattended-Upgrade::Origins-Pattern in /etc/apt/apt.conf.d/50unattended-upgrades is: Unattended-Upgrade::Origins-Pattern { "origin=Debian,codename=${distro_codename},label=Debian-Security";}; With that configuration in place, the kernel security update fails to be installed: # unattended-upgrades -v -dInitial blacklisted packages: Initial whitelisted packages: Starting unattended upgrades scriptAllowed origins are: ['origin=Debian,codename=stretch,label=Debian-Security']Checking: linux-image-amd64 ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian-Security' site:'mirror.<my_hosting_provider>.com' isTrusted:True>])pkg 'firmware-linux-free' not in allowed originsanity check failedpkgs that look like they should be upgraded: Fetched 0 B in 0s (0 B/s) fetch.run() result: 0blacklist: []whitelist: []Packages that will be upgraded: InstCount=0 DelCount=0 BrokenCount=0Extracting content from '/var/log/unattended-upgrades/unattended-upgrades-dpkg.log' since '2018-01-05 13:11:22'Sending mail to 'root'mail returned: 0 Modified config installs kernel security update If I change Unattended-Upgrade::Origins-Pattern in /etc/apt/apt.conf.d/50unattended-upgrades to read Unattended-Upgrade::Origins-Pattern { "origin=Debian,codename=${distro_codename},label=Debian"; "origin=Debian,codename=${distro_codename},label=Debian-Security";}; then the security update is found and installed: # unattended-upgrades -v -dInitial blacklisted packages: Initial whitelisted packages: Starting unattended upgrades scriptAllowed origins are: ['origin=Debian,codename=stretch,label=Debian', 'origin=Debian,codename=stretch,label=Debian-Security']Checking: linux-image-amd64 ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian-Security' site:'mirror.<my_hosting_provider>.com' isTrusted:True>])pkgs that look like they should be upgraded: linux-image-amd64Fetched 0 B in 0s (0 B/s) fetch.run() result: 0<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 1 IsTrusted: 1 FileSize: 19196 DestFile:'/var/cache/apt/archives/firmware-linux-free_3.4_all.deb' DescURI: 'http://mirror.<my_hosting_provider>.com/debian/pool/main/f/firmware-free/firmware-linux-free_3.4_all.deb' ID:0 ErrorText: ''>check_conffile_prompt('/var/cache/apt/archives/firmware-linux-free_3.4_all.deb')No conffiles in deb '/var/cache/apt/archives/firmware-linux-free_3.4_all.deb' (There is no member named 'conffiles')<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 1 IsTrusted: 1 FileSize: 33252 DestFile:'/var/cache/apt/archives/libnuma1_2.0.11-2.1_amd64.deb' DescURI: 'http://mirror.<my_hosting_provider>.com/debian/pool/main/n/numactl/libnuma1_2.0.11-2.1_amd64.deb' ID:0 ErrorText: ''>check_conffile_prompt('/var/cache/apt/archives/libnuma1_2.0.11-2.1_amd64.deb')No conffiles in deb '/var/cache/apt/archives/libnuma1_2.0.11-2.1_amd64.deb' (There is no member named 'conffiles')<apt_pkg.AcquireItem object:Status: 2 
Complete: 1 Local: 1 IsTrusted: 1 FileSize: 38768102 DestFile:'/var/cache/apt/archives/linux-image-4.9.0-5-amd64_4.9.65-3+deb9u2_amd64.deb' DescURI: 'http://mirror.<my_hosting_provider>.com/debian-security/pool/updates/main/l/linux/linux-image-4.9.0-5-amd64_4.9.65-3+deb9u2_amd64.deb' ID:0 ErrorText: ''>check_conffile_prompt('/var/cache/apt/archives/linux-image-4.9.0-5-amd64_4.9.65-3+deb9u2_amd64.deb')No conffiles in deb '/var/cache/apt/archives/linux-image-4.9.0-5-amd64_4.9.65-3+deb9u2_amd64.deb' (There is no member named 'conffiles')<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 1 IsTrusted: 1 FileSize: 6994 DestFile:'/var/cache/apt/archives/linux-image-amd64_4.9+80+deb9u3_amd64.deb' DescURI: 'http://mirror.<my_hosting_provider>.com/debian-security/pool/updates/main/l/linux-latest/linux-image-amd64_4.9+80+deb9u3_amd64.deb' ID:0 ErrorText: ''>check_conffile_prompt('/var/cache/apt/archives/linux-image-amd64_4.9+80+deb9u3_amd64.deb')found pkg: linux-image-amd64No conffiles in deb '/var/cache/apt/archives/linux-image-amd64_4.9+80+deb9u3_amd64.deb' (There is no member named 'conffiles')<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 1 IsTrusted: 1 FileSize: 40396 DestFile:'/var/cache/apt/archives/irqbalance_1.1.0-2.3_amd64.deb' DescURI: 'http://mirror.<my_hosting_provider>.com/debian/pool/main/i/irqbalance/irqbalance_1.1.0-2.3_amd64.deb' ID:0 ErrorText: ''>check_conffile_prompt('/var/cache/apt/archives/irqbalance_1.1.0-2.3_amd64.deb')blacklist: []whitelist: []Packages that will be upgraded: linux-image-amd64Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg.log'apt-listchanges: Reading changelogs...Preconfiguring packages ...Selecting previously unselected package firmware-linux-free.(Reading database ... 45465 files and directories currently installed.)Preparing to unpack .../firmware-linux-free_3.4_all.deb ...Unpacking firmware-linux-free (3.4) ...Selecting previously unselected package libnuma1:amd64.Preparing to unpack .../libnuma1_2.0.11-2.1_amd64.deb ...Unpacking libnuma1:amd64 (2.0.11-2.1) ...Selecting previously unselected package linux-image-4.9.0-5-amd64.Preparing to unpack .../linux-image-4.9.0-5-amd64_4.9.65-3+deb9u2_amd64.deb ...Unpacking linux-image-4.9.0-5-amd64 (4.9.65-3+deb9u2) ...Preparing to unpack .../linux-image-amd64_4.9+80+deb9u3_amd64.deb ...Unpacking linux-image-amd64 (4.9+80+deb9u3) over (4.9+80+deb9u2) ...Selecting previously unselected package irqbalance.Preparing to unpack .../irqbalance_1.1.0-2.3_amd64.deb ...Unpacking irqbalance (1.1.0-2.3) ...Setting up libnuma1:amd64 (2.0.11-2.1) ...Setting up linux-image-4.9.0-5-amd64 (4.9.65-3+deb9u2) ...I: /vmlinuz.old is now a symlink to boot/vmlinuz-4.9.0-4-amd64I: /initrd.img.old is now a symlink to boot/initrd.img-4.9.0-4-amd64I: /vmlinuz is now a symlink to boot/vmlinuz-4.9.0-5-amd64I: /initrd.img is now a symlink to boot/initrd.img-4.9.0-5-amd64/etc/kernel/postinst.d/initramfs-tools:update-initramfs: Generating /boot/initrd.img-4.9.0-5-amd64/etc/kernel/postinst.d/zz-update-grub:Generating grub configuration file ...Found linux image: /boot/vmlinuz-4.9.0-5-amd64Found initrd image: /boot/initrd.img-4.9.0-5-amd64Found linux image: /boot/vmlinuz-4.9.0-4-amd64Found initrd image: /boot/initrd.img-4.9.0-4-amd64Found linux image: /boot/vmlinuz-4.9.0-3-amd64Found initrd image: /boot/initrd.img-4.9.0-3-amd64doneSetting up linux-image-amd64 (4.9+80+deb9u3) ...Processing triggers for libc-bin (2.24-11+deb9u1) ...Processing triggers for systemd (232-25+deb9u1) ...Setting up 
firmware-linux-free (3.4) ...update-initramfs: deferring update (trigger activated)Processing triggers for man-db (2.7.6.1-2) ...Setting up irqbalance (1.1.0-2.3) ...Processing triggers for initramfs-tools (0.130) ...update-initramfs: Generating /boot/initrd.img-4.9.0-5-amd64Processing triggers for systemd (232-25+deb9u1) ...All upgrades installedInstCount=0 DelCount=0 BrokenCount=0Extracting content from '/var/log/unattended-upgrades/unattended-upgrades-dpkg.log' since '2018-01-05 13:24:35'Sending mail to 'root'mail returned: 0Found /var/run/reboot-required, rebooting Questions In relation to the failure to install security updates with the default configuration, should I file a bug against some part of Debian, or was it expected behaviour (and if so, why)? I only want unattended-upgrades to perform security updates. How can I achieve this, given that the default configuration did not succeed?
Based upon feedback in the comments above, this appears to be a bug. A corresponding bug report has now been filed here .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414991", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
414,996
I'm actually stuck on a stupid thing ... yet I can't get through it. We've got a git repository that holds our PHP files and our SQL patches. Every time I update my repo, I have to check if any SQL patch has to be played. To avoid searching for which ones to play, I've made a small script that gives me every SQL file that my last git pull brought in:

find $MY_DIR/scripts/sandbox/migrations -type f -newermt $(date +'%Y-%m-%d') ! -newermt $(date +'%Y-%m-%d' --date="tomorrow") -not -path "$MY_DIR/scripts/sandbox/migrations/generated/*"

This gives me an output that looks like this:

/home/carpette/www/myFolder/scripts/sandbox/migrations/done/2017.12.14 - script1.sql
/home/carpette/www/myFolder/scripts/sandbox/migrations/done/2017.09.28 - script2.sql
/home/carpette/www/myFolder/scripts/sandbox/migrations/done/2017.12.15 - script3.sql
/home/carpette/www/myFolder/scripts/sandbox/migrations/done/2017.12.12 - script4.sql

Now, I'm trying to source those files automatically in mysql. I've tried something like this:

mysql myDataBase < $(./myScript.sh)

But I get an error message about an ambiguous redirect. So I tried:

cat $(./myScript.sh) | mysql myDataBase

But now the spaces contained in my filename paths break it, and mysql says "no existing file" because it takes only /home/carpette/www/myFolder/scripts/sandbox/migrations/done/2017.09.28 as the filename path. I guess I have to escape the blanks, but I'm not finding any elegant solution that works.

Update: I want to keep myScript.sh independent, so that I can still use it for other things.
Your first attempt fails because $(...) produces a list of words, not a single file for the redirection, and the second breaks because the unquoted command substitution is split on whitespace, so each path containing spaces falls apart. Read the script's output line by line instead; a sketch using the database name from your question:

./myScript.sh | while IFS= read -r file; do
    mysql myDataBase < "$file"
done

IFS= read -r takes each line verbatim, and quoting "$file" keeps the embedded spaces intact, so every path reaches mysql as a single filename. This also leaves myScript.sh untouched, which matches your update.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/414996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83944/" ] }
415,004
I have a python program that I run via the command line (Mac OSX) as:

python -W ignore Experiment.py --iterations 10

The file Experiment.py should be run multiple times with different --iterations values. I do that manually, one after another: when one run is finished, I start the next one with a different --iterations value, and so on. However, I cannot always sit near my laptop to run all of them, so I am wondering whether there is a way, using a shell script, to state all the runs together and have the script execute them one after another (not in parallel, just sequentially, as I would have done myself)? Something like:

python -W ignore Experiment.py --iterations 10
python -W ignore Experiment.py --iterations 100
python -W ignore Experiment.py --iterations 1000
python -W ignore Experiment.py --iterations 10000
python -W ignore Experiment.py --iterations 100000

Edit: What if I have multiple arguments --X --Y --Z ?
You can use a for loop (note the flag is --iterations, as in your question):

for iterations in 10 100 1000 10000 100000; do
    python -W ignore Experiment.py --iterations "${iterations}"
done

If you have multiple parameters, and you want all the various permutations of all parameters, as @Fox noted in a comment below, you can use nested loops. Suppose, for example, you had a --name parameter whose values could be n1 , n2 , and n3 , then you could do:

for iterations in 10 100 1000 10000 100000; do
    for name in n1 n2 n3; do
        python -W ignore Experiment.py --iterations "${iterations}" --name "${name}"
    done
done

You could put that in a file, for example runExperiment.sh , and include this as the first line: #!/bin/bash . You could then run the script using either:

bash runExperiment.sh

Or, you could make the script executable, then run it:

chmod +x runExperiment.sh
./runExperiment.sh

If you're interested in some results before others, that'll guide how you structure the loops. In my example above, the script will run:

... --iterations 10 --name n1
... --iterations 10 --name n2
... --iterations 10 --name n3
... --iterations 100 --name n1

So it runs all experiments for iteration count 10 before moving on to the next count. If instead you wanted all experiments for name n1 before moving on to the next name, you could make the name loop the "outer" loop:

for name in ...; do
    for iterations in ...; do
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269028/" ] }
415,040
When I try to grep the below expression. $ grep -A2 "IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready" log.txt I get the result as - date-time kern servname: []: info [ 83.262033] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@date-time syslog servname syslog-ng[10831]: notice syslog-ng starting up; version='3.2.5'date-time kern servname: []: info [ 0.000000] Initializing cgroup subsys cpuset Now the @@@@ represents a system crash I believe, how can I grep for just '@^@'. I tried grep @^ file.txt , grep '@^' file.txt , grep '@.*@.*@' file.txt and some other expressions without luck.
When the system is rebooted without files first being flushed it is possible that a file being written to will have the new size but the data has not yet been written to disk. In such a case the hole in the file will contain NUL characters. (It is also possible to deliberately create files with holes in them without a restart, but I don't think that is applicable to your scenario.) Some tools will display NUL characters as ^@ which is a placeholder for a single non-printable character and is totally different from a ^ followed by a @ , which is why your grep command won't work. With that information I was able to find an answer on a sister site. The solution suggested there is to use the following arguments for grep: grep -Pa '\x00' I have tested that this works for me. Notice that using -P or -a alone does not work, you do need both of them before \x00 will work.
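If you want to reproduce the effect, a quick sketch: truncate creates a sparse file whose hole reads back as NUL bytes, which the grep invocation above then finds (-c prints a count of matching lines):

$ truncate -s 1M holey.txt
$ grep -Pac '\x00' holey.txt
1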
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260846/" ] }
415,057
Is there a linux command to return the last x % of a file? I know tail can return a number of lines (-n) or number of bytes (-c), but what if I wanted to get the last 25% of a file? Is there a command to do that?
GNU split can do pretty much what you ask; given a text file in.txt , this will print the last quarter (part 4 out of 4) in terms of number of bytes (not lines), without splitting lines:

split -n l/4/4 in.txt

Here is the relevant documentation for split -n CHUNKS :

CHUNKS may be: [...]
l/K/N   output Kth of N to stdout without splitting lines

In the very specific case mentioned as an example in the question, 4/4 requests the fourth quarter, or the last 25% of the input file. For sizes that are not 1/n of the input, I do not think split provides such a straightforward solution.
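For an arbitrary percentage, a byte-based sketch with stat and tail works, though unlike split -n l/... it may cut a line in half at the boundary:

size=$(stat -c %s in.txt)              # file size in bytes (GNU stat)
tail -c "$(( size * 25 / 100 ))" in.txt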
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82788/" ] }
415,069
I'd like to simplify this command: mkdir -p {fg0,fg1,fg2,fg3,fg4,fg5,fg6,fg7,fg8,fg9,fg10,fg11,fg12,fg13,fg14} The goal is to create an n number of folders where the numbers increment. I could potentially have 100's of folders, and find it unpractical to add each one individually. Any thoughts on how to simplify this with a single command?
Assuming bash, you can create numeric sequences using the brace expansion {i..n} . So, to keep the directory names balanced, per @cas's suggestion, I advise:

mkdir -p fg{00..14}

With newer bash versions (4.x) this creates fg00 ... fg14, zero-padded so the names sort and line up nicely; older bash versions (3.x) ignore the padding and create fg0, fg1, ... instead. If fg0...fg14 is still a requirement, as per your example, then it is indeed:

mkdir -p fg{0..14}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415069", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229561/" ] }
415,077
I want to add two hexadecimal variables in a bash script. I want them to start as hex and end in hex, not decimal. What I've come up with so far is a bit of a round about hack. Is there a better or more elegant solution? BASE=0xA000OFFSET=0x1000NEW_BASE=$(( $BASE + $OFFSET ))NEW_BASE=`printf "0x%X\n" $NEW_BASE`echo $NEW_BASE0xB000
I would just simplify your script as: printf "0x%X\n" $((0xA000 + 0x1000))
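With the variables from the question it stays a one-liner, since arithmetic expansion understands the 0x prefix natively:

BASE=0xA000
OFFSET=0x1000
printf "0x%X\n" $(( BASE + OFFSET ))    # prints 0xB000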
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98743/" ] }
415,078
Since Intel, AMD and ARM is affected by the Spectre and Meltdown cpu kernel memory leak bugs/flaws, could we say that Power architecture is safe from these?
No, you could not say it's safe. https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/ Complete mitigation of this vulnerability for Power Systems clients involves installing patches to both system firmware and operating systems. The firmware patch provides partial remediation to these vulnerabilities and is a pre-requisite for the OS patch to be effective. [...] Firmware patches for POWER7+, POWER8, and POWER9 platforms are now available via FixCentral. POWER7 patches will be available beginning February 7 . [...] AIX patches will be available beginning January 26 and will continue to be rolled out through February 12 . Update : patches available, http://aix.software.ibm.com/aix/efixes/security/spectre_meltdown_advisory.asc
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261470/" ] }
415,277
if I use the following command:

printf "%.0s┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃\n" {1..3}

I get an output like this:

┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃

How can I achieve the same result while getting the repeated chars from a variable? I tried this approach:

var="┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃"
printf '%.0s%s\n' {1..3} "$var"

but it does not work, I end up with this:

2┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
Use this:

$ var="┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃"
$ printf "$var"'%.0s\n' {1..3}
┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
415,290
Let's say I have FileA.txt which looks a bit like this: 43287134, string1, string21233, foo, bar973, barfoo, foobar7464, asdf, ghjk And I've got FileB.txt with these regexes, separated by a line: ^973,^1233, I would like to apply FileB.txt regexes onto FileA.txt , and delete the lines that match so that final result would be: 43287134, string1, string27464, asdf, ghjk Is there any tool available to do this? Thanks!
This is exactly what grep is designed for: grep -v -f FileB.txt FileA.txt -f <filename> reads regexes from the file (instead of command line) -v reverse the match (prints non-matching lines) Output: 43287134, string1, string27464, asdf, ghjk
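If you want to update FileA.txt in place rather than print to stdout, a small sketch (the temporary filename is arbitrary):

grep -v -f FileB.txt FileA.txt > FileA.tmp && mv FileA.tmp FileA.txt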
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269231/" ] }
415,300
I'd like to have two commands - the first to short the prompt to $␣ or #␣ .This should be permanent in the current shell, until I issue the command long_prompt to reset the prompt to the default version. This is the short_prompt command script: if [ $(id -u) = 0 ]; then PS1='\[\033[01;34m\]#\[\033[00m\] 'else PS1='\[\033[01;34m\]\$\[\033[00m\] 'fi (As you see I honor the different prompts of admin shells and user shells) When I source this script with like so: $ . short_prompt all works as expected, but for this I have to be in the directory where short_prompt resides or give the whole pathname, like so: $ . /data/system/bin/short_prompt I can't seem to figure out how to make this command accessible from anywhere (e.g. by creating a soft link to one of the path directories, e.g. /usr/local/bin ) and type $ short_prompt or $ long_prompt I tried to add export in front of the two PS1='...' lines, but that did not work either.
When you run the script normally, it executes in a child process, so any PS1 it sets is lost when that process exits; that is why it only works when you source it with `.` . To make the commands available from anywhere, define them as shell functions in your ~/.bashrc instead of as external scripts. A sketch using the prompts from your question:

short_prompt() {
    if [ "$(id -u)" = 0 ]; then
        PS1='\[\033[01;34m\]#\[\033[00m\] '
    else
        PS1='\[\033[01;34m\]\$\[\033[00m\] '
    fi
}

long_prompt() {
    PS1="$DEFAULT_PS1"
}

Add DEFAULT_PS1=$PS1 to ~/.bashrc after the distribution's normal prompt setup, so long_prompt can restore the default. Once a new shell is started (or you run . ~/.bashrc once), short_prompt and long_prompt work from any directory, for both root and regular users. Note that export does not help here: exporting PS1 affects child processes, but the prompt you want to change belongs to the current shell, which is exactly why the script must be sourced rather than executed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269236/" ] }
415,315
I have this code : for file in "$@"*png; do echo "$file"done It works only if you provide a path that ends with / like /root/ . What would be the correct way to add / to the path input, in situations like this, without breaking my script? If you give a path input without / at the end, it just does this : File: /root*png If I modify it to be for file in "$@"/*png; do and input /root/test/ it works but the result looks ugly : File: /root/test//sample2.png
ilkkachu pointed out a major flaw in my answer and corrected it in his, so please give him the credit he deserves. I've come up with another solution though:

#!/bin/bash
for dir in "$@"; do
    find "$dir" -type f -name '*png' -exec readlink -f {} \;
done

Example :

$ ll
total 6
-rwxr-xr-x 1 root root 104 Jan  7 14:03 script.sh*
drwxr-xr-x 2 root root   3 Jan  7 04:21 test1/
drwxr-xr-x 2 root root   3 Jan  7 04:21 test2/
drwxr-xr-x 2 root root   3 Jan  7 04:21 test3/
$ for n in {1..3}; do ll "test$n"; done
total 1
-rw-r--r-- 1 root root 0 Jan  7 04:21 testfile.png
total 1
-rw-r--r-- 1 root root 0 Jan  7 04:21 testfile.png
total 1
-rw-r--r-- 1 root root 0 Jan  7 04:21 testfile.png
$ ./script.sh test1 test2/ test3
/root/temp/test1/testfile.png
/root/temp/test2/testfile.png
/root/temp/test3/testfile.png

Original Solution :

for file in "${@%/}/"*png; do
    echo "$file"
done

The ${@%/} will trim any / off the end of your parameters and then the / outside will add it back -- or add it to any parameter that didn't have one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119404/" ] }
415,327
I am trying to figure out a command that only shows matches whose counts are above a certain threshold. I am using

grep -src 'Bicycle' /cygdrive/c/Documents/* | grep -v ':0$'

and the output is:

/cygdrive/c/Documents/blahhh.txt:1
/cygdrive/c/Documents/blahhh.txt:3
/cygdrive/c/Documents/bla0.txt:5
/cygdrive/c/Documents/blahg.txt:23

But I only want it to output:

/cygdrive/c/Documents/blahg.txt:23

I have searched quite a bit for this one. If someone can lead me in the right direction it would be awesome.
One simple way to do this would be to pipe the output of grep to awk and parse it by setting the delimiter to : and checking whether the last field (the count) is greater than whatever threshold X you want to define:

grep -src 'Bicycle' /cygdrive/c/Documents/* | awk -F: '$NF+0 > 1'

In the example above, I've filtered for counts greater than 1 . Modify it as you need. The reason to prefer $NF+0 > 1 over just $NF > 1 is to force a purely numeric evaluation: if the field were empty or otherwise treated as a string, adding 0 coerces it to a number, so both sides of the comparison have the same type.

From How awk Converts Between Strings and Numbers

If, for some reason, you need to force a number to be converted to a string, concatenate that number with the empty string, "" . To force a string to be converted to a number, add zero to that string.
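A quick sketch of the difference; with a plain string on the left-hand side the comparison is lexical, while adding 0 makes it numeric:

$ awk 'BEGIN { s = "9"; print (s > 10); print (s+0 > 10) }'
1
0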
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269253/" ] }
415,366
The Linux filesystem hierarchy ( FHS ) contains a lot of important directories. For example, I just discovered /sys/class/input while playing with my PS/2 keyboard settings. But all those important directories are documented elsewhere, so man /sys/class/input doesn't work to explain what happens at a certain point. Why not place README files into the hierarchy to make it easier for people to learn what's going on at certain levels and play with the contents? It would be really awesome if devices could even mount their own README s.
To use your example: /sys/ doesn't contain "real" files, but is entirely provided by the kernel. Do you want all READMEs to become part of the kernel? You probably don't. Documentation is in /usr/share/doc . Which contains normal files on your harddisk. Some documentation about /sys and /proc is in the kernel source, that is in /usr/src/linux/Documentation (if you've installed the kernel source, and made the symlink for your current kernel).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47022/" ] }
415,421
I want to add a progress bar to my bash script that prints "." characters as progress, and the process should end after at most 180 seconds. In the bash script I use a curl command, which returns its results after some time, but never more than 180 seconds. Something like this:

|.                                  after 2 sec
|...........                        after 60 sec
|...................                after 100 sec
|..........................         after 150 sec
|................................|  after 180 sec

final example

|................................|  after 180 sec

or

|....|  after 30 sec
This is rather simple to do in just plain Bash:

#!/bin/bash
# progress bar function
prog() {
    local w=80 p=$1;  shift
    # create a string of spaces, then change them to dots
    printf -v dots "%*s" "$(( $p*$w/100 ))" "";  dots=${dots// /.};
    # print those dots on a fixed-width space plus the percentage etc.
    printf "\r\e[K|%-*s| %3d %% %s" "$w" "$dots" "$p" "$*";
}

# test loop
for x in {1..100} ; do
    prog "$x" still working...
    sleep .1   # do some work here
done ; echo

The first argument to prog is the percentage, any others are printed after the progress bar. The variable w in the function controls the width of the bar. Print a newline after you're done, the function doesn't print one.

Another possibility would be to use the pv tool. It's meant for measuring the throughput of a pipeline, but we can create one for it:

for x in {1..100} ; do
    sleep .1   # do some work here
    printf .
done | pv -pt -i0.2 -s100 -w 80 > /dev/null

Here, -pt enables the progress bar and the timer, -s 100 sets the total output size, and whatever we print inside the function counts against that size.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
415,422
Using Kazam 1.4.5 on Debian stretch, how do I stop recording with Kazam? The problem is that the icon on the task-bar does not allow any interaction, so I am looking for some keyboard shortcut; however, I could not find any. The result is that the video currently keeps recording forever, until I kill the process.
Obviously, I found the solution 5 minutes after to post the question. start recording: Super + Control + r pause recording: Super + Control + p finish recording: Super + Control + f show Kazam: Super + Control + s quit Kazam: Super + Control + q Note: Super is usually this "Windows logo" key.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/415422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142142/" ] }
415,433
When I moved my Ubuntu SSD to a larger SSD I managed to end up with this. So I assumed GParted would allow me to remove /dev/sda3 (it's empty) and then grow /dev/sda5 into the space created, but I'm clearly not understanding this process, as I can't find a way to do it. The data in /dev/sda5 must be kept.
Assuming the usual layout in which /dev/sda5 is a logical partition inside an extended partition while /dev/sda3 is a separate primary partition, the catch is that deleting sda3 leaves the freed space outside the extended partition, so GParted will not let you grow sda5 into it directly. The sequence that should work, run from a live USB so that sda5 is not mounted:

1. Delete /dev/sda3.
2. Resize the extended partition so it expands into the now-unallocated space.
3. Resize /dev/sda5 to fill the enlarged extended partition.
4. Apply the operations.

Growing a filesystem at its end preserves the existing data on sda5, but partition-table operations always carry some risk, so make a backup before you hit Apply.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/415433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269320/" ] }
415,439
Why does awk '/^[^\t]/{a++}END{print a}' not count empty lines (i.e. lines consisting only of a newline character)? Doesn't an empty line start with something other than a \t tab?
The reason is that [^\t] requires a character to match, and an empty line contains none; the terminating newline does not count as a character of the line. You need this:

awk '/^([^\t]|$)/{a++}END{print a}'
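A quick check with three lines, one normal, one empty and one tab-indented, confirms that the empty line is now counted:

$ printf 'a\n\n\tb\n' | awk '/^([^\t]|$)/{a++}END{print a}'
2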
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
415,446
ls -l only shows the modification timestamps of the files up to second. If two files have the same timestamp up to second, but were modified not exactly at the same time, will ls -lt order the files in the order of the accurate mtimes or just the approximate mtimes up to second (and therefore the order between the files can be arbitrary)?
That very much depends on the ls implementation. Of those 4 found on a GNU/Linux system here:

$ touch a; touch c; touch b; stat -c %y a c b
2018-01-10 12:52:21.367640342 +0000
2018-01-10 12:52:21.371640148 +0000
2018-01-10 12:52:21.375639952 +0000

GNU ls , the one from the GNU project (from the GNU coreutils collection). That's the one typically found on GNU systems like Debian (Linux or kFreeBSD kernels), Cygwin or Fedora.

$ gnu-ls -rt
a c b

The ls from the Heirloom Toolchest , a port of OpenSolaris tools:

$ heirloom-ls -rt
a b c

The ls from the AT&T Open Source collection , possibly built into ksh93 . Another one with quite a few fancy extensions:

$ ast-ls -rt
a c b
$ PATH=/opt/ast/bin:$PATH ksh93 -c 'type ls; ls -rt'
ls is a shell builtin version of /opt/ast/bin/ls
a c b

busybox (as found (or a derivative) on most (generally embedded) Linux-based systems):

$ busybox ls -rt
c b a

So, of those, GNU and ast ls consider the fractional-second part. The others fall back to lexical comparison for files last modified within the same second. Only busybox ls honours the -r there.

In my tests, FreeBSD's ls also supports sub-second precision (provided it is enabled at the VFS level, see the vfs.timestamp_precision sysctl).

zsh 's globs (with the om glob qualifier to order on modification time, Om for reverse order) also take the full time:

$ echo *(Om)
a c b

[ file1 -nt file2 ] , where supported, also generally supports sub-second granularity.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
415,477
Both ALAC and FLAC are lossless audio formats and files will usually have more or less the same size when converted from one format to the other.I use ffmpeg -i track.flac track.m4a to convert between these two formats but I notice that the resulting ALAC files are much smaller than the original ones. When using a converter software like the MediaHuman Audio Converter, the size of the ALACs will remain around the same size as the FLACs so I guess I'm missing some flags here that are causing ffmpeg to downsample the signal.
Ok, I was probably a little quick to ask here but for the sake of future reference here is the answer: One should pass the flag -acodec alac to ffmpeg for a lossless conversion between FLAC and ALAC: ffmpeg -i track.flac -acodec alac track.m4a
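If you have a whole directory to convert, a simple loop sketch:

for f in *.flac; do
    ffmpeg -i "$f" -acodec alac "${f%.flac}.m4a"
done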
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/415477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235207/" ] }
415,505
$ s=/the/path/foo.txt we can extract by different criterion separately $ echo ${s##*/}foo.txt$ echo ${s%.txt}/the/path/foo But if we want to extract according to both criterion at the same time, $ echo ${${s##*/}%.txt}bash: ${${s##*/}%.txt}: bad substitution Is it possible to achieve the same goal, using parameter expansion only and without introducing a temporary variable? Can a parameter expansion work inside another parameter expansion in some way? Thanks.
No and yes. In Bash or standard shell the first part of the expansion has to be a parameter (i.e. a variable or a positional parameter, or one of the special parameters), not just any word.

Bash :

The basic form of parameter expansion is ${ parameter }. The value of parameter is substituted. The parameter is a shell parameter as described above or an array reference.

The text in POSIX similarly only mentions parameters. You can use an expansion in the other parts of the expansions, since they can be arbitrary words. But that of course doesn't help in chaining manipulations of the same string (like in your example ${${s##*/}%.txt} ):

$ bash -c 's=/the/path/foo.txt; ext=.tyt; echo "${s%${ext/y/x}}"'
/the/path/foo

Zsh explicitly supports chaining, though:

If a ${...} type parameter expression or a $(...) type command substitution is used in place of name above, it is expanded first and the result is used as if it were the value of name .
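So the exact expression from the question works unchanged in zsh:

$ zsh -c 's=/the/path/foo.txt; echo ${${s##*/}%.txt}'
foo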
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
415,506
I have installed PHP version 7 on my centOs. Now I got an error says: undefined function mb_internal_encoding() So I decided to install php-mbstring (using yum install php-mbstring ) but I face the following error: Error: php70u-common conflicts with php-common-5.4.16-43.el7_4.x86_64 Error: php70u-json conflicts with php-common-5.4.16-43.el7_4.x86_64 But now when I install php-mbstring, it wants to install php-mbstring version 5.4. How can I tell yum to download latest versions of php extensions and packages?
The php70u-* package names suggest the packages come from a third-party repository (IUS), and those cannot coexist with CentOS's base php-* 5.4 packages; that is exactly what the conflict messages are saying. To get the PHP 7.0 build of an extension, install the matching php70u package rather than the base-repository one. A sketch, assuming the IUS naming scheme:

yum remove php-common        # drop the conflicting PHP 5.4 packages first
yum install php70u-mbstring  # the 7.0 build of mbstring

In general, every extension for this PHP stack is named php70u-<extension> (php70u-mbstring, php70u-gd, ...), while plain php-* names always resolve to the 5.4 packages from the base repository, which is why yum kept offering you version 5.4.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269389/" ] }
415,521
I created a service. admin@Xroklaus:~ $ cat /etc/systemd/user/duniter.service [Unit]Description=Duniter nodeAfter=network.targetConditionPathExists=/home/folatt/.config/duniter/duniter_default/duniter.db[Service]Group=folattUser=folattType=forkingExecStart=/usr/bin/duniter webstartExecReload=/usr/bin/duniter webrestartExecStop=/usr/bin/duniter stopRestart=on-failure[Install]WantedBy=multi-user.target After rebooting, it does not load. folatt@Xroklaus:~ $ systemctl --user status duniter.service● duniter.service - Duniter node Loaded: loaded (/etc/systemd/user/duniter.service; enabled) Active: failed (Result: start-limit) since Sun 2018-01-07 20:31:43 UTC; 1min 3s ago Process: 2212 ExecStart=/usr/bin/duniter webstart (code=exited, status=216/GROUP) Journalctl gives a bit more information of the error. admin@Xroklaus:~ $ sudo journalctl -p 3 --no-pager-- Logs begin at Sun 2018-01-07 20:30:33 UTC, end at Sun 2018-01-07 20:31:49 UTC. --Jan 07 20:30:39 Xroklaus bluetoothd[876]: Sap driver initialization failed.Jan 07 20:30:39 Xroklaus bluetoothd[876]: sap-server: Operation not permitted (1)Jan 07 20:31:26 Xroklaus systemd[1]: Failed to start LSB: Start and stop the mysql database server daemon.Jan 07 20:31:42 Xroklaus systemd[2203]: Failed at step GROUP spawning /usr/bin/duniter: Operation not permittedJan 07 20:31:42 Xroklaus systemd[2177]: Failed to start Duniter node.Jan 07 20:31:42 Xroklaus systemd[2206]: Failed at step GROUP spawning /usr/bin/duniter: Operation not permittedJan 07 20:31:42 Xroklaus systemd[2177]: Failed to start Duniter node.Jan 07 20:31:43 Xroklaus systemd[2208]: Failed at step GROUP spawning /usr/bin/duniter: Operation not permittedJan 07 20:31:43 Xroklaus systemd[2177]: Failed to start Duniter node.Jan 07 20:31:43 Xroklaus systemd[2210]: Failed at step GROUP spawning /usr/bin/duniter: Operation not permittedJan 07 20:31:43 Xroklaus systemd[2177]: Failed to start Duniter node.Jan 07 20:31:43 Xroklaus systemd[2212]: Failed at step GROUP spawning /usr/bin/duniter: Operation not permittedJan 07 20:31:43 Xroklaus systemd[2177]: Failed to start Duniter node.Jan 07 20:31:43 Xroklaus systemd[2177]: Failed to start Duniter node. But that's as far as I got. I don't know what the solution is to this.
I moved the service file and removed the user and group, while also changing the install target, like this:

/usr/lib/systemd/user/duniter.service

[Unit]
Description=Duniter node
After=network.target
ConditionPathExists=/home/folatt/.config/duniter/duniter_default/duniter.db

[Service]
Type=forking
ExecStart=/usr/bin/duniter webstart
ExecReload=/usr/bin/duniter webrestart
ExecStop=/usr/bin/duniter stop
Restart=on-failure

[Install]
WantedBy=default.target
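After moving the file, a reload followed by re-enabling the unit is typically needed for the change to take effect; a sketch, run as the regular user:

systemctl --user daemon-reload
systemctl --user enable duniter.service
systemctl --user start duniter.service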
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126513/" ] }
415,546
I have this : 2018:01:02-23:52:482018:01:02-23:52:482018:01:02-23:52:482018:01:03-09:26:202018:01:03-09:26:20 I want to keep the date, but not the hour in order to sort the number of messages per day : 2018:01:022018:01:022018:01:022018:01:032018:01:03 I want to do it with awk if possible.
awk

awk -F- '$0=$1' file

cut

cut -d- -f1 file

sed

sed 's/-.*//' file

perl

perl -pe 's/-.*//' file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268605/" ] }
415,560
I ran the following in Debian 9.3: cd /usr/local/srcwget http://www.rfxn.com/downloads/maldetect-current.tar.gz The file was downloaded just fine, and yet, when I execute: tar -xzfv maldetect-current.tar.gz I get: tar (child): v: Cannot open: No such file or directory tar (child):Error is not recoverable: exiting now tar: Child returned status 2tar: Error is not recoverable: exiting now But ls -la shows that the file does indeed exist: /usr/local/src# ls -latotal 3144drwxrwsr-x 2 root staff 4096 Jan 8 11:46 .drwxrwsr-x 11 root staff 4096 Jan 8 11:40 ..-rw-r--r-- 1 root staff 1605546 Jul 14 04:45 maldetect-current.tar.gz
The filename should follow immediately after the f option. tar -xzvf maldetect-current.tar.gz
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
415,588
How to separate strings and numbers from one line using a bash command. Example: I have a string containing

string123anotherstr456thenanotherstr789

The output should be:

string
123
anotherstr
456
thenanotherstr
789
GNU grep or compatible solution:

s="string123anotherstr456thenanotherstr789"
grep -Eo '[[:alpha:]]+|[0-9]+' <<<"$s"

[[:alpha:]]+|[0-9]+ - regex alternation group, matches either alphabetic character(s) or number(s); both will be considered as separate entries on output

The output:

string
123
anotherstr
456
thenanotherstr
789
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112308/" ] }
415,614
I'm trying to look for a command that outputs 1 or 0 depending on whether I have my output muted. I was trying this: amixer sget Master This is the output I get: Simple mixer control 'Master',0 Capabilities: pvolume pvolume-joined pswitch pswitch-joined Playback channels: Mono Limits: Playback 0 - 64 Mono: Playback 64 [100%] [0.00dB] [on] Now the values change here accordingly (if I decrease volume, the percentage and the dB values change). However, if I head into Pavucontrol and mute my output on the 'Output devices' tab, the output of the command above stays the same. Literally nothing changes. But my sound indeed is muted. What command should I use? Why doesn't that [on] change to [off] ? Shouldn't it? Thanks in advance.
After a long search, I actually managed to find an answer. This might be helpful for others looking for something like this out there! What you need: pacmd list-sinks This command has a line like this: muted: no And this no indeed does change to yes , when I mute my device. Perfect. I managed to strip the output using this command, should anyone need it: pacmd list-sinks | awk '/muted/ { print $2 }' Sorry for posting too soon, it seemed like I won't be able to find a solution anywhere.
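A related sketch, in case you also want to flip the state from a script rather than just read it (assuming a PulseAudio recent enough to support the @DEFAULT_SINK@ shortcut):

pactl set-sink-mute @DEFAULT_SINK@ toggle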
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255876/" ] }
415,618
I want to do a full format as opposed to a quick format of a 500 GB exfat USB stick. mkfs.exfat seems to be just a quick format since it's immediately done. The reason I want a full format is that files get corrupted when I copy them to the stick, but seemingly only after about 20 or 30 GB. chkdsk on windows always removes the corrupted files. I also checked for bad sectors, but none are found, same happens on a second PC, so it's probably not a driver issue. Right now I want to do a full format, which takes very long, but I don't want my PC to be on for two days (very slow stick), so I want to use my Raspberry Pi for it. I tried Gparted, but it doesn't support exfat. I feel like it should be pretty easy, just a command to format the stick in exfat, but no quick format
mkfs.exfat has no full-format option; the slow format that Windows performs is essentially a write pass over the whole device followed by creating the filesystem, and you can reproduce that on Linux (including on a Raspberry Pi). A sketch, assuming the stick is /dev/sdX; verify the device name with lsblk first, because this destroys everything on it:

dd if=/dev/zero of=/dev/sdX bs=4M status=progress   # the slow surface write
mkfs.exfat /dev/sdX1   # re-create the partition with fdisk first, or mkfs the whole device

Since your real goal is to catch the corruption, badblocks -w -s /dev/sdX may be the better tool: it is a destructive write test that writes patterns to the whole device and reads them back, so a stick that silently corrupts data after 20-30 GB should fail the verification pass. Run mkfs.exfat afterwards either way.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
415,645
In my Bash script the user can enter a MAC address in the following way:

read -p "enter mac-address " mac_address

(Note the variable name must be a valid shell identifier, so mac_address rather than mac-address.) Now I want to check in an if-statement whether this MAC address matches a "specific" format, i.e. it should look like FF:FF:FF:FF:FF:FF and not FFFFFFFFFFFF . The length should also be correct: 6x2.
The lazy way is just to run

if [[ $mac_address == ??:??:??:??:??:?? ]]; then echo Heureka; fi

but this doesn't check whether it's a hex string. So if this is important,

if [[ $mac_address =~ ^[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]$ ]]; then echo Heureka; fi

might be better. The latter can be shortened to

if [[ $mac_address =~ ^([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}$ ]]; then echo Heureka; fi

If the pattern matches, I don't see a need to check for the correct length as well.
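Tying it back to the read from the question, a complete sketch:

read -p "enter mac-address " mac_address
if [[ $mac_address =~ ^([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}$ ]]; then
    echo "valid MAC address"
else
    echo "invalid MAC address" >&2
fi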
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/415645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210309/" ] }
415,654
I was creating a symbolic link to folder1/folder2 in home-folder . But I accidentally did: ln -s folder1/folder2 while in folder2 instead of in home-folder . So I ended up accidentally creating a sort-of-recursive link. Now I can't remove this link: rm folder1/folder2 gives the error message 'folder1/folder2' Is a directory . I'm scared to go for rmdir or rm -rf because I'm not sure what will be attempted to be deleted, the link or folder2 . This is especially an issue since folder1/folder2 is a shared folder and I don't want to mess this up for other users on the server.
When you have a symbolic link to a directory, if you add a trailing slash to the name then you get the directory itself, not the symlink. As a result: rm link/ will try to remove the directory. What you want is to specify the link name only without a trailing slash: rm link That should enable you to remove the link.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269499/" ] }
415,679
I'm familiar with "jq" for parsing json. I work with one service that produces a json response where one of the properties is itself a json string. How do I convert that quoted value to a valid json string so I can then process it with jq? For instance, if I just view the plain pretty-printed json from "jq .", here's a short excerpt of the output: "someJsonString": "{\"date\":\"2018-01-08\", ... I can use jq to get the value of that property, but I need to convert the quoted string to valid json by "unescaping" it. I suppose I could pipe it into sed, removing the opening and ending double quotes, and removing all backslashes (" sed -e 's/^"//' -e 's/"$//' -e 's/\\//g' "). That seems to work, but that doesn't seem like the most robust solution. Update : Just to be a little clearer on what I'm doing, here are a couple of elided samples that show what I've tried: % curl -s -q -L 'http://.../1524.json' | jq '.results[0].someJsonString' | jq ."{\"date\":\"2018-01-08\",...% echo $(curl -s -q -L 'http:/.../1524.json' | jq '.results[0].someJsonString') | jq ."{\"date\":\"2018-01-08\",... Update : Here's a completely standalone example: % cat stuff.json | jq .{ "stuff": "{\"date\":\"2018-01-08\"}"}% cat stuff.json | jq '.stuff'"{\"date\":\"2018-01-08\"}"% cat stuff.json | jq '.stuff' | jq ."{\"date\":\"2018-01-08\"}" Update : If I tried to process that last output with a real jq expression, it does something like this: % cat stuff.json | jq '.stuff' | jq '.date'assertion "cb == jq_util_input_next_input_cb" failed: file "/usr/src/ports/jq/jq-1.5-3.x86_64/src/jq-1.5/util.c", line 371, function: jq_util_input_get_positionAborted (core dumped)
With jq 's fromjson function: Sample stuff.json contents: { "stuff": "{\"date\":\"2018-01-08\"}"} jq -c '.stuff | fromjson' stuff.json The output: {"date":"2018-01-08"}
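Once fromjson has parsed the embedded document, it can be filtered like any other JSON; for example, pulling the date field out of the sample above:

jq -r '.stuff | fromjson | .date' stuff.json

which prints 2018-01-08 ( -r strips the surrounding quotes).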
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/415679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123728/" ] }
415,776
Within a programming language, I execute a simple shell command cd var; echo > create_a_file_here with var being a variable that (hopefully) contains the path of the directory where I want to create the file "create_a_file_here". Now if someone sees this line of code, it is possible to exploit it by assigning, for instance: var = "; rm -rf /" Things can get pretty ugly. One way to avoid the above case might be to search the string in var for special characters like ';' before executing the shell command, but I doubt this covers all possible exploits. Does anyone know a good way to ensure that "cd var" only changes a directory and nothing else?
If I understand correctly, var is a variable in your programming language. And in your programming language, you're asking a shell to interpret a string that is the concatenation of "cd " , the content of that variable and "; echo > create_a_file_here" . If so, then yes, if the content of var is not tightly controlled, it's a command injection vulnerability. You could try and properly quote the content of the variable¹ in the syntax of the shell so it is guaranteed to be passed as a single argument to the cd builtin. Another approach would be to pass the content of that variable another way. An obvious way would be to pass that in an environment variable. For instance, in C: char *var = "; rm -rf /";setenv("DIR", var, 1);system("CDPATH= cd -P -- \"$DIR\" && echo something > create_a_file_here"); This time, the code that you ask the shell to interpret is fixed; we still need to write it properly in the shell's syntax (here assumed to be a POSIX-compliant shell): shell variable expansion must be quoted to prevent split+glob you need -P for cd to do a simple chdir() you need -- to mark the end of options to avoid problems with var starting with - (or + in some shells) We set CDPATH to the empty string in case it is in the environment We only run the echo command if cd was successful. There is (at least) one remaining problem: if var is - , it doesn't chdir into the directory called - but to the previous directory (as stored in $OLDPWD ), and OLDPWD=- CDPATH= cd -P -- "$DIR" is not guaranteed to work around it. So you'd need something like: system( "case $DIR in\n" " (-) CDPATH= cd -P ./-;;\n" " (*) CDPATH= cd -P -- \"$DIR\";;\n" "esac && ...."); ¹ Note that just doing a system(concat("cd \"", var, "\"; echo...")); is not the way to go, you'd just be moving the problem. For instance, a var = "$(rm -rf /)" would still be a problem. The only reliable way to quote text for Bourne-like shells is to use single quotes and also take care of the single quotes that may occur in the string. For instance, turn a char *var = "ab'cd" into char *escaped_var = "'ab'\\''cd'" . That is, replace every ' with '\'' and wrap the whole thing inside '...' . That still assumes that the quoted string is not used within backticks, and you'd still need the -- , -P , && , CDPATH= ...
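If the calling program happens to be a shell script itself, bash can generate that single-quote-escaped form for you (a sketch; printf %q is a bash builtin feature, and ${var@Q} does the same from bash 4.4 on):

var="ab'cd; rm -rf /"
printf -v quoted '%q' "$var"   # quoted now expands to a form safe to embed in a command string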
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269576/" ] }
415,787
Let's say I have a machine (Arago dist) with a user password of 12 alphanumerical characters. When I log myself in via ssh using password authentication, I noticed a couple of days ago, that I can either only input 8 of the password characters or the whole password followed with whatever I'd like. The common outcome in both situations is a successful login. Why is this happening? In this particular case, I don't want to use Public key authentication based on multiple reasons. As an additional info, in this distro the files /etc/shadow and /etc/security/policy.conf are missing. Here the server ssh config: [user@machine:~] cat /etc/ssh/sshd_config# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $# This is the sshd server system-wide configuration file. See# sshd_config(5) for more information.# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin# The strategy used for options in the default sshd_config shipped with# OpenSSH is to specify options with their default value where# possible, but leave them commented. Uncommented options change a# default value.Banner /etc/ssh/welcome.msg#Port 22#AddressFamily any#ListenAddress 0.0.0.0#ListenAddress ::# Disable legacy (protocol version 1) support in the server for new# installations. In future the default will change to require explicit# activation of protocol 1Protocol 2# HostKey for protocol version 1#HostKey /etc/ssh/ssh_host_key# HostKeys for protocol version 2#HostKey /etc/ssh/ssh_host_rsa_key#HostKey /etc/ssh/ssh_host_dsa_key# Lifetime and size of ephemeral version 1 server key#KeyRegenerationInterval 1h#ServerKeyBits 1024# Logging# obsoletes QuietMode and FascistLogging#SyslogFacility AUTH#LogLevel INFO# Authentication:#LoginGraceTime 2mPermitRootLogin no#StrictModes yes#MaxAuthTries 6#MaxSessions 10#RSAAuthentication yes#PubkeyAuthentication yes#AuthorizedKeysFile .ssh/authorized_keys# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts#RhostsRSAAuthentication no# similar for protocol version 2#HostbasedAuthentication no# Change to yes if you don't trust ~/.ssh/known_hosts for# RhostsRSAAuthentication and HostbasedAuthentication#IgnoreUserKnownHosts no# Don't read the user's ~/.rhosts and ~/.shosts files#IgnoreRhosts yes# To disable tunneled clear text passwords, change to no here!#PasswordAuthentication yes#PermitEmptyPasswords no# Change to no to disable s/key passwords#ChallengeResponseAuthentication yes# Kerberos options#KerberosAuthentication no#KerberosOrLocalPasswd yes#KerberosTicketCleanup yes#KerberosGetAFSToken no# GSSAPI options#GSSAPIAuthentication no#GSSAPICleanupCredentials yes# Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and# PasswordAuthentication. 
Depending on your PAM configuration,# PAM authentication via ChallengeResponseAuthentication may bypass# the setting of "PermitRootLogin without-password".# If you just want the PAM account and session checks to run without# PAM authentication, then enable this but set PasswordAuthentication# and ChallengeResponseAuthentication to 'no'.#UsePAM no#AllowAgentForwarding yes#AllowTcpForwarding yes#GatewayPorts no#X11Forwarding no#X11DisplayOffset 10#X11UseLocalhost yes#PrintMotd yes#PrintLastLog yes#TCPKeepAlive yes#UseLogin noUsePrivilegeSeparation no#PermitUserEnvironment noCompression noClientAliveInterval 15ClientAliveCountMax 4#UseDNS yes#PidFile /var/run/sshd.pid#MaxStartups 10#PermitTunnel no#ChrootDirectory none# no default banner path#Banner none# override default of no subsystemsSubsystem sftp /usr/libexec/sftp-server# Example of overriding settings on a per-user basis#Match User anoncvs# X11Forwarding no# AllowTcpForwarding no# ForceCommand cvs server Here the ssh client output: myself@ubuntu:~$ ssh -vvv [email protected]_6.6.1, OpenSSL 1.0.1f 6 Jan 2014debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to 192.168.1.1 [192.168.1.1] port 22.debug1: Connection established.debug3: Incorrect RSA1 identifierdebug3: Could not load "/home/myself/.ssh/id_rsa" as a RSA1 public keydebug1: identity file /home/myself/.ssh/id_rsa type 1debug1: identity file /home/myself/.ssh/id_rsa-cert type -1debug1: identity file /home/myself/.ssh/id_dsa type -1debug1: identity file /home/myself/.ssh/id_dsa-cert type -1debug1: identity file /home/myself/.ssh/id_ecdsa type -1debug1: identity file /home/myself/.ssh/id_ecdsa-cert type -1debug1: identity file /home/myself/.ssh/id_ed25519 type -1debug1: identity file /home/myself/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8debug1: Remote protocol version 2.0, remote software version OpenSSH_5.6debug1: match: OpenSSH_5.6 pat OpenSSH_5* compat 0x0c000000debug2: fd 3 setting O_NONBLOCKdebug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts"debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26debug3: load_hostkeys: loaded 1 keysdebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsadebug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email 
protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: nonedebug2: kex_parse_kexinit: nonedebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5debug1: kex: server->client aes128-ctr hmac-md5 nonedebug2: mac_setup: setup hmac-md5debug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug2: bits set: 1481/3072debug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug1: Server host key: RSA 91:66:c0:07:e0:c0:df:b7:8e:49:97:b5:36:12:12:eadebug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts"debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26debug3: load_hostkeys: loaded 1 keysdebug1: Host '192.168.1.1' is known and matches the RSA host key.debug1: Found key in /home/myself/.ssh/known_hosts:26debug2: bits set: 1551/3072debug1: ssh_rsa_verify: signature correctdebug2: kex_derive_keysdebug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /home/myself/.ssh/id_rsa (0x802b9240),debug2: key: /home/myself/.ssh/id_dsa ((nil)),debug2: key: /home/myself/.ssh/id_ecdsa ((nil)),debug2: key: /home/myself/.ssh/id_ed25519 ((nil)),debug3: input_userauth_bannerdebug1: Authentications that can continue: publickey,password,keyboard-interactivedebug3: start over, passed a different list publickey,password,keyboard-interactivedebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: 
Next authentication method: publickeydebug1: Offering RSA public key: /home/myself/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,password,keyboard-interactivedebug1: Trying private key: /home/myself/.ssh/id_dsadebug3: no such identity: /home/myself/.ssh/id_dsa: No such file or directorydebug1: Trying private key: /home/myself/.ssh/id_ecdsadebug3: no such identity: /home/myself/.ssh/id_ecdsa: No such file or directorydebug1: Trying private key: /home/myself/.ssh/id_ed25519debug3: no such identity: /home/myself/.ssh/id_ed25519: No such file or directorydebug2: we did not send a packet, disable methoddebug3: authmethod_lookup keyboard-interactivedebug3: remaining preferred: passworddebug3: authmethod_is_enabled keyboard-interactivedebug1: Next authentication method: keyboard-interactivedebug2: userauth_kbdintdebug2: we sent a keyboard-interactive packet, wait for replydebug1: Authentications that can continue: publickey,password,keyboard-interactivedebug3: userauth_kbdint: disable: no info_req_seendebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: debug3: authmethod_is_enabled passworddebug1: Next authentication method: [email protected]'s password: debug3: packet_send2: adding 64 (len 57 padlen 7 extra_pad 64)debug2: we sent a password packet, wait for replydebug1: Authentication succeeded (password).Authenticated to 192.168.1.1 ([192.168.1.1]:22).debug1: channel 0: new [client-session]debug3: ssh_session2_open: channel_new: 0debug2: channel 0: send opendebug1: Requesting [email protected]: Entering interactive session.debug2: callback startdebug2: fd 3 setting TCP_NODELAYdebug3: packet_set_tos: set IP_TOS 0x10debug2: client_session2_setup: id 0debug2: channel 0: request pty-req confirm 1debug1: Sending environment.debug3: Ignored env XDG_VTNRdebug3: Ignored env MANPATHdebug3: Ignored env XDG_SESSION_IDdebug3: Ignored env CLUTTER_IM_MODULEdebug3: Ignored env SELINUX_INITdebug3: Ignored env XDG_GREETER_DATA_DIRdebug3: Ignored env COMP_WORDBREAKSdebug3: Ignored env SESSIONdebug3: Ignored env NVM_CD_FLAGSdebug3: Ignored env GPG_AGENT_INFOdebug3: Ignored env TERMdebug3: Ignored env SHELLdebug3: Ignored env XDG_MENU_PREFIXdebug3: Ignored env VTE_VERSIONdebug3: Ignored env NVM_PATHdebug3: Ignored env GVM_ROOTdebug3: Ignored env WINDOWIDdebug3: Ignored env UPSTART_SESSIONdebug3: Ignored env GNOME_KEYRING_CONTROLdebug3: Ignored env GTK_MODULESdebug3: Ignored env NVM_DIRdebug3: Ignored env USERdebug3: Ignored env LD_LIBRARY_PATHdebug3: Ignored env LS_COLORSdebug3: Ignored env XDG_SESSION_PATHdebug3: Ignored env XDG_SEAT_PATHdebug3: Ignored env SSH_AUTH_SOCKdebug3: Ignored env SESSION_MANAGERdebug3: Ignored env DEFAULTS_PATHdebug3: Ignored env XDG_CONFIG_DIRSdebug3: Ignored env PATHdebug3: Ignored env DESKTOP_SESSIONdebug3: Ignored env QT_IM_MODULEdebug3: Ignored env QT_QPA_PLATFORMTHEMEdebug3: Ignored env NVM_NODEJS_ORG_MIRRORdebug3: Ignored env GVM_VERSIONdebug3: Ignored env JOBdebug3: Ignored env PWDdebug3: Ignored env XMODIFIERSdebug3: Ignored env GNOME_KEYRING_PIDdebug1: Sending env LANG = en_US.UTF-8debug2: channel 0: request env confirm 0debug3: Ignored env gvm_pkgset_namedebug3: Ignored env GDM_LANGdebug3: Ignored env MANDATORY_PATHdebug3: Ignored env IM_CONFIG_PHASEdebug3: Ignored env COMPIZ_CONFIG_PROFILEdebug3: Ignored env GDMSESSIONdebug3: Ignored env SESSIONTYPEdebug3: Ignored env XDG_SEATdebug3: Ignored env HOMEdebug3: Ignored env 
SHLVLdebug3: Ignored env GOROOTdebug3: Ignored env LANGUAGEdebug3: Ignored env GNOME_DESKTOP_SESSION_IDdebug3: Ignored env DYLD_LIBRARY_PATHdebug3: Ignored env gvm_go_namedebug3: Ignored env LOGNAMEdebug3: Ignored env GVM_OVERLAY_PREFIXdebug3: Ignored env COMPIZ_BIN_PATHdebug3: Ignored env XDG_DATA_DIRSdebug3: Ignored env QT4_IM_MODULEdebug3: Ignored env DBUS_SESSION_BUS_ADDRESSdebug3: Ignored env PrlCompizSessionClosedebug3: Ignored env PKG_CONFIG_PATHdebug3: Ignored env GOPATHdebug3: Ignored env NVM_BINdebug3: Ignored env LESSOPENdebug3: Ignored env NVM_IOJS_ORG_MIRRORdebug3: Ignored env INSTANCEdebug3: Ignored env TEXTDOMAINdebug3: Ignored env XDG_RUNTIME_DIRdebug3: Ignored env DISPLAYdebug3: Ignored env XDG_CURRENT_DESKTOPdebug3: Ignored env GTK_IM_MODULEdebug3: Ignored env LESSCLOSEdebug3: Ignored env TEXTDOMAINDIRdebug3: Ignored env GVM_PATH_BACKUPdebug3: Ignored env COLORTERMdebug3: Ignored env XAUTHORITYdebug3: Ignored env _debug2: channel 0: request shell confirm 1debug2: callback donedebug2: channel 0: open confirm rwindow 0 rmax 32768debug2: channel_input_status_confirm: type 99 id 0debug2: PTY allocation request accepted on channel 0debug2: channel 0: rcvd adjust 2097152debug2: channel_input_status_confirm: type 99 id 0debug2: shell request accepted on channel 0 Here the sshd server output: debug1: sshd version OpenSSH_5.6p1debug1: read PEM private key done: type RSAdebug1: private host key: #0 type 1 RSAdebug1: read PEM private key done: type DSAdebug1: private host key: #1 type 2 DSAdebug1: rexec_argv[0]='/usr/sbin/sshd'debug1: rexec_argv[1]='-d'Set /proc/self/oom_adj from 0 to -17debug1: Bind to port 22 on 0.0.0.0.Server listening on 0.0.0.0 port 22.socket: Address family not supported by protocoldebug1: Server will not fork when running in debugging mode.debug1: rexec start in 4 out 4 newsock 4 pipe -1 sock 7debug1: inetd sockets after dupping: 3, 3Connection from 192.168.1.60 port 53445debug1: Client protocol version 2.0; client software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 pat OpenSSH*debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_5.6debug1: list_hostkey_types: ssh-rsa,ssh-dssdebug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: kex: server->client aes128-ctr hmac-md5 nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST receiveddebug1: SSH2_MSG_KEX_DH_GEX_GROUP sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_INITdebug1: SSH2_MSG_KEX_DH_GEX_REPLY sentdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: KEX donedebug1: userauth-request for user user service ssh-connection method nonedebug1: attempt 0 failures 0debug1: userauth_send_banner: sentFailed none for user from 192.168.1.60 port 53445 ssh2debug1: userauth-request for user user service ssh-connection method publickeydebug1: attempt 1 failures 0debug1: test whether pkalg/pkblob are acceptabledebug1: temporarily_use_uid: 0/0 (e=0/0)debug1: trying public key file //.ssh/authorized_keysdebug1: Could not open authorized keys '//.ssh/authorized_keys': No such file or directorydebug1: restore_uid: 0/0debug1: temporarily_use_uid: 0/0 (e=0/0)debug1: trying public key file //.ssh/authorized_keys2debug1: Could not open authorized keys '//.ssh/authorized_keys2': No such file or directorydebug1: restore_uid: 0/0Failed publickey for user from 192.168.1.60 port 53445 ssh2debug1: userauth-request for user user service ssh-connection 
method keyboard-interactivedebug1: attempt 2 failures 1debug1: keyboard-interactive devs debug1: auth2_challenge: user=user devs=debug1: kbdint_alloc: devices ''Failed keyboard-interactive for user from 192.168.1.60 port 53445 ssh2debug1: Unable to open the btmp file /var/log/btmp: No such file or directorydebug1: userauth-request for user user service ssh-connection method passworddebug1: attempt 3 failures 2Could not get shadow information for userAccepted password for user from 192.168.1.60 port 53445 ssh2debug1: Entering interactive session for SSH2.debug1: server_init_dispatch_20debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384debug1: input_session_requestdebug1: channel 0: new [server-session]debug1: session_new: session 0debug1: session_open: channel 0debug1: session_open: session 0: link with channel 0debug1: server_input_channel_open: confirm sessiondebug1: server_input_global_request: rtype [email protected] want_reply 0debug1: server_input_channel_req: channel 0 request pty-req reply 1debug1: session_by_channel: session 0 channel 0debug1: session_input_channel_req: session 0 req pty-reqdebug1: Allocating pty.debug1: session_pty_req: session 0 alloc /dev/pts/1debug1: server_input_channel_req: channel 0 request env reply 0debug1: session_by_channel: session 0 channel 0debug1: session_input_channel_req: session 0 req envdebug1: server_input_channel_req: channel 0 request shell reply 1debug1: session_by_channel: session 0 channel 0debug1: session_input_channel_req: session 0 req shelldebug1: Setting controlling tty using TIOCSCTTY. /etc/pam.d/sshd: # PAM configuration for the Secure Shell service# Read environment variables from /etc/environment and# /etc/security/pam_env.conf.auth required pam_env.so # [1]# Standard Un*x authentication.auth include common-auth# Disallow non-root logins when /etc/nologin exists.account required pam_nologin.so# Uncomment and edit /etc/security/access.conf if you need to set complex# access limits that are hard to express in sshd_config.# account required pam_access.so# Standard Un*x authorization.account include common-accountt# Standard Un*x session setup and teardown.session include common-session# Print the message of the day upon successful login.session optional pam_motd.so # [1]# Print the status of the user's mailbox upon successful login.session optional pam_mail.so standard noenv # [1]# Set up user limits from /etc/security/limits.conf.session required pam_limits.so# Standard Un*x password updating.password include common-password
In the chat, it turned out the system was using traditional (non-shadow) password storage and the traditional DES-based Unix crypt(3) password hashing algorithm. Both are poor choices in today's security environment. Since the traditional DES-based hash only stores and compares the first 8 characters of the password, that explains the behavior noticed in the original question. The posted sshd output includes the line: Could not get shadow information for user I would assume this means at least sshd (or possibly the PAM Unix password storage library) on this system includes shadow password functionality, but for some reason, the system vendor has chosen not to use it.
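The truncation is easy to reproduce wherever crypt(3) still supports the traditional DES scheme, for instance with a perl one-liner (the salt "ab" is arbitrary; on systems whose libcrypt has dropped DES this will not produce a DES hash):

perl -le 'print crypt("password1234", "ab"); print crypt("password", "ab")'

Both lines print the same hash, because only the first 8 characters take part in it.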
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/415787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160654/" ] }
415,799
I have a zip file with a size of 1.5 GB. Its content is one ridiculously large plain-text file (60 GB) and I currently do not have enough space left on my disk to extract it all, nor do I want to extract it all even if I did. For my use case, it would suffice if I can inspect parts of the content. Hence I want to unzip the file as a stream and access a range of the file (like one can via head and tail on a normal text file), either by byte offset (e.g. extract max 100 kB starting from the 32 GB mark) or by lines (give me the plain-text lines 3700-3900). Is there a way to achieve that?
Note that gzip can extract zip files (at least the first entry in the zip file). So if there's only one huge file in that archive, you can do: gunzip < file.zip | tail -n +3000 | head -n 20 To extract the 20 lines starting with the 3000th one for instance. Or: gunzip < file.zip | tail -c +3000 | head -c 20 For the same thing with bytes (assuming a head implementation that supports -c ). For any arbitrary member in the archive, in a Unixy way: bsdtar xOf file.zip file-to-extract | tail... | head... With the head builtin of ksh93 (like when /opt/ast/bin is ahead in $PATH ), you can also do: .... | head -s 2999 -c 20.... | head --skip=2999 --bytes=20 Note that in any case gzip / bsdtar / unzip will always need to uncompress (and discard here) the entire section of the file that leads to the portion that you want to extract. That's down to how the compression algorithm works.
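For the exact range mentioned in the question (plain-text lines 3700 through 3900, i.e. 201 lines):

gunzip < file.zip | tail -n +3700 | head -n 201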
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/415799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12471/" ] }
415,814
Whenever I reboot my laptop, everything runs amazingly and I have a maximum of 40% memory usage (out of 8GB). However over time (~ 1 day of usage), memory usage goes up to 90%+, and the system starts swapping. Right now, free -mh returns this: total used free shared buff/cache availableMem: 7,7G 1,3G 141M 223M 6,3G 246MSwap: 7,5G 530M 6,9G I was assuming that buff/cache memory is free to be reallocated if processes require it, but it seems to mostly be unavailable. cat /proc/meminfo : MemTotal: 8055268 kBMemFree: 145184 kBMemAvailable: 247984 kBBuffers: 49092 kBCached: 423724 kBSwapCached: 38652 kBActive: 881184 kBInactive: 791552 kBActive(anon): 708420 kBInactive(anon): 725564 kBActive(file): 172764 kBInactive(file): 65988 kBUnevictable: 252 kBMlocked: 252 kBSwapTotal: 7812092 kBSwapFree: 7267624 kBDirty: 352 kBWriteback: 0 kBAnonPages: 1195320 kBMapped: 235860 kBShmem: 234068 kBSlab: 6117796 kBSReclaimable: 167260 kBSUnreclaim: 5950536 kBKernelStack: 10352 kBPageTables: 30312 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 11839724 kBCommitted_AS: 6410728 kBVmallocTotal: 34359738367 kBVmallocUsed: 0 kBVmallocChunk: 0 kBHardwareCorrupted: 0 kBAnonHugePages: 104448 kBCmaTotal: 0 kBCmaFree: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBDirectMap4k: 1361472 kBDirectMap2M: 5859328 kBDirectMap1G: 1048576 kB I found these values especially interesting, as they correlate a lot with the buff/cache usage from free , but I don't know what to do with them or where to look next: SReclaimable: 167260 kBSUnreclaim: 5950536 kBSlab: 6117796 kB Where can I look next? What is the slab, and is there a way to reduce it's memory usage?
You should check with top whether something is actually using your RAM (sort by memory usage), or check the memory usage in the System Monitor. Linux borrows unused memory for disk caching, which can make it look like you are low on memory when you are not; see https://www.linuxatemyram.com/ for an explanation. Normally such borrowed cache would also show up in the available column, but in your output MemAvailable is only about 250MB while swap usage is still low (530MB), so most of the 6.3G reported under buff/cache cannot simply be handed back. You can release the reclaimable caches as follows, and then free will show the result in the available field: To free pagecache: # echo 1 > /proc/sys/vm/drop_caches To free dentries and inodes: # echo 2 > /proc/sys/vm/drop_caches To free pagecache, dentries and inodes: # echo 3 > /proc/sys/vm/drop_caches Or with this command: free && sync && echo 3 > /proc/sys/vm/drop_caches && free Regarding Slab: Slab, SReclaimable, SUnreclaim The kernel repeats a lot of work while it runs. Some operations, like looking up the inode of a specific file, may be performed thousands of times a day, so it is worth keeping the results in a quick reference list, or cache. Slabs are those caches for kernel objects, optimizing the activities that happen most often. The Slab field is the total of SReclaimable and SUnreclaim. In your case SUnreclaim is about 5.9GB, and drop_caches cannot free that part, so try to find out what is using it with slabtop . The above commands are meant to be run as root.
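For a one-shot view of the biggest slab caches, sorted by cache size (flags from the procps slabtop):

sudo slabtop -o -s c | head -n 15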
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269614/" ] }
415,816
krishna@krishna-PC:~/Downloads/wificonnect1$ sudo wpa_cli scan[sudo] password for krishna: Selected interface 'wlan0'OKkrishna@krishna-PC:~/Downloads/wificonnect1$ sudo wpa_cli scan_resultsSelected interface 'wlan0'bssid / frequency / signal level / flags / ssidfc:0a:81:1d:6d:80 2412 -43 [WPA2-PSK-CCMP][ESS] econsys00:24:01:ba:b4:65 2437 -72 [WPA-PSK-TKIP][WPA2-PSK-TKIP][WPS][ESS] Test6c:72:20:f2:1a:6b 2412 -60 [WPA-PSK-CCMP][WPA2-PSK-CCMP][ESS] Haric0:ee:fb:31:ec:4a 2447 -76 [WPA2-PSK-CCMP][ESS] Vishal's hotspotfc:0a:81:1c:6d:f0 2412 -61 [WPA2-PSK-CCMP][ESS] econsysc4:12:f5:08:10:70 2427 -63 [WPA-PSK-CCMP][WPA2-PSK-CCMP][ESS] GoGreenf4:f2:6d:6d:23:44 2462 -62 [WPS][ESS] joyglobalkrishna@krishna-PC:~/Downloads/wificonnect1$ sudo wpa_cli add_networkSelected interface 'wlan0'1krishna@krishna-PC:~/Downloads/wificonnect1$ sudo wpa_cli set_network 1 ssid "econsys"Selected interface 'wlan0'FAIL How should I connect?
Create a /etc/wpa_supplicant/wpa_supplicant.conf file with the following two lines: ctrl_interface=/run/wpa_supplicant and update_config=1 Run: wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf Type: wpa_cli then run scan , scan_results and add_network sample output: 0 Select the SSID (replace 0 with the exact output): set_network 0 ssid "Your SSID here" Set your password: set_network 0 psk "Your Password here" Without the double quotes the command will FAIL . Next step: enable_network 0 then save_config and quit Without the interactive commands you should use (single quotes added): sudo wpa_cli set_network 1 ssid '"econsys"' or sudo wpa_cli set_network 1 ssid "\"econsys\"" instead of: sudo wpa_cli set_network 1 ssid "econsys" The single quotes should be added too when setting your password: sudo wpa_cli set_network 1 psk '"Your Password"'
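The whole non-interactive sequence then looks like this (network id 1 and the econsys SSID come from the question; substitute the real passphrase):

sudo wpa_cli add_network
sudo wpa_cli set_network 1 ssid '"econsys"'
sudo wpa_cli set_network 1 psk '"Your Password"'
sudo wpa_cli enable_network 1
sudo wpa_cli save_config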
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269615/" ] }
415,826
I need to convert the following file, example 1, to the file as it appears in example 2, so it will print machine name and IP, but only in case "name" appears before host_name . redhat04.rdns.com 10.10.29.67 is not printed because "name" does not appear before its host_name . Please advise on the best way to do this conversion with awk, sed or perl one-liners. example1 "name" : "REDHAT", "host_name" : "linux01.rdns.com", "ip" : "10.10.29.61" "name" : "REDHAT", "host_name" : "linux02.rdns.com", "ip" : "10.10.29.62" "name" : "REDHAT", "host_name" : "linux03.rdns.com", "ip" : "10.10.29.63" "name" : "REDHAT", "host_name" : "redhat01.rdns.com", "ip" : "10.10.29.64" "name" : "REDHAT", "host_name" : "redhat02.rdns.com", "ip" : "10.10.29.65" "name" : "REDHAT", "host_name" : "redhat03.rdns.com", "ip" : "10.10.29.66" "host_name" : "redhat04.rdns.com", "ip" : "10.10.29.67" "name" : "REDHAT", "host_name" : "redhat05.rdns.com", "ip" : "10.10.29.68" "name" : "REDHAT", "host_name" : "redhat06.rdns.com", "ip" : "10.10.29.81" "name" : "REDHAT", "host_name" : "redhat07.rdns.com", "ip" : "10.10.29.82" "name" : "REDHAT", "host_name" : "redhat08.rdns.com", "ip" : "10.10.29.83" "name" : "REDHAT", "host_name" : "redhat09.rdns.com", "ip" : "10.10.29.84" expected results linux01.rdns.com 10.10.29.61 linux02.rdns.com 10.10.29.62 linux03.rdns.com 10.10.29.63 redhat01.rdns.com 10.10.29.64 redhat02.rdns.com 10.10.29.65 redhat03.rdns.com 10.10.29.66 redhat05.rdns.com 10.10.29.68 redhat06.rdns.com 10.10.29.81 redhat07.rdns.com 10.10.29.82 redhat08.rdns.com 10.10.29.83 redhat09.rdns.com 10.10.29.84
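With the double quote as field separator, a short awk sketch based on the sample above (where file is the input file):

awk -F'"' '$2 == "name" {print $8, $12}' file

Splitting on " makes the host name the 8th field and the IP the 12th whenever the line starts with "name" ; the redhat04 line, whose first quoted word is host_name instead, is skipped, which matches the expected results.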
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246468/" ] }
415,839
I have trouble understanding the string pattern matching with =~ in bash . I wrote the following function (don't be alarmed - it's just experimenting, not a security approach with md5sum): md5 () { [[ "$(md5sum $1)" =~ $2* ]] && echo fine || echo baarr; } and tested it with some input. Here is some reference: md5sum wp.laenderlisteb1eb0d822e8d841249e3d68eeb3068d3 wp.laenderliste It's unnecessarily hard to compare if the source for the control sum does not already contain the two blanks and the filename. That's where the observation originates, but more interesting than the many ways to solve that problem was this observation: I define a control variable and test my function with too-short, but matching, strings: ok=b1eb0d822e8d841249e3d68eeb3068d3for i in {29..32}; do md5 wp.laenderliste ${ok:1:$i} ;done finefinefinefine That's expected and fine, since it is the purpose of the function to ignore the mismatch of the missing " wp.laenderliste" and therefore even longer mismatches. Now, if I append random stuff which does not match, I expect, of course, errors, and get them: for i in {29..32}; do md5 wp.laenderliste ${ok:1:$i}GU ;done baarrbaarrbaarrbaarr As expected. But when there is only one last mismatching character, see what happens: for i in {29..32}; do md5 wp.laenderliste ${ok:1:$i}G ;done finefinefinefine Is this me, not realizing how this is supposed to work (select is broken), or is there really an off-by-one error in bash's pattern matching? Mismatches in the middle of the string matter from the first one: for i in 5 9 e ; do echo md5 wp.laenderliste ${ok//$i/_} ;done md5 wp.laenderliste b1eb0d822e8d841249e3d68eeb3068d3md5 wp.laenderliste b1eb0d822e8d84124_e3d68eeb3068d3md5 wp.laenderliste b1_b0d822_8d841249_3d68__b3068d3for i in 5 9 e ; do md5 wp.laenderliste ${ok//$i/_} ;done finebaarrbaarr The bash version: bash -versionGNU bash, Version 4.3.48(1)-release (x86_64-pc-linux-gnu)Copyright (C) 2013 Free Software Foundation, Inc.Lizenz GPLv3+: GNU GPL Version 3 oder jünger <http://gnu.org/licenses/gpl.html> Disclaimer : md5sum is only useful against unintentional mistakes, not against attacks. I don't encourage using it. And this question is not a search for better solutions or workarounds. It's about the =~ operator, whether it should act as it does and if so, why.
=~ in ( [[ ]] ) is a regular expression pattern match (or rather, a search , see below). That's different from = (or == ), which uses the same patterns as filename wildcards. In particular, the asterisk in regular expressions means "zero or more copies of the preceding unit", so abc* means ab plus zero or more c s. In your case, the trailing asterisk makes the final character of the function argument optional. In your final example, the pattern becomes ...68d3G* , and since G* matches the empty string, it matches a string like ...68d3 . Regexese for "any string" is .* , or "any character, any number of times". Note that the regexp match searches for a match anywhere in the string; it doesn't need to be the whole string. So the pattern cde would be found in the string abcdefgh . You might want to use something like this: [[ "$(md5sum "$1")" = "$2 "* ]] && echo ok We don't really need a regular expression match here, and since md5sum outputs the trailing space (plus filename) anyway, we can use that in the pattern to check that we match against the full hash. So giving the function a truncated hash would not match.
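The search-versus-match behaviour is easy to see on its own (plain bash):

[[ abcdefgh =~ cde ]] && echo found             # succeeds: =~ searches for a substring match
[[ abcdefgh =~ ^cde$ ]] || echo "no full match" # anchoring with ^ and $ forces a whole-string match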
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4485/" ] }
415,846
Help me write a shell script script.sh to run my python service fin_code/final/Healthcheck.py by killing the currently running service and starting it again. This will be called from crontab every 3 hours. script.sh kill -9$(ps grep 'healthcheck.py' | awk '{print $2}')nohup python fin_code/final/healthcheck.py & I've used this script in crontab.
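The posted script has two problems: kill -9$(...) is missing a space after -9 , and ps grep is not a valid command ( pgrep , or ps ax | grep , is what you want). A sketch that sidesteps both with pkill (it assumes nothing else on the machine matches healthcheck.py , and /path/to stands for the real absolute location, which cron needs since it does not start in your working directory):

#!/bin/sh
pkill -f healthcheck.py       # sends SIGTERM; resort to -9 only if the service ignores it
sleep 2
nohup python /path/to/fin_code/final/healthcheck.py >/dev/null 2>&1 &

Make it executable with chmod +x script.sh and add a crontab entry such as 0 */3 * * * /path/to/script.sh to run it every 3 hours.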
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415846", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268076/" ] }
415,850
On an older installation I had it somehow configured so that, if I had an autocomplete list that I could bring up with tab, the first item was highlighted and I could use my arrow keys to navigate the list and confirm with enter. I don't remember how I set this up; any idea how to do this?
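What is described sounds like zsh's menu selection (the question does not name the shell, so this is an assumption). Enabling it in ~/.zshrc :

autoload -Uz compinit && compinit
zstyle ':completion:*' menu select

After that, pressing Tab on an ambiguous word brings up the list with the first entry highlighted; the arrow keys move through it and Enter accepts. In bash, the closest readline feature is bind '"\t": menu-complete' , which cycles through the candidates in place rather than offering a navigable list.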
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232690/" ] }