source_id | question | response | metadata
---|---|---|---|
158,839 | Is there a way to display the complete command line in htop (e.g. in multiple lines or with a moving banner)? With the default setting, where only one line is displayed, it isn't possible to distinguish all processes, e.g. different java programs (because the class or jar argument follows a bunch of other arguments) or programs with long absolute paths to their binaries. Omitting the full absolute path in favour of only the binary name would be a compromise: distinction would not be optimal, but better in some cases. I checked the settings and the manpage and didn't find a suitable option. | As far as I know, the only way to show the full command line is to scroll right with the arrow keys or to use a terminal with a small font. EDIT (thanks to @LangeHaare): You can use Ctrl-A and Ctrl-E to jump to the beginning and end of the command line. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63502/"
]
} |
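As an aside beyond the original answer, the full command line of one specific process can also be read outside htop with ps (1234 is a placeholder PID):

```sh
# -ww disables ps's output-width truncation so nothing is cut off:
ps -fww -p 1234
# Or print only the full argument vector of that process:
ps -o args= -p 1234
```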
158,853 | I can parse /etc/passwd with augtool: myuser=bob; usershome=`augtool -L -A --transform "Passwd incl /etc/passwd" print "/files/etc/passwd/$myuser/home" | sed -En 's/\/.* = (.*)/\1/p'` ...but it seems a little too convoluted. Is there any simple, dedicated tool for displaying a user's home directory (like usermod can be used to change it)? | You should never parse /etc/passwd directly. You might be on a system with remote users, in which case they won't be in /etc/passwd . The /etc/passwd file might be somewhere else. Etc. If you need direct access to the user database, use getent :
$ getent passwd phemmer
phemmer:*:1000:4:phemmer:/home/phemmer:/bin/zsh
$ getent passwd phemmer | awk -F: '{ print $6 }'
/home/phemmer
However there is also another way that doesn't involve parsing:
$ user=phemmer
$ eval echo "~$user"
/home/phemmer
The ~ operator in the shell expands to the specified user's home directory. However, we have to use eval because expansion of the variable $user happens after expansion of ~ . So by using eval and double quotes, you're effectively expanding $user first, then calling eval echo "~phemmer" . Once you have the home directory, simply tack /.ssh on to the end:
$ sshdir="$(eval echo "~$user/.ssh")"
$ echo "$sshdir"
/home/phemmer/.ssh | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17765/"
]
} |
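A compact, script-ready version of the getent approach above (illustrative sketch; the user name is a placeholder):

```bash
#!/bin/bash
# Resolve a user's home directory via the user database (covers local
# and remote users alike), then derive the .ssh path from it.
user="phemmer"    # placeholder
home=$(getent passwd "$user" | awk -F: '{ print $6 }')
if [ -z "$home" ]; then
    echo "no such user: $user" >&2
    exit 1
fi
sshdir="$home/.ssh"
echo "$sshdir"
```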
158,855 | In Mac OSX if you have downloaded a zip/gz/bz2 file, and you double click it in Finder, then the archive is automatically extracted for you in a subfolder, which I find very nice. I am wondering if there is a similar file manager in the Linux world that has such a feature? | You should never parse /etc/passwd directly. You might be on a system with remote users, in which case they won't be in /etc/passwd . The /etc/passwd file might be somewhere else. Etc. If you need direct access to the user database, use getent :
$ getent passwd phemmer
phemmer:*:1000:4:phemmer:/home/phemmer:/bin/zsh
$ getent passwd phemmer | awk -F: '{ print $6 }'
/home/phemmer
However there is also another way that doesn't involve parsing:
$ user=phemmer
$ eval echo "~$user"
/home/phemmer
The ~ operator in the shell expands to the specified user's home directory. However, we have to use eval because expansion of the variable $user happens after expansion of ~ . So by using eval and double quotes, you're effectively expanding $user first, then calling eval echo "~phemmer" . Once you have the home directory, simply tack /.ssh on to the end:
$ sshdir="$(eval echo "~$user/.ssh")"
$ echo "$sshdir"
/home/phemmer/.ssh | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31584/"
]
} |
158,856 | I am currently writing my third ever shell script and I have run into a problem. This is my script so far:
#!/bin/bash
echo "choose one of the following options : \
1) display all current users \
2) list all files \
3) show calendar \
4) exit script"
while read
do
  case in
    1) who;;
    2) ls -a;;
    3) cal;;
    4) exit;;
  esac
done
When I try to run the script it says this: line 2: unexpected EOF while looking for matching '"' line 14: syntax error: unexpected end of file. What am I doing wrong? | The problem is that your case statement is missing its subject - the variable which it should evaluate. Hence you probably want something like this:
#!/bin/bash
cat <<EOD
choose one of the following options:
1) display all current users
2) list all files
3) show calendar
4) exit script
EOD
while true; do
  printf "your choice: "
  read
  case $REPLY in
    1) who;;
    2) ls -a;;
    3) cal;;
    4) exit;;
  esac
done
Here case uses the default variable $REPLY which read fills when it's not given any variable names (see help read for details). Also note the changes: printf is used to display the prompt in each round (and doesn't append a newline), cat is used to print the instructions on several lines so that they don't wrap and are easier to read. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84597/"
]
} |
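Bash's select builtin implements this menu pattern directly; a minimal illustrative sketch (an addition, not part of the original answer):

```bash
#!/bin/bash
PS3="your choice: "    # select uses PS3 as its prompt
select opt in "display all current users" "list all files" \
              "show calendar" "exit script"; do
    case $REPLY in     # select also leaves the raw input in $REPLY
        1) who;;
        2) ls -a;;
        3) cal;;
        4) exit;;
        *) echo "invalid option";;
    esac
done
```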
158,872 | Does systemd still have the concept of runlevels? For example is it pointless to use telinit <number> ? | SystemD Run-Level Low-Down Within the SystemD(aemon), runlevels are exposed as "Targets." The concept is still there, but the workflow to produce the desired result for your requirement is different. The attached should clarify this issue. How do I change the current runlevel?
$ systemctl isolate runlevelX.target
How do I change the default runlevel for next-boot?
# Create a symlink
$ ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target
ln -sf TARGET DESTINATION
-s creates a symbolic link
-f removes the existing destination file
OR (as @centimane suggested) simply use the "blessed" systemd command:
systemctl set-default [target name].target
How do I identify the current runlevel?
$ systemctl list-units --type=target | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/158872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72520/"
]
} |
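For illustration (an addition beyond the original answer), the conventional runlevel-to-target mapping and the related queries look roughly like this:

```sh
# Conventional mapping (distribution defaults may vary):
#   runlevel 3 -> multi-user.target
#   runlevel 5 -> graphical.target
systemctl get-default                     # show the default boot target
sudo systemctl set-default multi-user.target
sudo systemctl isolate graphical.target   # switch now, like telinit 5
runlevel                                  # legacy command still reports e.g. "N 5"
```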
158,896 | To what extent can other POSIX-compatible shells function as reasonable replacements for bash? They don't need to be true "drop-in" replacements, but close enough to work with most scripts and support the rest with some modification.
I want to have explicit bash scripts - initscripts, DHCP client scripts, etc. - work with minimal modification
I want my own collection of more specialized shell scripts to not need too much modification
I want to have features like string manipulation and built-in regex pattern matching
The only serious contenders I know of are zsh and mksh. So, for those of you here who are good with either or both of them: What features does bash have that zsh and mksh respectively do not? What features do the shells share with bash, but use incompatible syntax for? | I'll stick to scripting features. Rich interactive features (command line edition, completion, prompts, etc.) tend to be very different, achieving similar effects in wholly incompatible ways. What features are in zsh and missing from bash, or vice versa? gives a few pointers on interactive use. The closest thing to bash would be ATT ksh93 or mksh (the Korn shell and a clone). Zsh also has a subset of features but you would need to run it in ksh emulation mode, not in zsh native mode. I won't list POSIX features (which are available in any modern sh shell), nor relatively obscure features, nor as mentioned above features for interactive use. Observations are valid as of bash 4.2, ksh 93u and mksh 40.9.20120630 as found on Debian wheezy. Shell syntax Quoting $'…' (literal strings with backslash interpolation) is available in ksh93 and mksh. $"…" (translated strings) is bash-specific. Conditional constructs Mksh and ksh93 have ;& to fall through in a case statement, but not ;;& to test subsequent cases. Mksh has ;| for that, and recent mksh allows ;;& for compatibility. ((…)) arithmetic expressions and [[ … ]] tests are ksh features. Some conditional operators are different, see “conditional expressions” below. Coprocesses Ksh and bash both have coprocesses but they work differently. Functions Mksh and ksh93 support the function name {…} syntax for function definitions in addition to the standard name () {…} , but using function in ksh changes scoping rules, so stick to name () … to maintain compatibility. The rules for allowed characters in function names vary; stick to alphanumerics and _ . Brace expansion Ksh93 and mksh support brace expansion {foo,bar} . Ksh93 supports numeric ranges {1..42} but mksh doesn't. Parameter expansion Ksh93 and mksh support substring extraction with ${VAR:offset} and ${VAR:offset:length} , but not case folding like ${VAR^} , ${VAR,} , etc. You can do case conversion with typeset -l and typeset -u in both bash and ksh. They support replacement with ${VAR/PATTERN/STRING} or ${VAR//PATTERN/STRING} . The quoting rules for STRING are slightly different, so avoid backslashes (and maybe other characters) in STRING (build a variable and use ${VAR/PATTERN/$REPLACEMENT} instead if the replacement contains quoting characters). Array expansion ( ${ARRAY[KEY]} , "${ARRAY[@]}" , ${#ARRAY[@]} , ${!ARRAY[@]} ) works in bash like in ksh. ${!VAR} expanding to ${OTHERVAR} when the value of VAR is OTHERVAR (indirect variable reference) is bash-specific (ksh does something different with ${!VAR} ). To get this double expansion in ksh, you need to use a name reference instead ( typeset -n VAR=OTHERVAR; echo "$VAR" ). ${!PREFIX*} works the same.
Process substitution Process substitution <(…) and >(…) is supported in ksh93 but not in mksh. Wildcard patterns The ksh extended glob patterns that need shopt -s extglob to be activated in bash are always available in ksh93 and mksh. Mksh doesn't support character classes like [[:alpha:]] . IO redirection Bash and ksh93 define pseudo-files /dev/tcp/ HOST / PORT and /dev/udp/ HOST / PORT , but mksh doesn't. Expanding wildcards in a redirection in scripts (as in var="*.txt"; echo hello >$var writing to a.txt if that file name is the sole match for the pattern) is a bash-specific feature (other shells never do it in scripts). <<< here-strings work in ksh like in bash. The >& shortcut for redirecting both standard output and standard error is also supported by mksh but not by ksh93. Conditional expressions [[ … ]] double bracket syntax The double bracket syntax from ksh is supported by both ATT ksh93 and mksh like in bash. File operators Ksh93, mksh and bash support the same extensions to POSIX, including -a as an obsolete synonym of -e , -k (sticky), -G (owned by egid), -O (owned by euid), -ef (same file), -nt (newer than), -ot (older than). -N FILE (modified since last read) isn't supported by mksh. Mksh doesn't have a regexp matching operator =~ . Ksh93 has this operator, and it performs the same matching as in bash, but doesn't have an equivalent of BASH_REMATCH to retrieve matched groups afterwards. String operators Ksh93 and mksh support the same string comparison operators < and > as bash as well as the == synonym of = . Mksh doesn't use locale settings to determine the lexicographic order, it compares strings as byte strings. Other operators -v VAR to test if a variable is defined is bash-specific. In any POSIX shell, you can use [ -z "${VAR+1}" ] . Builtins alias The set of allowed characters in alias names isn't the same in all shells. I think it's the same as for functions (see above). builtin Ksh93 has a builtin called builtin , but it doesn't execute a name as a built-in command. Use command to bypass aliases and functions; this will call a builtin if one exists, otherwise an external command (you can avoid this with PATH= command error_out_if_this_is_not_a_builtin ). caller This is bash-specific. You can get a similar effect with .sh.fun , .sh.file and .sh.lineno in ksh93. In mksh there's at least LINENO . declare , local , typeset declare is a bash-specific name for ksh's typeset . Use typeset : it also works in bash. Mksh defines local as an alias for typeset . In ksh93, you need to use typeset (or define an alias). Mksh has no associative arrays (they're slated for an as yet unreleased version). I don't think there's an exact equivalent of bash's typeset -t (trace function) in ksh. cd Ksh93 doesn't have -e . echo Ksh93 and mksh process the -e and -n options like in bash. Mksh also understands -E , ksh93 doesn't treat it as an option. Backslash expansion is off by default in ksh93, on by default in mksh. enable Ksh doesn't provide a way to disable builtin commands. To avoid a builtin, look up the external command's path and invoke it explicitly. exec Ksh93 has -a but not -l . Mksh has neither. export Neither ksh93 nor mksh has export -n . Use typeset +x foo instead, it works in bash and ksh. Ksh doesn't export functions through the environment. let let is the same in bash and ksh. mapfile , readarray This is a bash-specific feature. You can use while read loops or command substitution to read a file and split it into an array of lines. Take care of IFS and globbing.
Here's the equivalent of mapfile -t lines </path/to/file :
IFS=$'\n'; set -f
lines=($(</path/to/file))
unset IFS; set +f
printf printf is very similar. I think ksh93 supports all of bash's format directives. mksh doesn't support %q or %(DATE_FORMAT)T ; on some installations, printf isn't an mksh builtin and calls the external command instead. printf -v VAR is bash-specific, ksh always prints to standard output. read Several options are bash-specific, including all the ones about readline. The options -r , -d , -n , -N , -t , -u are identical in bash, ksh93 and mksh. readonly You can declare a variable as read-only in Ksh93 and mksh with the same syntax. If the variable is an array, you need to assign to it first, then make it read-only with readonly VAR . Functions can't be made read-only in ksh. set , shopt All the options to set and set -o are POSIX or ksh features. shopt is bash-specific. Many options concern interactive use anyway. For effects on globbing and other features enabled by some options, see the section “Options” below. source This variant of . exists in ksh as well. In bash and mksh, source searches the current directory after PATH , but in ksh93, it's an exact equivalent of . . trap The DEBUG pseudo-signal isn't implemented in mksh. In ksh93, it exists with a different way to report information, see the manual for details. type In ksh, type is an alias for whence -v . In mksh, type -p does not print the path to the executable, but a human-readable message; you need to use whence -p COMMAND instead. Options shopt -s dotglob — don't ignore dot files in globbing To emulate the dotglob option in ksh93, you can set FIGNORE='@(.|..)' . I don't think there's anything like this in mksh. shopt -s extglob — ksh extended glob patterns The extglob option is effectively always on in ksh. shopt -s failglob — error out if a glob pattern matches nothing I don't think this exists in either mksh or ksh93. It does in zsh (default behavior unless null_glob or csh_null_glob are set). shopt -s globstar — **/ recursive globbing Ksh93 has recursive globbing with **/ , enabled with set -G . Mksh doesn't have recursive globbing. shopt -s lastpipe — run the last command of a pipeline in the parent shell Ksh93 always runs the last command of a pipeline in the parent shell, which in bash requires the lastpipe option to be set. Mksh always runs the last command of a pipeline in a subshell. shopt -s nocaseglob , shopt -s nocasematch — case-insensitive patterns Mksh doesn't have case-insensitive pattern matching. Ksh93 supports it on a pattern-by-pattern basis: prefix the pattern with ~(i) . shopt -s nullglob — expand patterns that match no file to an empty list Mksh doesn't have this. Ksh93 supports it on a pattern-by-pattern basis: prefix the pattern with ~(N) . Variables Obviously most of the BASH_xxx variables don't exist in ksh. $BASHPID can be emulated with the costly but portable sh -c 'echo $PPID' , and has been recently added to mksh. BASH_LINE is .sh.lineno in ksh93 and LINENO in mksh. BASH_SUBSHELL is .sh.subshell in ksh93. Mksh and ksh93 both source the file given in ENV when they start up. EUID and UID don't exist in ksh93. Mksh calls them USER_ID and KSH_UID ; it doesn't have GROUPS . FUNCNAME and FUNCNEST don't exist in ksh. Ksh93 has .sh.fun and .sh.level . Functions declared with function foo { …; } (no parentheses!) have their own name in $0 . GLOBIGNORE exists in ksh93 but with a different name and syntax: it's called FIGNORE , and it's a single pattern, not a colon-separated list.
Use a @(…|…) pattern. Ksh's FIGNORE subsumes bash's, with a wholly different syntax. Ksh93 and mksh have nothing like HOSTTYPE , MACHTYPE and OSTYPE . Nor SHELLOPTS or TIMEFORMAT . Mksh has PIPESTATUS , but ksh93 doesn't. Mksh and ksh93 have RANDOM . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79424/"
]
} |
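To make a few of the compatibility points above concrete (an illustrative sketch, not from the original answer):

```sh
# Portable across bash, ksh93 and mksh: case conversion via typeset.
typeset -u shout="hello"
echo "$shout"                  # HELLO

# bash-only: =~ with capture groups via BASH_REMATCH.
if [[ "abc123" =~ ([0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}"  # 123 -- ksh93 has =~ but no BASH_REMATCH
fi

# Portable "is this variable set?" test instead of bash's [[ -v VAR ]]:
if [ -z "${VAR+1}" ]; then echo "VAR is unset"; fi
```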
158,933 | Is it possible to open an incognito session in bash ? For example, when we need to enter passwords in commands and don't want bash to add them to history. | When you want bash to stop logging your commands, just unset the HISTFILE variable: HISTFILE= All further commands should then no longer be logged to .bash_history . On the other hand, if you are actually supplying passwords as arguments to commands, you're already doing something wrong. .bash_history is not world-readable and therefore not the biggest threat in this situation: ps and /proc are the big problem. All users on the system can see the commands you're currently running with all of their arguments . Passing passwords as command line arguments is therefore inherently insecure . Use environment variables or config files (that you have chmodded 600) to securely supply passwords. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73778/"
]
} |
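Two related tricks (hedged additions beyond the original answer; they assume your startup files don't reset these variables):

```sh
# Start a throwaway "incognito" shell: nothing it runs is written to disk.
HISTFILE= bash

# With "ignorespace" in HISTCONTROL (often included via "ignoreboth"),
# any command typed with a leading space is not recorded:
export HISTCONTROL=ignorespace
 some-secret-command   # hypothetical command; note the leading space
```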
158,959 | I have this script which generates a random password, but it needs to be shuffled. I couldn't find a way to do it. Please help me.
num=("0" "1" "2" "3" "4" "5" "7" "8" "9")
special=("@" "#" "$" "%" "*" "-" "+")
upper=(A B C D E F G H I J K L M N O P Q R S T U V W X Y Z)
lower=(a b c d e f g h i j k l m n o p q r s t u v w x y z)
lower2=${#lower[*]} #${lower[$((RANDOM%lower2))]}
upper2=${#upper[*]} #${upper[$((RANDOM%upper2))]}
num2=${#num[*]} #${num[$((RANDOM%num2))]}
special2=${#special[*]} #${special[$((RANDOM%special2))]}
echo "${special[$((RANDOM%special2))]}${num[$((RANDOM%num2))]}${lower[$((RANDOM%lower2))]}${upper[$((RANDOM%upper2))]}${lower[$((RANDOM%lower2))]}${upper[$((RANDOM%upper2))]}${lower[$((RANDOM%lower2))]}${upper[$((RANDOM%upper2))]}"
This is the output: @7nOyIaJ How do I randomize this? | I'm assuming you want it randomized in the sense that "special" is always first. Here's a giant hacky way that does it. NOTE: Your script is scr.bash .
$ echo $(./scr.bash | fold -w1 | shuf | tr -d '\n')
Examples
$ echo $(./scr.bash | fold -w1 | shuf | tr -d '\n')
qT*Jyv8Y
$ echo $(./scr.bash | fold -w1 | shuf | tr -d '\n')
QbOvX3n-
$ echo $(./scr.bash | fold -w1 | shuf | tr -d '\n')
*Q5nGgIt
This is rough but shows you the approach. Details The fold command will break the output up into 1 character per line. There are other ways to do this but I opted for fold . The shuf command can "shuffle" the lines into a random order. The tr command will delete the newline characters ( \n ) from the initial fold that we did. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/158959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84725/"
]
} |
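A self-contained variant of the same idea (illustrative sketch, not the asker's exact pools): pick one character from each required class, pad with alternating lower/upper, then shuffle the order with the fold|shuf|tr idiom from the answer:

```bash
#!/bin/bash
num=( {0..9} ); upper=( {A..Z} ); lower=( {a..z} )
special=( '@' '#' '$' '%' '*' '-' '+' )
# One special and one digit guaranteed:
raw="${special[RANDOM % ${#special[@]}]}${num[RANDOM % ${#num[@]}]}"
# Pad to 8 characters with lower/upper pairs:
for _ in {1..3}; do
    raw+="${lower[RANDOM % ${#lower[@]}]}${upper[RANDOM % ${#upper[@]}]}"
done
# Shuffle the character order so the class pattern isn't fixed:
printf '%s\n' "$(printf '%s' "$raw" | fold -w1 | shuf | tr -d '\n')"
```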
158,960 | I'm trying to install a Debian package from source (via git). I downloaded the package, changed to the package's directory and ran the ./configure command, but it returned bash: ./configure: No such file or directory . What can be the problem? A configure.ac file is located in the program folder.
./configure
make
sudo make install | If the file is called configure.ac, do
$> autoconf
Depends: M4, Automake
If you're not sure what to do, try
$> cat readme
They must mean that you use "autoconf" to generate an executable "configure" file. So the order is:
$> autoconf
$> ./configure
$> make
$> make install | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/158960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487383/"
]
} |
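On many autotools projects the whole generation step is wrapped by autoreconf, which runs aclocal/autoconf/automake as needed (hedged addition; exact requirements vary per project):

```sh
autoreconf -fi      # -f: force regeneration, -i: copy in missing helper files
./configure
make
sudo make install
```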
158,997 | Alright, when I run certain commands the wrong way (misspelled, etc.), the terminal outputs this: > instead of computername:workingfolder username$ , and when I press enter it goes like this: >>> That would be if I pressed enter 3 times. | > is the default continuation prompt. That is what you will see if what you entered before had unbalanced quote marks. As an example, type a single quote on the command line followed by a few enter keys:
$ '
>
>
>
The continuation prompts will occur until you either (a) complete the command with a closing quote mark or (b) type Ctrl + D to finish input, at which point the shell will respond with an error message about the unbalanced quotes, or (c) type Ctrl + C which will abort the command that you were entering. How this is useful Sometimes, you may want to enter a string which contains embedded newlines. You can do that as follows:
$ paragraph='first line
> second line
> third line
> end'
Now, when we display that shell variable, you can see that the prompts have disappeared but the newlines are retained:
$ echo "$paragraph"
first line
second line
third line
end | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/158997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
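The continuation prompt itself is held in the PS2 shell variable, so it can be customized (small illustrative addition):

```sh
$ PS2='(continued)> '
$ echo 'one
(continued)> two'
one
two
```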
159,010 | I have a long running bash instance (inside a screen session) that is executing a complex set of commands inside a loop (with each loop doing pipes, redirects, etc). The long command line was written inside the terminal - it's not inside any script. Now, I know the bash process ID, and I have root access - how can I see the exact command line being executed inside that bash ? Example
bash$ echo $$
1234
bash$ while true ; do \
 someThing | somethingElse 2>/foo/bar | \
 yetAnother ; sleep 600 ; done
And in another shell instance, I want to see the command line executed inside PID 1234:
bash$ echo $$
5678
bash$ su -
sh# cd /proc/1234
sh# # Do something here that will display the string \
 'while true ; do someThing | somethingElse 2>/foo/bar | \
 yetAnother ; sleep 600 ; done'
Is this possible? EDIT #1 Adding counter-examples for some answers I've got. About using the cmdline under /proc/PID : that doesn't work, at least not in my scenario. Here's a simple example:
$ echo $$
8909
$ while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done
In another shell:
$ cat /proc/8909/cmdline
bash
Using ps -p PID --no-headers -o cmd is just as useless:
$ ps -p 8909 --no-headers -o cmd
bash
ps -eaf is also not helpful:
$ ps -eaf | grep 8909
ttsiod 8909 8905 0 10:09 pts/0 00:00:00 bash
ttsiod 30697 8909 0 10:22 pts/0 00:00:00 sleep 30
ttsiod 31292 13928 0 10:23 pts/12 00:00:00 grep --color=auto 8909
That is, there's no output of the ORIGINAL command line, which is what I'm looking for - i.e. the while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done . | I knew I was grasping at straws, but UNIX never fails! Here's how I managed it:
bash$ gdb --pid 8909
...
Loaded symbols for /lib/i386-linux-gnu/i686/cmov/libnss_files.so.2
0xb76e7424 in __kernel_vsyscall ()
Then at the (gdb) prompt I ran the command call write_history("/tmp/foo") which will write this history to the file /tmp/foo .
(gdb) call write_history("/tmp/foo")
$1 = 0
I then detach from the process.
(gdb) detach
Detaching from program: /bin/bash, process 8909
And quit gdb.
(gdb) q
And sure enough...
bash$ tail -1 /tmp/foo
while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done
For easy future re-use, I wrote a bash script , automating the process. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6859/"
]
} |
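The same trick can be scripted non-interactively with gdb's batch mode (hedged sketch; it needs the same permissions as attaching manually, and a bash built with history support):

```sh
# Dump the in-memory history of the bash process with PID 8909:
sudo gdb -p 8909 -batch -ex 'call write_history("/tmp/foo")'
tail -1 /tmp/foo
```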
159,015 | I might be mistaken here, but I was watching someone navigate using the cd command, and without actually executing it, they were able to show the folder contents of the current folder. So if I type cd Downloads/Stuff then, without pressing enter, can I list the content of the Downloads/Stuff folder? | It's the programmable completion feature of the shell. You can simply press the TAB key twice to gain this behavior. Imagine you type cd Downloads/St and then press the TAB key. St will be completed to Stuff if it is the only folder starting with St . If there are other folders starting with St in there, you will get a list of them by pressing TAB twice. For example:
$ cd Downloads/St<tab><tab>
Stuff/ Stage/ Start/
Another example: When you type cd Downloads/ and then press the TAB key twice, everything you can cd to will be listed:
$ cd Downloads/<tab><tab>
Stuff/ Stage/ Start/ Otherfolder/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84558/"
]
} |
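A related readline tweak (an addition, not from the original answer) makes a single TAB list all matches immediately instead of requiring two presses:

```sh
# For the current shell session:
bind 'set show-all-if-ambiguous on'
# Or permanently, by adding this line (without bind) to ~/.inputrc:
# set show-all-if-ambiguous on
```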
159,016 | My preferred desktop environment lately has been Lxde. I like handling most things from the command line, so Gnome and KDE always seem to get in my way more than I like. But I do envy some of the new window manager features. Openbox does a pretty good job (and with lxde it does slightly better). But, I would really like to have dynamic keybindings at times and smarter window tiling/auto-arrangement. Static configs just don't quite cut it sometimes. It seems like some python hooks would do the trick, but I haven't had much luck finding support for it. I was hoping someone knows of a python project that ties into openbox or some other compliant window manager. I've seen some newer WMs (qtile for instance) but I am a bit wary of its age/reliability. I really don't need widget support and all that jazz, just looking for scriptable keybindings, and a semi-pleasant wrapper around window control. Do you know of any such projects? Or am I looking at a python/xlib solution? | It's the programmable completion feature of the shell. You can simply press the TAB key twice to gain this behavior. Imagine you type cd Downloads/St and then press the TAB key. St will be completed to Stuff if it is the only folder starting with St . If there are other folders starting with St in there, you will get a list of them by pressing TAB twice. For example:
$ cd Downloads/St<tab><tab>
Stuff/ Stage/ Start/
Another example: When you type cd Downloads/ and then press the TAB key twice, everything you can cd to will be listed:
$ cd Downloads/<tab><tab>
Stuff/ Stage/ Start/ Otherfolder/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86530/"
]
} |
159,033 | I'm trying to tail a log file on multiple remote machines and forward the output to my local workstation. I want connections to close when pressing Ctrl - C . At the moment I have the following function that almost works as intended.
function dogfight_tail() {
  logfile=/var/log/server.log
  pids=""
  for box in 02 03; do
    ssh server-$box tail -f $logfile | grep $1 &
    pids="$pids $!"
  done
  trap 'kill -9 $pids' SIGINT
  trap wait
}
The connections close and I receive the output from tail . BUT, there is some kind of buffering going on because the output comes in batches. And here's the fun part… I can see the same buffering behaviour when executing the following and appending "test" to the file /var/log/server.log on the remote machines 4-5 times…
ssh server-01 "tail -f /var/log/server.log | grep test"
…and found two ways of disabling it… Add the -t flag to ssh:
ssh -t server-01 "tail -f /var/log/server.log | grep test"
Remove the quotation from the remote command:
ssh server-01 tail -f /var/log/server.log | grep test
However, neither of these approaches works for the function that executes on multiple machines mentioned above. I have tried dsh, which has the same buffering behaviour when executing:
dsh -m server-01,server-02 -c "tail -f /var/log/server.log | grep test"
Same here, if I remove the quotation, the buffering goes away and everything works fine:
dsh -m server-01,server-02 -c tail -f /var/log/server.log | grep test
Also tried parallel-ssh which works exactly the same as dsh . Can somebody explain what's going on here? How do I fix this problem? Would be ideal to go with straight ssh if possible. P.S. I do not want to use multitail or similar since I want to be able to execute arbitrary commands. | What you see is the effect of the standard stdout buffer in grep provided by Glibc. The best solution is to disable it by using --line-buffered (GNU grep; I'm not sure what other implementations might support it or something similar). As for why this only happens in some cases: ssh server "tail -f /var/log/server.log | grep test" runs the whole command in the quotes on the server - thus grep waits to fill its buffer. ssh server tail -f /var/log/server.log | grep test runs grep on your local machine on the output tail sends through the ssh channel. The key part here is that grep adjusts its behaviour depending on whether its stdin is a terminal or not. When you run ssh -t , the remote command is running with a controlling terminal and thus the remote grep behaves like your local one. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86538/"
]
} |
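Concretely, either of these variants should avoid the batching (hedged additions; --line-buffered is GNU grep, stdbuf is GNU coreutils):

```sh
# Ask grep itself to flush after every line:
ssh server-01 "tail -f /var/log/server.log | grep --line-buffered test"

# Or force line-buffered stdout on an arbitrary filter with stdbuf:
ssh server-01 "tail -f /var/log/server.log | stdbuf -oL grep test"
```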
159,086 | I am running Raspbian on a Raspberry Pi. When I plug in a GSM modem I see two interfaces - wwan0 & ppp0 . wwan0 exists even when the GSM modem is plugged in but not connected; ppp0 exists only when the GSM modem is connected. Questions What is the difference between wwan0 and ppp0 , and why do I see ppp0 in addition to wwan0 ? Why is the IP address assigned to ppp0 and not wwan0 after a connection is established? | 1. What is the difference between wwan0 & ppp0 and why do I see ppp0 in addition to wwan0 wwan0 is a network interface exposed by the modem via USB. ppp0 is the PPP interface created by pppd when the modem gets connected using an ATD call on the serial port. 2. Why is the IP address assigned to ppp0 and not wwan0 after a connection is established. Your connection manager doesn't know how to use the wwan interface and just uses the 'legacy' method of doing everything over a TTY (both AT commands for control and PPP for data). With some more detail... Your modem exposes a WWAN network interface, but you're not using it. Instead, your connection manager is launching a PPP session over the same (or another) serial port where you send the AT commands (which is why you get the ppp0 interface only when connected). If you're targeting LTE speeds you do want to use the WWAN interface instead; so try to use a connection manager that knows how to use that interface (e.g. ModemManager ). Knowing which modem it is would help to give a better answer anyway... If this is e.g. a Qualcomm-based modem (and your kernel is >= 3.4), you're likely getting not only a WWAN interface in addition to the ttys, but also a QMI control interface at /dev/cdc-wdm. If you want to use that wwan0 interface you cannot use AT commands, and instead need to launch the connection using the QMI protocol through e.g. libqmi . If this is e.g. an MBIM-based modem (and your kernel is >= 3.8), then you'll also get a /dev/cdc-wdm interface, but will need to use the MBIM protocol to get the modem connected with the wwan0, through e.g. libmbim . If this is e.g. a Huawei modem, you may instead be getting a wwan interface that needs the AT^NDISDUP command to get connected. If this is e.g. an Icera-based modem, the connection AT command may instead be AT%IPDPACT... And so on. Basically, as soon as you get a WWAN interface, you just need to use either a vendor-specific AT command, or a generic QMI or MBIM command. Again, ModemManager does this for you. A bit more on modem management protocols can be found in these slides: Mobile Internet in GNOME Mobile Broadband Modem Protocols | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86475/"
]
} |
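For a Qualcomm/QMI modem specifically, the connection over wwan0 would be brought up with libqmi's command-line tool, roughly like this (hedged sketch: the device path, APN and DHCP client are placeholders/assumptions that depend on the modem and carrier):

```sh
sudo qmicli -d /dev/cdc-wdm0 --wds-start-network="apn=internet,ip-type=4" \
            --client-no-release-cid
sudo ip link set wwan0 up
sudo dhclient wwan0    # or udhcpc/dhcpcd, depending on the system
```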
159,094 | I have a deb package for installation. Shall I install it by dpkg -i my.deb , or by apt? Will both handle the software dependency problem well? If by apt, how can I install from the deb by apt? | When you use apt to install a package, under the hood it uses dpkg . When you install a package using apt, it first creates a list of all the dependencies and downloads them from the repository. Once the download is finished it calls dpkg to install all those files, satisfying all the dependencies. So if you have a .deb file, you can install it by: Using:
sudo dpkg -i /path/to/deb/file
sudo apt-get install -f
Using:
sudo apt install ./name.deb
or
sudo apt install /path/to/package/name.deb
With old apt-get versions you must first move your deb file to the /var/cache/apt/archives/ directory. For both, after executing this command, it will automatically download the dependencies. First installing gdebi and then opening your .deb file using it ( Right-click -> Open with ). It will install your .deb package with all its dependencies. Note : APT maintains the package index, which is a database ( /var/cache/apt/*.bin ) of the packages available in the repos defined in the /etc/apt/sources.list file and in the /etc/apt/sources.list.d directory. All these methods will fail to satisfy the software dependencies if the dependencies required by the deb are not present in the package index. Why use sudo apt-get install -f after sudo dpkg -i /path/to/deb/file (as mentioned in method 1)? From man apt-get : -f, --fix-broken Fix; attempt to correct a system with broken dependencies in place. When dpkg installs a package and a package dependency is not satisfied, it leaves the package in an "unconfigured" state and that package is considered broken. The sudo apt-get install -f command tries to fix this broken package by installing the missing dependency. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/159094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
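Before installing, a .deb can also be inspected (a small addition beyond the original answer):

```sh
dpkg -I name.deb   # show control information and declared dependencies
dpkg -c name.deb   # list the files the package would install
```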
159,123 | I'm currently writing a program that prints to a Zebra printer. Because my office doesn't have a Zebra printer, we print to a Linux VM running netcat with nc -k -l -p 9100 | tee labels.txt so that we can view the output to the printer and verify correctness. Unfortunately, this file gets pretty big and takes up a lot of space on the VM, especially because no one ever remembers to clear it. Using tee seems to be a good option for writing to a file, but it doesn't have all the features I'd like. I'd like labels.txt to only grow to a certain size (say 20 MB), at which point it begins overwriting itself. Or perhaps renames labels.txt to labels.txt.1 , allowing labels.txt to grow and then overwriting labels.txt.1 . Is there any way to do this with netcat / tee ? Or should I be looking at another program? | When you use apt to install a package, under the hood it uses dpkg . When you install a package using apt, it first creates a list of all the dependencies and downloads them from the repository. Once the download is finished it calls dpkg to install all those files, satisfying all the dependencies. So if you have a .deb file, you can install it by: Using:
sudo dpkg -i /path/to/deb/file
sudo apt-get install -f
Using:
sudo apt install ./name.deb
or
sudo apt install /path/to/package/name.deb
With old apt-get versions you must first move your deb file to the /var/cache/apt/archives/ directory. For both, after executing this command, it will automatically download the dependencies. First installing gdebi and then opening your .deb file using it ( Right-click -> Open with ). It will install your .deb package with all its dependencies. Note : APT maintains the package index, which is a database ( /var/cache/apt/*.bin ) of the packages available in the repos defined in the /etc/apt/sources.list file and in the /etc/apt/sources.list.d directory. All these methods will fail to satisfy the software dependencies if the dependencies required by the deb are not present in the package index. Why use sudo apt-get install -f after sudo dpkg -i /path/to/deb/file (as mentioned in method 1)? From man apt-get : -f, --fix-broken Fix; attempt to correct a system with broken dependencies in place. When dpkg installs a package and a package dependency is not satisfied, it leaves the package in an "unconfigured" state and that package is considered broken. The sudo apt-get install -f command tries to fix this broken package by installing the missing dependency. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/159123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73386/"
]
} |
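For the size-capping goal described in the question, one hedged sketch keeps tee's live view but hands the file writing to split from GNU coreutils (process substitution requires bash; the chunk prefix and size are arbitrary choices):

```bash
# Live view stays on stdout; on disk the stream is cut into 20 MB chunks
# (labels.00, labels.01, ...) that a cron job can prune:
nc -k -l -p 9100 | tee >(split -d -b 20M - labels.)
```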
159,174 | I am confused by the concept of enabled or active and disabled or inactive. Could someone explain it? | The man page for systemd has the info that you're looking for. excerpt systemd provides a dependency system between various entities called "units". Units encapsulate various objects that are relevant for system boot-up and maintenance. The majority of units are configured in unit configuration files, whose syntax and basic set of options is described in systemd.unit(5), however some are created automatically from other configuration or dynamically from system state. Units may be 'active' (meaning started, bound, plugged in, ... depending on the unit type, see below), or 'inactive' (meaning stopped, unbound, unplugged, ...), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called 'activating', 'deactivating'). A special 'failed' state is available as well which is very similar to 'inactive' and is entered when the service failed in some way (process returned error code on exit, or crashed, or an operation timed out). If this state is entered the cause will be logged, for later reference. Note that the various unit types may have a number of additional substates, which are mapped to the five generalized unit states described here. Breakdown So if you've read the above and don't really understand the difference, here it is, in a nutshell. enabled - a service (unit) is configured to start when the system boots disabled - a service (unit) is configured to not start when the system boots active - a service (unit) is currently running. inactive - a service (unit) is currently not running, but may get started, i.e. become active, if something attempts to make use of the service. inactive This last one can seem like the most perplexing, but think of systemd along the same lines as xinetd . It can manage your services for you and start them up, on demand when needed. So while the services are "off" they're in the inactive state, but when started, they can become active . This state can also occur when a service (unit) has been enabled but not yet manually started. So the service lays "dormant" in the stopped or failed state until either the service is manually started, or the system goes through a reboot, which would cause the service to become active due to its enablement. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72520/"
]
} |
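These states can be queried per unit (small illustrative addition; sshd.service is a placeholder):

```sh
systemctl is-enabled sshd.service   # enabled / disabled
systemctl is-active sshd.service    # active / inactive / failed
systemctl status sshd.service       # both, plus recent journal lines
```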
159,184 | I installed Debian Wheezy with the GNOME GUI, and then upgraded to Jessie using the terminal. After the upgrade was done, I tried logging out and logging back in, but it booted straight to command line view with no GUI. When I logged in and entered the startx command as root, all I got was a black screen. Do I need to manually update GNOME somehow, or is there another way to get to the GUI desktop? How do I change the boot parameters to boot to GUI by default? | The man page for systemd has the info that you're looking for. excerpt systemd provides a dependency system between various entities called "units". Units encapsulate various objects that are relevant for system boot-up and maintenance. The majority of units are configured in unit configuration files, whose syntax and basic set of options is described in systemd.unit(5), however some are created automatically from other configuration or dynamically from system state. Units may be 'active' (meaning started, bound, plugged in, ... depending on the unit type, see below), or 'inactive' (meaning stopped, unbound, unplugged, ...), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called 'activating', 'deactivating'). A special 'failed' state is available as well which is very similar to 'inactive' and is entered when the service failed in some way (process returned error code on exit, or crashed, or an operation timed out). If this state is entered the cause will be logged, for later reference. Note that the various unit types may have a number of additional substates, which are mapped to the five generalized unit states described here. Breakdown So if you've read the above and don't really understand the difference, here it is, in a nutshell. enabled - a service (unit) is configured to start when the system boots disabled - a service (unit) is configured to not start when the system boots active - a service (unit) is currently running. inactive - a service (unit) is currently not running, but may get started, i.e. become active, if something attempts to make use of the service. inactive This last one can seem like the most perplexing, but think of systemd along the same lines as xinetd . It can manage your services for you and start them up, on demand when needed. So while the services are "off" they're in the inactive state, but when started, they can become active . This state can also occur when a service (unit) has been enabled but not yet manually started. So the service lays "dormant" in the stopped or failed state until either the service is manually started, or the system goes through a reboot, which would cause the service to become active due to its enablement. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86554/"
]
} |
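For the boot-to-GUI part of the question, on systemd-based systems (such as Jessie) the default boot target can be switched like this (hedged addition; assumes a display manager is installed):

```sh
sudo systemctl set-default graphical.target   # boot to the GUI by default
sudo systemctl isolate graphical.target       # switch to it right now
```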
159,191 | I am trying to set up KVM on an Ubuntu 14.04 host machine. I use a wireless interface to access the internet on my machine. I have set up the wireless interface in my /etc/network/interfaces as below.
auto wlan0
iface wlan0 inet static
address 192.168.1.9
netmask 255.255.255.0
gateway 192.168.1.1
wpa-ssid My_SSID
wpa-psk SSID_Password
dns-nameservers 8.8.8.8
dns-search lan
dns-domain lan
I checked if my machine is available for virtualization and this command confirms that my hardware supports virtualization.
egrep '(vmx|svm)' /proc/cpuinfo
I installed the necessary packages for KVM virtualization as below.
apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder
I also installed the bridge-utils package to configure a bridge network for my KVM.
apt-get install bridge-utils
I modified my /etc/network/interfaces to allow the bridged network as below.
auto br0
iface br0 inet static
address 192.168.1.40
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8
dns-search lan
dns-domain lan
bridge_ports wlan0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
wpa-ssid my_ssid
wpa-psk ssid_password
After the above step, I am able to ping 192.168.1.40 and I can also see br0 and virbr0 listed in the output of the ifconfig -a command. I am also able to access the internet without any problem with my wireless interface. However, after the above step if I try to add another OS using the ubuntu-vm-builder command, I am not able to add a new OS. This is the command I use to add a new OS.
sudo ubuntu-vm-builder kvm trusty \
--domain rameshpc \
--dest demo1 \
--hostname demo1 \
--arch amd64 \
--mem 1024 \
--cpus 4 \
--user ladmin \
--pass password \
--bridge br0 \
--ip 192.168.1.40 \
--mask 255.255.255.0 \
--net 192.168.1.0 \
--bcast 192.168.1.255 \
--gw 192.168.1.1 \
--dns 8.8.8.8 \
--components main,universe \
--addpkg acpid \
--addpkg openssh-server \
--addpkg linux-image-generic \
--libvirt qemu:///system
I have seen that setting up a bridged network using a wireless interface is quite complicated, as discussed in this question. However, as the answer describes, it is possible using a tunneling device. I have tried the option as suggested in this link. But I couldn't get it to work. | As someone rightly said once, Nothing is impossible in Linux TM , I could achieve KVM on my host with a bridged network over a wireless interface. These are the steps I followed to accomplish the same. I installed the virt-manager package to manage the installation more efficiently. I installed it as below.
sudo apt-get install virt-manager
Now, create a new sub-network using Virt Manager's GUI as highlighted below. This is basically a sub-network of our existing host network. After setting up this new sub-network, check if the network is available and ping some sites to check the network connectivity. Also, check the routing information using the route command and make sure wlan0 and virbr2 don't have the same destination. Now, the final step to make it work is to issue the below command. Here 192.168.1.9 is the host machine address.
arp -i wlan0 -Ds 192.168.1.9 wlan0 pub
After the above step, I was able to successfully install a Fedora guest OS using virt-manager . References http://specman1.wordpress.com/2014/01/02/wireless-bridging-virtual-machines-kvm/ https://superuser.com/questions/694929/wireless-bridge-on-kvm-virtual-machine | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
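The manual arp entry does not survive a reboot; one hedged way to persist it is to re-issue it at boot, e.g. from /etc/rc.local (the address and interface are the ones from the answer - adjust for your host):

```sh
# In /etc/rc.local, before the final "exit 0":
arp -i wlan0 -Ds 192.168.1.9 wlan0 pub
```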
159,221 | Executing journalctl under a CentOS 7 system just prints messages generated after the last boot. The command # journalctl --boot=-1 prints Failed to look up boot -1: Cannot assign requested address and exits with status 1. Comparing it to a current Fedora system I notice that CentOS 7 does not have /var/log/journal (and journalctl does not provide --list-boots ). Thus my question: how to display log messages which were written before the last boot date? Or, perhaps this functionality has to be enabled on CentOS 7? (The journalctl man page lists 'systemd 208' as version number.) | tl;dr On CentOS 7, you have to enable the persistent storage of log messages:
# mkdir /var/log/journal
# systemd-tmpfiles --create --prefix /var/log/journal
# systemctl restart systemd-journald
Otherwise, the journal log messages are not retained between boots. Details Whether journald retains log messages from previous boots is configured via /etc/systemd/journald.conf . The default setting under CentOS 7 is:
[Journal]
Storage=auto
Where the journald.conf man page explains auto as: One of "volatile", "persistent", "auto" and "none". If "volatile", journal log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed). If "persistent", data will be stored preferably on disk, i.e. below the /var/log/journal hierarchy (which is created if needed), with a fallback to /run/log/journal (which is created if needed), during early boot and if the disk is not writable. "auto" is similar to "persistent" but the directory /var/log/journal is not created if needed, so that its existence controls where log data goes (emphasis mine). The systemd-journald.service man page thus states that: By default, the journal stores log data in /run/log/journal/. Since /run/ is volatile, log data is lost at reboot. To make the data persistent, it is sufficient to create /var/log/journal/ where systemd-journald will then store the data. Apparently, the default was changed in Fedora 19 (to persistent storage) and since CentOS 7 is derived from Fedora 18 - it is still non-persistent there, by default. Persistence is implemented by default outside of journald via /var/log/messages and the rotated versions /var/log/messages-YYYYMMDD which are written by rsyslogd (which runs by default and gets its input from journald). Thus, to enable persistent logging with journald under RHEL/CentOS 7 one has to
# mkdir /var/log/journal
and then fix permissions and restart journald, e.g. via
# systemd-tmpfiles --create --prefix /var/log/journal
# systemctl restart systemd-journald | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/159221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
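Once persistent storage is in place, older boots become addressable (small illustrative addition):

```sh
journalctl --list-boots          # enumerate stored boots with offsets
journalctl --boot=-1             # messages from the previous boot
journalctl --since "2 days ago"  # or filter by time
```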
159,252 | I have set up a network as such: Set up host-only networking on VirtualBox. The first adapter is configured with NAT, the second with host-only networking. HOST: Windows GUEST: CentOS VM1, CentOS VM2 (clone of VM1) When executing ifconfig -a on both VMs, I noticed that the MAC addresses are exactly the same. My question is how am I able to ping from VM1 to VM2 considering that the MAC addresses are the same?
VM1:
eth0      Link encap:Ethernet  HWaddr 08:00:27:AF:A3:28
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feaf:a328/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10671 (10.4 KiB)  TX bytes:5682 (5.5 KiB)
eth1      Link encap:Ethernet  HWaddr 08:00:27:C4:A8:B6
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec4:a8b6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:859 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:114853 (112.1 KiB)  TX bytes:4823 (4.7 KiB)
ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:feaf:a328/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:fec4:a8b6/64 scope link
       valid_lft forever preferred_lft forever
VM2:
eth0      Link encap:Ethernet  HWaddr 08:00:27:AF:A3:28
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feaf:a328/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:114 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:41594 (40.6 KiB)  TX bytes:13479 (13.1 KiB)
eth1      Link encap:Ethernet  HWaddr 08:00:27:C4:A8:B6
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec4:a8b6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1900 errors:0 dropped:0 overruns:0 frame:0
          TX packets:78 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:259710 (253.6 KiB)  TX bytes:9736 (9.5 KiB)
ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:feaf:a328/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:fec4:a8b6/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
| This is one of those things that surprises people because it goes against what they've been taught. 2 machines with the same hardware MAC address on the same broadcast domain can talk to each other just fine as long as they have different IP addresses (and the switching gear plays nice).
Let's start with a test setup: VM1
$ ip addr show dev enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3c:f9:ad brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.2/24 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe3c:f9ad/64 scope link
       valid_lft forever preferred_lft forever
VM2
$ ip addr show dev enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3c:f9:ad brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.3/24 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe3c:f9ad/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
So notice how both machines have the same MAC addr, but different IPs. Let's try pinging: VM1
$ ping -c 3 169.254.0.3
PING 169.254.0.3 (169.254.0.3) 56(84) bytes of data.
64 bytes from 169.254.0.3: icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from 169.254.0.3: icmp_seq=2 ttl=64 time=0.646 ms
64 bytes from 169.254.0.3: icmp_seq=3 ttl=64 time=0.636 ms
--- 169.254.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.505/0.595/0.646/0.070 ms
So, the remote host responded. Well, that's weird. Let's look at the neighbor table: VM1
$ ip neigh
169.254.0.3 dev enp0s8 lladdr 08:00:27:3c:f9:ad REACHABLE
10.0.2.2 dev enp0s3 lladdr 52:54:00:12:35:02 STALE
That's our MAC! Let's do a tcpdump on the other host to see that it's actually getting the traffic: VM2
$ tcpdump -nn -e -i enp0s8 'host 169.254.0.2'
16:46:21.407188 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.2 > 169.254.0.3: ICMP echo request, id 2681, seq 1, length 64
16:46:21.407243 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.3 > 169.254.0.2: ICMP echo reply, id 2681, seq 1, length 64
16:46:22.406469 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.2 > 169.254.0.3: ICMP echo request, id 2681, seq 2, length 64
16:46:22.406520 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.3 > 169.254.0.2: ICMP echo reply, id 2681, seq 2, length 64
16:46:23.407467 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.2 > 169.254.0.3: ICMP echo request, id 2681, seq 3, length 64
16:46:23.407517 08:00:27:3c:f9:ad > 08:00:27:3c:f9:ad, ethertype IPv4 (0x0800), length 98: 169.254.0.3 > 169.254.0.2: ICMP echo reply, id 2681, seq 3, length 64
So, as you can see, even though the traffic has the same source and destination hardware MAC address, everything still works perfectly fine. The reason for this is that the MAC address lookup comes very late in the communication process. The box has already used the destination IP address and the routing tables to determine which interface it is going to send the traffic out on. The MAC address that it adds onto the packet comes after that decision. I should also note that this is dependent upon the layer 2 infrastructure: how these machines are connected, and what sits between them. If you've got a more intelligent switch, this may not work. It may see this packet coming through and reject it. Now, going on to the traditional belief that this doesn't work. Well it is true, from a certain point of view :-) The problem arises when another host on the network needs to talk to either of these machines.
When the traffic goes out, the switch is going to route the traffic by the destination mac address, and it's only going to send it to a single host. There are a few possible reasons why this test setup works: The traffic is broadcast to all ports, or to all ports which the MAC matches. The switch discards the source port as an option when determining the destination port. The switch is actually a layer 3 switch and is routing based on the IP address, and not the mac address. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44594/"
]
} |
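Since the duplicate MACs here come from cloning the VM, the practical fix is to give one guest's adapter a new address - either via the VirtualBox adapter settings, or temporarily from inside the guest (hedged addition; the MAC below is an arbitrary example in VirtualBox's 08:00:27 range):

```sh
# Inside the guest; lasts until reboot:
sudo ip link set dev eth1 down
sudo ip link set dev eth1 address 08:00:27:c4:a8:b7
sudo ip link set dev eth1 up
```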
159,253 | I want to decode URL encoding. Is there any built-in tool for doing this, or could anyone provide me with sed code that will do this? I did search a bit through unix.stackexchange.com and on the internet but I couldn't find any command line tool for decoding url encoding. What I want to do is simply in-place edit a txt file so that: %21 becomes ! %23 becomes # %24 becomes $ %26 becomes & %27 becomes ' %28 becomes ( %29 becomes ) And so on. | Found these Python one-liners that do what you want: Python2
$ alias urldecode='python -c "import sys, urllib as ul; \
    print ul.unquote_plus(sys.argv[1])"'
$ alias urlencode='python -c "import sys, urllib as ul; \
    print ul.quote_plus(sys.argv[1])"'
Python3
$ alias urldecode='python3 -c "import sys, urllib.parse as ul; \
    print(ul.unquote_plus(sys.argv[1]))"'
$ alias urlencode='python3 -c "import sys, urllib.parse as ul; \
    print(ul.quote_plus(sys.argv[1]))"'
Example
$ urldecode 'q+werty%3D%2F%3B'
q werty=/;
$ urlencode 'q werty=/;'
q+werty%3D%2F%3B
References Urlencode and urldecode from a command line | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/159253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
159,273 | I am seeing hundreds of different connections to the same ip and port scrolling by when running nethogs. Occasionally the foreign IP and port will change (not always 80, but sometimes). I've noticed that my router CPU usage jumps to 100% when these huge bursts of connections happen, so I'm fairly certain that this massive spike keeps overloading the router and essentially making my network useless for up to a full 60 seconds. Things I've tried: sudo netstat -tulpn | grep $whateverip : nothing sudo netstat --inet -ap | grep $whateverip : nothing sudo lsof -i | grep $whateverport : by the time this finishes, the port and IP have changed again This may just be paranoia, but I swear it seems like every time I try to dig into more info on the connection, the port and IP change, so my command gives me nothing. Am I dealing with something evil living inside my server? Or is there some more benign explanation that I'm missing in my limited networking knowledge? Also note that this is an Ubuntu server with no UI, so it's not me chasing around someone just browsing reddit. | Found these Python one liners that do what you want: Python2 $ alias urldecode='python -c "import sys, urllib as ul; \ print ul.unquote_plus(sys.argv[1])"'$ alias urlencode='python -c "import sys, urllib as ul; \ print ul.quote_plus(sys.argv[1])"' Python3 $ alias urldecode='python3 -c "import sys, urllib.parse as ul; \ print(ul.unquote_plus(sys.argv[1]))"'$ alias urlencode='python3 -c "import sys, urllib.parse as ul; \ print (ul.quote_plus(sys.argv[1]))"' Example $ urldecode 'q+werty%3D%2F%3B'q werty=/;$ urlencode 'q werty=/;'q+werty%3D%2F%3B References Urlencode and urldecode from a command line | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/159273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86680/"
]
} |
159,278 | In the directory /home/in I have files like this: crust.MC12345.txt crust.etcMC12345.txtcrust.MC23456.txtcrust.etcMC23456.txt crust.etctcMC23456.txt I only need to move crust.etcMC12345.txt and crust.etcMC23456.txt to another dir, /home/out . What is the pattern I use in the mv command for the above scenario? | If I correctly understand your question, the answer is very simple: mv crust.etcMC* /home/out or, if etc is not a literal string but for example any three characters, then: mv crust.???MC* /home/out | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86683/"
]
} |
159,286 | In Linux Mint 17 I have Suspend when inactive for under Power Management set to 10 minutes. The problem is that the system suspends even when I'm listening to music using Spotify . Is there any way to prevent this? | I found this article which suggests a couple of ways to stop the screensaver from activating. I have not tested this, but since they presumably issue events to stop the activation (the second does at least), that should also count as activity for the inactivity monitor. You'll have to test this though; I'm not sure it will work. Install caffeine . This can be done by adding a PPA to your system and then installing it like any other package. As long as you're using the normal Mint, this should also work for you. sudo add-apt-repository ppa:caffeine-developers/ppasudo apt-get updatesudo apt-get install caffeine Then, launch caffeine and its icon should appear in your system tray. You can choose which programs should cancel suspending. Note: As of the 2.7 release of caffeine , the program no longer has a GUI and works only as a daemon. When running, it will prevent the screensaver from activating so long as the active window is full screen. The LightsOn script. It won't check spotify by default but it is easy enough to modify it to do so. Just add spotify to the delay_progs array at the beginning of the script: delay_progs=("spotify") Then, add the script to your startup programs so it runs in the background, and it should stop you from suspending if spotify is running. Note that this does not check whether any music is playing, just whether the program is running. Let me know if these don't work for the suspension and I'll try and hack something together using xdotool or similar programs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78809/"
]
} |
159,344 | I am trying to better understand the network setup in my machine. Host Machine Setup I have a wireless interface ( wlan0 ) on my host machine which has the IP address 192.168.1.9 . The default gateway of this host is the router which goes to the outside world through my ISP, whose IP address is 192.168.1.1 . The route -n command in my host machine returns me the output as, Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0192.168.1.160 0.0.0.0 255.255.255.224 U 0 0 0 virbr2 Guest Machine Setup Now, I set up a guest OS in KVM as below. The KVM is in a sub-network which has the details as 192.168.1.160/27 . The DHCP start is 192.168.1.176 and the DHCP end is 192.168.1.190 . I also did the below command for my KVM configuration to work. arp -i wlan0 -Ds 192.168.1.9 wlan0 pub From the guest OS, I see that my IP address is 192.168.1.179 . My route -n command in the guest machine returns me the output as, kernel IP routing tableDestination Gateway Genmask0.0.0.0 192.168.1.161 0.0.0.0192.168.1.160 0.0.0.0 255.255.255.224 How can I make the guest OS interact with the outside world? EDIT This is the output of virsh net-list --all . ramesh@ramesh-pc:~$ virsh net-list --all Name State Autostart Persistent---------------------------------------------------------- arpbr0 inactive yes yes default active yes yes proxyArp active yes yes | I would like to thank user slm for guiding me in the right direction in setting up the guest network in the KVM . I will add the screenshots to the answer so that it will be more informative. I assume the virt-manager package is installed and also the host machine is set up with the necessary packages for KVM to work. Preparing the Network For Guest to Host Interaction The main step is setting up the network. If the machine is not available in the network, then it serves no purpose, be it physical or virtual . Type virt-manager in the terminal. The console would show up as below. Click on Edit -> Connection Details and a new screen would pop up as below. Click on Virtual Networks tab and from there click on the + button to add a new network to the KVM guests. Click on Forward and then we would be presented with the below screen. Now, the IPv4 addresses we choose here are entirely up to us and we could optimize this step to suit our actual needs. After we click on Forward in the above screen, we would be presented with the below screen. In this step, it basically shows the address space available to us. In this step, choose forwarding to physical network and select the host's network interface which will help the guests to interact with the outside world. After the above step, we are almost done and would just be presented with the below screen, which is kind of a review of all the details we chose so far. Adding this new device to our Guest OS From the initial screen of virt-manager , click on Open and we will be presented with a screen as below. From the above screen, click on the i to open up another screen as below. Click on Add Hardware and select Network . In the Network tab, select the host device as our newly created network in the previous step and click on Finish as shown in the below screen. Testing in the guest OS Now, inside the guest OS make sure that you are able to ping the host machine and an outside host such as google.com; for example:
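A quick sketch of the test commands, run from inside the guest (the host and router addresses are the ones from the setup above; google.com is just an illustrative external host):
$ ping -c 3 192.168.1.9      # the host machine
$ ping -c 3 192.168.1.1      # the router / default gateway
$ ping -c 3 google.com       # the outside world (this also exercises DNS resolution)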
If the ping succeeds, then we have successfully set up our network in the guest OS. References The reference material used to set up the guest network | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
159,361 | I am unable to view PDFs in vifm with Evince (my PDF viewer of choice). How does one do this? | You need to configure vifm to open .pdf files with Evince. In your ~/.vifm/vifmrc add the following lines: command evince evince *.{pdf} &filetype *.pdf evince The first defines a user command, the second sets the default program for specific filetypes. The default vifmrc (that should be installed to /usr/share/vifm/vifmrc ) is very well commented and should provide all the documentation you need to get it up and running. The Arch wiki also has a vifm page that contains some helpful pointers and tips. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
159,367 | I know this question has probably been answered before. I have seen many threads about this in various places, but the answers are usually hard to extract for me. I am looking for help with an example usage of the 'sed' command. Say I wanted to act upon the file "hello.txt" (in the same directory as the prompt). Anywhere it contained the phrase "few", it should be changed to "asd". What would the command look like? | sed is the stream editor, in that you can use | (pipe) to send standard streams (STDIN and STDOUT specifically) through sed and alter them programmatically on the fly, making it a handy tool in the Unix philosophy tradition; but it can edit files directly, too, using the -i parameter mentioned below. Consider the following: sed -i -e 's/few/asd/g' hello.txt s/ is used to substitute the found expression few with asd : The few, the brave. The asd, the brave. /g stands for "global", meaning to do this for the whole line. If you leave off the /g (with s/few/asd/ , there always needs to be three slashes no matter what) and few appears twice on the same line, only the first few is changed to asd : The few men, the few women, the brave. The asd men, the few women, the brave. This is useful in some circumstances, like altering special characters at the beginnings of lines (for instance, replacing the greater-than symbols some people use to quote previous material in email threads with a horizontal tab while leaving a quoted algebraic inequality later in the line untouched), but in your example where you specify that anywhere few occurs it should be replaced, make sure you have that /g . The following two options (flags) are used in the command above: -i option is used to edit in place on the file hello.txt . -e option indicates the expression/command to run, in this case s/ . Note: It's important that you use -i -e to search/replace. If you do -ie , you create a backup of every file with the letter 'e' appended. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/159367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86730/"
]
} |
159,371 | I'm using SteamOS. SteamOS I believe is Debian based. I wiped the laptop and got it installed nicely. When I started moving my music over I got this message: I assume, I need to make some sort of partition larger but I haven't been able to figure out how to do that? As requested: desktop@steamos:~$ sudo fdisk -lWARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.Disk /dev/sda: 1000.2 GB, 1000204886016 bytes255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisk identifier: 0x116c49cc Device Boot Start End Blocks Id System/dev/sda1 1 1953525167 976762583+ ee GPTPartition 1 does not start on physical sector boundary.desktop@steamos:~$ df -h Filesystem Size Used Avail Use% Mounted onrootfs 9.3G 8.8G 27M 100% /udev 10M 0 10M 0% /devtmpfs 739M 360K 739M 1% /run/dev/disk/by-uuid/12742cc0-e489-472e-aa10-974d078d98e0 9.3G 8.8G 27M 100% /tmpfs 5.0M 0 5.0M 0% /run/locktmpfs 3.4G 25M 3.4G 1% /run/shm/dev/sda5 889G 119M 843G 1% /boot/dev/sda1 487M 128K 486M 1% /boot/efi/dev/sda3 9.3G 1.5G 7.4G 17% /boot/recoverydesktop@steamos:~$ | sed is the s tream ed itor , in that you can use | (pipe) to send standard streams (STDIN and STDOUT specifically) through sed and alter them programmatically on the fly, making it a handy tool in the Unix philosophy tradition; but can edit files directly, too, using the -i parameter mentioned below. Consider the following : sed -i -e 's/few/asd/g' hello.txt s/ is used to s ubstitute the found expression few with asd : The few, the brave. The asd, the brave. /g stands for "global", meaning to do this for the whole line. If you leave off the /g (with s/few/asd/ , there always needs to be three slashes no matter what) and few appears twice on the same line, only the first few is changed to asd : The few men, the few women, the brave. The asd men, the few women, the brave. This is useful in some circumstances, like altering special characters at the beginnings of lines (for instance, replacing the greater-than symbols some people use to quote previous material in email threads with a horizontal tab while leaving a quoted algebraic inequality later in the line untouched), but in your example where you specify that anywhere few occurs it should be replaced, make sure you have that /g . The following two options (flags) are combined into one, -ie : -i option is used to edit i n place on the file hello.txt . -e option indicates the e xpression/command to run, in this case s/ . Note: It's important that you use -i -e to search/replace. If you do -ie , you create a backup of every file with the letter 'e' appended. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/159371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86731/"
]
} |
159,406 | How to get the man page for ifcfg-$Interface files? Something like man 5 $Keyword for /etc/sysconfig/network-scripts/ifcfg-enp0s3 . Is there not a man page? Is the keyword wrong? RHEL:~# cat /etc/os-release | egrep "^NAME=|^VERSION="NAME="Red Hat Enterprise Linux Server"VERSION="7.0 (Maipo)"RHEL:~# RHEL:~# RHEL:~# man --versionman 2.6.3RHEL:~# RHEL:~# RHEL:~# man 5 ifcfgNo manual entry for ifcfg in section 5RHEL:~# RHEL:~# RHEL:~# whatis ifcfgifcfg (8) - simplistic script which replaces ifconfig IP managmentRHEL:~# RHEL:~# RHEL:~# man -k ifcfgifcfg (8) - simplistic script which replaces ifconfig IP managmentRHEL:~# RHEL:~# RHEL:~# file /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-enp0s3: ASCII textRHEL:~# Works fine on SUSE: SUSE:~# cat /etc/os-release | egrep "^NAME=|^VERSION="NAME=openSUSEVERSION="13.1 (Bottle)"SUSE:~# SUSE:~# SUSE:~# man --versionman 2.6.3SUSE:~# SUSE:~# SUSE:~# whatis ifcfgifcfg (5) - common elements of network interface configurationSUSE:~# SUSE:~# man 5 ifcfgSUSE:~# SUSE:~# SUSE:~# man -k ifcfgifcfg (5) - common elements of network interface configurationifcfg-bonding (5) - interface bonding configurationifcfg-bridge (5) - ethernet bridge interface configurationifcfg-tunnel (5) - network tunnel interface configurationifcfg-vlan (5) - virtual LAN interface configurationifcfg-wireless (5) - wireless LAN network interface configurationSUSE:~# SUSE:~# SUSE:~# file /etc/sysconfig/network/ifcfg-enp0s3 /etc/sysconfig/network/ifcfg-enp0s3: ASCII textSUSE:~# | I found it in the manual docs using find / -name '*ifcfg*' . Use man 5 nm-settings-ifcfg-rh ; it includes the most comprehensive documentation. It took me an hour to find it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50835/"
]
} |
159,418 | How do I delete the second "." character from a line? What I have is this (but it removes the first "." from the output): uname -r | sed s'/\./ /' which gives 2 6.18-164.2.1.el5PAE while I need the following output: 2.6 18-164.2.1.el5PAE | Simply add N to the end of the command for it to replace the Nth match, like this: uname -r | sed 's/\./ /2' What do you need it for, though? From the info page on sed : The `s' command can be followed by zero or more of the following FLAGS: g Apply the replacement to _all_ matches to the REGEXP, not just the first. NUMBER Only replace the NUMBERth match of the REGEXP. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
159,436 | I would like to do echo "[this thing]" This thing is \documentclass{article}\usepackage{rotating}\usepackage{pdfpages}\usepackage{verbatim}\usepackage{amsmath, amsfonts, amssymb, textcomp, mathtools, xparse}\usepackage[T4, OT1]{fontenc}\usepackage{graphicx}\graphicspath{{/Users/Masi/Dropbox/Physiology/images/}}% Animations cannot be included here% \addmediapath{ {/Users/Masi/Dropbox/Physiology/animations/} }\usepackage{newunicodechar}\usepackage{multirow}\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}\usepackage{color}\usepackage{hyperref}\usepackage{media9} % animations swf\usepackage{Tabbing}\usepackage{doi, natbib}\hypersetup{colorlinks=true,linkcolor=blue,citecolor=blue,allcolors=blue}\usepackage[affil-it]{authblk}\usepackage{import}\usepackage{color}\usepackage[normalem]{ulem}\usepackage{titling} % Two titles in one document\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}%%%%%%%%%%%%%%%%%%%%%%%%%%% Question and Answer %%%%%%%%%%%%%%%%%\usepackage[framemethod=tikz]{mdframed}\mdfdefinestyle{ans}{ linecolor=cyan, backgroundcolor=yellow!20, frametitlebackgroundcolor=green!40, frametitlerule=true}\newcounter{question}[section]%\setcounter{question}{0}\newenvironment{question}[1]{%\refstepcounter{question}% \begin{mdframed}[style=ans,frametitle={Question: #1}]}{% \end{mdframed}%}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Smaller things\newtheorem{case}{Case logic}\mdfdefinestyle{que}{ linecolor=cyan, backgroundcolor=cyan!20,}\surroundwithmdframed[style=que]{case}\newtheorem{sidenote}{Sidenote}\mdfdefinestyle{que}{ linecolor=cyan, backgroundcolor=cyan!20,}\surroundwithmdframed[style=que]{sidenote}\newtheorem{citation}{Citation}\mdfdefinestyle{que}{ linecolor=cyan, backgroundcolor=yellow!20,}\surroundwithmdframed[style=que]{citation}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition}\newenvironment{definition}[1][Definition]{\begin{trivlist}\item[\hskip \labelsep {\bfseries #1}] \emph}{\end{trivlist}}\providecommand{\keywords}[1]{\textbf{Keywords:} #1}%%%%%%%%%%%%%%%%%%%%%%%%% Counter Section %%%%%%%%%%%%%%%%%%%%%%\makeatletter \def\@part[#1]#2{% \ifnum \c@secnumdepth >\m@ne \refstepcounter{part}% \fi \addcontentsline{toc}{part}{#1}% {\parindent \z@ \raggedright \interlinepenalty \@M \normalfont \LARGE \bfseries #2% \markboth{}{}\par}% \nobreak \vskip 3ex \@afterheading}\@addtoreset{section}{part} \makeatother%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% How can you echo such a big content in Bash file well? | Use a here document : cat <<'EOF'Data...EOF Note: it's better to quote the heredoc word ( EOF ) as above to avoid expansion if the data contains something like $foo or backslashes, unless you want expansion of course. Examples: $ cat <<EOF$SHLVL \\EOF gives something like: 3 \ while $ cat <<'EOF'$SHLVL \\EOF gives: $SHLVL \\ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
159,449 | I have the following question about UNIX: What else is needed when the top line goes #!\bin\awk -f to make it run as a command? I am thinking whatever the name of the script is, it needs to be given permission using chmod to make it run. | The first line with the "#!" will be a full pathname to the program to be executed. Pathnames in Linux have the forward slash between directories. i.e.: #!/bin/awk -f or #!/bin/bash or #!/usr/bin/perl The first script would be run using awk , the second using bash , the third using perl . You'll also have to make the file executable with: $ chmod +x myscript.sh | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159449",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84162/"
]
} |
159,454 | Some applications allow you to pass a password as an argument. For example: mysql --user=user_name --password=your_password db_name Is it safe? Besides the fact that the typed password would be saved in bash history, someone can type the w command at the appropriate moment and see the full command line of the process (including the password). It's quite surprising to me that every user can see what command I'm currently executing. | The command line arguments of every process in the system are considered "public". Not just the w command, but ps and top and many other commands access that information as a matter of course. Indeed no special privileges are required to get that information. On Linux, you can read the command line of another process, even a process belonging to another user, by reading /proc/<pid>/cmdline . This is not a flaw or unsafe behaviour on the part of w or top or ps (or cat ). Rather, the onus is on the side of not passing sensitive information on command lines on multi-user systems, ever. Most utilities that have the ability to accept passwords on the command line document that it's not recommended to do it. For example, from mysql 's manpage: Specifying a password on the command line should be considered insecure. See Section 5.3.2.2, "End-User Guidelines for Password Security". You can use an option file to avoid giving the password on the command line. By the way, passing passwords or sensitive data in environment variables is less blatantly unsafe, but is also actually unsafe on most systems. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28115/"
]
} |
159,462 | I know that all of them are unit files, but I can't understand the special meaning of them. I think that targets are similar to daemons and sockets are the same as socket (IP + port) but also with inode numbers. Could anyone please explain them in simple words? | Service units: A unit configuration file whose name ends in .service encodesinformation about a process controlled and supervised by systemd. — systemd.service(5) Systemd service units are the units that actually execute and keep track of programs and daemons, and dependencies are used to make sure that services are started in the right order. They are the most commonly used type of units. Socket units: A unit configuration file whose name ends in ".socket" encodesinformation about an IPC or network socket or a file system FIFOcontrolled and supervised by systemd, for socket-based activation. — systemd.socket(5) Socket units on the other hand don't actually start daemons on their own. Instead, they just sit there and listen on an IP address and a port, or a UNIX domain socket, and when something connects to it, the daemon that the socket is for is started and the connection is handed to it. This is useful for making sure that big daemons that take up a lot of resources but are rarely used aren't running and taking up resources all the time, but instead they are only started when needed. Target units: A unit configuration file whose name ends in ".target" encodesinformation about a target unit of systemd, which is used for groupingunits and as well-known synchronization points during start-up. — systemd.target(5) Targets are used for grouping and ordering units. They are somewhat of a rough equivalent to runlevels in that at different targets, different services, sockets, and other units are started. Unlike runlevels, they are much more free-form and you can easily make your own targets for ordering units, and targets have dependencies among themselves. For instance, multi-user.target is what most daemons are grouped under, and it requires basic.target to be activated, which means that all services grouped under basic.target will be started before the ones in multi-user.target . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72520/"
]
} |
159,489 | When you want to run multiple commands you can use ; , && and | , like this: killall Finder; killall SystemUIServer , cd ~/Desktop/ && rm Caches Or: man grep | man cat for example. But, is there a difference between | , ; and && ? If so, what is the difference? | ; : commands separated by a ; are executed sequentially. The shell waits for each command to terminate in turn. && : command after && is executed if, and only if, command before && returns an exit status of zero. You can think of it as an AND operator. | : a pipe. In the expression command1 | command2 , the standard output of command1 is connected via a pipe to the standard input of command2. There are more similar control operators, worth mentioning: || : command after || is executed if, and only if, command before || returns a non-zero exit status. You can think of it as an OR operator. Please note that | and || are completely different animals. & : the shell executes the command terminated by & in the background, does not wait for the command to finish and immediately returns exit code 0. Once again, & has nothing to do with && . |& : a shorthand for 2>&1 | i.e. both standard output and standard error of command1 are connected to command2's standard input through the pipe. Additionally, if you use zsh then you can also start a command with &| or &! . In this case the job is immediately disowned; after startup it does not have a place in the job table. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/159489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
159,509 | If I do the following commands: $ cat picture.jpg > copy1.jpg and $ cat -v picture.jpg > copy2.jpg copy1.jpg is a perfect copy of picture.jpg , but copy2.jpg is a lot bigger than picture.jpg . I assume this is because copy2.jpg has had each of what cat thought were its line endings replaced by a ^M , and each ^M is bigger in size than a line ending. Is this correct? If I then do cat copy2.jpg , I find that there are no instances of ^M in copy2.jpg . What's going on here? And can cat be relied upon for joining files perfectly using > , if its output can be different from its input? | It's not just ^M . Every byte with a non-printable character (whatever that means in your current locale) will be expanded to a multiple-byte printable equivalent under cat -v . If you're using cat to join files, you need to avoid every option that modifies the output: -b and -n (number lines), -E (mark line endings with $ ), -s (suppress repeated empty lines), and -v and -T (display non-printable characters using printable characters). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
159,513 | I often see tutorials online that connect various commands with different symbols. For example: command1 | command2command1 & command2command1 || command2 command1 && command2 Others seem to be connecting commands to files: command1 > file1command1 >> file1 What are these things? What are they called? What do they do? Are there more of them? Meta thread about this question. | These are called shell operators and yes, there are more of them. I will give a brief overview of the most common among the two major classes, control operators and redirection operators , and how they work with respect to the bash shell. A. Control operators POSIX definition In the shell command language, a token that performs a control function. It is one of the following symbols: & && ( ) ; ;; <newline> | || And |& in bash. A ! is not a control operator but a Reserved Word . It becomes a logical NOT [negation operator] inside Arithmetic Expressions and inside test constructs (while still requiring a space delimiter). A.1 List terminators ; : Will run one command after another has finished, irrespective of the outcome of the first. command1 ; command2 First command1 is run, in the foreground, and once it has finished, command2 will be run. A newline that isn't in a string literal or after certain keywords is not equivalent to the semicolon operator. A list of ; delimited simple commands is still a list - as in the shell's parser must still continue to read in the simple commands that follow a ; delimited simple command before executing, whereas a newline can delimit an entire command list - or list of lists. The difference is subtle, but complicated: given the shell has no previous imperative for reading in data following a newline, the newline marks a point where the shell can begin to evaluate the simple commands it has already read in, whereas a ; semi-colon does not. & : This will run a command in the background, allowing you to continue working in the same shell. command1 & command2 Here, command1 is launched in the background and command2 starts running in the foreground immediately, without waiting for command1 to exit. A newline after command1 is optional. A.2 Logical operators && : Used to build AND lists, it allows you to run one command only if another exited successfully. command1 && command2 Here, command2 will run after command1 has finished and only if command1 was successful (if its exit code was 0). Both commands are run in the foreground. This command can also be written if command1 then command2 else false fi or simply if command1; then command2; fi if the return status is ignored. || : Used to build OR lists, it allows you to run one command only if another exited unsuccessfully. command1 || command2 Here, command2 will only run if command1 failed (if it returned an exit status other than 0). Both commands are run in the foreground. This command can also be written if command1 then true else command2 fi or in a shorter way if ! command1; then command2; fi . Note that && and || are left-associative; see Precedence of the shell logical operators &&, || for more information. ! : This is a reserved word which acts as the “not” operator (but must have a delimiter), used to negate the return status of a command — return 0 if the command returns a nonzero status, return 1 if it returns the status 0. Also a logical NOT for the test utility. ! command1 [ !
a = a ] And a true NOT operator inside Arithmetic Expressions: $ echo $((!0)) $((!23)) 1 0 A.3 Pipe operator | : The pipe operator, it passes the output of one command as input to another. A command built from the pipe operator is called a pipeline . command1 | command2 Any output printed by command1 is passed as input to command2 . |& : This is a shorthand for 2>&1 | in bash and zsh. It passes both standard output and standard error of one command as input to another. command1 |& command2 A.4 Other list punctuation ;; is used solely to mark the end of a case statement . Ksh, bash and zsh also support ;& to fall through to the next case and ;;& (not in ATT ksh) to go on and test subsequent cases. ( and ) are used to group commands and launch them in a subshell. { and } also group commands, but do not launch them in a subshell. See this answer for a discussion of the various types of parentheses, brackets and braces in shell syntax. B. Redirection Operators POSIX definition of Redirection Operator In the shell command language, a token that performs a redirection function. It is one of the following symbols: < > >| << >> <& >& <<- <> These allow you to control the input and output of your commands. They can appear anywhere within a simple command or may follow a command. Redirections are processed in the order they appear, from left to right. < : Gives input to a command. command < file.txt The above will execute command on the contents of file.txt . <> : same as above, but the file is open in read+write mode instead of read-only : command <> file.txt If the file doesn't exist, it will be created. That operator is rarely used because commands generally only read from their stdin, though it can come in handy in a number of specific situations . > : Directs the output of a command into a file. command > out.txt The above will save the output of command as out.txt . If the file exists, its contents will be overwritten and if it does not exist it will be created. This operator is also often used to choose whether something should be printed to standard error or standard output : command >out.txt 2>error.txt In the example above, > will redirect standard output and 2> redirects standard error. Output can also be redirected using 1> but, since this is the default, the 1 is usually omitted and it's written simply as > . So, to run command on file.txt and save its output in out.txt and any error messages in error.txt you would run: command < file.txt > out.txt 2> error.txt >| : Does the same as > , but will overwrite the target, even if the shell has been configured to refuse overwriting (with set -C or set -o noclobber ). command >| out.txt If out.txt exists, the output of command will replace its content. If it does not exist it will be created. >> : Does the same as > , except that if the target file exists, the new data are appended. command >> out.txt If out.txt exists, the output of command will be appended to it, after whatever is already in it. If it does not exist it will be created. >& : (per POSIX spec) when surrounded by digits ( 1>&2 ) or - on the right side ( 1>&- ) either redirects only one file descriptor or closes it ( >&- ). A >& followed by a file descriptor number is a portable way to redirect a file descriptor, and >&- is a portable way to close a file descriptor. If the right side of this redirection is a file, please read the next entry. >& , &> , >>& and &>> : (read above also) Redirect both standard error and standard output, replacing or appending, respectively.
command &> out.txt Both standard error and standard output of command will be saved in out.txt , overwriting its contents or creating it if it doesn't exist. command &>> out.txt As above, except that if out.txt exists, the output and error of command will be appended to it. The &> variant originates in bash , while the >& variant comes from csh (decades earlier). They both conflict with other POSIX shell operators and should not be used in portable sh scripts. << : A here document. It is often used to print multi-line strings. command << WORD Text WORD Here, command will take everything until it finds the next occurrence of WORD , Text in the example above, as input . While WORD is often EoF or variations thereof, it can be any alphanumeric (and not only) string you like. When any part of WORD is quoted or escaped, the text in the here document is treated literally and no expansions are performed (on variables for example). If it is unquoted, variables will be expanded. For more details, see the bash manual . If you want to pipe the output of command << WORD ... WORD directly into another command or commands, you have to put the pipe on the same line as << WORD , you can't put it after the terminating WORD or on the line following. For example: command << WORD | command2 | command3... Text WORD <<< : Here strings, similar to here documents, but intended for a single line. These exist only in the Unix port of rc (where it originated), zsh, some implementations of ksh, yash and bash. command <<< WORD Whatever is given as WORD is expanded and its value is passed as input to command . This is often used to pass the content of variables as input to a command. For example: $ foo="bar" $ sed 's/a/A/' <<< "$foo" bAr # as a short-cut for the standard: $ printf '%s\n' "$foo" | sed 's/a/A/' bAr # or sed 's/a/A/' << EOF $foo EOF A few other operators ( >&- , x>&y x<&y ) can be used to close or duplicate file descriptors. For details on them, please see the relevant section of your shell's manual ( here for instance for bash). That only covers the most common operators of Bourne-like shells. Some shells have a few additional redirection operators of their own. Ksh, bash and zsh also have constructs <(…) , >(…) and =(…) (that latter one in zsh only). These are not redirections, but process substitution . | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/159513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
159,517 | I have two files with thousands of lines. I want to get the ratio of differences between them in lines/bytes using diff , vimdiff or other commands, regardless of the specific differences. | There's a tool called diffstat that sounds like what you're looking for. $ diff <file1> <file2> | diffstat Example $ diff afpuri.c afpuri1.c | diffstat unknown | 53 ++++++++++++++++++++--------------------------------- 1 file changed, 20 insertions(+), 33 deletions(-) This can be used for diff output which includes multiple files in a tree as well. References How to get diff to report summary of new, changed and deleted lines | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86795/"
]
} |
159,530 | I never really understood why a window system must have a server. Why do desktop environments, display managers and window managers need xorg-server? Is it only to have a layer of abstraction on top of the graphics card? Why do window systems employ a client-server model? Wouldn't inter-process communication via named pipes be simpler? | I think you've already noticed that some sort of "server" is needed. Each client (desktop environment, window manager, or windowed program) needs to share the display with all of the others, and they need to be able to display things without knowing the details of the hardware, or knowing who else is using the display. So the X11 server provides the layer of abstraction and sharing that you mentioned, by providing an IPC interface. X11 could probably be made to run over named pipes, but there are two big things that named pipes can't do. Named pipes only communicate in one direction. If two processes start putting data into the "sending" end of a named pipe, the data will get intermingled. In fact, most X clients talk to the server using a "new and improved" named pipe called a UNIX-domain socket. It's a lot like a named pipe except that it lets processes talk in both directions, and it keeps track of who said what. These are the same sorts of things that the network has to do, and so UNIX-domain sockets use the same programming interface as the TCP/IP sockets that provide network communications. But from there, it's really easy to say "What if I ran the server on a different host than the client?" Just use a TCP socket instead of the UNIX socket, and voila: a remote-desktop protocol that predates Windows RDP by decades. I can ssh to four different remote hosts and run synaptic (graphical package manager) on each of them, and all four windows appear on my local computer's display. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84985/"
]
} |
159,531 | I am trying to compile nginx with a module: https://github.com/leev/ngx_http_geoip2_module . Before the nginx compilation, this library: https://github.com/maxmind/libmaxminddb needs to be installed. I followed the instructions ( https://github.com/maxmind/libmaxminddb/blob/master/README.md#installing-from-a-tarball ), compiled and installed the library. After the installation, ldconfig -p | grep maxminddb gives: libmaxminddb.so.0 (libc6,x86-64) => /usr/local/lib/libmaxminddb.so.0libmaxminddb.so (libc6,x86-64) => /usr/local/lib/libmaxminddb.so However, when I configure nginx with ngx_http_geoip2_module, it complains during configure: adding module in /home/cilium/ngx_http_geoip2_modulechecking for MaxmindDB library ... not found./configure: error: the geoip2 module requires the maxminddb library. which is exactly the library I've already installed. This error seems to come from the config file of ngx_http_geoip2_module : ngx_feature="MaxmindDB library"ngx_feature_name=ngx_feature_run=nongx_feature_incs="#include <maxminddb.h>"ngx_feature_libs=-lmaxminddb. auto/featureif [ $ngx_found = yes ]; then ngx_addon_name=ngx_http_geoip2_module HTTP_MODULES="$HTTP_MODULES ngx_http_geoip2_module" NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_geoip2_module.c" CORE_LIBS="$CORE_LIBS -lmaxminddb"else cat << END$0: error: the geoip2 module requires the maxminddb library.END exit 1fi Does anyone know what may have gone wrong here? UPDATE: some relevant output by sh -x ./configure .. : + echo adding module in /home/cilium/ngx_http_geoip2_moduleadding module in /home/cilium/ngx_http_geoip2_module+ test -f /home/cilium/ngx_http_geoip2_module/config+ . /home/cilium/ngx_http_geoip2_module/config+ ngx_feature=MaxmindDB library+ ngx_feature_name=+ ngx_feature_run=no+ ngx_feature_incs=#include <maxminddb.h>+ ngx_feature_libs=-lmaxminddb+ . 
auto/feature+ echo checking for MaxmindDB library ...\cchecking for MaxmindDB library ...+ cat+ ngx_found=no+ test -n...+ [ -x objs/autotest ]+ echo not found not found+ echo ----------+ cat objs/autotest.c+ echo ----------+ echo cc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/chromium/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/google-sparsehash/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/x64/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/protobuf/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/re2/src -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/out/Debug/obj/gen -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/out/Debug/obj/gen/protoc_out/instaweb -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/apr/src/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/aprutil/src/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/apr/gen/arch/linux/x64/include -I /home/cilium/ngx_pagespeed-release-1.9.32.1-beta/psol/include/third_party/aprutil/gen/arch/linux/x64/include -o objs/autotest objs/autotest.c -Wl,-Bsymbolic-functions -Wl,-z,relro -lmaxminddb+ echo ----------+ rm -rf objs/autotest.c+ [ no = yes ]+ cat./configure: error: the geoip2 module requires the maxminddb library.+ exit 1 | I think you've already noticed that some sort of "server" is needed. Each client (desktop environment, window manager, or windowed program) needs to share the display with all of the others, and they need to be able to display things without knowing the details of the hardware, or knowing who else is using the display. So the X11 server provides the layer of abstraction and sharing that you mentioned, by providing an IPC interface. X11 could probably be made to run over named pipes, but there are a two big things that named pipes can't do. Named pipes only communicate in one direction. If two processes start putting data into the "sending" end of a named pipe, the data will get intermingled. In fact, most X clients talk to the server using a "new and improved" named pipe called a UNIX-domain socket. It's a lot like a named pipe except that it lets processes talk in both directions, and it keeps track of who said what. These are the same sorts of things that the network has to do, and so UNIX-domain sockets use the same programming interface as the TCP/IP sockets that provide network communications. But from there, it's really easy to say "What if I ran the server on a different host than the client?" Just use a TCP socket instead of the UNIX socket, and voila: a remote-desktop protocol that predates Windows RDP by decades. I can ssh to four different remote hosts and run synaptic (graphical package manager) on each of them, and all four windows appear on my local computer's display. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37082/"
]
} |
159,540 | I'm trying to install php-5.3 on Arch Linux, but bison is too new, so I built older version of bison from sources. And it appears it installs itself into /usr/local by default. (Is this some kind of convention?) So I'm now wondering if I can install more than one version of bison i.e., side-by-side with the default system one and the one I just installed. These kind of things are likely to be rarely needed. I'm just curious about it. Is this a hard thing to do? How should I go about it? | I think you've already noticed that some sort of "server" is needed. Each client (desktop environment, window manager, or windowed program) needs to share the display with all of the others, and they need to be able to display things without knowing the details of the hardware, or knowing who else is using the display. So the X11 server provides the layer of abstraction and sharing that you mentioned, by providing an IPC interface. X11 could probably be made to run over named pipes, but there are a two big things that named pipes can't do. Named pipes only communicate in one direction. If two processes start putting data into the "sending" end of a named pipe, the data will get intermingled. In fact, most X clients talk to the server using a "new and improved" named pipe called a UNIX-domain socket. It's a lot like a named pipe except that it lets processes talk in both directions, and it keeps track of who said what. These are the same sorts of things that the network has to do, and so UNIX-domain sockets use the same programming interface as the TCP/IP sockets that provide network communications. But from there, it's really easy to say "What if I ran the server on a different host than the client?" Just use a TCP socket instead of the UNIX socket, and voila: a remote-desktop protocol that predates Windows RDP by decades. I can ssh to four different remote hosts and run synaptic (graphical package manager) on each of them, and all four windows appear on my local computer's display. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29867/"
]
} |
159,543 | Windows has a super-administrator account that has not only elevated privileges to perform protected system functions, but unfettered access to anything on the computer, regardless of user ownership. Does Linux have an equivalent? My understanding is that in terms of access to all user accounts, root is just another user (it would defeat user security if any user could become any other user simply by doing it via root). | In Linux/Unix the user with user id 0 is such a super administrator. The user is usually called "root", but the magic is really in the id, not the name. That user, notably, is not bound to local file access permissions and can read and write any file. That user also has the ability to change to any other user without needing a password. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85417/"
]
} |
159,557 | In a shell script, how do I easily and non-invasively test for write access to a file without actually attempting to modify the file? I could parse the output of stat , but that seems really complex, and perhaps brittle, though I'm not sure how much stat output differs across implementations and time. I could append to the end of the file and see if that succeeds, but that's potentially dangerous, for two reasons I can think of: I now have to remove the addition, and in case some other process writes to the file, this immediately becomes non-trivial as my line is no longer the last one. Any process reading the file may have arbitrary requirements on the contents of that file, and I may just have broken that application. | Just use the -w flag of the test utility: [ -w /path/to/file ] && echo "writeable" || echo "write permission denied" Note that if you're going to write to the file later, it's still possible that you won't be able to write to it. The file may have moved, the permissions may have changed, etc. It can also happen that -w detects write permissions but some other factor intervenes to make the file not writable . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18863/"
]
} |
159,569 | I've been given this one-liner to know how much memory my processes consume. $ ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}' Now I'd like to create an alias for that command, but I have an issue escaping ' characters: $ alias mm=ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}'bash: alias: -u: not foundbash: alias: myuser: not foundbash: alias: -o: not foundbash: alias: pid,rss,command: not foundTotal 0 MB I tried to escape the single quotes, but still it doesn't work. $ alias mm='ps -u $USER -o pid,rss,command | \ awk \'{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}\''> Can you help me understand how to create this alias? | To make this an alias, which is possible, you need to use double quotes around the entire value for the alias. You'll also need to escape a few things within the alias as a result, and you need to escape any of the field arguments to awk since these would otherwise be expanded by Bash when you're setting the alias. This worked for me: $ alias mm="ps -u $USER -o pid,rss,command | \ awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"}'" In the above I've done the following: Double quotes around alias' value alias mm="ps -u ...." Escaped awk's double quotes awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"} Escaped awk's fields awk '{print \$0}{sum+=\$2} END Would I use this? Probably not, I'd switch this to a Bash function instead, since it'll be easier to maintain and understand what's going on, but here's the alias if you still want it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8115/"
]
} |
159,594 | I have recently come across a file whose name begins with the character '♫'. I wanted to copy this file, feed it into ffmpeg , and reference it in various other ways in the terminal. I usually auto-complete weird filenames but this fails as I cannot even type the first letter. I don't want to switch to the mouse to perform a copy-paste maneuver. I don't want to memorize a bunch of codes for possible scenarios. My ad hoc solution was to switch into vim , paste !ls and copy the character in question, then quit and paste it into the terminal. This worked but is quite horrific. Is there an easier way to deal with such scenarios? NOTE: I am using the fish shell if it changes things. | If the first character of the file name is printable but neither alphanumeric nor whitespace, you can use the [[:punct:]] glob operator: $ ls *.txtf1.txt f2.txt ♫abc.txt$ ls [[:punct:]]*.txt♫abc.txt | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159594",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83019/"
]
} |
159,613 | I realize this is probably a pretty basic question, but I can't seem to find an answer that makes sense to me. I have experience configuring networks for IPv4, but IPv6 is a whole other beast. I'm trying to wrap my head around it. I'm trying to configure my laptop to use IPv6. I'm going to have to start dealing with IPv6 at work, so I thought I'd play around locally. I have a few questions about address configuration. To start, based on this site my Linux kernel supports and is configured for IPv6. $ [ -f /proc/net/if_inet6 ] && echo 'IPv6 ready system!' || echo 'No IPv6 support found! Compile the kernel!!'IPv6 ready system!$ lsmod | grep -qw ipv6 && echo "IPv6 kernel driver loaded and configured." || echo "IPv6 not configured and/or driver loaded on the system."IPv6 kernel driver loaded and configured. I can successfully ping myself using ping6 -I wlan0 [ip6addr] . My current IP is a link local address, and from what I understand I need a Global scope to access the outside world (like ipv6.google.com). Can I assign my own Global scope IP, or do I need to let network discovery/DHCPv6 take care of that for me? If it's the latter, how can I configure my system to do this? If it's the former, then I assume I can follow these instructions . Much like configuring IPv4. Is there any rhyme or reason to how I should generate the address other than the prefix being set to 20XX? I also realize that my wireless router needs to be configured for IPv6, but that's not part of this question. | Debian, Ubuntu, and other Linux distributions have been IPv6 ready for several releases. You can't assign your own global IPv6 address, just as you can't assign your own global IPv4 address. You need to get it assigned by your ISP, or IPv6 provider. If you are connected to an IPv6 network, your computer may auto-configure using data from a radvd announcement. IPv6 is designed for auto-configuration. You can see if you are configured by listing your IPv6 addresses (you may have a few). Try the command ip -6 addr show . Addresses starting fe80: are link local addresses . If you have an address starting with 2xxx: , then you have a global IPv6 address. There are a number of ways to get a global IPv6 address (and network block): If your ISP is IPv6 ready, you should be able to get an address and at least one /64 network block from them. You can use 6to4 networking to get an IPv6 network based on your IPv4 address. This will begin with 2002: followed by your IPv4 address in hex. It is possible to configure radvd to derive your IPv6 network block from your IPv4 address. You can use 6in4 to tunnel your IPv6 network to a tunnel broker. In this case you would get your IPv6 address and network blocks from the tunnel broker. This is your best option if your ISP is not IPv6 ready. If you don't get your address from your ISP, then your addresses will change when your ISP becomes IPv6 ready. It is possible to do this transition smoothly using multiple IPv6 addresses and some routing rules. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79977/"
]
} |
159,614 | On a multi-user system, what protects against any user accessing any other user's files via root? As context, the question is based on my understanding as follows: There are two commands related to root privileges, sudo and su . With sudo , you don't become another user (including root). sudo has a pre-defined list of approved commands that it executes on your behalf. Since you are not becoming root or another user, you just authenticate yourself with your own password. With su , you actually become root or another user. If you want to become user Bob, you need Bob's password. To become root, you need the root password (which would be defined on a multi-user system). ref's: howtogeek.com : su switches you to the root user account and requires the root account's password. sudo runs a single command with root privileges – it doesn't switch to the root user. and If you execute the su bob command, you'll be prompted to enter Bob's password and the shell will switch to Bob's user account; Similar description at computerhope.com tecmint.com : ‘sudo‘ is a root binary setuid, which executes root commands on behalf of authorized users If you become root, you have access to everything. Anyone not authorized to access another user's account would not be given the root password and would not have sudo definitions allowing it. This all makes sense until you look at something like this link , which is a tutorial for using sudo -V and then sudo su - to become root using only your own password. If any user can become root without the root password, what mechanism protects user files from unauthorized access? | The major difference between sudo and su is the mechanism used to authenticate. With su the user must know the root password (which should be a closely guarded secret), while with sudo the user uses his/her own password. In order to stop all users causing mayhem, the privileges granted by the sudo command can, fortunately, be configured using the /etc/sudoers file. Both commands run a command as another user, quite often root . sudo su - works in the example you gave because the user (or a group where the user is a member) is configured in the /etc/sudoers file. That is, they are allowed to use sudo . Armed with this, they use sudo to temporarily gain root privileges (which is the default when no username is provided) and as root start another shell ( su - ). They now have root access without knowing root 's password. Conversely, if you don't allow the user to use sudo then they won't be able to sudo su - . Distros generally have a group (often called wheel ) whose members are allowed to use sudo to run all commands. Removing them from this group will mean that they cannot use sudo at all by default. The line in /etc/sudoers that does this is: ## Allows people in group wheel to run all commands%wheel ALL=(ALL) ALL While removing users from this group would make your system more secure, it would also result in you (or other system administrators) being required to carry out more administrative tasks on the system on behalf of your users. A more sensible compromise would be to configure sudo to give you finer-grained control of who is allowed to use sudo and who isn't, along with which commands they are allowed to use (instead of the default of all commands).
For example, ## Allows members of the users group to mount and unmount the## cdrom as root%users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom (only useful with the previous %wheel line commented out, or no users in the wheel group). Presumably, distros don't come with this finer grained configuration as standard as it's impossible to forecast what the admin's requirements are for his/her users and system. Bottom line is - learn the details of sudo and you can stop sudo su - while allowing other commands that don't give the user root shell access or access to commands that can change other users' files. You should give serious consideration to who you allow to use sudo and to what level. WARNING: Always use the visudo command to edit the sudoers file as it checks your edits for you and tries to save you from the embarrassing situation where a misconfigured file (due to a syntax error) stops you from using sudo to edit any errors. This is especially true on Debian/Ubuntu and variants where the root account is disabled by default. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85417/"
]
} |
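A small, hedged addendum to the sudo/su discussion above: before touching /etc/sudoers it is worth checking what a user may already run, and visudo can validate the file without opening an editor. Both commands below are standard sudo tooling:

    sudo -l          # list the commands the invoking user may run via sudo
    visudo -c        # syntax-check /etc/sudoers and included files (run as root)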
159,626 | I can get all files on the bash patches site by downloading them in a sequence: SEQ=$(seq -f "%03g" 1 30)for i in $SEQ; do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i;done But then I would have to know the maximum number. Is there a possibility to just get the listing and extract all patchfiles for downloading? | You could use wget with recursive downloading: wget -nc -nd -nH -np -r -R '*.*' http://ftp.gnu.org/gnu/bash/bash-4.3-patches/ Explanation: -nc : no-clobber (don't overwrite existing files), probably not necessary. -nd : Don't create hierarchy of directories. -nH : Don't create directory based on hostname. Or you'd find everything downloaded to a directory called ftp.gnu.org . -np : Never ascend to the parent directory. -r : Download recursively. -R '*.*' : Reject everything with a . in its filename (skips things like index.html and so on). An accept list may also be used. The file is downloaded, but discarded . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
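As an alternative sketch to the recursive wget above: scrape the directory index for the patch names and fetch each one. The grep pattern is only a guess at the bash43-NNN naming scheme used on that mirror:

    base=http://ftp.gnu.org/gnu/bash/bash-4.3-patches/
    wget -qO- "$base" | grep -o 'bash43-[0-9]\{3\}' | sort -u |
    while read -r p; do wget -nc "$base$p"; done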
159,630 | Is there any command line tool for making icns files to use for OS X apps? I am aware of icontool but it doesn't do what I want. It converts iconset files into an icns file. What I want it to do is: Copy a 1024x1024 tiff file, then convert it into all the different sizes such as 512x512@2x , 128x128 or 16x16@2x . I can do this manually but it can be a pain, especially when doing it multiple times. | You could use wget with recursive downloading: wget -nc -nd -nH -np -r -R '*.*' http://ftp.gnu.org/gnu/bash/bash-4.3-patches/ Explanation: -nc : no-clobber (don't overwrite existing files), probably not necessary. -nd : Don't create hierarchy of directories. -nH : Don't create directory based on hostname. Or you'd find everything downloaded to a directory called ftp.gnu.org . -np : Never ascend to the parent directory. -r : Download recursively. -R '*.*' : Reject everything with a . in its filename (skips things like index.html and so on). An accept list may also be used. The file is downloaded, but discarded . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
159,641 | I wrote a pretty simple service: [Unit]Description=Service on interface %I[Service]Type=oneshotRemainAfterExit=yesExecStart=/usr/lib/project/my_script.sh start %I[Install]WantedBy=multi-user.target Which I start like systemctl start myservice@net0 , where net0 is a network interface. How can I restart the service every time the network interface is restarted? | You can have your systemd unit bind to ( BindsTo= ) and order itself after ( After= ) the .device unit for the corresponding network interface (which systemd loads automatically), like so: [Unit]Description=Service on interface %IBindsTo=sys-subsystem-net-devices-%i.deviceAfter=sys-subsystem-net-devices-%i.device[Service]Type=oneshotRemainAfterExit=yesExecStart=/usr/lib/project/my_script.sh start %I[Install]WantedBy=multi-user.target You'll probably want to formalize the unit's dependency on the network being up, if your custom unit expects the network interface to actually be online, by adding these in the [Unit] section: Wants=network-online.targetAfter=network-online.target NB: You can verify that .device units are loaded by doing systemctl list-units --type=device . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27359/"
]
} |
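A short usage sketch for the template unit above; net0 is just an example interface name. The systemd-escape call shows how the .device unit name in BindsTo= is derived, which matters for interface names containing characters that need escaping:

    systemctl enable myservice@net0.service    # start automatically once net0 exists
    systemctl start myservice@net0.service
    systemd-escape -p /sys/subsystem/net/devices/net0
    # prints: sys-subsystem-net-devices-net0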
159,665 | I want to to remove all color codes which look like '@n', '@R' etc, from a moderately large size collection of text files. So in a file called 'remove_cc', I wrote the following: sed -ie 's/@r//g' $1sed -ie 's/@g//g' $1sed -ie 's/@y//g' $1sed -ie 's/@b//g' $1sed -ie 's/@m//g' $1sed -ie 's/@c//g' $1sed -ie 's/@n//g' $1sed -ie 's/@R//g' $1sed -ie 's/@G//g' $1sed -ie 's/@Y//g' $1sed -ie 's/@B//g' $1sed -ie 's/@M//g' $1sed -ie 's/@C//g' $1sed -ie 's/@N//g' $1sed -ie 's/@W//g' $1sed -ie 's/@K//g' $1 Anyways, the script works fantastically if I use it like:./remove_cc file.txt But if I type ./remove_cc *.txt in a folder with many txt files that I want to run the script on, it fails to do anything. ./remove_cc * is equally ineffective. Is there a way to fix this so it works? | When you type ./remove_cc * , the shell changes it to ./remove_cc file1.txt file2.txt file3.txt etc , and runs your script that way. Your script is only looking at $1 , the first parameter (file1.txt). The most general way to do this is to loop over each parameter in turn. "$@" expands to the list of all the parameters, and we can use for to loop over them. for n in "$@" ; do sed -ie 's/@r//g' "$n" ...done In this particular case, since sed will take multiple filenames, you can simplify: sed -ie 's/@r//g' "$@" Note that paying attention to quotes is important when writing shell scripts. Without the quotes, for example, the script would not work on a file named My Text File.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86730/"
]
} |
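Tangentially to the looping fix above, the sixteen separate sed invocations can be collapsed into a single expression with a bracket expression — a sketch, using plain -i since the original's -ie is actually parsed by GNU sed as -i with an 'e' backup suffix:

    sed -i 's/@[rgybmcnRGYBMCNWK]//g' "$@"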
159,672 | I'm just trying to review basic terminal commands. Having said that, how do I create a text file using the terminal only? | You can't use a terminal to create a file. You can use an application running in a terminal. Just invoke any non-GUI editor ( emacs -nw , joe , nano , vi , vim , …). If you meant using the command line, then you are asking how to create a file using the shell. See What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? The basic way to create a file with the shell is with output redirection . For example, the following command creates a file called foo.txt containing the line Hello, world. echo 'Hello, world.' >foo.txt If you want to write multiple lines, here are a few possibilities. You can use printf . printf '%s\n' 'First line.' 'Second line.' 'Third line.' >foo.txt You can use a string literal containing newlines. echo 'First line.Second line.Third line.' >foo.txt or echo $'First line.\nSecond line.\nThird line.' >foo.txt Another possibility is to group commands. { echo 'First line.' echo 'Second line.' echo 'Third line.'} >foo.txt On the command line, you can do this more directly with cat . Redirect its output to the file and type the input line by line on cat 's standard input. Press Ctrl + D at the beginning of the line to indicate the end of the input. $ cat >foo.txt First line. Second line. Third line. Ctrl+D In a script you would use a here document to achieve the same effect: cat <<EOF >foo.txtFirst line.Second line.Third line.EOF If you just want to create an empty file, you can use the touch command: it creates the file if it doesn't exist, and just updates its last-modified date if it exists. touch foo.txt Equivalently: >>foo.txt i.e. open foo.txt for appending, but write 0 bytes to it — this creates the file but doesn't modify it. Unlike touch , this doesn't update the file's last-modified date if it already existed. To create an empty file, and remove the file's content if the file already existed, you can use >foo.txt | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68557/"
]
} |
159,695 | This one-liner removes duplicate lines from text input without pre-sorting. For example: $ cat >fqwewr$ awk '!a[$0]++' <fqwer$ The original code I have found on the internets read: awk '!_[$0]++' This was even more perplexing to me as I took _ to have a special meaning in awk, like in Perl, but it turned out to be just a name of an array. Now, I understand the logic behind the one-liner: each input line is used as a key in a hash array, thus, upon completion, the hash contains unique lines in the order of arrival. What I would like to learn is how exactly this notation is interpreted by awk. E.g. what the bang sign ( ! ) means and the other elements of this code snippet. How does it work? | Here is an "intuitive" answer; for a more in-depth explanation of awk's mechanism see @Cuonglm's answer. In this case, !a[$0]++ , the post-increment ++ can be set aside for a moment, it does not change the value of the expression. So, look at only !a[$0] . Here: a[$0] uses the current line $0 as key to the array a , taking the value stored there. If this particular key was never referenced before, a[$0] evaluates to the empty string. !a[$0] The ! negates the value from before. If it was empty or zero (false), we now have a true result. If it was non-zero (true), we have a false result. If the whole expression evaluated to true, meaning that a[$0] was not set to begin with, the whole line is printed as the default action. Also, regardless of the old value, the post-increment operator adds one to a[$0] , so the next time the same value in the array is accessed, it will be positive and the whole condition will fail. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27985/"
]
} |
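To see the one-liner discussed above in action, here is a tiny self-contained demo; the input words are arbitrary:

    printf '%s\n' q w e w r | awk '!a[$0]++'
    # q
    # w
    # e
    # r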
159,696 | I really tried searching but could not find anything (it's hard to know what exactly to search for). I know how to do this with sed : print from current line until the line that matches SOMETHING: sed -n '/1/,/SOMETHING/p' But how do I do the same thing, but print from current line until the line that does not match SOMETHING? e.g. pipe this into sed : blah blah SOMETHING blah blahblah blah SOMETHINGblahblahblahSOMETHING blah blahNO MATCH HERE Then I want to filter out and print only the first 3 lines (but "3" can vary). | Here is an "intuitive" answer; for a more in-depth explanation of awk's mechanism see @Cuonglm's answer. In this case, !a[$0]++ , the post-increment ++ can be set aside for a moment, it does not change the value of the expression. So, look at only !a[$0] . Here: a[$0] uses the current line $0 as key to the array a , taking the value stored there. If this particular key was never referenced before, a[$0] evaluates to the empty string. !a[$0] The ! negates the value from before. If it was empty or zero (false), we now have a true result. If it was non-zero (true), we have a false result. If the whole expression evaluated to true, meaning that a[$0] was not set to begin with, the whole line is printed as the default action. Also, regardless of the old value, the post-increment operator adds one to a[$0] , so the next time the same value in the array is accessed, it will be positive and the whole condition will fail. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86919/"
]
} |
159,735 | I have a Raspberry Pi running Raspbian (Debian Wheezy). I entered set in a terminal and was surprised by the long list. This seems to be almost entirely git functions — ~3700 lines of them. __git_all_commands=__git_diff_common_options=$'--stat --numstat --shortstat --summary\n\t\t\t--patch-with-stat --name-only ... My question is, how did they get there and why? I have occasionally used git to get packages. I have checked all the usual suspects /etc/profile , /etc/bash.bashrc , .bashrc , .profile I found a script /etc/bash_completion.d/git (I had never heard of bash_completion before). I have to do some more study to work out what this does, and exactly where is is called. I still need to figure out WHY I would want to run this in every login shell when I only use git once or twice a year. (The Raspberry Pi is not exactly over endowed with RAM). This doesn't seem to happen on my Mac. | These functions are part of the shell's completion support for git . They are maintained as part of the Git software. Debian (which Raspbian is based on) distributes the bash completion setup in the git package. The functions are located in /etc/bash_completion.d/git , in the same directory as other command completion support for bash. All the files in /etc/bash_completion.d are loaded as part of setting up bash's programmable completion, in /etc/bash_completion . Debian's default .bashrc loads /etc/bash_completion , you can edit it out if you don't want any command-specific completion. If you never use git, remove the git package. If you have the git package installed, then presumably you do sometimes run the git command and thus would want to have good completion for it. “I only use git rarely and I want to save a few kilobytes of RAM” is too fine a distinction even for Debian. If you want to skip that completion file but use others, you can divert the file to a name that causes it to be skipped by /etc/bash_completion . Diverting a file is a way to tell the package manager to apply updates and removals to a file located in a different place. dpkg-divert --add --local --rename --divert /etc/bash_completion.d/git.dpkg-diverted /etc/bash_completion.d/git | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47111/"
]
} |
159,781 | On this video at 1.09 I see a CentOS 7 Live CD which allows starting CentOS directly from CD without installing it. How can I make such a disk? I've downloaded the .iso file from the official web site and burned it, but it only allows me to "Install CentOS"; I can't run CentOS without installing. | Pick a mirror in the mirrors page . It will show you an FTP folder. Reading the README.txt in that folder, we see: CentOS-7.0-1406-x86_64-GnomeLive.iso CentOS-7.0-1406-x86_64-KdeLive.iso These images are Live images of CentOS 7. Depending on the name they use the respective display manager. They are designed for testing purposes and exploring the CentOS 7 environment. They will not modify the content of your hard disk, unless you choose to install CentOS 7 from within the Live environment. Please be advised that you can not change the set of installed packages in this case. This needs to be done within the installed system using 'yum'. CentOS-7.0-1406-x86_64-livecd.iso This is like the GnomeLive image mentioned above, but without packages such as libreoffice. This image is small enough to be burned on a CD. So pick the iso or torrent file you want accordingly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86978/"
]
} |
159,826 | If I wanted to know who is logged in since when and what are the processes currently running under his control, how can I do that in systemd? | You don't need systemd for that … but there's a systemd way of doing it as well , as long as you are running the systemd-logind daemon, or something that provides the same API. First obtain a list of sessions: $ systemd-loginctl list-sessions SESSION UID USER SEAT c89 1000 jdebp seat0 1 sessions listed. Then for each session that you are interested in show its status: $ systemd-loginctl session-status c89c89 - jdebp (1000) Since: Tue, 07 Oct 2014 20:16:20 +0100; 15s ago Leader: 24453 (3) Seat: seat0; vc6 TTY: /dev/tty6 Service: login; type tty; class user Active: yes CGroup: /user/jdebp/c89 ├ 24453 login ├ 25661 -zsh └ 25866 systemd-loginctl session-status c89 The systemd people have renamed them to loginctl and logind in more recent versions. Further reading loginctl . freedesktop.org. logind API . freedesktop.org. GSOC 2014: systemd replacement utilities (systembsd) . OpenBSD Journal . 2014-09-12. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72520/"
]
} |
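Building on the answer above, a small hedged loop that prints the status of every current session (flag and verb names as in modern loginctl; older installs use the systemd-loginctl name shown in the answer):

    loginctl list-sessions --no-legend | awk '{print $1}' |
    while read -r s; do loginctl session-status "$s"; done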
159,843 | I essentially want to run this command... ln -s /opt/gitlab/embedded/service/gitlab-shell/hooks/ /var/opt/gitlab/git-data/repositories/web/*/hooks This would create a symbolic link in all folders under the web folder called hooks however it returns no errors but it does not actually add the symlink. | You'll probably want to use the find command using the maxdepth option. I created this sample directory structure: /tmp/parent/tmp/parent/subdir2/tmp/parent/subdir1/tmp/parent/subdir4/tmp/parent/subdir4/notme/tmp/parent/subdir3 Let's say I wanted to create a symlink to /tmp/hooks in each subdir but not the notme subdir: root@xxxxxxvlp12 ~ $ find /tmp/parent -type d -maxdepth 1 -exec ln -s /tmp/hooks {} \;root@xxxxxxvlp12 ~ $ find /tmp/parent -ls2490378 4 drwxr-xr-x 6 root root 4096 Oct 7 12:39 /tmp/parent2490382 4 drwxr-xr-x 2 root root 4096 Oct 7 12:39 /tmp/parent/subdir22490394 0 lrwxrwxrwx 1 root root 10 Oct 7 12:39 /tmp/parent/subdir2/hooks -> /tmp/hooks2490379 4 drwxr-xr-x 2 root root 4096 Oct 7 12:39 /tmp/parent/subdir12490395 0 lrwxrwxrwx 1 root root 10 Oct 7 12:39 /tmp/parent/subdir1/hooks -> /tmp/hooks2490389 4 drwxr-xr-x 3 root root 4096 Oct 7 12:39 /tmp/parent/subdir42490390 4 drwxr-xr-x 2 root root 4096 Oct 7 12:38 /tmp/parent/subdir4/notme2490396 0 lrwxrwxrwx 1 root root 10 Oct 7 12:39 /tmp/parent/subdir4/hooks -> /tmp/hooks2490387 4 drwxr-xr-x 2 root root 4096 Oct 7 12:39 /tmp/parent/subdir32490397 0 lrwxrwxrwx 1 root root 10 Oct 7 12:39 /tmp/parent/subdir3/hooks -> /tmp/hooks2490391 0 lrwxrwxrwx 1 root root 10 Oct 7 12:39 /tmp/parent/hooks -> /tmp/hooks | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85452/"
]
} |
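One refinement worth noting for the find approach above: -maxdepth 1 -type d also matches the starting directory itself (visible as /tmp/parent/hooks in the listing), so adding -mindepth 1 restricts the action to the subdirectories. A sketch against the question's actual paths:

    find /var/opt/gitlab/git-data/repositories/web -mindepth 1 -maxdepth 1 -type d \
      -exec ln -s /opt/gitlab/embedded/service/gitlab-shell/hooks/ {} \;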
159,859 | As per my knowledge/understanding both help and man came at the same time or have very little time difference between them. Then GNU Info came in and from what I have seen is much more verbose, much more detailed and arguably much better than what man is. Many entries even today in man are cryptic. I have often wondered why Info, which is superior to man in many ways, didn't succeed man at all. I still see people producing man pages rather than info pages. Was it due to a lack of helpful tools for info? Something in the licenses of the two? Or some other factor which didn't get info the success it richly deserved? I did see a few questions on unix stackexchange notably What is GNU Info for? and Difference between help, info and man command among others. | To answer your question with at least a hint of factual background I propose to start by looking at the timeline of creation of man , info and other documentation systems. The first man page was written in 1971 using troff (nroff was not around yet) in a time when working on a CRT based terminal was not common and printing of manual pages the norm. The man pages use a simple linear structure. The man pages normally give a quick overview of a command, including its commandline option/switches. The info command actually processes the output from Texinfo typesetting syntax. This had its initial release in February 1986, a time when working on a text based CRT was the norm for Unix users, but graphical workstations still exclusive. The .info output from Texinfo provides basic navigation of text documents, and from the outset it has had a different goal of providing complete documentation (for the GNU Project). Things like the use of the command and the commandline switches are only a small part of what a Texinfo file for a program contains. Although there is overlap, the (Tex)info system was designed to complement the man pages, and not to replace them. HTML and web browsers came into existence in the early 90s and relatively quickly replaced text based information systems based on WAIS and gopher. Web browsers utilised the by then available graphical systems, which allow for more information (like underlined text for a hyperlink) than text-only systems allow. As the functionality info provides can be emulated in HTML and a web browser (possibly after conversion), the browser-based systems allow for greater ease of navigation (or at least less experience/learning). HTML was expanded and could do more things than Texinfo can. So for new projects (other than GNU software) a whole range of documentation systems has evolved (and is still evolving), most of them generating HTML pages. A recent trend for these is to make their input (i.e. what the human documenter has to provide) human readable, whereas Texinfo (and troff) is more geared to efficient processing by the programs that transform them.¹ info was not intended to be a replacement for the man pages, but they might have replaced them if the GNU software had included an info2man-like program to generate the man pages from a (subset of a larger) Texinfo file. Combine that with the fact that fully utilising the facilities that a system like Texinfo, (La)TeX, troff, HTML (+CSS) and reStructured Text provide takes time to learn, and that some of those are arguably more easy to learn and/or are more powerful, there is little chance of market dominance of (Tex)info . ¹ E.g. reStructured Text , which can also be used to write man pages
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/159859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
159,873 | I want to set up CentOS 7 firewall such that, all the incoming requests will be blocked except from the originating IP addresses that I whitelist. And for the Whitelist IP addresses all the ports should be accessible. I'm able to find few solutions (not sure whether they will work) for iptables but CentOS 7 uses firewalld . I can't find something similar to achieve with firewall-cmd command. The interfaces are in Public Zone. I have also moved all the services to Public zone already. | I'd accomplish this by adding sources to a zone. First checkout which sources there are for your zone: firewall-cmd --permanent --zone=public --list-sources If there are none, you can start to add them, this is your "whitelist" firewall-cmd --permanent --zone=public --add-source=192.168.100.0/24firewall-cmd --permanent --zone=public --add-source=192.168.222.123/32 (That adds a whole /24 and a single IP, just so you have a reference for both a subnet and a single IP) Set the range of ports you'd like open: firewall-cmd --permanent --zone=public --add-port=1-22/tcpfirewall-cmd --permanent --zone=public --add-port=1-22/udp This just does ports 1 through 22. You can widen this, if you'd like. Now, reload what you've done. firewall-cmd --reload And check your work: firewall-cmd --zone=public --list-all Side note / editorial: It doesn't matter but I like the "trusted" zone for a white-listed set of IPs in firewalld. You can make a further assessment by reading redhat's suggestions on choosing a zone . See also: RHEL 7 using Firewalls article Fedora FirewallD docs (fairly good, fedora's been using firewalld for some while) If you'd like to DROP packets outside this source, here's an example for dropping those outside the /24 I used as an example earlier, you can use rich rules for this , I believe. This is conceptual, I have not tested it (further than seeing that centos 7 accepts the command), but, should be easy enough to do a pcap and see if it behaves how you'd expect firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.100.0/24" invert="True" drop' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/159873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87039/"
]
} |
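To double-check the rich rule added at the end of the answer above, firewall-cmd can list the rich rules attached to a zone — purely a verification step:

    firewall-cmd --zone=public --list-rich-rules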
159,906 | How can I parse JSON output on the shell? For example, Amazon Web Services provides a CLI to retrieve the status of your instances: $ aws ec2 describe-instances <my_instance_id> But the command returns a JSON string. The output of that command looks like this: $ aws ec2 describe-instances x12345{ "Reservations" : { "OwnerId": "1345345" "Groups": [], "SecurityGroups": [ { "Foo" : "yes" "Bar" : "no } ] }} Are there any shell built-ins that could be used to parse JSON output? For example, I would like to capture in a shell variable FOO , the following output["Reservations"]["SecurityGroups"][0]{"Foo"} . In case it helps, I am specifically interested in solutions that could work from Zsh. | As I understand it you're looking for the value of "Foo". This is really easy to do with the shell command-line tool jq . It is something like sed in that it implements its own kind of parser language. Given your example: json='{ "Reservations" : { "OwnerId" : "1345345", "Groups" : [], "SecurityGroups" : [ { "Foo" : "yes", "Bar" : "no" } ] }}' jq can get yes as simply as: printf %s "$json" |jq '.[].SecurityGroups[0].Foo?' OUTPUT "yes" You can walk through an object hash or dictionary list using the .dot notation, and indexed arrays can be indexed more simply, with, as you have probably guessed, numeric, square-bracketed indices. In the command above I use the empty index form to indicate that I want all of that level's iterable items expanded. That may be easier to understand in this way: printf %s "$json" | jq '.[][]' ... which breaks out all values for the second level items in the hash and gets me... "1345345"[][ { "Foo": "yes", "Bar": "no" }] This barely scratches the surface with regards to jq 's capabilities. It is an immensely powerful tool for serializing data in the shell, it compiles to a single executable binary in the classic Unix-style, it is very likely available via package-manager for your distribution, and it is very well documented. Please visit its git -page and see for yourself. By the way, another way to tackle layered-data in json - at least to get an idea of what you're working with - might be to go the other way and use the .dot notation to break out all values at all levels like: printf %s "$json" | jq '..'{ "Reservations": { "OwnerId": "1345345", "Groups": [], "SecurityGroups": [ { "Foo": "yes", "Bar": "no" } ] }}{ "OwnerId": "1345345", "Groups": [], "SecurityGroups": [ { "Foo": "yes", "Bar": "no" } ]}"1345345"[][ { "Foo": "yes", "Bar": "no" }]{ "Foo": "yes", "Bar": "no"}"yes""no" But far better, probably, would be just to use one of the many discovery or search methods that jq offers for the various types of nodes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
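Since the question specifically asked about capturing a value into a shell variable, a minimal sketch building on the same $json as in the answer above; jq's -r flag strips the JSON quoting, and this works the same way in zsh and bash:

    FOO=$(printf %s "$json" | jq -r '.Reservations.SecurityGroups[0].Foo')
    printf '%s\n' "$FOO"    # prints: yes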
159,925 | I used to be able to do things like: X=123 cat <<EOFX is $XEOF or even simpler: X=123 echo $X The first one still seems to work on Mac OS X after installing the bash fix, however neither seem to work on my Ubuntu 14.04 instance in AWS anymore. What makes it so that echo or cat no longer have access to these environment variables? Stranger still, when I pass the env vars to a NodeJS app I don't seem to have any issues: cat <<EOF > test.jsconsole.log('X is ' + process.env.X);EOFX=123 node test.js This seems to work in bash scripts as well: cat <<EOF > test.shecho X is \$XEOFchmod +x test.shX=123 ./test.sh | In any POSIX shell, when you write X=123 echo $X the $X is expanded before the whole command is executed, i.e. if $X is initially unset, you get: X=123 echo which is then executed. You can see more or less what the shell is doing with set -x : $ set -x$ X=123 echo X=$X+ X=123+ echo X=X=$ set +x You can see that echo (actually the shell itself, which does the expansion before executing echo ) still has access to the environment: $ X=123 eval 'echo $X'123 The issue with cat <<EOF is similar. Note that concerning bash , there was a bug in old versions (before 4.1), described in the CHANGES file as: Fixed a bug that caused variable expansion in here documents to look in any temporary environment. This may be the cause of the behavior observed under Mac OS X. Do not rely on this bug. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2455/"
]
} |
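A small illustration of the expansion-order point made above: deferring expansion to a child shell with single quotes behaves like the eval example, because the child shell — not the current one — expands $X:

    X=123 sh -c 'echo $X'    # prints 123
    X=123 echo "$X"          # prints an empty line if X was previously unset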
159,936 | I recently installed Fedora 20. I don't recall what exact options I chose for encrypting the disk/LVM during installation. It installed fine and I can log in etc. Here is the situation I have: I booted up with LiveCD and tried the following: (I have installed Fedora20 to the /dev/sda3 partition). If I run cryptsetup open /dev/sda3 fedo I get an error saying it is not a LUKS device. If I run cryptsetup luksDump /dev/sda3 I get an error saying it is not a LUKS device. If I run cryptsetup open --type plain /dev/sda3 fedo , it prompts for password and it opens the device fine. So, obviously, that is a plain-text encrypted (without LUKS header) partition. Now, when I try to run mount /dev/mapper/fedo /mnt/fedora , it says unknown crypto_LUKS filesystem . I do have LVM on top of it, so, I can run pvdisplay , vgdisplay , lvdisplay and it shows information. I have a VG called fedora and two LVs, viz 00 for swap partition and 01 for / partition. Now, if I do a cryptsetup luksDump /dev/fedora/01 I can see LUKS headers etc. And, I can mount by running mount /dev/fedora/00 /mnt/fedora , no password prompt. So, do I have a LUKS-over-LVM-over-(plain-text)-encrypted partition? Here is my output of lsblk : # lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 37.3G 0 disk|-sda3 8:3 0 17.4G 0 part |-fedora-00 253:0 0 2.5G 0 lvm | |-luks-XXXXX 253:3 0 2.5G 0 crypt [SWAP] |-fedora-01 253:1 0 15G 0 lvm |-luks-XXXXX 253:2 0 15G 0 crypt / So, the question is, how to figure out whether I have LVM-over-LUKS or LUKS-over-LVM , or some other combination thereof ( LUKS over LVM over LUKS etc)? To make my question clear, I know I have LVM and LUKS, I want to figure out the order of them. | cryptsetup luksDump /dev/fedora/01 shows the LVM logical volume to be a LUKS encrypted volume. The output of pvs or pvdisplay would show the partition /dev/sda3 to be a physical volume. Thus you have LUKS over LVM. At a lower level, you have LVM over PC partition. The output of lsblk confirms this: sda is a disk, sda3 is a partition (which contains an LVM physical volume), fedora-00 and fedora-01 are logical volumes, and each logical volume contains a LUKS encrypted volume. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87085/"
]
} |
159,941 | I'm writing this in my script. It has other parts but I'm getting stuck on this part only. if [[$# == $year $month $day ]] ; then cal $day $month $yearfi When I run this it gives me this msg: [[3: command not found So what is the problem? Is it syntax or an actual command? Here's the rest of my script if that helps: year=$(echo "$year" | bc)month=$(echo "$month" | bc)day=$(echo "$day" | bc)if [[$# == $year $month $day ]] ; then cal $day $month $yearfi | cryptsetup luksDump /dev/fedora/01 shows the LVM logical volume to be a LUKS encrypted volume. The output of pvs or pvdisplay would show the partition /dev/sda3 to be a physical volume. Thus you have LUKS over LVM. At a lower level, you have LVM over PC partition. The output of lsblk confirms this: sda is a disk, sda3 is a partition (which contains an LVM physical volume), fedora-00 and fedora-01 are logical volumes, and each logical volume contains a LUKS encrypted volume. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87060/"
]
} |
159,961 | I want to create a file that contains columns from two input files. File1 is like: aa 32bb 15cc 78 File2 is: fa 19bc 23cc 50de 28aa 45bb 31 The task is, read through File1, if the 1st field of a row exists among the 1st field of File2, then print that line of File2, with both columns and add the 2nd column entry of File1 containing the 1st field. The output should be like: aa 45 32bb 31 15cc 50 78 awk is preferred for the script. | $ awk 'FNR==NR{a[$1]=$2;next} ($1 in a) {print $1,a[$1],$2}' file2 file1aa 45 32bb 31 15cc 50 78 Explanation: awk implicitly loops through each file, one line at a time. Since we gave it file2 as the first argument, it is read first. file1 is read second. FNR==NR{a[$1]=$2;next} NR is the number of lines that awk has read so far and FNR is the number of lines that awk has read so far from the current file. Thus, if FNR==NR , we are still reading the first named file: file2 . For every line in file2 , we assign a[$1]=$2 . Here, a is an associative array and a[$1]=$2 means saving file2's second column, denoted $2 , as a value in array a using file2's first column, $1 , as the key. next tells awk to skip the rest of the commands and start over with the next line. ($1 in a) {print $1,a[$1],$2} If we get here, that means that we are reading the second file: file1 . If we saw the first field of the line in file2 , as determined by the contents of array a , then we print out a line with the values of field 2 from both files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/159961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87098/"
]
} |
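Since the question says file1 is sorted on column 1, the classic join(1) utility is an alternative sketch that produces the same key/file2-value/file1-value layout; file2 is not sorted in the example, hence the process substitution (bash/zsh/ksh syntax):

    join -o 0,2.2,1.2 file1 <(sort file2)
    # aa 45 32
    # bb 31 15
    # cc 50 78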
159,964 | I have written a small daemon that starts at the boot time and does all the things perfectly like writing in the log file. But I want to know, how can we check whether that process is a daemon or not? My professor told me about a command ps -xj | grep daemon (my file name is daemon ), but I am not convinced about that as it shows unwanted information. Is there any shell command for that? edit: I am using Ubuntu 14.04 LTS | Anything with the PPID of 1 is, for the most part, likely a daemon. But there are situations that can arise where processes can become children of 1 that are not technically daemons. So the methods I discuss below are to demonstrate how you'd go about determining if a PID is owned by 1, not necessarily that it's an actual daemon. For example $ ps -xj PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 8420 1211 1211 8420 pts/4 1211 S+ 1000 0:01 ssh dufresne 1 2276 2275 2275 ? -1 Sl 1000 0:48 /usr/bin/gnome-keyring-daemon --daemonize --login 2196 2278 2278 2278 ? -1 Ssl 1000 0:39 gnome-session 1 2288 2278 2278 ? -1 S 1000 0:00 dbus-launch --sh-syntax --exit-with-session 1 2289 2289 2289 ? -1 Ssl 1000 6:00 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session 1 2358 2289 2289 ? -1 Sl 1000 0:01 /usr/libexec/gvfsd The excerpt from Wikipedia can shed some light on things as well, but it too leaves things a little vague on how to actually determine if a process is a daemon or not. excerpt from Wikipedia In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually either created by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix. NOTE: on systems that make use of SystemD (Red Hat distros such as Fedora) there's typically no init process but instead this: $ ps -j -1 PID PGID SID TTY STAT TIME COMMAND 1 1 1 ? Ss 0:42 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 That's the process with the PID 1. On Debian/Ubuntu systems they'll have a process still named init : $ ps -j -1 PID PGID SID TTY STAT TIME COMMAND 1 1 1 ? Ss 0:02 /sbin/init So what's a daemon? And here's the reason it can be tricky to determine if something is a daemon or not when its PPID is 1: A process can become a child of the init process, ( NOTE: that init process is PID 1), when their parent is killed or disowns them; these processes are not necessarily daemons, but will still show up as having their PPID equal to 1 . So to make the determination whether something is a daemon or not will likely require a battery of tests, and not simply looking to see if its PPID is 1. So where does that leave us? To determine if something is a daemon you'll likely have to resort to a variety of tests such as: PPID 1? Has TTY attached? Is it a service? sudo service ... ? Is it managed by Systemd, Upstart or SysV? Is it listening on a port? Is it writing to a log file? Syslog? So we're having to resort to "duck typing": if it quacks and swims, it's likely a duck, but even the above characteristics can fool you. References Daemon (computing) Linux Daemon Writing HOWTO | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/159964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87101/"
]
} |
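One concrete probe from the checklist above — the PPID and controlling TTY of a given process can be read in a single ps call (1234 is a placeholder PID):

    ps -o ppid=,tty=,comm= -p 1234
    # PPID 1 together with tty '?' is consistent with, but does not prove, a daemon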
160,001 | Why are ASCII-encoded files extended to UTF-8 or in reverse reduced to ASCII? user:~$ echo 'A B C | } ~' > ./file user:~$ user:~$ file --brief --mime ./filetext/plain; charset=us-asciiuser:~$ user:~$ user:~$ echo 'ᴁ ♫ ⼌ ' >> ./file user:~$ user:~$ file --brief --mime ./file text/plain; charset=utf-8user:~$user:~$ user:~$ cat ./file A B C | } ~ᴁ ♫ ⼌ user:~$ user:~$ user:~$ sed -i '$d' ./file user:~$ user:~$ cat ./file A B C | } ~user:~$user:~$ file --brief --mime ./file text/plain; charset=us-asciiuser:~$ In case you cannot read a character in the second echo statement: From first to last: U+1D01, ᴁ; U+266B, ♫; U+2F0C, ⼌; U+1D411, ; U+1F035, ; U+1F200, . The locale settings are: user:~$ echo $LANGen_US.UTF-8user:~$ echo $LANGUAGEen_US:enuser:~$ echo $LC_COLLATEuser:~$ echo $LC_CTYPEuser:~$ echo $SHELL/bin/bashuser:~$ echo $SHELL/bin/bashuser:~$ user:~$ ps -p $$ PID TTY TIME CMD 7537 pts/6 00:00:00 bashuser:~$ | I think you're confusing "encoding" and "character sets". In the first case, the file contains only characters found in US-ASCII. This means that the file will look the same no matter what language settings you're using to display it. In the second case, the file now contains characters belonging to the UTF8 character set, because that's what you put into it. There's no conversion happening here; the command is simply informing you of what the contents of the file are. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50835/"
]
} |
160,008 | I have a set of files, all that are named with the convention file_[number]_[abcd].bin (where [number] is a number in the range 0-size of drive in MB). i.e. there are file_0_a.bin , file_0_b.bin , file_0_c.bin and file_0_d.bin and then the 0 would become a 1 and so on. The number of files is figured out at run-time based on the size of the partition. I need to delete all of the files that have been created, but in a pseudo-random manner, in blocks of a size that I need to be able to specify, i.e. where there are 1024 files, delete 512, then delete another 512. I have the following function for doing it currently, which I call the required number of times, but it will get progressively less likely to find a file that exists, to the point where it might never complete. Obviously, this is somewhat less than ideal. What is another method that I can use to delete all of the files in a random order? deleteRandFile() #$1 - total number of files{ i=$((RANDOM%$1)) j=$((RANDOM%3)) file="" case $j in 0) file="${dest_dir}/file_${i}_a.bin";; 1) file="${dest_dir}/file_${i}_b.bin";; 2) file="${dest_dir}/file_${i}_c.bin";; 3) file="${dest_dir}/file_${i}_d.bin";; esac if ! [[ -f $file ]]; then deleteRandFile $1 else rm $file fi return 0;} Edit:I'm trying to delete in random order so that I can fragment the files as much as possible. This is part of a script that begins by filling a drive with 1MB files, and deletes them, 1024 at a time, then fills the 'gap' with 1 1GB file. Rinse and repeat until you have some very fragmented 1GB files. | If you want to delete all the files, then, on a GNU system, you could do: cd -P -- "$destdir" && printf '%s\0' * | # print the list of files as zero terminated records sort -Rz | # random sort (shuffle) the zero terminated records xargs -r0 rm -f # pass the input if non-empty (-r) understood as 0-terminated # records (-0) as arguments to rm -f If you want to only delete a certain number of those matching a regexp you'd insert something like this between the sort and xargs : awk -v RS='\0' -v ORS='\0' -v n=1024 '/regexp/ {print; if (--n == 0) exit}' With zsh , you could do: shuffle() REPLY=$RANDOM; rm -f file_<->_[a-d].bin(.o+shuffle[1,1024])
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87133/"
]
} |
160,009 | I have a file containing two columns and 10 million rows. The first column contains many repeated values, but there is a distinct value in column 2. I want to remove the repeated rows and want to keep only one using awk . Note: the file is sorted with values in column 1. For example: 1.123 -4.02.234 -3.52.234 -3.12.234 -2.04.432 0.05.123 +0.28.654 +0.58.654 +0.88.654 +0.9.... Expected output 1.123 -4.02.234 -3.54.432 0.05.123 +0.28.654 +0.5.... | A few ways: awk awk '!a[$1]++' file This is a very condensed way of writing this: awk '{if(! a[$1]){print; a[$1]++}}' file So, if the current first field ( $1 ) is not in the a array, print the line and add the 1st field to a . Next time we see that field, it will be in the array and so will not be printed. Perl perl -ane '$k{$F[0]}++ or print' file or perl -ane 'print if !$k{$F[0]}++' file This is basically the same as the awk one. The -n causes perl to read the input file line by line and apply the script provided by -e to each line. The -a will automatically split each line on whitespace and save the resulting fields in the @F array. Finally, the first field is added to the %k hash and if it is not already there, the line is printed. The same thing could be written as perl -e 'while(<>){ @F=split(/\s+/); print unless defined($k{$F[0]}); $k{$F[0]}++; }' file Coreutils rev file | uniq -f 1 | rev This method works by first reversing the lines in file so that if a line is 12 345 it'll now be 543 21. We then use uniq -f 1 to ignore the first field, that is to say, the column that 543 is in. (Each line of file has two whitespace-separated fields, so skipping the first field of the reversed line means uniq compares only what was originally the first column.) Using uniq here has the effect of filtering out any duplicate lines, keeping only 1 of each. Lastly we put the lines back into their original order with another reverse. GNU sort (as suggested by @StéphaneChazelas) sort -buk1,1 The -b flag ignores leading whitespace and the -u means print only unique entries. The clever bit is the -k1,1 . The -k flag sets the field to sort on. It takes the general format of -k POS1[,POS2] which means only look at fields POS1 through POS2 when sorting. So, -k1,1 means only look at the 1st field. Depending on your data, you might want to also add one of these options: -g, --general-numeric-sort compare according to general numerical value -n, --numeric-sort compare according to string numerical value | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83409/"
]
} |
160,013 | Let's say I have an application which has plugins, daemons, etc. Now it has to save its own settings and its plugins' settings as well. Most applications use the user's home folder for this, but it is not secure. Something (a bad user, a hacker, a virus etc.) can delete those settings. Gsettings stores them in a binary database in the user folder, KDE saves them in INI format in the user's home folder, but these options are still not secure because something can delete them easily. Or am I wrong? How else can I secure it? Storing them as root or another user is not an option either, as you have to grant permission for every little change you make, which is very annoying. | A few ways: awk awk '!a[$1]++' file This is a very condensed way of writing this: awk '{if(! a[$1]){print; a[$1]++}}' file So, if the current first field ( $1 ) is not in the a array, print the line and add the 1st field to a . Next time we see that field, it will be in the array and so will not be printed. Perl perl -ane '$k{$F[0]}++ or print' file or perl -ane 'print if !$k{$F[0]}++' file This is basically the same as the awk one. The -n causes perl to read the input file line by line and apply the script provided by -e to each line. The -a will automatically split each line on whitespace and save the resulting fields in the @F array. Finally, the first field is added to the %k hash and if it is not already there, the line is printed. The same thing could be written as perl -e 'while(<>){ @F=split(/\s+/); print unless defined($k{$F[0]}); $k{$F[0]}++; }' file Coreutils rev file | uniq -f 1 | rev This method works by first reversing the lines in file so that if a line is 12 345 it'll now be 543 21. We then use uniq -f 1 to ignore the first field, that is to say, the column that 543 is in. (Each line of file has two whitespace-separated fields, so skipping the first field of the reversed line means uniq compares only what was originally the first column.) Using uniq here has the effect of filtering out any duplicate lines, keeping only 1 of each. Lastly we put the lines back into their original order with another reverse. GNU sort (as suggested by @StéphaneChazelas) sort -buk1,1 The -b flag ignores leading whitespace and the -u means print only unique entries. The clever bit is the -k1,1 . The -k flag sets the field to sort on. It takes the general format of -k POS1[,POS2] which means only look at fields POS1 through POS2 when sorting. So, -k1,1 means only look at the 1st field. Depending on your data, you might want to also add one of these options: -g, --general-numeric-sort compare according to general numerical value -n, --numeric-sort compare according to string numerical value | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87138/"
]
} |
160,016 | I have a separate file for logging local7 facility, and this file is touch ed and permissions set, from my installer. But sometimes I see that the logs are not being written to it (after I do a re-install) until I do rsyslog restart ! Is it mandatory to restart rsyslog if the log file is touch ed by another program/application ? (since the installer is run as root , the log file's time-stamp will be changed due to touch - will this cause rsyslog to not write to the log file ?) | A few ways: awk awk '!a[$1]++' file This is a very condensed way of writing this: awk '{if(! a[$1]){print; a[$1]++}}' file So, if the current first field ( $1 ) is not in the a array, print the line and add the 1st field to a . Next time we see that field, it will be in the array and so will not be printed. Perl perl -ane '$k{$F[0]}++ or print' file or perl -ane 'print if !$k{$F[0]}++' file This is basically the same as the awk one. The -n causes perl to read the input file line by line and apply the script provided by -e to each line. The -a will automatically split each line on whitespace and save the resulting fields in the @F array. Finally, the first field is added to the %k hash and if it is not already there, the line is printed. The same thing could be written as perl -e 'while(<>){ @F=split(/\s+/); print unless defined($k{$F[0]}); $k{$F[0]}++; }' file Coreutils rev file | uniq -f 1 | rev This method works by first reversing the lines in file so that if a line is 12 345 it'll now be 543 21. We then use uniq -f 1 to ignore the first field, that is to say, the column that 543 is in. (Each line of file has two whitespace-separated fields, so skipping the first field of the reversed line means uniq compares only what was originally the first column.) Using uniq here has the effect of filtering out any duplicate lines, keeping only 1 of each. Lastly we put the lines back into their original order with another reverse. GNU sort (as suggested by @StéphaneChazelas) sort -buk1,1 The -b flag ignores leading whitespace and the -u means print only unique entries. The clever bit is the -k1,1 . The -k flag sets the field to sort on. It takes the general format of -k POS1[,POS2] which means only look at fields POS1 through POS2 when sorting. So, -k1,1 means only look at the 1st field. Depending on your data, you might want to also add one of these options: -g, --general-numeric-sort compare according to general numerical value -n, --numeric-sort compare according to string numerical value | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47055/"
]
} |
160,019 | I was trying to install bsd-mailx utility the package got installed however I am wondering about the error. This is the error I get: Preconfiguring packages ...dpkg: warning: 'ldconfig' not found in PATH or not executable.dpkg: warning: 'start-stop-daemon' not found in PATH or not executable.dpkg: error: 2 expected programs not found in PATH or not executable.Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.E: Sub-process /usr/bin/dpkg returned an error code (2) | First of all, the lines you are truly interested in are: dpkg: warning: 'ldconfig' not found in PATH or not executable.dpkg: warning: 'start-stop-daemon' not found in PATH or not executable. These errors have been reported several times by Debian and Ubuntu users (you can actually Google them for more information). It seems like the PATH variable isn't correctly set when the user tries to execute a command through sudo , which is probably what you are trying to do. Solution 1: Set sudo 's default secure path Open /etc/sudoers by running visudo in your terminal, and make sure the file includes the following line: Defaults env_resetDefaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" More information about this problem may be found here (Problems and tips > PATH not set). Solution 2: use the root account directly Don't use sudo , just switch to root to run your commands. Run one of the following commands to do so: $ sudo -i$ su Once you are logged in as root, just run your apt-get commands again: # apt-get ... You might have to set root's PATH first though. Edit /root/.bashrc (with root privileges of course), and add the following line: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Solution 3: try to pass the PATH variable to sudo at execution time. Just prefix the sudo call with the redefinition of the PATH variable: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin sudo apt-get ... | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/160019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87142/"
]
} |
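A quick diagnostic sketch for the PATH problem described above: check which PATH sudo actually hands to root and whether the two missing programs are reachable. The paths shown are the usual Debian/Ubuntu locations:

    sudo sh -c 'echo "$PATH"; command -v ldconfig start-stop-daemon'
    ls -l /sbin/ldconfig /sbin/start-stop-daemon    # both should exist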
160,039 | I have some public keys of multiple users in my keyring in GnuPG. One of these users has switched to a new public key. I still have the user's old key which has an assigned trust of ultimate . I just assigned the same trust to his new key. He does not use the old key anymore. What should I do with the old key? Should I withdraw trust, or revoke it? What is the correct procedure in such a case? | First of all, ultimate trust shouldn't be used for others' keys, full trust is enough . If you issued ultimate trust to make the key itself valid, you misunderstood the web of trust concept . If you just wanted all his certifications to be valid for you (thus, extending your web of trust), full trust is enough, if you at the same time certified him. Regarding your actual question: this depends a little bit on the situation. You will not be able to revoke the other's key. Has the key's owner revoked it? If so, he should just send you the revocation certificate -- for example by uploading it to the key servers, where you can fetch it again. If the key is revoked, you do not have to care about trust any more anyway. The key owner has lost control over the key, but cannot revoke it any more. For example, somebody stole the laptop with the only copy of the key, and the owner doesn't have a revocation certificate (very bad idea). Now it's up to you to fix the situation, by withdrawing trust and setting it to "never". Also consider doing the same with his new key, as there seem to be major issues with the owner's key handling. This does not change validity of his key (if you signed it), it just makes sure certifications issued by it won't be used for validity calculation of others. The key owner just doesn't want to use the key any more , but still owns it and wants to keep the reputation in the web of trust he built up (which you probably also want to make use of): Just import his new key, and don't care about the old one at all. Apart from changing trust from "ultimate" to "full". If you want to make sure you're not accidentally encrypting to his old key, disable it by running gpg --edit-key [key-id] , then using GnuPG's disable command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
160,116 | I often use the last command to check my systems for unauthorized logins, this command: last -Fd gives me the logins where I have remote logins showing with ip. From man last : -F Print full login and logout times and dates.-d For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP number back into a hostname. Question: One of my systems is only showing a few days worth of logins. Why is that? What can I do when last only gives me few days? Here is the output in question: root ~ # last -Fduser pts/0 111-111-111-111. Wed Oct 8 20:05:51 2014 still logged in user pts/0 host.lan Mon Oct 6 09:52:01 2014 - Mon Oct 6 09:53:41 2014 (00:01) user pts/0 host.lan Sat Oct 4 10:11:39 2014 - Sat Oct 4 10:12:13 2014 (00:00) user pts/0 host.lan Sat Oct 4 09:31:07 2014 - Sat Oct 4 10:11:00 2014 (00:39) user pts/0 host.lan Sat Oct 4 09:26:04 2014 - Sat Oct 4 09:28:16 2014 (00:02) wtmp begins Sat Oct 4 09:26:04 2014 | It is likely that logrotate has archived the log(s) of interest and opened a new one. If you have older wtmp files, specify one of those, as for example: last -f /var/log/wtmp-20141001 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
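Building on the rotated-log point above, a small loop that runs last over every archived wtmp file as well as the current one; file names vary by distribution, and compressed rotations would need decompressing first:

    for f in /var/log/wtmp*; do
        printf '== %s ==\n' "$f"
        last -Fd -f "$f"
    done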
160,130 | Is there an easy way to find out which command has the longest manual pages? | You can calculate it yourself for your system with a simple command $ find /usr/share/man/ -type f -exec ls -S {} + 2>/dev/null | head | while \ read -r file; do printf "%-40s" "$file"; \ man "$file" 2>/dev/null | wc -lwm; done | sort -nrk 4 which returns on my box (file) (lines) (words) (chars)/usr/share/man/man1/zshall.1.bz2 27017 186394 1688174/usr/share/man/man1/cmake.1.bz2 22477 106148 1004288/usr/share/man/man1/cmake-gui.1.bz2 21362 100055 951110/usr/share/man/man1/perltoc.1.bz2 18179 59783 780134/usr/share/man/man1/cpack.1.bz2 9694 48264 458528/usr/share/man/man1/cmakemodules.1.bz2 10637 42022 419127/usr/share/man/man5/smb.conf.5.bz2 8306 49991 404190/usr/share/man/man1/perlapi.1.bz2 8548 43873 387237/usr/share/man/man1/perldiag.1.bz2 5662 37910 276778/usr/share/man/e 1518 5919 58630 where columns represent number of lines, words and characters respectively. Rows (commands) are sorted by the last column. We can do a similar thing for info pages, but we have to bear in mind that its content can span over many files. Thus let's use the benefits of zsh to keep the above one-liner in a compact form: $ for inf in ${(u)$(echo /usr/share/info/**/*(.:t:r:r))}; do \ printf "%-40s" "$inf"; \ info "$inf" 2>/dev/null | wc -lwm; done | sort -nrk 4 which gives (info title) (lines) (words) (chars)elisp 72925 457537 3379403libc 69813 411216 3066817lispref 62753 374938 2806412emacs 47507 322194 2291425calc 33716 244394 1680763internals 32221 219772 1549305zsh 34932 206851 1544909gsl-ref 32493 179954 1518248gnus 31723 180613 1405064gawk 27150 167135 1203395xemacs 25734 170403 1184250 Info pages are huge mostly for gnu-related stuff, which is understandable, but I find it interesting that for example zsh has more lines and words but fewer characters than in man pages. It is interesting because at first glance the content is the same, just the formatting is a little bit different. Explanation of zsh tricks in the selection of the files for the loop: for inf in ${(u)$(echo /usr/share/info/**/*(.:t:r:r))}; do The goal is to create the list of unique file names from the /usr/share/info directory and all subdirectories. Files should be stripped of dirname, extensions and all numbers. The above snippet can be rewritten as ${(u)$(echo /usr/share/info/**/*(.)):t:r:r} , which gives the same result with arguably cleaner syntax, namely: **/* : descend into all subdirectories and mark everything there (.) : select only plain files :t : remove pathname components (works like basename ) :r : remove extension (everything after last dot, including dot). It is applied twice to also remove the unnecessary string and number (e.g. .info-6 from file zsh.info-6.bz2 ) (u) : show only unique words (after previous operations there are many repeated words - different files/chapters for the same info command) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81059/"
]
} |
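A minimal non-zsh sketch that just reports the single longest page in man1 (it assumes GNU man, which accepts a path argument whenever the argument contains a slash):
$ for f in /usr/share/man/man1/*; do
>     printf '%8d %s\n' "$(man "$f" 2>/dev/null | wc -l)" "$f"
> done | sort -nr | head -n 1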
160,172 | I run these commands:
tar -cf myArchive.tar myDirectory/
gzip myArchive.tar
then I copy the file over a lot of unreliable mediums, and later I unpack it using:
tar -xzf myArchive.tar.gz
The fact that I compressed the tar-ball, will that in any way guarantee the integrity, or at least a CRC, of the unpacked content? | tar itself does not write down a checksum for later comparison. If you gzip the tar archive you can have that functionality. tar also supports compress : if you use the -Z flag while creating the archive, tar will use the compress program when reading or writing the archive. From the gzip manpage: The standard compress format was not designed to allow consistency checks. But, you can use the -z parameter. Then tar reads and writes the archive through gzip . And gzip writes a CRC checksum. To display that checksum use this command:
$ gzip -lv archive.tar.gz
method  crc       date    time   compressed  uncompressed  ratio  uncompressed_name
defla   3f641c33  Sep 25  14:01  24270       122880        80.3%  archive.tar
From the gzip manpage: When using the first two formats ( gzip or zip is meant ), gunzip checks a 32 bit CRC. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39392/"
]
} |
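The CRC can also be verified without unpacking anything; gzip -t tests the whole stream and exits with status 0 on success:
$ gzip -t myArchive.tar.gz && echo "CRC OK"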
160,185 | I am having trouble installing any new packages in Ubuntu because of python. I tried sudo apt-get install python3 python3-dev but I am getting the following output:
dpkg: error processing python-lockfile (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: error processing python-gi (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: error processing python-apt (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
Setting up python-six (1.3.0-1) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-six (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: error processing python-chardet (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of python-debian:
 python-debian depends on python-six; however:
  Package python-six is not configured yet.
 python-debian depends on python-chardet; however:
  Package python-chardet is not configured yet.
dpkg: error processing python-debian (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of update-notifier-common:
 update-notifier-common depends on python-apt (>= 0.6.12); however:
  Package python-apt is not configured yet.
 update-notifier-common depends on python-debian; however:
  Package python-debian is not configured yet.
dpkg: error processing update-notifier-common (--configure):
 dependency problems - leaving unconfigured
Setting up python-sip (4.15.2-1ubuntu1) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-sip (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of python-qt4:
 python-qt4 depends on sip-api-10.1; however:
  Package sip-api-10.1 is not installed.
  Package python-sip which provides sip-api-10.1 is not configured yet.
dpkg: error processing python-qt4 (--configure):
 dependency problems - leaving unconfigured
dpkg: error processing python-dbus (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of python-qt4-dbus:
 python-qt4-dbus depends on python-dbus (>= 0.84.0-2~); however:
  Package python-dbus is not configured yet.
dpkg: error processing python-qt4-dbus (--configure):
 dependency problems - leaving unconfigured
dpkg: error processing python-dirspec (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: error processing python-httplib2 (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: error processing python-crypto (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of python-oauthlib:
 python-oauthlib depends on python-crypto; however:
  Package python-crypto is not configured yet.
dpkg: error processing python-oauthlib (--configure):
 dependency problems - leaving unconfigured
Setting up python-openssl (0.13-2ubuntu4) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-openssl (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up python-pkg-resources (0.6.37-1ubuntu1) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing ubuntu-sso-client-qt (--configure):
 dependency problems - leaving unconfigured
Setting up python-problem-report (2.12.5-0ubuntu2.2) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-problem-report (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: error processing python-keyring (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: error processing python-lazr.uri (--configure):
 dependency problems - leaving unconfigured
Setting up python-simplejson (3.3.0-2ubuntu2) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-simplejson (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up python-oauth (1.0.1-3build1) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-oauth (--configure):
 subprocess installed post-installation script returned error exit status 1
(gconftool-2:20627): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
Traceback (most recent call last):
  File "/usr/sbin/gconf-schemas", line 121, in <module>
    trim(os.path.join(defaults_dest,"%gconf-tree.xml"), get_valid_languages())
  File "/usr/sbin/gconf-schemas", line 18, in get_valid_languages
    langs.add(l.split('_')[0])
TypeError: Type str doesn't support the buffer API
dpkg: error processing gconf2 (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of aisleriot:
Setting up python-xapian (1.2.15-4) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-xapian (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up python-xdg (0.25-3) ...
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error processing python-xdg (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: error processing python-configglue (--configure):
 dependency problems - leaving unconfigured
dpkg: too many errors, stopping
Errors were encountered while processing:
 python-lockfile duplicity deja-dup python-gi python-apt python-six python-chardet python-debian update-notifier-common python-sip python-qt4 python-dbus python-qt4-dbus python-dirspec python-httplib2 python-crypto python-oauthlib python-openssl python-pkg-resources python-zope.interface python-twisted-core python-twisted-web python-ubuntu-sso-client ubuntu-sso-client ubuntu-sso-client-qt python-problem-report python-keyring python-lazr.uri python-simplejson python-wadllib python-oauth python-lazr.restfulclient python-launchpadlib python-apport python3-distupgrade python3-update-manager ubuntu-release-upgrader-core update-manager-core gconf2 aisleriot gnome-terminal-data gnome-terminal python-xapian apt-xapian-index apturl-common apturl compiz-gnome compiz deja-dup-backend-gvfs python-xdg python-configglue
Processing was halted because there were too many errors.
I have tried most everything I can find through google. I am using python 2.7.5. I have done apt-get clean and apt-get autoclean and all variations on that theme. I really want to be able to install python-dev . How can I make this happen? At this point, I am willing to consider extreme options, whatever they may be, other than formatting the system. Output for sudo apt-get install --reinstall python-lockfile :
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
280 not fully installed or removed.
Need to get 0 B/5,207 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 209306 files and directories currently installed.)
Preparing to replace python-lockfile 1:0.8-2ubuntu1 (using .../python-lockfile_1%3a0.8-2ubuntu1_all.deb) ...
  File "/usr/bin/pyclean", line 63
    except (IOError, OSError), e:
                             ^
SyntaxError: invalid syntax
dpkg: warning: subprocess old pre-removal script returned error exit status 1
dpkg: trying script from the new package instead ...
  File "/usr/bin/pyclean", line 63
    except (IOError, OSError), e:
                             ^
SyntaxError: invalid syntax
dpkg: error processing /var/cache/apt/archives/python-lockfile_1%3a0.8-2ubuntu1_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
Traceback (most recent call last):
  File "/usr/bin/pycompile", line 35, in <module>
    from debpython.version import SUPPORTED, debsorted, vrepr, \
  File "/usr/share/python/debpython/version.py", line 24, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named ConfigParser
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 /var/cache/apt/archives/python-lockfile_1%3a0.8-2ubuntu1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Output for apt-cache policy python-lockfile python-minimal :
python-lockfile:
  Installed: 1:0.8-2ubuntu1
  Candidate: 1:0.8-2ubuntu1
  Version table:
 *** 1:0.8-2ubuntu1 0
        500 http://us.archive.ubuntu.com/ubuntu/ saucy/main amd64 Packages
        100 /var/lib/dpkg/status
python-minimal:
  Installed: 2.7.5-5ubuntu1
  Candidate: 2.7.5-5ubuntu1
  Version table:
 *** 2.7.5-5ubuntu1 0
        500 http://us.archive.ubuntu.com/ubuntu/ saucy/main amd64 Packages
        100 /var/lib/dpkg/status
Edit-5: Output for sudo dpkg -C -> http://goo.gl/ib3RqB (Sorry, word limit reached so posting in this file.) Edit-6: Thank you for your patience and help. After the suggestions, I am still getting errors; my results are:
sudo dpkg --configure -a -> http://goo.gl/uab19E
sudo apt-get install -f -> http://goo.gl/wUZXgY
Output for sudo apt-get --reinstall install ubuntu-release-upgrader-gtk :
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
108 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
E: Internal Error, No file name for ubuntu-release-upgrader-gtk:amd64 | I had a very similar issue. It seems to come from the use of python3 instead of python2.7. I had /usr/bin/python linked to python3 (I changed the link after installing python3 for greater convenience; it looks like aliasing is a much better idea). Anyway, after unlinking it and relinking it to python2.7, the upgrade worked fine. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87262/"
]
} |
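A minimal recovery sketch of the relinking described above (paths assume a stock Debian/Ubuntu layout; verify where the symlink points before changing it):
$ ls -l /usr/bin/python                   # confirm it currently points at python3
$ sudo ln -sf python2.7 /usr/bin/python   # restore the python2.7 link
$ sudo dpkg --configure -a                # let dpkg finish the half-configured packages
$ sudo apt-get install -f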
160,196 | apt is using two locations to store downloaded packages and other files:
/var/lib/apt/lists
/var/cache/apt/archives
These folders can get quite big, even when using apt-get clean regularly. My /var is on a separate partition and is relatively small. Is it possible to configure apt so that it stores its files somewhere else (e.g. in /home/apt/ )? | You have a few options. Change the settings in /etc/apt/apt.conf :
dir::state::lists /path/to/new/directory;
dir::cache::archives /path/to/new/directory;
Mount larger partitions at the current directories (if you have spare space for a partition):
# mount /dev/sda5 /var/lib/apt
# mount /dev/sda6 /var/cache/apt
Of course, for the above to work, you'll need to create partitions and filesystems first. Symlink to another location (if you have no space for new partitions, but space within current partitions):
# ln -s /home/apt/lib /var/lib/apt
# ln -s /home/apt/cache /var/cache/apt
Or as above, but using bind mounts:
# mount --bind /home/apt/lib /var/lib/apt
# mount --bind /home/apt/cache /var/cache/apt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
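Whichever route you take for the apt.conf option, you can confirm that apt actually picked up the new paths (apt-config prints the merged configuration):
$ apt-config dump | grep -iE 'state::lists|cache::archives'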
160,212 | I'm looking for a way to watch YouTube videos in a terminal (not in a browser or another window, but right there, in any bash session). Is there a simple way to do this? I imagine something like this: $ youtube <video-url> I already know how to play a video using mplayer : $ mplayer -vo caca local-file.avi However, this opens a new window. It would be cool to play it in the terminal. Also, it should be compatible with tmux sessions. I asked another question for how to prevent opening a new window . For those who wonder why I need such functionality, I started an experimental project named TmuxOS -- with the concept that everything should run inside of a tmux session . So, indeed I need a video player for local and remote videos. :-) | You can download videos and/or just the audio and then watch/listen to them using youtube-dl . The script is written in Python and makes use of ffmpeg I believe.
$ youtube-dl --help
Usage: youtube-dl [options] url [url...]

Options:
  General Options:
    -h, --help         print this help text and exit
    --version          print program version and exit
    -U, --update       update this program to latest version. Make sure that
                       you have sufficient permissions (run with sudo if needed)
......
To download videos you simply give it the URL from the page you want the video on and the script does the rest:
$ youtube-dl https://www.youtube.com/watch?v=OwvZemXJhF4
[youtube] Setting language
[youtube] OwvZemXJhF4: Downloading webpage
[youtube] OwvZemXJhF4: Downloading video info webpage
[youtube] OwvZemXJhF4: Extracting video information
[youtube] OwvZemXJhF4: Encrypted signatures detected.
[youtube] OwvZemXJhF4: Downloading js player 7N
[youtube] OwvZemXJhF4: Downloading js player 7N
[download] Destination: Joe Nichols - Yeah (Audio)-OwvZemXJhF4.mp4
[download] 100% of 21.74MiB in 00:16
You can then use vlc or mplayer to watch these locally:
$ vlc "Joe Nichols - Yeah (Audio)-OwvZemXJhF4.mp4"
VLC media player 2.1.5 Rincewind (revision 2.1.4-49-gdab6cb5)
[0x1cd1118] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
Fontconfig warning: FcPattern object size does not accept value "0"
Fontconfig warning: FcPattern object size does not accept value "0"
Fontconfig warning: FcPattern object size does not accept value "0"
Fontconfig warning: FcPattern object size does not accept value "0"
OK but I want to watch these videos as they're streamed & in ASCII I found this blog article titled: On ascii, youtube and letting go (Wayback Machine) that demonstrates the method that I discussed in the chatroom, mainly using youtube-dl as the "backend" which could do the downloading of the YouTube stream and then redirecting it to some other app. This article shows it being done with mplayer :
$ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \
    mplayer -vo aa -monitorpixelaspect 0.5 -
The video being downloaded by youtube-dl is redirected via STDOUT above, -o - . There's a demo of the effect here . With the installation of additional libraries the ASCII video can be enhanced further. OK but I want the video in my actual terminal? I found this trick which allows video to be played in an xterm, in the O'Reilly article titled: Watch Videos in ASCII Art :
$ xterm -fn 5x7 -geometry 250x80 -e "mplayer -vo aa:driver=curses j.mp4"
The above results in an xterm window being opened where the video plays.
So I thought, why not put the peanut butter and the chocolate together like this:
$ xterm -fn 5x7 -geometry 250x80 -e \
    "youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \
     mplayer -vo aa:driver=curses -"
This almost works! I'm not sure why the video cannot play in the window, but it would seem like it should be able to. The window comes up and starts to play but then closes. I see video for a brief few seconds and then nothing. Perhaps the above will get you closer to your ultimate solution, or perhaps it just needs its options tweaked a bit. Additional libraries If you have libcaca installed (the colorized version of aalib ) and you reduce the font size in your gnome-terminal to something really small, like say 3, the following command will display a much better looking ASCII video directly within the terminal:
$ CACA_DRIVER=ncurses mplayer -vo caca video.mp4
Terminals It would seem that the choice of terminal can make a big difference as to whether mplayer can play directly inside the terminal or whether it opens a separate window. Caching in mplayer also made a dramatic difference in being able to play directly in one's terminal. Using this command I was able to play in terminator , at least for the first 1/4 of the video before it cut out:
$ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \
    mplayer -cache 32767 -vo aa:driver=curses -
The colored version used this command:
$ youtube-dl http://www.youtube.com/watch?v=OC83NA5tAGE -o - | \
    CACA_DRIVER=ncurses mplayer -cache 64000 -vo caca -
These same commands could play in gnome-terminal & xterm too. NOTE: That's (from left to right) xterm , terminator , gnome-terminal , and terminology . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/160212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45370/"
]
} |
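The streaming pipeline above packages neatly into a tiny shell function (the name ytterm is my own invention; it only reuses the commands already shown, so youtube-dl, mplayer and libcaca must be installed):
ytterm() {
    # stream the URL to stdout and render it as colored ASCII in the terminal
    youtube-dl "$1" -o - | CACA_DRIVER=ncurses mplayer -cache 8192 -vo caca -
}
# usage: ytterm 'https://www.youtube.com/watch?v=OC83NA5tAGE'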
160,236 | I'm getting the following mail every day on a server:
This is an automatically generated mail message from mdadm
running on <host>

A SparesMissing event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      731592000 blocks [2/2] [UU]

unused devices: <none>
The output from cat /proc/mdstat looks fine though, so it's not obvious what is causing this problem. | The cause was an erroneous spares=1 option in the mdadm.conf :
# definitions of existing MD arrays
ARRAY /dev/md0 UUID=621d5f15:cce75825:60273c48:78a7dac7 spares=1
I'm not sure how this ended up there, but I suppose it happened when a device failed and was replaced. Removing the spares=1 option or just recreating the mdadm.conf from scratch fixes the problem:
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32931/"
]
} |
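A sketch of regenerating the file safely; the redirection has to happen inside a root shell, and on Debian-style systems the boot-time copy in the initramfs should be refreshed too (an assumption about the setup, harmless to skip elsewhere):
$ sudo sh -c '/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf'
$ sudo update-initramfs -u    # keep the initramfs copy of mdadm.conf in sync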
160,246 | I'm puzzled by the hash (ASCII) code stored under Linux (Ubuntu) /etc/shadow. Taking a hypothetical case, let the password be 'test' and the salt be 'Zem197T4' . By running the following command,
$ mkpasswd -m SHA-512 test Zem197T4
a long series of ASCII characters is generated (this is actually what Linux stores in /etc/shadow):
$6$Zem197T4$oCUr0iMuvRJnMqk3FFi72KWuLAcKU.ydjfMvuXAHgpzNtijJFrGv80tifR1ySJWsb4sdPJqxzCLwUFkX6FKVZ0
When using an online SHA-512 generator (e.g. http://www.insidepro.com/hashes.php?lang=eng ), what is generated is some hex code as below:
option 1) password+salt
8d4b73598280019ef818e44eb4493c661b871bf758663d52907c762f649fe3355f698ccabb3b0c59e44f1f6db06ef4690c16a2682382617c6121925082613fe2
option 2) salt+password
b0197333c018b3b26856473296fcb8637c4f58ab7f4ee2d6868919162fa6a61c8ba93824019aa158e62ccf611c829026b168fc4bf90b2e6b63c0f617198006c2
I believe these hex codes should be the 'same thing' as the ASCII code generated by mkpasswd. But how are they related? Hope someone could enlighten me. | On Ubuntu/Debian mkpasswd is part of the package whois and implemented in mkpasswd.c , which is actually just a sophisticated wrapper around the crypt() function in glibc declared in unistd.h . crypt() takes two arguments, password and salt. The password is "test" in this case; the salt is prefixed with "$6$" for the SHA-512 hash (see SHA-crypt ), so "$6$Zem197T4" is passed to crypt(). Maybe you noticed the -R option of mkpasswd , which determines the number of rounds. In the document you'll find a default of 5000 rounds. This is the first hint why the result would never be equal to the simple concatenation of salt and password: it's not hashed only once. Actually, if you pass -R 5000 you get the same result. In this case "$6$rounds=5000$Zem197T4" is passed to crypt() and the implementation in glibc (which is the libc of Debian/Ubuntu) extracts the method and number of rounds from this. What happens inside crypt() is more complicated than just computing a single hash, and the result is base64 encoded in the end. That's why the result you showed contains all kinds of characters after the last '$' and not only [0-9a-f] as in the typical hex string of a SHA-512 hash. The algorithm is described in detail in the already mentioned SHA-Crypt document. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80848/"
]
} |
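Since mkpasswd is just a wrapper around glibc's crypt(), the result can be cross-checked from any language that exposes it; a quick sketch with Python's crypt module (on a glibc system this should print the exact same string as mkpasswd above):
$ python -c 'import crypt; print(crypt.crypt("test", "$6$Zem197T4"))'
$6$Zem197T4$oCUr0iMuvRJnMqk3FFi72KWuLAcKU.ydjfMvuXAHgpzNtijJFrGv80tifR1ySJWsb4sdPJqxzCLwUFkX6FKVZ0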
160,256 | Let's say I have a bash script that acts as a config file for another bash script:
config.sh:
verbose=yes
echo "Malicious code!"
name=test
script.sh:
source config.sh
echo "After sourcing: verbose='$verbose', name='$name'"
The problem is, this isn't very secure, as anything put in config.sh gets run:
$ ./script.sh
Malicious code!
After sourcing: verbose='yes', name='test'
To make it more secure, I thought I'd grep out assignment operations and only execute those. I would accomplish this by passing source a "here document":
script.sh:
source <<EOF
$(grep -P '^\s*\w+=' test.sh)
EOF
echo "After sourcing: verbose='$verbose', name='$name'"
(Yes, I know the regex isn't that strong; it's just a placeholder.) Sadly, source doesn't seem to play well with here docs:
./script.sh: line 1: source: filename argument required
source: usage: source filename [arguments]
After sourcing: verbose='', name=''
Obviously I could do any number of things to get config data from a file, and that's likely more secure anyway. But I'm still left with this itch; I want to figure out if what I've tried can work. Any suggestions? | source needs a filename; you can't redirect input to it. On my system, I was able to use process substitution instead: source <( grep = test.sh ) Replace = with the appropriate regular expression. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49491/"
]
} |
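If the goal is really security rather than syntax, a stricter variant never executes anything from the file at all; this bash sketch whitelists the allowed keys (verbose and name, matching the example) and assigns them with printf -v:
while IFS='=' read -r k v; do
    case $k in
        verbose|name) printf -v "$k" '%s' "$v" ;;  # only whitelisted keys are set
    esac
done < config.sh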
160,268 | There is a SLES 11 machine. The users log in via SSH and pubkey (mixed: some users use a password, some use an SSH key). The sshd_config has:
UsePAM yes
PasswordAuthentication yes
PubkeyAuthentication yes
The problem: If the password expires for a user that uses pubkey login, then the user will be prompted to change the password. The question: How can we set the PAM or sshd config to allow users to log in if they have a valid SSH key and their password has expired? - Without popping up "change your password". UPDATE#1: The solution can't be: "UsePAM no"
SERVER:~ # cat /etc/pam.d/sshd
#%PAM-1.0
auth     requisite  pam_nologin.so
auth     include    common-auth
account  requisite  pam_nologin.so
account  include    common-account
password include    common-password
session  required   pam_loginuid.so
session  include    common-session
SERVER:~ #
UPDATE#2: The solution can't be: set the users' passwords to never expire UPDATE#3:
SERVER:/etc/pam.d # cat common-account
#%PAM-1.0
...
account required pam_unix2.so
account required pam_tally.so
SERVER:/etc/pam.d # | The order of operations that causes the expired password prompt is as follows: SSH runs the PAM account stage, which verifies that the account exists and is valid. The account stage notices that the password has expired, and lets SSH know. SSH performs key-based authentication. It doesn't need PAM for this, so it doesn't run the auth stage. It then sets up the SSH login session and runs the PAM session stage. Next, SSH remembers that PAM told it the password had expired, prints a warning message, and asks PAM to have the user change the password. SSH then disconnects. All of this is SSH's doing, and I don't see any SSH options to configure this behavior. So unless you want to build a custom version of SSH and/or PAM, the only option I see is to prevent PAM from reporting the expired password to SSH. If you do this, it will disable expired password checks over SSH entirely , even if the user is logging in over SSH with a password. Other (non-SSH) methods of login will still check password expiration. Your current pam.d/sshd file has an account include common-account entry. I presume there's a common-account file which contains a reference to pam_unix.so . This is the line that checks for an expired password. You probably don't want to touch the common-account file itself, since it's used for other login methods. Instead, you want to remove the include from your pam.d/sshd file. If there are other functions in common-account besides pam_unix.so , you probably want to put them directly into pam.d/sshd . Finally, remember that this is a modification to the security of your system and you shouldn't just blindly trust me to give you good advice. Read up on how PAM works if you're unfamiliar with it. Some starting places might be man 7 PAM , man 5 pam.conf , and man 8 pam_unix . | {
"source": [
"https://unix.stackexchange.com/questions/160268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86335/"
]
} |
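A sketch of what the edited account section of /etc/pam.d/sshd could look like on this SLES box, following the answer's advice (pam_tally is inlined from the common-account shown above; the expiry-checking pam_unix2 line is the one left out - test on a non-production host first):
account requisite pam_nologin.so
# account include common-account      <- removed so sshd skips the expiry check
account required  pam_tally.so        # kept: it was in common-account too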
160,280 | I need to download this file http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz using wget. I use the command:
wget http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz
The file gets downloaded, but when I try to untar it I get this:
tar -zxvf jdk-7u67-linux-x64.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
So I use the file command to check on the file and I get this:
file jdk-7u67-linux-x64.tar.gz
jdk-7u67-linux-x64.tar.gz: HTML document, ASCII text, with very long lines, with CRLF line terminators
I'm on Ubuntu 14.04. Any ideas? | Check the file size. You're probably actually getting back HTML. Oracle does not give you the JDK download unless you check the checkbox accepting their terms. (If you look at the response headers, you're probably getting back Content: text/html .) You can accept the terms by providing the following header: --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86198/"
]
} |
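Putting the answer's flags into the full command, with the URL from the question:
$ wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" \
    http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz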
160,300 | The input file that needs to be edited is as below (can have more rows):
bundle_id target_id length eff_length tot_counts uniq_counts est_counts eff_counts ambig_distr_alpha ambig_distr_beta fpkm fpkm_conf_low fpkm_conf_high solvable tpm
1 intron_FBgn0035847:4_FBgn0035847:3 61 0 0 0 0 0 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 F 0.00E+00
2 intron_FBgn0032515:2_FBgn0032515:4 72 0 0 0 0 0 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 F 0.00E+00
3 intron_FBgn0266486:5_FBgn0266486:4 58 0 0 0 0 0 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 F 0.00E+00
4 intron_FBgn0031359:10_FBgn0031359:7 4978 1430.739479 91 0 30.333333 105.539363 1.00E+00 1.00E+00 6.30E+00 1.77E+00 1.08E+01 F 1.42E+01
4 intron_FBgn0031359:10_FBgn0031359:8 4978 1430.739479 91 0 30.333333 105.539363 1.00E+00 1.00E+00 6.30E+00 1.77E+00 1.08E+01 F 1.42E+01
4 intron_FBgn0031359:10_FBgn0031359:9 4978 1430.739479 91 0 30.333333 105.539363 1.00E+00 1.00E+00 6.30E+00 1.77E+00 1.08E+01 F 1.42E+01
536 intron_CR31143:1_CR31143:2 40 0 0 0 0 0 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 F 0.00E+00
For each ID in the 2nd column, intron_XXXXXXXX:X_XXXXXXXX:X , I want to extract the string between intron_ and the 1st : (the string in between usually, but not always, starts with FBgn). Then I have a list as follows (one column for the FBgn and the other column for the corresponding name I want the FBgn to be converted into):
## FlyBase Gene Mapping Table
## Generated: Fri Dec 20 12:37:29 2013
## Using datasource: dbi:Pg:dbname=fb_2014_01_reporting;host=flysql9;port=5432...
FBgn0035847 mthl7
FBgn0032515 loqs
FBgn0266486 CG45085
FBgn0031359 CG18317
Then I want to search for the extracted string in the list's 1st column. If the extracted string has a corresponding value in the 2nd column, I want to replace the whole ID intron_FBgnXXXXXX:X_FBgnXXXXXX:X with the corresponding name in the 2nd column. If the extracted string does not exist in the 1st column, I want to replace the whole ID intron_XXXXXXXX:X_XXXXXXXX:X with the extracted string. I have a script as follows:
ref="gene_map_table_fb_2014_01_short.tsv"
target="HC25_LNv_ZT02_intron_results.txt"
output="temptemp.txt"
declare -A map
while read line
do
if [[ ! -z "$line" ]] && [[ ! "$line" =~ ^#.* ]]
then
key=$(echo "$line" | cut -f 1)
value=$(echo "$line" | cut -f 2)
map[$key]=$value
fi
done < $ref
while read line
do
key=$(echo "$line" | sed -n 's/.*_\([^\.]*\)\:.*/\1/p' | head -1)
if [ ! -z "$key" ]
then
echo "$line" | sed 's/intron_[^[:space:]]*/'${map[$key]}'/g' >> $output
else
echo "$line" | sed 's/intron_[^[:space:]]*/'$key'/g' >> $output
fi
done < $target
Everything seems to work fine except that the output file lacks the lines whose ID does not start with FBgn. | Check the file size. You're probably actually getting back HTML. Oracle does not give you the JDK download unless you check the checkbox accepting their terms. (If you look at the response headers, you're probably getting back Content: text/html .) You can accept the terms by providing the following header: --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86959/"
]
} |
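A hedged awk sketch of the lookup-and-replace the question describes (it assumes both files are tab-separated, consistent with the script's use of cut -f, and is untested against the real data):
awk -F'\t' -v OFS='\t' '
NR==FNR { if ($0 !~ /^#/ && NF >= 2) map[$1] = $2; next }  # load the FlyBase table
{
    if (match($2, /^intron_[^:]+/)) {
        key = substr($2, 8, RLENGTH - 7)        # text between "intron_" and the first ":"
        $2 = (key in map) ? map[key] : key      # mapped name if known, else the key itself
    }
    print
}' gene_map_table_fb_2014_01_short.tsv HC25_LNv_ZT02_intron_results.txt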
160,309 | In my program, I have several threads that are started with the process and that remain until the program ends. They will meet different loads during the lifetime of the app, and at times they will all run at 100%. By default the Linux thread scheduler will change affinity on a multi-core system for these threads quite frivolously, IMO. When I look at the bouncing graphs in my graphical process monitor (the one in gnome) I can't help but think that this constitutes some kind of overhead. EDIT: To clarify, even for very stable loads, the threads are scheduled on different cores, and even though it is not visible in the image provided it is at times very clear that the core selected for each thread is "swapped" frequently. Will not this constant change in affinity affect performance adversely? In that case, why is it implemented this way? What benefits does the changing affinity have? My guesses are: Wear levelling - Don't put all the work on one core Unintentional - Some smart algorithm tries to optimize usage depending on load and it so happens that the overhead of changing affinity is not significant enough to warrant keeping the affinity over changing it. | Check the file size. You're probably actually getting back HTML. Oracle does not give you the JDK download unless you check the checkbox accepting their terms. (If you look at the response headers, you're probably getting back Content: text/html .) You can accept the terms by providing the following header: --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33357/"
]
} |
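If core migration does turn out to hurt, affinity can be pinned explicitly with taskset (PID 1234 is a placeholder):
$ taskset -cp 2 1234       # bind existing PID 1234 to CPU core 2
$ taskset -c 0-1 ./myapp   # launch a program restricted to cores 0 and 1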
160,356 | Is there a key combination for bash to auto-complete a command from the history? In ipython and matlab, for example, this is achieved by pressing the up arrow after typing a few characters. | First of all, hitting tab in bash is even better since it autocompletes all executables in your PATH irrespective of whether they're in the history. That said, there are various ways of getting a command from your history: Use its number. If you know that the command you want was 3 commands ago, you can just run !-3 . That will re-execute the command you ran three commands ago. Search for it. Type Ctrl r and start typing any text. The first command from the history that matches your text will be shown and hitting enter will execute it. Hit ▲ (up arrow). That will bring up the last command; press it again and you will go up your command history. When you find the one you want, hit enter . Add these lines to your ~/.inputrc (create the file if it doesn't exist):
"\e[A": history-search-backward
"\e[B": history-search-forward
To immediately load the file, run bind -f ~/.inputrc ( source ). Now, type the first few characters of one of the commands you've previously run and hit ▲ . The first command from your history that starts with those characters will be shown. Hit ▲ again to see the rest and hit enter when you've found the one you want. Use the history command. As @Isaac explained, that will list all of the commands stored in your history file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/160356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77305/"
]
} |
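The same bindings can also be tried out without editing ~/.inputrc at all; these take effect for the current bash session only:
$ bind '"\e[A": history-search-backward'
$ bind '"\e[B": history-search-forward'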
160,401 | The midnight commander is not installed on my Debian Linux, so I tried to download it. I ran apt-get install mc in the terminal but it says the package is not found . I did this task as root and the internet connection is perfect. I don't know why I can't download and install MC. Does anybody know? My /etc/apt/sources.list :
## deb cdrom: [Debian GNU/Linux 7.6.0 _Wheezy_ - Offical amd64 CD Binary-1 20140712-14:11]/ wheezy main
deb cdrom: [Debian GNU/Linux 7.6.0 _Wheezy_ - Offical amd64 CD Binary-1 20140712-14:11]/wheezy main
deb http://security.debian.org/ wheezy/update main
deb-src http://security.debian.org/ wheezy/updates main
# wheezy-updates, previously known as 'volatile'
# A network mirror was not selected during install. The following entries
# are provided as examples, but you should amend them as appropriate
# for your mirror of choice.
#
# deb http://ftp.debian.org/debian/ wheezy-updates main
# deb-src http://ftp.debian.org/debian/ wheezy-updates main | You're missing the main Debian repositories; your sources only point to the security repo. Uncomment the last lines in /etc/apt/sources.list . Change this:
# deb http://ftp.debian.org/debian/ wheezy-updates main
# deb-src http://ftp.debian.org/debian/ wheezy-updates main
to this:
deb http://ftp.debian.org/debian/ wheezy main
deb-src http://ftp.debian.org/debian/ wheezy main
That, however, will give you access to the generic repos; you will get much better performance if you choose one of your local mirrors. So, either choose one close to you from this list or use netselect-apt :
sudo apt-get install netselect-apt
sudo netselect-apt -n wheezy
sudo cp ./sources.list /etc/apt/sources.list
No matter what you choose to do, remember to refresh your sources by running sudo apt-get update | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74712/"
]
} |
160,429 | From this link I get the following about exec bash builtin command: If command is supplied, it replaces the shell without creating a new process. How does it exactly replace the shell (i.e. how does it work internally)? Does the exec*() system call work the same? | Yes, the exec builtin ultimately makes use of one of the exec*() family of system calls. So does running commands normally. It's just that when you use exec , it doesn't use the fork() system call first to create a new process, and the result is that the new command replaces the shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
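A tiny demonstration that exec really does replace the shell rather than spawning a child; in the first command the echo never runs because the shell no longer exists after the execve():
$ bash -c 'exec /bin/true; echo "never printed"'
$ bash -c '/bin/true; echo "still here"'
still here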
160,431 | Let's assume I have a ZIP file whose content may look like this:
# file: with_out_dir.zip
file1.txt
file2.cpp
file3.js
some_sub_dir/
- file_in_subdir.txt
file4.xml
but it may also look like this:
# file: with_dir.zip
archive/
- file1.txt
- file2.cpp
- file3.js
- some_sub_dir/
-- file_in_subdir.txt
- file4.xml
and now I would have my bash script extract these files, but in either case the contents should be extracted to /var/sample/ , so it will look like this:
# expected output
/var/
- sample/
-- file1.txt
-- file2.cpp
-- file3.js
-- some_sub_dir/
--- file_in_subdir.txt
-- file4.xml
What's the best way to do that via a bash script? At the moment I am using unzip to extract zip files, but I'm open to any other command line tool if it would make this easier. | Yes, the exec builtin ultimately makes use of one of the exec*() family of system calls. So does running commands normally. It's just that when you use exec , it doesn't use the fork() system call first to create a new process, and the result is that the new command replaces the shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58492/"
]
} |
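A hedged bash sketch of the normalization the question asks for: extract to a scratch directory, and if the archive turned out to contain a single wrapper directory, lift its contents (archive.zip and /var/sample are placeholders; dotglob makes the globs move dotfiles too):
#!/usr/bin/env bash
shopt -s dotglob nullglob
dest=/var/sample
tmp=$(mktemp -d)
unzip -q archive.zip -d "$tmp"
entries=("$tmp"/*)
if (( ${#entries[@]} == 1 )) && [[ -d ${entries[0]} ]]; then
    mv "${entries[0]}"/* "$dest"/   # single top-level dir: lift its contents
else
    mv "$tmp"/* "$dest"/            # contents were already at the top level
fi
rm -rf "$tmp"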
160,490 | Is there any way to have ack sort the results found by date of modification (ideally showing the date next to the result)? It doesn't look like ack has a date option, but just in case. If this isn't possible with ack , how about grep or a combination of tools? | Neither ack nor grep has any notion of a file's modification date. For that you'll need to generate the list of files first, and then sort them afterwards. You can use xargs to feed the output of either ack or grep into another command which will provide the modification dates. For the modification dates you can use stat . Example:
$ grep -Rl awk * | xargs -n 1 stat --printf "%y ------ %n\n"
2013-11-12 10:06:16.000000000 -0500 ------ 100855/tst_ccmds.bash
2013-11-13 00:32:11.000000000 -0500 ------ 100911/cmd.bash
2013-11-23 03:16:17.000000000 -0500 ------ 102298/cmd.bash
2013-12-14 20:06:04.467708173 -0500 ------ 105159/cmd.txt
2013-12-16 03:20:48.166016538 -0500 ------ 105328/cmds.txt
2013-01-14 14:17:39.000000000 -0500 ------ 106932/red5-1.0.1.tar.gz
NOTE: This method will only show you the names of the files that matched your query along with the modification date. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37128/"
]
} |
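Adding the actual sort the question asked for is one more pipe stage, since stat's ISO timestamps sort correctly as plain text (the --print0/-0 pairing, supported by newer ack versions, keeps odd filenames safe):
$ ack -l --print0 pattern | xargs -0 stat --printf '%y %n\n' | sort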
160,497 | I am writing a script which needs to calculate the number of characters in a command's output in a single step . For example, using the command readlink -f /etc/fstab should return 10 because the output of that command is 10 characters long. This is already possible with stored variables using the following code:
variable="somestring";
echo ${#variable};
# 10
Unfortunately, using the same formula with a command-generated string does not work:
${#(readlink -f /etc/fstab)};
# bash: ${#(readlink -f /etc/fstab)}: bad substitution
I understand it is possible to do this by first saving the output to a variable:
variable=$(readlink -f /etc/fstab);
echo ${#variable};
But I would like to remove the extra step. Is this possible? Compatibility with the Almquist shell (sh) using only built-in or standard utilities is preferable. | With GNU expr :
$ expr length + "$(readlink -f /etc/fstab)"
10
The + there is a special feature of GNU expr to make sure the next argument is treated as a string even if it happens to be an expr operator like match , length , + ... The above will strip any trailing newline from the output. To work around it:
$ expr length + "$(readlink -f /etc/fstab; printf .)" - 2
10
We subtract 2 because of the final newline of readlink and the character . we added. With a Unicode string, expr does not seem to work, because it returns the length of the string in bytes instead of the character count (see line 654):
$ LC_ALL=C.UTF-8 expr length ăaa
4
So, you can use:
$ printf "ăaa" | LC_ALL=C.UTF-8 wc -m
3
POSIXLY:
$ expr " $(readlink -f /etc/fstab; printf .)" : ".*" - 3
10
The space before the command substitution prevents the command from crashing when the string starts with - , so we need to subtract 3. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87443/"
]
} |
160,523 | Is there a way I can open a new 'screen' session on my RHEL box as a non-root user? When I try to open a new screen using the 'screen' command as a non-root user, it fails and I get the following message:
Cannot open your terminal '/dev/pts/2' - please check.
I researched a little bit and found people suggesting to change the permissions on /dev/pts to grant the non-root user (who is trying to open the screen) read/write access. Though it may work, it does not look like a neat solution. Is there a 'legal' way that allows a non-root user to open a screen session? Edited: I have this issue on my RHEL 5.5, 6.2 and 6.5 machines. The screen version on all these boxes is 'Screen version 4.00.03 (FAU) 23-Oct-06'. P.S:- I know that I can open a screen session as root and 'su' to start my command/process, but that is not what I am looking for. | This is a known problem if you ssh somewhere as root and then su to become a normal user:
$ ssh root@server
# su -l anthon
$ screen
Cannot open your terminal '/dev/pts/3' - please check.
It is e.g. described in these posts from 2005. The solution is to directly log in as the user you want the screen session to run as. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68296/"
]
} |
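When su really can't be avoided, a commonly cited workaround is to allocate a fresh pseudo-terminal owned by the target user before starting screen (the script utility does exactly that as a side effect):
$ su -l someuser
$ script /dev/null    # spawns a shell on a pty that someuser can open
$ screen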
160,527 | Every time I do git pull or git reset , git resets changes to permissions and ownership I made. See for yourself:
#!/usr/bin/env bash
rm -rf 1 2
mkdir 1
cd 1
git init
echo 1 > 1 && git add 1 && git ci -m 1
git clone . ../2
cd $_
chmod 0640 1
chgrp http 1
cd ../1
echo 12 > 1 && git ci -am 2
cd ../2
stat 1
git pull
stat 1
The output:
$ ./1.sh 2>/dev/null | grep -F 'Access: ('
Access: (0640/-rw-r-----) Uid: ( 1000/ yuri) Gid: ( 33/ http)
Access: (0664/-rw-rw-r--) Uid: ( 1000/ yuri) Gid: ( 1000/ yuri)
Is there a way to work around it? I want to make some files/directories writable by the web server. | This sounds like the user you're running as has the default group set to yuri . You can confirm this like so:
$ id -a
uid=1000(saml) gid=1000(saml) groups=1000(saml),10(wheel),989(wireshark)
The UID of your account is this: uid=1000(saml) , whereas the default group is gid=1000(saml) , and any secondary groups are listed thereafter. NOTE: If you want the git clone to have specific ownership, then you have at least 2 options. Option #1 Set up a parent directory with the permissions you want, like so:
$ mkdir topdir
$ chgrp http topdir
$ chmod g+s topdir
$ cd topdir
$ git clone ....
This forces the directory topdir to make any child directories underneath it have the group http applied. This will work by and large, but can lead to problems: if you move files into this git clone workspace, those files will not have their groups enforced by the changes made above. Option #2 Prior to doing work, change your default group to http like so:
$ newgrp http
$ git clone ...
This method will force any new files created to have their group set to http instead of your normal default group of yuri , but this will only work so long as you remember to do a newgrp prior to working in this workspace. Other options If neither of these seems acceptable you can try using ACLs instead on the git workspace directory. These are discussed in multiple Q&A's on this site, such as in this Q&A titled: Getting new files to inherit group permissions on Linux . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/160527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29867/"
]
} |
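A sketch of the ACL route the answer alludes to (requires the acl package and a filesystem mounted with ACL support; /path/to/clone is a placeholder):
$ setfacl -R -m g:http:rwX /path/to/clone   # grant the group on everything already there
$ setfacl -d -m g:http:rwX /path/to/clone   # default ACL so new entries inherit the grant
$ getfacl /path/to/clone                    # verify both entries took effect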
160,539 | In a script I use find to collect some files in the current directory, as in:
$ find . -name "*.h"
./foo.h
Now I'd like it to just output foo.h , without the ./ prefix. I thought that the empty string "" denoted the current directory in shell commands. But this gives:
$ find "" -name "*.h"
find: ftsopen: No such file or directory
So I was wrong. Now my question is when/how/where does an "empty string" denote the current dir in commands that expect a filename or a pathname? Is there a neat and enlightening explanation? A side question is whether the find nitpicking above can be solved simply, without string manipulation a la ${parameter#word} or cut or sed . | A long time ago (in 7th edition , 32V , 4.2BSD , 4.3BSD ), at the system-call level a zero-length pathname denoted the current working directory (when used for lookup; it was disallowed when trying to create or delete a file or directory). In System III , it was an error to use a zero-length pathname under all circumstances, and the POSIX standard has this to say about pathname resolution: A null pathname shall not be successfully resolved. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42406/"
]
} |
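As for the side question, GNU find can drop the leading "./" itself, with no post-processing; %P prints each path with the starting point stripped (GNU-only, not POSIX):
$ find . -name '*.h' -printf '%P\n'
foo.h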
160,595 | I am working on an embedded product running Linux. The device uses /dev/ttyO0 as its console. On boot it automatically starts a program which takes input from /dev/ttyO0 (serial) and prints some information on device status to the serial line. I want to redirect /dev/ttyO0 to something I can reach through the network (such as ssh). I am able to connect by telnet, but I don't see any of the information seen on serial. What can I do for this purpose? | You don't. Use netcat nc instead. It will do what you want, whereas telnet will not.
(echo helo ole.tange.dk;
 echo mail from: '<[email protected]>';
 echo rcpt to: '<[email protected]>';
 echo data;
 echo Subject: This is an email;
 echo;
 echo test;
 echo .;
 echo quit
) | nc smtp.server.example.com 25 | grep 250 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/160595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79842/"
]
} |
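For the serial-to-network goal itself, socat (or the purpose-built ser2net daemon) is the usual tool; a hedged sketch, with port, baud rate and device to be adjusted to the hardware:
$ socat TCP-LISTEN:5555,reuseaddr,fork /dev/ttyO0,raw,b115200,echo=0
# then, from another machine on the network:
$ nc device-ip 5555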
161,674 | While I understand the greatness of udev and appreciate the developers' effort, I was simply wondering if there is an alternative to it. For instance, I might imagine there should be a way to make a startup script that creates most of the device nodes, which on my system (no changing hardware) are mostly the same anyway. The benefit or reason I would like to skip udev would be the same as for skipping dbus , namely reducing complexity and thereby increasing my chances of setting up the system more safely. | There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev . Another option would be to attempt to use udev 's predecessor devfsd . Finally, you can always create all the device files you need with mknod . Note that with the latter there is no need to create everything at boot time since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (e.g. a USB stick). I believe the standard approach in this era was to have every device file you could reasonably need already created under /dev (i.e. a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was 2002; I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd ( udev is now part of systemd , which suggests a strong dependency). My advice is stick with udev ;) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/161674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
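The static-node approach in miniature; these major/minor numbers are the conventional Linux ones (the authoritative list is Documentation/admin-guide/devices.txt in the kernel tree):
# mknod -m 666 /dev/null    c 1 3
# mknod -m 600 /dev/console c 5 1
# mknod -m 660 /dev/sda     b 8 0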
161,675 | When I try to install Jekyll on Elementary OS Luna with the command sudo gem install jekyll --no-rdoc --no-ri I get the following error:
-- rbconfig (LoadError)
from /usr/lib/ruby/vendor_ruby/1.8/rubygems.rb:29
from /usr/bin/gem:8:in `require'
from /usr/bin/gem:8
Can anybody help me make sense of the error and maybe suggest a fix? | There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev . Another option would be to attempt to use udev 's predecessor devfsd . Finally, you can always create all the device files you need with mknod . Note that with the latter there is no need to create everything at boot time since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (e.g. a USB stick). I believe the standard approach in this era was to have every device file you could reasonably need already created under /dev (i.e. a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was 2002; I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd ( udev is now part of systemd , which suggests a strong dependency). My advice is stick with udev ;) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/161675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87559/"
]
} |