source_id: int64 (1 to 74.7M)
question: string (lengths 0 to 40.2k)
response: string (lengths 0 to 111k)
metadata: dict
291,282
Is there a way to make tree not show files that are ignored in .gitignore ?
This might help: list git ignored files in an almost-compatible way for tree's filter:

function tree-git-ignore {
    # tree respecting gitignore
    local ignored=$(git ls-files -ci --others --directory --exclude-standard)
    local ignored_filter=$(echo "$ignored" \
        | egrep -v "^#.*$|^[[:space:]]*$" \
        | sed 's~^/~~' \
        | sed 's~/$~~' \
        | tr "\\n" "|")
    tree --prune -I ".git|${ignored_filter: : -1}" "$@"
}
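Note: newer releases of tree (2.0.0 and later) ship a built-in --gitignore option, so if your distribution has a recent enough version you can skip the wrapper entirely:

tree --gitignore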
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39275/" ] }
291,285
The history command lists out all the history for the current session. Like:

1 ls
2 cd /root
3 mkdir something
4 cd something
5 touch afile
6 ls
7 cd ..
8 rm something/afile
9 cd ..
10 ls
11 history

In order to search items of interest, I can pipe history with grep like

history | grep ls
1 ls
6 ls
10 ls

I can also view the last 3 commands like:

history 3
11 history
12 history | grep ls
13 history 3

But how do I get a specific range of history? For example something like:

history range 4 7
4 cd something
5 touch afile
6 ls
7 cd ..
Instead of history, you can use fc, which allows you to select a range:

fc -l 4 7
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/291285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176179/" ] }
291,302
I have stored a password as plain text in a txt file. Now I want to write a script that reads the plain text from the txt file, encrypts it, and can later decrypt it.
Encrypting a password is useless when you can't keep it encrypted. The instant you decrypt it, it's vulnerable again. No matter how cryptographically hard they are, the encryption and decryption methods are right there for anyone to see and copy-paste anyway. That just makes it sillier. chmod will be a much better defense against snooping than a rube goldberg machine, but with some work you might be able to avoid using stored passwords at all, which would be a very good thing. Because: retrievably stored passwords are security hot potatoes and to be avoided. They're such a bad idea that sudo, su, ssh, scp, and sftp don't just avoid them, they're all specifically designed to stop you from using them too. If you plan on having it prompt you for a password, which has limited use, but I'll entertain the possibility for operations automation or something, you can use a utility like openssl.

$ echo foobar | openssl enc -aes-128-cbc -a -salt -pass pass:asdffdsa
U2FsdGVkX1/lXSnI4Uplc6DwDPPUQ/WjHULJoKypTO8=
$ echo U2FsdGVkX1/lXSnI4Uplc6DwDPPUQ/WjHULJoKypTO8= | openssl enc -aes-128-cbc -a -d -salt -pass pass:asdffdsa
foobar

Alternatively you can do this,

$ touch pass.txt && echo foobar > pass.txt
$ openssl bf -a -salt -in pass.txt -out secret && rm -f pass.txt
enter bf-cbc encryption password:
Verifying - enter bf-cbc encryption password:
$ openssl bf -d -a -in secret -out pass.txt
enter bf-cbc decryption password:
$ cat pass.txt
foobar
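Note that on newer OpenSSL releases (1.1.1 and later) enc warns that its default key derivation is deprecated; if you hit that warning, adding -pbkdf2 with an iteration count is the usual remedy, for example:

$ echo foobar | openssl enc -aes-128-cbc -a -salt -pbkdf2 -iter 100000 -pass pass:asdffdsa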
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176200/" ] }
291,315
Searching through a past log file with something like this:

cat /path/to/logfile | grep -iEw 'some-ip-address-here|corresponding-mac-address-here'

This gives me all the past log lines up to now so I can see what has happened. Now I also want to see what's going on live, so I need to exchange cat with tail -f, giving me this:

tail -f /path/to/logfile | grep -iEw 'some-ip-address-here|corresponding-mac-address-here'
You can use !!:* to refer to all the words but the zeroth of the last command line. !! refers to the previous command, : separates the event specification from the word designator, * refers to all the words but the zeroth. This is from the HISTORY EXPANSION section of bash(1).

wieland@host in ~» cat foo | grep bar
bar
wieland@host in ~» tail -f !!:*
tail -f foo | grep bar
bar

You could also use quick substitution where ^string1^string2^ repeats the last command, replacing string1 with string2:

wieland@host in ~» cat foo | grep bar
bar
wieland@host in ~» ^cat^tail -f
tail -f foo | grep bar
bar
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52374/" ] }
291,319
Current Environment : mysql> show variables like "%version%";+-------------------------+------------------------------+| Variable_name | Value |+-------------------------+------------------------------+| innodb_version | 5.7.13 || protocol_version | 10 || slave_type_conversions | || tls_version | TLSv1,TLSv1.1 || version | 5.7.13 || version_comment | MySQL Community Server (GPL) || version_compile_machine | x86_64 || version_compile_os | Linux |+-------------------------+------------------------------+8 rows in set (0.01 sec) Password Change command user : mysql> update user set password=PASSWORD("XXXX") where user="root";ERROR 1054 (42S22): Unknown column 'password' in 'field list' Am I missing something?
In MySQL 5.7, the password field in the mysql.user table was removed; the field is now named authentication_string. First choose the database:

mysql> use mysql;

And then show the tables:

mysql> show tables;

You will find the user table; look at its fields:

mysql> describe user;

You will see there is no field named password; the password field is now authentication_string. So, just do this:

update user set authentication_string=password('XXXX') where user='root';

As suggested by @Rui F Ribeiro, alternatively you can run:

mysql> SET PASSWORD FOR 'root' = PASSWORD('new_password');
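For completeness, the method the MySQL 5.7 documentation itself recommends is ALTER USER, which avoids touching the grant table directly (available from 5.7.6 on):

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password';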
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/291319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176204/" ] }
291,404
With Bash's source it is possible to execute a script without an execution bit set. This is documented and expected behaviour, but isn't this against the purpose of an execution bit? I know that source doesn't create a subshell.
source, or the equivalent but standard dot ., does not execute the script; it reads the commands from the script file and then executes them, line by line, in the current shell environment. There's nothing against the use of the execution bit, because the shell only needs read permission to read the content of the file. The execution bit is only required when you run the script directly. In that case the shell will fork() a new process and then use the execve() function to create a new process image from the script, which is required to be a regular, executable file.
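A quick way to see the difference for yourself (a minimal sketch; script.sh is just a placeholder name):

$ printf 'echo hello from the script\n' > script.sh
$ chmod -x script.sh
$ ./script.sh     # fails with "Permission denied": running it checks the execute bit
$ . ./script.sh   # works: only read permission is needed
hello from the script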
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/291404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166118/" ] }
291,412
On my local Linux machine, I'm using termite (VTE-based) . It comes with its own terminfo file ( xterm-termite ), which is not distributed with ncurses by default. I try to connect to a remote FreeBSD server. Unfortunately, FreeBSD by default only uses the older termcap format. I haven't been able to find a way to convert terminfo to termcap. So, while I can compile a version of ncurses which includes terminfo support from the ports ( devel/ncurses ), ncurses applications will link to the system-supplied lib by default. This results in apps failing to launch: $ echo $TERMxterm-termite$ toe | grep termitexterm-termite VTE-based terminal$ tmuxopen terminal failed: can't find terminfo database$ htopError opening terminal: xterm-termite. ( toe lists terminfo entries) How can I make FreeBSD use terminfo by default, or force applications to use the port-supplied ncurses lib, or at least convert terminfo to termcap data? (I'm aware of the workarounds like setting TERM to a safe value like xterm-256color , but I think that defeats the purpose of terminfo)
The conventional way to convert terminfo to termcap is with infocmp -Cr The infocmp option -C tells infocmp to use termcap names, and the -r option tells it to translate terminfo capabilities to termcap format. Some (such as the expressions used in sgr ) will not translate, and infocmp may leave commented-out capabilities if there is enough space. By "enough space", refers to the fact that real termcap applications allow only 1023 bytes in a description. FreeBSD uses ncurses underneath, but some applications make assumptions about the entry length. FreeBSD however does have a termcap file which is independent of ncurses. There also is a "port" for ncurses, which some find useful. By the way, you may be referring to this: termite/termite.terminfo . If you translated it, you might see something like # vim: noet:ts=8:sw=8:sts=0# (untranslatable capabilities removed to fit entry within 1023 bytes)# (sgr removed to fit entry within 1023 bytes)# (acsc removed to fit entry within 1023 bytes)# (terminfo-only capabilities suppressed to fit entry within 1023 bytes)xterm-termite|VTE-based terminal:\ :am:hs:km:mi:ms:xn:\ :co#80:it#8:li#24:\ :AL=\E[%dL:DC=\E[%dP:DL=\E[%dM:DO=\E[%dB:IC=\E[%d@:\ :K2=\EOE:LE=\E[%dD:RI=\E[%dC:SF=\E[%dS:SR=\E[%dT:\ :UP=\E[%dA:ae=\E(B:al=\E[L:as=\E(0:bl=^G:bt=\E[Z:cd=\E[J:\ :ce=\E[K:cl=\E[H\E[2J:cm=\E[%i%d;%dH:cr=^M:\ :cs=\E[%i%d;%dr:ct=\E[3g:dc=\E[P:dl=\E[M:do=^J:\ :ds=\E]2;\007:ec=\E[%dX:ei=\E[4l:fs=^G:ho=\E[H:im=\E[4h:\ :is=\E[!p\E[?3;4l\E[4l\E>:k1=\EOP:k2=\EOQ:k3=\EOR:\ :k4=\EOS:k5=\E[15~:k6=\E[17~:k7=\E[18~:k8=\E[19~:\ :k9=\E[20~:kD=\E[3~:kI=\E[2~:kN=\E[6~:kP=\E[5~:kb=\177:\ :kd=\EOB:ke=\E[?1l\E>:kh=\EOH:kl=\EOD:kr=\EOC:\ :ks=\E[?1h\E=:ku=\EOA:le=^H:md=\E[1m:me=\E[0m:mh=\E[2m:\ :mm=\E[?1034h:mo=\E[?1034l:mr=\E[7m:nd=\E[C:rc=\E8:sc=\E7:\ :se=\E[27m:sf=^J:so=\E[7m:sr=\EM:st=\EH:ta=^I:te=\E[?1049l:\ :ti=\E[?1049h:ts=\E]2;:ue=\E[24m:up=\E[A:us=\E[4m:\ :vb=\E[?5h\E[?5l:ve=\E[?12l\E[?25h:vi=\E[?25l:\ :vs=\E[?12;25h: There are a few errors in the terminfo entry (VTE does not support meta mode, for instance). Also, in termcap format you may notice that most of the function-keys go away (1023-byte limit). Further reading: infocmp - compare or print out terminfo descriptions infotocap - convert a terminfo description into a termcap description tctest – A Termcap Test Utility
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49094/" ] }
291,431
I have a script which compiles a program. The script first compiles the source code using the configure && make commands, then runs some tests using make test. The script also uses set -e to catch errors. Now, what I want to do is keep set -e set in the script and still continue running the script when make test encounters some errors. I have tried using make -k test to make the tests run even when some of them fail, but the failure is still caught by set -e and the script stops. I also know which tests are going to fail, so is there any way to tell the script to skip catching these errors?
make test || true

e.g.

#!/bin/sh
set -e
echo hello
make test || true
echo done

Will result in

hello
make: *** No rule to make target `test'.  Stop.
done

In this case the failure was a missing rule (no Makefile :-)) but we can see the script continues.
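Another option, if you would rather log the failure than discard it silently: set -e does not abort on a command that is tested in an if condition, so a sketch like this also works:

if ! make test; then
    echo "make test failed, continuing anyway" >&2
fi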
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291431", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
291,450
I have a question similar to another one on this site where the individual had to find a list of all users using grep or awk from /etc/passwd. That worked for me, but I've tried translating it to also find and list their home directories. I already know you can't do it in one line, so I know I would use a pipeline. I've done my research online but I can't figure out what the problem is. If I use grep and do something like the following: grep -oE '^[/*/]$' /etc/passwd ...it would probably give me an error, or it will also show me the /bin/bash fields, which is not what I want. I just need the user names and their home directories listed using grep! I'm also not sure if the * will show other forward-slashes as characters, as some home directories have more than just two /'s (forward-slashes).
Grep is really not the tool for parsing out data this way; grep is more for pattern matching and you're trying to do text processing. You would want to use awk.

awk -F":" '$7 == "/bin/false" {print "User: "$1 "Home Dir: "$6}' /etc/passwd

awk – The command
-F":" – Sets the data delimiter to :
$7 == "/bin/false" – Checks if the 7th data column is /bin/false
{print "User: "$1 "Home Dir: "$6} – Says to print the first column and sixth column in the specified format.
/etc/passwd – Is the file we're processing
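If you simply want every user together with their home directory (no shell filter at all), a minimal variant of the same idea is:

awk -F: '{print "User: "$1"  Home Dir: "$6}' /etc/passwd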
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176328/" ] }
291,456
SunOS 5.8 Directory structure /TEST/CHM CHM A file1.txt file2.txt B file3.txt C file4.txt file5.txt file6.txt If the parent directory CHM has less than 8 files/subdirectories zip it up normally.If the parent directory CHM has 8 or more files/subdirectories create a zip archive for ever 5 files.This is for testing only. In production it will be 10000 files, not 5.Parent directory CHM could have 0 to n subdirectories. #!/bin/bashset -ecd $subdir/# varsnum=8 # set 10000 in productionfor dir in $subdirdo dir=${dir%*/} # testing code only if [[ ${dir##*/} = "CHM" ]] then numfile=$(ls * | wc -l) fi if [ "$numfile" -lt "$num" ] then zip -r -6 ${dir##*/}.zip . else ls * > files split -l 5 - files < files for i in files[a-z][a-z]; do zip -6 "$i.zip" -@ < "$i" done fi # end testdoneexit zip warning: name not matched: A: zip warning: name not matched: file1.txt zip warning: name not matched: file2.txt zip warning: name not matched: B:zip error: Nothing to do! (filesaa.zip) The else part of the second if statement is failing and I'm not sure why.I'm looking to create: CHM.zip masterCHM.001.zipCHM.002.zipCHM.003.zip So I can unzip in the same order on a remote server.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88123/" ] }
291,460
Is there a simple way to search a text file (source code) for all instances of a specific integer. This should not trigger on larger numbers that happen to include the integer as a sub-string, but it can't simply exclude such lines since they could contain both cases: searching for '6'... int a=6; // foundint b=16; // not found (despite the '6' in '16')int c=6, d=16; // found I'm really looking for a command-line approach to this, but am also curious if there is a FOSS GUI-type editor that will do it.
grep -E '\b6\b' \b is a "word boundary" Edit: After pointing @nobar in the right direction, he found/pointed out the shortcut-option -w (word-regexp) in the manpage, which simplifies the above to: grep -w 6 If used a lot, you could use a function similar to wgrp(){ grep -w "$1" "$2"; } Note (to @glenn-jackman): If you don't quote "$2" here, you can use the function as a pipeline filter. But yes, then it won't work with filenames with spaces. After reading yet another great answer from @Gilles, I now propose igrp(){ grep -E "(^|[^0-9])$1($|[^0-9])" "$2"; }
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6764/" ] }
291,541
I have a growing collection of scripts which should be sourced, not run . At the moment they have the shebang #! /bin/cat but I would prefer the have them be sourced into bash when run, in the same way as I had done $ . /path/to/script.sh or $ source /path/to/script.sh But . and source are bash builtins, so is an alternative shebang line for such scripts possible?
No. By the time a shebang comes into play, you have already lost. A shebang is applied when a process is exec() 'd and typically that happens after forking, so you're already in a separate process. It's not the shell that reads the shebang, it's the kernel.
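If the goal is just to keep such scripts from being run directly, a common bash idiom is to have the script detect that it was executed rather than sourced and refuse to continue (a sketch to place near the top of the script):

if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    echo "This file must be sourced, not executed." >&2
    exit 1
fi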
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/291541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32775/" ] }
291,562
Check out:

data/tmp$ gzip -l tmp.csv.gz
         compressed        uncompressed  ratio uncompressed_name
               2846               12915  78.2% tmp.csv
data/tmp$ cat tmp.csv.gz | gzip -l
         compressed        uncompressed  ratio uncompressed_name
                 -1                  -1   0.0% stdout
data/tmp$ tmp="$(cat tmp.csv.gz)" && echo "$tmp" | gzip -l
gzip: stdin: unexpected end of file

Ok, apparently the input is not the same, but it should have been, logically. What am I missing here? Why aren't the piped versions working?
This command $ tmp="$(cat tmp.csv.gz)" && echo "$tmp" | gzip -l assigns the content of tmp.csv.gz to a shell variable and attempts to use echo to pipe that to gzip . But the shell's capabilities get in the way (null characters are omitted). You can see this by a test-script: #!/bin/shtmp="$(cat tmp.csv.gz)" && echo "$tmp" |cat >foo.gzcmp foo.gz tmp.csv.gz and with some more work, using od (or hexdump ) and looking closely at the two files. For example: 0000000 037 213 010 010 373 242 153 127 000 003 164 155 160 056 143 163 037 213 \b \b 373 242 k W \0 003 t m p . c s0000020 166 000 305 226 141 157 333 066 020 206 277 367 127 034 012 014 v \0 305 226 a o 333 6 020 206 277 367 W 034 \n \f0000040 331 240 110 246 145 331 362 214 252 230 143 053 251 121 064 026 331 240 H 246 e 331 362 214 252 230 c + 251 Q 4 026 drops a null in the first line of this output: 0000000 037 213 010 010 373 242 153 127 003 164 155 160 056 143 163 166 037 213 \b \b 373 242 k W 003 t m p . c s v0000020 305 226 141 157 333 066 020 206 277 367 127 034 012 014 331 240 305 226 a o 333 6 020 206 277 367 W 034 \n \f 331 2400000040 110 246 145 331 362 214 252 230 143 053 251 121 064 026 152 027 H 246 e 331 362 214 252 230 c + 251 Q 4 026 j 027 Since the data changes, it is no longer a valid gzip'd file, which produces the error. As noted by @coffemug, the manual page points out that gzip will report a -1 for files not in gzip'd format. However, the input is no longer a compressed file in any format, so the manual page is in a sense misleading: it does not categorize this as error-handling. Further reading: How do I use null bytes in Bash? Representing/quoting NUL on the command line @wildcard points out that other characters such as backslash can add to the problem, because some versions of echo will interpret a backslash as an escape and produce a different character (or not, depending on the treatment of escapes applied to characters not in their repertoire). For the case of gzip (or most forms of compression), the various byte values are equally likely, and since all nulls will be omitted, while some backslashes will cause the data to be modified. The way to prevent this is not to try assigning a shell variable the contents of a compressed file. If you want to do that, use a better-suited language. Here is a Perl script which can count character-frequencies, as an example: #!/usr/bin/perl -wuse strict;our %counts;sub doit() { my $file = shift; my $fh; open $fh, "$file" || die "cannot open $file: $!"; my @data = <$fh>; close $fh; for my $n ( 0 .. $#data ) { for my $o ( 0 .. ( length( $data[$n] ) - 1 ) ) { my $c = substr( $data[$n], $o, 1 ); $counts{$c} += 1; } }}while ( $#ARGV >= 0 ) { &doit( shift @ARGV );}for my $c ( sort keys %counts ) { if ( ord $c > 32 && ord $c < 127 ) { printf "%s:%d\n", $c, $counts{$c} if ( $counts{$c} ); } else { printf "\\%03o:%d\n", ord $c, $counts{$c} if ( $counts{$c} ); }}
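If you really do need to carry compressed data through a shell variable, one workaround is to keep it base64-encoded while it sits in the variable, for example:

tmp="$(base64 tmp.csv.gz)" && echo "$tmp" | base64 -d | gzip -l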
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291562", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176399/" ] }
291,570
There is a script I have been working with, and it has a command line like the one below:

mytemp=`echo ${sourcedir}|awk -F/ '{printf "/%s/tmp",$2}'`/`basename $0`-$1.$$

At the end of the command we see $$, which produces a number. When I use echo $$ in bash I also see a number, like below:

#echo $$
23019

What exactly is this number, and what is $$?
From Advanced Bash-Scripting Guide: $$ is the process ID (PID) of the script itself. $BASHPID is the process ID of the current instance of Bash. This is not the same as the $$ variable, but it often gives the same result.
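The difference shows up in subshells; a small demonstration (the PIDs printed will of course differ on your machine):

$ echo "$$ $BASHPID"       # same process: both print the shell's PID
$ ( echo "$$ $BASHPID" )   # subshell: $$ is still the parent's PID, $BASHPID is the subshell's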
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/291570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78188/" ] }
291,638
It's a common scenario. For whatever reason, the initramfs (OpenSUSE, in case it matters) has failed to find the root filesystem, so it drops you into a rescue shell. I know perfectly well what device needs to be mounted though. My question: What is the correct procedure to mount the root filesystem and continue the boot sequence? Presumably that's the whole point of the rescue console. And yet, nobody seems to have documented how you actually do this. Obviously I can mount the root filesystem somewhere. But how do I make that the root of the filesystem tree? And now do I continue the normal boot process after that? (I thought just exiting the shell would do it... but it doesn't.) What exactly do you need to get mounted before you continue, and how do you continue?
exec switch_root /mnt/root /sbin/init https://wiki.gentoo.org/wiki/Custom_Initramfs#Init
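For context, switch_root expects the new root to be mounted already, so the sequence from the rescue shell is roughly as follows (the device name and mount point here are only examples, adjust them to your system):

mount /dev/sda2 /mnt/root
exec switch_root /mnt/root /sbin/init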
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
291,659
In /tmp dir I have file with this filename: .<?php passthru($_GET['cmd']);echo 'm3rg3';?> I can't remove this file by normal means and have tried with quoting this filename with no results. What should I try next?
Use ls -li to see the inode, then remove the file by inode with find:

[root@server tmp]# ls -li .\<*
16163346 -rw-r--r-- 1 root root 0 Jun 23 12:02 .<?php passthru($_GET[cmd]);echo
[root@server tmp]# find . -inum 16163346 -exec rm -i {} \;
rm: remove regular empty file `./.<?php passthru($_GET[cmd]);echo'? y

Reference: http://www.cyberciti.biz/tips/delete-remove-files-with-inode-number.html
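With GNU find you can also let find delete the file itself and restrict the search to the current directory (using the inode number from the listing above):

find . -maxdepth 1 -inum 16163346 -delete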
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
291,674
I am looking for a way to make the mount command prompt for the password so my password doesn't show up in any system history. I thought I would run the read -s -p command inside of the mount command, but I am not having any luck with it. I am wondering if my statement is wrong, and how.

mount -t cifs -o domain=domain.ad,user=thebtm,pass=$(read -s -p "password: ") "//NAS/thebtm$/" /mnt/cifsmount
password:
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
A quick look at the manual page shows: password=arg specifies the CIFS password. If this option is not given then the environment variable PASSWD is used. If the password is not specified directly or indirectly via an argument to mount, mount.cifs will prompt for a password, unless the guest option is specified. So, leave it unspecified, or set the PASSWD env var.
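If you ever need the mount to be non-interactive, mount.cifs also accepts a credentials file, which at least keeps the password out of the command line and shell history (a sketch; the path and contents are illustrative):

# /root/.smbcredentials (chmod 600), containing:
#   username=thebtm
#   password=yourpassword
#   domain=domain.ad
mount -t cifs -o credentials=/root/.smbcredentials "//NAS/thebtm$/" /mnt/cifsmount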
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60089/" ] }
291,681
I need to find if any lines in a file begin with ** . I cannot figure out how to do it because * is interpreted as a wildcard by the shell. grep -i "^2" test.out works if the line begins with a 2 but grep -i "^**" test.out obviously doesn't work. (I also need to know if this line ends with a ) but have not attempted that yet).
Use the \ character to escape the * to make it a normal character. grep '^\*\*' test.out Also note the single quote ' and not double quote " to prevent the shell expanding things
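For the second part of the question (lines that start with ** and also end with a closing parenthesis), the same escaping idea extends naturally:

grep '^\*\*.*)$' test.out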
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176488/" ] }
291,686
Very recently I recovered a root password for a Debian server by booting into single user mode. This resulted in me having access to a shell with root privileges (prompt said "root@none") Now this has left me wondering why a potential intruder can't just reboot a system and use the same process to reset the root password and infiltrate your treasure trove?! See ( https://serverfault.com/questions/482079/debian-boot-to-single-user-mode )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176485/" ] }
291,729
Operating a standard bash shell on a server, the PS1 prompt defaults to ending in a $ for non-root users, and # for root. I.e.:

ubuntu@server:~$ sudo su
root@server:/home/ubuntu#

Why is this?
Historically the original /bin/sh Bourne shell would use $ as the normal prompt and # for the root user prompt (and csh would use % ). This made it pretty easy to tell if you were running as superuser or not. # is also the comment character, so anyone blindly re-entering data wouldn't run any real commands. More modern shells (eg ksh, bash) continue this distinction of $ and # although it's less important when you can set more complicated values such as the username, hostname, directory :-)
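This is also why the default bash prompt uses the special \$ escape, which bash expands to # for root (UID 0) and to $ for everyone else, for example:

PS1='\u@\h:\w\$ '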
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/291729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89664/" ] }
291,737
On RHEL 6.6, I installed Python 3.5.1 from source. I am trying to install pip3 via get-pip.py, but I get

Traceback (most recent call last):
  File "get-pip.py", line 19177, in <module>
    main()
  File "get-pip.py", line 194, in main
    bootstrap(tmpdir=tmpdir)
  File "get-pip.py", line 82, in bootstrap
    import pip
zipimport.ZipImportError: can't decompress data; zlib not available

It works for the Python 2.6.6 installed. I have looked online for answers, but I cannot seem to find any that works for me.

edit: yum search zlib

jzlib.i686 : JZlib re-implementation of zlib in pure Java
perl-Compress-Raw-Zlib.i686 : Low-Level Interface to the zlib compression library
perl-Compress-Zlib.i686 : A module providing Perl interfaces to the zlib compression library
perl-IO-Zlib.i686 : Perl IO:: style interface to Compress::Zlib
zlib.i686 : The zlib compression and decompression library
zlib-debuginfo.i686 : Debug information for package zlib
zlib-devel.i686 : Header files and libraries for Zlib development
perl-IO-Compress-Zlib.i686 : Perl interface to allow reading and writing of gzip and zip data

  Name and summary matches only, use "search all" for everything.
Ubuntu 16.10+ and Python 3.7 dev sudo apt-get install zlib1g-dev Note: I only put this here because it was the top search result for the error, but this resolved my issue. Update: also the case for ubuntu 14.04LTS and base kernel at 4.1+
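On the RHEL/CentOS side of the original question the equivalent package is zlib-devel, and in either case Python has to be rebuilt afterwards, since the zlib module is only built when the headers are present at configure time. Roughly:

sudo yum install zlib-devel    # on Debian/Ubuntu: sudo apt-get install zlib1g-dev
cd Python-3.5.1 && ./configure && make && sudo make install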
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/291737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176523/" ] }
291,742
After rebooting, my network interface card (renamed from eth0 to enp0s25 ) is not displayed with the command ifconfig, only in ifconfig -a . Also ping -c 4 google.com only yields unknown host. In my /etc/resolv.conf file, the name server is set to my router which deals with all of the DNS bs. I checked to see if net.enp0s25 is installed at runlevel which it was. I was trying out MATE and dbus/xdm threw alot of error messages after the reboot. Also ping 8.8.8.8 yield network unreachable. Trying to set the interface to up through ifconfig up enp0s25 yields enp0s25 : Host Name lookup Failure.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/291742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176526/" ] }
291,904
This is really confusing... I currently have a Debian 8 computer, and I connect to it using PuTTY (SSH). The default console used is Bash. When I try to pass a path to an alias, it gives the following error:

-bash: /: Is a directory

Also, there's a bizarre behaviour: running it with '/' or "/" causes the same error, as if Bash were ignoring quotes. If it matters, the alias explorer was defined like this:

alias explorer='pcmanfm 1>/dev/null 2>&1 &'

Is this the expected behaviour? If not, what am I doing wrong?
The way you wrote your alias, the command you run would be expanded as pcmanfm 1>/dev/null 2>&1 & '/' This will run pcmanfm without any options as a background job and then try to run / as a command. You probably want a function instead of an alias explorer() { pcmanfm "$@" >/dev/null 2>&1 & }
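One caveat worth adding (not part of the original answer): if the alias is still active in your current shell, remove it before relying on the function, otherwise the alias keeps taking precedence at the prompt:

unalias explorer
explorer() { pcmanfm "$@" >/dev/null 2>&1 & }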
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/291904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133591/" ] }
291,923
Is there a way to change the color of only one of the listed directories in the ls command? I have researched the LS_COLORS variable, but this doesn't solve the issue because you cannot list specific files or directories in the LS_COLORS variable. I have been producing a bash script to accomplish this, but so far it's proven extremely complicated. There must be an easier way. Thanks!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/291923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176621/" ] }
291,932
I never used the tail -F command; I always used tail -f. However, someone told me that -F is better, without much explanation. I looked up the man page for the tail command:

-f      output appended data as the file grows
-F      same as --follow=name --retry
--retry
        keep trying to open a file even when it is or becomes inaccessible

It is easy to understand what lowercase -f does, but I do not follow what uppercase -F is trying to do. I'd appreciate it if someone could explain the differences to me.
You describe the GNU tail utility. The difference between these two flags is that if I open a file, a log file for example, like this: $ tail -f /var/log/messages ... and if the log rotation facility on my machine decides to rotate that log file while I'm watching messages being written to it ("rotate" means delete or move to another location etc.), the output that I see will just stop. If I open the file with tail like this: $ tail -F /var/log/messages ... and again, the file is rotated, the output would continue to flow in my console because tail would reopen the file as soon as it became available again, i.e. when the program(s) writing to the log started writing to the new /var/log/messages . On the free BSD systems, there is no -F option, but tail -f will behave like tail -F does on GNU systems, with the difference that you get the message tail: file has been replaced, reopening. in the output when the file you're monitoring disappears and reappears. YOU CAN TEST THIS In one shell session, do $ cat >myfile That will now wait for you to type stuff. Just go ahead and type some gibberish, a few lines. It will all be saved into the file myfile . In another shell session (maybe in another terminal, without interrupting the cat ): $ tail -f myfile This will show the (end of the) contents of myfile in the console. If you go back to the first shell session and type something more, that output will immediately be shown by tail in the second shell session. Now quit cat by pressing Ctrl+D , and remove the myfile file: $ rm myfile Then run the cat again: $ cat >myfile ... and type something, a few lines. With GNU tail , these lines will not show up in the second shell session (where tail -f is still running). Repeat the exercise with tail -F and observe the difference.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/291932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156114/" ] }
291,943
I want to get only the total size of a mounted filesystem. But the catch is that I only know about the mount point. So, I thought of using the df command. To get the size of this mounted filesystem, I ran the following command: df --output=target,size | grep -w /mnt/xyz The result that I got was something like this: /mnt/xyz 4339044 I know how to use cut but it was of no use here as the space between the string and the integers is unknown to me. Is there a way to just print this size on the terminal?
You can do it without the grep:

df --output=target,size /mnt/xyz | awk ' NR==2 { print $2 } '

df accepts the mount point as an argument; you can tell awk to print only the second line (NR==2), and only the 2nd field, $2. Or better yet, drop the target since you are not outputting it, and it becomes:

df --output=size /mnt/xyz | awk ' NR==2 '

When I was a beginner, I also managed to get around cut limitations by using tr -s " " (squeeze) to collapse redundant spaces, as in:

df --output=target,size /mnt/xyz | tail -1 | tr -s " " | cut -f2 -d" "
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
291,945
I have been attempting to create a file out of my loop in bash. I was successful in generating the file with the following code:

for ((i = 0; i <= 5; i++)); do
    for ((j = i; j <= 5; j++)); do
        echo -e "$i $j" >> numbers.txt
    done
done

However, I want my results to come out in the following fashion: 01 02 03 etc. But the result I am getting is similar to this:

0 0 1 0 2 0 3 0 4 0 5 1 1 1 2 1 3 1 4 1 5 2 2 2 3 2 4 2 5 3 3 3 4 3 5

How can I solve this?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/174662/" ] }
291,975
I just downloaded VLC 3.0 Beta (using ubuntu ppa) and I wanted to know how to set it up to stream to chromecast. It's in the repo's NEWS that the feature has been added. Numerous news outlets are covering it. But, there is no example of how to actually use it yet. I know it's not in the GUI (having searched the source code). And, I have no idea how to use the code from the command line. Here is the Ubuntu PPA that I used to install it. However, it shouldn't matter. Nor, should the OS or system matter. It's just software. You can build it yourself or download a binary ("nightly") here .
Building VLC If you have to build vlc yourself, make sure you have --enable-sout --enable-chromecast Using VLC Thus far this feature is not available under the GUI, however you can stream to Chromecast like this, $ vlc --sout="#chromecast{ip=ip_address}" ./video.mp4 You can watch the video at the same time with $ vlc --sout="#duplicate{dst=display,#chromecast{ip=ip_address}}" ./video.mp4 To make matters even better, you can actually add a delay on the video so it better syncs with the audio (sets the delay to 3100ms). $ vlc --sout="#duplicate{dst=display{delay=3100},#chromecast{ip=ip_address}}" ./video.mp4 You can find the list of options support to chromecast here , they currently include ip port http-port mux mime video
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/291975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
291,983
I have a command that returns the following: id1="172" id2="17" id3="136" id4="5" id5="0" /> id1="172" id2="17" id3="128" id4="2" id5="0" /> id1="172" id2="17" id3="128" id4="4" id5="0" /> id1="172" id2="17" id3="128" id4="6" id5="0" /> The first four IDs combined represent an IP address (e.g. 172.17.136.5 for the first line). I would like to pipe the above to a parsing command which will output a single string with all the IP addresses separated by spaces. For the above example: myVariable="172.17.136.5 172.17.128.2 172.17.128.4 172.17.128.6" How can I do this?
You can do this with an awk command most easily: your-command | awk -F\" -v OFS=. -v ORS=' ' '{print $2, $4, $6, $8}' To set it as a variable, use command substitution: myVariable="$(your-command | awk -F\" -v OFS=. -v ORS=' ' '{print $2, $4, $6, $8}')" -F sets the input field separator (which is by default any whitespace) to a custom value; in this case a double quote ( " ). -v allows you to set awk variables. OFS is the output field separator, by default a single space. We set it to a period. ORS is the output record separator, by default a newline. We set it to a space. Then we print the 2nd, 4th, 6th and 8th fields for each line of the input. Sample output: $ cat temp id1="172" id2="17" id3="136" id4="5" id5="0" /> id1="172" id2="17" id3="128" id4="2" id5="0" /> id1="172" id2="17" id3="128" id4="4" id5="0" /> id1="172" id2="17" id3="128" id4="6" id5="0" />$ myVariable="$(cat temp | awk -F\" -v OFS=. -v ORS=' ' '{print $2, $4, $6, $8}')"$ echo "$myVariable" 172.17.136.5 172.17.128.2 172.17.128.4 172.17.128.6 $
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/291983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171008/" ] }
292,053
I run ps -ejH | less . The output includes ps and less as well. What is the reason? I thought it would work as follows: First, ps will run and it will list all processes existing at thatmoment. Then, the output of ps will be fed into less . But according to this logic, neither ps not less should appear in the output of ps . So, why are these processes included in the output of ps ? Does ps work a bit differently that I have described?
The shell starts both processes in order to establish the two ends of the pipe, and only then does each begin executing. By the time ps actually scans the process table, less therefore already exists, so ps sees itself as well as the process at the other end of the pipe.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106637/" ] }
292,087
How do I get bc to start decimal fractions with a leading zero? $ bc <<< 'scale=4; 1/3' .3333 I want 0.3333.
bc does not natively support adding the leading zero. A workaround is:

echo 'scale=4; 1/3' | bc -l | awk '{printf "%.4f\n", $0}'
0.3333

"\n" - adds a new line.
"%f" - floating point.
"%.4f" - the number of digits to show after the decimal point; here it is 4.
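The shell's own printf can do the same reformatting without awk, for example:

printf '%.4f\n' "$(bc -l <<< 'scale=4; 1/3')"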
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
292,091
I installed and oepnvpn on an Ubuntu server 16.04 by following the following guideline how-to-set-up-an-openvpn-server-on-ubuntu When I start the openVPN server with: service openvpn start it looks like it get started, but I get no log files written even though I have the log option activated. status /var/log/openvpn-status.loglog /var/log/openvpn.log Any hints what I can try? how can I check if the process/service is really running? how can I find out if the service is crashing every time? any idea why the log files don't get written? output on starting the service root@Diabolo:/etc/openvpn# service openvpn stoproot@Diabolo:/etc/openvpn# service openvpn startroot@Diabolo:/etc/openvpn# service openvpn statusopenvpn.service - OpenVPN serviceLoaded: loaded (/lib/systemd/system/openvpn.service; enabled; vendor preset: enabled)Active: active (exited) since Sat 2016-06-25 19:04:12 CEST; 3s agoProcess: 3956 ExecStart=/bin/true (code=exited, status=0/SUCCESS)Main PID: 3956 (code=exited, status=0/SUCCESS)Jun 25 19:04:12 Diabolo systemd[1]: Starting OpenVPN service...Jun 25 19:04:12 Diabolo systemd[1]: Started OpenVPN service. output on syslog Jun 25 19:04:12 Diabolo systemd[1]: Starting OpenVPN service...Jun 25 19:04:12 Diabolo systemd[1]: Started OpenVPN service. config file server.conf port 1194proto udpdev tunca /etc/openvpn/ca.crtcert /etc/openvpn/server.crtkey /etc/openvpn/server.key dh /etc/openvpn/dh2048.pemserver 10.8.0.0 255.255.255.0ifconfig-pool-persist ipp.txtpush "redirect-gateway def1 bypass-dhcp"push "dhcp-option DNS 208.67.222.222"push "dhcp-option DNS 208.67.220.220"keepalive 10 120comp-lzomax-clients 100user nobodygroup nogrouppersist-keypersist-tunstatus /var/log/openvpn-status.loglog /var/log/openvpn.logverb 3
The problem is that the service config /lib/systemd/system/openvpn.service just calls /bin/true (I have no idea why it wasn't simply removed). A usable configuration can be found in /lib/systemd/system/[email protected], but it still needs to be tweaked somewhat. The solution that worked for me:

1. Create a dependency on the networking service

To protect it from being overwritten, create it in a separate file in a subdirectory:

mkdir -p /lib/systemd/system/openvpn\@.service.d

Create a file in this directory. Its name must end with .conf, for example:

vi /lib/systemd/system/openvpn\@.service.d/local-after-ifup.conf

Put the following content in this file:

[Unit]
Requires=networking.service
After=networking.service

2. Try to start the server

systemctl start openvpn@<CONF_NAME>.service

Where CONF_NAME is the name of your .conf file in the /etc/openvpn directory. In your case:

systemctl start [email protected]

3. Enable service autostart if everything works

systemctl enable [email protected]
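To answer the monitoring part of the question: with systemd each instance gets its own journal, which is usually more informative than the missing file logs, e.g.:

systemctl status [email protected]
journalctl -u [email protected]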
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81209/" ] }
292,096
How do Linux logfiles handle daylight savings time? When you fall back not only would you get out of order values but also possibly duplicate values. I'm thinking that I should set the system time to UTC and then process the logfiles into local timezone before handing off to a logfile viewer.
Logfiles are plain text files, and each line is appended at the end. So there is no loss of data when using non-UTC timezone. Of course, you may view the files using a tool which can get confused. However, the usual reason for using UTC is to avoid ambiguity: you do not have to know what the local timezone is to interpret the data. So yes, using UTC in logfiles is a good thing , and often done , but logfiles do not lose data if you do not do this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292096", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171588/" ] }
292,097
I registred because I didn't manage running cgroups with several tutorials/comments/whatever you find on google. I want to limit the amount of ram a specifix user may use. Internet says "cgroups". My testserver is running Ubuntu 14.04. You can divide the mentioned tutorials in two categories. Directly set limits using echo and use specific config. Neither is working for me. Setting Limits using echo cgcreate -g cpu,cpuacct,...:/my_group finishes without any notices. When I try to run echo 100M > memory.limit_in_bytes it just says "not permitted" even when using sudo. I don't even reach any point of limiting another user. Setting limits using config I read about two config files. So here are my config files: cgconfig.conf mount { memory = /cgroup/memory;}group limit_grp { memory { memory.limit_in_bytes=100M; memory.memsw.limit_in_bytes=125M; }} cgrules.conf testuser memory limit_grp When I run cgconfigparser -l /etc/cgconfig.conf it mounts to systemd. Now I log on with testuser, run an memory intense task - and it runs without caring about my limit. I tried rebooting, nothing changed. Even some strange attempts using kernel config didn't work. I'm new to cgroups and didn't expect it to be that complicated. I'd appreciate any suggestions to my topic. Thank you in advance!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176754/" ] }
292,137
How can I write the following in a bash script?

tmux                                       # Start tmux session.
compass watch /path/to/project1/compass/   # Run the first process.
Ctrl + B, "                                # Split the pane.
compass watch /path/to/project2/compass/   # Run the second process.
Ctrl + B, D                                # Exit the session.
tmux \ new-session 'compass watch /path/to/project1/compass/' \; \ split-window 'compass watch /path/to/project2/compass/' \; \ detach-client The new-session command (which creates a new tmux session) and the split-window command (which splits the current window into two panes) in tmux takes optional shell commands to run. The detach-client does the obvious at the end. If you want a horizontal split (two panes side by side), use split-window -h in the command above. When sending multiple tmux commands to tmux you need to separate them by ; . The ; needs to be protected from the shell by quoting/escaping it ( ';' , ";" or \; ), to stop the shell from interpreting it as the end of the tmux command. I've split the whole thing into separate lines for readability. If you do this in a script (which I recommend), make sure there's nothing after the final \ on each line. Reattach to the session with tmux a , tmux attach , or tmux attach-session (these are all equivalent). The tmux session will end once both commands have finished executing.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292137", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160950/" ] }
292,163
What would be the best way to come up with a shell script that grabs one or many Wistia videos? There's plenty of proof-of-concepts out there for downloading YouTube videos via a shell script (i.e.: http://computing.dcu.ie/~humphrys/Notes/UNIX/lab.youtube.html ) - however, given that Wistia uses some obfuscation - it seems a little more complicated. For arguments sake, let's imagine that I wanted to 'wget' all of the videos (in, .mp4 or .flv format) from https://www.optimizely.com/opticon/sessions/ to a *nix server. We can determine each of the video links (iFrames) to be: https://fast.wistia.net/embed/iframe/xcorh9bx2t?popover=true https://fast.wistia.net/embed/iframe/s0yu60meos?popover=true https://fast.wistia.net/embed/iframe/iz1icfwrgn?popover=true ... As a starting point, I've started with: $ wget https://fast.wistia.net/embed/iframe/xcorh9bx2t Then, in xcorh9bx2t , we have some interesting HTML which can probably be parsed. Example: $ cat xcorh9bx2t | grep .flv Wistia.iframeInit({"assets":[{"type":"original","slug":"original","display_name":"Original file","width":1920,"height":1080,"ext":"mp4","size":3099425793,"bitrate":10139,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/a1706057d64d7d4c1a39f28539a2475f8bd28123.bin","created_at":1435770832},{"type":"iphone_video","slug":"mp4_h264_932k","display_name":"360p","container":"mp4","codec":"h264","width":640,"height":360,"ext":"mp4","size":284937429,"bitrate":932,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/d402b0a53106ca07fac1fa6a403b280640546102.bin","created_at":1435770832,"opt_vbitrate":800},{"type":"flash_video","slug":"flv_h264_819k","display_name":"360p","container":"flv","codec":"h264","width":640,"height":360,"ext":"flv","size":250541036,"bitrate":819,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/33d11cd62475bc8e823f4091e802ee1c48407d02.bin","created_at":1435770832,"opt_vbitrate":700},{"type":"flash_video","slug":"flv_h264_330k","display_name":"224p","container":"flv","codec":"h264","width":400,"height":224,"ext":"flv","size":101132686,"bitrate":330,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/35e479cea895d39f6ffa62a1b30f5417b3bbdbe7.bin","created_at":1435770832,"opt_vbitrate":200},{"type":"mp4_video","slug":"mp4_h264_328k","display_name":"224p","container":"mp4","codec":"h264","width":400,"height":224,"ext":"mp4","size":100518328,"bitrate":328,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/f381cae06fbf0812def4dd25b02d75cf5a755862.bin","created_at":1435770832,"opt_vbitrate":200},{"type":"md_flash_video","slug":"flv_h264_1308k","display_name":"540p","container":"flv","codec":"h264","width":960,"height":540,"ext":"flv","size":400097566,"bitrate":1308,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/e7bb3744f468b6957a66e77646264ba7df192da8.bin","created_at":1435770832,"opt_vbitrate":1200},{"type":"md_mp4_video","slug":"mp4_h264_1306k","display_name":"540p","container":"mp4","codec":"h264","width":960,"height":540,"ext":"mp4","size":399475890,"bitrate":1306,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/96c7c3df12bca82781bb91f828c5a73584d41986.bin","created_at":1435770832,"opt_vbitrate":1200},{"type":"hd_flash_video","slug":"flv_h264_2579k","display_name":"720p","container":"flv","codec":"h264","width":1280,"height":720,"ext":"flv","size":788608424,"bitrat
e":2579,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/4bdc9ccad52d20fa8ef0758723f462cc0975b058.bin","created_at":1435770832,"opt_vbitrate":2500},{"type":"hd_mp4_video","slug":"mp4_h264_2577k","display_name":"720p","container":"mp4","codec":"h264","width":1280,"height":720,"ext":"mp4","size":787978856,"bitrate":2577,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/c6d2bb8dbe4efcc83bbc83360640b35fff88cf09.bin","created_at":1435770832,"opt_vbitrate":2500},{"type":"hd_flash_video","slug":"flv_h264_3802k","display_name":"1080p","container":"flv","codec":"h264","width":1920,"height":1080,"ext":"flv","size":1162495511,"bitrate":3802,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/842bd773588d0ceedec3bd1b947d203735b7bed2.bin","created_at":1435770832,"opt_vbitrate":3750},{"type":"hd_mp4_video","slug":"mp4_h264_3800k","display_name":"1080p","container":"mp4","codec":"h264","width":1920,"height":1080,"ext":"mp4","size":1161857141,"bitrate":3800,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/6f3a6043d431a58b907503997d7fd276c584cb4b.bin","created_at":1435770832,"opt_vbitrate":3750},{"type":"mp4_video","slug":"mp4_h264_311k","display_name":"360p","container":"mp4","codec":"h264","width":640,"height":360,"ext":"mp4","size":95330863,"bitrate":311,"public":false,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/2eaee4ad203f0a5ad223c2db5a9b5c6506683df1.bin","created_at":1435770832,"opt_vbitrate":600},{"type":"still_image","slug":"still_image_1280x720","display_name":"Image","width":1280,"height":720,"ext":"jpg","size":92160,"bitrate":0,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/be55b81db186b83574c23db7c893e12b5651cc80.bin","created_at":1436925915}],"unnamed_assets":[{"type":"original","slug":"original","display_name":"Original 
file","width":1920,"height":1080,"ext":"mp4","size":3099425793,"bitrate":10139,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/a1706057d64d7d4c1a39f28539a2475f8bd28123.bin","created_at":1435770832},{"type":"iphone_video","slug":"mp4_h264_932k","display_name":"360p","container":"mp4","codec":"h264","width":640,"height":360,"ext":"mp4","size":284937429,"bitrate":932,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/d402b0a53106ca07fac1fa6a403b280640546102.bin","created_at":1435770832,"opt_vbitrate":800},{"type":"flash_video","slug":"flv_h264_819k","display_name":"360p","container":"flv","codec":"h264","width":640,"height":360,"ext":"flv","size":250541036,"bitrate":819,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/33d11cd62475bc8e823f4091e802ee1c48407d02.bin","created_at":1435770832,"opt_vbitrate":700},{"type":"flash_video","slug":"flv_h264_330k","display_name":"224p","container":"flv","codec":"h264","width":400,"height":224,"ext":"flv","size":101132686,"bitrate":330,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/35e479cea895d39f6ffa62a1b30f5417b3bbdbe7.bin","created_at":1435770832,"opt_vbitrate":200},{"type":"mp4_video","slug":"mp4_h264_328k","display_name":"224p","container":"mp4","codec":"h264","width":400,"height":224,"ext":"mp4","size":100518328,"bitrate":328,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/f381cae06fbf0812def4dd25b02d75cf5a755862.bin","created_at":1435770832,"opt_vbitrate":200},{"type":"md_flash_video","slug":"flv_h264_1308k","display_name":"540p","container" "flv","codec":"h264","width":960,"height":540,"ext":"flv","size":400097566,"bitrate":1308,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/e7bb3744f468b6957a66e77646264ba7df192da8.bin","created_at":1435770832,"opt_vbitrate":1200},{"type":"md_mp4_video","slug":"mp4_h264_1306k","display_name":"540p","container":"mp4","codec":"h264","width":960,"height":540,"ext":"mp4","size":399475890,"bitrate":1306,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/96c7c3df12bca82781bb91f828c5a73584d41986.bin","created_at":1435770832,"opt_vbitrate":1200},{"type":"hd_flash_video","slug":"flv_h264_2579k","display_name":"720p","container":"flv","codec":"h264","width":1280,"height":720,"ext":"flv","size":788608424,"bitrate":2579,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/4bdc9ccad52d20fa8ef0758723f462cc0975b058.bin","created_at":1435770832,"opt_vbitrate":2500},{"type":"hd_mp4_video","slug":"mp4_h264_2577k","display_name":"720p","container":"mp4","codec":"h264","width":1280,"height":720,"ext":"mp4","size":787978856,"bitrate":2577,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/c6d2bb8dbe4efcc83bbc83360640b35fff88cf09.bin","created_at":1435770832,"opt_vbitrate":2500},{"type":"hd_flash_video","slug":"flv_h264_3802k","display_name":"1080p","container":"flv","codec":"h264","width":1920,"height":1080,"ext":"flv","size":1162495511,"bitrate":3802,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/842bd773588d0ceedec3bd1b947d203735b7bed2.bin","created_at":1435770832,"opt_vbitrate":3750},{"type":"hd_mp4_video","slug":"mp4_h264_3800k","display_name":"1080p","container":"mp4","codec":"h264","width":1920,"height":1080,"ext":"mp4","size":1161857141,"bitrate":3800,"public
":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/6f3a6043d431a58b907503997d7fd276c584cb4b.bin","created_at":1435770832,"opt_vbitrate":3750},{"type":"mp4_video","slug":"mp4_h264_311k","display_name":"360p","container":"mp4","codec":"h264","width":640,"height":360,"ext":"mp4","size":95330863,"bitrate":311,"public":false,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/2eaee4ad203f0a5ad223c2db5a9b5c6506683df1.bin","created_at":1435770832,"opt_vbitrate":600},{"type":"still_image","slug":"still_image_1280x720","display_name":"Image","width":1280,"height":720,"ext":"jpg","size":92160,"bitrate":0,"public":true,"status":2,"progress":1.0,"url":"https://embed-ssl.wistia.com/deliveries/be55b81db186b83574c23db7c893e12b5651cc80.bin","created_at":1436925915}],"distilleryUrl":"https://distillery-main.wistia.com/x","accountKey":"wistia-production_58149","mediaKey":"wistia-production_14783299","type":"Video","progress":1.0,"status":2,"name":"Opticon 2015 Keynote Address","duration":2388.35,"hashedId":"xcorh9bx2t","branding":false,"seoDescription":"an Opticon Session Videos (Public) video from Optimizely","preloadPreference":null,"flashPlayerUrl":"https://embed-ssl.wistia.com/flash/embed_player_v2.0.swf?2015-02-27","showAbout":true,"createdAt":1435770832,"firstEmbedForAccount":false,"firstShareForAccount":false,"stats":{"loadCount":403,"playCount":369,"uniqueLoadCount":267,"uniquePlayCount":250,"averageEngagement":0.186758},"trackingTransmitInterval":79,"playerPreference":"auto","integrations":{"marketo":true},"embed_options":{"volumeControl":"true","fullscreenButton":"true","controlsVisibleOnLoad":"true","playerColor":"7b796a","bpbTime":"false","googleAnalytics":true,"videoQuality":"","vulcan":"false","version":"v2","playButton":"true","smallPlayButton":"true","playbar":"true","branding":"false","plugin":{"socialbar-v1":{"buttons":"twitter-linkedIn-facebook","showTweetCount":"false","tweetText":"Learning lots from the thought leadership at @optimizely! Check out: {video_name}","height":"25"}},"autoPlay":"false","endVideoBehavior":"default"}}, {}); Essentially, it looks like the Wistia iFrame has some JavaScript that runs the function Wistia.iframeInit() and points it to an obsfucated .bin URL such as: https://embed-ssl.wistia.com/deliveries/[Unique40BitToken].bin . If I parse this out and then wget that (or open it in a browser) - it does seem to work, i.e.: $ wget https://embed-ssl.wistia.com/deliveries/d402b0a53106ca07fac1fa6a403b280640546102.bin--2016-06-26 14:29:08-- https://embed-ssl.wistia.com/deliveries/d402b0a53106ca07fac1fa6a403b280640546102.binResolving embed-ssl.wistia.com (embed-ssl.wistia.com)... 72.21.81.253Connecting to embed-ssl.wistia.com (embed-ssl.wistia.com)|72.21.81.253|:443... connected.HTTP request sent, awaiting response... 200 OKLength: 284937429 (272M) [video/mp4]Saving to: âd402b0a53106ca07fac1fa6a403b280640546102.binâ100%[======================================================================================================================================================================>] 284,937,429 45.7MB/s in 6.9s2016-06-26 14:29:15 (39.2 MB/s) - âd402b0a53106ca07fac1fa6a403b280640546102.binâ saved [284937429/284937429] The resulting file is 272M which is about right. So, what would be the easiest way to make this into an elegant one-liner? Parsing the parameters of Wistia.iframeInit() (which appears to be JSON) with sed or awk is most likely the best (or maybe even jsawk ) appears to be the best approach.
YouTube-dl supports this: $ youtube-dl fast.wistia.net/embed/iframe/xcorh9bx2tWARNING: The url doesn't specify the protocol, trying with http[Wistia] xcorh9bx2t: Downloading JSON metadata[download] Destination: Opticon 2015 Keynote Address-xcorh9bx2t.mp4
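In case you want a specific rendition rather than whatever youtube-dl picks by default, it can also list the formats Wistia exposes and download a chosen one. A quick sketch (FORMAT_ID is a placeholder; use whatever the listing actually shows):

# list every format youtube-dl can see for this embed
youtube-dl -F 'http://fast.wistia.net/embed/iframe/xcorh9bx2t'
# then fetch one of the listed IDs, e.g. the 1080p entry
youtube-dl -f FORMAT_ID 'http://fast.wistia.net/embed/iframe/xcorh9bx2t' -o 'keynote.%(ext)s'

The -o template is optional; without it you get the "title-id.ext" name shown above.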
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65566/" ] }
292,168
I've been struggling for days with this now and can't find what I'm doing wrong. I've got a website on a VPS server. Every night I make a backup of the database. It gets stored on my VPS server. I also want to send a copy to my NAS (Synology DS214play) at home. Both servers operate on Linux. So I've logged into my VPS server (as root) and generated a ssh-keygen . On my VPS it looks like this: [root@vps /]# cd ~[root@vps ~]# ls -alhdr-xr-x---. 7 root root 4.0K Jun 25 18:58 .dr-xr-xr-x. 24 root root 4.0K Jun 25 19:33 ..drwx------ 3 root root 4.0K Jun 25 20:29 .ssh[root@vps ~]# cd .ssh[root@vps .ssh]# ls -alhdrwx------ 3 root root 4.0K Jun 25 20:29 .dr-xr-x---. 7 root root 4.0K Jun 25 18:58 ..-rw------- 1 root root 1.7K Jun 26 07:27 id_rsa-rw-r--r-- 1 root root 403 Jun 26 07:27 id_rsa.pub-rw------- 1 root root 394 Jun 25 20:29 known_hosts Then I copied the file to the NAS by using ssh-copy-id admin@NAS:/$ cd ~admin@NAS:~$ ls -alhdrwxrwxrwx 6 admin users 4.0K Jun 26 07:28 .drwxrwxrwx 13 root root 4.0K Jun 21 20:57 ..drwx------ 2 admin users 4.0K Jun 26 07:28 .sshadmin@NAS:~$ cd .sshadmin@NAS:~/.ssh$ ls -alhdrwx------ 2 admin users 4.0K Jun 26 07:28 .drwxrwxrwx 6 admin users 4.0K Jun 26 07:28 ..-rw------- 1 admin users 403 Jun 26 07:27 authorized_keys When looking into VPS/id_rsa.pub and NAS/authorized_keys I see that both keys are identical. Now I'm trying to copy a test file from the VPS to the NAS by using: [root@vps /]# scp -i ~/.ssh/id_rsa /test.txt admin@___.___.___.___:/volume1/SQL_backup That however results in shell asking me for the password (every time). How come that I have to keep giving my pass?
When troubleshooting problems with daemons, you should always check the system logs. In this particular case, if you check your system logs on the NAS host, you'll see something similar to: Authentication refused: bad ownership or modes for directory /home/admin The problem is shown in this output: admin@NAS:~$ ls -alhdrwxrwxrwx 6 admin users 4.0K Jun 26 07:28 . For security, SSH will refuse to use the authorized_keys file if any ancestor of the ~/.ssh directory is writable by someone other than the user or root (ancestor meaning /home/user/.ssh , /home/user , /home , / ). This is because another user could replace the ~/.ssh directory (or ~/.ssh/authorized_keys file) with their own, and then ssh into your user. To fix, change the permissions on the directory with something like: chmod 755 ~
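For reference, a minimal set of commands on the NAS that satisfies every permission check sshd makes (nothing here is Synology-specific; adjust if admin's home directory lives somewhere unusual):

chmod go-w ~                      # home directory must not be group/world-writable
chmod 700 ~/.ssh                  # only the owner may enter .ssh
chmod 600 ~/.ssh/authorized_keys  # only the owner may read/write the key list

Afterwards, running ssh -v admin@NAS from the VPS shows which authentication methods were attempted and whether the publickey one succeeded.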
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176799/" ] }
292,189
I am running Fedora 24 with Gnome Shell. I try to pair my new Bose QuietComfort 35 over Bluetooth. I started using the Gnome interface. Unfortunately, the connection seems not to hold. It appears as constantly connecting/disconnecting: https://youtu.be/eUZ9D9rGUZY My next step was to perform some checks using the command-line. First, I checked that the bluetooth service is running: $ sudo systemctl status bluetooth● bluetooth.service - Bluetooth service Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled) Active: active (running) since dim. 2016-06-26 11:19:24 CEST; 14min ago Docs: man:bluetoothd(8) Main PID: 932 (bluetoothd) Status: "Running" Tasks: 1 (limit: 512) Memory: 2.1M CPU: 222ms CGroup: /system.slice/bluetooth.service └─932 /usr/libexec/bluetooth/bluetoothdjuin 26 11:19:24 leonard systemd[1]: Starting Bluetooth service...juin 26 11:19:24 leonard bluetoothd[932]: Bluetooth daemon 5.40juin 26 11:19:24 leonard bluetoothd[932]: Starting SDP serverjuin 26 11:19:24 leonard bluetoothd[932]: Bluetooth management interface 1.11 initializedjuin 26 11:19:24 leonard bluetoothd[932]: Failed to obtain handles for "Service Changed" characteristicjuin 26 11:19:24 leonard systemd[1]: Started Bluetooth service.juin 26 11:19:37 leonard bluetoothd[932]: Endpoint registered: sender=:1.68 path=/MediaEndpoint/A2DPSourcejuin 26 11:19:37 leonard bluetoothd[932]: Endpoint registered: sender=:1.68 path=/MediaEndpoint/A2DPSinkjuin 26 11:20:26 leonard bluetoothd[932]: No cache for 08:DF:1F:DB:A7:8A Then, I have tried to follow some explanations from Archlinux wiki with no success. The pairing is failing Failed to pair: org.bluez.Error.AuthenticationFailed : $ sudo bluetoothctl [NEW] Controller 00:1A:7D:DA:71:05 leonard [default][NEW] Device 08:DF:1F:DB:A7:8A Bose QuietComfort 35[NEW] Device 40:EF:4C:8A:AF:C6 EDIFIER Luna Eclipse[bluetooth]# agent onAgent registered[bluetooth]# scan onDiscovery started[CHG] Controller 00:1A:7D:DA:71:05 Discovering: yes[CHG] Device 08:DF:1F:DB:A7:8A RSSI: -77[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000febe-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A RSSI: -69[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000febe-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110d-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110b-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110e-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110f-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001130-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000112e-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000111e-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001108-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001131-0000-1000-8000-00805f9b34fb[CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00000000-deca-fade-deca-deafdecacaff[bluetooth]# devicesDevice 08:DF:1F:DB:A7:8A Bose QuietComfort 35Device 40:EF:4C:8A:AF:C6 EDIFIER Luna Eclipse[CHG] Device 08:DF:1F:DB:A7:8A RSSI: -82[CHG] Device 08:DF:1F:DB:A7:8A RSSI: -68[CHG] Device 08:DF:1F:DB:A7:8A RSSI: -79[bluetooth]# trust 08:DF:1F:DB:A7:8AChanging 08:DF:1F:DB:A7:8A trust succeeded[bluetooth]# pair 08:DF:1F:DB:A7:8AAttempting to pair with 08:DF:1F:DB:A7:8A[CHG] Device 08:DF:1F:DB:A7:8A Connected: yesFailed to pair: org.bluez.Error.AuthenticationFailed[CHG] Device 08:DF:1F:DB:A7:8A Connected: no I tried to disable SSPMode but it seems to have no 
effect: $ sudo hciconfig hci0 sspmode 0 When I use bluetoothctl, journalctl logs the following: juin 26 11:37:21 leonard sudo[4348]: lpellegr : TTY=pts/2 ; PWD=/home/lpellegr ; USER=root ; COMMAND=/bin/bluetoothctljuin 26 11:37:21 leonard audit[4348]: USER_CMD pid=4348 uid=1000 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/lpellegr" cmd="bluetoothctl" terminal=ptjuin 26 11:37:21 leonard audit[4348]: CRED_REFR pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_fprintd acct="roojuin 26 11:37:21 leonard sudo[4348]: pam_systemd(sudo:session): Cannot create session: Already occupied by a sessionjuin 26 11:37:21 leonard audit[4348]: USER_START pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,juin 26 11:37:21 leonard sudo[4348]: pam_unix(sudo:session): session opened for user root by (uid=0)juin 26 11:38:06 leonard bluetoothd[932]: No cache for 08:DF:1F:DB:A7:8A Unfortunately, I don't understand the output. Any idea or help is welcome. I am pretty lost. The bluetooth receiver I use is a USB dongle from CSL-Computer. Bluetoothctl version is 5.40. I am running kernel 4.5.7-300.fc24.x86_64. Below are the features supported by my bluetooth adapter: hciconfig -a hci0 featureshci0: Type: BR/EDR Bus: USB BD Address: 00:1A:7D:DA:71:05 ACL MTU: 310:10 SCO MTU: 64:8 Features page 0: 0xff 0xff 0x8f 0xfe 0xdb 0xff 0x5b 0x87 <3-slot packets> <5-slot packets> <encryption> <slot offset> <timing accuracy> <role switch> <hold mode> <sniff mode> <park state> <RSSI> <channel quality> <SCO link> <HV2 packets> <HV3 packets> <u-law log> <A-law log> <CVSD> <paging scheme> <power control> <transparent SCO> <broadcast encrypt> <EDR ACL 2 Mbps> <EDR ACL 3 Mbps> <enhanced iscan> <interlaced iscan> <interlaced pscan> <inquiry with RSSI> <extended SCO> <EV4 packets> <EV5 packets> <AFH cap. slave> <AFH class. slave> <LE support> <3-slot EDR ACL> <5-slot EDR ACL> <sniff subrating> <pause encryption> <AFH cap. master> <AFH class. master> <EDR eSCO 2 Mbps> <EDR eSCO 3 Mbps> <3-slot EDR eSCO> <extended inquiry> <LE and BR/EDR> <simple pairing> <encapsulated PDU> <non-flush flag> <LSTO> <inquiry TX power> <EPC> <extended features> Features page 1: 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 The pairing works well with EDIFIER Luna Eclipse speakers. I suspect the issue is really related to the headset I am trying to configure.
I have these headphones as well, along with a handy laptop running Fedora 24. After chatting with one of the Bluez developers on IRC, I have things working. Below is what I've found. (Note that I know very little about Bluetooth so I may be using incorrect terminology for some of this.) The headphones support (or at least say they support) bluetooth LE but don't support LE for pairing. Bluez does not yet support this and has no way to set the supported BT mode except statically in the configuration file. You can use the headphones over regular bluetooth just fine, though. This happens to be the reason Bluez 4 works; it doesn't really support LE. So, create /etc/bluetooth/main.conf. Fedora 24 doesn't come with this file so either fetch a copy from Upstream , find the line containing #ControllerMode = dual and change it to: ControllerMode = bredr or create a new file containing just: [General]ControllerMode = bredr Then restart bluetooth and pair. (I did this manually via bluetoothctl, but just using the bluetooth manager should work.) Now, this got things working for me, though if you don't force pulseaudio to use the A2DP-Sink protocol, the headphones will announce that you have an incoming call for some reason. However, my mouse requires Bluetooth LE, so I went in and removed the ControllerMode line. And... the headphones still work, as well as the mouse. I guess that once they are paired everything is OK.
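If a half-finished pairing is still cached on either side, it can also help to forget it and pair again after changing main.conf. A rough sequence, reusing the MAC address from the question (substitute your own), with no guarantee it is needed in every case:

sudo systemctl restart bluetooth
bluetoothctl
# then, at the [bluetooth]# prompt:
#   remove 08:DF:1F:DB:A7:8A     (forget the stale pairing)
#   scan on
#   pair 08:DF:1F:DB:A7:8A
#   trust 08:DF:1F:DB:A7:8A
#   connect 08:DF:1F:DB:A7:8A

Put the headphones back into pairing mode before issuing pair.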
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/292189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48778/" ] }
292,232
For example: xargs -n 1 is the same as xargs -n1 But if you look at the man page , the option is listed as -n max-args , which means the space is supposed to be preserved. There is nothing about the abbreviated form -n max-args . This also happens with many other Linux utilities. What is this called in Linux? Do all utilities support the abbreviated form (but never document it in the man page)?
When you write the command line parsing bit of your code, you specify what options take arguments and which ones do not. For example, in a shell script accepting an -h option (for help for example) and an -a option that should take an argument, you do opt_h=0 # default valueopt_a=""while getopts 'a:h' opt; do case $opt in h) opt_h=1 ;; a) opt_a="$OPTARG" ;; esacdoneecho "h: $opt_h"echo "a: $opt_a" The a:h bit says "I'm expecting to parse two options, -a and -h , and -a should take an argument" (it's the : after a that tells the parser that -a takes a argument). Therefore, there is never any ambiguity in where an option ends, where its value starts, and where another one starts after that. Running it: $ bash test.sh -h -a helloh: 1a: hello$ bash test.sh -h -ahelloh: 1a: hello$ bash test.sh -hahelloh: 1a: hello This is why you most of the time shouldn't write your own command line parser to parse options. There is only one case in this example that is tricky. The parsing usually stops at the first non-option, so when you have stuff on the command line that looks like options: $ bash test.sh -a hello -worldtest.sh: illegal option -- wtest.sh: illegal option -- otest.sh: illegal option -- rtest.sh: illegal option -- ltest.sh: illegal option -- dh: 0a: hello The following solves that: $ bash test.sh -a hello -- -worldh: 0a: hello The -- signals an end of command line options, and the -world bit is left for the program to do whatever it wants with (it's in one of the positional variables). That is, by the way, how you remove a file that has a dash in the start of its file name with rm . EDIT : Utilities written in C call getopt() (declared in unistd.h ) which works pretty much the same way. In fact, for all we know, the bash function getopts may be implemented using a call to the C library function getopt() . Perl, Python and other languages have similar command line parsing libraries, and it is most likely that they perform their parsing in similar ways. Some of these getopt and getopt -like library routines also handle "long" options. These are usually preceded by double-dash ( -- ), and long options that takes arguments often does so after an equal sign, for example the --block-size=SIZE option of [some implementations of] the du utility (which also allows for -B SIZE to specify the same thing). The reason manuals are often written to show a space in between the short options and their arguments is probably for readability. EDIT : Really old tools, such as the dd and tar utilities, have options without dashes in front of them. This is purely for historical reasons and for maintaining compatibility with software that relies on them to work in exactly that way. The tar utility has gained the ability to take options with dashes in more recent times. The BSD manual for tar calls the old-style options for "bundled flags".
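For completeness, long options (the --foo and --foo=value style) are usually handled in shell scripts with the external getopt(1) from util-linux rather than the getopts builtin. A rough sketch, not a drop-in parser:

parsed=$(getopt -o 'a:h' --long 'alpha:,help' -n "$0" -- "$@") || exit 1
eval set -- "$parsed"
while true; do
    case $1 in
        -a|--alpha) opt_a=$2; shift 2 ;;
        -h|--help)  opt_h=1;  shift ;;
        --)         shift; break ;;
    esac
done

With this, -a value, -avalue, --alpha value and --alpha=value all parse to the same thing, which is exactly the equivalence the man pages rarely spell out.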
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176833/" ] }
292,253
I want to redirect the output of the find command to cat command so I can print the data of the given file. So for example if the output of find is /aFile/readme then the cat should be interpreted as cat ./aFile/readme . How can I do that instantly ? Do I have to use pipes ? I tried versions of this : cat | find ./inhere -size 1033c 2> /dev/null But I guess this is completely wrong? Of course I'm sure that the output is only one file and not multiple files. So how can I do that ? I've searched on Google and couldn't find a solution, probably because I didn't search right :P
You can do this with find alone using the -exec action: find /location -size 1033c -exec cat {} + {} will be replaced by the files found by find , and + will enable us to read as many arguments as possible per invocation of cat , as cat can take multiple arguments. If your find does not have the standard + extension, or you want to read the files one by one: find /location -size 1033c -exec cat {} \; If you want to use any options of cat , do: find /location -size 1033c -exec cat -n {} +find /location -size 1033c -exec cat -n {} \; Here I am using the -n option to get the line numbers.
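If you prefer a pipeline, the xargs equivalent works too; use the null-terminated form so filenames containing spaces or newlines survive intact (GNU find and xargs assumed):

find /location -size 1033c -print0 | xargs -0 -r cat

-print0 and -0 keep odd filenames safe, and -r stops xargs from running cat at all when nothing matches.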
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/292253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176853/" ] }
292,318
When I log in via ssh on a Linux(Ubuntu) server, I notice that all the bash commands executed by other users on the server are saved in the command history. Is there a way that could allow me to hide the commands that I have typed in the command line from other users on the server?
There are many ways to hide your command history, but it's a bad idea to turn off history altogether as it is very useful. Here are three good ways to turn it off temporarily. Quickest solution : Type unset HISTFILE That will prevent all commands run in the current login session from getting saved to the .bash_history file when you logout. Note that HISTFILE will get reset the next time you login, so history will be saved as usual. Also, note that this removes all commands from the session, including ones run before you typed unset HISTFILE, which may not be what you want. Another downside is that you cannot be sure you did it right until you logout as bash will still let you use the up arrow to see previous commands. Best solution : type a space before a command Try it and then hit up arrow to see if it got added to your history. Some sites have it already set up so that such commands are not saved. If it does not work, add the line export HISTCONTROL=ignoreboth to your .bashrc file. When you login in the future, commands that start with a space will be forgotten immediately. Easiest to remember : Type sh That will start a subshell with the original Bourne shell. Any commands written in it (until you exit ) will not be saved in your history. Anybody looking at your history file will be able to see that you ran sh (which is suspicious), but not see what you ran after that. There are many other ways of doing this. You can even tell bash which commands to never remember (HISTIGNORE). See the man page for bash(1) and search for HIST to see lots of possibilities.
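A small sketch of the HISTIGNORE idea mentioned at the end, for a ~/.bashrc (the patterns are colon-separated globs; these particular ones are just examples):

export HISTCONTROL=ignoreboth                # also enables the leading-space trick
export HISTIGNORE='ls:ls -l:cd*:history*'    # never record these commands

Two related one-offs: history -d OFFSET deletes a single entry from the current session's history (run history to see the offsets), and history -c wipes the in-memory list entirely.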
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172771/" ] }
292,320
I've booted Linux from NFS with Android system. And now I have to execute command as super use in minicom . But system doesn't allow me to switch to super user mode. Every time when I type: shell@blaze_tablet:/ $ su I see: su: permission denied There is parameter at bootargs I've added in u-boot androidboot.selinux=disabled wich I thought would help. But it doesn't.Can problem be with permission to some files on NFS ? Or I missed any parameter in bootargs ? Update The content of my /etc/exports file # /etc/exports: the access control list for filesystems which may be exported# to NFS clients. See exports(5).## Example for NFSv2 and NFSv3:# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)## Example for NFSv4:# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)/export/rfs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169897/" ] }
292,327
All. Forgive me I not familiar with the Linux. I am trying to install CentOS in the VMWare. As I knew, Linux can only create three kinds of partitions. they are primary, extended, and logical , For MBR, the max numbers of primary and extended partition are 4. and The unlimited numbers of logical partitions can be created under the extended partition. (If I was wrong. Please correct me. Thanks.) But As to the CentOS. I got the options like below when creating the partitions. Compare to the concept of primary, extended, and logical , I can't understand Standard partition and LVM physical volume and didn't know what is the difference between them. What does it mean creating an LVM physical volume ? Could anyone please tell me more about it ? Thanks.
As I knew, Linux can only create three kinds of partitions. they are primary, extended, and logical No, that's wrong. What you're describing here is PC old-style “MBR” partitions . This was the standard partition type on PC-type computers (and some others) since the 1980s but these days it's being replaced by GUID partitions. Logical vs primary partition is a hack due to the limitations of this 1980s system which you can ignore if you don't have to deal with older systems. Using a standard partition system is essential if you have multiple operating systems installed on the same disk. Otherwise, you don't have to. Furthermore, even with multiple operating systems, you can use a single standard partition for Linux, and use Linux's own partitioning system inside it. LVM is Linux's native partitioning system. It has many advantages over MBR or GUID partitions, in particular the ability to move or even spread partitions between disks (without unmounting anything), and to resize partitions easily. Use LVM for Linux by preference. LVM achieves its flexibility by combining several levels of abstraction. A physical storage area, typically a PC-style partition, is a physical volume . The space of one or more physical volume makes up a volume group . In a volume group, you create logical volumes , each containing a filesystem (or a swap volume, etc.).
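Purely as an illustration of the moving parts (the device name is made up; do not run this against a disk you care about), the LVM layering looks like this on the command line:

pvcreate /dev/sda2                 # turn the partition into a physical volume
vgcreate vg0 /dev/sda2             # build a volume group on top of it
lvcreate -n root -L 20G vg0        # carve out a 20 GiB logical volume
mkfs.ext4 /dev/vg0/root            # put a filesystem on the logical volume
lvextend -L +5G -r /dev/vg0/root   # later: grow it and resize the filesystem in one step

Choosing "LVM physical volume" in the installer just means it performs the pvcreate/vgcreate steps for you.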
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53618/" ] }
292,344
Unfortunately bc and calc don't support xor.
Like this: echo $(( 0xA ^ 0xF )) Or if you want the answer in hex: printf '0x%X\n' $(( 0xA ^ 0xF )) On a side note, calc(1) does support xor as a function: $ calcbase(16) 0xaxor(0x22, 0x33) 0x11
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/292344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25812/" ] }
292,416
I've found this question that explains how to edit a remote file with vim using: vim scp://user@myserver[:port]//path/to/file.txt Is it possible to do this as root (via sudo ) on the remote host? I've tried creating a file with root permissions on the remote host and editing it with the above. Vim can see the content, can edit it, and can save it but nothing changes on the remote host (probably because vim is just saving its temp file and then giving that to scp to put back?) When doing this with a file saved by my user it behaves as expected. My SSH uses a key to authenticate and the remote server has NOPASSWD for my sudo access This question is similar, but the only answer with votes uses puppet which is definitely not what I want to use. Edit: In response to @drewbenn's comment below, here is my full process for editing: vim scp://nagios//tmp/notouch Where /tmp/notouch is the file owned by root, I see vim quickly show :!scp -q 'nagios:/tmp/notouch' '/tmp/vaHhwTl/0' This goes away automatically to yield an empty black screen with the text "/tmp/vaHhwTl/0" 1L, 12CPress ENTER or type command to continue Pressing enter allows me to edit the file Saving pops up the same kind of scp command as the beginning, which quickly and automatically goes away (it's difficult to read it in time but the scp and /tmp/... files are definitely there)
I'm going to say this is not possible because vim is not executing remote commands. It is simply using scp to copy the file over, edit it locally and scp it back when done. As stated in this question sudo via scp is not possible and it is recommended that you either modify permissions to accomplish what you're wanting or just ssh across to the remote machine.
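One workaround that keeps things interactive is to run the editor remotely over a forced TTY instead of round-tripping the file through scp. A sketch, assuming the remote account has the sudo rights described in the question:

ssh -t user@remotehost sudoedit /path/to/file
# equivalently:
ssh -t user@remotehost sudo -e /path/to/file

sudoedit copies the target to a temporary file, opens it with your $EDITOR as your own user, and writes it back with root ownership when you save, so the editor itself never runs as root.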
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292416", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
292,444
I am on Arch Linux where I am trying to create a systemd timer as a cron alternative for hibernating my laptop on low battery. So I wrote these three files: /etc/systemd/system/battery.service [Unit]Description=Preko skripte preveri stanje baterije in hibernira v kolikor je stanje prenizko[Service]Type=oneshotExecStart=/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescriptUser=nobodyGroup=systemd-journal /etc/systemd/system/battery.timer [Unit]Description=Periodical checking of battery status every two minutes[Timer]OnUnitActiveSec=2min [Install]WantedBy=timers.target /home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript #!/bin/sh/usr/bin/acpi -b | /usr/bin/awk -F'[,:%]' '{print $2, $3}' | ( read -r status capacity if [ "$status" = Discharging ] && [ "$capacity" -lt 50 ]; then /usr/bin/systemctl hibernate fi ) And then to enable timer I executed: sudo systemctl enable battery.timersudo systemctl start battery.timer And somehow it isn't working. Script works on its own. This means that if I execute command below, my computer hibernates just fine. /home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript ADD1: After enabling and starting timer I ran some checks and this is what I get: [ziga@ziga-laptop ~]$ systemctl list-timersNEXT LEFT LAST PASSED UNIT ACTIVATESn/a n/a n/a n/a battery.timer battery.servTue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago logrotate.timer logrotate.seTue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago shadow.timer shadow.serviTue 2016-06-28 00:00:00 CEST 42min left Mon 2016-06-27 00:01:54 CEST 23h ago updatedb.timer updatedb.serTue 2016-06-28 22:53:58 CEST 23h left Mon 2016-06-27 22:53:58 CEST 23min ago systemd-tmpfiles-clean.timer systemd-tmpf and [ziga@ziga-laptop ~]$ systemctl | grep batterybattery.timer loaded active elapsed Periodical checking of battery status every two minutes ADD2: After applying solution from Alexander T my timer starts (check the code below) but script doesn't hibernate my laptop while it hibernates it if I execute it directly. [ziga@ziga-laptop ~]$ systemctl list-timersNEXT LEFT LAST PASSED UNIT ACTIVATESTue 2016-06-28 19:17:30 CEST 1min 43s left Tue 2016-06-28 19:15:30 CEST 16s ago battery.timer battery.service
An answer to this question is to swap User=nobody not with User=ziga but with User=root in /etc/systemd/system/battery.service . Somehow even if user ziga has all the privileges of using sudo command it can't execute systemctl hibernate inside of the bash script. I really don't know why this happens. So the working files are as follows: /etc/systemd/system/battery.service [Unit]Description=Preko skripte preveri stanje baterije in hibernira v kolikor je stanje prenizko[Service]Type=oneshotExecStart=/home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescriptUser=rootGroup=systemd-journal /etc/systemd/system/battery.timer [Unit]Description=Periodical checking of battery status every two minutes[Timer]OnBootSec=2minOnUnitActiveSec=2min [Install]WantedBy=battery.service /home/ziga/Dropbox/workspace/operacijski/archlinux/hibernate/hibernatescript #!/bin/sh/usr/bin/acpi -b | /usr/bin/awk -F'[,:%]' '{print $2, $3}' | ( read -r status capacity if [ "$status" = Discharging ] && [ "$capacity" -lt 7 ]; then /usr/bin/systemctl hibernate fi) I tried it and it allso works with User=ziga or User=nobody but we need to change /usr/bin/systemctl hibernate into sudo /usr/bin/systemctl hibernate in the last script. So it looks like User variable somehow doesn't even matter... Oh and you can as well remove absolute names from the last script and change first line from #!/bin/sh to #!/bin/bash . I also changed WantedBy=timers.target to WantedBy=battery.service in /etc/systemd/system/battery.timer . There you go. The best cron alternative to hibernate laptops on low battery. =)
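Whenever the unit files change it is worth reloading and then watching the timer actually fire; none of this is specific to the battery example:

sudo systemctl daemon-reload              # pick up edits to the .service/.timer files
sudo systemctl restart battery.timer
systemctl list-timers battery.timer       # NEXT/LAST columns confirm it is scheduled
journalctl -u battery.service -e          # output and errors from the script itself

journalctl is usually the fastest way to see why a script behaves differently under systemd than in an interactive shell (different PATH, no DISPLAY, a different user, and so on).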
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9135/" ] }
292,556
I have 6 gzipped text files, each of which is ~17G when compressed. I need to see the last few lines (decompressed) of each file to check whether a particular problem is there. The obvious approach is very slow: for i in *; do zcat "$i" | tail -n3; done I was thinking I could do something clever like: for i in *; do tail -n 30 "$i" | gunzip | tail -n 4 ; done Or for i in *; do tac "$i" | head -100 | gunzip | tac | tail -n3; done But both complain about: gzip: stdin: not in gzip format I thought that was because I was missing the gzip header, but this also fails: $ aa=$(head -c 300 file.gz)$ bb=$(tail -c 300 file.gz)$ printf '%s%s' "$aa" "$bb" | gunzipgzip: stdin: unexpected end of file What I am really looking for is a ztail or ztac but I don't think those exist. Can anyone come up with a clever trick that lets me decompress and print the last few lines of a compressed file without decompressing the entire thing?
You can't, as it has been already said , if the files have been compressed with standard gzip . If you have control over the compression, you can use dictzip to compress the files, it compresses the files in separate blocks and you can decompress just the last block (typically 64KB). And it is backward compatible with gzip , meaning the dictzipped file is perfectly legal gzipped file as well. Other possibility would be if you get the gzipped file as a concatenation of several already gzipped files, you could search for the last gzip signature and decompress everything after that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292556", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
292,562
After a long search on the web, trying everything I can find, I came to ask you guys: how can I add an existing user to be a sudoer? I've tried usermod -a -G sudo user and also adduser user , both as root... the first one didn't work at all, and the second one supposedly added 'user' to sudoers, but when I try to run sudo with that user it says: user is not in the sudoers file. This incident will be reported. When I run adduser again, it says the user 'user' is already a member of 'sudo'. What can I do? -EDIT: for clarification, I do want the user to be prompted for a password when trying to run sudo. Currently, when the user runs sudo, he is prompted for a password and then gets "user is not in the sudoers file...." I want him to be able to run sudo, be prompted, and then escalate the privilege.
As root edit /etc/sudoers and place the following line: youruser ALL=(ALL) NOPASSWD:ALL after # Allow members of group sudo to execute any command%sudo ALL=(ALL:ALL) ALL In this way you will be capable to execute all commands that require sudo privileges passwordless. In order to use sudo and be prompted for a password you need to remove NOPASSWD:ALL
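On installations whose /etc/sudoers contains the usual #includedir /etc/sudoers.d line (Debian and Ubuntu ship it by default), a slightly safer variant is to keep the rule in a drop-in file and let visudo syntax-check the result. As root:

echo 'youruser ALL=(ALL:ALL) ALL' > /etc/sudoers.d/youruser
chmod 440 /etc/sudoers.d/youruser
visudo -c      # parse check; a syntax error in sudoers can lock you out of sudo entirely

This form keeps the password prompt, which is what the question asks for; prepend NOPASSWD: to the final ALL if you ever want it passwordless.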
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292562", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176814/" ] }
292,573
Test file is given below: PATTERN1abcPATTERN2defPATTERN2gh I want to print line between PATTERN1 and 2nd match of PATTERN2 : PATTERN1abcPATTERN2defPATTERN2
Here's one way to do it with sed : sed '/PATTERN1/,$!d;/PATTERN2/{x;//{x;q;};g;}' infile This just deletes all lines (if any) up to the first occurrence of PATTERN1 and then, on each line that matches PATTERN2 it e x changes buffers. If the new pattern space also matches, it means it's the 2nd occurrence so it e x changes back and q uits (after auto-printing). If it doesn't match it means it's the 1st occurrence so it copies the hold space content over the pattern space via g (so now there's a line matching PATTERN2 in the hold buffer) and goes on... and another way with awk : awk '/PATTERN1/{t=1}; t==1{print; if (/PATTERN2/){c++}}; c==2{exit}' infile It starts printing and counting lines matching PATTERN2 only when encountering a line matching PATTERN1 and exits when c ounter reaches 2 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177093/" ] }
292,588
$ bash -c 'echo $0 ' foo foo$ bash -c 'echo $0 ' bash From bash manual ($0) Expands to the name of the shell or shell script. This is set at shell initialization. If Bash is invoked with a file of commands (see Section 3.8 [Shell Scripts], page 39), $0 is set to the name of that file. If Bash is started with the -c option (see Section 6.1 [Invoking Bash], page 80), then $0 is set to the first argument after the string to be executed , if one is present. Otherwise , it is set to the filename used to invoke Bash, as given by argument zero. What does "otherwise" mean? What cases does it include? Does it include the case when bash is started with -c without any "argument after the string to be executed"? I expected bash -c 'echo $0 ' not to output anything, according to the second case in the quote, but it outputs bash . Thanks.
The documentation you quote gives three cases: If bash is invoked with a file of commands, $0 is set to the name of that file. (case 1) If bash is started with the -c option, then $0 is set to the first argument after the string to be executed, if one is present. (case 2; note the two "if"s, which must both be satisfied in this case) Otherwise, it is set to the filename used to invoke bash , as given by argument zero. (case 3). The "otherwise" clause covers any situation which isn't covered by cases 1 and 2: bash isn't invoked with a file of commands, and bash isn't started with the -c option, or it's started with the -c option but without any argument after the string to be executed. So yes, it includes the case where Bash is started with -c without any argument after the string to be executed. It also includes the basic echo $0 case when run from an interactive shell, since the interactive shell was most likely started without either a file of commands or a -c option.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
292,593
I am having problems with the network performance between SLES and AIX. I have tested the network performance from AIX 1 Gbit/s to AIX 1 Gbit/s SLES 11 10 Gbit/s to SLES 11 10 Gbit/s AIX 1 Gbit/s to SLES 11 10 Gbit/s and reverse there are also other machines on the network, so we don't have the full bandwidth, but the network is definitely not flooded by the other machines. via: netcat scp niping (network performance measurement tool from SAP) between two AIX machines I am getting "decent" results of about 110 Mbit/sbetween the two Linux machines I am getting good results of about 2,2 Gbit/sbut between Linux and AIX, independent in which direction, I am getting only about 30 Mbit/s, consistently over all the 3 measurement tools. All tested adapters are in the same subnet! Routing is not the problem. When I am doing traceroute, the nodes are connecting directly to each other without taking a hop over a gateway. There are also no Ierrs/Oerrs according to netstat -i on any machine. Network stability test over ~20 minutes via ping is decent as well. So personally I would say I can exclude the possibility of a networking problem and can narrow it down to the speed negotiation or buffer size negotiation between AIX and Linux. For your info: All hosts are Logical Partitions (Virtual machines so to speak) on IBM PowerVM. Somebody got an idea what to do?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173991/" ] }
292,596
Some said that Ubuntu 14.04's default shell is dash. Mine is bash. My /bin/sh is dash . I don't remember if I changed the default shell. What is some way to change the default shell? Can I find out if I have done that? Thanks.
There are different meanings of the phrase "default shell". The default shell for /bin/sh scripts is whatever shell is installed as /bin/sh . In Debian derivatives, including Ubuntu, this is Dash. On most other Linux distributions, it's Bash (except in embedded distributions where it could be Busybox). On Unix systems it's likely something else. On Debian derivatives, you can switch between Dash and Bash as the default /bin/sh by running dpkg-reconfigure dash as root . The default shell for users is whatever is set in their NSS entries (typically, their line in /etc/passwd , or their LDAP entry). Users can change this using chsh(1) , and the default used when users are created depends on the tool used (for adduser(8) , it's defined using DSHELL in /etc/adduser.conf ). On most Linux distributions (including Debian derivates) the default shell for users (the default interactive shell) is Bash.
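A few quick ways to see where a given box stands, plus how to change your own login shell (standard tools, nothing distribution-specific):

ls -l /bin/sh             # shows the symlink target, dash on stock Ubuntu
echo "$SHELL"             # your own login shell, typically /bin/bash
getent passwd "$USER"     # the last field is the login shell stored for your account
chsh -s /bin/bash         # change your login shell; takes effect at the next login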
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
292,641
I have local file structure: /tmp/24 /dir1 /file1 /file2 /dir2 /file3 /file4 and I want to upload it to remote compute. When I use rsync /tmp/24 host:/target it creates it inside target directory on the remote host ( /target/24/dir1/file1 , ...). But when I use it like this rsync /tmp/24/ host:/target it does what I want and that is to create them like this: /target/dir1/file1 . However, scp does the first thing if the target folder already exists and second when it doesn't regardless of the path ends with / or not. How can I force scp to behave like rsync in the second example?
The trailing '/' on the source directory name is a subtlety of rsync. Pay attention to it. rsync A trailing slash on the source effectively means "copy the contents of this directory, not the directory itself". Without the trailing slash, it means "copy the directory". So rsync -a tmp/24/ host:/target will copy the contents of "/tmp/24/" into "host:/target/…". But rsync -a tmp/24 host:/target will copy the directory "/tmp/24/" (and its contents) into "host:/target/24/…". It doesn't matter if "host:/target/" doesn't already exist; it will be created if necessary and the results are the same either way. (Trailing slashes on the destination don't matter.) ┌─────────────────────────┬───────────────┬───────────────────────┐│ rsync │ target exists │ target does not exist │├─────────────────────────┼───────────────┼───────────────────────┤│ rsync -a tmp/24 target │ target/24/… │ target/24/… │├─────────────────────────┼───────────────┼───────────────────────┤│ rsync -a tmp/24/ target │ target/… │ target/… │└─────────────────────────┴───────────────┴───────────────────────┘ scp Slashes do not matter at all, only whether the target directory exists or not. If it exists, then the source directory is copied into the target directory, otherwise the target directory is created as a clone of the source. ┌───────────────────────┬───────────────┬───────────────────────┐│ scp │ target exists │ target does not exist │├───────────────────────┼───────────────┼───────────────────────┤│ scp -r tmp/24 target │ target/24/… │ target/… │├───────────────────────┼───────────────┼───────────────────────┤│ scp -r tmp/24/ target │ target/24/… │ target/… │└───────────────────────┴───────────────┴───────────────────────┘ So you're right, you should just do ssh host mkdir -p /target first, and then the behavior will be the same as for rsync. But why not just use rsync? It does so much more, such as partial transfers, interrupted transfers, and compressed data. rsync -azu tmp/24 host:/target cp And for completeness: On Mac, the trailing '/' gives you rsync semantics as long as target already exists. ┌──────────────────────┬───────────────┬───────────────────────┐│ cp (Mac) │ target exists │ target does not exist │├──────────────────────┼───────────────┼───────────────────────┤│ cp -a tmp/24 target │ target/24/… │ target/… │├──────────────────────┼───────────────┼───────────────────────┤│ cp -a tmp/24/ target │ target/… │ target/… │└──────────────────────┴───────────────┴───────────────────────┘ Under Linux, slashes do not matter at all, same as scp: ┌──────────────────────┬───────────────┬───────────────────────┐│ cp (Linux) │ target exists │ target does not exist │├──────────────────────┼───────────────┼───────────────────────┤│ cp -a tmp/24 target │ target/24/… │ target/… │├──────────────────────┼───────────────┼───────────────────────┤│ cp -a tmp/24/ target │ target/24/… │ target/… │└──────────────────────┴───────────────┴───────────────────────┘ ditto Adding on…ditto(1) is a Mac OS tool to clone a directory. It makes as an exact a copy as possible. Slashes do not matter at all. Whether or not the target directory already exists does not matter. ┌──────────────────────┬──────────┐│ ditto tmp/24 target │ target/… │└──────────────────────┴──────────┘ If target already exists, previously-existing files are overwritten unconditionally. Files in target which are not in the source are left alone.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112194/" ] }
292,674
I've got a kernel image and an empty filesystem and need to fill it with programs, like a desktop, some basic utilities and software. Can I install apt-get somehow to do that for me? I'm not sure exactly how apt-get works so there might be some issues with me not having a defined distro or something. If it is possible, where would one get the source to build it?
apt 's source code is available on Salsa , but it isn't designed to serve as a basis for bootstrapping a distribution from source. To bootstrap a Debian-based ( apt -based) distribution, you need to use a tool such as debootstrap , which itself needs quite a number of programs to run (although since your filesystem isn't empty, but includes the basic Linux tools, it might already have everything needed). Usually, bootstrapping a system in this way involves either using another Debian-style system, or running an installer. If you want to build a system from source, pulling yourself up by your bootstraps, you should look at Linux from Scratch .
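For reference, a minimal debootstrap invocation looks roughly like this (suite, target path and mirror are placeholders to adapt):

debootstrap stable /mnt/target http://deb.debian.org/debian
chroot /mnt/target /bin/bash    # apt/apt-get now work inside the new tree

That only gives you a minimal apt-capable userland; a kernel, a bootloader, an fstab and so on still have to be added before the result boots on its own.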
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175129/" ] }
292,721
I just followed the answer in: How to install Octave without GUI in Ubuntu 16.04? to install octave in ubuntu 16.04 and apparently it worked fine. Running octave-cli in terminal apparently works But when I went to run octave clicking on its icon I got the following error: The settings file /home/user/.config/octave/qt-settings does not exist and can not be created. Make sure you have read and write permissions to /home/user/.config/octave Octave GUI must be closed now. Can anyone please help me fix this, so I can run octave?
cd .config/octave
sudo chown $USER qt-settings
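If more than that one file is owned by root, which typically happens after the GUI has been started once with sudo, taking ownership of the whole directory in one go also works. A sketch assuming your primary group matches your username:

sudo chown -R "$USER":"$USER" ~/.config/octave

Then start Octave again as your normal user and the qt-settings file is created or reused without the error.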
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177205/" ] }
292,744
I've been using rsync to synchronise folders and it works well. The problem is that more recently I've started syncing folders with larger files in them, and it takes much longer than I'd like it to (due to its hashing comparison). I've noticed that the cp commands can do one part of rsyncs job much quicker, by invoking the -u option. This means newer files in the source can be added to the destination easily using this method. But what I need to figure out is the second part of the rsync job which I find useful. This is a command which can recursively compare the list of files in all folders, and delete those which no longer feature in the source, but which are still in the destination (but without performing a hash on all the files, a simple comparison using the ls command, for example, is good enough for what I want). Is this possible?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177240/" ] }
292,797
Using the ↑ and ↓ directional arrow keys to to scroll up and down the page in the GNU info pages causes the info page viewer to unexpectedly jump to another node, this is really disorienting. How can I scroll down through the page and just have the info viewer/pager stop when it gets to the top or the bottom, and then require a separate command to jump to a different node?
Posting as an answer, as requested. Just don't use info to browse info pages. There is a standalone info browser named pinfo , and Emacs has, of course, its own Info Mode . If you're using Vim you can also install the ref and ref-info plugins. ref is essentially a generic hypertext browser. It comes with plugins for a number of sources, such as man pages, perldoc , pydoc , etc., but not for info . ref-info is a plugin for ref that adds capability to browse info pages. The combination ref + ref-info makes a decent info browser, with the only drawback that it can only search through the page it currently displays. A partial workaround for this problem is to tell the info backend to produce larger chunks before feeding them to ref-info , by adding this line to your vimrc: let g:ref_info_cmd = 'info --subnodes -o -' You'd then browse info pages like this: :Ref info <page> Of course, you can also use ref with the other sources ( :Ref man <page> etc.). Read the manual for more information.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
292,800
Is it possible to create users (non-sudoers) that can only use the command "mysql" ? or set of commands ? Context: I have a server hosting a mysql database. I want to give some users the rights to connect to the server via SSH but only so they can use the mysql client. I am using a the Amazon linux AMI Ps : Using the command option in the authorized_keys file is not a correct solution.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177279/" ] }
292,830
After I look through the help . I didn't found much difference between them. -g, --gid GROUP The group name or number of the user's initial login group. The group name must exist. A group number must refer to an already existing group. -G, --groups GROUP1[,GROUP2,...[,GROUPN]]] A list of supplementary groups which the user is also a member of. Each group is separated from the next by a comma, with no intervening whitespace. The groups are subject to the same restrictions as the group given with the -g option. The default is for the user to belong only to the initial group. If they are the same. Why both they exist?
-g sets the initial, or primary, group. This is what appears in the group field in /etc/passwd . On many distributions the primary group name is the same as the user name. -G sets the supplementary, or additional, groups. These are the groups in /etc/group that list your user account. This might include groups such as sudo , staff , etc.
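A couple of concrete invocations to make the distinction visible (the group names are only examples):

useradd -m -g users -G sudo,adm alice   # primary group 'users', supplementary 'sudo' and 'adm'
id alice                                # gid= shows the primary group, groups= shows all of them
usermod -aG docker alice                # later: append one more supplementary group

Note the -a with usermod -G: without it the supplementary list is replaced rather than extended, which is a classic way to accidentally drop a user out of sudo.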
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53618/" ] }
292,835
I make bus timetable booklets for drivers every day, and I have 4 PDF files: Special Instruction Sheet (Every day different) Bus daily timetable (which is always different) Bus Assistant daily report (internal form, always the same) Bus Driver Daily Report (internal form, always the same) So I can print them double-sided and stapled from a photocopier, I have been using PDFEscape.com to manually put them into the following order: A=front side of A4 paper. B=reverse side of A4 paper. 1a. Special Instructions 1b. BLANK PAGE 2a. Timetable 2b. BLANK PAGE 3a. Timetable 3b. BLANK PAGE ....... R-2a. Timetable second-last page R-2b. BUS ASSISTANT FORM R-1a. Timetable last page R-1b. BUD DRIVER FORM The timetables are individual PDFs exported from a scheduling program, and the problem is they are not always the same number of pages (usually 1-5, but can be up to 15). It is so time-consuming. Does anyone know what script I could write that will do this? Thanks in advance!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177306/" ] }
292,843
On a Linux VM I would like to TEST the NAGIOS monitoring more deeply than just switching off the VM or disconnecting the virtual NIC; I would like to test or "enforce a disk space alarm" through occupying several % of free space for a short period of time. I know that I could just use a dd if=/dev/zero of=/tmp/hd-fillup.zeros bs=1G count=50 or something like that... but this takes time and loads the system and requires again time when removing the test files with rm. Is there a quick (almost instant) way to fill up a partition that does not load down the system and takes a lot of time ? im thinking about something that allocates space, but does not "fill" it.
The fastest way to create a file in a Linux system is using fallocate : fallocate -l 50G file From man: fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeros. Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0), Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).
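One detail that matters for this particular monitoring test: fallocate really reserves the blocks, so df (and therefore Nagios) sees the space disappear immediately, whereas a sparse file made with truncate -s 50G looks huge in ls but consumes almost nothing and will not trip the alarm. A quick test cycle:

fallocate -l 50G /tmp/hd-fillup    # space is gone as far as df is concerned
df -h /tmp                         # confirm usage jumped
# ... wait for the Nagios alert to fire ...
rm /tmp/hd-fillup                  # cleanup is instant, no real I/O involved

On filesystems without fallocate support, the dd approach from the question remains the fallback.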
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/292843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119964/" ] }
292,868
I use Linux on a Macbook with a UK/GB keyboard and I customize the keymap to solve some problems Apple's weird keyboard layout causes. I use xmodmap to do this. I'd like to try Wayland, but xmodmap doesn't work in that. How can I achieve a similar result in Wayland? The .Xmodmap file I use contains the following: keycode 12 = 3 numbersign 3 sterling sterling sterling threesuperior sterlingclear Controlclear Mod4add Control = Control_L Super_Radd Mod4 = Super_Lkeysym Caps_Lock = NoSymbol Caps_Lock Line 1: On UK keyboards Shift - 3 is £, so # usually has its own key near Return . But on the Mac # is obtained with Altgr - 3 . As a programmer I use # far more than £ so this line swaps them over. Selecting US layout also achieves this, but doing that in Linux also swaps some other commonly used keys, whereas in OS X those other keys are unaffected by US/UK. Lines 2-5: Make the right Cmd key act as Right- Ctrl , because this keyboard has no physical right Ctrl key. Line 6: Makes CapsLock only work if you press it with Shift . Not Mac-specific, this should be a standard option for all OSs and keyboards.
Unfortunately modifying the system XKB database in /usr/share/X11/xkb is the only way; from your other question it looks like you've gotten that part working. The limitation is mostly due to the immaturity of Wayland and a design oversight in XKB. Tools like setxkbmap and xkbcomp provide an -I option to add a user-defined database to search (eg ~/.xkb or ~/.config/xkb , with files and subdirectories laid out like the system database). These tools interact with the X server, so they might be useful configuring the Xwayland compatibility layer for running X applications under Wayland. But they do not at present speak the Wayland protocols. Wayland protocols are still maturing. Currently it seems the input-method and text-input protocols are most relevant, and both are unstable. Neither mention anything about altering a defined keymap; these details are left to the compositor. GNOME and KDE both provide keyboard-handling settings daemons that should handle system XKB options, including changing on the fly. To the best of my knowledge, there's no way to tell either about user customizations. As far as I know, Weston and other compositors rely on config files or environment variables to set XKB options at startup, and provide no way to change them other than exiting and restarting. Even in XKB itself, this is not fully supported. Your custom symbols file can include other system symbols files. But at present there's no include functionality for XKB rules files, so even if you had a tool that would talk to the Wayland compositor and would look up your personal customizations, you'd have to manually include all the rules you want to use (ie copy rules/evdev* from the system XKB and modify it). libxkbcommon has an open issue on this topic and a related bug .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109504/" ] }
292,874
Hi how can I use gsub to replace a word which has parentheses. Here I want to replace ABC(T) with ABC/G awk ' {gsub("ABC\(T\)","ABC/G")}; Print $0' "$FILENAME" > tmp.tmp && mv tmp.tmp "$FILENAME"
You could simplify the whole thing if you use the // format for gsub : $ echo "ABC(T)" | awk '{gsub(/ABC\(T\)/,"ABC/G")}; print $0'ABC/G Then, you could simplify further by using print with no arguments (which is the same as print $0 ) or the 1 shorthand for printing (the default awk action for expressions that evaluate to true, such as 1; is to print the current line): $ echo "ABC(T)" | awk '{gsub(/ABC\(T\)/,"ABC/G")}1'ABC/G Personally, however, I wouldn't use awk for this, the syntax is shorter and cleaner with other tools: $ echo "ABC(T)" | sed 's|ABC(T)|ABC/G|'ABC/G$ echo "ABC(T)" | perl -pe 's|ABC\(T\)|ABC/G|'ABC/G
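It may be worth spelling out why the original gsub("ABC\(T\)", ...) misbehaves: inside a double-quoted awk string the backslash escapes are processed first and only then is the result used as a regular expression, so matching literal parentheses that way needs doubled backslashes. If you want to keep the string form, this version works:

echo "ABC(T)" | awk '{gsub("ABC\\(T\\)","ABC/G")}1'

(The capital P in the original Print $0 is also not the print statement; awk keywords are lowercase.) With the /.../ form above, the backslashes go straight to the regex engine, which is why a single \( is enough there.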
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120160/" ] }
292,902
In htop I would like to order processes by CPU utilization but the top processes bounce back and forth so it is difficult to view the details of each process. I would like to be able to sort in whatever preferred order then lock that order while each field continues to update.
A similar question was asked on superuser in February https://superuser.com/questions/1036978/how-pause-list-of-process-in-htop The accepted answer is to use the -d option to change the delay of the refresh. From the man page: -d --delay=DELAY Delay between updates, in tenths of seconds e.g. htop -d 100 to refresh every 10 seconds. Judging by this bug report there is currently no way to pause htop completely. The suggestion is again to use the -d option. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=821904
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177113/" ] }
292,950
I run the following line from my bash script in order to run the script /usr/run.pl on remote machines xargs -P "$numprocs" -I '{}' -n 1 -- perl -e 'alarm shift; exec @ARGV' -- "$timeout" ssh -nxaq -o ConnectTimeout=5 -o StrictHostKeyChecking=no '{}' /usr/run.sh < <(echo -e "$node") but I get on the console the following standard output Connection to 143.6.22.4 closed by remote host. xargs: perl: exited with status 255; aborting where I need to put the 1>/dev/null in my syntax to avoid this message?
There are two streams of output from a process that are generally sent to the terminal: standard output (which is bound to file descriptor 1), and standard error (which is bound to file descriptor 2). This allows for easily separated capture of expected output, and error messages or other diagnostic information that is not generally expected to be output. When you use the redirect ( > ), that only kicks standard output to the specified file or location, leaving standard error untouched. This allows you to see if there are errors (like those you are seeing). If you want to send all output, including standard error, you need to redirect both streams: /path/to/program arg1 arg2 > /dev/null 2>&1 Or, alternatively but more explicitly: /path/to/program arg1 arg2 > /dev/null 2> /dev/null The syntax 2>&1 means "Send output currently going to file descriptor 2 to the same place that output going to file descriptor 1 is going to". > is omitting the default of FD1, so it is semantically the same as 1> , which might make 2> make more sense.
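A small illustration of why the order of redirections matters (program name and log file are only placeholders):
# both streams end up in run.log: FD 1 is pointed at the file first,
# then FD 2 is made a copy of FD 1
/path/to/program arg1 arg2 > run.log 2>&1
# here stderr still goes to the terminal: FD 2 is copied from FD 1
# *before* FD 1 is redirected to the file
/path/to/program arg1 arg2 2>&1 > run.log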
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
292,955
Using the following code, I'm trying to read a file into an array: GROUPS=()while IFS=: read -r g1 g2 g3 g4do GROUPS+=("$g3") echo "$g3"done < /etc/group This doesn't work, it doesn't even output anything, however, if I leave only the echo it does print the contents of the file. As long as the instruction to add to an array is present somewhere in the loop, nothing is printed. It seems very strange since I can do this to other files without problems. Any idea what's causing this? I checked with bash -x and when the problematic instruction is present, it doesn't even enter the loop. Also, in case it is relevant, I'm running this as root.
You unluckily selected a name for the array which is already reserved by bash itself and is read only, so you cannot change it. GROUPS An array variable containing the list of groups of which the current user is a member. Assignments to GROUPS have no effect and return an error status. If GROUPS is unset, it loses its special properties, even if it is subsequently reset. Just use another name and the code should work.
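For example, renaming the array in the script from the question is enough (a sketch; the new name is arbitrary):
group_ids=()
while IFS=: read -r g1 g2 g3 g4
do
    group_ids+=("$g3")
    echo "$g3"
done < /etc/group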
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/292955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177384/" ] }
292,999
Consider the following Docker container: docker run --rm -it -v /tmp:/mnt/tmp alpine sh This mounts the host directory /tmp into /mnt/tmp inside of the alpine container. Now, on the Host System I mount a NFS volume to the /tmp directory: mkdir /tmp/nfsmount -t nfs4 192.168.1.100:/data /tmp/nfs The mount works on the Host System, and I see the following: # ls /tmp/nfsfile1 file2 file3# But on the Docker Container, I see a blank directory: # ls /mnt/tmp/nfs# I know that I can get around this by doing the mount directly in the Docker Container. But I am really interested to know why the mount works on the host container but not in the docker container?
This happens because the volume is using private mount propagation. This means that once the mount happens, any changes that happen on the origin side (e.g. the "host" side in the case of Docker) will not be visible underneath the mount. There are a couple of ways to handle this: Do the NFS mount first, then start the container. The mount will propagate to the container, however as before any changes to the mount will not be seen by the container (including unmounts). Use "slave" propagation. This means that once the mount is created, any changes on the origin side (docker host) will be able to be seen in the target (in the container). If you happen to be doing nested mounts, you'll want to use rslave ( r for recursive). There is also "shared" propagation. This mode would make changes to the mountpoint from inside the container propagate to the host, as well as the other way around. Since your user wouldn't even have privileges to make such changes (unless you add CAP_SYS_ADMIN), this is probably not what you want. You can set the propagation mode when creating the mount like so: $ docker run -v /foo:/bar:private The other alternative would be to use a volume rather than a host mount.You can do this like so: $ docker volume create \ --name mynfs \ --opt type=nfs \ --opt device=:<nfs export path> \ --opt o=addr=<nfs host> \ mynfs$ docker run -it -v mynfs:/foo alpine sh This will make sure to always mount in the container for you, doesn't rely on having the host setup in some specific way or dealing with mount propagation. note : the : at the front of the device path is required, just something weird about the nfs kernel module. note : Docker does not currently resolve <nfs host> from a DNS name (it will in 1.13) so you will need to supply the ip address here. More details on "shared subtree" mounts: https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
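Applied to the command from the question, the slave-propagation variant might look like this (a sketch, assuming a Docker version that accepts propagation flags on -v and that the host mount point can be slaved):
docker run --rm -it -v /tmp:/mnt/tmp:rslave alpine sh
# an NFS mount made later on the host under /tmp should then
# become visible at /mnt/tmp/nfs inside the container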
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/292999", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153971/" ] }
293,042
I have a big file with several columns on each line. I'm familiar with using cut -f -d to select specific columns by their number. I checked the manual for cut and it doesn't seem that there's a way to regex match columns. What I want to do specifically is: select the 2nd column of every line and also select all columns that contain the string "hello" (there may be none, if not it could be any column(s) and not the same column(s) for each line) What's the most convenient terminal tools for this operation? EDIT: Simplified example x ID23 a b c hello1x ID47 hello2 a b cx ID49 hello3 a b hello4x ID53 a b c d The result I would want is: ID23 hello1ID47 hello2ID49 hello3 hello4 or alternatively: ID23 hello1ID47 hello2ID49 hello3 hello4ID53 To elaborate the example given: Columns are defined by one space whether or not "only print if the string is present" is not really important, I can just grep for "hello" if necessary we can assume the string "hello" will never be in column 1 or 2.
If one space at the end of the line doesn't hurt you much: $ awk '{for(i=1;i<=NF;i++) if(i==2 || $i~"hello") printf $i" ";print ""}' fileID23 hello1 ID47 hello2 ID49 hello3 hello4 ID53 This doesn't assume anything about the position of the "hello" string.
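A variant without the trailing space, relying on the statement in the question that "hello" never appears in the first two columns:
awk '{out=$2; for(i=3;i<=NF;i++) if($i~"hello") out=out OFS $i; print out}' file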
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
293,048
I have a file both locally and on server. I first try running rsync in "dry-run" mode, to see if there are some differences between the files: $ rsync -aP --dry-run [email protected]:/home/dir [email protected]'s password: receiving incremental file listdir/myfile.txt This apparently means that the file dir/myfile.txt is different, as it would be updated. Then I check the same files with a diff : $ ssh [email protected] 'cat /home/dir/myfile.txt' | diff --report-identical-files - dir/[email protected]'s password: Files - and dir/myfile.txt are identical So, the files are identical, apparently. Why does then rsync want to update this file - and how could I confirm the reason from the command line?
rsync will report changes for permissions differences timestamp differences content (and filesize) differences In comments, @roaima pointed out that there is an option to give a summary of these changes, in the rsync manual page : -i, --itemize-changes output a change-summary for all updates You may find it useful, though the summary is terse and (in the version I have at hand) only reports the type (file, link or directory) and name . Here is what I see with rsync 3.0.9-4 and 3.1.1-3 on my Debian 7 and testing machines: cd+++++++++ backup-invisible-island/>f+++++++++ backup-invisible-island/.bash_historycL+++++++++ backup-invisible-island/conf -> ../system/invisible-island.net/confcL+++++++++ backup-invisible-island/statistics -> ../system/invisible-island.net/statisticscd+++++++++ backup-invisible-island/anon_ftp/cL+++++++++ backup-invisible-island/anon_ftp/AdaCurses -> pub/AdaCursescL+++++++++ backup-invisible-island/anon_ftp/DEBS -> pub/DEBScL+++++++++ backup-invisible-island/anon_ftp/GIT -> pub/GIT For my own use, changes of timestamps for directories are relatively unimportant. I use a script which shows only files which are changed: rsync: show when newer file on destination is to be overwritten
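Combining it with the dry run from the question would look roughly like this (host and path are placeholders):
rsync -aPn -i user@host:/home/dir .
# a line such as ">f..t......" then shows what differs:
# t = timestamps, s = size, p = permissions, o/g = owner/group, c = checksum/content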
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171267/" ] }
293,058
I'm looking for a way to list all files in a directory that contain the full set of keywords I'm seeking, anywhere in the file. So, the keywords need not to appear on the same line. One way to do this would be: grep -l one $(grep -l two $(grep -l three *)) Three keywords is just an example, it could just as well be two, or four, and so on. A second way I can think of is: grep -l one * | xargs grep -l two | xargs grep -l three A third method, that appeared in another question , would be: find . -type f \ -exec grep -q one {} \; -a \ -exec grep -q two {} \; -a \ -exec grep -q three {} \; -a -print But that's definitely not the direction I'm going here. I want something that requires less typing, and possibly just one call to grep , awk , perl or similar. For example, I like how awk lets you match lines that contain all keywords , like: awk '/one/ && /two/ && /three/' * Or, print just the file names: awk '/one/ && /two/ && /three/ { print FILENAME ; nextfile }' * But I want to find files where the keywords may be anywhere in the file, not necessarily on the same line. Preferred solutions would be gzip friendly, for example grep has the zgrep variant that works on compressed files. Why I mention this, is that some solutions may not work well given this constraint. For example, in the awk example of printing matching files, you can't just do: zcat * | awk '/pattern/ {print FILENAME; nextfile}' You need to significantly change the command, to something like: for f in *; do zcat $f | awk -v F=$f '/pattern/ { print F; nextfile }'; done So, because of the constraint, you need to call awk many times, even though you could do it only once with uncompressed files. And certainly, it would be nicer to just do zawk '/pattern/ {print FILENAME; nextfile}' * and get the same effect, so I would prefer solutions that allow this.
awk 'FNR == 1 { f1=f2=f3=0; }; /one/ { f1++ }; /two/ { f2++ }; /three/ { f3++ }; f1 && f2 && f3 { print FILENAME; nextfile; }' * If you want to automatically handle gzipped files, either run this in a loop with zcat (slow and inefficient because you'll be forking awk many times in a loop, once for each filename) or rewrite the same algorithm in perl and use the IO::Uncompress::AnyUncompress library module which can decompress several different kinds of compressed files (gzip, zip, bzip2, lzop). or in python, which also has modules for handling compressed files. Here's a perl version that uses IO::Uncompress::AnyUncompress to allow for any number of patterns and any number of filenames (containing either plain text or compressed text). All args before -- are treated as search patterns. All args after -- are treated as filenames. Primitive but effective option handling for this job. Better option handling (e.g. to support a -i option for case-insensitive searches) could be achieved with the Getopt::Std or Getopt::Long modules. Run it like so: $ ./arekolek.pl one two three -- *.gz *.txt1.txt.gz4.txt.gz5.txt.gz1.txt4.txt5.txt (I won't list files {1..6}.txt.gz and {1..6}.txt here...they just contain some or all of the words "one" "two" "three" "four" "five" and "six" for testing. The files listed in the output above DO contain all three of the search patterns. Test it yourself with your own data) #! /usr/bin/perluse strict;use warnings;use IO::Uncompress::AnyUncompress qw(anyuncompress $AnyUncompressError) ;my %patterns=();my @filenames=();my $fileargs=0;# all args before '--' are search patterns, all args after '--' are# filenamesforeach (@ARGV) { if ($_ eq '--') { $fileargs++ ; next }; if ($fileargs) { push @filenames, $_; } else { $patterns{$_}=1; };};my $pattern=join('|',keys %patterns);$pattern=qr($pattern);my $p_string=join('',sort keys %patterns);foreach my $f (@filenames) { #my $lc=0; my %s = (); my $z = new IO::Uncompress::AnyUncompress($f) or die "IO::Uncompress::AnyUncompress failed: $AnyUncompressError\n"; while ($_ = $z->getline) { #last if ($lc++ > 100); my @matches=( m/($pattern)/og); next unless (@matches); map { $s{$_}=1 } @matches; my $m_string=join('',sort keys %s); if ($m_string eq $p_string) { print "$f\n" ; last; } }} A hash %patterns is contains the complete set of patterns that files have to contain at least one of each member $_pstring is a string containing the sorted keys of that hash. The string $pattern contains a pre-compiled regular expression also built from the %patterns hash. $pattern is compared against each line of each input file (using the /o modifier to compile $pattern only once as we know it won't ever change during the run), and map() is used to build a hash (%s) containing the matches for each file. Whenever all the patterns have been seen in the current file (by comparing if $m_string (the sorted keys in %s ) is equal to $p_string ), print the filename and skip to the next file. This is not a particularly fast solution, but is not unreasonably slow. The first version took 4m58s to search for three words in 74MB worth of compressed log files (totalling 937MB uncompressed). This current version takes 1m13s. There are probably further optimisations that could be made. One obvious optimisation is to use this in conjunction with xargs 's -P aka --max-procs to run multiple searches on subsets of the files in parallel. To do that, you need to count the number of files and divide by the number of cores/cpus/threads your system has (and round up by adding 1). 
e.g. there were 269 files being searched in my sample set, and my system has 6 cores (an AMD 1090T), so: patterns=(one two three)searchpath='/var/log/apache2/'cores=6filecount=$(find "$searchpath" -type f -name 'access.*' | wc -l)filespercore=$((filecount / cores + 1))find "$searchpath" -type f -print0 | xargs -0r -n "$filespercore" -P "$cores" ./arekolek.pl "${patterns[@]}" -- With that optimisation, it took only 23 seconds to find all 18 matching files. Of course, the same could be done with any of the other solutions. NOTE: The order of filenames listed in the output will be different, so may need to be sorted afterwards if that matters. As noted by @arekolek, multiple zgrep s with find -exec or xargs can do it significantly faster, but this script has the advantage of supporting any number of patterns to search for, and is capable of dealing with several different types of compression. If the script is limited to examining only the first 100 lines of each file, it runs through all of them (in my 74MB sample of 269 files) in 0.6 seconds. If this is useful in some cases, it could be made into a command line option (e.g. -l 100 ) but it has the risk of not finding all matching files. BTW, according to the man page for IO::Uncompress::AnyUncompress , the compression formats supported are: zlib RFC 1950 , deflate RFC 1951 (optionally), gzip RFC 1952 , zip, bzip2, lzop, lzf, lzma, xz One last (I hope) optimisation. By using the PerlIO::gzip module (packaged in debian as libperlio-gzip-perl ) instead of IO::Uncompress::AnyUncompress I got the time down to about 3.1 seconds for processing my 74MB of log files. There were also some small improvements by using a simple hash rather than Set::Scalar (which also saved a few seconds with the IO::Uncompress::AnyUncompress version). PerlIO::gzip was recommended as the fastest perl gunzip in https://stackoverflow.com/a/1539271/137158 (found with a google search for perl fast gzip decompress ) Using xargs -P with this didn't improve it at all. In fact it even seemed to slow it down by anywhere from 0.1 to 0.7 seconds. (I tried four runs and my system does other stuff in the background which will alter the timing) The price is that this version of the script can only handle gzipped and uncompressed files. Speed vs flexibility: 3.1 seconds for this version vs 23 seconds for the IO::Uncompress::AnyUncompress version with an xargs -P wrapper (or 1m13s without xargs -P ). #! /usr/bin/perluse strict;use warnings;use PerlIO::gzip;my %patterns=();my @filenames=();my $fileargs=0;# all args before '--' are search patterns, all args after '--' are# filenamesforeach (@ARGV) { if ($_ eq '--') { $fileargs++ ; next }; if ($fileargs) { push @filenames, $_; } else { $patterns{$_}=1; };};my $pattern=join('|',keys %patterns);$pattern=qr($pattern);my $p_string=join('',sort keys %patterns);foreach my $f (@filenames) { open(F, "<:gzip(autopop)", $f) or die "couldn't open $f: $!\n"; #my $lc=0; my %s = (); while (<F>) { #last if ($lc++ > 100); my @matches=(m/($pattern)/ogi); next unless (@matches); map { $s{$_}=1 } @matches; my $m_string=join('',sort keys %s); if ($m_string eq $p_string) { print "$f\n" ; close(F); last; } }}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111915/" ] }
293,069
We're using Ubuntu 16.04 LTS and want to use multiple Tomcat installations which should start at boot time. One of these Tomcats would host a Jenkins which will deploy a webapp onto the other tomcat and restart it. To start the services, we added systemd service scripts. What we noticed is that when one tomcat is stopped or killed, the other is stopped as well. We reduced this to two simple scripts that only use /usr/bin/yes: Unit A [Unit]Description=AAfter=syslog.target network.target[Service]Type=simpleExecStart=/usr/bin/yesExecStop=/bin/kill -15 $MAINPIDUser=tomcat8Group=tomcat8[Install]WantedBy=multi-user.target Unit B [Unit]Description=BAfter=syslog.target network.target[Service]Type=simpleExecStart=/usr/bin/yesExecStop=/bin/kill -15 $MAINPIDUser=tomcat8Group=tomcat8[Install]WantedBy=multi-user.target What happens:When a service is killed ( kill -9 ), both services are gone afterwards. Why are both services killed? How can we prevent this? Is running more than one service under a single user discouraged, or is this good practice? EDIT: For clarification - we did also try to do the same when launching the tomcats without systemd. In this instance, the behaviour was as expected: only the killed service was stopped while the other lived on. EDIT2: The user is not a front-end user that logs in/out at all. It's purely a system user to restrict access of the services.
The changelog for systemd (v230) says: systemd-logind will now by default terminate user processes that are part of the user session scope unit (session-XX.scope) when the user logs out. This behavior is controlled by the KillUserProcesses= setting in logind.conf, and the previous default of "no" is now changed to "yes". This means that user sessions will be properly cleaned up after, but additional steps are necessary to allow intentionally long-running processes to survive logout. So this is default behaviour. It also explains what to do to undo the change: logind.conf , set KillUserProcesses= to no (and --without-kill-user-processes option to configure ) But the changelog also includes a ... While the user is logged in at least once, [email protected] is running, and any service that should survive the end of any individual login session can be started at a user service or scope using systemd-run . systemd-run(1) man page has been extended with an example which shows how to run screen in a scope unit underneath [email protected] . The same command works for tmux. and After the user logs out of all sessions, [email protected] will be terminated too, by default, unless the user has lingering enabled. To effectively allow users to run long-term tasks even if they are logged out, lingering must be enabled for them. See loginctl(1) for details. The default polkit policy was modified to allow users to set lingering for themselves without authentication . That one is more important since it uses the default (kill'm all) with a way to provide exceptions: enable lingering . Some more info: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825394
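A short sketch of the two workarounds for the tomcat8 user from the question (lingering is per-user; the logind.conf change is system-wide and only takes effect after logind is restarted):
# let processes of this user survive the end of all its sessions
loginctl enable-linger tomcat8
# or restore the old default for everyone, in /etc/systemd/logind.conf:
#   KillUserProcesses=no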
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293069", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176939/" ] }
293,107
I just wrote the following bash script to check the ping access on list of Linux machines: for M in $list do ping -q -c 1 "$M" >/dev/null if [[ $? -eq 0 ]] then echo "($C) $MACHINE CONNECTION OK" else echo "($C) $MACHINE CONNECTION FAIL" fi let C=$C+1done This prints: (1) linux643 CONNECTION OK (2) linux72 CONNECTION OK (3) linux862 CONNECTION OK (4) linux12 CONNECTION OK (5) linux88 CONNECTION OK (6) Unix_machinetru64 CONNECTION OK How can I use printf (or any other command) in my bash script in order to print the following format? (1) linux643 ............ CONNECTION OK (2) linux72 ............. CONNECTION OK (3) linux862 ............ CONNECTION OK (4) linux12 ............. CONNECTION OK (5) linux88 ............. CONNECTION FAIL (6) Unix_machinetru64 ... CONNECTION OK
Using parameter expansion to replace spaces resulting from %-s by dots: #!/bin/bashlist=(localhost google.com nowhere)C=1for M in "${list[@]}"do machine_indented=$(printf '%-20s' "$M") machine_indented=${machine_indented// /.} if ping -q -c 1 "$M" &>/dev/null ; then printf "(%2d) %s CONNECTION OK\n" "$C" "$machine_indented" else printf "(%2d) %s CONNECTION FAIL\n" "$C" "$machine_indented" fi ((C=C+1))done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
293,275
I have several ASCII files, having one column of data, as follows: DATA1564189612381479156218941489.... I need to implement a column containing the date. I know that the each data set begins in 1900-01-01 (Year-Month-Day). Therefore, I would like to reformat each file as follows: DATE DATA1900-01-01 15641900-01-02 18961900-01-03 12381900-01-04 14791900-01-05 15621900-01-06 18941900-01-07 1489..... How can I do this?
If you have access to GNU date , you can do: $ ( date="1899-12-31"; printf 'DATE\tDATA\n'; tail -n+2 file | while read line; do date="$(date -d "$date + 1 day" +%F)" printf '%s\t%s\n' "$date" "$line" done; ) > newfile Explanation date="1899-12-31" : set the variable $date to the start date minus one day. printf 'DATE\tDATA\n'; : print the column headers. tail -n+2 file | : print everything except the first line (the header) of your file, and pass that to the while loop. while read line; do ... ; done : process each input line, saving it as $line . date="$(date -d "$date + 1 day" +%F)" : add one day to the value of $date . printf '%s\t%s\n' "$date" "$line" : print the current $date and $line variables. ( ... ) > newfile : this makes the entire command run in a subshell so you can capture the output of the first printf and the loop and redirect it into newfile .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125347/" ] }
293,278
I want to monitor a file. So I have tailed a file using below command, to execute a script per line. tail -3f logfile.log | grep "action.*add" | sed -u -e "s/^/'/" -e "s/$/'/" | xargs -L 1 -P 5 bash testscript.sh But it seems that script is not getting executed. I observed that grep is not giving any input for next pipe. When I tried, tail -3f logfile.log | grep "action.*add" it worked. But when given next filter like sed , grep , xargs etc. It didn't worke like the one shown below. tail -3f /var/tmp/rabbitmq-tracing/logfile.log | grep "action.*add" | grep add Please help me to know why this is happening and how to overcome this. Edit 1: Basically anything like below should work and it was working previously. Confused why it is not working now. tail -f file.txt | grep something | grep something | grep something EDIT 2: Output line after first grep will be a json string like below. And I want to give this line as input( enclosed in single quotes) to bash script. {"twNotif": {"originator": "api", "chain": "test", "txId": "08640-0050568a5514", "version": "1.0", "msgType": "api", "twData": {"api": {"hostId": "007bdcc5", "user": "test", "cmdTxt": "100599"}}, "action": "add", "store": "test", "msgTime": 1467280648.971042}}
use --line-buffered switch on grep tail -3f logfile.log | grep --line-buffered "action.*add" | sed -u -e "s/^/'/" -e "s/$/'/" | xargs -L 1 -P 5 bash testscript.sh from man grep: --line-buffered Use line buffering on output. This can cause a performance penalty. or you can use stdbuf read more stdbuf allows one to modify the buffering operations of the three standard I/O streams associated with a program. Synopsis: use this syntax: ... | stdbuf -oL grep ... | ... your example: tail -3f logfile.log | stdbuf -oL grep "action.*add" | sed -u -e "s/^/'/" -e "s/$/'/" | xargs -L 1 -P 5 bash testscript.sh
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109432/" ] }
293,304
I have a process that listens on an IP:port - in fact it is spark streaming which connects to a socket. The issue is that I wish to somehow create a server that connects to spark on one port and data is streamed into this server from another port. For example, the spark streaming example uses the netcat utility (for example nc -lk 5005 ). However, I have another service that listens for incoming messages and then spit out a message. So I need some kind of server that can listen to messages from service A and pass them to spark. My service A, relies on sockets. And my spark consumer relies on sockets. Here is what I have done so far is the forwarding from port to port but this does not seem to work: nc -X 4 -x 127.0.0.1:5005 localhost 5006 With the idea that the service A:5005 -> socket -> 5006 -> Spark I cannot seem to find the correct way to make this work. Some answers have suggested the following: socat tcp-l:5005,fork,reuseaddr tcp:127.0.0.1:5006 My spark socket reciever doesn't or cannot seem to connect. I get the error: Error connecting to 127.0.0.1:5006 - java.net.ConnectException: Connection refused
you can't use only nc to forward traffic: nc has no keep-alive or fork mode, so you must use another tool instead of nc ; for example socat or ncat socat ( source code ) this command listens on port 5050 and forwards everything to port 2020 socat tcp-l:5050,fork,reuseaddr tcp:127.0.0.1:2020 ncat readmore Ncat is a feature-packed networking utility which reads and writes data across networks from the command line. Ncat was written for the Nmap Project as a much-improved reimplementation of the venerable Netcat. ncat -l localhost 8080 --sh-exec "ncat example.org 80" And you can use other tools: goproxy : (download source code or bin file ) Listen on port 1234 and forward it to port 4567 on address "1.1.1.1" ./proxy tcp -p ":1234" -T tcp -P "1.1.1.1:4567" gost (Download source code and bin ) ENGLISH readme Listen on port 1234 and forward it to port 4567 on address "1.1.1.1" source ./gost -L tcp://:1234/1.1.1.1:4567 redir ( source code ) ./redir :1234 1.1.1.1:5678
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177669/" ] }
293,307
I've encountered the following problem when using debconf to configure a package in Ubuntu 16.04 during installation. More precisely, the package uses debconf to save configurations files, and right after, in the postinst script, a service is started. This service also uses a debconf module to load the configurations saved in the previous step. However, the service started with systemd fails with the error: debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable From what I could find, dpkg is still accessing this file with the debconf frontend, and the service crashes when it tries to start another frontend (the environmental variable DEBIAN_HAS_FRONTEND is not passed to the service). I have tried forcing the env variable DEBIAN_HAS_FRONTEND in the script, but then other errors appear. I think I should force starting the daemon after the dpkg process has ended, and debconf has already finished, any ideas?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177689/" ] }
293,340
I am making a tool script for my theme with 2 functions: Check for update, reinstall theme So here is the code for selection menu: PS3='Choose an option: 'options=("Check for update" "Reinstall theme")select opt in "${options[@]}"do case $opt in "Check for update") echo "Checking update" ;; "Reinstall theme") echo "Reinstalling" ;; *) echo invalid option;; esacdone When running it appear like this 1) Check for update2) Reinstall themeChoose an option: I type 1 and enter, the check for update command is performed The problem is when it finished performing the script, it re-display "Choose an option:" not with the menu. So it can make users hard to choose without the menu (especially after a long script) 1) Check for update2) Reinstall themeChoose an option: 1Checking updateChoose an option: So how can I re-display the menu after an option is performed
I'm guessing you really want something like this: check_update () { echo "Checking update"}reinstall_theme () { echo "Reinstalling theme"}while true; do options=("Check for update" "Reinstall theme") echo "Choose an option:" select opt in "${options[@]}"; do case $REPLY in 1) check_update; break ;; 2) reinstall_theme; break ;; *) echo "What's that?" >&2 esac done echo "Doing other things..." echo "Are we done?" select opt in "Yes" "No"; do case $REPLY in 1) break 2 ;; 2) break ;; *) echo "Look, it's a simple question..." >&2 esac donedone I've separated out the tasks into separate function to keep the first case statement smaller. I've also used $REPLY rather than the option string in the case statements since this is shorter and won't break if you decide to change them but forget to update them in both places. I'm also choosing to not touch PS3 as that may affect later select calls in the script. If I wanted a different prompt, I would set it once in and leave it (maybe PS3="Your choice: " ). This would give a script with multiple questions a more uniform feel. I've added an outer loop that iterates over everything until the user is done. You need this loop to re-display the question in the first select statement. I've added break to the case statements, otherwise there's no way to exit other than interrupting the script. The purpose of a select is to get an answer to one question from the user, not really to be the main event-loop of a script (by itself). In general, a select - case should really only set a variable or call a function and then carry on. A shorter version that incorporates a "Quit" option in the first select : check_update () { echo "Checking update"}reinstall_theme () { echo "Reinstalling theme"}while true; do options=("Check for update" "Reinstall theme" "Quit") echo "Choose an option: " select opt in "${options[@]}"; do case $REPLY in 1) check_update; break ;; 2) reinstall_theme; break ;; 3) break 2 ;; *) echo "What's that?" >&2 esac donedoneecho "Bye bye!"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177727/" ] }
293,365
I know that I can change a network interface's MAC address by bringing the interface down, using a command like ifconfig eth0 hw ether 00:11:22:33:44:55 or ip link set eth0 address 00:11:22:33:44:55 and then bringing the interface up again. A command like ip link show eth0 then confirms that the change was successful. But I recently discovered the files in /sys/class/net (originally from this answer): each one is a symbolic link to a directory containing files with information about the interface as documented here For example, on my machine, the ethernet interface is enp3s0 (I have no idea why it has such a strange name), and /sys/class/net/enp3s0 links to /sys/devices/pci0000:00/0000:00:1c.2/0000:03:00.0/net/enp3s0 . In this directory, I then found the file address which is just a text file containing the MAC address of the interface. But when I attempt to change the address using one of the commands above, the address file stays the same, so apparently, the commands do not change the MAC address on the lowest level. It is also not possible to change this file in any manner, not even the superuser has permission to do this. So now, just out of curiosity: Is it possible to change the MAC address of a network interface on this level?
Background /proc and /sys filesystems are just a view of kernel structures, both filesystems reside in memory. Although both filesystems are writable (well, some of the files in there are writable) it is unwise to assume that they behave the same way as real filesystems. Operations that allow you to write into a file inside /proc or /sys end as hooks and then as function calls. For example: # echo 3 > /proc/sys/vm/drop_caches Does not really write to that file, it ends up calling a kernel function. If a function is not defined for a certain write you will get: write error: Input/output error That is because it does not make sense to write to that file. It is not that different from writing to the character device of a USB device that has no driver associated with it. The kernel does not know what to do. There is no function defined for writes against /sys/class/net/enp3s0/address , therefore that is not a viable route to change the MAC address of that interface. Can I change the MAC address without calling ifconfig or ip link set ? Yes, you can. If you look at the code for iproute2 you will find a lot of argument parsing and a call to rtnl_talk . It looks as follows (this is from the ip/iplink.c file): /* lot of argument parsing and `req` setting */if (rtnl_talk(&rth, &req.n, 0, 0, NULL, NULL, NULL) < 0) exit(2);return 0; req.n in there is the MAC address being passed to the rtnetlink function rtnl_talk ( man rtnetlink is relevant here). If you write a program that performs this call it will fire a system call and update the MAC address. Yet, then you will be doing exactly the same as what ip link set does.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177750/" ] }
293,393
I'd like to find a way to find a given file by looking upward in the directory structure, rather than recursively searching through child directories. There's a node module that appears to do exactly what I want , but I don't want to depend on installing JavaScript or packages like that. Is there a shell command for this? A way to make find do that? Or a standard approach that I just wasn't able to find through Googling?
This is a direct translation of the find-config algorithm in generic shell commands (tested under bash, ksh, and zsh), where I use a return code of 0 to mean success and 1 to mean NULL/failure. findconfig() { # from: https://www.npmjs.com/package/find-config#algorithm # 1. If X/file.ext exists and is a regular file, return it. STOP # 2. If X has a parent directory, change X to parent. GO TO 1 # 3. Return NULL. if [ -f "$1" ]; then printf '%s\n' "${PWD%/}/$1" elif [ "$PWD" = / ]; then false else # a subshell so that we don't affect the caller's $PWD (cd .. && findconfig "$1") fi} Sample run, with the setup stolen copied and extended from Stephen Harris 's answer: $ mkdir -p ~/tmp/iconoclast$ cd ~/tmp/iconoclast$ mkdir -p A/B/C/D/E/F A/good/show $ touch A/good/show/this A/B/C/D/E/F/srchup A/B/C/thefile $ cd A/B/C/D/E/F$ findconfig thefile/home/jeff/tmp/iconoclast/A/B/C/thefile$ echo "$?"0$ findconfig foobar$ echo "$?"1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
293,396
Is there a Linux distribution certified with the Single UNIX Specification ? What are the primary reasons that most distributions don't get certified?
Yes, there are Inspur 's K-UX Huawei's EulerOS They've been certified by the Open Group as conforming to the UNIX 03 Product Standard . Currently no other Linux distros have the certification due to the high cost. The list of Unix-certified systems can be found below The Open Group official register of UNIX Certified Products https://en.wikipedia.org/wiki/Single_UNIX_Specification#Compliance https://en.wikipedia.org/wiki/POSIX#POSIX-certified See also UNIX®-Certified Linux-Based Operating Systems
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148780/" ] }
293,425
I've downloaded minecraft via the PPA & want to use mods, the only mods I can find need Forge to run, on Microsoft doing so is easy, anyone know how to on Linux? (I prefer to do it via terminal) (I'll add info or change info as I get responses) (I'm technically using lubuntu)
I figured it out after asking. I started out by going to: https://files.minecraftforge.net/ then I downloaded it to the desktop (I used the 1.10 version, but that doesn't really matter), then I used these commands in this order: $ cd Desktop $ java -jar jarfilename.jar And now it's modded. .;,;. (I used the command to launch the jar file because I wasn't able to do it any other way on my computer).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153351/" ] }
293,450
On my laptop, I have an onboard sound card, and also a connected bluetooth headset. I have configured the bluetooth device in /etc/asound.conf : # cat /etc/asound.confpcm.bluetooth { type bluetooth device 12:34:56:78:9a:bc profile "auto"}ctl.bluetooth { type bluetooth} Now I can play audio to my headset by specifying the new audio device, such as: mplayer -ao alsa:device=bluetooth file.mp3 If I want to play to my default device, I simply omit the device: mplayer file.mp3 However, I need to configure ALSA, so that all sound is sent to both devices by default, without having to explicitly set this per application. ie: mplayer file.mp3 should play both on the laptop soundcard, as well as in the bluetooth headset. How can I do that ?
Here's one way to do it from ~/.asoundrc ; example shows an on-board and soundblaster live card united under the default PCM. # duplicate audio to both devicespcm.!default plug:bothctl.!default { type hw card SB}pcm.both { type route; slave.pcm { type multi; slaves.a.pcm "sblive"; slaves.b.pcm "onboard"; slaves.a.channels 2; slaves.b.channels 4; bindings.0.slave a; bindings.0.channel 0; bindings.1.slave a; bindings.1.channel 1; bindings.2.slave b; bindings.2.channel 0; bindings.3.slave b; bindings.3.channel 1; bindings.4.slave b; bindings.4.channel 2; bindings.5.slave b; bindings.5.channel 3; } ttable.0.0 1; ttable.1.1 1; ttable.0.2 1; # front left ttable.1.3 1; # front right ttable.0.4 1; # copy front left to rear left ttable.1.5 1; # copy front right to rear right}ctl.both { type hw; card Live;}pcm.onboard { type dmix ipc_key 1024 slave { pcm "hw:0,1" period_time 0 period_size 2048 buffer_size 65536 buffer_time 0 periods 128 rate 48000 channels 4 } bindings { 0 0 1 1 2 2 3 3 }}pcm.sblive { type dmix ipc_key 2048 slave { pcm "hw:1,0" period_time 0 period_size 2048 buffer_size 65536 buffer_time 0 periods 128 rate 48000 channels 2 } bindings { 0 0 1 1 }}ctl.onboard { type hw card "SB"}ctl.sblive { type hw card "Live"} ( Source )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
293,454
How does sleep work on a Linux (Debian) box? I do not usually use the box with a keyboard and mouse. I usually SSH into it. Does it sleep, but then "wake up" when I connect to it? What if I were running a web server? Would it sleep until a GET request arrived, then wake up, or would it just ignore incoming packets when sleeping? I have a background process that I do not want suspended. How can I check from the command line what the sleep settings are? How does the "power management" capability decide when to sleep and when not to sleep? Does it look at the running processes and somehow decide to sleep based on what those processes are?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
293,495
I am making the check for update script for my theme I have 2 text files. First one is called "current.txt" and contains the current version. There is 4.1.1 string in that text file. Second one is called "latest.txt" and contains the latest version. There is 4.2 string in this text file. So here is the code echo "Checking update";x=$(cat ./current.txt)y=$(cat ./latest.txt)if [ "$x" -eq "$y" ]then echo There is version $y updateelse echo Version $x is the latest versionfi What it mean is if current.txt is NOT the same with latest.txt then it will say "there is version 4.2 update". If not, it will say "version 4.1.1 is the latest version" But when I try to run it. I get this error Checking update./test.sh: line 4: [: 4.1.1: integer expression expectedVersion 4.1.1 is the latest version So what am I doing wrong with this?
The test command, also named [ , has separate operators for string comparisons and integer comparisons: INTEGER1 -eq INTEGER2 INTEGER1 is equal to INTEGER2 vs STRING1 = STRING2 the strings are equal and STRING1 != STRING2 the strings are not equal Since your data is not strictly an integer, your test needs to use the string comparison operator. The last realization in the comments was that the "-eq" logic did not match the sense of the if/else echo statements, so the new snippet should be: ...if [ "$x" != "$y" ]then echo There is version $y updateelse echo Version $x is the latest versionfi
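If the two files could also end up holding a current version that is newer than the "latest" one, a version-aware comparison with sort -V (GNU coreutils) may be closer to what is wanted; a sketch:
newest=$(printf '%s\n' "$x" "$y" | sort -V | tail -n 1)
if [ "$newest" != "$x" ]
then
    echo There is version $y update
else
    echo Version $x is the latest version
fi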
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177727/" ] }
293,499
I would like to know if there's a command that can tell me how much storage are the .jpg files (for example). Something like if I do find / -iname "*.jpg" which can add the size of each file found and output a total.
find ./path/to/your/drive -type f -name '*.jpg' -exec du -ch {} + Or much faster find /path/to/your/drive -name "*.jpg" -print0 | du -ch --files0-from=- Or simply, du -ch /path/to/your/drive/*.jpg | grep total Or with help of awk , find /path/to/your/drive -iname "*.jpg" -ls | awk '{total += $7} END {print total}' On my system file size shows on seventh field, if it's different for you then adjust accordingly. As requested by OP in comment, if you want to find all images from a directory and total size you can use this command (suggested by @Stéphane Chazelas) find . -type f -exec file --mime-type {} + | sed -n 's|: image/[^[:blank:]]*$||p' | tr '\n' '\0' | du --files0-from=- -hc Or du -shc $(find . -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image' | cut -d':' -f1) | grep total
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91570/" ] }
293,570
I'm using Debian Jessie as a virtual machine host using libvirt/qemu/kvm. I've set some of the guest virtual machines to automatically start when the host OS boots up, this is working fine. For maintenance purposes, I'm running "service libvirt-guests stop" to shut all the guests down (but not the host). Once I've done my maintenance, I want to easily boot all the guests up again (without rebooting the host). Is there a single command that will start all the guest VMs up again? I'm interested in knowing about both: a command to start all the autostart-marked guests up again a command to start all the guests up again that were running before I ran "service libvirt-guests stop" Rebooting the host OS would achieve #1, but I don't want to reboot the host. I tried, "service libvirt-guests start" but it doesn't seem to do it.
Like @jason-harris solution. But simpler and start only marked for autostart. for i in $(virsh list --name --autostart); do virsh start "$i"; done UPD: I tested it on libvirt 3.2.0 (CentOS 7.4.1708)
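For the second part of the question, a similar loop over the currently inactive domains might do; note that it starts every defined guest that is shut off, not only those that happened to be running before the maintenance:
for i in $(virsh list --name --inactive); do virsh start "$i"; done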
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68915/" ] }
293,580
I executed the following line: which lsb_release 2&>1 /dev/null Output: error: no null in /dev When I verified the error using the ls /dev/null command, null was present in /dev . Why is this error occurring? I could not decipher the problem. UPDATE I just tried the above which command on someone else's system, it worked perfectly without generating the error that I got.
First of all, redirections can occur anywhere in the command line, not necessarily at the end or start. For example: echo foo >spamegg bar will save foo bar in the file spamegg . Also, there are two versions of which , one is shell builtin and the other is external executable (comes with debianutils in Debian). In your command: which lsb_release 2>&1 /dev/null by 2&>1 , you are redirecting the STDERR (FD 2) to where STDOUT is (FD 1) pointed at, not to /dev/null and this is done first. So the remaining command is: which lsb_release /dev/null As there is no command like /dev/null , hence the error. Note that, this behavior depends on whether which is a shell builtin or external executable, bash , ksh , dash do not have a builtin and use external which and that simply ignores the error, does not show any error message. On the other hand, zsh uses a builtin which and shows: /dev/null not found So presumably, that specific error is shown by the builtin which of the shell you are using. Also, it seems you wanted to just redirect the STDERR to /dev/null if lsb_release does not exist in the PATH i.e. which shows an error. If so, just redirect the STDERR to /dev/null : which lsb_release 2> /dev/null
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171377/" ] }
293,642
I'm currently attempting to follow Hardening Debian for the Desktop Using Grsecurity guide in order to install the 4.5.7 kernel with Grsecurity on my Kali Linux desktop. I am following that list of instructions verbatim , except for the fact that I'm trying to use Grsecurity's test patch for the 4.5.7 kernel and I'm running Kali Linux instead of straight Debian. Every time I attempt to compile the kernel, however, I get this error following the line "CC certs/system_keyring.o": CC certs/system_keyring.omake[2]: *** No rule to make target 'debian/certs/[email protected]', needed by 'certs/x509_certificate_list'. Stop.Makefile:951: recipe for target 'certs' failedmake[1]: *** [certs] Error 2make[1]: Leaving directory '/home/jc/Downloads/linux-4.5.7'debian/ruleset/targets/common.mk:295: recipe for target 'debian/stamp/build/kernel' failedmake: *** [debian/stamp/build/kernel] Error 2 I get this error, as I found out, for any kernel even if I apply no patches or modifications, so it has something to do with the tools I'm using to compile the kernel (apparently a system keychain of some sort). Can someone out there tell me how to fix my OS and compile my kernel? P.S. Here is the output of cat /proc/version : Linux version 4.6.0-kali1-amd64 ([email protected]) (gcc version 5.4.0 20160609 (Debian 5.4.0-4) ) #1 SMP Debian 4.6.2-2kali2 (2016-06-28)
I ran into this several years ago on a Debian build. In the .config file you copied from /boot , find and comment out the CONFIG_SYSTEM_TRUSTED_KEYS and CONFIG_MODULE_SIG_KEY lines. During the build you can use your own cert or just use a random one-time cert. Found the above in this thread .
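A sketch of doing the same non-interactively from the top of the kernel source tree; setting the option to an empty string has the same effect here as commenting it out (the option name is assumed to match current kernels — adjust it to whatever your .config actually contains):
sed -i 's@^CONFIG_SYSTEM_TRUSTED_KEYS=.*@CONFIG_SYSTEM_TRUSTED_KEYS=""@' .config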
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177987/" ] }
293,647
I am trying to set up a script that will loop over a set of directories, and do one thing when it finds .jpg files, and another when it finds .nef files. The problem is, if a directory does not have .jpg files for example (or .nef) then the glob entry is no longer an expanded glob, but just a string. For example: my_dir="pictures/"ext="JPG"for f in "$my_dir"*."$ext"; do echo $fdone if the my_dir folder has .JPG files in it, then they will be echoed correctly on the command line. pictures/one.JPGpictures/two.JPG However, if my_dir has no .JPG files, then the loop will enter for one iteration and echo: pictures/*.JPG how do I construct this so that if the glob has no matches, it does not enter the for loop?
This is normal and default behavior: If globbing fails to match any files/directories, the original globbing character will be preserved. If you want to get back an empty result instead, you can set the nullglob option in your script as follows: $ shopt -s nullglob$ for f in "$my_dir"*."$ext"; do echo $f; done$ You can disable it afterwards with: $ shopt -u nullglob
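Applied to the loop from the question, a sketch:
shopt -s nullglob
for f in "$my_dir"*."$ext"; do
    echo "$f"
done
shopt -u nullglob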
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177986/" ] }
293,648
From the manual of coreutils, for ln When creating a relative symlink in a different location than the current directory, the resolution of the symlink will be different than the resolution of the same string from the current directory . Therefore, many users prefer to first change directories to the location where the relative symlink will be created, so that tab-completion or other file resolution will find the same target as what will be placed in the symlink. The string stored in a relative symlink is determined completely by the source pathname and the target pathname, both specified as command line arguments to ln . I don't see how the current directory gets involved. So I don't quite understand the reason why many users prefer to change the current directory to the parent directory of a to-be-created relative symlink before creating it. Could you rephrase or give some examples? Thanks.
There are two things to remember: ln -s (without -r ) stores the target name literally, exactly as you pass it; if you pass a relative target, it resolves relative to the link name , not your current working directory. Example: I'm in /home/user/d0 and I want a link to /home/user/file so I do: ln -s ../file . ../file is a valid path from d0 . Now if there's a subdirectory d1 ( /home/user/d0/d1 ) and I want to place a link to ../file ( /home/user/file ) there without changing dirs, I need to do: ln -s ../../file d1/ because the relative path needs to be relative to the link name , not my current working directory. ../../file (probably) resolves to nothing relative to d0 (unless there's a file named /home/file ), so I won't get autocompletion for it, which might make the operation more error prone. So I change into d1 first: cd d1; ln -s ../../file . and now ../../file makes sense relative to both the current directory and the link name; autocompletion kicks in, and I get my assurance I've got the right name. GNU ln has a --relative|-r flag which makes this easier by saving you from having to compose these relative paths manually. With it, you can use a path relative to the current directory or an absolute path, and it'll relativize it relative to the link name (as it needs to be).
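With GNU ln's -r flag, the second example above can be written from d0 without composing the ../../ path by hand (a sketch):
ln -sr ../file d1/
# creates d1/file -> ../../file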
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
293,675
I'm trying to install docker on ubuntu xenial and am following this tutorial: https://docs.docker.com/engine/installation/linux/ubuntulinux/ . So far it's gone without a hitch except for there apparently not being a linux-image-extra for my kernel version (4.6.0-040600-generic). The tutorial said that that wasn't required, though, so I figured it wasn't completely necessary. I got to the point of running sudo apt-get install docker-engine , and the install is hanging on Setting up docker-engine (1.11.2-0~xenial) ... . I've looked at top and it's not using any cpu, so I don't think it's actually doing anything. I ended up restarting my computer, getting rid of the partly installed package with some combination of dpkg -r, apt-get --purge remove, and maybe some other related stuff that I've forgotten about, and I tried installing it again. It hung the same way. How can I install this successfully?
This is because the service can't be started. You can interrupt the apt command by doing systemctl restart docker and then just following this solution: https://stackoverflow.com/a/37640824/287130
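If the package is left half-configured after the hang is interrupted, a sequence along these lines is the usual way to let the install finish (a rough sketch, not part of the linked answer):
$ sudo systemctl restart docker     # get the service into a startable state first
$ sudo dpkg --configure -a          # re-run any pending post-installation steps
$ systemctl status docker           # confirm the daemon is actually running
$ sudo docker run hello-world       # optional smoke test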
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142910/" ] }
293,684
I have two files genelist.txt and data.txt . genelist.txt simply contains a single column of ~500 gene names, whereas data.txt is a tab delimited file that contains ~1000 columns (the samples) and ~30,000 rows (gene names). The general scheme of data.txt is outlined below.

            Sample 1   Sample 2   Sample 3   Sample 4
Gene A      1.04       1.81       1.92       0.45
Gene B      1.11       1.12       1.32       0.92
Gene C      0.72       0.71       0.85       1.12
Gene D      1.19       1.42       0.13       0.32

I need to extract each row (the entire row, i.e. all samples) from data.txt containing each of the ~500 gene names in genelist.txt and have these rows extracted to a separate file. I've been told to use grep or awk and have looked into how to do this, however as a simple biologist with little/no coding experience I'm having a bit of trouble. Would it be possible for someone to explain how this is done, and hopefully provide some code for me to get underway. It would also be neat if the extraction returned only terms that match the entire gene name in genelist.txt . For example, if I had ABC123 but not ABC1234 in genelist.txt , I would want only ABC123 to be extracted and not ABC1234 . Furthermore, after this is done how would I then check to see which of my genes from genelist.txt were not included in the extraction? (i.e. some genes may be incorrectly named, so I would have to go back and re-extract them with their alternative and/or correct name).
To extract the lines from data.txt with the genes listed in genelist.txt :
grep -w -F -f genelist.txt data.txt > newdata.txt
grep options used:
-w tells grep to match whole words only (i.e. so ABC123 won't also match ABC1234 ).
-F search for fixed strings (plain text) rather than regular expressions
-f genelist.txt read search patterns from the file
If you want the header (Sample 1, Sample 2, etc) line as well:
grep -w -F -f genelist.txt -e Sample data.txt > newdata.txt
-e Sample also search for "Sample"
To find lines in genelist.txt that aren't in newdata.txt :
grep -v -w -F -f <(sed -E -e 's/(\t| +).*//' newdata.txt) genelist.txt
-v invert the search, print non-matching lines.
The rest of the grep options are the same, but instead of using a file with the -f option, it's using something called Process Substitution (See also ), which allows you to use a command in place of an actual file. Whatever output the command creates is treated as the "file"'s contents. In this case, we're using the command sed -E -e 's/(\t| +).*//' newdata.txt , which outputs each line of newdata.txt after first deleting everything from either the first TAB character or the first pair of spaces it sees. In other words, the first field (e.g. "Gene A"). I had to use TAB or double space because a) I wasn't sure if your data was space-separated or TAB separated and b) the first fields in your example contained spaces.
sed options used:
-E use extended regular expressions, so we can use plain ( , ) , and + which are more readable than having to escape them with \ as \( , \) , \+ .
-e 's/(\t| +).*//' specifies the sed script to apply against the input (newdata.txt)
Running that command on your sample data.txt would produce the following output:
$ sed -E -e 's/(\t| +).*//' data.txt
Gene A
Gene B
Gene C
Gene D
Anyway, the output of that sed command is used as the list of search patterns by the grep command.
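A toy run with made-up input files may make the whole-word matching clearer:
$ printf 'ABC123\nXYZ9\n' > genelist.txt
$ printf 'ABC123\t1.04\t1.81\nABC1234\t1.11\t1.12\nXYZ9\t0.72\t0.71\n' > data.txt
$ grep -w -F -f genelist.txt data.txt
ABC123  1.04    1.81
XYZ9    0.72    0.71
Because of -w, the ABC1234 row is not pulled in even though it contains the string ABC123.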
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178020/" ] }
293,697
I am trying to get images from a website URL, www.example.com/products . This products folder contains lots of subfolders, and I need to download the whole products folder. Under www.example.com/products the images live in the subfolders, e.g. www.example.com/products/subfolder1/image.jpg, www.example.com/products/subfolder2/image.jpg, www.example.com/products/subfolder3/image.jpg. How can I download the products folder with its subfolders and their contents?
wget -nd -r -l1 -P /save/location -A jpeg,jpg http://www.example.com/products
Explanation :
-nd prevents the creation of a directory hierarchy (i.e. no directories ).
-r enables recursive retrieval. See Recursive Download for more information.
-l1 Specify recursion maximum depth level. 1 for just this directory; in your case it's products .
-P sets the directory prefix where all files and directories are saved to.
-A sets a whitelist for retrieving only certain file types. Strings and patterns are accepted, and both can be used in a comma separated list (as seen above). See Types of Files for more information.
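If you want to keep the subfolder structure instead of flattening everything into one directory, a variation like this (same placeholder URL) is one way to do it:
$ wget -r -l2 -np -nH -P /save/location -A jpeg,jpg http://www.example.com/products/
-np ( --no-parent ) stops wget from wandering above /products, -nH skips creating a www.example.com host directory, dropping -nd lets it recreate the subfolder1, subfolder2, ... directories locally, and -l2 goes one level deeper so the images inside the subfolders are reached.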
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178027/" ] }
293,724
I have files like:
4-some file.mp4
1-another file.mp4
3-one more file.mp4
2-got another file.mp4
and so on. Playing the files in vlc from the command line with vlc * plays them in unsorted order. So I tried to play the files in sorted order with vlc < <(ls * | sort -V) , but that does not work. Trying to change the timestamps of the files with for i in "$(ls [!R]* | sort -V)"; do touch "$i";sleep 1; done is not working either, because "$(ls [!R]* | sort -V)" represents the complete list of files as one argument, but I cannot remove the double quotes since the files have spaces in their names.
This should work:
find . -name "*mp4" -print0 | sort -Vz | xargs -0 vlc
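For completeness, the timestamp-touching workaround attempted in the question can also be made safe for names with spaces by keeping everything NUL-delimited (a sketch, assuming GNU find and sort):
find . -maxdepth 1 -name '*.mp4' -print0 | sort -zV |
while IFS= read -r -d '' f; do
    touch "$f"      # bump the timestamp of each file, in version-sorted order
    sleep 1
done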
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8032/" ] }
293,743
I'm trying to compile NVIDIA CUDA on a Fedora 24 Workstation. I'm using CUDA version 7.5 and when I try to compile it I get this message: gcc versions later than 4.9 are not supported! I have installed: gcc (GCC) 6.1.1 20160621 (Red Hat 6.1.1-3). How can I install gcc 4.9 on my machine? My question is related to this one , but that one doesn't tell how to install two different gcc versions on the same machine. On Ubuntu I can do it with this command: sudo apt-get install gcc-4.9 g++-4.9 But I have Fedora 24.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132714/" ] }
293,755
I just updated one of our debian jessie servers and the kernel was updated, nothing special, as we have done this many times. But this is the first time there were warnings when the grub configuration file was being generated. I have never seen them before. As far as I can tell the system runs nicely after a reboot.

Setting up linux-image-3.16.0-4-amd64 (3.16.7-ckt25-2+deb8u3) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
done

I searched for the warning online, but I couldn't find a decent explanation that made sense to me (or maybe I just didn't understand it), and I also couldn't tell whether this can be ignored. Does anyone here have an idea? Thanks
According to info from Peter Rajnoha about an old 2014 fedora bug 1152185, "The warning is there because if lvmetad is already instantiated and running, then using use_lvmetad=0 will cause LVM commands run under this setting to not notify lvmetad about any changes - therefore lvmetad may miss some information - hence the warning.". https://bugzilla.redhat.com/show_bug.cgi?id=1152185
However, in our case use_lvmetad = 0, so I tend to believe the warnings appear only during the update and the grub reconfiguration. According to the explanations in the bug report, this is connected with lvm2-monitor, which is happily running on my system, I believe on yours too. Please check out the Process line:
# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/lib/systemd/system/lvm2-monitor.service; enabled)
   Active: active (exited) since Sat 2016-07-09 04:04:49 EEST; 34min ago
     Docs: man:dmeventd(8) man:lvcreate(8) man:lvchange(8) man:vgchange(8)
  Process: 328 ExecStart=/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 328 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/lvm2-monitor.service
I do not see any traces of the warning after reboot and based on the other information I believe the warning is safe to ignore at this stage. If you get any more or other warnings, you should look into it further. Also, I used to receive LVM warnings on each image update or grub reconfiguration about the names I believe, which turned out to be unimportant and most probably connected to the old hardware. So this is not uncommon. Preexo, I hope that this has answered your two concerns. Rubo77, I hope I have been helpful for you too. Kind regards!
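To double-check how a given machine is actually set up, something along these lines (paths as on a stock Debian jessie install) shows both the lvm.conf setting and whether lvmetad itself is running:
$ grep -n 'use_lvmetad' /etc/lvm/lvm.conf
$ systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service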
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41400/" ] }
293,756
I just installed a CentOS-based system to serve as an SFTP server. I need all incoming files to go to the /mnt/inbound/ folder, so I want to ensure that every user from this host that logs in via SFTP gets /mnt/inbound/ as their starting point, and I want to ensure that they cannot go anywhere else. I got as far as being able to connect with a test user using an SFTP client and ensuring the user is jailed to their respective folder -- but the user cannot upload files... Here is what I have done so far:
Create a group called sftponly to contain all customer inbound users:
$ groupadd sftponly
Modify /etc/ssh/sshd_config to use the internal-sftp Subsystem:
# Enable built-in implementation of SFTP
Subsystem sftp internal-sftp
Add the following at the end of sshd_config:
Match Group sftponly
    # Force the connection to use the built-in SFTP support
    ForceCommand internal-sftp
    # Chroot the connection into the specified directory
    ChrootDirectory /mnt/inbound/%u
    # Disable network tunneling
    PermitTunnel no
    # Disable authentication agent forwarding
    AllowAgentForwarding no
    # Disable TCP connection forwarding
    AllowTcpForwarding no
    # Disable X11 remote desktop forwarding
    X11Forwarding no
Adding a user to use sftp:
$ sudo useradd -g sftponly testuser
Must create the user folder under /mnt/inbound:
$ sudo mkdir /mnt/inbound/testuser
I can now use FileZilla (or any other client) to do an SFTP connection to this host, and I can see that the user is jailed to the /mnt/inbound/testuser folder. However the user cannot upload files. I have tried changing the rights of the /mnt/inbound/testuser folder so that the user testuser can get access to it, but that breaks the user's ability to connect via SFTP. I cannot change the owner of the folder to other than root, so how can I ensure that the user can read and write into their respective folder? I have seen several attempts to answer similar questions around the internet (including StackExchange), but all seem to be missing a point or two -- since I always end up with a broken SFTP setup when I try to follow the instructions provided in the answers. Regards, P.
Right, I managed to get some advice in the #openssh IRC channel and here is what was missing from my solution: The directory specified in ChrootDirectory must be owned by root. Since in the above sshd_config file I have specified the %u variable so every user has their own root directory based on their username (e.g. testuser would be /mnt/inbound/testuser/ ), all of those directories must be owned by root. This is in fact the default when I create the directories doing sudo mkdir /mnt/inbound/<username> since the mkdir command is elevated via sudo . So what I needed to do is create a sub-directory under /mnt/inbound/<username> and give the user permission on that directory. In my case I called this directory uploads . So I changed my configuration slightly as follows:
Match Group sftponly
    # Chroot the connection into the specified directory
    ChrootDirectory /mnt/inbound/%u
    # Force the connection to use the built-in SFTP support
    ForceCommand internal-sftp -d /uploads
The ForceCommand line has been changed to include -d /uploads , meaning that the default directory after the user logs in is /uploads . Note that it is /uploads and not /mnt/inbound/%u/uploads because it takes into account that /mnt/inbound/%u has been specified as the new root in the previous line in the config. If I do ChrootDirectory /mnt/inbound/ and then specify ForceCommand internal-sftp -d /%u , I could make the /mnt/inbound/<username> folder be owned by the end-user, since /mnt/inbound is now the new root directory that must be owned by the root account. However users would be able to navigate to the parent folder and see the directory names of all other accounts. I decided against that :)
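Pulling the ownership rules together, the per-user filesystem setup then looks roughly like this (testuser and uploads are just the example names from above):
$ sudo mkdir -p /mnt/inbound/testuser/uploads
$ sudo chown root:root /mnt/inbound/testuser                    # the chroot target itself stays root-owned
$ sudo chmod 755 /mnt/inbound/testuser                          # and must not be group/world-writable
$ sudo chown testuser:sftponly /mnt/inbound/testuser/uploads    # the directory the user may actually write to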
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178059/" ] }
293,763
I have a issue: I need to change the permission of the symlink from 777 to 755 and I do not have any idea how should I do it. I have tried using the chmod command but it's not working.I want lrwxrwxrwx 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0 to lrwxr-xr-x 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0
Some systems support changing the permission of a symbolic link, others do not.
chmod -- change file modes or Access Control Lists (OSX and FreeBSD , using -h )
-h If the file is a symbolic link, change the mode of the link itself rather than the file that the link points to.
chmod - change file mode bits (Linux)
chmod never changes the permissions of symbolic links; the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals.
Since the feature differs, POSIX does not mention the possibility. From comments, someone suggests that a recent change to GNU coreutils provides the -h option. At the moment, that does not appear in the source-code for chmod :
while ((c = getopt_long (argc, argv,
                         ("Rcfvr::w::x::X::s::t::u::g::o::a::,::+::=::"
                          "0::1::2::3::4::5::6::7::"),
                         long_options, NULL))
and long_options has this:
static struct option const long_options[] =
{
  {"changes", no_argument, NULL, 'c'},
  {"recursive", no_argument, NULL, 'R'},
  {"no-preserve-root", no_argument, NULL, NO_PRESERVE_ROOT},
  {"preserve-root", no_argument, NULL, PRESERVE_ROOT},
  {"quiet", no_argument, NULL, 'f'},
  {"reference", required_argument, NULL, REFERENCE_FILE_OPTION},
  {"silent", no_argument, NULL, 'f'},
  {"verbose", no_argument, NULL, 'v'},
  {GETOPT_HELP_OPTION_DECL},
  {GETOPT_VERSION_OPTION_DECL},
  {NULL, 0, NULL, 0}
};
Permissions are set with chmod . Ownership is set with chown . GNU coreutils (like BSD) supports the ability to change a symbolic link's ownership. This is a different feature, since the ownership of a symbolic link is related to whether one can modify the contents of the link (and point it to a different target). Again, this started as a BSD feature (OSX, FreeBSD , etc), which is also supported with Linux (and Solaris , etc). POSIX says of this feature :
-h For each file operand that names a file of type symbolic link, chown shall attempt to set the user ID of the symbolic link. If a group ID was specified, for each file operand that names a file of type symbolic link, chown shall attempt to set the group ID of the symbolic link.
So much for the command-line tools (and shell scripts). However, you could write your own utility, using a feature of POSIX which is not mentioned in the discussion of the chmod utility:
int chmod(const char *path, mode_t mode);
int fchmodat(int fd, const char *path, mode_t mode, int flag);
The latter function adds a flag parameter, which is described thus:
Values for flag are constructed by a bitwise-inclusive OR of flags from the following list, defined in <fcntl.h> :
AT_SYMLINK_NOFOLLOW If path names a symbolic link, then the mode of the symbolic link is changed.
That is, the purpose of fchmodat is to provide the feature you asked about. But the command-line chmod utility is documented (so far) only in terms of chmod (without this feature).
fchmodat , by the way, appears to have started as a poorly-documented feature of Solaris which was adopted by the Red Hat and GNU developers ten years ago, and suggested by them for standardization:
one more openat-style function required: fchmodat
Austin Group Minutes of the 17 May 2007 Teleconference
[Fwd: The Austin Group announces Revision Draft 2 now available]
According to The Linux Programming Interface , since 2.6.16, Linux supports AT_SYMLINK_NOFOLLOW in these calls: faccessat , fchownat , fstatat , utimensat , and linkat was implemented in 2.6.18 (both rather "old": 2006, according to OSNews ). Whether the feature is useful to you, or not, depends on the systems that you are using.
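By way of contrast, the ownership case mentioned above does work from the shell on both Linux and the BSDs; a minimal sketch (mylink is a made-up name):
$ ln -s /etc/passwd mylink        # example link, points at a file we will not touch
$ ls -l mylink                    # owned by whoever created it
$ sudo chown -h nobody mylink     # -h changes the link itself, not /etc/passwd
$ ls -l mylink
On OS X and FreeBSD the analogous chmod -h 755 mylink is available as well; GNU chmod has no such option, as the answer explains.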
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/293763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178062/" ] }
293,770
I looked at this scheme and now I want to know: can one executable be run on two systems which have the same ancestor (and so probably the same kernel)? For example, according to the scheme: Solaris <- System V R4 <- BSD 4.3 , so, can the BSD* systems (OpenBSD, FreeBSD, NetBSD) and Solaris run the same executable? P.S. Maybe this question is obvious and meaningless to you, but I am completely new to *nix, so for me it is important.
Short answer: No. Medium answer: Maybe, if the target OS supports it. Long answer... First thing to be aware of is that different vendors may use different chip sets. So a Solaris binary may be compiled for a SPARC chip. This won't run on an Intel/AMD machine. Similarly AIX may be on a PowerPC. HP-UX might be on PA-RISC. Let's ignore all these problems and just stick with the "Intel/AMD" space. The next problem is that different OSes may expose different kernel system calls . This means that any call the application makes into the kernel won't do what is expected. This is obviously a problem. However the target kernel may be able to provide an "ABI compatibility layer"; the kernel (let's say a FreeBSD kernel) can detect you are trying to run a Linux binary and can translate between the Linux kernel ABI and the native kernel ABI. The next problem is one of libraries; a Linux binary would expect to be able to load glibc of a specific version, which may not be present in the hosting OS. This may be solvable by copying the required libraries over. Again an OS may make this easier for you, e.g. by having a package for these libraries to make them easy to install. After all this your binary may run :-) Back in the 90s, Linux had an iBCS module which allowed for exactly this sort of thing. It made it possible to run, for example, SCO Unix programs on Linux. I had run SCO Unix Oracle on my machine as a proof of concept. It worked pretty well! Obviously there was no vendor support, so it wasn't suitable for production :-) Now Linux has a massive foothold in this space, so other OSes try to add compatibility layers to allow Linux programs to run on their OSes. So if your OS supports it and if you install and configure it properly then you may be able to run some programs from another Unix.
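Before reaching for any compatibility layer, it is worth checking what a binary was actually built for; the file utility (the binary name here is made up) reports the architecture and the expected dynamic linker:
$ file ./someprog      # example output looks roughly like: "ELF 64-bit LSB executable, x86-64, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2"
$ ldd ./someprog       # on a compatible system, lists the shared libraries the binary expects to find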
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/293770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178064/" ] }
293,775
I want to write a script that merges the contents of several .csv files into one .csv file, i.e. appends the columns of all the other files to the columns of the first file. I tried doing so using a "for" loop but was not able to proceed with it. Does anyone know how to do this in Linux?
The simplest approach for achieving that would be typing the following command:
cat *csv > combined.csv
This file would contain the contents of all your csv files just in the way you mentioned.
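One caveat, depending on what "merge" means here: cat stacks the files one after another (row-wise). If the goal really is to append columns side by side, as the question describes, paste is the usual tool, e.g. with made-up file names:
paste -d, file1.csv file2.csv file3.csv > combined.csv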
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/293775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172771/" ] }