298,737
My partition in Linux Mint 18 is in need of repair, but first I will back up some files using a Live CD. I can't copy the two files that I want, Bookmarks and Bookmarks.bak, because I am not the owner and thus don't have permissions. How can I become the owner and obtain permissions? When I open Nemo, I can access my hard drive from the sidebar. Since I am not the owner, I tried gksudo nemo . When Nemo opened with privileges, the hard drive wasn't listed in the sidebar. The file path is: /media/mint/bfcc9b0f-abbf-49cc-86a7-4b97475bf409/home/luis/.config/chromium/Default
With awk:

    awk 'BEGIN{RS=">\n+";ORS=">\n";FS="\n"} {$1=$1}1' yourfile

Output:

    < Jan 20, 2016 11:58:09 AM EST Test1 Sample Test1 >
    < Jan 20, 2016 11:58:09 AM EST Sample Test It is not T1 T2 >

If you want a blank line between each output, you can add an extra \n to the ORS, i.e.

    awk 'BEGIN{RS=">\n+";ORS=">\n\n";FS="\n"} {$1=$1}1' yourfile

(although this may also add a trailing blank line at the end of the file).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122885/" ] }
298,869
I am creating a script, and when I try to capture a command's output, I get a "command not found" error. If I use this command in the terminal, it works fine:

    gcloud -q compute snapshots list --format='csv(NAME)'

The script is:

    #!/bin/sh
    CSV_SNAPSHOTS= $(gcloud -q compute snapshots list --format='csv(NAME)')
    IFS=$'\n'
    for i in $CSV_SNAPSHOTS
    do
        echo "$i"
    done
There must not be any whitespace after = (and also before = ) in a variable assignment. So this should do:

    CSV_SNAPSHOTS=$(gcloud -q compute snapshots list --format='csv(NAME)')

Also note that you should (almost always) quote variable and command substitutions, although you would get away with it in this case as you are saving the command substitution to a variable. Example:

    $ foo="$(echo spam)"
    $ echo "$foo"
    spam
    $ bar= "$(echo egg)"
    No command 'egg' found, did you mean:
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181950/" ] }
298,888
I wonder what a simpler way to do this would be:

    awk 'NR > 1 {print $1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$7"\t"$8"\t"$9$10$11$12$13$14$15$16}' file.in > file.out

which is, simply speaking, "concatenate columns 9 to 16, removing the tabs in between". The merged columns 9-16 become a "Notes" field, so they may include whitespace. As of today there are 16 columns, but this may grow or shrink if required. Eventually column 9 (concatenated 9-16) becomes the "notes" field. Cheers, Xi
paste <(cut -f 1-8 file) <(cut -f9- file | tr -d '\t')
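To see what the two halves of that pipeline do, here is a hypothetical tab-separated input with ten columns (the file name and data are made up for the example; tabs are rendered as runs of spaces):

    $ cat file
    a   b   c   d   e   f   g   h   i   j
    $ paste <(cut -f 1-8 file) <(cut -f9- file | tr -d '\t')
    a   b   c   d   e   f   g   h   ij

cut -f 1-8 passes the first eight fields through untouched, cut -f9- | tr -d '\t' takes everything from field 9 onward and deletes the tabs between those fields, and paste joins the two streams back together with a single tab.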
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298888", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110681/" ] }
298,984
Trying to do a "lookup" in a pipeline, where the input looks like this: alice 5bob 7... I want to look up codes in the second column in a database and return the corresponding name, and keep on trucking with the original and looked-up data. cat source.tab | \ tee foo.tmp | \ cut -f 2 | \ dbstream ... -s "select(select name from my_lookup where code=?)" | \ paste foo.tmp - Result should be: alice 5 foobob 7 bar... Imagine for a minute that cat source.tab is really a long pipeline that does other pre-processing. And that dbstream .. could be some other command, say wget | jq . Important: I only want to start the lookup process once. a) is this a terrible idea, and if so what should I be doing instead? b) is there a better pattern than tee tmp | cut | "lookup" | paste tmp - ?
It depends on how complicated the output is and how much formatting needs to be maintained (e.g. is the first column always 8 characters long? etc). However a while loop might work:

    cat source.tab | while read -r name id
    do
      echo "$name $id $(dbstream .... code=$id)"
    done

You can change what happens inside the loop to format however you like, e.g.

    cat source.tab | while read -r name id
    do
      res=$(dbstream ... code=$id)
      printf "%10s %5d %s" $name $id $res
    done

As per comment, you only want to call dbstream once. This requires dbstream to keep output in the same order as input. Here's a simple example dbstream program:

    #!/bin/sh
    for a in "$@"
    do
      echo dbstream $$ sees $a
    done

We include the PID in the output so we can show it only gets called once. Now we can use paste and process substitution:

    $ paste source.tab <(./dbstream $(awk '{print $2}' source.tab ))
    alice 1    dbstream 20671 sees 1
    bob 2      dbstream 20671 sees 2

Now if the source.tab is a slow process I would recommend using a temporary file, e.g.

    #!/bin/bash
    tmp=`mktemp`
    trap '/bin/rm -f $tmp ; exit' 0 1 2 3 15
    cat source.tab > $tmp
    paste $tmp <(./dbstream $(awk '{print $2}' $tmp ))
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40373/" ] }
299,037
I'm trying to pass a variable to a remote ssh session, but it does not work. My code is:

    #!/bin/bash
    set -x
    conexion="[email protected]"
    parameter="$1"
    ssh -T $conexion <<'ENDSSH'
    clear
    echo "$parameter"
    ENDSSH

I execute:

    ./script.sh try

It says: parameter: Undefined variable. Any help please?
Passing variables (environment variables) over ssh is possible but generally restricted. You need to tell the client to send them. For instance with OpenSSH, that's with:

    ssh -o SendEnv=parameter host cmd...

But you also need the server to accept it (the AcceptEnv configuration directive with OpenSSH). Accepting any variable is a big security risk, so it is generally not done by default, though some ssh deployments allow some variables under some namespace (like LC_* in some OpenSSH deployments). You also need to export the variable before calling ssh , like:

    LC_parameter="$parameter" ssh -o SendEnv=LC_parameter host csh << 'END'
    echo $LC_parameter:q
    END

Above, we're passing the content of the $parameter bash shell variable as the LC_parameter environment variable to ssh . ssh sends that over to sshd , which, if it accepts it, passes it as an environment variable to the login shell of the user, which then passes it to that csh command (which can then expand it). But as mentioned earlier, that won't work unless the administrator of the host machine has added an AcceptEnv LC_parameter or AcceptEnv LC_* (that one sometimes done by default) to the sshd configuration. The Undefined variable error message in your example suggests the login shell of the remote user is csh or tcsh . It's better to explicitly invoke the shell to avoid surprises ( ssh host csh also means a tty is not requested, so you don't need -T ). Note the $LC_parameter:q syntax, which is the csh way to pass the content of a variable verbatim, not "$LC_parameter" , which doesn't work if the variable contains newline characters. If using LC_* variables is not an option, then alternatively, you can have the client shell ( bash in your case) expand the variable. A naive way would be with

    ssh host csh << END
    echo "$variable"
    END

But that would be dangerous, as the content of the variable would be interpreted by the remote shell. If $variable contains `reboot` or "; reboot; : " for instance, that would have bad consequences. So, you'd need first to make sure the variable is properly quoted in the syntax of the remote shell. Here, I would avoid csh , where it's hard to do reliably, and use sh / bash / ksh instead. Use a helper function to do the sh quoting:

    shquote() {
      awk -v q=\' -v b='\\' '
        BEGIN{
          for (i=1; i<ARGC; i++) {
            gsub(q, q b q q, ARGV[i])
            printf "%s ", q ARGV[i] q
          }
          print ""
          exit
        }' "$@"
    }

And call ssh as:

    ssh host sh << END
    parameter=$(shquote "$parameter")
    echo "\$parameter"
    END

See how we escape the third $ so the expansion of $parameter is done by the remote shell, not the local one.
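As a quick check of that helper (a hypothetical run; the input string is made up for the example), shquote turns an argument containing a single quote into something the remote sh can safely evaluate:

    $ shquote "don't; reboot"
    'don'\''t; reboot'

Each embedded ' is rewritten as '\'' and the whole argument is wrapped in single quotes, so the remote shell sees one literal string rather than executable syntax.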
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80895/" ] }
299,038
Ksar is a BSD-licensed, Java-based application that creates graphs of all parameters from the data collected by Unix sar utilities. Usually Unix sar is part of the sysstat package and runs sa1, sa2, and sadc through cron to create data files in /var/log/sa/saNN. Images can be zoomed by dragging the mouse on them to pinpoint problems, and results can be exported to PDF or JPEG format. ksar was an awesome tool for generating graphs out of sar statistics. Sadly, development stopped in 2013. Do you know any user-friendly alternative? I know that it's possible to generate graphs with gnuplot , but that requires much more effort. My desktop OS is macOS 10.11.6.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
299,043
I know that set -e is my friend in order to exit on error. But what to do if the script is sourced, e.g. a function is executed from the console? I don't want the console closed on error; I just want to stop the script and display the error message. Do I need to check the $? of each command by hand to make that possible? Here is an example script, myScript.sh, to show the problem:

    #!/bin/sh
    set -e

    copySomeStuff()
    {
        source="$1"
        dest="$2"
        cp -rt "$source" "$dest"
        return 0
    }

    installStuff()
    {
        dest="$1"
        copySomeStuff dir1 "$dest"
        copySomeStuff dir2 "$dest"
        copySomeStuff nonExistingDirectory "$dest"
    }

The script is used like that:

    $ source myScript.sh
    $ installStuff

This will just close down the console. The error displayed by cp is lost.
I would recommend having one script that you run as a sub-shell, possibly sourcing a file to read in function definitions. Let that script set the errexit shell option for itself. When you use source from the command line, "the script" is effectively your interactive shell. Exiting means terminating the shell session. There are possibly ways around this, but the best option, if you wanted to set errexit for a session, would be to simply have:

    #!/bin/bash
    set -o errexit
    source file_with_functions
    do_things_using_functions

Additional benefit: Will not pollute the interactive session with functions.
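If you do want to keep sourcing the functions into an interactive session, one possible workaround (a minimal sketch, not part of the original answer) is to run the risky function in a subshell with errexit enabled, so a failure aborts the subshell instead of your terminal:

    # after sourcing myScript.sh into the interactive shell
    runInstall() {
        ( set -e; installStuff "$1" ) || echo "installStuff failed" >&2
    }

The parentheses create a subshell, set -e applies only there, the error message from cp still reaches the terminal, and the interactive shell survives.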
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57118/" ] }
299,067
I'm encountering an issue where I am trying to get the size of a terminal from scripts. Normally I would use the command tput cols inside the console; however, I want to be able to accomplish this strictly from scripts. As of now I am able to detect the running console and get its file path, but I'm struggling to use this information to get the console's width. I've attempted using the command tput , but I'm fairly new to Linux/scripts so I don't really know what to do. The reason for doing this is that I want to set up a cron entry that notifies the console of its width/columns every so often. This is my code so far:

tty.sh

    #!/bin/bash
    #Get PID of terminal
    #terminal.txt holds most recent PID of console in use
    value=$(</home/test/Documents/terminal.txt)
    #Get tty using the PID from terminal.txt
    TERMINAL="$(ps h -p $value -o tty)"
    echo $TERMINAL
    #Use tty to get full filepath for terminal in use
    TERMINALPATH=/dev/$TERMINAL
    echo $TERMINALPATH
    COLUMNS=$(/home/test/Documents/get_columns.sh)
    echo $COLUMNS

get_columns.sh

    #!/usr/bin/env bash
    echo $(/usr/bin/tput cols)

The normal output of TERMINAL & TERMINALPATH is pts/<terminalnumber> and /dev/pts/<terminalnumber>, for example pts/0 & /dev/pts/0.
The tput command is an excellent tool, but unfortunately it can't retrieve the actual settings for an arbitrarily selected terminal. The reason for this is that it reads stdout for the terminal characteristics, and this is also where it writes its answer. So the moment you try to capture the output of tput cols you have also removed the source of its information. Fortunately, stty reads stdin rather than stdout for its determination of the terminal characteristics, so this is how you can retrieve the size information you need:

    terminal=/dev/pts/1
    columns=$(stty -a <"$terminal" | grep -Po '(?<=columns )\d+')
    rows=$(stty -a <"$terminal" | grep -Po '(?<=rows )\d+')

By the way, it's unnecessarily cumbersome to write this as echo $(/usr/bin/tput cols) . For any construct echo $(some_command) you are running some_command and capturing its output, which you then pass to echo to output. In almost every situation you can imagine you might as well have just run some_command and let it deliver its output directly. It's more efficient and also easier to read.
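For the cron use-case from the question, a small sketch combining both ideas (the pty path is just an example; substitute whatever your script derives):

    #!/bin/bash
    terminal=/dev/pts/0
    columns=$(stty -a <"$terminal" | grep -Po '(?<=columns )\d+')
    # write the notification to that terminal rather than to cron's own stdout
    printf 'This terminal is %s columns wide\n' "$columns" > "$terminal"

Writing to the pty device makes the message appear on that terminal even though a cron job has no controlling terminal of its own.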
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/299067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182119/" ] }
299,106
I am trying to keep the last 50 lines in my file where I save the temperature every minute. I used this command:

    tail -n 50 /home/pi/Documents/test > /home/pi/Documents/test

But the result is an empty test file. I thought it would list the last 50 lines of the test file and insert them into the test file. When I use this command:

    tail -n 50 /home/pi/Documents/test > /home/pi/Documents/test2

it works fine: there are 50 lines in the test2 file. Can anybody explain to me where the problem is?
The problem is that your shell is setting up the command pipeline before running the commands. It's not a matter of "input and output", it's that the file's content is already gone before tail even runs. It goes something like:

1. The shell opens the > output file for writing, truncating it
2. The shell sets up to have file-descriptor 1 (for stdout) be used for that output
3. The shell executes tail
4. tail runs, opens /home/pi/Documents/test and finds nothing there

There are various solutions, but the key is to understand the problem, what's actually going wrong and why. This will produce what you are looking for:

    echo "$(tail -n 50 /home/pi/Documents/test)" > /home/pi/Documents/test

Explanation: $() is called command substitution; it executes tail -n 50 /home/pi/Documents/test . The quotation marks preserve line breaks in the output. > /home/pi/Documents/test redirects the output of echo "$(tail -n 50 /home/pi/Documents/test)" to the same file.
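Two other common ways to achieve the same thing, offered as hedged alternatives rather than part of the original answer: write to a temporary file and rename it, or use sponge from the moreutils package, which soaks up all of its input before opening the output file:

    # temp-file approach: the redirection target is a different file,
    # so the source is not truncated before tail reads it
    tail -n 50 /home/pi/Documents/test > /home/pi/Documents/test.tmp &&
      mv /home/pi/Documents/test.tmp /home/pi/Documents/test

    # moreutils approach (requires the moreutils package)
    tail -n 50 /home/pi/Documents/test | sponge /home/pi/Documents/test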
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/299106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176802/" ] }
299,120
I have a certain bash script which wants to preserve the original /dev/stdout location before replacing the 1st file descriptor with another location. So, naturally, I wrote something like

    old_stdout=$(readlink -f /dev/stdout)

And it didn't work. Very quickly I understood what the problem was:

    test@ubuntu:~$ echo $(readlink -f /dev/stdout)
    /proc/5175/fd/pipe:[31764]
    test@ubuntu:~$ readlink -f /dev/stdout
    /dev/pts/18

Obviously, $() runs in a subshell, which is piped to the parent shell. So the question is: is there a reliable (scoped to portability between Linux distributions) way to save the /dev/stdout location as a string in a bash script?
To save a file descriptor, you duplicate it on another fd. Saving a path to the corresponding file is not enough, you'd need to save the opening mode, the opening flags, the current position within the file and so on. And of course, for anonymous pipes or sockets, that wouldn't work as those have no path. What you want to save is the open file description that the fd refers to, and duplicating an fd is actually returning a new fd to the same open file description. To duplicate a file descriptor onto another, with a Bourne-like shell, the syntax is:

    exec 3>&1

Above, fd 1 is duplicated onto fd 3. Whatever fd 3 was already open to before would be closed, but note that fds 3 to 9 (usually more, up to 99 with yash ) are reserved for that purpose (and have no special meaning contrary to 0, 1, or 2); the shell knows not to use them for its own internal business. The only reason fd 3 would have been open beforehand is because you did it in the script¹, or it was leaked by the caller. Then, you can change stdout to something else:

    exec > /dev/null

And later, to restore stdout:

    exec >&3 3>&-

( 3>&- being to close the file descriptor which we no longer need). Now, the problem with that is that except in ksh, every command you run after that exec 3>&1 will inherit that fd 3. That's an fd leak. Generally not a big deal, but that can cause problems. ksh sets the close-on-exec flag on those fds (for fds over 2), but not other shells, and other shells don't have any way to set that flag manually. The workaround for other shells is to close the fd 3 for each and every command, like:

    exec 3>&1
    exec > file.log
    ls 3>&-
    uname 3>&-
    exec >&3 3>&-

Cumbersome. Here, the best way would be to not use exec at all, but to redirect command groups:

    {
      ls
      uname
    } > file.log

There, it's the shell that takes care to save stdout and restore it afterwards (and it does do it internally by duplicating it on a fd (above 9, above 99 for yash ) with the close-on-exec flag set).

Note 1: Now, the management of those fds 3 to 9 can be cumbersome and problematic if you use them extensively or in functions, especially if your script uses some third party code that may in turn use those fds. Some shells ( zsh , bash , ksh93 ; all added the feature (suggested by Oliver Kiddle of zsh ) around the same time in 2005 after it was discussed among their developers) have an alternative syntax to assign the first free fd above 10 instead, which helps in this case:

    myfunction() {
      local fd
      exec {fd}>&1
      # stdout was duplicated onto a new fd above 10, whose actual value
      # is stored in the fd variable
      ...
      # it should even be safe to re-enter the function here
      ...
      exec >&"$fd" {fd}>&-
    }
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101764/" ] }
299,130
Samba does not display files correctly when they contain a colon.

Original file name: test:file.txt
Display name under Windows: T8S6CH~R.TXT

How can I fix it? Info:

    Linux: SLES 11 SP 3
    Samba: Version 3.6.3-0.33.39.1-3128-SUSE-CODE11-x86_64
The problem with Samba's mangled names option is that neither setting is ideal. You can have names that are not mangled, but cannot be accessed in any way because they contain illegal characters, or names that are mangled into the DOS 8.3 format and hence close to unreadable. Fortunately there is (now) a VFS module called catia which will provide custom character mappings. In particular it's possible to map out the characters considered illegal in Windows filenames. In the [global] section place these lines:

    # Mapping illegal characters, where enabled with "vfs objects = catia"
    mangled names = no
    catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6

In each [share_name] section add this next line (if you already have a vfs objects line, just append catia to the list):

    vfs objects = catia

As usual, if it's going to apply to all your shares, this share-based setting can be placed in [global] instead of each individual share definition. On my Debian-based system this VFS object module was installed as part of the standard package. One example of a filename that is mapped by this setting is 2017-12-24 12:23.txt . Using mangled names = yes has this file displayed as 2BB0Y9~4.TXT . Using vfs objects = catia instead has this file name displayed as 2017-12-24 12÷23.txt . It's not perfect but it's pretty good. And most importantly, I can access it from Windows applications.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144303/" ] }
299,147
I am setting certain environment variables for the pyspark command to work. When I set the variables in /etc/environment and source it, it doesn't work. However, when I set them on the command line they do work, but of course only for this session. My intent is to set them globally so that even if I re-open the session I can just type pyspark. Setting in /etc/environment:

    [root@localhost ~]# more /etc/environment
    [root@localhost ~]# echo "export SPARK_HOME=/srv/spark" >> /etc/environment
    [root@localhost ~]# echo "export PATH="$SPARK_HOME"/bin:"$PATH >> /etc/environment
    [root@localhost ~]# echo "export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk" >> /etc/environment
    [root@localhost ~]# source /etc/environment
    [root@localhost ~]# pyspark --version
    -bash: pyspark: command not found

Setting on the command line:

    [root@localhost ~]# export SPARK_HOME=/srv/spark
    [root@localhost ~]# export PATH=$SPARK_HOME/bin:$PATH
    [root@localhost ~]# export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk
    [root@localhost ~]# pyspark --version
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
          /_/
    Type --help for more information.
Put the export SPARK_HOME=... etc. commands in the startup files of your shell. With bash, that would be either ~/.profile or ~/.bash_profile . On Linux, /etc/environment is usually read by pam_env.so during login, and it doesn't support expanding existing variables, so setting PATH=$PATH:/something will result in the literal string $PATH appearing in your PATH . This isn't what you want. (See e.g. this and this , also for fun this .) Also, setting PATH in /etc/environment might not work, since the global startup scripts for the shell might rewrite it. (They do on Debian by default; on the old CentOS I have handy, the startup scripts only seem to prepend to PATH .) If your system doesn't use pam_env.so , but you only source the script by hand, then these considerations don't matter, of course. But it looks like it's widely used by at least a couple of Linux distributions, so it might be a good idea to use another filename. (Because this is completely opposite to what the other answers said, I tested it on an old CentOS.) I put the following in /etc/environment :

    export FOO1=bar
    export FOO2=foo:$FOO

After logging in again, set | grep FOO shows:

    FOO1=bar
    FOO2='foo:$FOO'
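A minimal sketch of the suggested setup, using the paths from the question (whether bash reads ~/.profile or ~/.bash_profile at login depends on the distribution and on which files exist):

    # ~/.profile (or ~/.bash_profile)
    export SPARK_HOME=/srv/spark
    export PATH="$SPARK_HOME/bin:$PATH"
    export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk

Unlike /etc/environment , this file is interpreted by the shell, so $SPARK_HOME and $PATH expand as expected; log out and back in (or source the file) for it to take effect.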
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162115/" ] }
299,161
    $(which zip) -r ../data.zip .

The contents of the folders are changing dynamically over time. I tested it first including a huge.pdf file and later deleted it, but data.zip stays the same size. On unzipping, I can see the huge.pdf file there. I wish the zip command would override the existing archive completely. I'm executing this shell command from an application (Node.js). Edit: I don't think the linked question is a duplicate.
First note that executing zip through $(which zip) is the same as executing it as just zip . The which utility locates a program file in the user's path. Given an existing Zip archive, zip will add new files and replace existing files, but it will not delete files in the archive if those files have been removed from the file system. To delete a file from a Zip archive:

    $ zip -d archive.zip filename

You may obviously also just remove the archive with rm before re-creating it. For your NodeJS script, you could use

    rm -f ../data.zip && zip -r ../data.zip .

The -f flag to rm makes rm not fail if the archive does not exist.
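One more option, mentioned as a hedged alternative (it is not part of the original answer): Info-ZIP's zip 3.0 has a "file sync" flag, -FS , which updates changed entries and deletes archive entries whose files no longer exist, avoiding the delete-and-recreate step:

    # sync the archive with the current directory tree; entries for
    # deleted files (e.g. huge.pdf) are dropped from the archive
    zip -r -FS ../data.zip .

If your zip build predates 3.0 the flag is unavailable, in which case rm -f ... && zip -r ... remains the portable approach.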
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182192/" ] }
299,298
How do you delete from the nth occurrence of a pattern to the end of file using command line tools like sed ? E.g., delete from the third foo in the following:

    something
    foo1
    maybe something else
    foo2
    maybe not
    foo3 -this line and anything after is gone-
    I'm not here

    $ sed '/magic/'

Desired result:

    something
    foo1
    maybe something else
    foo2
    maybe not

Bonus points for the same thing but keeping the line containing the third foo.
Without keeping the line:

    awk -v n=3 '/foo/{n--}; n > 0'

With keeping the line:

    awk -v n=3 'n > 0; /foo/{n--}'

Though we may want to improve it a bit so that we quit and stop reading as soon as we've found the 3rd foo:

    awk -v n=3 '/foo/{n--; if (!n) exit}; {print}' # not-keep
    awk -v n=3 '{print}; /foo/{n--; if (!n) exit}' # keep

sed would be more cumbersome. You'd need to keep the count of foo occurrences as a number of characters in the hold space like:

Keep:

    sed '
      /foo/{
        x;s/^/x/
        /x\{3\}/{
          x;q
        }
        x
      }'

Not keep:

    sed -ne '/foo/{x;s/^/x/;/x\{3\}/q;x;}' -e p
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299298", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57562/" ] }
299,321
    for k in {0..49};
    do
    a=$(($((2*$k))+1));
    echo $a;
    done

Hi, I need a simplified expression for the third line, maybe one that does not use command substitution.
Using arithmetic expansion:

    for (( k = 0; k < 50; ++k )); do
        a=$(( 2*k + 1 ))
        echo "$a"
    done

Using the antiquated expr utility:

    for (( k = 0; k < 50; ++k )); do
        a=$( expr 2 '*' "$k" + 1 )
        echo "$a"
    done

Using bc -l ( -l not actually needed in this case as no math functions are used):

    for (( k = 0; k < 50; ++k )); do
        a=$( bc -l <<<"2*$k + 1" )
        echo "$a"
    done

Using bc -l as a co-process (it acts like a sort of computation service in the background¹):

    coproc bc -l
    for (( k = 0; k < 50; ++k )); do
        printf "2*%d + 1\n" "$k" >&${COPROC[1]}
        read -u "${COPROC[0]}" a
        echo "$a"
    done
    kill "$COPROC_PID"

That last one looks (arguably) cleaner in ksh93 :

    bc -l |&
    bc_pid="$!"
    for (( k = 0; k < 50; ++k )); do
        print -p "2*$k + 1"
        read -p a
        print "$a"
    done
    kill "$bc_pid"

¹ This solved an issue for me once where I needed to process a large amount of input in a loop. The processing required some floating point computations, but spawning bc a few times in the loop proved to be exceedingly slow. Yes, I could have solved it in many other ways, but I was bored...
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/299321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131380/" ] }
299,322
I have a C-Media USB soundcard installed on my Raspberry Pi: Bus 001 Device 004: ID 0d8c:0008 C-Media Electronics, Inc. It is a USB cable with an XLR end on the other side, to which I have an XLR microphone (a Sennheiser MD 427 if anyone is interested) connected. Connecting it to my Mac I can turn up the recording volume (it says "settings for selected device" and "input volume" in German) and I get a fairly OK recording from it (it's actually a stereo recording). Now, the same under Linux looks quite different. The device is recognized OK, snd_usb_audio is loaded, and alsamixer shows the new recording device and lets me turn up the "recording volume" all the way. Yet, the volume of what I can record using

    # AUDIODEV=hw:1 rec tmp.wav

is abysmal at best. Now, is there a way to change the kernel module settings so that I can "crank the recording volume up" any more than what I am presented with? Or maybe any other settings I have forgotten about? I can "soft-up" the recording using

    # AUDIODEV=hw:1 rec tmp.wav gain 20

but that also increases the noise, and it is still below what the Mac records. Before you ask:

    # arecord -L
    null
        Discard all samples (playback) or generate zero samples (capture)
    default:CARD=Device
        C-Media USB Audio Device, USB Audio
        Default Audio Device
    sysdefault:CARD=Device
        C-Media USB Audio Device, USB Audio
        Default Audio Device
    front:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Front speakers
    surround21:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        2.1 Surround output to Front and Subwoofer speakers
    surround40:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        4.0 Surround output to Front and Rear speakers
    surround41:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    iec958:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        IEC958 (S/PDIF) Digital Audio Output
    dmix:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct sample mixing device
    dsnoop:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct sample snooping device
    hw:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct hardware device without any conversions
    plughw:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Hardware device with all software conversions
    #
    # lsusb
    Bus 001 Device 005: ID 0d8c:0008 C-Media Electronics, Inc.
    Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
    Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    #
    # arecord -l
    **** List of CAPTURE Hardware Devices ****
    card 1: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    #
    # amixer -c 1 scontrols
    Simple mixer control 'PCM',0
    Simple mixer control 'Mic',0
    Simple mixer control 'Auto Gain Control',0
    #
    # uname -ra
    Linux xxx 4.4.16+ #899 Thu Jul 28 12:36:19 BST 2016 armv6l GNU/Linux
    #
    # aplay -l -L
    null
        Discard all samples (playback) or generate zero samples (capture)
    default:CARD=ALSA
        bcm2835 ALSA, bcm2835 ALSA
        Default Audio Device
    sysdefault:CARD=ALSA
        bcm2835 ALSA, bcm2835 ALSA
        Default Audio Device
    dmix:CARD=ALSA,DEV=0
        bcm2835 ALSA, bcm2835 ALSA
        Direct sample mixing device
    dmix:CARD=ALSA,DEV=1
        bcm2835 ALSA, bcm2835 IEC958/HDMI
        Direct sample mixing device
    dsnoop:CARD=ALSA,DEV=0
        bcm2835 ALSA, bcm2835 ALSA
        Direct sample snooping device
    dsnoop:CARD=ALSA,DEV=1
        bcm2835 ALSA, bcm2835 IEC958/HDMI
        Direct sample snooping device
    hw:CARD=ALSA,DEV=0
        bcm2835 ALSA, bcm2835 ALSA
        Direct hardware device without any conversions
    hw:CARD=ALSA,DEV=1
        bcm2835 ALSA, bcm2835 IEC958/HDMI
        Direct hardware device without any conversions
    plughw:CARD=ALSA,DEV=0
        bcm2835 ALSA, bcm2835 ALSA
        Hardware device with all software conversions
    plughw:CARD=ALSA,DEV=1
        bcm2835 ALSA, bcm2835 IEC958/HDMI
        Hardware device with all software conversions
    default:CARD=Device
        C-Media USB Audio Device, USB Audio
        Default Audio Device
    sysdefault:CARD=Device
        C-Media USB Audio Device, USB Audio
        Default Audio Device
    front:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Front speakers
    surround21:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        2.1 Surround output to Front and Subwoofer speakers
    surround40:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        4.0 Surround output to Front and Rear speakers
    surround41:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    iec958:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        IEC958 (S/PDIF) Digital Audio Output
    dmix:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct sample mixing device
    dsnoop:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct sample snooping device
    hw:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Direct hardware device without any conversions
    plughw:CARD=Device,DEV=0
        C-Media USB Audio Device, USB Audio
        Hardware device with all software conversions
    **** List of PLAYBACK Hardware Devices ****
    card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
      Subdevices: 8/8
      Subdevice #0: subdevice #0
      Subdevice #1: subdevice #1
      Subdevice #2: subdevice #2
      Subdevice #3: subdevice #3
      Subdevice #4: subdevice #4
      Subdevice #5: subdevice #5
      Subdevice #6: subdevice #6
      Subdevice #7: subdevice #7
    card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    #
    # lsusb -v -d 0d8c:0008
    Bus 001 Device 004: ID 0d8c:0008 C-Media Electronics, Inc.
    Device Descriptor:
      bLength 18
      bDescriptorType 1
      bcdUSB 1.10
      bDeviceClass 0 (Defined at Interface level)
      bDeviceSubClass 0
      bDeviceProtocol 0
      bMaxPacketSize0 64
      idVendor 0x0d8c C-Media Electronics, Inc.
      idProduct 0x0008
      bcdDevice 1.00
      iManufacturer 0
      iProduct 1 C-Media USB Audio Device
      iSerial 0
      bNumConfigurations 1
      Configuration Descriptor:
        bLength 9
        bDescriptorType 2
        wTotalLength 224
        bNumInterfaces 4
        bConfigurationValue 1
        iConfiguration 0
        bmAttributes 0xa0 (Bus Powered) Remote Wakeup
        MaxPower 100mA
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 0
          bAlternateSetting 0
          bNumEndpoints 0
          bInterfaceClass 1 Audio
          bInterfaceSubClass 1 Control Device
          bInterfaceProtocol 0
          iInterface 0
          AudioControl Interface Descriptor:
            bLength 10
            bDescriptorType 36
            bDescriptorSubtype 1 (HEADER)
            bcdADC 1.00
            wTotalLength 71
            bInCollection 2
            baInterfaceNr( 0) 1
            baInterfaceNr( 1) 2
          AudioControl Interface Descriptor:
            bLength 12
            bDescriptorType 36
            bDescriptorSubtype 2 (INPUT_TERMINAL)
            bTerminalID 1
            wTerminalType 0x0101 USB Streaming
            bAssocTerminal 0
            bNrChannels 2
            wChannelConfig 0x0003
              Left Front (L)
              Right Front (R)
            iChannelNames 0
            iTerminal 0
          AudioControl Interface Descriptor:
            bLength 12
            bDescriptorType 36
            bDescriptorSubtype 2 (INPUT_TERMINAL)
            bTerminalID 2
            wTerminalType 0x0201 Microphone
            bAssocTerminal 0
            bNrChannels 1
            wChannelConfig 0x0001
              Left Front (L)
            iChannelNames 0
            iTerminal 0
          AudioControl Interface Descriptor:
            bLength 9
            bDescriptorType 36
            bDescriptorSubtype 3 (OUTPUT_TERMINAL)
            bTerminalID 6
            wTerminalType 0x0301 Speaker
            bAssocTerminal 0
            bSourceID 9
            iTerminal 0
          AudioControl Interface Descriptor:
            bLength 9
            bDescriptorType 36
            bDescriptorSubtype 3 (OUTPUT_TERMINAL)
            bTerminalID 7
            wTerminalType 0x0101 USB Streaming
            bAssocTerminal 0
            bSourceID 10
            iTerminal 0
          AudioControl Interface Descriptor:
            bLength 10
            bDescriptorType 36
            bDescriptorSubtype 6 (FEATURE_UNIT)
            bUnitID 9
            bSourceID 1
            bControlSize 1
            bmaControls( 0) 0x01 Mute Control
            bmaControls( 1) 0x02 Volume Control
            bmaControls( 2) 0x02 Volume Control
            iFeature 0
          AudioControl Interface Descriptor:
            bLength 9
            bDescriptorType 36
            bDescriptorSubtype 6 (FEATURE_UNIT)
            bUnitID 10
            bSourceID 2
            bControlSize 1
            bmaControls( 0) 0x43 Mute Control Volume Control Automatic Gain Control
            bmaControls( 1) 0x00
            iFeature 0
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 1
          bAlternateSetting 0
          bNumEndpoints 0
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 1
          bAlternateSetting 1
          bNumEndpoints 1
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
          AudioStreaming Interface Descriptor:
            bLength 7
            bDescriptorType 36
            bDescriptorSubtype 1 (AS_GENERAL)
            bTerminalLink 1
            bDelay 1 frames
            wFormatTag 1 PCM
          AudioStreaming Interface Descriptor:
            bLength 14
            bDescriptorType 36
            bDescriptorSubtype 2 (FORMAT_TYPE)
            bFormatType 1 (FORMAT_TYPE_I)
            bNrChannels 2
            bSubframeSize 2
            bBitResolution 16
            bSamFreqType 2 Discrete
            tSamFreq[ 0] 48000
            tSamFreq[ 1] 44100
          Endpoint Descriptor:
            bLength 9
            bDescriptorType 5
            bEndpointAddress 0x01 EP 1 OUT
            bmAttributes 9
              Transfer Type Isochronous
              Synch Type Adaptive
              Usage Type Data
            wMaxPacketSize 0x00c8 1x 200 bytes
            bInterval 1
            bRefresh 0
            bSynchAddress 0
            AudioControl Endpoint Descriptor:
              bLength 7
              bDescriptorType 37
              bDescriptorSubtype 1 (EP_GENERAL)
              bmAttributes 0x01 Sampling Frequency
              bLockDelayUnits 1 Milliseconds
              wLockDelay 1 Milliseconds
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 2
          bAlternateSetting 0
          bNumEndpoints 0
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 2
          bAlternateSetting 1
          bNumEndpoints 1
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
          AudioStreaming Interface Descriptor:
            bLength 7
            bDescriptorType 36
            bDescriptorSubtype 1 (AS_GENERAL)
            bTerminalLink 7
            bDelay 1 frames
            wFormatTag 1 PCM
          AudioStreaming Interface Descriptor:
            bLength 14
            bDescriptorType 36
            bDescriptorSubtype 2 (FORMAT_TYPE)
            bFormatType 1 (FORMAT_TYPE_I)
            bNrChannels 1
            bSubframeSize 2
            bBitResolution 16
            bSamFreqType 2 Discrete
            tSamFreq[ 0] 48000
            tSamFreq[ 1] 44100
          Endpoint Descriptor:
            bLength 9
            bDescriptorType 5
            bEndpointAddress 0x82 EP 2 IN
            bmAttributes 5
              Transfer Type Isochronous
              Synch Type Asynchronous
              Usage Type Data
            wMaxPacketSize 0x0064 1x 100 bytes
            bInterval 1
            bRefresh 0
            bSynchAddress 0
            AudioControl Endpoint Descriptor:
              bLength 7
              bDescriptorType 37
              bDescriptorSubtype 1 (EP_GENERAL)
              bmAttributes 0x01 Sampling Frequency
              bLockDelayUnits 0 Undefined
              wLockDelay 0 Undefined
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 3
          bAlternateSetting 0
          bNumEndpoints 1
          bInterfaceClass 3 Human Interface Device
          bInterfaceSubClass 0 No Subclass
          bInterfaceProtocol 0 None
          iInterface 0
          HID Device Descriptor:
            bLength 9
            bDescriptorType 33
            bcdHID 1.00
            bCountryCode 0 Not supported
            bNumDescriptors 1
            bDescriptorType 34 Report
            wDescriptorLength 50
            Report Descriptors:
              ** UNAVAILABLE **
          Endpoint Descriptor:
            bLength 7
            bDescriptorType 5
            bEndpointAddress 0x83 EP 3 IN
            bmAttributes 3
              Transfer Type Interrupt
              Synch Type None
              Usage Type Data
            wMaxPacketSize 0x0004 1x 4 bytes
            bInterval 32
    Device Status: 0x0000 (Bus Powered)
    #
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/299322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17838/" ] }
299,336
I'm trying to resize my home partition, but I can't figure out how to do so. Move/Resize doesn't let me move the bar, and there's no space to extend it to. I can't unmount it because it says it's busy. I saw talk of moving partitions around to be adjacent to extend into the free space, but how would that work? I can't move the unallocated space, and wouldn't moving my home partition before the unallocated space screw stuff up?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/299336", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182319/" ] }
299,350
Here is the free -m output:

    [prem@myserver: /home/prem]$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         218          85         267         687         360
    Swap:             0           0           0

I added swap space to my CentOS 7 machine using the following commands:

    dd if=/dev/zero of=/swapfile bs=1M count=2048
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

Now the swap space has increased to 2 GB:

    [prem@tuatahi: /home/prem]$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         284          69         265         638         292
    Swap:          2047           5        2042

But I guess that in order to make these changes permanent, I need to add an fstab entry for my swap space. Here are the contents of fstab:

    UUID=ef6ba050-6cdc-416a-9380-c14304d0d206 / xfs defaults 0 0

I am not sure how to add the swap space in terms of UUID.
There is no UUID for a file. Simply enter it as:

    /swapfile none swap defaults 0 0

Since it's directly on the root filesystem, there's no worry about the mounting order.
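To verify the entry without rebooting, you can deactivate the swap file and re-enable everything from fstab (standard util-linux commands):

    sudo swapoff /swapfile
    sudo swapon -a     # activates all swap areas listed in /etc/fstab
    swapon -s          # print a summary; free -m works too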
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182323/" ] }
299,360
I have a file looking like this:

    pw1jc5ssyt6hx618,254343
    ysezaratlycpuggl,254333
    pht92h4adr3mrbz3,254343
    hguvgstqxu3gowfg,254344
    gqjp2rsjmk1a2v9c,254333
    twdzyi2ddbnrfknd,254333
    gcmj7krrx5x6nf8r,254341
    tpqorqbyrg1nmm7s,254333
    alnac47rt8d4ege3,254343

I want to merge this file based on the 2nd column, with - as a delimiter, so that the result looks like this:

    254343,pw1jc5ssyt6hx618-pht92h4adr3mrbz3-alnac47rt8d4ege3
    254333,ysezaratlycpuggl-gqjp2rsjmk1a2v9c-twdzyi2ddbnrfknd-tpqorqbyrg1nmm7s
    254344,hguvgstqxu3gowfg
    254341,gcmj7krrx5x6nf8r
awk is your friend.

    $ cat 299360
    pw1jc5ssyt6hx618,254343
    ysezaratlycpuggl,254333
    pht92h4adr3mrbz3,254343
    hguvgstqxu3gowfg,254344
    gqjp2rsjmk1a2v9c,254333
    twdzyi2ddbnrfknd,254333
    gcmj7krrx5x6nf8r,254341
    tpqorqbyrg1nmm7s,254333
    alnac47rt8d4ege3,254343

    $ awk -v FS="," '/^$/{next}          # for empty line go to next record
    {
      if(NR==1){                         # checking for first record
        f2[$2]=$1; next                  # Adding $1 to array f2 at index $2
      }
      else{
        if($2 in f2){                    # Check if $2 is already an index in f2
          f2[$2]=f2[$2]"-"$1; next       # appending "-$1" to current value
        }
        else{
          f2[$2]=$1; next
        }
      }
    }
    END{                                 # This block is processed at the end
      for(i in f2){                      # for all the indexes i in f2
        printf "%s,%s\n",i,f2[i]         # printing in the desired format
      }
    }' 299360
    254341,gcmj7krrx5x6nf8r
    254333,ysezaratlycpuggl-gqjp2rsjmk1a2v9c-twdzyi2ddbnrfknd-tpqorqbyrg1nmm7s
    254343,pw1jc5ssyt6hx618-pht92h4adr3mrbz3-alnac47rt8d4ege3
    254344,hguvgstqxu3gowfg

Explanation:

- FS="," – FS is awk's built-in variable that stands for field separator. Setting the field separator to , sets , as the delimiter. You access fields by $1 , $2 and so on.
- The awk script is enclosed in single quotes; i.e., 'awk-script-goes-here'
- NR is an awk built-in variable that stands for record number (the number of the record currently being processed). By default, each line is a record.
- By f2[$2]=$1 we are setting up an associative array f2 with field 2 (i.e., $2 ) as the index.
- $2 in f2 checks if the index is already present in the array.
- The if-else and printf are self explanatory.
- The END block in awk is executed only at the very end, i.e., after all the records have been processed.
- for(i in f2) is a for loop construct used to walk associative arrays in awk. It is the other way of saying: for every index i in f2 , do something. Note that this loop may not print the array in order; you may use the sort command to sort the output, though.
- next goes to the next record without processing the commands that follow.
- The /pattern/ syntax checks for a pattern in awk; the pattern ^$ matches an empty line.

Reference: If you wish to become an expert in awk, Effective awk Programming is a must read.

Ugly one-liner:

    awk -v FS="," '/^$/{next}{if(NR==1){f2[$2]=$1;next}else{if($2 in f2){f2[$2]=f2[$2]"-"$1;next}else{f2[$2]=$1;next}}}END{for(i in f2){printf "%s,%s\n",i,f2[i]}}' 299360

Note: Ideally, it's not a good idea to hard-code newlines in awk scripts, as in printf "%s,%s\n",i,f2[i] . You may replace it with printf "%s,%s",i,f2[i]; print "" for extra portability.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182334/" ] }
299,384
I'm using the following code to find duplicate usernames. However, it gives an error.

    #!/bin/bash
    cat /etc/passwd | cut -f1 -d":" | /bin/sort -n | /usr/bin/uniq -c |\
    while read x ; do
        [ -z "${x}" ] && break
        set - $x
        if [ $1 -gt 1 ]; then
            uids=`/bin/gawk -F: '($1 == n) { print $3 }' n=$2 \
                /etc/passwd | xargs`
            echo "Duplicate User Name ($2): ${uids}"
        fi
    done

I'm getting a syntax error near the token 'done' and a numeric error. How can I fix this?
    $ cut -d: -f1 /etc/passwd | sort | uniq -d

This will extract the first field (the usernames) of the : -delimited /etc/passwd file, sort the result and report any duplicates. To also get the UID and the rest of the duplicated passwd entries:

    cut -d: -f1 /etc/passwd | sort | uniq -d |
    while read -r username; do
        grep "^$username:" /etc/passwd
    done

To only get the duplicate usernames and their UIDs:

    cut -d: -f1 /etc/passwd | sort | uniq -d |
    while read -r username; do
        awk -F: -vu="$username" '$1 == u { print $1, $3 }' /etc/passwd
    done

A short note on your script. The syntax looks mostly ok, but you need ; after break and there is a space after both \ (this may be a cut-and-paste error, now removed by an edit). Also, I'd avoid giving full paths to standard utilities if there is no good reason for it, and the awk program does not require GNU awk, so just awk will do.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182349/" ] }
299,513
When installing an rpm package, it warns that a necessary dependent library is missing. In fact, I have already installed that library from source, so I guess rpm just doesn't know about it. Can I let rpm know about the existing library, and how? Maybe by adding something to an rpm configuration file? By the way, installing the missing library (again) via rpm may solve the problem (quickly), but sometimes there's no rpm version available.
The RPM dependency database cannot tell that you installed a package from source. The RPM database only knows about the metadata present in the RPM packages; a package installed from source does not contain this metadata. Some configure scripts that build a package from source will produce pkg-config metadata about the installed package. Yet, there is no clear-cut integration between the metadata from pkg-config and RPM metadata (or DEB metadata, or pacman metadata). When packaging a distro, the packagers insert the metadata in a specific format into the packages (e.g. RPM packages), and that metadata is the one used to determine dependencies, not metadata provided in any other form. On the other hand, you can have different versions of a library on the same system. By default (i.e. according to the GNU coding standards, which most packages follow) a configure script should install what it builds into /usr/local , whilst packages packaged by the distro (e.g. RPM) should install their content into /usr . Therefore, if you follow the convention (called the FHS) and keep packages/libraries installed from source in /usr/local , then installing the same library through RPM will not conflict with your library (since the packagers of the distro do follow the FHS). When there is no RPM available, you can build it yourself. For that you need to build the package/library from source and install it into a dummy place (a build root), then provide the metadata needed for the RPM package and package it into an RPM file. TLDP has a dated but very thorough guide on building RPMs .
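As a small illustration of the pkg-config side (the library name libfoo here is hypothetical), you can query the metadata that a source install dropped under /usr/local :

    # make pkg-config look where source installs usually put their .pc files
    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
    pkg-config --exists libfoo && pkg-config --modversion libfoo

This only informs build tools, though; as explained above, RPM's dependency solver never consults it.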
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182065/" ] }
299,520
Being that hapless user who struggled with the bash shell (I finally decided to install the t-shell instead; too much trouble fixing all the old scripts), I ran into another problem: my Ubuntu 14 doesn't have the acroread, ratfor90, and latex commands, among many others. I first looked for them using dpkg -s <command> and, after getting a negative result (which, as I know by now, doesn't mean much), still tried to install each package using sudo apt-get install <command> , with negative results for the commands I've mentioned. I was successful for some commands (such as gv, gfortran, and some others). What am I to do now? I was never good at installing commands by downloading packages from the internet. Besides, there will be too many of them.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181973/" ] }
299,548
How do I tell zypper to reinstall all currently installed packages?
You can reinstall all currently installed packages by this command:

    zypper in -f $(rpm -q -a --qf '%{NAME} ')

Maybe this information will be useful.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146096/" ] }
299,600
Situation: I frequently connect to one Linux Mint 18 machine, from Linux Mint 18, over SSH. I now have a need to view pictures stored on the remote machine on my desktop (without copying the files to me, that is), not in the terminal in some pseudo-image format. Non-permanent solution: what works is adding the -Y parameter to the ssh command. So it is fine, but I don't like always appending it to the command. I connect like this:

    ssh -Y [email protected]

And display images as follows:

    feh *.jpg

The manual only says:

    -Y   Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls.

Question: How can I set up the SSH config file to make this option permanent?
    Host 192.168.0.100          # and/or preferred aliases
        Hostname 192.168.0.100  # if 'Host' is alias rather than actual hostname/IP
        User herusename
        ForwardX11 yes
        ForwardX11Trusted yes
        # ... other options for this host ...

man ssh_config
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
299,618
I was wondering if there is a convention for file type extensions for shell scripts you want to source instead of run. For example, if I want to run this script in a subshell:

    ./script.sh

If I want to remember to run this script from the current shell:

    . script.source

Is there a convention (like POSIX, for example) for a file type in the second example? Something like .source or .sourceme ? Update: This question does not ask for any opinion. I clearly stated that I would like to know if there is a standardized file extension for this kind of script. This question is even less opinion-based than this well-received question on a similar issue ( Use .sh or .bash extension for bash scripts? ).
I'd use .sh (for files in the POSIX sh language; .bash for non-sh-compatible bash files, that is, the extension identifies the language the script is written in) for files intended to be sourced (or more generally not intended to be executed), and no extension for files that are meant to be executed. You can also add a she-bang:

    #! /bin/echo Please-source

so that when the file is executed by mistake (though I'd expect those files not to be given execute permissions, which would already prevent execution), you get a notice that it should be sourced instead.
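A related trick, given here as a bash-specific sketch (not part of the original answer): the file can detect at run time that it was executed rather than sourced and refuse to continue:

    # near the top of the file meant to be sourced (bash only)
    if [ "${BASH_SOURCE[0]}" = "$0" ]; then
        echo "This file must be sourced, not executed: . ${BASH_SOURCE[0]}" >&2
        exit 1
    fi

When the file is sourced, BASH_SOURCE[0] names the file itself while $0 keeps the caller's name, so the test only triggers on direct execution.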
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128489/" ] }
299,657
I wish to run a Python script that I have locally on disk on a remote machine. I used to run bash scripts like this:

    cat script.sh | ssh user@machine

but I do not know how to do the same for a Python script.
As others have said, pipe it into ssh. But what you will want to do is give the proper arguments. You will want to add -u to get the output back from ssh properly, and to add - so that Python reads the script from standard input and treats later arguments as the script's arguments.

    ssh user@host python -u - < script.py

If you want to give command line arguments, add them after the - :

    ssh user@host python -u - --opt arg1 arg2 < script.py
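An equivalent pattern, shown as a hedged alternative, uses a quoted here-document instead of a redirection from a file; this is handy when the Python code is embedded in a shell script:

    ssh user@host python -u - <<'EOF'
    import platform
    print(platform.node())   # executes on the remote machine
    EOF

Quoting the delimiter ( 'EOF' ) prevents the local shell from expanding anything inside the script body before it is sent.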
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27960/" ] }
299,667
I just tried to move a directory containing music files with thunar 4.10. It complained that a file name was invalid. It turned out that one file name (song title) contained a question mark. I suspected that this was the problem, removed the question mark, and could indeed copy the file. Adding the "?" back in was not possible. I also tried it with rename on the command line, but that didn't work either. (Not sure what thunar uses under the hood, so this test might be moot.) Now, if a question mark makes the file name invalid, how could this file be created in the first place? I created the files with SoundJuicer from a newly obtained CD. I was able to play the file (with "?" in the name) in various players. What's going on here? Can I have the "?" in the name or not? Why is the file manager unable to handle such files while other applications seem to be ok with it? Update: The next song has a ":" in it. Same problem as with the "?". These are not invalid characters to Unix; typically only the NUL character and the / character are invalid in filenames (the / being the directory separator). This was what my intuition told me as well, because I never had any issues with file names in Linux and could throw pretty much everything sensible at it and it worked ok. This is what motivated the question here. I never encountered invalid file names before. Were you trying to move the files to a USB stick? If so, is that stick formatted as FAT32 or as a native Linux filesystem? The target is indeed a USB stick that I bought today. I opened gparted and it is formatted as FAT32. I'm not exactly sure, but that's a Windows thing, right? And Windows has a bunch of characters that it doesn't support, apparently including ? and : . Am I right?
These characters ? and : are not valid on a FAT32 filesystem, so if that is where you need to copy your files you will need to rename them. From the command line you can use command-line tools such as rename (sometimes known as prename) to replace these characters with _ or even to remove them:

    rename 's/[?<>\\:*|\"]/_/g'   # Change invalid characters to _
    rename 's/[?<>\\:*|\"]//g'    # Remove invalid characters

I am not familiar with thunar so I do not know if there is a way to perform this substitution/replacement operation directly. I have just found Linux copy to fat32 filesystem: invalid argument which suggests adding this into the pax command (another tool to copy files), so that you can keep your full filenames on your local disk but convert the filenames during the copy to your USB device:

    pax -rw -s '/[?<>\\:*|\"]/_/gp' *.mp3 /media/usb_device

If the complete filenames are really important to you, I would suggest that you reformat the USB stick to use a Linux-native filesystem such as ext4. (There are Windows drivers available for the extN family of filesystems if that's necessary.)
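If your rename is the Perl-based one mentioned above, it usually accepts -n to preview without changing anything (worth checking on your system first):

    rename -n 's/[?<>\\:*|\"]/_/g' ./*   # show what would be renamed
    rename 's/[?<>\\:*|\"]/_/g' ./*      # actually rename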
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182557/" ] }
299,683
I have a well configured /etc/wpa_supplicant/wpa_supplicant.conf with all my ESSIDs and passwords. I do not have a graphical interface like KDE or Gnome. To switch between available networks, I need to run four commands. I'm running Debian and would like to have a utility similar to Arch Linux's netctl to manage my connections. What are the options available on Debian?
The two options are wicd-cli (noted in the comment by meuh) and networkmanager. Which to use is a matter of personal preference. I use networkmanager just because it has a better manual (but that, again, is a matter of preference). Just like wpa_supplicant stores files in /etc/wpa_supplicant/, one per interface, networkmanager stores files in /etc/NetworkManager/system-connections/, one per SSID. The parameter names for networkmanager are not very different from wpa_supplicant; for example, a file in /etc/NetworkManager/system-connections/ may look as follows:

    [connection]
    id=BluePenguin
    uuid=799ce6af-b66c-4669-9319-8d9a029cb6ee
    type=wifi

    [wifi]
    ssid=BluePenguin

    [wifi-security]
    auth-alg=open
    key-mgmt=wpa-psk
    psk=******

(This looks similar to network={} in wpa_supplicant.) My experience with networkmanager is on Arch, not Debian, therefore I cannot tell with 100% accuracy on the Debian dependency chain. But networkmanager does not require Xorg (or GTK, or KDE). Moreover, the command line tool to networkmanager, nmcli, is very similar in design to iproute2. In essence, just as you would do:

    ip addr help

to get help for the addr command, you do:

    nmcli device wifi help

to get help on all wifi commands for devices. Since I use ip a lot, I find nmcli very intuitive, but then again, that is a matter of personal preference. networkmanager has a built-in DHCP client, but can be configured to use an external one. As for reducing the number of commands, nmcli will perform the work of disconnecting from one SSID (closing DHCP too) and connecting to a new SSID (and starting DHCP) with one command (assuming the password is already saved):

    nmcli device wifi connect <new SSID>

Or for the lazy typer:

    nmcli d w c <new SSID>

References: Debian wiki on Network Manager; Arch wiki on Network Manager.
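Typical first-time usage, sketched with placeholder SSID and password values (MySSID, s3cret):

    nmcli device wifi list                             # scan for networks
    nmcli device wifi connect MySSID password s3cret   # connect and save the profile
    nmcli connection show                              # list saved profiles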
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21240/" ] }
299,715
Currently working on a project where I'm dealing with an arbitrary group of disks in multiple systems. I've written a suite of software to burn-in these disks. Part of that process was to format the disks. While testing my software, I realized that if at some point during formatting the process stops/dies and I want to restart it, I really don't want to reformat the disks in the set which have already been formatted successfully. I'm running this software from a ramfs with no disks mounted; none of the disks I am working on ever get mounted, and they will not be used by my software for anything other than testing, so anything goes on these bad boys. There's no data about which to be concerned. EDIT: No, I'm not partitioning. Yes, ext2 fs. This is the command I'm using to format:

    (/sbin/mke2fs -q -O sparse_super,large_file -m 0 -T largefile -T xfs -FF $drive >> /tmp/mke2fs_drive.log 2>&1 & echo $? > $status_file &)

SOLUTION: Thanks to Jan's suggestion below:

    # lsblk -f /dev/<drv>

I concocted the following shell function, which works as expected.

SOURCE

    is_formatted(){
        drive=$1
        fs_type=$2
        if [[ ! -z $drive ]]
        then
            if [[ ! -z $fs_type ]]
            then
                current_fs=$(lsblk -no KNAME,FSTYPE $drive)
                if [[ $(echo $current_fs | wc -w) == 1 ]]
                then
                    echo "[INFO] '$drive' is not formatted. Formatting."
                    return 0
                else
                    current_fs=$(echo $current_fs | awk '{print $2}')
                    if [[ $current_fs == $fs_type ]]
                    then
                        echo "[INFO] '$drive' is formatted with correct fs type. Moving on."
                        return 1
                    else
                        echo "[WARN] '$drive' is formatted, but with wrong fs type '$current_fs'. Formatting."
                        return 0
                    fi
                fi
            else
                echo "[WARN] is_formatted() was called without specifying fs_type. Formatting."
                return 0
            fi
        else
            echo "[FATAL] is_formatted() was called without specifying a drive. Quitting."
            return -1
        fi
    }

DATA

    sdca ext2 46b669fa-0c78-4b37-8fc5-a26368924b8c
    sdce ext2 1a375f80-a08c-4889-b759-363841b615b1
    sdck ext2 f4f43e8c-a5c6-495f-a731-2fcd6eb6683f
    sdcn
    sdby ext2 cf276cce-56b1-4027-a795-62ef62d761fa
    sdcd ext2 42fdccb8-e9bc-441e-a43a-0b0f8d409c71
    sdci ext2 d6e7dc60-286d-41e2-9e1b-a64d42072253
    sdbw ext2 c3986491-b83f-4001-a3bd-439feb769d6a
    sdch ext2 3e7dba24-e3ec-471a-9fae-3fee91f988bd
    sdcq
    sdcf ext2 8fd2a6fd-d1ae-449b-ad48-b2f9df997e5f
    sdcs
    sdco
    sdcw ext2 27bf220e-6cb3-4953-bee4-aff27c491721
    sdcp ext2 133d9474-e696-49a7-9deb-78d79c246844
    sdcx
    sdct
    sdcu
    sdcy
    sdcr
    sdcv
    sdde
    sddc ext2 0b22bcf1-97ea-4d97-9ab5-c14a33c71e5c
    sddi ext2 3d95fbcb-c669-4eda-8b57-387518ca0b81
    sddj
    sddb
    sdda ext2 204bd088-7c48-4d61-8297-256e94feb264
    sdcz
    sddk ext2 ed5c8bd8-5168-487f-8fee-4b7c671ef2cb
    sddl
    sddn
    sdds ext2 647d2dea-f71d-4e87-bbe5-30f6424b36c9
    sddf ext2 47128162-bcb7-4eab-802d-221e8eb36074
    sddo
    sddh ext2 b7f41e1a-216d-4580-97e6-f2df917754a8
    sddg ext2 39b838e0-f0ae-447c-8876-2d36f9099568

Which yielded:

    [INFO] '/dev/sdca' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdce' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdck' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcn' is not formatted. Formatting.
    [INFO] '/dev/sdby' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcd' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdci' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdbw' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdch' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcq' is not formatted. Formatting.
    [INFO] '/dev/sdcf' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcs' is not formatted. Formatting.
    [INFO] '/dev/sdco' is not formatted. Formatting.
    [INFO] '/dev/sdcw' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcp' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcx' is not formatted. Formatting.
    [INFO] '/dev/sdct' is not formatted. Formatting.
    [INFO] '/dev/sdcu' is not formatted. Formatting.
    [INFO] '/dev/sdcy' is not formatted. Formatting.
    [INFO] '/dev/sdcr' is not formatted. Formatting.
    [INFO] '/dev/sdcv' is not formatted. Formatting.
    [INFO] '/dev/sdde' is not formatted. Formatting.
    [INFO] '/dev/sddc' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddi' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddj' is not formatted. Formatting.
    [INFO] '/dev/sddb' is not formatted. Formatting.
    [INFO] '/dev/sdda' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sdcz' is not formatted. Formatting.
    [INFO] '/dev/sddk' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddl' is not formatted. Formatting.
    [INFO] '/dev/sddn' is not formatted. Formatting.
    [INFO] '/dev/sdds' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddf' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddo' is not formatted. Formatting.
    [INFO] '/dev/sddh' is formatted with correct fs type. Moving on.
    [INFO] '/dev/sddg' is formatted with correct fs type. Moving on.

Do note that the magic potion was extending Jan's suggestion to simply output what I cared about:

    lsblk -no KNAME,FSTYPE $drive
Depending on how you access the drives, you could use blkid -o list (deprecated) on them and then parse the output. The command outputs, among other things, a fs_type column that shows the filesystem. blkid -o list has been superseded by lsblk -f.
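A minimal check along those lines (a sketch; /dev/sdb2 is an example device):

    fs=$(lsblk -no FSTYPE /dev/sdb2)
    if [ -z "$fs" ]; then
        echo "no filesystem detected"
    else
        echo "formatted as $fs"
    fi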
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13677/" ] }
299,719
I'm trying to decrease the size of a Linux image running SuSE, and thought about running strip on all of the system's executables. Even though I may not regain much disk space this way, would there be any harm in doing so?
It's not the case for Linux (just checked...), but on other systems (such as BSDs, e.g., OSX) doing this will remove any setuid/setgid permissions as a side-effect. Also (still looking at OSX), the ownership of the file may change (to the user doing the writing). For Linux, I recall that early on, stripping a shared library would prevent linking to it. That is not a problem now, though as the Program Library HOWTO notes, it will make debuggers not useful. It prevents linking to static libraries. Further reading: 24.14 Don't Use strip Carelessly (Unix Power Tools) How do I strip local symbols from linux kernel module without breaking it? What Linux and Solaris can learn from each other
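To see whether a binary still carries symbols, and to strip more conservatively (mytool is a placeholder path; --strip-unneeded is the GNU binutils option that keeps symbols needed for relocation; keep a backup first):

    file /usr/local/bin/mytool                    # reports "stripped" or "not stripped"
    strip --strip-unneeded /usr/local/bin/mytool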
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50614/" ] }
299,854
    #!bin/sh
    a=0
    while["$a -lt 50"]
    do
    echo $a
    a='expr $a+1'
    done

I get infinite echos of expr $a+1. What am I doing wrong?
Your script has syntax errors. You may check shell scripts for problematic constructs using ShellCheck on-line. This will tell you:

    Line 3:
    while["$a -lt 50"]
    ^-- SC1009: The mentioned parser error was in this while loop.
         ^-- SC1035: You need a space after the [ and before the ].
    ^-- SC1069: You need a space before the [.
    ^-- SC1073: Couldn't parse this test expression.
                      ^-- SC1020: You need a space before the ].
                      ^-- SC1072: Missing space before ]. Fix any mentioned problems and try again.

Fixing space issues by changing while["$a -lt 50"] into while [ "$a -lt 50" ] will instead give you the following:

    Line 3:
    while [ "$a -lt 50" ]
            ^-- SC2157: Argument to implicit -n is always true due to literal strings.

    Line 6:
    a='expr $a+1'
      ^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.

The first issue reported is about the string "$a -lt 50". In fact, you don't want to have a string like that here, you want "$a" -lt 50. By the way, since a string is always "true", this is why your loop is infinite (if the syntax errors are fixed). The second issue is due to the checker detecting the variable $a inside a singly quoted string, where it wouldn't be expanded to its value (and this is why the string printed is expr $a+1). The solution is not to change it to double quotes, as that would just give you the same string but with the value expanded. You want to execute the expr command. Do that by changing your single quotes to back-ticks. Your script now looks like this:

    #!bin/sh
    a=0
    while [ "$a" -lt 50 ]
    do
    echo $a
    a=`expr $a+1`
    done

... and ShellCheck is still not happy:

    Line 6:
    a=`expr $a+1`
      ^-- SC2006: Use $(..) instead of legacy `..`.
        ^-- SC2003: expr is antiquated. Consider rewriting this using $((..)), ${} or [[ ]].

New shell code should really use $( ... ) rather than back-ticks. Also, it gives you a warning about your use of expr, which is outdated. The line may be rewritten as:

    a="$(( a + 1 ))"

The final version (plus indentation and a fix to the #!-line):

    #!/bin/sh
    a=0
    while [ "$a" -lt 50 ]; do
        echo $a
        a="$(( a + 1 ))"
    done

bash or ksh93 version using (( ... )) for arithmetic evaluation, and with further shortening of the code:

    #!/bin/bash
    a=0
    while (( a < 50 )); do
        echo "$(( a++ ))"
    done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182347/" ] }
299,858
I need to set a variable in my bash script:

    #!/usr/bin/env bash
    GITNAME= git config --global user.name
    echo " $GITNAME "

But it doesn't seem to work that way. How does it work?
Assuming you're trying to execute the git command and store its result in a variable, you'll want the $(...) syntax, where you put your command inside the parens:

    GITNAME="$(git config --global user.name)"
    printf '%s\n' "$GITNAME"

Note also that there is no space after the = in the assignment. As sjsam pointed out, it's best to quote around the parens too. That's because after command substitution, word splitting and glob expansion and several other parsing steps still happen, so if your name contained, say, *, the glob would be expanded, and that's probably not what you intend. As a style note, you should generally not use all upper case for your variable names, as that could cause them to collide with environment variables.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/299858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182708/" ] }
299,964
So here's the content under my /html folder:

    [root@ip-10-0-7-121 html]# ls
    a             wp-activate.php       wp-content         wp-mail.php
    b             wp-admin              wp-cron.php        wp-settings.php
    healthy.html  wp-blog-header.php    wp-includes        wp-signup.php
    index.php     wp-comments-post.php  wp-links-opml.php  wp-trackback.php
    license.txt   wp-config.php         wp-load.php        xmlrpc.php
    readme.html   wp-config-sample.php  wp-login.php

I want to delete everything except for folders a and b, without having to move the a and b folders to another folder. What's the command to do that?
You can use find with a negation (at your own risk).

Find all files and folders named "a" or "b":

    find -name a -o -name b

Find all files and folders named "a" or "b" in the current directory:

    find -maxdepth 1 -name a -o -name b

Find all files and folders not named "a" and not named "b" in the current directory:

    find -maxdepth 1 ! -name a ! -name b

Also exclude the current directory itself from the result:

    find -maxdepth 1 ! -name a ! -name b ! -name .

Now you can use rm to delete everything that was found:

    find -maxdepth 1 ! -name a ! -name b ! -name . -exec rm -rv {} \;
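It's worth previewing the match list before deleting anything (same expression with -print instead of rm; the + form batches arguments into fewer rm invocations):

    find . -maxdepth 1 ! -name a ! -name b ! -name . -print            # dry run
    find . -maxdepth 1 ! -name a ! -name b ! -name . -exec rm -rv {} +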
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/299964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
300,053
I recently came across this awesome list, which lists all the command-line tools that are deprecated (or, let's say, whose functionality can be replicated) by the new ip tool. Recently, trying to get accustomed to systemd, I learned that almost all functionality of cron can be replicated by systemd. What are other tools whose functionality can be replicated by systemd?
The obvious ones - these are what systemctl replaces:

- service
- chkconfig on redhat and update-rc.d on debian, if a systemd unit has been written for the service
- reboot, poweroff, halt, telinit

pm-suspend and friends have apparently gone away. As a cross-distro effort it's the sort of thing that systemd aims to accomplish; it's just interesting given the hooks and quirks that pm-utils supported, and I'm not aware of any fallout from systemd replacing it. Also, systemd-analyze provides a similar function to bootchart. As pointed out by others, it probably makes more sense to enumerate the files provided by systemd, or the documentation. By doing so, I noticed one more obscure command, runlevel. systemd only emulates runlevels, so runlevel is another of the legacy commands. Searching for an equivalent command turned up systemctl list-units --type target (note list-units only shows active units unless directed otherwise). The output is not as obvious, because targets tend to depend on other targets, and you can have multiple targets active at once, independent or overlapping. However, for now I can't think exactly when you would use the runlevel command. I have an impression it would be used interactively as a summary of the state of the init system. In which case, the better alternative would be systemctl status.
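A few side-by-side equivalents, using nginx as an example service name:

    service nginx restart    ->  systemctl restart nginx
    chkconfig nginx on       ->  systemctl enable nginx
    telinit 3                ->  systemctl isolate multi-user.target
    runlevel                 ->  systemctl list-units --type target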
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144995/" ] }
300,054
I've installed Fedora 24 and updated the kernel.

    $ rpm -qa kernel
    kernel-4.6.4-301.fc24.x86_64
    kernel-4.5.5-300.fc24.x86_64

So kernel-4.6.4-301.fc24.x86_64 is installed.

    $ uname -r
    4.5.5-300.fc24.x86_64

That is the currently loaded kernel.

    $ cd /boot
    $ ll
    total 90117
    ...
    -rwxr-xr-x. 1 root root 6277656 Jul 29 07:09 vmlinuz-0-rescue-60cb3109c1ea41d6806444bff16cc074
    -rwxr-xr-x. 1 root root 6277656 May 19 16:21 vmlinuz-4.5.5-300.fc24.x86_64

But there is no file for the 4.6.4 kernel. How is it possible to add the newer kernel by hand into grub?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300054", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147200/" ] }
300,055
I have a scenario that calls for command substitution without using a subshell. I have a construct like this: pushd $(mktemp -d) Now I want to exit and remove the temporary directory in one go: rmdir $(popd) However that doesn't work because popd doesn't return the popped directory (it returns the new, now current, directory) and also because it's performed in a subshell. Something like dirs -l -1 ; popd &> /dev/null will return the popped directory but it can't be used like this: rmdir $(dirs -l -1 ; popd &> /dev/null) because the popd will only affect the subshell. What is called for is the ability to do this: rmdir { dirs -l -1 ; popd &> /dev/null; } but that's invalid syntax. Is it possible to achieve this effect ? (note: I know I can save the temporary directory in a variable; I was trying to avoid the need to do so and learn something new in the process!)
The choice of the title of your question is a bit confusing. pushd/popd, a csh feature copied by bash and zsh, are a way to manage a stack of remembered directories. pushd /some/dir pushes the current working directory onto a stack, and then changes the current working directory (and then prints /some/dir followed by the content of that stack, space-separated). popd prints the content of the stack (again, space-separated) and then changes to the top element of the stack and pops it from the stack. (Also beware that some directories will be represented there with their ~/x or ~user/x notation.) So if the stack currently has /a and /b, the current directory is /here, and you're running:

    pushd /tmp/whatever
    popd

pushd will print /tmp/whatever /here /a /b and popd will output /here /a /b, not /tmp/whatever. That's independent of using command substitution or not. popd cannot be used to get the path of the previous directory, and in general its output cannot be post-processed (see the $dirstack or $DIRSTACK array of some shells though for accessing the elements of that directory stack). Maybe you want:

    pushd "$(mktemp -d)" &&
    popd &&
    rmdir "$OLDPWD"

Or:

    cd "$(mktemp -d)" &&
    cd - &&
    rmdir "$OLDPWD"

Though, I'd use:

    tmpdir=$(mktemp -d) || exit
    (
        cd "$tmpdir" || exit # in a subshell
        # do what you have to do in that tmpdir
    )
    rmdir "$tmpdir"

In any case, pushd "$(mktemp -d)" doesn't run pushd in a subshell. If it did, it couldn't change the working directory. It's mktemp that runs in a subshell. Since it is a separate command, it has to run in a separate process. It writes its output on a pipe, and the shell process reads it at the other end of the pipe. ksh93 can avoid the separate process when the command is builtin, but even there, it's still a subshell (a different working environment) which this time is emulated rather than relying on the separate environment normally provided by forking. For example, in ksh93, a=0; echo "$(a=1; echo test)"; echo "$a" involves no fork, but still echo "$a" outputs 0. Here, if you want to store the output of mktemp in a variable at the same time as you pass it to pushd, with zsh you could do:

    pushd ${tmpdir::="$(mktemp -d)"}

With other Bourne-like shells:

    unset -v tmpdir
    pushd "${tmpdir=$(mktemp -d)}"

Or to use the output of $(mktemp -d) several times without explicitly storing it in a variable, you could use zsh anonymous functions:

    (){pushd ${1?} && cd - && rmdir $1} "$(mktemp -d)"
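When the temporary directory is only needed for the lifetime of a script, a trap keeps the cleanup in one place (a sketch, same idea as the subshell version above):

    tmpdir=$(mktemp -d) || exit
    trap 'rm -rf "$tmpdir"' EXIT
    cd "$tmpdir" || exit
    # work here; the directory is removed when the script exits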
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/300055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9259/" ] }
300,067
I'm not sure how to merge that unallocated 5GB of space with the ext4 partition (sda5). Is there a way (without formatting the ext4 filesystem)?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/300067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169381/" ] }
300,091
In my Ubuntu bash, I had remapped the Ctrl-y key combo to copy text from the clipboard, as:

    bind -x '"\C-y": copy_line_from_x_clipboard'

It works. Now I am migrating to a Macbook, and I'd like to use the Command key instead of the Ctrl key above. I am not seeing any keybinding examples on the net that contain Mac OS' Command key. And I tried to get the key combo for Command-y using the command sed -n l as explained here, but it shows an empty line after taking the Command-y key input. For those who are interested, the callee function to paste text from the clipboard is:

    copy_line_from_x_clipboard() {
        local n=$READLINE_POINT
        local l=$READLINE_LINE
        local s=$(xsel -ob)
        READLINE_LINE=${l:0:$n}$s${l:$n:$((${#l}-n))}
        #READLINE_LINE=${l:0:$n}$s
        READLINE_POINT=$((n+${#s}))
    }
According to one of the comments in Use CMD-mappings in console Vim, you cannot use the Command key in Terminal.app, though you could in iTerm2. You're probably looking for a modifier like Shift or Control, i.e., something analogous to the Alt or Meta key. In Terminal.app's keyboard preferences, you have an initial set of key definitions which use these modifiers, along with Option. You can alter these definitions, or add new ones. (The original answer included two screenshots of that dialog.) The second screenshot showed Option (alone, or in combination with other modifiers), but Command is not available for use by programs running in the terminal.
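If settling for the Option key is acceptable, one route (a sketch, not part of the original answer) is to enable "Use Option as Meta key" in Terminal.app's keyboard preferences, so that Option-y sends ESC y, and then bind that escape sequence; note that xsel from the question's function would also need replacing with macOS's pbpaste:

    # Option-y runs the widget, assuming Option is configured to send ESC
    bind -x '"\ey": copy_line_from_x_clipboard'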
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
300,095
A fair number of linux commands have a dry-run option that will show you what they're going to do without doing it. I see nothing in the xargs man page that does that and no obvious way to emulate it. (my specific use case is troubleshooting long pipelines, though I'm sure there are others) Am I missing something?
You may benefit from the -p or -t flags. xargs -p or xargs --interactive will print out the command to be executed and then prompt for input (y/n) to confirm before executing the command:

    % cat list
    one
    two
    three
    % ls
    list
    % cat list | xargs -p -I {} touch {}
    touch one ?...y
    touch two ?...n
    touch three ?...y
    % ls
    list
    one
    three

xargs -t or xargs --verbose will print each command, then immediately execute it:

    % cat list | xargs -t -I {} touch {}
    touch one
    touch two
    touch three
    % ls
    list
    one
    three
    two
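Another common trick is to prefix the real command with echo, so xargs prints what it would run without running anything (quoting subtleties aside):

    cat list | xargs -I {} echo touch {}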
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/300095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121291/" ] }
300,115
I'm reading an active log and trying to catch some specific calls:

    $ tail -f example.log | egrep 'pattern1|pattern2|pattern3|pattern4|pattern5'

But a couple of the patterns are rarely printed (due to the dev flow) while the others are printed very frequently. How can I make egrep print only one match for each pattern, so I can easily see that they are all working?
You could do something like:

    tail -f example.log | awk '
      BEGIN {
        n = split("pattern1,pattern2,pattern3,pattern4,pattern5", pats, /,/)
      }
      {
        found = 0
        for (i in pats)
          if ($0 ~ pats[i]) {
            found = 1
            delete pats[i]
            n--
          }
      }
      found {print; if (!n) exit}'

Note that awk will exit as soon as it has seen all the patterns, but tail will only exit (from a SIGPIPE) the next time it writes something after that. Or, if lines may not match several patterns and you don't care about exiting when all patterns are found, shorter but less efficient:

    awk '/pattern1/&&!a++ || /pattern2/&&!b++ || /pattern3/&&!c++ || \
         /pattern4/&&!d++ || /pattern5/&&!e++'

With zsh and GNU grep:

    (trap '' PIPE;
     tail -f example.log > >(grep -m1 pattern1) \
                         > >(grep -m1 pattern2) \
                         > >(grep -m1 pattern3) \
                         > >(grep -m1 pattern4) \
                         > >(grep -m1 pattern5))

But note that lines matching multiple patterns will be printed as many times.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68382/" ] }
300,122
So I had a RAID 1 with two hard disks. One hard disk failed, then I replaced it and installed a fresh Linux on the new disk. Now if I type fdisk -l I get:

    root@ns354729:/mnt/sdb2# fdisk -l

    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xbb5259be

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        4096  1495042047   747518976   83  Linux
    /dev/sda2      1495042048  1496088575      523264   82  Linux swap / Solaris

    Disk /dev/sdb: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00025c91

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            4096    20975616    10485760+  fd  Linux raid autodetect
    /dev/sdb2        20975617  1464092672   721558528   fd  Linux raid autodetect
    /dev/sdb3      1464092673  1465144064      525696   82  Linux swap / Solaris

I would like to access the second hard disk (sdb), so I try to mount sdb2 like this: mount /dev/sdb2 /mnt. This says:

    root@ns354729:/mnt/sdb2# mount /dev/sdb2 /mnt
    mount: block device /dev/sdb2 is write-protected, mounting read-only
    mount: you must specify the filesystem type

So I tried mount -t ext4 /dev/sdb2 /mnt and I got:

    mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so

And that says:

    root@ns354729:/mnt/sdb2# dmesg | tail
    ufs_read_super: bad magic number
    VFS: Can't find a romfs filesystem on dev sdb2.
    UDF-fs: warning (device sdb2): udf_load_vrs: No VRS found
    UDF-fs: warning (device sdb2): udf_fill_super: No partition found (2)
    XFS (sdb2): Invalid superblock magic number
    (mount,18813,1):ocfs2_fill_super:1038 ERROR: superblock probe failed!
    (mount,18813,1):ocfs2_fill_super:1229 ERROR: status = -22
    GFS2: not a GFS2 filesystem
    GFS2: gfs2 mount does not exist
    EXT4-fs (sdb2): VFS: Can't find ext4 filesystem

Any help?
You need to assemble the (degraded) RAID array, using something like:

    mdadm --assemble --readonly /dev/md0 /dev/sdb2

Of course, pick a number besides md0 if that's already in use. Then you can mount /dev/md0 (or, if it is actually LVM, etc., continue down the chain). You can, in the case of RAID1, also do this using loopback devices and an offset, but that's much more of a pain, and really is only worth attempting if the mdadm metadata has been destroyed.
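Once assembled, a read-only mount and a clean shutdown of the array could look like this (md0 and the mount point are example names):

    mkdir -p /mnt/raid
    mount -o ro /dev/md0 /mnt/raid
    # ... copy your data off ...
    umount /mnt/raid
    mdadm --stop /dev/md0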
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182915/" ] }
300,128
Note: I'm new to Python and I've never really used external modules like the ones listed below, so feel free to let me know if there's anything I could be doing better in order to get my program up and running. I'm currently working with a python (2.7.x) program that requires the use of the SciPy stack. The previous developer of the program was using Anaconda in order to access all external modules. In my case, I need to be able to run the entire program with a single command. For example:

    python myFile.py

will execute myFile.py (which has the following imports):

    from numpy import *
    from pylab import *
    import matplotlib.pyplot as plt

From what I understand, Anaconda is an IDE that requires you to execute code in a similar way to Visual Studios (i.e. a "Run" button). So my question is: Is there a way for me to do this directly from the command line? Note: The reason I'm specifying the use of Anaconda instead of just using the external modules themselves is that the SciPy website repeatedly mentions that it's easiest to just use a scientific python distribution like Anaconda or Python(x,y). Ultimately, I'm okay with any solution that allows me to run my program with the above imports.
Create the required Anaconda environment with conda create --name environmentName python=3 pandas numpy, including all your dependencies at once while creating the environment. Switch to the environment with conda activate environmentName. Then execute the python script: python fileName.py. You don't have to specify the python version on the command line, because the script runs inside the Anaconda environment; the version used will be whatever is specified in the environment (here, python3 was already requested when the environment was created).
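Put together (environmentName and fileName.py as above; conda run needs a reasonably recent conda, so treat that last line as optional):

    conda create --name environmentName python=3 pandas numpy
    conda activate environmentName
    python fileName.py
    # or, without activating first:
    conda run -n environmentName python fileName.py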
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111566/" ] }
300,144
We inherited a bunch of used servers from another team. Some of them have SELinux enabled on it, some do not. Because of SELinux, we are having trouble setting up passwordless ssh, our webserver, etc. We found a work around on this stackexchange site , which is to run: restorecon -R -v ~/.ssh However, since we don't need SELinux running for what we do, it might be easier to turn it off than for us to remember to have everyone run the above cmd on whatever dir needs permissions. Can we turn SELinux off w/o any repercussions down the road or is it better to just re-image the server? One thing to note; our IT group is really busy so re-imaging a server is not high on their list unless it's absolutely necessary (need a very good business case)...or someone bribes their boss with a bottle of scotch or whiskey. UPDATE: Thanks for everyone's suggestion and advice. These servers are all going to be used as internal dev servers. There isn't going to be any outside access to these machines so security isn't a high concern to us. Our current servers that we are using all (to the best of my knowledge) do not have SELinux enabled. Some of the ones my manager just acquired do and those are the ones we're looking at disabling so everything in our cluster is uniform.
SELinux is a security feature of the operating system. It is designed to help protect some parts of the server from other parts. For example, if you run a web server and have some "vulnerable" code that allows for an attacker to run arbitrary commands then SELinux can help mitigate this, by preventing your web server from accessing files it's not allowed to see. Now you can disable SELinux and it shouldn't break anything. The server will keep on working as normal. But you will have disabled one of the security features.
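If you do decide to turn it off, it may help to switch to permissive mode first and watch for denials before disabling it outright (standard SELinux tooling):

    getenforce            # Enforcing / Permissive / Disabled
    sudo setenforce 0     # permissive until the next boot
    # to make it permanent, set SELINUX=permissive (or disabled)
    # in /etc/selinux/config and reboot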
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/300144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30038/" ] }
301,225
While using fish as my shell, I'm trying to set permissions on a bunch of C source files in the current dir with:

    find . -type f -name "*.c" -exec chmod 644 {} +

I get an error:

    find: missing argument to `-exec'

With:

    find . -type f -name "*.c" -exec chmod 644 {} \;

I get an error:

    chmod: cannot access '': No such file or directory

What's wrong?
fish happens to be one of the few shells where that {} needs to be quoted. So, with that shell, you need:

    find . -type f -name '*.c' -exec chmod 644 '{}' +

When not quoted, {} expands to an empty argument, so the command becomes the same as:

    find . -type f -name '*.c' -exec chmod 644 '' +

And find complains about the missing {} (or ; as + is only recognised as the -exec terminator when following {}). With most other shells, you don't need the quotes around {}.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/301225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183043/" ] }
301,256
If you open a file that you don't have permission to write to in vim, then decide you need to change it, you can write your changes without exiting vim by doing :w !sudo tee % I don't understand how this can work. Can you please dissect this? I understand the :w part, it writes the current buffer to disk, assuming there already is a file name associated with it, right? I also understand the ! which executes the sudo tee command and % represents the current buffer content right? But still don't understand how this works.
The structure :w !cmd means "write the current buffer piped through command". So you can do, for example, :w !cat and it will pipe the buffer through cat. Now, % is the filename associated with the buffer. So :w !sudo tee % will pipe the contents of the buffer through sudo tee FILENAME. This effectively writes the contents of the buffer out to the file.
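A popular convenience mapping builds on this idea (goes in ~/.vimrc; the > /dev/null just suppresses tee echoing the file contents back):

    cmap w!! w !sudo tee % > /dev/null

After that, typing :w!! writes the current file with sudo.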
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3850/" ] }
301,266
I have installed an SSL certificate from Let's Encrypt with Certbot on my Apache server with Debian 8 following this tutorial from Let's Encrypt's own documentation: https://certbot.eff.org/#debianjessie-apache $ certbot --apache You need to specify the domains where you want to install the certificates for, but I only added the example.com domain. Now I want to add the www.example.com , but cannot find how to do this.
The existing answers are correct, but not everyone may be clear (I wasn't) about what is going on, especially after reading the official certbot docs on the subject. First you'll want to list your existing certificates, just to be clear on what you have already: sudo certbot certificates You'll notice each certificate has a "name". Let's say you have a certificate with a name of example.com , and it has a certificate for the domain example.com as well. You can use the certonly option to just update the certificate, and use the --cert-name option to specify exactly which certificate you are updating . Don't forget to include your existing domain as well as the new domain you are adding. sudo certbot certonly --cert-name example.com -d example.com,www.example.com If you trust certbot to figure out the correct certificate (analogous to the "I'm feeling lucky" button Google used to have for searches), it appears you can skip the --cert-name and use --expand instead. This way certbot will find which certificate you are referring to by picking the one that has a subset (a proper subset—the docs say a "strict subset") of the domains you indicate. sudo certbot certonly --expand -d example.com,www.example.com In all of these, whether you need --webroot depends on your particular configuration.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/301266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183067/" ] }
301,271
How can I grep for a combination of tab and star (*) characters in a text file? For example, input (fields are tab-separated):

    text  *  0  *   0  *  *  some_text
    text  *  9  45  9  0  0  some_text
    TEXT  *  0  *   0  0  *  some_text

I need to grep for a specific combination of tabs, stars and zeros, say for example:

    *  0  *  0  0  *

Expected output:

    TEXT  *  0  *  0  0  *  some_text

I can grep for stars, separately, using:

    grep '\*' input > output

I can grep for tabs, separately, using:

    grep -P '\t' input > output

But how can I combine both? I'm trying, unsuccessfully, the following combination:

    grep -P '\*\t0\t\*0\t0\*' input > output
Portably:

    tab=$(printf '\t')
    grep -F "*${tab}0${tab}*${tab}0${tab}0"

With some shells (ksh93, zsh, bash, mksh, FreeBSD sh), you can use:

    grep -F $'*\t0\t*\t0\t0'

($'\t' can also be written $'\u0009' or, on ASCII-based systems, $'\x09', $'\11' or $'\CI'.) Some grep implementations, like ast-open's one, recognise \t (or \x09) themselves as meaning a tab character. So you can do:

    grep '\*\t0\t\*\t0\t0'

(same with the other regexp types there: -E for ERE, -P for perl-like (similar to PCRE), -A for augmented). GNU grep (at least on GNU systems) doesn't recognise \t nor \x09 with BRE or ERE, but does with PCREs (when the support has been built in), (and also \x09 or \11).

    grep -P '\*\t0\t\*\t0\t0'

would work with GNU grep as long as PCRE support has been enabled (which tends to be the case on modern systems). Another portable solution is to use awk instead, where \t is universally supported:

    awk '/\*\t0\t\*\t0\t0/'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74555/" ] }
301,315
I want my bash script to execute a random command from a given list. For example:

    [mysterious command] ("command1", "command2", "command3")
Put your commands in an array:

    cmds=( "cmd1" "cmd2" "cmd3" )

$RANDOM is a random number, and ${#cmds[@]} evaluates to the length of your array (3 in this example). $(( RANDOM % ${#cmds[@]} )) will therefore be a random number between 0 and one less than the length of the array cmds, i.e. 0, 1, or 2:

    i=$(( RANDOM % ${#cmds[@]} ))

Doing the following would pick the string out of $cmds corresponding to the index $i and execute it as a command:

    ${cmds[i]}

or all in one go (which looks a bit horrible):

    ${cmds[RANDOM % ${#cmds[@]}]}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
301,318
First, apologies if this has been asked before - I searched for a while through the existing posts, but could not find support. I am interested in a solution for Fedora to OCR a multipage non-searchable PDF and to turn this PDF into a new PDF file that contains the text layer on top of the image. On Mac OSX or Windows we could use Adobe Acrobat, but is there a solution on Linux, specifically on Fedora? This seems to describe a solution - but unfortunately I am already lost when retrieving exact-image.
ocrmypdf does a good job and can be used like this:

    ocrmypdf in.pdf out.pdf

To install:

    pip install ocrmypdf

or

    sudo apt install ocrmypdf    # ubuntu
    sudo dnf -y install ocrmypdf # fedora
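ocrmypdf also has useful cleanup options; for instance (flag names taken from its documentation, so verify with ocrmypdf --help on your version):

    ocrmypdf --language eng --deskew --rotate-pages scan.pdf searchable.pdf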
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/301318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156074/" ] }
301,334
On Linux, what is the sixth character of the password hash stored in /etc/shadow? On my puppy-style Linux box, if I try to generate 100 random passwords using shuf and /dev/urandom, then the sixth character is / about half the time. My question is not for production purposes, since I boot it up every time fresh from CD. Does this mean that my system is misconfigured or insecure in some way? I ran file on shuf to see if it was a busybox link:

    file /usr/bin/shuf
    shuf: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, stripped

I don't think that shuf is a busybox link here:

    ls -l /usr/bin/shuf
    -rwxr-xr-x 1 root root 41568 Mar  7  2015 /usr/bin/shuf

while:

    ls -l /bin/wget
    lrwxrwxrwx 1 root root 14 Apr 29 03:49 wget -> ../bin/busybox

Here is a rough idea of what I did:

    # ! / b i n / b a s h
    #
    # don't try this on any real computer
    #
    # this is not a production script, it is just pseudo code
    # with pseudo results to illustrate a point
    #
    # for this run of 100 "random" passwords,
    # 46 of the 6th character of the hash stored in
    # '/etc/shadow' were '/'

    function is_this_really_a_random_password () {
        PERHAPS_RANDOM=''
        for (( Z=0 ; Z<=8 ; Z++ )) do
            PERHAPS_RANDOM="$PERHAPS_RANDOM$( shuf --head-count=1 --random-source=/dev/urandom $FILE_OF_SAFE_CHARACTERS )"
        done
        echo "$USER_NAME:$PERHAPS_RANDOM" | chpasswd
    }

    rm sixth-character-often-forward-slash.txt

    for (( I=1; I<=100; I++ )) do
        is_this_really_a_random_password
        grep --regexp=root /etc/shadow | cut --characters=-40 >> sixth-character-often-forward-slash.txt
    done

    root:$5$56YsS//DE$HasM6O8y2mnXbtgeE64zK
    root:$5$ho8pk/4/A6e/m0eW$XmjA5Up.0Xig1e
    root:$5$jBQ4f.t1$vY/T/1kX8nzAEK8vQD3Bho
    root:$5$BJ44S/Hn$CsnG00z6FB5daFteS5QCYE
    root:$5$Jerqgx/96/HlV$9Wms5n1FEiM3K93A8
    root:$5$qBbPLe4zYW$/zXRDqgjbllbsjkleCTB
    root:$5$37MrD/r0AlIC40n6$8hplf2c3DgtbM1
    root:$5$.4Tt5S6F.3K7l7E$dAIZzFvvWmw2uyC
    root:$5$A4dX4ZlOoE$6axanr4GLPyhDstWsQ9B
    root:$5$HXAGhryJ/5$40tgmo7q30yW6OF7RUOE
    root:$5$EzNb9t5d$/nQEbEAQyug7Dk9X3YXCEv
    root:$5$HHS5yDeSP$LPtbJeTr0/5Z33vvw87bU
    root:$5$sDgxZwTX5Sm$6Pzcizq4NcKsWEKEL15
    root:$5$FK1du/Paf/$hAy8Xe3UQv9HIpOAtLZ2
    root:$5$xTkuy/BLUDh/N$/30sESA.5nVr1zFwI
    root:$5$PV4AX/OjZ$VU8vX651q4eUqjFWbE2b/
    root:$5$iDuK0IUGijv4l$cdGh8BlHKJLYxPB8/
    root:$5$0DEUp/jz$JBpqllXswNc0bMJA5IFgem
    root:$5$Wz3og/W3Jra/WKA.$6D7Wd4M1xxRDEp
    root:$5$ntHWB.mC3x$Kt4DNTjRZZzpbFvxpMxP
    root:$5$g/uEc/cq$Ptlgu8CXV.vrjrmuok9RRT
    root:$5$/XAHs/5x$Z9J4Zt4k6NxdjJ27PpLmTt
    root:$5$mgfbZeWD0h/$UDGz8YX.D85PzeXnd2K
    root:$5$f4Oh3/bF2Ox/eN$xt/Jkn0LxPnfKP8.
root:$5$J0mZZXGJG7/v$e16VxghNvZZKRONown root:$5$SNza9XFl9i$Qq7r/N6Knt2j74no8H0x root:$5$aFCu//xiL$Ocn9mcT2izcnm3rUlBOJg root:$5$kMkyos/SLZ/Mm6$wNYxZ9QeuJ8c8T.o root:$5$ujXKC/Xnj0h/nQ$PUmePvJZr.UXmTGK root:$5$wtEhA/YKaTKH$6VCSXUiIdsfelkCYWV root:$5$I1taRlq59YZUGe$4OyIfByuvJeuwsjM root:$5$N54oH//j4nbiB$K4i6QOiS9iaaX.RiD root:$5$ps8bo/VjPGMP0y4$NTFkI6OeaMAQL7w root:$5$IRUXnXO8tSykA8$NatM5X/kKHHgtDLt root:$5$VaOgL/8V$m45M9glUYnlTKk8uCI7b5P root:$5$/lPDb/kUX73/F3$jJL.QLH5o9Ue9pVa root:$5$/sHNL/tVzuu//cr$QasvQxa02sXAHOl root:$5$hGI.SMi/7I$fYm0rZP0F5B2D1YezqtX root:$5$WsW2iENKA$4HhotPoLRc8ZbBVg4Z5QW root:$5$cN6mwqEl$q5S3U85cRuNHrlxS9Tl/PC root:$5$wwzLR/YMvk5/7ldQ$s3BJhq5LyrtZww root:$5$GUNvr/d15n8/K$CiNHwOkAtxuWJeNy1 root:$5$nGE75/8mEjM/A$pD/84iLunN/ZNI/JK root:$5$77Dn2dHLS$d5bUQhTz.OU4UA.67IGMB root:$5$EWrI//1u$uubkPk3YhAnwYXOYsvwbah root:$5$Hzfw1UCudP/N/U$Rjcdzdbov1YgozSJ root:$5$2y8CKTj.2eTq$7BEIgMWIzAJLl1SWBv root:$5$lcWsD/42g8zEEABA$r/vGxqqUZTkJ0V root:$5$LPJLc/Xz$tnfDgJh7BsAT1ikpn21l76 root:$5$ucvPeKw9eq8a$vTneH.4XasgBIeyGSA root:$5$Fwm2eUR7$ByjuLJRHoIFWnHtvayragS root:$5$yBl7BtMb$KlWGwBL6/WjgHVwXQh9fJS root:$5$1lnnh2kOG$rdTLjJsSpC3Iw4Y6nkPhq root:$5$WfvmP6cSfb066Z$1WvaC9iL11bPCAxa root:$5$qmf/hHvalWa4GE25$m3O2pdu25QBCwU root:$5$4P.oT/9HQ$Ygid4WXi0QCEObLVNsqFZ root:$5$FNr4Bkj56Y$38mG7mKV0mdb1PMCxrVd root:$5$hoNcyURtV$aTidBWHjngc1I0vUTi5bB root:$5$rzHmykYT$ATiXdUDUvUnB2fNMUQgwvE root:$5$o11Yb/ZQv2/k3wg9$5yShpVejDBk6HB root:$5$REPGN//y9H$awpPmUvCqvi6Bd/6bQxF root:$5$HbAEY/djXJx$y56GhMwavd7xTQ.jPg6 root:$5$3T1k5.LZUcy$Cup.LM5AnaBTIaJtBnF root:$5$wXaSC/P8bJ$y/0DoYJVjaP09O6GWiki root:$5$YuFfY8QPqm/dD$IIh0/tyn.18xEBl5Y root:$5$uTTBpjsKG//3Et8$9ibN9mVwSeVyOI4 root:$5$dASlMLzbVbFMnZ$N4uGBwGHhdg93z/V root:$5$03.FA/LnRBb.k7Zl$XOHU2ZlHkV9oz9 root:$5$2zL1p/VDCi$/QRT7Bo3cZ3Rxb8Y7ddo root:$5$0NpZqZs/qt/jIv.$8W/TTM3Gy2UMOWy root:$5$a4SXynoro7ucT$qFM2C79QJ15jQ0ZlL root:$5$RL0Eg/jroH8/ONP$EzceXz.pz74k104 root:$5$O3R5V/n1$U.mmCTbpID8xMXbvtzd4ch root:$5$0T2nVrv/P/xaRwUD$YVm17XF8kTsL0f root:$5$2bRwMNIXobZwn$Q228FJqg6/iRCe9GQ root:$5$PyYgL/axfgj/$uaL5y/kdzU4Kzi.JlB root:$5$A6QtfJdJ4Gwvx4$d4PA5AJ0806NzRnm root:$5$H8Mta5LDgGXp$QGdOJh.bFWgR3L719Z root:$5$H06URjv4BtOAbA$EJs1mZYhdKIVgCmn root:$5$OeB.O/GrmFB/az$SoE759KE9WIE17Uf root:$5$huiB9/sk$el3XMf7SGX81LnD3.SaF8J root:$5$fO7tfM.fjdSHA8G6$s.QIjfNniCzFdU root:$5$32at3SQJAD/xlw$HbXmBLVXTTyZfxQv root:$5$FHBFL/QdFl$FMipxpW0HlEFUIAr7IxF root:$5$sHvKf/M5OPdBuZZ$dz4qLOkTLGeCINX root:$5$hw4Vu/e34$/82lXu7ISrse.Ihk.qbqT root:$5$k1JOy/jRWZ$30YSk7kbhdKOjfDaiWVf root:$5$MnX.LUzqrB/B2$JuwqC.SmKFnMUWkEf root:$5$arRYf/PG$Xw6PpZNFO656p.Eb636iLt root:$5$5op/p8Hqs5$Nj2jA0Qxm80aG4fHW3oz root:$5$VHIT9/8yzZ$CpIK4ODps78GcqcsgiMT root:$5$.AlH7jBJoh/8$sjuVt.PcRH.vyvB3og root:$5$f7Ewinqm$nrJ2p/hKTuiEK//IfCTjth root:$5$N.dv/VCvrCADg$peSXfo35KN1dmbw/n root:$5$PSc4W./54l/SroH$CFFVOHRYK.Jj8Sp root:$5$8UBP3f4IcnAd/N1/$P.ud49qTStQ7Lw root:$5$qnXsZ/NlLZh/$nlaQVTS3FCJg1Jb2QG root:$5$xOpbbBqENR/7$boYJQzkCkZhRf7Uicf root:$5$V93tjZhzT$LrsIZWZmYo4ocRUvCixO6 root:$5$1MVz8/lf5oC/$rUKpnX23MhFx4.y2ZS Roughly half of the 6th hash characters are / : cat sixth-character-often-forward-slash.txt | cut --character=14 | sort / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / . . . . 2 5 6 8 8 B d D e e E f H I j j j J k k K l L M M n n N q r r r s S S t t T U U U U V w x X X X Z Z Z
Hash format and source

The format of the password hash is $<type>$<salt>$<hash>, where <type> 5 is an SHA-256 based hash. The salt is usually at least 8 characters (and is in the examples in the question), so the sixth character is part of the salt. Those hashes are likely generated by a version of the shadow tool suite (src package shadow in Debian, shadow-utils in CentOS). I tried to find out why, exactly, the code biases the slash. (Thanks to @thrig for originally digging up the code.) TLDR: It's a bit interesting, but doesn't matter.

The code generating the salt

In libmisc/salt.c, we find the gensalt function that calls l64a in a loop:

    strcat (salt, l64a (random()));
    do {
        strcat (salt, l64a (random()));
    } while (strlen (salt) < salt_size);

The loop takes a random number from random(), turns it into a piece of a string, and concatenates that to the string forming the salt. Repeat until enough characters are collected. What happens in l64a is more interesting though. The inner loop generates one character at a time from the input value (which came from random()):

    for (i = 0; value != 0 && i < 6; i++) {
        digit = value & 0x3f;

        if (digit < 2) {
            *s = digit + '.';
        } else if (digit < 12) {
            *s = digit + '0' - 2;
        } else if (digit < 38) {
            *s = digit + 'A' - 12;
        } else {
            *s = digit + 'a' - 38;
        }

        value >>= 6;
        s++;
    }

The first line of the loop (digit = value & 0x3f) picks six bits from the input value, and the if clauses turn the value formed by those into a character. (. for zero, / for a one, 0 for a two, etc.) l64a takes a long, but the values output by random() are limited to RAND_MAX, which appears to be 2147483647 or 2^31 - 1 on glibc. So, the value that goes to l64a is a random number of 31 bits. By taking 6 bits at a time of a 31-bit value, we get five reasonably evenly distributed characters, plus a sixth that only comes from one bit! The last character generated by l64a cannot be a . , however, since the loop also has the condition value != 0, and instead of a . as sixth character, l64a returns only five characters. Hence, half the time, the sixth character is a /, and half the time l64a returns five or fewer characters. In the latter case, a following l64a can also generate a slash in the first positions, so in a full salt, the sixth character should be a slash a bit more than half the time. The code also has a function to randomize the length of the salt: it's 8 to 16 bytes. The same bias for the slash character happens also with further calls to l64a, which would cause the 11th and 12th characters to also have a slash more often than anything else. The 100 salts presented in the question have 46 slashes in the sixth position, and 13 and 15 in the 11th and 12th positions, respectively (a bit less than half of the salts are shorter than 11 characters).

On Debian

On Debian, I couldn't reproduce this with a straight chpasswd as shown in the question. But chpasswd -c SHA256 shows the same behaviour. According to the manual, the default action, without -c, is to let PAM handle the hashing, so apparently PAM on Debian at least uses different code to generate the salt. I didn't look at the PAM code on any distribution, however. (The previous version of this answer stated the effect didn't appear on Debian. That wasn't correct.)

Significance, and requirements for salts

Does it matter, though? As @RemcoGerlich commented, it's pretty much only a question of encoding.
It will effectively fix some bits of the salt to zero, but it's likely that this will have no significant effect in this case, since the origin of those bits is this call to srandom in seedRNG:

    srandom (tv.tv_sec ^ tv.tv_usec ^ getpid ());

This is a variant of ye olde custom of seeding an RNG with the current time. (tv_sec and tv_usec are the seconds and microseconds of the current time, getpid() gives the process id of the running process.) As the time and PIDs are not very unpredictable, the amount of randomness here is likely not larger than what the encoding can hold. The time and PID are not something you'd like to create keys with, but might be unpredictable enough for salts. Salts must be distinct to prevent brute-force testing of multiple password hashes with a single calculation, but should also be unpredictable, to prevent or slow down targeted precomputation, which could be used to shorten the time from getting the password hashes to getting the actual passwords. Even with the slight issues, as long as the algorithm doesn't generate the same salt for different passwords, it should be fine. And it doesn't seem to, even when generating a couple dozen in a loop, as the list in the question shows. Also, the code in question isn't used for anything but generating salts for passwords, so there are no implications about problems elsewhere. For salts, see also, e.g., this on Stack Overflow and this on security.SE.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183124/" ] }
301,341
I'm using rsync to backup my data to an external USB3 disk (encrypted with dmcrypt/luks) and the problem is that the transfer hangs on a file, for an amount of time that can go from seconds to minutes, and then resumes with no errors or issues. This happens to several files (apparently random) during the same rsync "session" making it very slow, even though some file transfers can reach speeds of 100MB/s. I'm running Debian Jessie 8.5, rsync is at version 3.1.1, the source file system is formatted with btrfs (version 3.17) and the external disk was encrypted with crypsetup 1.6.6. The encrypted partition was formatted with btrfs, but after noticing this issue and finding this apparently unrelated ubuntu bug , I reformatted the partition to ext4 and, although it seemed to make the issue less frequent, the problem was still there. During these "hangs" no strange CPU or memory usage is detected but disk reads and writes drop to zero. This is an iotop output during a freeze: Total DISK READ : 0.00 B/s | Total DISK WRITE : 0.00 B/sActual DISK READ: 0.00 B/s | Actual DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND21879 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/6:1] 1085 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/3:2]31994 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/4:3] 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] 3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0] 5 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H] 7 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched] The kworker processes are always changing but keep the 99% IO. This is an iostat output during one of the freezes (the external disk is sdg): avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 0.25 99.75 0.00 0.00Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sdg 0.00 0.00 0.00 141.00 0.00 16920.00 240.00 135.94 868.20 0.00 868.20 7.09 100.00dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20343.88 0.00 0.00 0.00 0.00 100.00 I also ran ps aux | awk '$8 ~ /D/ { print $0 }' with watch and during the freeze it's this: root 1080 0.1 0.0 0 0 ? D 16:23 0:00 [kworker/0:0]root 5851 0.0 0.0 0 0 ? D 01:41 0:02 [btrfs-transacti]root 17455 4.4 0.0 105028 5192 pts/3 D+ 14:10 6:11 rsync -avr --stats --progress --inplace --delete /data/ /media/BKP-DISK/root 24219 0.1 0.0 0 0 ? D 15:16 0:08 [kworker/5:0]root 31892 0.2 0.0 0 0 ? D Aug02 2:08 [usb-storage]root 31956 0.1 0.0 0 0 ? D 15:41 0:04 [kworker/7:0]root 31994 0.0 0.0 0 0 ? D 15:42 0:01 [kworker/4:3]root 32100 0.1 0.0 0 0 ? D 15:52 0:03 [kworker/u16:2] When the transfer is running ok it's this: root 17453 4.4 0.1 105020 33304 pts/3 D+ 14:10 6:32 rsync -avr --stats --progress --inplace --delete /data/ /media/BKP-DISK/ I'm out of ideas and know-how so I need help to troubleshoot this further. 
Edit @derobert I tested in an USB2 port but the issue continues to appear (found a gap of 11 seconds in the strace log and then stopped the test). The last dmesg backtrace was when the external disk was still formatted with btrfs and here's the output (there were more but all the same): INFO: task kworker/u16:21:12881 blocked for more than 120 seconds. Not tainted 3.16.0-4-amd64 #1"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.kworker/u16:21 D ffff8807f72bfa48 0 12881 2 0x00000000Workqueue: btrfs-endio-write btrfs_endio_write_helper [btrfs] ffff8807f72bf5f0 0000000000000046 0000000000012f00 ffff88022dcfbfd8 0000000000012f00 ffff8807f72bf5f0 ffff880103b92c00 ffff88026de241f0 ffff88026de241f0 0000000000000001 0000000000000000 ffff88010c4662d8Call Trace: [<ffffffffa02843ef>] ? wait_current_trans.isra.20+0x9f/0xf0 [btrfs] [<ffffffff810a7e60>] ? prepare_to_wait_event+0xf0/0xf0 [<ffffffffa0285948>] ? start_transaction+0x298/0x570 [btrfs] [<ffffffffa028da90>] ? btrfs_finish_ordered_io+0x250/0x5c0 [btrfs] [<ffffffffa02b2f25>] ? normal_work_helper+0xb5/0x290 [btrfs] [<ffffffff810817c2>] ? process_one_work+0x172/0x420 [<ffffffff81081e53>] ? worker_thread+0x113/0x4f0 [<ffffffff81510d61>] ? __schedule+0x2b1/0x700 [<ffffffff81081d40>] ? rescuer_thread+0x2d0/0x2d0 [<ffffffff8108809d>] ? kthread+0xbd/0xe0 [<ffffffff81087fe0>] ? kthread_create_on_node+0x180/0x180 [<ffffffff81514958>] ? ret_from_fork+0x58/0x90 [<ffffffff81087fe0>] ? kthread_create_on_node+0x180/0x180 @roaima Since I run rsync with --progress I can see the current state of the transfer and that's how I first caught the issue. For example, for a file that is 1GB it could hang at 100MB and I would see all the transfer info (transfered bytes, speed, etc) stop updating (this is where iotop would show disk reads and write at 0), and when the info would start updating again iotop would show normal read and write values. @activesheetd Here is a section of the strace log (I added the timestamp option): 29253 03:47:18 <... select resumed> ) = 1 (in [0], left {59, 999999})29253 03:47:18 read(0, "\355\1H\347?~\0\255", 8) = 829251 03:47:18 select(6, [5], [4], [5], {60, 0} <unfinished ...>29253 03:47:18 write(1, "\235\356\374|\f\230\310u\330{\7\24\3169<\255\213>\347m\335kX\350\234\253\1\226M\6#\341"..., 262144) = 26214429253 03:47:31 select(1, [0], [], [0], {60, 0}) = 1 (in [0], left {59, 999997})29253 03:47:31 read(0, <unfinished ...>29251 03:47:31 <... select resumed> ) = 1 (out [4], left {47, 597230}) Between the 4th and the 5th lines we can see a gap of 13 seconds, which corresponded with a hang, and then it resumed. @Fiximan The log option doesn't give me more info on this problem. Since the freeze is in the middle of a file transfer for the logs it's like nothing happened (even the strace logs show timestamp gaps).
Hash format and source The format of the password hash is $<type>$<salt>$<hash> , where <type> 5 is an SHA-256 based hash. The salt is usually at least 8 characters, (and is in the examples in the question) so the sixth character is part of the salt. Those hashes are likely generated by a version of the shadow tool suite (src package shadow in Debian, shadow-utils in CentOS) I tried to find out why, exactly, the code biases the slash. (thanks to @thrig for originally digging up the code.) TLDR: It's a bit interesting, but doesn't matter. The code generating the salt In libmisc/salt.c , we find the gensalt function that calls l64a in a loop: strcat (salt, l64a (random()));do { strcat (salt, l64a (random()));} while (strlen (salt) < salt_size); The loop takes a random number from random() , turns it into a piece of a string, and concatenates that to the string forming the salt. Repeat until enough characters are collected. What happens in l64a is more interesting though. The inner loop generates one character at a time from the input value (which came from random() ): for (i = 0; value != 0 && i < 6; i++) { digit = value & 0x3f; if (digit < 2) { *s = digit + '.'; } else if (digit < 12) { *s = digit + '0' - 2; } else if (digit < 38) { *s = digit + 'A' - 12; } else { *s = digit + 'a' - 38; } value >>= 6; s++;} The first line of the loop ( digit = value & 0x3f ) picks six bits from the input value, and the if clauses turn the value formed by those into a character. ( . for zero, / for a one, 0 for a two, etc.) l64a takes a long but the values output by random() are limited to RAND_MAX , which appears to be 2147483647 or 2^31 - 1 on glibc. So, the value that goes to l64a is a random number of 31 bits. By taking 6 bits at a time or a 31 bit value, we get five reasonably evenly distributed characters, plus a sixth that only comes from one bit! The last character generated by l64a cannot be a . , however, since the loop also has the condition value != 0 , and instead of a . as sixth character, l64a returns only five characters. Hence, half the time, the sixth character is a / , and half the time l64a returns five or fewer characters. In the latter case, a following l64a can also generate a slash in the first positions, so in a full salt, the sixth character should be a slash a bit more than half the time. The code also has a function to randomize the length of the salt, it's 8 to 16 bytes. The same bias for the slash character happens also with further calls to l64a which would cause the 11th and 12th character to also have a slash more often than anything else. The 100 salts presented in the question have 46 slashes in the sixth position, and 13 and 15 in the 11th and 12th position, respectively. (a bit less than half of the salts are shorter than 11 characters). On Debian On Debian, I couldn't reproduce this with a straight chpasswd as shown in the question. But chpasswd -c SHA256 shows the same behaviour. According to the manual, the default action, without -c , is to let PAM handle the hashing, so apparently PAM on Debian at least uses a different code to generate the salt. I didn't look at the PAM code on any distribution, however. (The previous version of this answer stated the effect didn't appear on Debian. That wasn't correct.) Significance, and requirements for salts Does it matter, though? As @RemcoGerlich commented, it's pretty much only a question of encoding. 
It will effectively fix some bits of the salt to zero, but it's likely that this will have no significant effect in this case, since the origin of those bits is this call to srandom in seedRNG : srandom (tv.tv_sec ^ tv.tv_usec ^ getpid ()); This is a variant of ye olde custom of seeding an RNG with the current time. ( tv_sec and tv_usec are the seconds and microseconds of the current time, getpid() gives the process ID of the running process.) As the time and PIDs are not very unpredictable, the amount of randomness here is likely not larger than what the encoding can hold. The time and PID are not something you'd like to create keys with, but they might be unpredictable enough for salts. Salts must be distinct to prevent brute-force testing multiple password hashes with a single calculation, but should also be unpredictable, to prevent or slow down targeted precomputation, which could be used to shorten the time from getting the password hashes to getting the actual passwords. Even with the slight issues, as long as the algorithm doesn't generate the same salt for different passwords, it should be fine. And it doesn't seem to, even when generating a couple dozen in a loop, as the list in the question shows. Also, the code in question isn't used for anything but generating salts for passwords, so there are no implications about problems elsewhere. For salts, see also, e.g. this on Stack Overflow and this on security.SE . Conclusion In conclusion, there's nothing wrong with your system. Making sure your passwords are any good, and not used on unrelated systems, is more useful to think about.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183100/" ] }
301,378
I want to list all running processes. Each process should be listed with: PID user name group name Also, the parent/child hierarchy of the processes should be displayed.
The magic combination is ps axfo pid,euser,egroup,args Here is an output example on Ubuntu 16.04: $ ps axfo pid,euser,egroup,args PID EUSER EGROUP COMMAND 2 root root [kthreadd] 3 root root \_ [ksoftirqd/0] 4 root root \_ [kworker/0:0] 5 root root \_ [kworker/0:0H] 6 root root \_ [kworker/u4:0] 7 root root \_ [rcu_sched] 8 root root \_ [rcu_bh] 9 root root \_ [migration/0] 10 root root \_ [watchdog/0] 11 root root \_ [watchdog/1] 12 root root \_ [migration/1] 13 root root \_ [ksoftirqd/1] 14 root root \_ [kworker/1:0] 15 root root \_ [kworker/1:0H] 16 root root \_ [kdevtmpfs] 17 root root \_ [netns] 18 root root \_ [perf] 19 root root \_ [khungtaskd] 20 root root \_ [writeback] 21 root root \_ [ksmd] 22 root root \_ [khugepaged] 23 root root \_ [crypto] 24 root root \_ [kintegrityd] 25 root root \_ [bioset] 26 root root \_ [kblockd] 27 root root \_ [ata_sff] 28 root root \_ [md] 29 root root \_ [devfreq_wq] 30 root root \_ [kworker/u4:1] 31 root root \_ [kworker/1:1] 32 root root \_ [kworker/0:1] 34 root root \_ [kswapd0] 35 root root \_ [vmstat] 36 root root \_ [fsnotify_mark] 37 root root \_ [ecryptfs-kthrea] 53 root root \_ [kthrotld] 54 root root \_ [acpi_thermal_pm] 55 root root \_ [bioset] 56 root root \_ [bioset] 57 root root \_ [bioset] 58 root root \_ [bioset] 59 root root \_ [bioset] 60 root root \_ [bioset] 61 root root \_ [bioset] 62 root root \_ [bioset] 63 root root \_ [bioset] 64 root root \_ [bioset] 65 root root \_ [bioset] 66 root root \_ [bioset] 67 root root \_ [bioset] 68 root root \_ [bioset] 69 root root \_ [bioset] 70 root root \_ [bioset] 71 root root \_ [bioset] 72 root root \_ [bioset] 73 root root \_ [bioset] 74 root root \_ [bioset] 75 root root \_ [bioset] 76 root root \_ [bioset] 77 root root \_ [bioset] 78 root root \_ [bioset] 79 root root \_ [scsi_eh_0] 80 root root \_ [scsi_tmf_0] 81 root root \_ [scsi_eh_1] 82 root root \_ [scsi_tmf_1] 83 root root \_ [kworker/u4:2] 87 root root \_ [ipv6_addrconf] 88 root root \_ [kworker/1:2] 89 root root \_ [kworker/u4:3] 102 root root \_ [deferwq] 103 root root \_ [charger_manager] 221 root root \_ [kpsmoused] 242 root root \_ [kworker/0:2] 506 root root \_ [mpt_poll_0] 509 root root \_ [mpt/0] 513 root root \_ [scsi_eh_2] 514 root root \_ [scsi_tmf_2] 515 root root \_ [bioset] 517 root root \_ [bioset] 662 root root \_ [raid5wq] 695 root root \_ [bioset] 736 root root \_ [jbd2/sda1-8] 737 root root \_ [ext4-rsv-conver] 802 root root \_ [iscsi_eh] 805 root root \_ [ib_addr] 806 root root \_ [ib_mcast] 807 root root \_ [ib_nl_sa_wq] 808 root root \_ [ib_cm] 809 root root \_ [iw_cm_wq] 810 root root \_ [rdma_cm] 824 root root \_ [kauditd] 1198 root root \_ [iprt-VBoxWQueue] 1778 root root \_ [kworker/1:1H] 1800 root root \_ [kworker/0:1H] 1854 root root \_ [kworker/1:3] 2524 root root \_ [kworker/0:3] 1 root root /sbin/init 794 root root /lib/systemd/systemd-journald 848 root root /sbin/lvmetad -f 872 root root /lib/systemd/systemd-udevd 1815 systemd+ systemd+ /lib/systemd/systemd-timesyncd 1836 root root /usr/sbin/cron -f 1838 daemon daemon /usr/sbin/atd -f 1840 root root /lib/systemd/systemd-logind 1851 root root /usr/sbin/acpid 1853 syslog syslog /usr/sbin/rsyslogd -n 1860 root root /usr/bin/lxcfs /var/lib/lxcfs/ 1865 root root /usr/lib/accountsservice/accounts-daemon 1870 root root /usr/lib/snapd/snapd 1875 message+ message+ /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation 1888 root root /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog 1890 
root root /usr/lib/policykit-1/polkitd --no-debug 1995 root root /sbin/dhclient -1 -v -pf /run/dhclient.enp0s3.pid -lf /var/lib/dhcp/dhclient.enp0s3.leases -I -df /var/lib/dhcp/dhclient6.enp0s3.lease 2184 root root /sbin/iscsid 2185 root root /sbin/iscsid 2288 root root /usr/sbin/irqbalance --pid=/var/run/irqbalance.pid 2294 root root /usr/sbin/sshd -D 2566 root root \_ sshd: ubuntu [priv] 2602 ubuntu ubuntu \_ sshd: ubuntu@pts/0 2603 ubuntu ubuntu \_ -bash 2618 ubuntu ubuntu \_ ps axfo pid,euser,egroup,args 2301 root root /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220 2305 root root /sbin/agetty --noclear tty1 linux 2568 ubuntu ubuntu /lib/systemd/systemd --user 2570 ubuntu ubuntu \_ (sd-pam) What I usually do is - I create an alias and drop it in a convenient place such as /etc/profile.d/aliases.sh with the following content: alias psx='ps axfo pid,euser,egroup,args' And then I only need to run psx to get the nice, informative output shown above.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57907/" ] }
301,426
I want to write the following bash function in a way that it can accept its input from either an argument or a pipe: b64decode() { echo "$1" | base64 --decode; echo} Desired usage: $ b64decode "QWxhZGRpbjpvcGVuIHNlc2FtZQo="$ b64decode < file.txt$ b64decode <<< "QWxhZGRpbjpvcGVuIHNlc2FtZQo="$ echo "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" | b64decode
See Stéphane Chazelas's answer for a better solution. You can use /dev/stdin to read from standard input b64decode(){ if (( $# == 0 )) ; then base64 --decode < /dev/stdin echo else base64 --decode <<< "$1" echo fi} $# == 0 checks if number of command line arguments is zero base64 --decode <<< "$1" one can also use herestring instead of using echo and piping to base64
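As a quick sanity check with the example string from the question (it is the classic "Aladdin" test vector), both call styles should agree:
$ b64decode "QWxhZGRpbjpvcGVuIHNlc2FtZQo="
Aladdin:open sesame
$ echo "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" | b64decode
Aladdin:open sesame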
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183200/" ] }
301,434
I have a remote server with an rsync share and I need to pull just the most recent 7 days of files. I can SEND files based on date by using: find /path -mtime 7 ... -exec rsync -a but I can't just login to the server and send the files to my local machine so I need a way to use rsync to get all the files that are less than 7 days old and grab them. I cannot simply sync everything as the total data on the server exceeds 100TB and I don't have access to the remote server other than the rsync module.
Surprisingly, this can actually be done with rsync(1) alone, without shell access to the remote server. Assuming bash(1) and GNU date(1) : #! /usr/bin/env bashsrc=rsync://example.com/archives/dst=/path/to/mirrorscutoff=$( date -d '7 days ago' +%s )rsync -na --no-motd --out-format='%M %f' "$src" "$dst" | \ while IFS= read -r line; do d=${line%% *} fn=${line#* } fdate=$( date -d "${d/-/ }" +%s ) || continue if [ $fdate -ge $cutoff ]; then printf '%s\0' "$fn"; fi done | \ rsync -a --files-from=- -0 "$src" "$dst"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61289/" ] }
301,437
How can I create a data file with one column in which there will be 1000 rows with zero values? something like: output:00000.. .
You might use yes(1) for that (piped into head(1) ...): yes 0 | head -n 1000 > data_file_with_a_thousand_0s.txt and if you need a million zeros, replace the 1000 with 1000000. P.S. In the old days, head -1000 was enough, since it was equivalent to today's head -n 1000.
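To double-check the result: wc -l data_file_with_a_thousand_0s.txt should report 1000 lines, and sort -u data_file_with_a_thousand_0s.txt should print a single 0.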
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
301,483
I'm trying to copy a large directory from one drive to another. I mistakenly logged out before it was finished so only about 80% of the files copied over. Is there away to copy the remaining files without starting from scratch?
I would try, rsync -a /from/file /dest/file you can use other options like --append , -P (--partial --progress) . See man rsync for more info. Or if you are using cp then use cp -u . from man cp : -u, --update copy only when the SOURCE file is newer than the destination file or when the destination file is missing.
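For example (the paths are placeholders for your actual mount points): rsync -aP /mnt/olddrive/bigdir/ /mnt/newdrive/bigdir/ — note the trailing slashes, which copy the directory's contents rather than nesting bigdir inside the destination. Files that already arrived intact are skipped based on size and modification time, so re-running the command after an interruption is cheap.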
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183260/" ] }
301,530
I have to run some tests on a server at the University. I have ssh access to the server from the desktop in my office. I want to launch a python script on the server that will run several tests during the weekend. The desktop in the office will go on standby during the weekend and as such it is essential that the process continues to run on the server even when the SSH session gets terminated. I know about nohup and screen and tmux , as described in questions like: How to keep processes running after ending ssh session? How can I close a terminal without killing the command running in it? What am I doing right now is: ssh username@server tmux python3 run_my_tests.py -> this script does a bunch of subprocess.check_output of an other script which itself launches some Java processes. Tests run fine. I use Ctrl+B, D and I detach the session. When doing tmux attach I reobtain the tmux session which is still running fine, no errors whatsoever . I kept checking this for minutes and the tests run fine. I close the ssh session After this if I log in to the server via SSH, I do am able to reattach to the running tmux session, however what I see is something like: Traceback (most recent call last): File "run_my_examples.py", line 70, in <module> File "run_my_examples.py", line 62, in run_cmd_aggr File "run_my_examples.py", line 41, in run_cmd File "/usr/lib64/python3.4/subprocess.py", line 537, in call with Popen(*popenargs, **kwargs) as p: File "/usr/lib64/python3.4/subprocess.py", line 858, in __init__ restore_signals, start_new_session) File "/usr/lib64/python3.4/subprocess.py", line 1456, in _execute_child raise child_exception_type(errno_num, err_msg)PermissionError: [Errno 13] Permission denied I.e. the process that was spawning my running tests, right after the end of the SSH session , was completely unable to spawn other subprocesses. I have chmod ed the permissions of all files involved and nothing changes. I believe the servers use Kerberos for login/permissions, the server is Scientific Linux 7.2. Could it be possible that the permissions of spawning new processes get removed when I log off from the ssh sessions? Is there something I can do about it? I have to launch several tests, with no idea how much time or space they will take... The version of systemd is 219 The file system is AFS, using fs listacl <name> I can confirm that I do have permissions over the directories/files that are used by the script.
Thanks to Mark Plotnick I was able to identify and fix the issue. The problem is the interaction between the AFS file system used by the server and Kerberos handling the authentication. The same issue was brought up in this question on SO . Basically what is happening is that when I ssh into the server, Kerberos gives the authentication token to the session. This token is used also to access the AFS file system. When closing the SSH session this token gets destroyed and the processes running start to get permission denied errors when trying to access files on the AFS. The way to fix this is to start a new window inside screen / tmux and launch the command: kinit && aklog After that you can detach from screen / tmux and close the ssh session safely. The commands above create new Kerberos tokens and associate those with the screen / tmux session, in this way when the ssh connection is closed the initial tokens get revoked but since the subprocesses now use those you created they don't suffer permission denied errors. To summarize: ssh username@server tmux Launch the process you need to keep running Create a new window with Ctrl+B, C kinit && aklog Detach from the session with Ctrl+B, D Close ssh session
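Before detaching, you can verify that the fresh credentials took effect: klist should show a krbtgt ticket with a new expiry time, and — if the OpenAFS client utilities are installed — tokens should list a valid AFS token for your cell.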
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24712/" ] }
301,570
Why is it that find prints out a leading ./ to results if no paths are given? $ find./file1./file2./file3 What is the reason for not printing out this? $ findfile1file2file3
The reason why you see this is because the developer of GNU find chose to provide a "reasonable" behavior for find when no path is given. In contrast, POSIX doesn't state that the parameter is optional: The find utility shall recursively descend the directory hierarchy from each file specified by path , evaluating a Boolean expression composed of the primaries described in the OPERANDS section for each file encountered. Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The relative portion shall contain no dot or dot-dot components, no trailing characters, and only single <slash> characters between pathname components. You can see the difference in the synopsis for each. GNU has (as is the convention) optional items in square brackets: find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression] while POSIX doesn't indicate that it can be optional: find [-H|-L] path... [operand_expression...] In the GNU program, that's done in ftsfind.c : if (empty) { /* * We use a temporary variable here because some actions modify * the path temporarily. Hence if we use a string constant, * we get a coredump. The best example of this is if we say * "find -printf %H" (note, not "find . -printf %H"). */ char defaultpath[2] = "."; return find (defaultpath); } and a literal "." is used for simplicity. So you'll see the same result with find and find . because (and POSIX agrees) the given path will be used to prefix the results (see above for concatenation ). With a little work, one could determine when the feature was first added; it was present in the initial creation of "findutils" in 1996 (see find.c ): + /* If no paths are given, default to ".". */+ for (i = 1; i < argc && strchr ("-!(),", argv[i][0]) == NULL; i++)+ process_top_path (argv[i]);+ if (i == 1)+ process_top_path (".");++ exit (exit_status);+} From the changelog for find 3.8, this was apparently Sat Dec 15 19:01:12 1990 David J. MacKenzie (djm at egypt) * find.c (main), util.c (usage): Make directory args optional, defaulting to "."
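You can confirm the equivalence yourself (GNU find; strictly POSIX implementations may instead reject the missing path operand):
$ touch file1
$ find -maxdepth 1 -name file1
./file1
$ find . -maxdepth 1 -name file1
./file1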
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/301570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27926/" ] }
301,582
I have data like this (the real data has over 50,000 digits and 8000 rows): input: 1 111221 211212 222212 111223 211213 11122 I want to put the value of each second row beside the value of the first row with the same name. Also, there should be two spaces as the delimiter between the two values of each pair, and one tab as the delimiter between different pairs of values. The output should look like: output: 1 1 2 1 1 1 1 2 2 2 12 2 1 2 1 2 1 2 2 1 23 2 1 1 1 1 1 2 2 1 2 Any suggestions?
I'd use perl, and run it as a one-liner like this: perl -wne 'sub parseline { ($id,$v) = split; return split //,$v }; @a = parseline(); print "$id\t"; $_ = <>; @b = parseline(); for ($i=0; $i<@a; $i++) { print "$a[$i]  $b[$i]\t" }; print "\n"' < input > output Explanation: perl -wne runs the rest of the command for each line of input sub parseline { .... } will parse the input, set the first number in the line as $id , and return the rest as an array of characters. @a=parseline() will store the first line's chars in array @a next, we print $id , followed by a TAB ( \t ) $_=<>; @b=parseline(); will read the next (even) line and put its data in array @b for ($i=0; $i<@a; $i++) { print "$a[$i]  $b[$i]\t" } for each element of the array @a , we will print that element, two spaces, the corresponding element from array @b and then a tab print "\n" will print a newline at the end due to the -n parameter to perl at the start, the whole process will restart with line 3, then 5, then 7 etc. < input > output indicates from which file we read our input, and to which file we write output. Note: the code will print an extra tab at the end of each line. Removing it is left as an exercise for the reader to prevent crowdsourced homework assignments and keep the code a little simpler. Also the code assumes that lines to pair always come two at a time, one after another (as given in the example). As it processes the input file line by line, it easily scales linearly to many thousands of lines...
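If you later decide the trailing tab has to go after all, a quick post-processing pass with GNU sed does it: sed -i 's/\t$//' output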
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
301,587
Today, for the first time in my life, I saw a tar.xz download. I searched the internet and found Wikipedia articles ( xz and XZ Utils ). An interesting quote about the users of xz: xz has gained notability for compressing packages in the GNU coreutils project,[7] Debian family of systems deb (file format), openSUSE,[8] Fedora,[9] Arch Linux,[10] Slackware,[11] FreeBSD,[12] Gentoo,[13] GNOME,[14] and TeX Live,[15] as well as being an option to compress a compiled Linux kernel.[16] In March 2013, kernel.org announced the use of xz as the default compressed file format for distributing kernel archive files.[17] I always use tar.gz . When and why should I use tar.xz ? What's the use case? I found out after the first comment that a similar question was already posted. I often compress mongodump/mongoexport (BSON/JSON) and mysqldump (SQL text). Is there an advantage to using tar.xz for those backups?
gzip and xz use two different algorithms, and therefore they perform differently, both in terms of what level of compression they achieve and in terms of the amount of resources they consume while compressing or decompressing. In general, xz achieves higher compression ratios, but needs a lot more memory and time. I personally use xz for archiving data; big files that I need to put away for a long time. I use gzip otherwise, since it's usually quicker. Do test them both and see how they perform on your average tar (or whatever) file.
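A rough way to benchmark them on your own backups ( dump.sql is a placeholder for one of your mysqldump files): time gzip -9 < dump.sql > dump.sql.gz ; time xz < dump.sql > dump.sql.xz ; ls -l dump.sql* Textual data such as SQL and JSON dumps tends to compress noticeably better with xz, at the cost of more CPU time and memory.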
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
301,646
Every once in a while, I find the need to do: cp /really/long/path/to/file.txt /totally/different/long/path/to/copy.txt Since I use autojump , getting to the directories is very fast and easy. However, I'm at a loss when it comes to copying from one directory to the other without having to type out at least one of the full paths. In a GUI filesystem navigator, this is easy: navigate to the first directory; Copy the original file; navigate to the second directory; and Paste . But with cp , it seems like I can't do the copy in two steps. I'm looking to do something like the following: (use autojump to navigate to the first directory)$ copy file.txt(use autojump to navigate to the second directory)$ paste copy.txt Instead of the longer-to-type: (use autojump to navigate to the first directory)$ cp file.txt /totally/different/long/path/to/copy.txt Is there a tool that provides the functionality I'm looking for? I'm using Zsh on OS X El Capitan.
The below works in bash . I haven't tried it in zsh . Try: echo ~- # Just to make sure you know what the "last directory" is Then: cp file.txt ~-/copy.txt Also see: More examples of use of ~- (and its interaction with pushd and popd ) Is it possible to name a part of a command to reuse it in the same command later on?
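A minimal demonstration with the paths from the question: cd /really/long/path/to ; cd /totally/different/long/path/to ; echo ~- now prints /really/long/path/to , so cp ~-/file.txt copy.txt performs the copy. ~- is shorthand for $OLDPWD , and zsh expands it too.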
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176805/" ] }
301,664
I'm doing multiple raid monitoring in the same script and I want to have the script send alert/go red if EITHER variable comes back null. I tried reading up and thought I had it, but what I tried ended up just never failing. For testing, I have it grep fail and this SHOULD cause it to fail, but so far, I can't get it to fail, it actually just always passes. The Test Environment looks like this : var="$(sudo /usr/StorMan/arcconf GETCONFIG 1 LD 0 | grep Optimal)"var1="$(sudo /usr/StorMan/arcconf GETCONFIG 1 LD 1 | grep fail)" This works for 1 Variable if [ -z "$var" ] I have tried if [ -z "$var" ] && [ -z "$var1" ]if [ -z "$var" && -z "$var1" ]if [[ -z "$var" && -z "$var1" ]] But to no avail, I'm sure somebody would know what I'm doing wrong in a heartbeat, I appreciate the time taken to read this!
Use || rather than && , e.g., if [ -z "$var" ] || [ -z "$var1" ] The bash manual explains it: AND and OR lists are sequences of one or more pipelines separated by the control operators && and || , respectively. AND and OR lists are executed with left associativity.
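Putting it together with your variables (the alert action is just a placeholder — substitute your real notification command):
if [ -z "$var" ] || [ -z "$var1" ]; then
    echo "RAID ALERT: at least one array is not Optimal"
fi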
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130767/" ] }
301,685
I have an Ubuntu server running Redis, which suffers from a high load problem. Forensics Uptime # uptime05:43:53 up 19 min, 1 user, load average: 2.96, 2.07, 1.52 sar # sar -q 05:24:00 AM LINUX RESTART05:25:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked05:35:04 AM 0 116 3.41 2.27 1.20 4Average: 0 116 3.41 2.27 1.20 4 htop The CPU is utilization in htop is embarrassingly low: top netstat 34 open redis-server connections: $ sudo netstat -natp | grep redis-server | wc -l34 free $ free -g total used free shared buffers cachedMem: 14 6 8 0 0 2-/+ buffers/cache: 4 10Swap: 0 0 0 How do I know which processes are causing the high load, waiting to enter the Running state? Is the number of connections too high?
You're seeing the unexpected loadavg because of high iowait. 98.7 in the wa section of top shows this. From your screenshots I see the kworker process is also in uninterruptible sleep (state of D within top) which occurs when a process is waiting for disk I/O to complete. vmstat gives you visibility into the run queue. Execute vmstat 1 in typical sar fashion for updates every second. The r column shows runnable/running processes which the kernel uses to calculate loadavg and the b column shows processes blocked waiting for disk I/O aka uninterruptible sleep. Processes in b are added to the loadavg calculation, which is how iowait causes mysterious loadavg. So to answer your question of how to see which procs are causing high loadavg, in your case of iowait, use top / ps to look for procs in a state of D then troubleshoot from there.
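For example, to list only the processes currently in uninterruptible sleep: ps -eo pid,stat,comm | awk '$2 ~ /D/' Anything that shows up there across repeated runs is a good candidate for the I/O bottleneck.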
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/301685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1079/" ] }
301,717
On this question or on this one (for example) you will get solutions on how to look for symlinks pointing to a given directory (let's call it /dir1 ), while I am interested in symbolic links possibly pointing to any file/folder inside /dir1 . I want to delete that directory, but I am not sure it is safe to do so, as in another directory (let's call it /dir2 ) I may have symlinks pointing to inner parts of /dir1 . Further, I may have created these symlinks using absolute or relative paths. The only thing I know for sure is that the symlinks I want to check are on a mounted filesystem, /dir2 .
You can find all the symbolic links using: find / -type l You might want to run this as root in order to get to every place on the disk. You can expand these using readlink -f to get the full path of each link, and you should be able to grep the output for the target directory that you are considering for deletion: find / -type l -exec readlink -f {} + | grep -F /dir1 (Since you know the links live under /dir2 , you can also limit the scan with find /dir2 -type l .) Using find / -type l -printf '%l\n' doesn't work as you get relative links like ../tmp/xyz which might be pointing to your target dir, but are not matched because they are not fully expanded.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77329/" ] }
301,724
I have my ASUS X556U dual-booting W10 and Debian Jessie, but I need to regulate the brightness. I've been searching on Google and I found xbacklight, but I have a problem while executing it: barreeeiroo@Debian-Diego ~> xbacklight -dec 10
No outputs have backlight property
Then I searched Google for more info about the problem, and I found this post , but it causes another problem: barreeeiroo@Debian-Diego ~> sudo ln -s /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5:1.0/rtsx_usb_sdmmc.4/leds/mmc0::/brightness /sys/class/backlight
[sudo] password for barreeeiroo:
ln: failed to create symbolic link ‘/sys/class/backlight/brightness’: Operation not permitted
(I've adapted the path to my computer.) Then I tried to use chmod and chown , but it's the same problem. So, my questions are: Is it possible to fix that error? Is there any other method to manage brightness in Debian? Thanks
Arch Linux has the following to say about xbacklight : Brightness can be set using the xorg-xbacklight package. Note: xbacklight only works with intel. Radeon does not support the RandR backlight property. xbacklight currently does not work with the modesetting driver. To set brightness to 50% of maximum: $ xbacklight -set 50 Increments can be used instead of absolute values, for example to increase or decrease brightness by 10%: $ xbacklight -inc 10$ xbacklight -dec 10 If you get the "No outputs have backlight property" error, it is because xrandr/xbacklight does not choose the right directory in /sys/class/backlight . You can specify the directory by setting the Backlight option of the device section in xorg.conf . For instance, if the name of the directory is intel_backlight , the device section can be configured as follows: /etc/X11/xorg.conf-------------------Section "Device" Identifier "Card0" Driver "intel" Option "Backlight" "intel_backlight"EndSection The following worked for me on Debian Stretch LXDE. Checked the backlight directory: ls /sys/class/backlight . I happen to have intel_backlight . To get the Identifier, I ran xrandr --verbose . Mine happened to be 0x72 . Checking /etc/X11/ , I found no xorg.conf , so I made my own and entered the information I had found: Section "Device" Identifier "0x72" Driver "intel" Option "Backlight" "intel_backlight"EndSection I then rebooted. It worked from there. Since LXDE runs openbox, I edited ~/.config/openbox/lxde-rc.xml and inserted the following keybindings: <!-- Increase backlight 10% --><keybind key="XF86MonBrightnessUp"> <action name="Execute"> <command>xbacklight -inc 10</command> </action></keybind><!-- Decrease backlight 10% --><keybind key="XF86MonBrightnessDown"> <action name="Execute"> <command>xbacklight -dec 10</command> </action></keybind>
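If you want to test the kernel interface directly, bypassing X entirely (the directory name varies per machine — check ls /sys/class/backlight/ first): cat /sys/class/backlight/intel_backlight/max_brightness then echo 400 | sudo tee /sys/class/backlight/intel_backlight/brightness , where 400 is any value up to the reported maximum.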
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183420/" ] }
301,726
I have a web app that I want to test for DNS failures because I think it doesn't handle them correctly. How can I temporarily make all DNS lookups return errors? I'm using Xubuntu (XFCE).
@schaiba suggested renaming /etc/resolv.conf ; a little better would be to make /etc/resolv.conf point to a live address without a DNS server running. That is likely to reduce the timeouts.
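For example — assuming nothing on your machine listens on port 53, which would not hold if dnsmasq or another local resolver is running: sudo cp /etc/resolv.conf /etc/resolv.conf.bak ; echo 'nameserver 127.0.0.1' | sudo tee /etc/resolv.conf ; run your tests; then sudo mv /etc/resolv.conf.bak /etc/resolv.conf Lookups fail fast with a refused connection rather than timing out. Beware that NetworkManager or resolvconf may rewrite the file behind your back, so test promptly or stop them first.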
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
301,729
I just tried to uninstall bitcoind and install bitcoin-qt, but now it says: error while loading shared libraries: libminiupnpc.so.16: cannot open shared object file: No such file or directory pacman -Fs libminiupnpc.so.16 returns nothing. Any idea how to fix this?
@schaiba suggested renaming /etc/resolv.conf ; a little better would be to make /etc/resolv.conf point to a live address without a DNS server running. That is likely to reduce the timeouts.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
301,743
I have rather basic task but wasn't able to find proper solution to it.I want to iterate date interval since 2008 to current and need epoch values for each iteration of the loop. I am interested in iterating both years, halves and months as well. I wrote such script #!/bin/bashinitial_date=`date -d "2008-02-04 00:00:00 UTC" +%s`end_date=`date +%s`n=0until [ $initial_date -gt $end_date ]; do echo $initial_date let n+=1 readable_date=`date +"%Y-%m-%d %T" -d "1970-01-01 $initial_date sec"` echo $readable_date readable_date=`date -d "$readable_date + $n year"` initial_date=`date -d "$readable_date" +%s`done but its output is rather weird to me: 1202083200 2008-02-04 00 :00:00 1233702000 2009-02-03 23 :00:00 1265230800 2010-02-03 21 :00:00 1296756000 2011-02-03 18 :00:00 1328277600 2012-02-03 15 :00:00 1359885600 2013-02-03 11 :00:00 1391403600 2014-02-03 06 :00:00 1422918000 2015-02-02 23 :00:00 1454425200 2016-02-02 15 :00:00 Why the year is not incremented properly? Shifted hours seems to me a side-effect of unix>>UTC>>unix conversion.Is there any direct (without reconversion) method for doing this? P.S. And yes, I checked this , this and this question and found no clear way of doing this. All they are based on incrementing number and converting date with the help of it which doesn't seem precise to me. And yes, I thought about adding 60*60*24*30*365 to initial date but would it be correct? This approach doesn't consider leap years, months comprised of 31 days and so on.
@schaiba suggested renaming /etc/resolv.conf ; a little better would be to make /etc/resolv.conf point to a live address without a DNS server running. That is likely to reduce the timeouts.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101580/" ] }
301,789
How do I output how much of a file's nominal size is actually filled with data? Like vmtouch shows how much of a file is currently in memory... I expect the workflow to be like this: $ fallocate -l 1000000 data
$ measure_sparseness data
100%
$ fallocate -p -o 250000 -l 500000 data
$ measure_sparseness
50%
Workaround: use du -bsh and du -sh and compare them.
find has %S format specifier which is even named "sparseness" %S File's sparseness. This is calculated as (BLOCKSIZE*st_blocks / st_size). The exact value you will get for an ordinary file of a certain length is system-dependent. However, normally sparse files will have values less than 1.0, and files which use indirect blocks may have a value which is greater than 1.0. The value used for BLOCKSIZE is system-dependent, but is usually 512 bytes. If the file size is zero, the value printed is undefined. On systems which lack support for st_blocks, a file's sparseness is assumed to be 1.0. $ fallocate -l 1000000 data$ find data -printf '%S\n'1.00352$ fallocate -p -o 250000 -l 500000 data$ find data -printf '%S\n'0.507904
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/301789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17594/" ] }
301,922
Let's say I have a simple stupid script that removes files by extension, looking like this: rm *.uvw *.xyz The script, or rm , to be precise, writes messages on stderr if it cannot find at least one file with the specified ending. Now let's say the script is a bit bigger and does a bit more with a few more file types, and I'm not interested in which file types exist and which don't, but the complaints about non-existent file types obstruct the rest of the output and the error messages I am more interested in, so I want to filter the output: rm *.uvw *.xyz 2>&1 | grep -v "No such file or directory" This works fine for the most part, but it removes the message part of interactive dialogs, which for example ask if a write-protected file should be deleted, so I get prompted without the corresponding message. I do not understand this behaviour and could not find any related information. Can someone explain this?
The problem When rm prompts the use for input, it does not put a newline at the end of the prompt: $ rm *.uvw *.xyzrm: remove write-protected regular empty file 'a.xyz'? grep is line-based. It can only process complete lines. It cannot tell whether the line should be printed until the line is complete. Thus, standard utilities for dealing with buffering, such as stdbuf , cannot help. The solution Use nullglob and remove the missing files messages. Without nullglob, the messages you don't want appear: $ rm *.uvw *.xyzrm: cannot remove '*.uvw': No such file or directoryrm: remove write-protected regular empty file 'a.xyz'? n With it, the "No such file or directory" message is suppressed: $ shopt -s nullglob$ rm *.uvw *.xyzrm: remove write-protected regular empty file 'a.xyz'? n Refinement If there is no file at all that matches either glob, then a different error message appears: $ shopt -s nullglob$ rm *.uvw *.xyzrm: missing operandTry 'rm --help' for more information. A simple way to avoid this is to make sure that at least one such file exists: shopt -s nullglob[ -e "deleteme.xyz" ] ||touch deleteme.xyzrm *.uvw *.xyz Since deleteme.xyz is going to be erased anyway, there is no harm in touching it before we run rm .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183553/" ] }
301,933
I'm thinking that the smart thing to do with my raid setup is to replace the drives before they start failing and as they start to get old... I can't really afford a lot of cloud backup space, and I want to get a jump on the guaranteed eventual fail of my drives due to wear. I have 3 2TB drives with GPT, grub, a small system raid1 partition, and a large raid5 home partition. I'm using Arch Linux. I was going to replace the drives one at a time. I wanted to post my plan of action and see if anyone could think of a reason why it wouldn't work or if there was a better way to do it. step one: figure out which device (ie /dev/sda ) I am replacing by unplugging it physically and checking /proc/mdstat to find out the /dev/sdx that fails. step two: Plug it back in and use sfdisk to copy the partition table sfdisk -d /dev/sdx > partition.layout step three: Put in a new physical drive of the same size step four: sfdisk /dev/sdx < partition.layout step five: Use mdadm to add the new drive to the array based on the instructions on the arch wiki. mdadm --add /dev/md0 /dev/sdx1mdadm --add /dev/md1 /dev/sdx2 step six: Reinstall grub? wait for the resync to complete, then repeat the whole process with the other 2 drives? I guess my question is mostly like, will this work out? is there anything I'm missing? I don't want to miss something obvious and lose all my data. Thank you very much for any assistance/insight. Edit: Just to get the results of the discussion down in the same place, I wanted to say that I figured out how to have mdadm and smartmontools (smartd) montior and notify me via email if things start going bad with my hard drives. I set up ssmtp with a gmail account that I have synced to my phone. Since I already bought the new drives, I'm going to keep them around, and swap them in as things go bad. It is my understanding that eventually all hard drives fail. Thanks for the suggestions and protips on how to do that (without degrading the array). Once I can afford an upgrade I'm going to use ZFS with an ECC motherboard/memory/etc. and thanks for the tips in that direction. Thanks a lot you guys really helped :D
That's a bad idea because you're deliberately degrading your RAID, and resyncs might fail unexpectedly. It's better to hook the new disk up to the system (so you have n+1 disks) and then use mdadm --replace to sync it in. That way the RAID never degrades in between. You don't have to fail / remove drives to find out which is which. You can see a device's role number in mdadm --examine ; the [UUU] positions in mdstat output correspond to role numbers [012] ; and you can check the drive's serial number with hdparm or smartctl and compare it to the sticker on the drive itself. For partitions, it might be better to use GPT nowadays instead of MSDOS. If you are not only replacing disks but also upgrading them in size, you might have no other choice anyhow, since MSDOS partitions pretty much stop at 2TB. Personally I don't do this at all. So what if the disks are 3 years old? Disks live a lot longer than that, and new disks die all the same. It's much more important to test your disks on a regular (automated) basis, and replace disks once they have their first pending/uncorrectable/reallocated sector, read error in selftest, or other issues. Even more important is having backups of any data you don't want to lose. You could also switch to RAID6 for more redundancy, but the case of two disks dying at the same time is highly unlikely as long as you actively check for errors. Don't let your rebuild be your first read test in years.
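A sketch of the --replace approach for one of your arrays (needs mdadm 3.3 or newer; /dev/sdd is a made-up name for the new disk): mdadm /dev/md0 --add /dev/sdd1 then mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdd1 The array stays fully redundant while the new member syncs, and the old member is only marked faulty once the copy completes.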
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183563/" ] }
301,953
I recently uninstalled Ubuntu 16.04 and installed Peppermint 7 instead,I had a few problems with grub not showing but fixed it by running bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi in admin cmd prompt as mentioned in this post .However I have a lot of weird options on grub(2) now... (Previously had only 4 which were Ubuntu, Ubuntu with adv conf., windows boot manager and system setup) . Now I have: Peppermint GNU/Linux Advanced options for Peppermint GNU/Linux Windows UEFI bootmgfw.efi Windows Boot UEFI loader EFI/Ubuntu/fwupx64.efi EFI/Ubuntu/MokManager.efi EFI/toshiba/Boot/bootmgfw.efi Windows Boot manager (on /dev/sda/2) System setup I understand the first and last two, but what is all this UEFI/boot manager paths in between, and should I/how can I remove any of them (if there are unnecessary ones). Edit: /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update# /boot/grub/grub.cfg.# For full documentation of the options in this file, see:# info -f grub -n 'Simple configuration'GRUB_DEFAULT=0#GRUB_HIDDEN_TIMEOUT=0GRUB_HIDDEN_TIMEOUT_QUIET=trueGRUB_TIMEOUT=-1GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"GRUB_CMDLINE_LINUX=""# Uncomment to enable BadRAM filtering, modify to suit your needs# This works with Linux (no patch required) and with any kernel that obtains# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"# Uncomment to disable graphical terminal (grub-pc only)#GRUB_TERMINAL=console# The resolution used on graphical terminal# note that you can use only modes which your graphic card supports via VBE# you can see them in real GRUB with the command `vbeinfo'#GRUB_GFXMODE=640x480# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux#GRUB_DISABLE_LINUX_UUID=true# Uncomment to disable generation of recovery mode menu entries#GRUB_DISABLE_RECOVERY="true"# Uncomment to get a beep at grub start#GRUB_INIT_TUNE="480 440 1"
That's a bad idea because you're deliberately degrading your RAID and Resyncs might fail unexpectedly. It's better to hook the new disk up to the system (so you have n+1 disks) and then use mdadm --replace to sync it in. That way the RAID never degrades in between. You don't have to fail / remove drives to find out which is which. You can see a device's role number in mdadm --examine , in mdstat output [UUU] in role numbers is [012] ; and you can check the drive's serial number with hdparm or smartctl and compare to the sticker on the drive itself. For partitions, it might be better to use GPT nowadays instead of MSDOS. If you are not only replacing disks but also upgrading them in size, you might have no other choice anyhow, since MSDOS partitions pretty much stop at 2TB. Personally I don't do this at all. So what if the disks are 3 years old? Disks live a lot longer than that, and new disks die all the same. It's much more important to test your disks on a regular (automated) basis, and replace disks once they have their first pending/uncorrectable/reallocated sector, read error in selftest, or other issues. Even more important is having backups of any data you don't want to lose. You could also switch to RAID6 for more redundancy, but the case of two disks dying at the same time is highly unlikely as long as you actively check for errors. Don't let your rebuild be your first read test in years.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183584/" ] }
301,961
I have to replace every string after colon with the same word + underscore + number in following way: {"first": "1_first", "second": "1_second"} Expected results: {"first": "first_1", "second": "second_1"}{"first": "first_20", "second": "second_20"}{"first": "first_33", "second": "second_33"} I've succeeded with the first one: echo '{"first": "first_1", "second": "second_1"}' | sed "s/\( \".*\",\)/ \"first_$j\",/" Result is: {"first": "first_888", "second": "second_1"} But have problems with the second one. I suppose that this expression is too greedy: echo '{"first": "first_1", "second": "second_1"}'|sed "s/\( \".*\)\"}/ \"second_$j\"}/" This one cuts too much: {"first": "second_888"} Maybe there is some more elegant way to do this? With one expression instead of 2?
That's a bad idea because you're deliberately degrading your RAID and Resyncs might fail unexpectedly. It's better to hook the new disk up to the system (so you have n+1 disks) and then use mdadm --replace to sync it in. That way the RAID never degrades in between. You don't have to fail / remove drives to find out which is which. You can see a device's role number in mdadm --examine , in mdstat output [UUU] in role numbers is [012] ; and you can check the drive's serial number with hdparm or smartctl and compare to the sticker on the drive itself. For partitions, it might be better to use GPT nowadays instead of MSDOS. If you are not only replacing disks but also upgrading them in size, you might have no other choice anyhow, since MSDOS partitions pretty much stop at 2TB. Personally I don't do this at all. So what if the disks are 3 years old? Disks live a lot longer than that, and new disks die all the same. It's much more important to test your disks on a regular (automated) basis, and replace disks once they have their first pending/uncorrectable/reallocated sector, read error in selftest, or other issues. Even more important is having backups of any data you don't want to lose. You could also switch to RAID6 for more redundancy, but the case of two disks dying at the same time is highly unlikely as long as you actively check for errors. Don't let your rebuild be your first read test in years.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45528/" ] }
301,976
I have an old HP Athlon machine I use for testing software under the old processor. We have frequent brown outs, and after the last one the disk was a mess. It was so bad I could not run fsck and dispatch all the problems. I performed a fresh install of the OS, but I'm still getting fsck complaints. I'd like to try one last time to reload Linux before condemning the hard drive or machine. After the filesystem is created but before the install occurs, I'd like an aggressive fsck performed to mark suspect blocks as bad. The disk is large (about 500 GB) and a Debian 8 distro is relatively small (8-12 GB is usually more than enough), so I don't care if good blocks get marked as bad. I also like the GUI install, but I'm not married to it. I have two questions: Does Debian 8 provide a choice to perform an fsck before installing the base system? If so, where is it? If not, then what is the process? Does fsck have a setting to control how aggressively blocks are marked as bad? If so, what is it? If not, then what can be used? EDIT : the machine is an HP5850. Entering the BIOS, navigating to Storage and then Drive Protection System (DPS) Self-test resulted in DPS recommending replace the drive. DPS did not provide any statistics, so I'm not sure the extent of the damage. Considering I can purchase an [old] new SATA II drive for $12 USD, I'm just going to replace it. There's no sense in wasting time or energy on it. The related references are as follows. Neither question appears to be addressed. fsck man page Chapter 6. Using the Debian Installer | 6.3. Using Individual Components
"Does Debian 8 provide a choice to perform an fsck before installing the base system? If so, where is it? If not, then what is the process?" As an alternative, first download and burn a GPartEd CD (or write to a thumb drive). Before running the installer, boot GPartEd and partition the disk to your liking and run fsck or just run badblocks at length. When you run the Debian installer, just tell it how to use the partitions that are there. The installer does not need to create its own partitions. It is perfectly happy to use existing partitions. "Does fsck have a setting to control how aggressively blocks are marked as bad? If so, what is it? If not, then what can be used?" The -c option to e2fsck causes it to run the badblocks program to scan for bad blocks. You can run badblocks directly as well. By default, badblocks does a read-only test. To be more aggressive, you can specify -n for a non-destructive read-write test. You can also set the -p option to increase the number of passes that it makes. You may want to run badblocks before you partition. That way, you can specify the faster -w write-only test.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/301976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
301,987
After searching through plenty of posts, YouTube videos, and "documentation" on the matter of systemd, I'm still at a loss. The link ( https://wiki.archlinux.org/index.php/systemd#Create_custom_target ) seemed promising, but was a bit vague (to me). Question How would one go about creating a custom systemd target (i.e. foo.target ) so that one may boot with select .service units? Example System boots default.target (symlink of "foo.target") "foo.target" only starts a barebones X server and a GUI program, say "gvim". Reason I'm simply looking to create a custom target for quickly launching one X program. It'd be nice to exclude all the services I don't need. Thanks in advance!
Reading through man 5 systemd.unit and man 5 systemd.target tells us that unit files are used to define targets as well as everything else systemd. There is no documentation specifically on how to create a target , so it's hard to determine the how it should be done, but it is not too different from creating a service. When you create your target, you will need to make symlinks to the target.wants directory from the systemd services directory. Then you can set/boot your target. Here's how it might look given your example. /etc/systemd/system/foo.target This is the target's unit file. If graphical.target is taken as an example, we can create our own target using it as a base. [Unit]Description=Foobar boot targetRequires=multi-user.targetWants=foobar.serviceConflicts=rescue.service rescue.targetAfter=multi-user.target rescue.service rescue.targetAllowIsolate=yes To explain the options taken from the systemd manpages; Description -- Describes the target. You should understand Requires -- Hard dependencies of the target. You should let the basic system start before you start your own service(s) Wants -- Soft dependencies. The target does not require these to start. Conflicts -- If a unit has a Conflicts setting on another unit, starting the former will stop the latter and vice versa. After -- Boots after these services AllowIsolate -- Really up to you and your environment. Details are available in the manpage systemd.unit(5) /etc/systemd/system/foo.target.wants/ This is the directory where you will link the services you create/require for your target. It is equivalent to the Wants= option in the unit file. Create this directory and then create symlinks like so; ln -s /usr/lib/systemd/system/bar.service /etc/systemd/system/foo.target.wants/bar.service . This creates a symlink from bar.service in the system directory to your foo.target.wants directory. I think creating a unit file for a service is kind of out of the scope of this answer, and that question is definitely more documented so I'll leave that out for now. When you create your unit file, just symlink it into the target.wants directory or add it to the Wants= directive.
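Once the unit file and symlinks are in place, testing it is straightforward: systemctl daemon-reload to pick up the new files, systemctl isolate foo.target to switch into the target live (this is exactly what AllowIsolate=yes permits), and systemctl set-default foo.target to make it the boot default — which creates the default.target symlink mentioned in the question.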
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/301987", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183604/" ] }
302,011
How can I delete the files that were created on Aug 7 with the name DBG_A_sql* under /tmp, as in the following example:

-rw-r--r-- 1 root root 51091 Aug 7 11:22 DBG_A_sql.2135
-rw-r--r-- 1 root root 15283 Aug 7 11:22 DBG_A_sql.2373
-rw-r--r-- 1 root root 51091 Aug 7 11:22 DBG_A_sql.2278
-rw-r--r-- 1 root root 9103 Aug 7 11:22 DBG_A_sql.2485
-rw-r--r-- 1 root root 9116 Aug 7 11:22 DBG_A_sql.2573
-rw-r--r-- 1 root root 9140 Aug 7 11:22 DBG_A_sql.2679
-rw-r--r-- 1 root root 15695 Aug 7 11:22 DBG_A_sql.2897
You can use find . Calculate the date according to your requirement and use:

find /tmp -maxdepth 1 -mtime -1 -type f -name "DBG_A_sql*" -print

After confirming that it lists the right files, delete them:

find /tmp -maxdepth 1 -mtime -1 -type f -name "DBG_A_sql*" -delete
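Note that -mtime -1 matches files modified within the last 24 hours, not a calendar date. If you specifically want files from Aug 7 no matter when you run the command, GNU find can match a date range (a sketch assuming GNU find; adjust the year):

find /tmp -maxdepth 1 -type f -name "DBG_A_sql*" \
    -newermt 2016-08-07 ! -newermt 2016-08-08 -print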
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
302,017
I have a program which uses these tables, and I want to add some additional functionality to its logging without modifying the program.

groups
------
id bigint not null
name character varying(100) not null

users
-----
id bigint not null
name character varying(100) not null

users_groups
------------
group_id bigint not null
user_id bigint not null

I want to write into syslog (facility local6) a "user123 added to group456" or "user123 removed from group456" message every time a user is added to or removed from a group. My first idea was using PostgreSQL triggers.

CREATE OR REPLACE FUNCTION process_ext_audit() RETURNS trigger AS $ext_audit$
BEGIN
  IF (TG_OP = 'DELETE') THEN
    SELECT name into uname FROM users WHERE id = OLD.user_id;
    SELECT name into gname FROM groups WHERE id = OLD.group_id;
    -- write into local6: "uname removed from gname"
  ELSIF (TG_OP = 'INSERT') THEN
    SELECT name into uname FROM users WHERE id = NEW.user_id;
    SELECT name into gname FROM groups WHERE id = NEW.group_id;
    -- write into local6: "uname added to gname"
  END IF;
  RETURN NULL;
END;
$ext_audit$ LANGUAGE plpgsql;

CREATE TRIGGER ext_audit
AFTER INSERT OR DELETE ON users_groups
FOR EACH ROW EXECUTE PROCEDURE process_ext_audit();

Is my approach good? If yes, how can I write into syslog from this function? I use PostgreSQL 9.2 with CentOS 7, which uses rsyslog.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119219/" ] }
302,105
So I'm using exec_always feh --bg-scale *wallpaper* and it sets the wallpaper correctly. I'm also using i3-gaps, which can be found here . When I start setting my windows up, my wallpaper starts freaking out. I'm not quite sure if this is feh or i3-gaps. Basically, wherever I create a new window, more of my wallpaper turns black. Can be seen here:
When a wallpaper is set with feh using any feh --bg-* command, a file is created in your $HOME dir, named .fehbg , which basically stores the latest feh command that you ran. Thus, the file content would be similar to:

#!/bin/sh
feh --bg-scale '/home/username/Pictures/mywallpaper.jpg'

This script can then be run from your i3 config file, by adding the line

exec --no-startup-id exec bash $HOME/.fehbg

If you run i3 with gaps, I would recommend including these two lines in your config as well:

for_window [class=".*"] border pixel 0
hide_edge_borders both

This disables all borders, which is said to prevent issues with gaps in i3-gaps.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61468/" ] }
302,139
I was messing around with PowerShell this week and discovered that you are required to sign your scripts so that they can be run. Is there any similar security functionality in Linux that prevents bash scripts from being run? The only similar functionality I'm aware of is SSH requiring a certain key.
If you're locking users' ability to run scripts via sudo then you could use the digest functionality. You can specify the hash of a script/executable in sudoers which will be verified by sudo before being executed. So although not the same as signing, it gives you a basic guarantee that the script has at least not been modified without sudoers also being modified. If a command name is prefixed with a Digest_Spec, the command will only match successfully if it can be verified using the specified SHA-2 digest. This may be useful in situations where the user invoking sudo has write access to the command or its parent directory. The following digest formats are supported: sha224, sha256, sha384 and sha512. The string may be specified in either hex or base64 format (base64 is more compact). There are several utilities capable of generating SHA-2 digests in hex format such as openssl, shasum, sha224sum, sha256sum, sha384sum, sha512sum. http://www.sudo.ws/man/1.8.13/sudoers.man.html
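As a rough sketch (the user name, path, and digest below are placeholders, not values from the original answer): compute the digest, then reference it in sudoers via visudo:

$ sha256sum /usr/local/bin/backup.sh
<hex-digest>  /usr/local/bin/backup.sh

# in /etc/sudoers:
alice ALL = sha256:<hex-digest> /usr/local/bin/backup.sh

If the script is later modified, its digest no longer matches and sudo will refuse to run it.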
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
302,153
When I use popd alone it removes a directory from the stack and takes me to that directory. However, if I do cd $(popd) then no directory is removed from the stack. Since the process is simply forked and the result is put in place of the shell expansion, why isn't a directory taken off of the stack?
The command substitution $(…) runs the command in a subshell. A subshell starts out as an identical¹ copy of the main shell, but from that point on the main shell and the subshell live their own life. The shell process creates a pipe and forks. The child runs popd with its output connected to the pipe, then exits. The parent reads the data from the pipe and substitutes it into the command line. Since popd runs in the child process, its effect is limited to the child process. The directory is taken off the stack — off the child's stack. Nothing happens to the stack in the parent. ¹ Nearly identical; the differences are not relevant here.
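A quick demonstration in bash (assuming /etc and /tmp exist; output may differ slightly):

$ cd /tmp; pushd /etc
/etc /tmp
$ echo "$(popd)"    # popd pops the stack of the subshell only
/tmp
$ dirs              # the parent's stack is unchanged
/etc /tmp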
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162496/" ] }
302,192
I'm installing sqlite on Alpine Linux. I downloaded sqlite-autoconf-3130000.tar.gz but tar could not open it. I tried this answer but it's not working. tar gives this message:

tar: invalid magic
tar: short read

I wrote these commands:

wget https://www.sqlite.org/2015/sqlite-autoconf-3090100.tar.gz
tar -zxvf sqlite-autoconf-3090100.tar.gz
Try installing the full tar package ( apk add tar ). The default BusyBox tar applet doesn't support all features.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183682/" ] }
302,202
Posting this question out of curiosity: I wanted to create directories like January, February, ... to December, so I created them by giving each name individually ( mkdir January February ... etc.). Is it possible to create directories or files with the names of all the months in an easier manner? For example, touch {1..10} will create 10 files 1,2,3...10 easily; like this, is there any other solution to create the files or directories with month names?
POSIXly,

(IFS=';'; set -f; mkdir -- $(locale mon))

Note, that it's the month names in the current language. Replace with LC_ALL=C locale mon if you want the English ones regardless of the language of the user. With zsh , you can also use the $langinfo special associative array (in the zsh/langinfo module):

zmodload zsh/langinfo
eval mkdir -- '$langinfo[MON_'{1..12}']'

Though mkdir -- ${(s:;:)"$(locale mon)"} would be shorter. In rc / es which are other shells with splitting operators where you can specify the separators (other than via that global $IFS setting like in Bourne-like shells):

mkdir -- ``';'{locale mon}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167425/" ] }
302,205
I am working on a BeagleBone Black based custom board. Recently I was debugging system hang issues; I had access to the debug console using minicom , and I could enable logging using the key combination Ctrl + A , F . However, production devices won't have a debug console: I won't be able to connect to them through minicom , but I will get access through an ssh connection. Now, I wanted to know if there is any way to send SysRq keys over an ssh connection. During normal operation of the device, if I use echo h > /proc/sysrq-trigger , the output is printed on the debug (or serial) console (minicom) and not on ssh :( Is there any way to get this output to the ssh terminal? I know what I am asking sounds close to impossible, as if the system locks up, the ssh connection won't be alive either. But just in case the connection is alive, I want to know if there is any way to send SysRq keys.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60966/" ] }
302,209
My /etc/passwd has a list of users in a format that looks like this: username:password:uid:gid:firstname.lastname, somenumber:/... Goal : I want to see only the first names, and then sort them so that the most common name appears first, the 2nd most common appears 2nd, etc. I saw some solutions for how to do the 2nd part, although they are relevant to working with a text file and not to reading from a map. In regards to the first part, I really don't know how to approach it. I know that there are some solutions, but I don't really know how to do them.
One way to do it:

cut -d: -f5 /etc/passwd | \
  sed 's/\..*//' | \
  sort -f | \
  uniq -ci | \
  sort -rn

(Note the case-insensitive flags: sort -f folds case so that uniq -ci can count case variants together.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182347/" ] }
302,219
In *nix, if I don't have a mouse , nor am I running a GUI, what can I do to copy what is on the screen? Take this for example: What if I want to copy things from "Entering /mnt/..." to the last "}"? Thanks for the answer Read a character from an x-y coordinate on the screen . But something unique on a Chromebook is that I only have /dev/tty and /dev/tty8 , and I don't have /dev/vcsN . What should I do?
In such circumstances, script is very handy: it runs a shell, recording all the output. In your example, before entering the chroot you'd run script temp_file.txt and then sudo enter-chroot etc. On exit from the chroot, you'd exit again to exit script , and you'd find the text you wanted (along with everything else you did) in temp_file.txt . Another possibility is to run your session within screen ; that allows both saving the current "window" (in screen parlance) to a file ( Ctrl + a followed by h by default; this dumps the contents of the screen to a file named hardcopy.n where n is a counter) and copying and pasting between windows ( Ctrl + a followed by Esc by default will enter scrollback/copy mode; see the documentation for details).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183787/" ] }
302,261
Can someone clarify for me the difference between "enable" and "start" for a systemd unit? I have been told that if a unit has an [Install] section then enable should be called; otherwise just start is enough. How is this handled in the startup process? Does systemd automagically make the right decision?
To start (activate) a service , run the command systemctl start my_service.service ; this starts the service immediately, in the current session only. To enable a service at boot , run systemctl enable my_service.service . Quoting the systemctl manual:

Enable one or more units or unit instances. This will create a set of symlinks, as encoded in the "[Install]" sections of the indicated unit files. After the symlinks have been created, the system manager configuration is reloaded (in a way equivalent to daemon-reload), in order to ensure the changes are taken into account immediately.

The /usr/lib/systemd/system/ directory contains the unit files shipped by packages; when you run systemctl enable to start a service at boot, a symlink is created under /etc/systemd/system/ :

# systemctl enable my_service.service
ln -s '/usr/lib/systemd/system/my_service.service' '/etc/systemd/system/multi-user.target.wants/my_service.service'
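On reasonably recent systemd versions (the flag is not available on older releases), the two steps can be combined:

systemctl enable --now my_service.service   # enable at boot and start right away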
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/302261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183272/" ] }
302,287
I'm working on different Linux distributions. In my .bashrc I'd like to set up an alias that opens a window of the default file manager (e.g. nautilus , nemo , pcmanfm , ...). Is there a way to find out what the file manager of a session is? (It also depends on the session, doesn't it?)
As comments have already stated, you're probably better off with xdg-open (no alias needed), but to answer the question: You can use xdg-mime to query and set default applications. To get the default file manager: xdg-mime query default inode/directory Read more about this topic in the xdg-mime manual or the Arch Wiki .
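For example, to make Nemo the default handler for directories (the .desktop file name is an example; use whichever file manager is installed on your system):

xdg-mime default nemo.desktop inode/directory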
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302287", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103727/" ] }
302,289
I have an input string like: arg1.arg2.arg3.arg4.arg5 The output I want is: arg5.arg4.arg3.arg2.arg1 It's not always 5 args; it could be 2 to 10. How can I do this in a bash script?
Using a combination of tr + tac + paste :

$ tr '.' $'\n' <<< 'arg1.arg2.arg3.arg4.arg5' | tac | paste -s -d '.'
arg5.arg4.arg3.arg2.arg1

If you still prefer bash, you could do it this way (counting down through the array; the last element is printed separately to avoid a trailing dot):

IFS=. read -ra line <<< "arg1.arg2.arg3.arg4.arg5"
x=$(( ${#line[@]} - 1 ))
while [ "$x" -gt 0 ]; do printf '%s.' "${line[$x]}"; x=$(( x - 1 )); done
printf '%s\n' "${line[0]}"

Using perl :

$ echo 'arg1.arg2.arg3.arg4.arg5' | perl -lne 'print join ".", reverse split/\./;'
arg5.arg4.arg3.arg2.arg1
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/302289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160653/" ] }
302,399
I'm trying to create an alias that starts with a pipe. ex:

echo -e "hello\nworld" | grep world
> world
alias gr="| grep"
echo -e "hello\nworld" gr world
> hello
> world gr world

I.e., if the alias starts with a pipe, aliasing doesn't seem to work properly. Is there a way to do this?
From man bash : Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. Pipe can't be the first word of a simple command.
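A common workaround is a function rather than an alias (the name gr below is just an example):

gr() { grep "$@"; }
echo -e "hello\nworld" | gr world

In zsh (not bash), global aliases created with alias -g are expanded anywhere on the line, so alias -g gr='| grep' would also work there.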
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77271/" ] }
302,400
I have started running Jessie (Debian 8) with a LightDM/Xfce desktop on my HTPC after it ground to a near-halt on W7. One of the things that I cannot get past is having to type the password -- not a normal thing to do for watching TV. Following the instructions on the Debian Wiki I got as far as my login being automatically selected, but this still requires the password, and half-fixes like empty/trivial passwords are not allowed. Is it possible to go straight to the Xfce session without login/password?
This page describes how to enable it. Edit the LightDM configuration file and ensure these lines are uncommented and correctly configured: /etc/lightdm/lightdm.conf

[Seat:*]
pam-service=lightdm
pam-autologin-service=lightdm-autologin
autologin-user=username
autologin-user-timeout=0
session-wrapper=/etc/X11/Xsession
greeter-session=lightdm-greeter

LightDM goes through PAM even when autologin is enabled. You must be part of the autologin group to be able to log in automatically without entering your password:

# groupadd -r autologin
# gpasswd -a username autologin
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86288/" ] }
302,407
My data looks like:

$ cat input
1212103122
1233321212
0000022221

I want the output to look like:

$ cat output
1 2 1 2 1 0 3 1 2 2
1 2 3 3 3 2 1 2 1 2
0 0 0 0 0 2 2 2 2 1

I tried: sed -i 's// /g' input > output but it does not work. Any suggestions?
Here you are:

sed 's/\(.\{1\}\)/\1 /g' input > output

And if you want to save the changes in-place:

sed -i 's/\(.\{1\}\)/\1 /g' input

How it works: s/\(.\{1\}\)/\1 /g adds a space after each single character. For instance, if you wanted an output file like:

12 12 10 31 22
12 33 32 12 12
00 00 02 22 21

you could edit my answer to:

sed -i 's/\(.\{2\}\)/\1 /g'

so it adds a space after every 2 characters. In addition, /\1 / is the same as /& / , and will add one white-space; for instance, to add three spaces: /\1   / or /&   / . You have many more options to use; sed is a super-powerful tool. In addition, yes, as @Law29 mentioned, this will leave a space at the end of each line if you do not remove it, so to remove those while adding spaces you can append s/ $// to the given solution:

sed 's/./& /g; s/ $//'

I hope this could help.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
302,419
I don't know if this is normal, but the thing is: let's say I have a Solaris user called gloaiza and its password is password2getin . I'm logging into the server with PuTTY; I just put in 192.168.224.100 and it prompts a window asking for a user, so I type gloaiza , then it asks for a password, and let's say I type password2geti by mistake -- and it works! I'm IN the server! Is that normal? It also works if I put in something like password2getin2 . I'm not a native English speaker, so in case there's something you can't understand, please ask me. OS: Oracle Solaris 10 1/13
The operating system stores a hash of the password in /etc/shadow (or, historically, /etc/passwd ; or a different location on some other Unix variants). Historically, the first widespread password hash was a DES-based scheme which had the limitation that it only took into account the first 8 characters of the password. In addition, a password hashing algorithm needs to be slow; the DES-based scheme was somewhat slow when it was invented but is insufficient by today's standards. Since then, better algorithms have been devised. But Solaris 10 defaults to the historical DES-based scheme. Solaris 11 defaults to an algorithm based on iterated SHA-256 which is up to modern standards. Unless you need historical compatibility with ancient systems, switch to the iterated SHA-256 scheme. Edit the file /etc/security/policy.conf and change the CRYPT_DEFAULT setting to 5 which stands for crypt_sha256 . You may also want to set CRYPT_ALGORITHMS_ALLOW and CRYPT_ALGORITHMS_DEPRECATE . Once you've changed the configuration, run passwd to change your password. This will update the password hash with the currently configured scheme.
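A sketch of the relevant settings (the identifiers come from crypt.conf: 5 is crypt_sha256, 6 is crypt_sha512):

# /etc/security/policy.conf (excerpt)
CRYPT_ALGORITHMS_ALLOW=5,6
CRYPT_DEFAULT=5

Afterwards, run passwd for each affected account so that the stored hash is regenerated with the new scheme.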
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/302419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179831/" ] }
302,437
I have a 1 GB HDD image (created using bximage for Bochs), onto which I wish to install GRUB 2. I understand that a GRUB installation consists of 3 parts: the boot.img image, which occupies the first sector; the core.img image, which occupies the space following the first sector up until the start of the next track; and the /boot/grub/ directory, in which grub.cfg and other modules are located. First I use a boot.img image taken from within my own Linux /boot/grub/ directory. Following this, I generate my core.img image using the following command:

sudo grub-mkimage -v --format=i386-pc -o core.img -p\(hd0,msdos1\)/boot/grub ls ext2 part_msdos

And to install them onto the final disk image, I use the following commands:

sudo dd if=boot.img of=/dev/loop0 bs=446 count=1

(the 446 block size is used so as to not overwrite the partition data that resides within the MBR)

sudo dd if=core.img of=/dev/loop0 bs=512 seek=1

(here, seek=1 is so as to not overwrite the MBR that was just written)

The disk, starting from sector 2048 until the last, is formatted with an ext2 partition, and contains a boot/grub/ directory containing a grub.cfg (with a single bogus menuentry which doesn't load anything), and modules in the /boot/grub/i386-pc/ directory. Bochs successfully boots this installation of GRUB all the way to the grub> prompt. As this Ubuntu guide points out, this behaviour indicates that grub.cfg was not found. Upon invoking ls , I am faced with an interesting problem - I apparently have no devices connected at all! To further elaborate on the nature of the problem, I observed that when booting a grub-mkrescue image from a slave drive, invoking ls displayed its own rescue drive, and the previously 'non-existent' primary disk drive, along with the ext2 partition. I verified that /boot/grub.cfg could indeed be accessed. From this observation I would assume that my own core.img is missing some fundamental module or functionality. But which, and how would I amend this? I also conducted this exercise on a physical machine using a USB stick, and the exact same thing happened, so I can confirm that the problem is not with Bochs.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/302437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175978/" ] }
302,439
I have a file that looks like: input: 112 1 2 01 1 000 0 0 22 0122 2 2 22 0 I want to delete those columns in which there are fewer than 2 digits in each row. So the output should look like: out: 112 01000 22122 22 Any suggestions? Note that the real file is huge.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/302439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
302,453
Can anyone tell me what systemd is, and why CentOS 7 has systemd but CentOS 6 doesn't? Related question : Which Ubuntu versions have systemd?
What is systemd ? systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. systemd replaces SysVinit on CentOS 7. It makes a server boot quicker because it uses fewer scripts and tries to run more tasks in parallel; systemd calls these tasks units . The global systemd configuration is stored in the /etc/systemd directory. Unit files shipped by packages are located in the /usr/lib/systemd/system directory, and custom unit files are stored in the /etc/systemd/system directory. Why does CentOS 7 have systemd but CentOS 6 doesn't? Red Hat-based distributions have migrated to systemd : it has been the default system and service manager in Fedora since Fedora 15, and consequently in RHEL 7 and CentOS 7. Which Ubuntu version has systemd ? Ubuntu 15.04 is the first version (of Ubuntu) that uses systemd by default. You can read the blog post of Mark Shuttleworth.
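For a feel of what a unit looks like, here is a minimal, hypothetical service unit (the name and path are examples only):

# /etc/systemd/system/myapp.service
[Unit]
Description=My example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target

It would then be controlled with systemctl start myapp.service and systemctl enable myapp.service .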
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183938/" ] }
302,516
Standard file descriptors <= 2 are opened by default. A program can write to, or read from, a file descriptor above 2 without using the open system call to obtain such a descriptor. The program can then advertise in its manual which file descriptors it is using and how. To make use of this, a POSIX shell can open a file and assign that file to a descriptor with the exec built-in. After that, the shell would start the program which will use that descriptor and file. One reason for doing that would be if the program wants to have more than one output or input file, and does not want to specify them as command line arguments. If there were just one file, you could just redirect a standard file descriptor. I have never seen a generally available program which would advertise such a thing in its manual. Does this happen in practice? Has anybody heard of such a thing? Yes, I do want to stay within the POSIX world - so no bash-only built-ins. I just want to know if there is such a program, not a shell built-in.
When you use process substitution with <(...) or >(...) , bash will open a pipe to the other program on an arbitrary high file descriptor (I think it used to count up from 10, but now it counts down from 63) and pass the name as /dev/fd/N on the command line of the first program. This isn't POSIX, but other shells also support it (it's a ksh88 feature). That's not exactly a feature of the program you're running though, it just sees /dev/fd/N and tries to open it like a regular file. The Autoconf manual mentions some historic notes: A few ancient systems reserved some file descriptors. By convention, file descriptor 3 was opened to /dev/tty when you logged into Eighth Edition (1985) through Tenth Edition Unix (1989). File descriptor 4 had a special use on the Stardent/Kubota Titan (circa 1990), though we don't now remember what it was. Both these systems are obsolete, so it's now safe to treat file descriptors 3 and 4 like any other file descriptors. Also while I did a google search for this I found a program called runit that uses file descriptors 4 and 5 for some purpose related to log rotation. And quoting from the svlogd man page: If svlogd is told to process recent log files, (...). svlogd also saves any output that the processor writes to file descriptor 5, and makes that output available on file descriptor 4 when running processor on the next log file rotation.
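To illustrate the shell side, here is a sketch; merge_tool is a hypothetical program whose manual would say it reads extra inputs on descriptors 3 and 4:

exec 3< input-a.txt 4< input-b.txt   # open the files in the shell
merge_tool                           # the child inherits descriptors 3 and 4
exec 3<&- 4<&-                       # close them again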
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183221/" ] }
302,518
When I try to do apt-get update , nothing works. I was told that I should edit /etc/sources.list , but when I try writing deb http://ftp.de.debian.org/debian/ jessie main contrib non-free I am unable to save the file, as I get an error saying that I cannot edit the sources list.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183992/" ] }
302,548
This is how my bash prompt used to look. Then I did something that was probably not so smart: I ran cat /bin/bash . And now my bash prompt looks like this, with a pound symbol (£) instead of a hash symbol (#). It even affects hash symbols within files, see here: Any idea how to revert this? Edit: This question does not ask "How do I change my bash prompt?", but "My bash prompt changed by itself; how can I restore it?" Complete .bashrc for those who are interested.
The terminal accepts and executes a bunch of different character sequences as control commands. For example, all cursor movement is done using those. Some of the codes make permanent changes, like setting colors, or telling the terminal to use an alternate character set. Executables and other binary files can well contain bytes that represent those commands, so dumping binary files to the terminal can have annoying side effects. See e.g. here for some of the control codes. The historical background to this is that originally, terminals were rather dumb devices with a screen and a keyboard, and they connected to the actual computer via a serial port. Before that, they were printers with keyboards. There wasn't much of a protocol to separate data bytes from command bytes, so commands were given to the terminal "inline". (Or rather, the escape codes and control characters were the protocol.) One might assume that if the system were devised today, there would be a clearer separation between data and commands. Instead of just closing the terminal window or killing the emulator, you can use the reset command , which sends a similar command (or several) to reset the terminal back to sane defaults. I don't know what exactly would cause the change from hash to pound. (But @Random832 does, see their answer .) I'm more familiar with the "alternate character set", which can change all characters into line-drawing glyphs. Even if that happens, input from the keyboard usually goes through unchanged, so typing reset Enter still works even if the characters display as garbage or not at all. (Compared to your prompt being turned into a bunch of lines, you only got a minor effect.)
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/302548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184025/" ] }
302,576
I have a file that looks like: input: 34343443545410001100011000110001100005100005500000 I need each number to appear only once: out: 34435410001100005500000 Any suggestions, considering that the real file is huge with many repetitions?
Here you are:

$ uniq inputFile > outputFile

But notice that uniq only removes repeated lines that come directly after each other. So if you want to delete all repeated lines, even those that are not adjacent, you can first sort your input file and then use uniq :

$ sort -g inputFile | uniq > outputFile

The -g option compares according to general numeric value. Or just use:

sort -nu inputFile > outputFile
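If the original order of first occurrences matters (sorting would reorder the numbers), a common awk idiom removes all duplicates in a single pass without sorting:

awk '!seen[$0]++' inputFile > outputFile    # keeps the first occurrence of each line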
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/302576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
302,599
$ echo $PS1
$
$ echo $PS2
>
$ echo $PS3
$ echo $PS4
+
$ select i in 1 2 3
> do
> case $i in
> 1)
> echo 1
> ;;
> *)
> ;;
> esac
> done
1) 1
2) 2
3) 3
#? 1
1
$ PS3="##? "
$ select i in 1 2 3; do case $i in 1) echo 1; ;; *) ;; esac; done
1) 1
2) 2
3) 3
##? 1
1

As you can see, $PS1 , $PS2 and $PS4 have a value and work as intended. $PS3 is empty (or contains a space, tab, etc.), yet select uses #? for $PS3 ; when the variable is set, it works normally. Why does it behave this way, and why was it designed this way?
Because the doc says so: https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html#Bash-Variables PS3 The value of this variable is used as the prompt for the select command. If this variable is not set, the select command prompts with ‘#? ’
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/302599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166118/" ] }