296,162
I want to transform the short repeated words in columns into numbers. In the following example I want to change the words (with ONLY 2 LETTERS) in column 3 for numbers, so that AA is changed to 2, AB or BA into 1, BB into 0. The first and second column may also contain AA, BB, AB and BA. These should not be changed. Columns are separated by " " (a space).

    Id_animal Id_SNP Allele
    ID01 rs01 AB
    ID02 rs01 BA
    ID03 rs01 AA
    ID04 rs01 BB

The wanted output is:

    Id_animal Id_SNP Allele
    ID01 rs01 1
    ID02 rs01 1
    ID03 rs01 2
    ID04 rs01 0
    sed -i.bak -r 's/ AA$/ 2/;s/ (AB|BA)$/ 1/;s/ BB$/ 0/' input

-i.bak       in place editing and create a backup of original file as input.bak
-r           extended regex syntax
s/ AA$/ 2/   replace ending character sequence of ' AA' with 2
(AB|BA)      either AB or BA
;            separates the different substitute operations
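A quick way to check the substitutions before editing the file in place is to pipe a couple of sample rows from the example above through the same sed program:

    $ printf '%s\n' 'ID01 rs01 AB' 'ID03 rs01 AA' | sed -r 's/ AA$/ 2/;s/ (AB|BA)$/ 1/;s/ BB$/ 0/'
    ID01 rs01 1
    ID03 rs01 2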
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171751/" ] }
296,179
I'd like to use cp -b to copy a file to a destination, possibly creating a backup file of the destination path if it already exists. But, if the backup file already exists, I'd like to have cp fail with an error. I know I can use -n to avoid clobbering the target file, but I want to instead refuse to clobber the backup file. Is there a way to do that? I happen to be using GNU cp on Linux, and I'm willing to accept an answer that is specific to Linux if no POSIX option is available.
If you want to avoid clobbering any backup files with GNU cp , you can use numbered backups:

    cp --backup=t source destination

Rather than overwrite a backup, this creates additional backups.

Example

As an example, let's consider a directory with two files:

    $ ls
    file1  file2

Now, let's copy file1 over file2:

    $ cp --backup=t file1 file2
    $ ls
    file1  file2  file2.~1~

As we can see, a backup was made. Let's copy it again:

    $ cp --backup=t file1 file2
    $ ls
    file1  file2  file2.~1~  file2.~2~

Another backup was made.

Documentation

From man cp , just before the end of the "description" section, the various possible options for --backup are itemized:

    The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX.  The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable.  Here are the values:

    none, off
           never make backups (even if --backup is given)
    numbered, t
           make numbered backups
    existing, nil
           numbered if numbered backups exist, simple otherwise
    simple, never
           always make simple backups

    As a special case, cp makes a backup of SOURCE when the force and backup options are given and SOURCE and DEST are the same name for an existing, regular file.
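Per the man page excerpt above, the same behaviour can also be requested through the environment rather than the option; a small sketch, assuming GNU cp:

    VERSION_CONTROL=numbered cp -b source destination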
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68318/" ] }
296,224
I'm using make to handle building files for my application, and those build processes use node modules. Since I install those node modules locally, I have to specify in my $PATH where to call the executables, e.g. PATH=$(npm bin):$PATH . I've set up a variable within my Makefile, NPMEXEC := PATH=$(shell npm bin):$$PATH , and prepend this to my commands when I need to. However, for some longer tasks such as during testing that run multiple commands, it would be convenient to have that PATH assignment occur during the entire duration of the task, kind of like pushd / popd . Is that possible?
“Task” is not common make terminology. I assume that you mean a rule . If you're using GNU make, you can set a variable for a specific rule, or more precisely, for a specific target .

    test-results: export PATH := $(shell npm bin):$$PATH
    test-results: test-binary1 test-binary2 test-data2 reference-test-results
            test-binary1 >test-results
            test-binary2 test-data2 >>test-results
            diff test-results reference-test-results

Note that the assignment is in make syntax, which is not the same as shell syntax. And note that when modifying a variable, you must use eager (“expanded”) assignment , not the = lazy assignment which would create a circular reference.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18885/" ] }
296,247
Observe:

    $ ls
    $ ls > list
    $ cat list
    list

This appears to indicate that when ls is executed that the redirection into file list has already begun and the list file is already created. A fine enough explanation at any rate, but the question is this: How can I prevent this from happening? What I expected to happen was ls would execute and its output dumped into list and that is what I want.
As you've noticed, the file is created before ls is run. This is due to how the shell handles its order of operations. In order to do ls > file the shell needs to create file and then set stdout to point to that and then finally run the ls program. So you have some options.

Create the file in another directory (eg /tmp ) and then mv it to the final directory
Create it as a hidden file ( .file ) and rename it
Use grep to remove the file from the output
Cheat :-)

The cheat would be something like

    x=$(ls) ; printf "%s\n" "$x" > file

This causes the output of ls to be held in a variable, and then we write that out.
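The first option from that list as a one-liner, assuming /tmp is acceptable as a scratch location:

    ls > /tmp/list && mv /tmp/list list

Because the output file is created outside the directory being listed, it never shows up in the ls output.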
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/296247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12497/" ] }
296,250
When I run systemctl start sshd.socket on my Arch Linux machine, then log into the server from my mac, then run systemctl stop sshd.socket on my Arch Linux machine, nothing happens until I logout from the mac machine. I can still edit files on the arch linux machine from my mac. Is there any way to close the server so that my mac can't edit files, instead of waiting until I type exit on my mac (other than shutting down the Arch Linux machine)?
Typically, stopping sshd will prevent it from accepting any new connections, but won't kill off existing connections. Once you've done the stop you then need to do something like killall sshd .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176809/" ] }
296,272
I'd like to use the Vi editor to delete multiple rows in a file. Please give me idea or suggestion. My goal is like this:

Before:

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
    21
    ..
    29
    30
    ..

After:

    1
    10
    20
    30
    40
    ..
If you mean you want to keep every 10th line and delete the rest:

    %norm 9ddj

Explanation:

    %       whole file
    norm    execute the following commands in "normal mode"
    9dd     delete 9 lines
    j       move down one line (i.e. keep it)

note: this deletes the first row. Adapted from http://www.rayninfo.co.uk/vimtips.html

Or using the global command: duplicate the first line ( g g Y P ), then:

    :g/^/+ d9

Adapted from https://stackoverflow.com/questions/1946738/vim-how-to-delete-every-second-row

Or you could use awk :

    %!awk 'NR \% 10 == 0 || NR == 1'
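To sanity-check the awk filter outside vim, using seq to stand in for the numbered lines (the backslash before % is only needed when the command is typed inside vim):

    $ seq 30 | awk 'NR % 10 == 0 || NR == 1'
    1
    10
    20
    30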
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/296272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177843/" ] }
296,297
Passing a password on the command line (to a child process started from my program) is known to be insecure (because it can be seen even by other users with the ps command). Is it OK to pass it as an environment variable instead? What else can I use to pass it (other than an environment variable)? The easiest solution seems to be to use a pipe, but this easiest solution is not easy. I program in Perl.
Process arguments are visible to all users, but the environment is only visible to the same user ( at least on Linux , and I think on every modern unix variant). So passing a password through an environment variable is safe. If someone can read your environment variables, they can execute processes as you, so it's game over already. The contents of the environment is at some risk of leaking indirectly, for example if you run ps to investigate something and accidentally copy-paste the result including confidential environment variables in a public place. Another risk is that you pass the environment variable to a program that doesn't need it (including children of the process that needs the password) and that program exposes its environment variables because it didn't expect them to be confidential. How bad these risks of secondary leakage are depends on what the process with the password does (how long does it run? does it run subprocesses?). It's easier to ensure that the password won't leak accidentally by passing it through a channel that is not designed to be eavesdropped, such as a pipe. This is pretty easy to do on the sending side. For example, if you have the password in a shell variable, you can just do

    echo "$password" | theprogram

if theprogram expects the password on its standard input. Note that this is safe because echo is a builtin; it would not be safe with an external command since the argument would be exposed in ps output. Another way to achieve the same effect is with a here document:

    theprogram <<EOF
    $password
    EOF

Some programs that require a password can be told to read it from a specific file descriptor. You can use a file descriptor other than standard input if you need standard input for something else. For example, with gpg :

    get-encrypted-data | gpg --passphrase-fd 3 --decrypt … 3<<EOP >decrypted-data
    $password
    EOP

If the program can't be told to read from a file descriptor but can be told to read from a file, you can tell it to read from a file descriptor by using a file name like /dev/fd/3 .

    theprogram --password-from-file=/dev/fd/3 3<<EOF
    $password
    EOF

In ksh, bash or zsh, you can do this more concisely through process substitution.

    theprogram --password-from-file=<(echo "$password")
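A harmless way to see the /dev/fd trick from the previous example in action, with cat standing in for the real program and a made-up value for the password:

    password='example'
    cat /dev/fd/3 3<<EOF
    $password
    EOF

This prints example, and the password never appears among the cat process's arguments.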
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/296297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9158/" ] }
296,299
I am trying to run my application directly from the linux kernel (without using cron or something similar). I've tried using ./init/init.c , but it runs too early:

    $ dmesg
    ...
    [ 0.605657] TEST!!!
    ...

My idea is to launch an application after successful user login, but I cannot find an appropriate function to use.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/296299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179952/" ] }
296,347
Here's what I did on Debian Jessie:

install cron via apt-get install cron
put a backup_crontab file in /etc/cron.d/

However the task is never running. Here are some outputs:

    /# crontab -l
    no crontab for root
    /# cd /etc/cron.d && ls
    backup_crontab
    /etc/cron.d# cat backup_crontab
    0,15,30,45 * * * * /backup.sh >/dev/null 2>&1

Is there something to do to activate a particular crontab, or to activate the cron "service" in itself?
Files in /etc/cron.d need to also list the user that the job is to be run under. i.e.

    0,15,30,45 * * * * root /backup.sh >/dev/null 2>&1

You should also ensure the permissions and owner:group are set correctly ( -rw-r--r-- and owned by root:root )
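For example, the ownership and permissions mentioned above can be set with:

    chown root:root /etc/cron.d/backup_crontab
    chmod 644 /etc/cron.d/backup_crontab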
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/296347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180052/" ] }
296,354
Forgive me; I'm pretty new to bash files and the like. Here is a copy of my .bashrc:

    alias k='kate 2>/dev/null 1>&2 & disown'
    function kk {kate 2>/dev/null 1>&2 & disown}

The alias in the first line works fine, but the second line throws:

    bash: /home/mozershmozer/.bashrc: line 3: syntax error near unexpected token `{kate'
    bash: /home/mozershmozer/.bashrc: line 3: `function kk {kate 2>/dev/null >1>&2 & disown}'

I'm running Linux Mint 17.3 and those are the only two lines in my .bashrc file. Pretty much everything else on my machine is default vanilla. Ultimately I want to play around with the function to get it to do some specific things, but I hit the syntax wall immediately. The exact function I have listed here is just a sort of experimental dummy to allow me to learn the syntax more clearly.
In bash and other POSIX shells, { and } aren't exactly special symbols so much as they are special words in this context. When creating a compound command like in your function definition, it is important that they remain words , i.e. surrounded by whitespace. The final command in a single-line function definition like this must be terminated by a semicolon. Otherwise the closing brace } is treated as an argument to the command. As an aside, if you want your function to be portable to other POSIX shells, it is better to use a different function syntax:

    kk () { kate 2>/dev/null 1>&2 & disown; }

The use of function is specific to bash , while the form given here works with bash as well as others like sh, Korn and Almquist shells. disown is also bash specific.
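Applying the same two fixes (whitespace around the braces and a terminating semicolon) while keeping the bash-specific function keyword would look like:

    function kk { kate 2>/dev/null 1>&2 & disown; }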
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180059/" ] }
296,389
I'm trying to compress a virtual machine image file through a script, but I want to be sure the file isn't being accessed. I could check if virt-manager is running, since it should be the only program accessing the image, but I don't know if there's a better way to do it. I also want the script to continue trying until the file is available to compress. I don't know how do to that either.

    #Check if virt-manager is running
    if pgrep "virt-manager" > /dev/null
    then
        #re-run script until success
    else
        gzip -k < /home/brady/.vms/windows10/hdd.img > /media/backup/vms/windows10/hdd.$(date +"%F.%T).img.gz
The lsof command can tell you if a file is in use. You can put that in a while loop with a sleep to make it check every so often. For example: In window 1 you can run

    sleep 10000 > /tmp/x

In window 2 run this script:

    #!/bin/bash
    FILE=/tmp/x
    while [ -n "$(lsof "$FILE")" ]
    do
      sleep 1
    done
    echo "File $FILE not in use"

Now when you press control-C to abort the sleep you'll see the "File not in use" response within a second or so.
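One way to wire that loop into the backup from the question (paths taken from the question; the 60-second polling interval is an arbitrary choice):

    #!/bin/bash
    IMG=/home/brady/.vms/windows10/hdd.img
    while [ -n "$(lsof "$IMG")" ]
    do
      sleep 60
    done
    gzip -k < "$IMG" > /media/backup/vms/windows10/hdd.$(date +"%F.%T").img.gz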
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120197/" ] }
296,401
I would like if someone points out mistakes in my script. The source where I'm learning from is so buggy that's why it's confusing me. PURPOSE OF THIS SCRIPT: It will count the numbers from whatever number the user enters to number 1

    #!/bin/bash
    echo -n Enter a number
    read number
    if (($number > 0)) ; then
    index = $number
    while [ $index => 1 ] ; do
    echo $index
    ((index--))
    break
    done
    fi

ERROR IT GIVES:

    index: command not found
index = $number : cannot use spaces around = for variable assignment; use index=$number or ((index = number))

[ $index => 1 ] : I suppose you want to check if index is greater than or equal to 1; use [ $index -ge 1 ] or ((index >= 1))

Why is the break statement used? It is used to quit the loop. Also the if statement is not required.

You can also use the read -p option to add a message for the user.

Putting it all together:

    #!/bin/bash
    read -p 'Enter a number: ' number
    while ((number >= 1)) ; do
        echo $number
        ((number--))
    done
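A sample run, assuming the corrected script is saved as countdown.sh (the name is made up):

    $ ./countdown.sh
    Enter a number: 3
    3
    2
    1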
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172540/" ] }
296,453
I have one particular service, that logs rare, but important information. I set it up a few months ago, and today I've run journalctl -n 50 --unit=my-service only to find there are no entries. I'm perfectly happy with this behavior for most units — I either need something that happened right away (or a few days ago at most), and I don't care about months-old records. However, is there a way to tell journald to have an independent storage and retention policy for a single particular unit's records? I want to persist those particular logs for, say, 5 years — no matter the size it would take. The other units' logs should be unaffected by this, and retain their existing behavior. I'm sort of lost understanding journald.conf(5) , and can't figure out whether per-unit configuration is possible at all. If it is — I would appreciate a brief concrete example: which file should I edit/create and what should I write. Or, if you know for sure it's certainly not doable — that would be a good answer as well. NOTE: My particular case involves an Arch Linux host, but I guess this shouldn't matter much.
It seems that I'm most likely out of luck with journald, unless I figure out a way to spawn an independent "long-term storage" journal (like the existing separate per-user journals), but I'm not sure that's a viable and sane approach. I guess setting up a syslogd (and logrotate) would be easier. The feature wasn't present in late 2014 , as confirmed by Lennart himself. And it seems that it's not here yet. At least, the line "journald: allow per-priority and per-service retention times when rotating/vacuuming" is still in the TODO file (link to revision from 2016-07-11).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15549/" ] }
296,484
Is it possible to have a terminal only desktop in Linux (Mint)? I want to boot normally, meaning I want to be able to start GUI programs (IDEs, Browsers etc.), but I don't want anything on the desktop but a Terminal after booting. Ideally some kind of embedded terminal on the desktop and nothing but that. My current "workaround" is to have a pure black desktop and use Ctrl + Alt + T to start a shell, but ideally I want one as fix part of the desktop. The purpose would be to be forced to do standard stuff with terminal only and as little distraction as possible.
One way or another, you would need X running. But you can get something like what you're asking with a tiling window manager. One of the earlier ones was "ion" (not as popular now). Further reading (no specific recommendations, of course: that would introduce opinion):

Comparison of tiling window managers (Arch wiki)
Why You Should Try a Tiling Window Manager
Exploring Tiling Window Managers
5 Great Tiling Window Managers for Linux
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180144/" ] }
296,489
What is the most concise way to resolve a hostname to a local IP address in Arch Linux?
You can use either host or nslookup from bind-tools :

    $ host 172.217.19.195
    195.19.217.172.in-addr.arpa domain name pointer fra02s21-in-f3.1e100.net.

    $ nslookup 172.217.19.195
    Server:    192.168.2.1
    Address:   192.168.2.1#53

    Non-authoritative answer:
    195.19.217.172.in-addr.arpa  name = fra02s21-in-f3.1e100.net.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134726/" ] }
296,596
I have a script that generates some output. I want to check that output for any IP address like

    159.143.23.12
    134.12.178.131
    124.143.12.132

    if (IPs are found in <file>)
    then
        // bunch of actions //
    else
        // bunch of actions //

Is fgrep a good idea? I have bash available.
Yes, you have a lot of options/tools to use. I just tried this, and it works:

    ifconfig | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"

so you can use

    grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"

to grep the IP addresses from your output.
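Wired into the if/else sketch from the question, with grep -q for a silent match (your_script here is a placeholder for whatever generates the output):

    if your_script | grep -qE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"
    then
        echo "IP address found"     # bunch of actions
    else
        echo "no IP address found"  # bunch of actions
    fi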
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/296596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152598/" ] }
296,618
ip route get 2607:f8b0:4005:804::200e will show me the best (longest prefix) route to google.com , but it does not show all routes that can take me there. Right now I am using ip -6 route show | grep 2607:f8b0: . This prints the right routes, but it also prints every other route in that /32. There has got to be a better way.
There is an easy way to list all routes matching a prefix on linux:

    ip -6 route list match 2607:f8b0:4005:804::200e table all

This will list all possible routes to the specified target (including default, if no more specific is found) in all tables. Obviously, this works for IPv4 too. PS: I know that my answer is a bit too late, and most likely you have figured this out on your own already, but nevertheless - whoever hits this question may find it helpful :)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31123/" ] }
296,626
I have the following file named OpenSimStats.txt:

    TestreportsRootAgentCount=0agent(s)
    TestreportsChildAgentCount=0childagent(s)
    TestreportsGCReportedMemory=10MB(Global)
    TestreportsTotalObjectsCount=0Object(s)
    TestreportsTotalPhysicsFrameTime=0ms
    TestreportsPhysicsUpdateFrameTime=0ms
    TestreportsPrivateWorkingSetMemory=2144MB(Global)
    TestreportsTotalThreads=0Thread(s)(Global)
    TestreportsTotalFrameTime=89ms
    TestreportsTotalEventFrameTime=0ms
    TestreportsLandFrameTime=0ms
    TestreportsLastCompletedFrameAt=25msago
    TestreportsTimeDilationMonitor=1
    TestreportsSimFPSMonitor=55.3333320617676
    TestreportsPhysicsFPSMonitor=55.4766654968262
    TestreportsAgentUpdatesPerSecondMonitor=0persecond
    TestreportsActiveObjectCountMonitor=0
    TestreportsActiveScriptsMonitor=0
    TestreportsScriptEventsPerSecondMonitor=0persecond
    TestreportsInPacketsPerSecondMonitor=0persecond
    TestreportsOutPacketsPerSecondMonitor=0persecond
    TestreportsUnackedBytesMonitor=0
    TestreportsPendingDownloadsMonitor=0
    TestreportsPendingUploadsMonitor=0
    TestreportsTotalFrameTimeMonitor=18.18239402771ms
    TestreportsNetFrameTimeMonitor=0ms
    TestreportsPhysicsFrameTimeMonitor=0.0106373848393559ms
    TestreportsSimulationFrameTimeMonitor=0.17440040409565ms
    TestreportsAgentFrameTimeMonitor=0ms
    TestreportsImagesFrameTimeMonitor=0ms
    TestreportsSpareFrameTimeMonitor=18.1818199157715ms
    TestreportsLastReportedObjectUpdates=0
    TestreportsSlowFrames=1

I want to transform this file into a CSV file like the following:

    TestreportsRootAgentCount,TestreportsChildAgentCount,...,TestreportsSlowFrames
    0,0,10,0,0...,1

By which I mean:

take out all words before and after a delimiter, in this case the delimiter is "="
put all words on the left of the delimiter in one line separated by commas, then insert a new line at the end
then put whatever is after the delimiter ( = ) - the numbers only (without the units or characters after the numbers) - in another line where these numbers are separated by commas, then insert a new line

Any ideas/suggestions on how this can be done in Linux shell scripting? By using sed or gawk?
The 9 paths to OpenSim enlightenment:

With sed and some shell magic:

    sed 's/=.*//' OpenSimStats.txt | paste -sd, >out.csv
    sed 's/.*=//; s/[^0-9]*$//' OpenSimStats.txt | paste -sd, >>out.csv

With sed , without shell magic:

    sed -n 's/=.*//; 1{ h; b; }; $! H; $ { x; s/\n/,/g; p; }' OpenSimStats.txt >out.csv
    sed -n 's/.*=//; 1{ s/[0-9]*$//; h; b; }; s/[^0-9]*$//; $! H; $ { x; s/\n/,/g; p; }' OpenSimStats.txt >>out.csv

With shell magic and a tiny bit of sed :

    paste -sd, <(cut -d= -f1 OpenSimStats.txt) <(cut -d= -f2 OpenSimStats.txt | sed 's/[^0-9]*$//')

With cut and some shell magic:

    cut -d= -f1 OpenSimStats.txt | paste -sd, >out.csv
    cut -d= -f2 OpenSimStats.txt | sed 's/[^0-9]*$//' | paste -sd, >>out.csv

With GNU datamash :

    sed 's/=/,/; s/[^0-9]*$//' OpenSimStats.txt | datamash -t, transpose

With perl :

    perl -lnE 's/\D+$//o; ($a, $b) = split /=/; push @a, $a; push @b, $b; END { $, = ","; say @a; say @b }' OpenSimStats.txt

With grep :

    grep -o '^[^=]*' OpenSimStats.txt | paste -sd, >out.csv
    egrep -o '[0-9.]+' OpenSimStats.txt | paste -sd, >>out.csv

With bash :

    #! /usr/bin/env bash
    line1=()
    line2=()
    while IFS='=' read -r a b; do
        line1+=("$a")
        [[ $b =~ ^[0-9.]+ ]]
        line2+=("$BASH_REMATCH")
    done <OpenSimStats.txt
    ( set "${line1[@]}"; IFS=,; echo "$*" ) >out.csv
    ( set "${line2[@]}"; IFS=,; echo "$*" ) >>out.csv

With awk :

    awk -F= '
        NR==1 { a = $1; sub(/[^0-9]+$/, "", $2); b = $2; next }
        { a = a "," $1; sub(/[^0-9]+$/, "", $2); b = b "," $2 }
        END { print a; print b }
    ' OpenSimStats.txt

Bonus 10th path for data nerds, with csvtk :

    csvtk replace -d= -f 2 -p '\D+$' -r '' <OpenSimStats.txt | csvtk transpose

Bonus 11th path with vim :

    :%s/\D*$//
    :%s/=/\r/
    qaq
    :g/^\D/y A | normal dd
    :1,$-1 s/\n/,/
    "aP
    :2,$-2 s/\n/,/
    :d 1
    :w out.csv
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180240/" ] }
296,640
I'm trying to place a zip file with the content of the directory in each directory that doesn't contain any subdirectory. The following command works if there's no space in dirnames and/or filenames but fails if there are:

    find pictures/ -type d -links 2 -execdir sh -c "pwd; echo '{}'; zip -v '{}/{}.zip' '{}/*' -x \*.zip -x \*.id" \;

I don't understand why it fails since I've quoted the '{}' as usual. Also tried "{}". Said differently, it should create A/B/C/C.zip with the content of A/B/C/*. If there are spaces in 'C', I get:

    zip warning: name not matched: ./C/*
    zip error: Nothing to do! (./C/./C.zip)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92092/" ] }
296,675
I read here that chmod -R 777 / is a really bad idea, because it overwrites permissions on files, and erases sticky bits and setgid and stuff. However I was thinking that chmod -R ugo+rwx / would not overwrite the permissions but add them, if not there already present, and that it would be therefore much safer than the aforementioned command. Am I right? Or are those commands basically the same and both would destroy my system? I don't intend on doing it, just asking for general culture.
You would strip all security from your system, making it extremely vulnerable. Lots of programs would stop functioning due to insecure permissions. You are technically right that it would append those rather than overwrite, so you would keep SGID and SUID permissions. I have an old Ubuntu machine I no longer need, so I figured I would test this. After running chmod -R ugo+rwx / , sudo stopped working due to insecure perms on /usr/lib/sudo/sudoers.so . ssh stopped working because I was using rsa keys, which also require strict permissions. I couldn't reboot the machine from within the OS because sudo was broken; however, the power button worked just fine. I was surprised because the server actually booted up just fine; I could probably fix it with single user mode, but I am just going to reinstall. So to answer your question, no. While chmod -R ugo+rwx / is technically different than chmod -R 777 / , it is not safer because they both break your system.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148149/" ] }
296,680
On my linux box I've 2 NICS:

    br0  192.168.1.0/24 (192.168.1.1 server)
    eth1 192.168.2.0/24 lan1
    eth2 192.168.3.0/24 lan2
    [...]

there's a routing from eth1 to br0 and also from eth2 to br0. I dropped icmp echo requests on eth1 from the same subnet:

    iptables -A INPUT -i eth1 -p icmp --icmp-type echo-request -j DROP

but I also want to block icmp echo requests ONLY from the 192.168.2.0/24 network on 192.168.1.1. So I don't want clients on 192.168.2.0/24 to be able to ping 192.168.1.1
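A hedged sketch, not taken from the original thread: the same DROP rule narrowed to the 192.168.2.0/24 source network and the 192.168.1.1 destination address would look something like

    iptables -A INPUT -s 192.168.2.0/24 -d 192.168.1.1 -p icmp --icmp-type echo-request -j DROP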
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296680", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40628/" ] }
296,697
I want to encrypt a file with a private key and decrypt it with a public key. A public key will be embedded in my app. So I want to have a guarantee that the file was created by me. How can I use gpg or openssl to implement it.
It makes no sense to encrypt a file with a private key . Using a private key to attach a tag to a file that guarantees that the file was provided by the holder of the private key is called signing , and the tag is called a signature . There is one popular cryptosystem (textbook RSA) where a simplified (insecure) algorithm has public and private keys of the same type, and decryption is identical to signature and encryption is identical to verification. This is not the case in general: even RSA uses different mechanisms for decryption and signature (resp. encryption and verification) with proper, secure padding modes; and many other algorithms have private and public keys that aren't even the same kind of mathematical objects. So you want to sign the file. The de facto standard tool for this is GnuPG . To sign a file with your secret key:

    gpg -s /path/to/file

Use the --local-user option to select a secret key if you have several (e.g. your app key vs your personal key). Transfer file.gpg to the place where you want to use the file. Transfer the public key as well (presumably inside the application bundle). To extract the original text and verify the signature, run

    gpg file.gpg

If it's more convenient, you can transfer file itself, and produce a separate signature file which is called a detached signature. To produce the detached signature:

    gpg -b /path/to/file

To verify:

    gpg file.gpg file

You can additionally encrypt the file with the -e option. Of course this means that you need a separate key pair, where the recipient (specified with the -r option) has the private key and the producer has the public key.
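A small end-to-end sketch of the detached-signature route, with an assumed key ID and file name (the .sig name is what gpg -b produces by default):

    gpg --local-user MYAPPKEY -b /path/to/file         # writes /path/to/file.sig
    gpg --verify /path/to/file.sig /path/to/file       # checks the signature against the file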
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179952/" ] }
296,750
Given the script below, how can I ensure that the argument only contains a valid filename within /home/charlesingalls/ and not a path ( ../home/carolineingalls/ ) or wildcard, etc? I only want the script to be able to delete a single file from the given hard-coded directory. This script will run as a privileged user.

    #!/bin/bash
    rm -f /home/charlesingalls/"$1"
This answer assumes that $1 is allowed to include subdirectories. If you are interested in the simpler case where $1 should be a simple directory name, then see one of the other answers. Wildcards are not expanded when in double-quotes. Since $1 is in double-quotes, wildcards are not a problem. Both ../ and symlinks can obscure the real location of a file. Shown below are tests to determine if the file is really, not just seemingly, under the path we want.

Newer systems: using realpath

As for finding out if the file is really under /home/charlesingalls/ or not, you can use realpath :

    realpath --relative-base=/home/charlesingalls/ "/home/charlesingalls/$1" | grep -q '^/' && exit 1

The above runs exit 1 if the file specified by $1 is anywhere other than under the directory /home/charlesingalls/ . realpath canonicalizes the whole path, eliminating both symlinks and ../ . realpath is part of GNU coreutils and should be available on any Linux system. realpath requires GNU coreutils 8.15 (Jan 2012) or better .

Examples

To demonstrate how realpath follows ../ to determine the real location of a file (for the examples, the -q option to grep is omitted so that the actual output of grep is visible):

    $ touch /tmp/test
    $ realpath --relative-base=$HOME "$HOME/../../tmp/test" | grep '^/' && echo FAIL
    /tmp/test
    FAIL

To demonstrate how it follows symlinks:

    $ ln -s /tmp/test ~/test
    $ realpath --relative-base=$HOME "$HOME/test" | grep '^/' && echo FAIL
    /tmp/test
    FAIL

Older systems: using readlink -e

readlink is also capable of canonicalizing a path, following both symlinks and ../ :

    readlink -e "$HOME/test" | grep -q "^$HOME" || exit 1

Using the same example files:

    $ readlink -e "$HOME/../../tmp/test" | grep "$HOME" || echo FAIL
    FAIL
    $ readlink -e "$HOME/test" | grep "^$HOME" || echo FAIL
    FAIL

In addition to being available on older GNU systems, versions of readlink are available on BSD.
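A hedged sketch combining the realpath check above with the original script (GNU coreutils assumed; the variable name is made up):

    #!/bin/bash
    target=/home/charlesingalls/"$1"
    # refuse anything that does not resolve to a location under the hard-coded directory
    realpath --relative-base=/home/charlesingalls "$target" | grep -q '^/' && exit 1
    rm -f "$target"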
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180332/" ] }
296,751
When working on linux ( mint mate 17.2 ), need kill ibus daemon and restart it for some reason. After that one of the editor which is a wine application can't use the ibus input anymore, while other non-wine application could. Trying to restart the wine application or ibus again won't fix the problem. Restart the machine fixes the issue, but it's not preferred. Wondering is it due to some kind of cache in wine or wine application. So, any idea? Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78405/" ] }
296,776
Since I move around, I need to be to change time-zones frequently. I'm on Arch/Xfce. How can I do that? I've tried right click on the watch on the top panel -> properties -> time settings -> time zone. It didn't work. When I type a time-zone, it's not auto-completing and not showing suggestions. When I enter it, nonetheless and press Ok, the time doesn't change according to a new time-zone. What's the proper way to do that?
It's as simple as typing in just one command:

    timedatectl set-timezone Zone/SubZone

Where you replace Zone/SubZone with correct data. You can obtain a list of all available timezones by typing:

    timedatectl list-timezones

If you want to have your RTC (hardware clock) using local time, run the following command:

    timedatectl set-local-rtc 1

If you prefer your RTC at UTC, use this one:

    timedatectl set-local-rtc 0
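For example, to search the list and then switch to one specific zone (the zone here is just an illustration):

    timedatectl list-timezones | grep -i berlin
    timedatectl set-timezone Europe/Berlin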
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/296776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180184/" ] }
296,800
I am trying to compare two files and obtain the matching values. I tried this command:

    grep -Fwf file_1.txt file_2.txt > matched_output.txt

However, this extracts only unique values .

File_1.txt

    K00012
    K00012
    K00024
    K00024
    K00024
    K00027
    K00027
    K00027
    K00027

File_2.txt

    ko:K00012 UGDH; UDPglucose 6-dehydrogenase
    ko:K00024 mdh; malate dehydrogenase
    ko:K00027 ME2; malate dehydrogenase (oxaloacetate-decarboxylating)

Expected Output

    K00012 ko:K00012 UGDH; UDPglucose 6-dehydrogenase
    K00012 ko:K00012 UGDH; UDPglucose 6-dehydrogenase
    K00024 ko:K00024 mdh; malate dehydrogenase
    K00024 ko:K00024 mdh; malate dehydrogenase
    K00024 ko:K00024 mdh; malate dehydrogenase
    K00027 ko:K00027 ME2; malate dehydrogenase (oxaloacetate-decarboxylating)
    K00027 ko:K00027 ME2; malate dehydrogenase (oxaloacetate-decarboxylating)
    K00027 ko:K00027 ME2; malate dehydrogenase (oxaloacetate-decarboxylating)
    K00027 ko:K00027 ME2; malate dehydrogenase (oxaloacetate-decarboxylating)
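A hedged sketch, not from the original thread, of one way to keep the duplicates: look each key from File_1.txt up in File_2.txt and print the key together with its match, as in the expected output above:

    while read -r key; do
        grep -F "ko:$key" File_2.txt | sed "s/^/$key /"
    done < File_1.txt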
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/296800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180368/" ] }
296,838
eval and exec are both built in commands of bash(1) that execute commands. I also see exec has a few options but is that the only difference? What happens to their context?
eval and exec are completely different beasts. (Apart from the fact that both will run commands, but so does everything you do in a shell.)

    $ help exec
    exec: exec [-cl] [-a name] [command [arguments ...]] [redirection ...]
        Replace the shell with the given command.

What exec cmd does, is exactly the same as just running cmd , except that the current shell is replaced with the command, instead of a separate process being run. Internally, running say /bin/ls will call fork() to create a child process, and then exec() in the child to execute /bin/ls . exec /bin/ls on the other hand will not fork, but just replaces the shell. Compare:

    $ bash -c 'echo $$ ; ls -l /proc/self ; echo foo'
    7218
    lrwxrwxrwx 1 root root 0 Jun 30 16:49 /proc/self -> 7219
    foo

with

    $ bash -c 'echo $$ ; exec ls -l /proc/self ; echo foo'
    7217
    lrwxrwxrwx 1 root root 0 Jun 30 16:49 /proc/self -> 7217

echo $$ prints the PID of the shell I started, and listing /proc/self gives us the PID of the ls that was ran from the shell. Usually, the process IDs are different, but with exec the shell and ls have the same process ID. Also, the command following exec didn't run, since the shell was replaced. On the other hand:

    $ help eval
    eval: eval [arg ...]
        Execute arguments as a shell command.

eval will run the arguments as a command in the current shell. In other words eval foo bar is the same as just foo bar . But variables will be expanded before executing, so we can execute commands saved in shell variables:

    $ unset bar
    $ cmd="bar=foo"
    $ eval "$cmd"
    $ echo "$bar"
    foo

It will not create a child process, so the variable is set in the current shell. (Of course eval /bin/ls will create a child process, the same way a plain old /bin/ls would.) Or we could have a command that outputs shell commands. Running ssh-agent starts the agent in the background, and outputs a bunch of variable assignments, which could be set in the current shell and used by child processes (the ssh commands you would run). Hence ssh-agent can be started with:

    eval $(ssh-agent)

And the current shell will get the variables for other commands to inherit. Of course, if the variable cmd happened to contain something like rm -rf $HOME , then running eval "$cmd" would not be something you'd want to do. Even things like command substitutions inside the string would be processed, so one should really be sure that the input to eval is safe before using it. Often, it's possible to avoid eval and avoid even accidentally mixing code and data in the wrong way.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/296838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21240/" ] }
296,902
I have a date of the format YYYYMMDDHHMM. For example, 201607131001 would be 10:01 AM July 13, 2016 . Is there any way I can use the date utility to format the timestamp?
Assuming you have GNU date, we need to add a space between the date and time and pass it with -d

    $ date -d "20160713 1001"
    Wed Jul 13 10:01:00 EDT 2016

We can do that split pretty easily with parameter expansions. e.g

    $ d=201607131001
    $ date -d "${d%????} ${d#????????}"
    Wed Jul 13 10:01:00 EDT 2016

You can then use the standard + formatting strings to get it in the format you want ( man date explains all the options).

    $ d=201607131001
    $ date -d "${d%????} ${d#????????}" +"%I:%M %p %B %d, %Y"
    10:01 AM July 13, 2016
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/296902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118072/" ] }
296,967
I made a backup to an NTFS drive, and well, this backup really proved necessary. However, the NTFS drive messed up permissions. I'd like to restore them to normal w/o manually fixing each and every file. One problem is that suddenly all my text files gained execute permissions, which is wrong ofc. So I tried: sudo chmod -R a-x folder\ with\ restored\ backup/ But it is wrong as it removes the x permission from directories as well which makes them unreadable. What is the correct command in this case?
If you are fine with setting the execute permissions for everyone on all folders:

    chmod -R -x+X -- 'folder with restored backup'

The -x removes execute permissions for all. The +X will add execute permissions for all, but only for directories. See Stéphane Chazelas's answer for a solution that uses find to really not touch folders, as requested.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/296967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112145/" ] }
297,006
I have a collection of files ( *.zip, *.txt, *.tar.gz, *.doc, ...etc ). These files reside within a path. I want to find all the files ( *.txt), then copy, only, the text files that contain specific words ( e.g LINUX/UNIX). I ran the following: find . -name "*.txt" | grep 'LINUX/UNIX' This command was able to find all the text files, then "grep" filtered the resultant text files by listing only the text files that contain 'LINUX/UNIX'. How can I copy these final files (i.e. the text files that contain 'LINUX/UNIX') to a specific path of choice? I tried to apply xargs find . -name "*.txt" | grep 'LINUX/UNIX' | xargs cp <to a path> But it didn't work
Try:

    grep -rl --null --include '*.txt' LINUX/UNIX . | xargs -0r cp -t /path/to/dest

Because this command uses NUL-separation, it is safe for all file names including those with difficult names that include blanks, tabs, or even newlines. The above requires GNU cp . For MacOS/FreeBSD, try:

    grep -rl --null --include '*.txt' LINUX/UNIX . | xargs -0 sh -c 'cp "$@" /path/to/dest' sh

How it works:

grep options and arguments

-r tells grep to search recursively through the directory structure. (On FreeBSD, -r will follow symlinks into directories. This is not true of either OS/X or recent versions of GNU grep .)
--include '*.txt' tells grep to only return files whose names match the glob *.txt (including hidden ones like .foo.txt or .txt ).
-l tells grep to only return the names of matching files, not the match itself.
--null tells grep to use NUL characters to separate the file names. ( --null is supported by grep under GNU/Linux , MacOS and FreeBSD but not OpenBSD .)
LINUX/UNIX tells grep to look only for files whose contents include the regex LINUX/UNIX .
. search in the current directory. You can omit it in recent versions of GNU grep , but then you'd need to pass a -- option terminator to cp to guard against file names that start with - .

xargs options and arguments

-0 tells xargs to expect NUL-separated input.
-r tells xargs not to run the command unless at least one file was found. (This option is not needed on either BSD or OSX and is not compatible with OSX's xargs .)
cp -t /path/to/dest copies the files to the target directory. ( -t requires GNU cp .)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
297,019
In my Linux environment, the file descriptors are located on /dev/fd. Where is the location of stdin, stdout, stderr file descriptor in AIX(unix). I couldn't find them.
The question is based on a misconception about the generality of proc filesystems. Systems which implement this (such as Solaris and Linux) have special devices which may be used for scripting, including /dev/fd followed by a file descriptor (number). With Solaris (as well as some BSDs such as MacOS, FreeBSD), /dev/fd is a virtual folder under /dev , while Linux uses a symbolic link to /proc into a (virtual) folder matching your process id. There are no standards for proc filesystems, and details will differ. The BSDs do not use identical schemes: Solaris, MacOS and FreeBSD use symbolic links for the stdin , stdout , stderr into this /dev/fd folder, while NetBSD and OpenBSD do this using a device node (with the same major/minor numbers). But they are similar. The /dev/fd is separate from /proc (which Solaris supports, unlike the BSDs). Checking AIX 5.3 and 7.1 systems, they do implement a proc filesystem, but have no /dev/fd . However, they do have a virtual filesystem /proc , under which you can find your current process-id, and under that is an fd folder with file descriptors. Conventionally, file descriptors are initialized 0, 1, 2 for stdin , stdout , stderr respectively.

Further reading:

/proc - Contains state information about processes and threads in the system (AIX 7), for fd :

    Contains files for all open file descriptors of the process. Each entry is a decimal number corresponding to an open file descriptor in the process. If an entity refers to a regular file, it can be opened with normal file semantics. To ensure that the controlling process cannot gain greater access, there are no file access modes other than its own read/write open modes in the controlled process. Directories will be displayed as links. An attempt to open any other type of entry will fail (hence it will display 0 permission when listed).

Using the /proc File System and Commands (Oracle/Solaris)
fd - file descriptor files (Oracle/Solaris)
Linux Filesystem Hierarchy: /proc
3.15 /dev/fd , from Advanced Programming in the UNIX® Environment: UNIX File I/O. Says SVR4 supports /dev/fd , however in a quick check only Solaris (versus the other two SVr4: AIX and HPUX) provide that. HPUX also lacks /proc , so that platform has no magic stdin device in either location.

    The /dev/fd feature was developed by Tom Duff and appeared in the 8th Edition of the Research Unix System. It is supported by SVR4 and 4.3+BSD. It is not part of POSIX.1.
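To see this in practice on an AIX (or Linux) system, you can list the current shell's entries in the proc filesystem described above; descriptors 0, 1 and 2 for stdin, stdout and stderr should be among them:

    ls /proc/$$/fd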
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125570/" ] }
297,086
I want to use something like that if [[ ! -z "$ENV" && $ENV == 'production' ]]; then echo "production"; else echo "dev"; fi but in BusyBox it does not work :( sh: 1: [[: not found It looks like any combination AND or OR does not work in IF statement into busybox
You can use the older [ syntax

    if [ -n "$ENV" -a "$ENV" = 'production' ]

(note I used -n rather than ! -z because it reads easier, but it's the same thing). Or we can simplify to an even older syntax by forcing the string to have a value:

    if [ "x$ENV" = 'xproduction' ]

Finally, the -n test may not really be needed and you can possibly just do

    if [ "$ENV" = 'production' ]
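The last form dropped into the original one-liner, which should work in BusyBox sh:

    if [ "$ENV" = 'production' ]; then echo "production"; else echo "dev"; fi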
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14056/" ] }
297,122
Thanks to this link I know how to pass a variable that contains slashes as a pattern to sed: sed "s~$var~replace~g" $file . Just use a single-byte character in place of /. Thanks to this other link I also know how to replace just the first occurrence of a pattern in a file (not in a line): sed "0,/$var/s/$var/replacement/" filename or sed 0,/$var/{s/$var/replacement/} filename But if I do: sed '0,~$var~s~$var~replacement~' filename (or anything else that begins with 0, then no slash), I've got an error: unknown command: '0' . How could I combine the two? Maybe by using awk or perl or ... ?
While:

    sed "0,\~$var~s~$var~replacement~"

Can be used to change the regex delimiter, embedding variable expansions inside sed (or any other interpreter) code is a very unwise thing to do in the general case. First, here, the delimiter is not the only character that needs to be escaped. All the regular expression operators need to as well. But more importantly, and especially with GNU sed , that's a command injection vulnerability. If the content of $var is not under your control, it's just as bad as passing arbitrary data to eval . Try for instance:

    $ var='^~s/.*/uname/e;#'
    $ echo | sed "0,\~$var~s~$var~replacement~"
    Linux

The uname command was run, thankfully a harmless one... this time. Non-GNU sed implementations can't run arbitrary commands, but can overwrite any file (with the w command), which is virtually as bad. A more correct way is to escape the problematic characters in $var first :

    NL='
    '
    case $var in
      (*"$NL"*)
        echo >&2 "Sorry, can't handle variables with newline characters"
        exit 1
    esac
    escaped_var=$(printf '%s\n' "$var" | sed 's:[][\/.^$*]:\\&:g')
    # and then:
    sed "0,/$escaped_var/s/$escaped_var/replacement/" < file

Another approach is to use perl :

    var=$var perl -pe 's/\Q$ENV{var}\E/replacement/g && $n++ unless $n' < file

Note that we're not expanding the content of $var inside the code passed to perl (which would be another command injection vulnerability), but are letting perl expand its content as part of its regexp processing (and within \Q...\E which means regexp operators are not treated specially). If $var contains newline characters, that may only match if there's only one at the end. Alternatively, one may pass the -0777 option so the input be processed as a single record instead of line-by-line.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180597/" ] }
297,126
I have multiple project trees of legacy code that are using RCS for version control with multiple users. I'd like to be able to walk the tree and test if any files are checked out (and thus the tree is not ready to be packaged for distribution update). For example, I have a test tree:

    tree -p .
    .
    ├── [-r--r--r--] file1
    ├── [drwxrwxr-x] RCS
    │   └── [-r--r--r--] file1,v
    ├── [drwxrwxr-x] subdir1
    │   ├── [drwxrwxr-x] RCS
    │   │   └── [-r--r--r--] sfile1,v
    │   └── [-rw-r--r--] sfile1
    └── [drwxrwxr-x] subdir2
        ├── [drwxrwxr-x] RCS
        │   └── [-r--r--r--] sfile2,v
        └── [-r--r--r--] sfile2

    5 directories, 6 files

In which all files but sfile1 are checked in to their respective RCS dirs. sfile1 has been checked out and modified. rlog subdir1/sfile1 (a file that is properly checked-out) returns:

    RCS file: subdir1/RCS/sfile1,v
    Working file: subdir1/sfile1
    head: 1.1
    branch:
    locks: strict
            torfey: 1.1
    access list:
    symbolic names:
    keyword substitution: kv
    total revisions: 1; selected revisions: 1
    description:
    ----------------------------
    revision 1.1 locked by: torfey;
    date: 2016/07/20 13:09:34; author: torfey; state: Exp;
    Initial revision
    =============================================================================

Whereas rlog subdir2/sfile2 (a file that is properly checked-in) returns:

    RCS file: subdir2/RCS/sfile2,v
    Working file: subdir2/sfile2
    head: 1.1
    branch:
    locks: strict
    access list:
    symbolic names:
    keyword substitution: kv
    total revisions: 1; selected revisions: 1
    description:
    ----------------------------
    revision 1.1
    date: 2016/07/20 13:10:04; author: torfey; state: Exp;
    Initial revision
    =============================================================================

So the command I'd like would, given a directory argument, search for all files under that dir that are part of RCS and return names of any that are not checked in. (Ideally, also if there's some other state that is detectable and bad, like not locked yet different from checked in version, report that too.)

    test_rcs_tree .

It would return, for my above simple case:

    ./subdir1/sfile1 checked-out

What I'm struggling with is whether there's maybe something out there that already does this that I'm just missing in all my searches. I'm running on RHEL 6.7 which has rcs 5.7, gnu awk 3.1.7, gnu make 3.81, bash 4.1.2
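A hedged sketch, not from the original thread, of one way such a test_rcs_tree script could be put together: rlog's -L option skips RCS files that hold no locks and -R prints only the RCS file name, so locked (checked-out) working files can be listed as

    #!/bin/bash
    # list working files whose RCS files currently hold locks
    find "${1:-.}" -name '*,v' -exec rlog -L -R {} + |
        sed -e 's|RCS/||' -e 's|,v$||' -e 's|$| checked-out|'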
While: sed "0,\~$var~s~$var~replacement~" Can be used to change the regex delimiter, embedding variable expansions inside sed (or any other interpreter) code is a very unwise thing to do in the general case. First, here, the delimiter is not the only character that needs to be escaped. All the regular expression operators need to as well. But more importantly, and especially with GNU sed , that's a command injection vulnerability. If the content of $var is not under your control, it's just as bad as passing arbitrary data to eval . Try for instance: $ var='^~s/.*/uname/e;#'$ echo | sed "0,\~$var~s~$var~replacement~"Linux The uname command was run, thankfully a harmless one... this time. Non-GNU sed implementations can't run arbitrary commands, but can overwrite any file (with the w command), which is virtually as bad. A more correct way is to escape the problematic characters in $var first : NL=''case $var in (*"$NL"*) echo >&2 "Sorry, can't handle variables with newline characters" exit 1esacescaped_var=$(printf '%s\n' "$var" | sed 's:[][\/.^$*]:\\&:g')# and then:sed "0,/$escaped_var/s/$escaped_var/replacement/" < file Another approach is to use perl : var=$var perl -pe 's/\Q$ENV{var}\E/replacement/g && $n++ unless $n' < file Note that we're not expanding the content of $var inside the code passed to perl (which would be another command injection vulnerability), but are letting perl expand its content as part of its regexp processing (and within \Q...\E which means regexp operators are not treated specially). If $var contains newline characters, that may only match if there's only one at the end. Alternatively, one may pass the -0777 option so the input be processed as a single record instead of line-by-line.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151249/" ] }
297,143
I have a backup server that clients use with rsync over SSH. The server is configured to run a custom shell script that wraps the invocation of the rsync server side. The client side is unaware of this and can issue any valid rsync command-line. I would like to pass some arguments from the client into that shell script. Something like $ rsync -av -customarg1 value1 -customarg2 value2 file1 file2 user@server The client-side rsync rejects any arguments that it doesn't know, as does ssh (via rsync -e ). The best I have come up with is to pass arguments as fake server-side file specs that the script extracts from the server-side command-line prior to executing it to launch the rsync server. i.e a command like $ rsync -av file1 file2 user@server:some/path\;customarg1=value1\;customarg2=value2 would provide two arguments and then behave as if the command was $ rsync -av file1 file2 user@server:some/path (This uses the fact that SSH sets a number of environment variables including $SSH_ORIGINAL_COMMAND which contains the command issued by the ssh client (i.e the rsync server command launched by the rsync client) regardless of what the ssh server is configured to execute (i.e. the custom shell script))
Perhaps you could use the -M option (long version --remote-option ), which does not seem to be checked by the client rsync. -M-xxxx is passed to theremote with the -M stripped off as -xxxx . You can easily play with this using -M-M-xxxx , which is ignored by client and remote: rsync -av -M-M-myvar=val /tmp/test/ remote:test/ If your server front-end recognises and removes the resulting flags before calling rsync, you get what you needed. You can play further with --rsync-path which allows you to run any script. It will get the remote args concatenated. Eg rsync -a -M-myvar=val --rsync-path='echo hello >/tmp/out; ./mysync' /tmp/test/ remote:test/ will run on the remote something like echo hello >/tmp/out./mysync --server -logDtpre.iLsfx -myvar=val . test/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9259/" ] }
297,145
I am starting to use Arch Linux / Mate, and Guake with it (last versions). I would like Guake can highlight the command prompt, like in this image . You can see the prompt is light green, and the other text is white. Actually Guake is showing like this one , with no differences between prompt and the rest of the text. My intention is to differ from the command prompt and the other text showed in Guake. Is there a way or a theme to make this difference?Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180624/" ] }
297,149
I do #!/usr/bin/env bash# http://askubuntu.com/q/799834/25388 # http://stackoverflow.com/a/69808/54964set -eSWAP_FILE="/media/masi/SamiSwapVirtual/.swap_file_20.7.2016"SIZE_MB=16000dd if="/dev/zero" of=${SWAP_FILE} bs="1M" count=${SIZE_MB}mkswap ${SWAP_FILE}chmod 0600 ${SWAP_FILE}sudo swapon -v ${SWAP_FILE} Logs where I am unsure about the part insecure file owner 1000, 0 (root) suggested 16000+0 records in16000+0 records out16777216000 bytes (17 GB, 16 GiB) copied, 121.447 s, 138 MB/sSetting up swapspace version 1, size = 15.6 GiB (16777211904 bytes)no label, UUID=d3358f33-52d4-4029-8070-213ddf7446b7[sudo] password for masi: swapon /media/masi/SamiSwapVirtual/.swap_file_20.7.2016swapon: /media/masi/SamiSwapVirtual/.swap_file_20.7.2016: insecure file owner 1000, 0 (root) suggested.swapon: /media/masi/SamiSwapVirtual/.swap_file_20.7.2016: found swap signature: version 1d, page-size 4, same byte orderswapon: /media/masi/SamiSwapVirtual/.swap_file_20.7.2016: pagesize=4096, swapsize=16777216000, devsize=16777216000 System: Linux Ubuntu 16.04 64 bit Linux kernel: 4.6 Hardware: Macbook Air 2013-mid Related threads: Accessibility of swap files
insecure file owner 1000, 0 (root) suggested. User 1000 has read and write permission over the swap file. That means that user 1000 can see or modify anything that goes into swap. Unless you trust user 1000 and also trust that his account will never be hacked, this is dangerous. The solution is to make the file owned by root. If root is ever hacked, your system is lost anyway and root's ownership of this file adds no additional danger. Thus, make sure the file is owned by root. On Debian, Ubuntu, Fedora, and possibly others (Hat tip: Random832), the group disk should be used: sudo chown root:disk /media/masi/SamiSwapVirtual/.swapFile On other systems, the group to use may be root : sudo chown root:root /media/masi/SamiSwapVirtual/.swapFile Swap is an OS service: As Random832 points out in the comments, the swap file is owned by root in just the way that a disk, say /dev/sda , is owned by root. The operating system makes the hard disk, at least the directories that a user owns, available to that user. In the same way, the operating system allows programs run by a normal user to take advantage of swap when RAM is in short supply. Root should own the swap file because it is the OS's responsibility to manage swap.
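For reference, a minimal sketch of creating the swap file with safe ownership and permissions from the start, run as root (or each command via sudo) so the file is never owned by user 1000; the path and size are the ones from the question, and the root:disk group choice follows the Debian/Ubuntu/Fedora convention mentioned above:
dd if=/dev/zero of=/media/masi/SamiSwapVirtual/.swapFile bs=1M count=16000
chown root:disk /media/masi/SamiSwapVirtual/.swapFile   # or root:root on other systems
chmod 0600 /media/masi/SamiSwapVirtual/.swapFile
mkswap /media/masi/SamiSwapVirtual/.swapFile
swapon -v /media/masi/SamiSwapVirtual/.swapFile
Created this way, swapon should no longer print the "insecure file owner" warning.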
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
297,180
I want to create a shell script that echos something if 3 directories don't exist. Here is the code I wrote:test.sh: if [ ! -d "/home/unix/POSTagger2" ] || [! -d "/home/unix/POSTagger2/stanford-parser-full-2015-12-09"] || [! -d "/home/unix/POSTagger2/stanford-corenlp-full-2015-12-09"]; thenecho "Nope"fi When I run it, I get this error: ./test.sh: line 1: [!: command not found What's wrong with my syntax?
You are missing some spaces, for example [! must be [ ! and "] must be " ]. Look at the corrected code: #!/bin/bashif [ ! -d "/home/unix/POSTagger2" ] || [ ! -d "/home/unix/POSTagger2/stanford-parser-full-2015-12-09" ] || [ ! -d "/home/unix/POSTagger2/stanford-corenlp-full-2015-12-09" ] then echo "Nope"fi Another way to write your code: #!/bin/bashfor dir in "/home/unix/POSTagger2" "/home/unix/POSTagger2/stanford-parser-full-2015-12-09" "/home/unix/POSTagger2/stanford-corenlp-full-2015-12-09"; do if [ ! -d "$dir" ]; then echo nope ; break; fi done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32474/" ] }
297,288
Assume doc.pdf is the target. The following rule triggers a regeneration of doc.pdf whenever doc.refer is updated, but is also happy when doc.refer does not exist at all: doc.pdf: doc.mom $(wildcard doc.refer) pdfmom -e -k < $< > $@ However the following pattern rule does not accomplish the same (the PDF is generated correctly, but a rebuild is not triggered when changing doc.refer ): %.pdf: %.mom Makefile $(wildcard %.refer) pdfmom -e -k < $< > $@ I suspect that the wildcard command is executed before the % character is expanded. How can I work around this?
The GNU Make function wildcard takes a shell globbing pattern and expands it to the files matching that pattern. The pattern %.refer does not contain any shell globbing patterns. You probably want something like %.pdf: %.mom %.refer pdfmom -e -k < $< > $@%.pdf: %.mom pdfmom -e -k < $< > $@ The first target will be invoked for making PDF files when there's a .mom and a .refer file available for the base name of the document. The second target will be invoked when there isn't a .refer file available. The order of these targets is important.
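A quick way to see which of the two pattern rules applies is GNU make's dry-run mode; the file names below match the doc.mom/doc.refer example from the question and assume both rules are in the Makefile in the order shown above:
touch doc.mom
make -n doc.pdf      # only doc.mom exists, so the %.mom-only rule is used
touch doc.refer
make -n doc.pdf      # now the %.mom %.refer rule matches, and a later
                     # change to doc.refer will trigger a rebuild of doc.pdf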
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297288", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52728/" ] }
297,346
I did tried sed and awk , but its not working as the character involves / which is already there in command as delimiter. Please let me know how can I achieve this. Below is a sample Example. We want to remove the commented sections, i.e /*.....*/ /*This is to print the outputdata*/proc print data=sashelp.cars;run;/*Creating dataset*/data abc;set xyz;run;
I think I found an easy solution! cpp -P yourcommentedfile.txt SOME UPDATES: Quote from the user ilkachu (original text from the user comments): I played a bit with the options for gcc: -fpreprocessed will disable most directives and macro expansions (except #define and #undef apparently). Adding -dD will leave defines in too; and -std=c89 can be used to ignore new style // comments. Even with them, cpp replaces comments with spaces (instead of removing them), and collapses spaces and empty lines. But I think it is still a reasonable and easy solution for most cases; if you disable the macro expansion and other things I think you will get good results... - and yes, you can combine that with a shell script to get better results... and much more...
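Putting the quoted options together, something along these lines may give cleaner results than plain cpp -P; the input file name is only an example, and the exact behaviour can vary between GCC versions:
cpp -P -std=c89 -fpreprocessed -dD yourcommentedfile.txt > stripped.txt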
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180786/" ] }
297,366
I try get standard error communicate from mkdir -p $FINAL_BACKUP_DIR and send in message by logger . It make log more complete for example the user will be know he do not have permission or $FINAL_BACKUP_DIR does not exists. if ! mkdir -p $FINAL_BACKUP_DIR; then logger -t $LOGGER_TAG "Cannot create backup directory in $FINAL_BACKUP_DIR. Standard error communicate. Backup canceling." 1>&2 exit 1;fi; I try something like that: if ! mkdir -p $FINAL_BACKUP_DIR 2>> ${test1}; then logger -t $LOGGER_TAG "Cannot create backup directory in $FINAL_BACKUP_DIR. Backup canceling. $test1" 1>&2 exit 1;fi; But this solution does not working for me in two ways. When I create test1 earlier test1=0 or without that. I work with Ubuntu 14.04.
/dev/null is the standard device to "throw things away". So some_command 2> /dev/null will send the errors from some_command to /dev/null - ie throw away the errors. Thus: if ! mkdir -p $FINAL_BACKUP_DIR 2> /dev/nullthen logger -t $LOGGER_TAG "Cannot create backup directory in $FINAL_BACKUP_DIR. Backup canceling." exit 1fi Note that you also didn't need all those extra ; characters :-) EDIT: You can also direct error to output and capture the result in a variable and test if that variable is empty. In this way you can report on the reason to the user result=$(mkdir -p $FINAL_BACKUP_DIR 2>&1)if [ -n "$result" ]then logger -t $LOGGER_TAG "Cannot create backup directory in $FINAL_BACKUP_DIR. Backup canceling: $result" exit 1fi
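A variant of the same idea that keys off mkdir's exit status instead of whether anything was printed (a sketch that reuses the variables from the question):
if ! result=$(mkdir -p "$FINAL_BACKUP_DIR" 2>&1)
then
    logger -t "$LOGGER_TAG" "Cannot create backup directory in $FINAL_BACKUP_DIR. Backup canceling: $result"
    exit 1
fi
This way a command that fails without printing anything is still caught, and a harmless warning on stderr does not abort the backup.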
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180796/" ] }
297,374
I have a deployment script. It must add something to a user crontab (trigger a script that cleans logs every XXX days);however, this must be done only during the first deployment,or when it needs to be updated. (I can run xxx.py deploy env or xxx.py update env .) so I need to do this: Check if my cronJob already exists Submit my cronJob if it does not already exist or update my cronJob if one of the parameter(s) of the command is different I don't see how to add/check/remove something to the crontab without using crontab -e or "manually" editing the crontab file(download it, rewrite it, re-upload it). PS: this is a user specific cronjob;"webadmin" is going to do it and he should not use sudo to do it.
My best idea so far To check first if the content matches what should be in there and only update if it doesn't: if [[ $(crontab -l | egrep -v "^(#|$)" | grep -q 'some_command'; echo $?) == 1 ]]then set -f echo $(crontab -l ; echo '* 1 * * * some_command') | crontab - set +ffi but this gets complicated enough to build a separate script around that cron task. Other ideas You could send the string via stdin to crontab (beware, this clears out any previous crontab entries): echo "* 1 * * * some_command" | crontab - This should even work right through ssh: echo "* 1 * * * some_command" | ssh user@host "crontab -" if you want to append to the file you could use this: # on the machine itselfecho "$(echo '* 1 * * * some_command' ; crontab -l 2>&1)" | crontab -# via sshecho "$(echo '* 1 * * * some_command' ; ssh user@host crontab -l 2>&1)" | ssh user@host "crontab -"
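Another sketch for making the deployment idempotent: drop any old line mentioning the command and append the current definition, so repeated runs neither duplicate the entry nor clobber unrelated jobs (the schedule and command are only examples):
job='* 1 * * * some_command'
( crontab -l 2>/dev/null | grep -v 'some_command' ; echo "$job" ) | crontab -
grep -v removes any previous line containing some_command (including one with an outdated schedule or parameters), and crontab - reinstalls everything else plus the current entry.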
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160341/" ] }
297,386
I am trying to install some packages and yum fails every time stating that the package should be installed by load-transaction command. I want yum to fetch the packages from internet and install it as exiting in this way is lame? Please find the yum command and output: Command: Step 4 : RUN yum install httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl-devel wget unzip ---> Running in b0cdbf62be4e Output: Total download size: 82 MInstalled size: 186 MIs this ok [y/d/N]: Exiting on user commandYour transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx.2016-07-21.12-39.KWu7ih.yumtxThe command '/bin/sh -c yum install httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl-devel wget unzip' returned a non-zero code: 1 Now the funny thing is it is happening in docker build process so I thought to delete the interim image but its happening even if I delete the image. I looked at yum help but there are no flags which would override/force installation regardless if it is saved. I tried even yum clean all before yum -y install but its of no use.
Issue: even though I was using yum with "-y", it was at the end of the command; I brought it forward and it's all good now. from: yum install httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl-devel wget unzip -y to: yum -y install httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl-devel wget unzip People may argue that it doesn't make a difference, and technically it should not. But the fact of the matter is that when Docker executes the Dockerfile line by line, it doesn't see the -y on the next line while it is executing the previous yum command, and this was the problem. Thought I'd share it with the community, as a simple pointer on the Internet can make a big difference :) Cheers
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171615/" ] }
297,475
I accidentally removed the wrong image file in my /var/lib/libvirt/images directory. I'm not sure how to recreate one or to undo my removal. Any hints?
Since you haven't shut down the VM, then the process using that image file still has the file open and it hasn't actually been deleted yet. As long as the process keeps running, you should be able to recover it. For this answer I have a kvm image called testdelete . The VM is up, but I have deleted the file. First you need to find the process using the file. The easiest way is with lsof . # lsof | grep /var/lib/libvirt/images/testdelete.imgqemu-kvm 29627 qemu 9u REG 9,0 2147483648 399357 /var/lib/libvirt/images/testdelete.img (deleted) This tells me it's process 29627 and file descriptor 9. Let's look at this # cd /proc/29627/fd# ls -l 9lrwx------ 1 qemu qemu 64 Jul 21 18:13 9 -> /var/lib/libvirt/images/testdelete.img (deleted) OK, good. That matches. Now let's recover it! You need a disk with enough free space to hold the whole image Ideally your VM should be as quiescent as possible; because we're copying the raw disk image we do run a risk of corruption if some processes are writing to the disk. We can try to minimise this risk by sending a STOP signal. # kill -STOP 29627 This effectively "freezes" the process. The backup we're now taking would be the equivalent of what happens after a hard crash; on reboot the OS will fsck (or equivalent) to recover. Now we can copy the data # dd if=9 of=/home/sweh/recovered.img bs=1M2048+0 records in2048+0 records out2147483648 bytes (2.1 GB) copied, 5.74931 s, 374 MB/s That looks perfect; the disk image was 2Gb and that's what it copied. Does this image look good? # cd /home/sweh# sfdisk -l recovered.img Disk recovered.img: cannot get geometryDisk recovered.img: 261 cylinders, 255 heads, 63 sectors/trackUnits = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0 Device Boot Start End #cyls #blocks Id Systemrecovered.img1 0+ 65- 66- 524288 82 Linux swap / Solarisrecovered.img2 * 65+ 261- 196- 1571840 83 Linuxrecovered.img3 0 - 0 0 0 Emptyrecovered.img4 0 - 0 0 0 Empty Yup, that looks like my partition table. At this point you can do other tests to verify the image looks good. And that's it! You have recovered your image file. NOTE: In this example I'm going to kill the existing qemu process. That step is irrevocable because it causes the disk to be freed up. If you want to do some "parallel run" testing then you can create a new image file and virsh define a new VM to use that. Let's get the VM restarted with this. Destroy the old VM, copy the datafile into place and restart it. # virsh destroy testdelete# cp -v recovered.img /var/lib/libvirt/images/testdelete.img`recovered.img' -> `/var/lib/libvirt/images/testdelete.img'# virsh start testdeleteDomain testdelete started Can we connect to the console? # virsh console testdeleteConnected to domain testdeleteEscape character is ^]CentOS release 6.8 (Final)Kernel 2.6.32-642.3.1.el6.x86_64 on an x86_64dhcp226.spuddy.org login: Recovery complete :-)
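If the dd step feels fragile, the same data can also be pulled straight through the /proc symlink, and the VM can be resumed afterwards instead of being destroyed (the PID and fd number are the ones found with lsof above):
kill -STOP 29627
cp /proc/29627/fd/9 /home/sweh/recovered.img
kill -CONT 29627
This only helps as long as the process is still running; once it exits, the deleted inode is really gone.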
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142792/" ] }
297,502
Is there some way clear the terminal but instead of leaving the prompt at the top of the screen leaves it in the middle? It looks like clear basically ignores all command-line parameters. I thought there would be some way to do this with tput but can't find one.
You could use tput to move the cursor to a given line in the screen, e.g., tput cup 11 0 to move it to the twelfth line (values count from zero). Along the same lines, you could use tput to clear from that position to the end of the screen, using the ed capability. Combining, tput cup 11 0 && tput ed might be what was wanted. If you want to go to the halfway mark on the screen, the first number returned by stty size is (on most systems) the number of rows of the screen. Adding that to the command: tput cup $(stty size|awk '{print int($1/2);}') 0 && tput ed The clear program differs from tput ed : it moves the cursor to the home position (upper left) and clears from that point to the end of the screen. Caveat: on some platforms tput ed may not work due to problems fixed long ago. In those cases, upgrading your curses/ncurses configuration will fix the problem.
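Wrapped up as a small shell function (the name is arbitrary) so it can be used like clear:
clearmid() {
    tput cup $(( $(tput lines) / 2 )) 0 && tput ed
}
tput lines reports the number of rows, so this keeps working after the terminal has been resized, without parsing stty output.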
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92513/" ] }
297,520
I'm stuck in including regular expressions with a sed command. Q: I want to replace all occurences of two spaces after the end of a sentence with just once space. Here is what I did: sed 's/^ $/^$/' file And it didn't substituted two spaces with a one space after the sentence ends. Output I get: This is the output. Hello Hello Output I want: This is the output. Hello Hello
sed 's/\.  */. /g' < file replaces a dot followed by two or more spaces with a dot followed by a single space.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179989/" ] }
297,569
I have a windows 10 host OS where I have installed vmware workstation 12 player. I have an Xubuntu as a guest OS (virtual machine). The complication is: the text is too small in guest OS and almost unreadable. The steps that I have already taken to rectify the problem are given below: I have already installed vmware tools (which is confirmed by hovering on Manage -> Reinstall vmware tools). I have tried to manually set the resolution in the vmware before starting the virtual machine (by manually changing it to 640 by 480 and then to other settings). In vmware workstation 12 player, i cannot see the stretch the guest OS but I have tried to stretch the guest desktop in the guest OS. Note: I am using DELL XPS 15 with 4k UHD. Any help in this regard is highly appreciable. If I am unable to explain anything please let me know, I can provide more details.
It worked for me too on an HP Spectre 4k laptop (Windows 10): Right-click the VMware Player icon on the desktop shortcut and click Properties. Move to the Compatibility tab. Check the option "Override high DPI scaling behavior" and select "System (Enhanced)" for "Scaling performed by:". Apply and restart the VM. It should work. Got a result after 5 hours spent on the web.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180963/" ] }
297,617
$ unset foo$ unset bar$ echo $foo$ echo $bar$ echo $PATH/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games$ foo=a$ bar=b$ export bar$ echo $fooa$ echo $barb$ PATH=$ echo $PATH$ /bin/bashbash: lesspipe: No such file or directorybash: dircolors: No such file or directorybash: ls: No such file or directory$ echo $foo$ echo $barb$ echo $PATH$ As we can see, changing $PATH affects the subshell, whereas another variable needs to be export ed. Why?
There are really two types of variable: Environment variables Shell variables To make things more complicated, they both look the same, and a shell variable can be converted to an environment variable with the export command. The env command will show the current set of environment variables. $ myvar=100$ env | grep myvar$ export myvar$ env | grep myvarmyvar=100 Variables can also be temporarily exported for the life of a command. $ env | grep anothervar$ anothervar=100 env | grep anothervaranothervar=100$ env | grep anothervar$ When the shell starts up it inherits a number of environment variables (which may be zero). Startup scripts (eg .bash_profile , .bashrc , files in the /etc directory) can also set and export variables. Finally the shell, itself, may set a default number of environment variables if the environment is empty. e.g. $ PATH=foo /bin/bash -c 'echo $PATH'foo$ PATH= /bin/bash -c 'echo $PATH'$ unset PATH$ /bin/bash -c 'echo $PATH'/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:.
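A compact way to see the difference in action (plain bash, nothing else assumed): only the exported name survives into a child process, which is exactly why an already-exported PATH behaves differently from a fresh shell variable:
foo=a          # shell variable only
export bar=b   # environment variable
bash -c 'echo "foo=[$foo] bar=[$bar]"'    # prints foo=[] bar=[b]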
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166118/" ] }
297,618
I have a file name demo.txt with content as follows: value -= [ "02|05|06|abc",]/* Some other content other than value variable */value -= []value -= [ "0698|06|07|abc",] I have lots of value variables in this demo.txt file. I want to print only unique values like below after reading demo.txt file 02| 05| 06| 0698| 07| abc I tried as follows: awk '$0 == "value -= [" {i=1;next};i && i++ <= 1' which gives me "02|05|06|abc",]"0698|06|07|abc", But, I do not want "]" and also do not want repeated content. In this case its "06" and "abc" Can some one suggest ?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181023/" ] }
297,632
In Linux's mount(2) man page, I noticed the following excerpt: Many broken applications don't use fsync() when replacing existing files via patterns such as fd = open("foo.new")/write(fd,...)/close(fd)/ rename("foo.new", "foo") or worse yet fd = open("foo", O_TRUNC)/write(fd,...)/close(fd). If auto_da_alloc is enabled, ext4 will detect the replace-via-rename and replace-via-truncate patterns and force that any delayed allocation blocks are allocated such that at the next journal commit, in the default data=ordered mode, the data blocks of the new file are forced to disk before the rename() operation is committed. This provides roughly the same level of guarantees as ext3, and avoids the "zero-length" problem that can happen when a system crashes before the delayed allocation blocks are forced to disk. In what sense is this code "broken"? Are they saying that this code is illegal or not standard-conformant (POSIX, etc)? Obviously fsync() might be a good idea for people who are worried about what would happen if the system crashed. But assuming a system that doesn't crash, don't both versions of the sample code, without fsync() , do exactly the right thing?
rename is expected to be atomic: it either completes fully or not at all. Renaming A to take the place of B is supposed to leave you with either both A and B intact (it didn't happen at all); or with only A's contents under the name B (it completed fully). As long as the system doesn't crash, that'll happen regardless of fsync (etc.) calls. If the system does crash, however, it can turn out that the rename itself hit disk (and thus completes). Remember though that names != files. Files/inodes can have multiple names. Rename is changing the names, not the underlying file/data. So you can have the state that your program wrote A, renamed it to replace B, and then the power went out. Turns out the filesystem wrote the rename to disk, but not the actual data in A. It's not required to without fsync . You thus wind up with a zero-length B, or a zero-filled B. The reason an app does a write-temp-file + rename instead of just overwriting the file is because it wants crash safety. The user won't be too mad if a half-written temporary copy of his important document is left lying around, next to the unmodified good copy. But if no good copies are left, the user will not be pleased.
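As a rough shell illustration of the "non-broken" pattern the man page is hinting at (write the new data, force it to disk, then rename), assuming a reasonably recent GNU coreutils where sync accepts file operands:
tmp=$(mktemp B.XXXXXX) || exit 1
printf '%s\n' "new contents of B" > "$tmp"
sync -- "$tmp"      # stands in for fsync(): the data blocks reach disk first
mv -- "$tmp" B      # the rename only ever exposes a fully written file
A C program would call fsync(fd) directly before rename(); the auto_da_alloc behaviour described above exists precisely to paper over programs that skip that step.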
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56646/" ] }
297,686
I want to use sed to replace anything in a stringbetween the first AB and the first occurrence of AC (inclusive)with XXX . For example , I have this string (this string is for a test only): ssABteAstACABnnACss and I would like output similar to this: ssXXXABnnACss . I did this with perl : $ echo 'ssABteAstACABnnACss' | perl -pe 's/AB.*?AC/XXX/'ssXXXABnnACss but I want to implement it with sed .The following (using the Perl-compatible regex) does not work: $ echo 'ssABteAstACABnnACss' | sed -re 's/AB.*?AC/XXX/'ssXXXss
Sed regexes match the longest match. Sed has no equivalent of non-greedy. What we want to do is match AB , followed by any amount of anything other than AC , followed by AC Unfortunately, sed can’t do #2 —at least not for a multi-character regular expression.  Of course,for a single-character regular expression such as @ (or even [123] ),we can do [^@]* or [^123]* . And so we can work around sed’s limitationsby changing all occurrences of AC to @ and then searching for AB , followed by any number of anything other than @ , followed by @ like this: sed 's/AC/@/g; s/AB[^@]*@/XXX/; s/@/AC/g' The last part changes unmatched instances of @ back to AC . But this is a reckless approachbecause the input could already contain @ characters.So, by matching them, we could get false positives.  However,since no shell variable will ever have a NUL ( \x00 ) character in it, NUL is likely a good character to use in the above work-around instead of @ : $ echo 'ssABteAstACABnnACss' | sed 's/AC/\x00/g; s/AB[^\x00]*\x00/XXX/; s/\x00/AC/g'ssXXXABnnACss The use of NUL requires GNU sed. (To make sure that GNU features are enabled, the user must not have set the shell variable POSIXLY_CORRECT.) If you are using sed with GNU's -z flag to handle NUL-separated input, such as the output of find ... -print0 , then NUL will not be in the pattern space and NUL is a good choice for the substitution here. Although NUL cannot be in a bash variable it is possible to include it in a printf command. If your input string can contain any character at all, including NUL, then see Stéphane Chazelas' answer which adds a clever escaping method.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66727/" ] }
297,691
Appending an extra line is simple as I can use echo "line" >> file but what if I want to add a string right after the last char in the file without starting a new line? What are some good ways to do this?
A well-formed unix text file must have a trailing newline at the end of the file. To achieve what you want, the string must be placed before that existing trailing newline. Consider this test file: $ cat File123 Now, let's add words to the last line before the last newline character: $ sed '$s/$/new words/' File123new words Or, if you want to edit the file in place, use the -i option: sed -i.bak '$s/$/new words/' File How it works: $ The first $ tells sed to only perform the command which follows on the last line of the file. s/$/new words/ For that last line in the file, this places new words at the end of the line but before the final newline character. In a substitute command, $ means end of the line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181089/" ] }
297,729
This is macOS specific, but seems too unixy to go in the Ask Different community. In Terminal, I can pwd , copy the result, and type open and paste the result and the folder will open in the Finder, but pwd | open prints the help documentation for open . Why doesn't piping work but pasting does?
I don't have a Mac so I can't test it, but the solution should be something like: open "`pwd`" Not all programs take their input from stdin which would be necessary for the pipe to work.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74293/" ] }
297,758
In *nix world, is there a way for shell script to have information about which program has executed it? Example: /path/to/script1 /path/to/script_xyz in this imaginary scenario, script_xyz would have path information ( /path/to/script1 ) or process PID of entity that have executed it. Note: I'm curious about different solutions and approaches, I don't expect exactly this to be actually possible
There's often confusion between process forking and execution. When you do at the prompt of a bash shell. $ sh -c 'exec env ps' The process P1 issuing that $ prompt is currently running bash code. That bash code forks a new process P2 that executes /bin/sh which then executes /usr/bin/env , which then executes /bin/ps . So P2 has in turn executed code of bash , sh , env and ps . ps (or any other command like a script we would use instead here) has no way to know that it has been executed by the env command. All it can do is find out what its parent process id is, which in this case would be either P1 or 1 if P1 has died in the interval or on Linux another process that has been designated as a subreaper instead of 1 . It can then query the system for what command that process is currently running (like with readlink /proc/<pid>/exe on Linux) or what arguments where passed to the last command it executed (like with ps -o args= -p <pid> ). If you want your script to know what invoked it, a reliable way would be to have the invoker tell it. That could be done for instance via an environment variable. For instance script1 could be written as: #! /bin/sh -INVOKER=$0 script2 & And script2 : #! /bin/sh -printf '%s\n' "I was invoked by $INVOKER"# and in this case, we'll probably find the parent process is 1# (if not now, at least one second later) as script1 exited just after# invoking script2:ps -fp "$$"sleep 1ps -fp "$$"exit $INVOKER will ( generally ) contain a path to script1 . In some cases, it may be a relative path though, and the path will be relative to the current working directory at the time script1 started. So if script1 changes the current working directory before calling script2 , script2 will get wrong information as to what called it. So it may be preferable to make sure $INVOKER contains an absolute path (preferably keeping the basename) like by writing script1 as: #! /bin/sh -mypath=$( mydir=$(dirname -- "$0") && cd -P -- "$mydir" && pwd -P) && mypath=$mypath/$(basename -- "$0") || mypath=$0... some code possibly changing the current working directoryINVOKER=$mypath script2 In POSIX shells, $PPID will contain the pid of the parent of the process that executed the shell at the time of that shell initialisation. After that, as seen above, the parent process may change if the process of id $PPID dies. zsh in the zsh/system module, can query the current parent pid of the current (sub-)shell with $sysparams[ppid] . In POSIX shells, you can get the current ppid of the process that executed the interpreter (assuming it's still running) with ps -o ppid= -p "$$" . With bash , you can get the ppid of the current (sub-)shell with ps -o ppid= -p "$BASHPID" .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
297,786
If user works on an application that is dynamically linked, and system is being upgraded, is there any protection mechanism that prevents application corruption? Or is it up to application?
As mentioned by @Kusalananda, usually upgrades are done by removing the old file, and creating a new one with the same name. This will actually create a new file with a new inode, leaving the system free to use the old one as long as it is open. As a simplified example, stuff like rm /bin/catcp /new/version/of/cat /bin/cat will create a logically new file, and works even though cat might be running. Same goes for libraries. (The above is an example, not a robust way of upgrading a file in the real world.) Someone could try to change the binary in-place instead of creating a new one with the same name. In this case, at least Linux actually prevents making changes to an executable that is in use: window 1 # ./catwindow 2 # echo foobar > cat-bash: cat: Text file busy However, this doesn't seem to work with dynamically loaded libraries... I made a copy of libc.so.6 for testing, and filled it with zeroes while it was in use: window 1 /tmp/lib# LD_LIBRARY_PATH=/tmp/lib ldd ./cat linux-vdso.so.1 (0x00007ffcfaf30000) libc.so.6 => /tmp/lib/libc.so.6 (0x00007f1145e67000) /lib64/ld-linux-x86-64.so.2 (0x00007f1146212000)window 1 /tmp/lib# LD_LIBRARY_PATH=/tmp/lib ./catfoofooSegmentation fault (Meanwhile in another window, after the foo , before the segfault) window 2 /tmp/lib# dd if=/dev/zero of=libc.so.6 bs=1024 count=2000 There's really nothing the program itself could do against this, since I effectively edited its code online. (This would likely be system dependant, I tested on Debian Jessie 8.5, Linux 3.16.7-ckt25-2+deb8u3. IIRC Windows systems in particular are even more aggressive about preventing in-use files from being modified.) So I guess the answer is that upgrades are usually done in an way that avoids any problems, and this is helped by the filesystem internals. But (on Linux) there don't seem to be any safeguards against actually corrupting dynamic libraries.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78297/" ] }
297,792
After some very quick research, it seems Bash is a Turing-complete language . I wonder, why is Bash used almost exclusively to write relatively simple scripts? Since a Bash shell comes with Linux, you can run shell scripts without any external interpreter or compiler, as required for other popular computer languages. This is a huge advantage, that could compensate for the mediocrity of the language itself in some cases. So, is there a limit to how complex such programs can get? Is pure Bash used to write complex programs? Is is possible to write, say, a file compressor/decompressor in pure Bash? A compiler? A simple video game? Is it so sparsely used just because there are only very limited debugging tools?
it seems Bash is a Turing-complete language The concept of Turing completeness is entirely separate from many other concepts useful in a language for programming in the large : usability, expressiveness, understandabilty, speed, etc. If Turing-completeness were all we required, we wouldn't have any programming languages at all , not even assembly language . Computer programmers would all just write in machine code , since our CPUs are also Turing-complete. why is Bash used almost exclusively to write relatively simple scripts? Large, complex shell scripts — such as the configure scripts output by GNU Autoconf — are atypical for many reasons: Until relatively recently, you couldn't count on having a POSIX-compatible shell everywhere . Many systems, particularly older ones, do technically have a POSIX-compatible shell somewhere on the system, but it may not be in a predictable location like /bin/sh . If you're writing a shell script and it has to run on many different systems, how then do you write the shebang line ? One option is to go ahead and use /bin/sh , but choose to restrict yourself to the pre-POSIX Bourne shell dialect in case it gets run on such a system. Pre-POSIX Bourne shells don't even have built-in arithmetic; you have to call out to expr or bc to get that done. Even with a POSIX shell, you're missing out on associative arrays and other features we've expected to find in Unix scripting languages since Perl first became popular in the early 1990s . That fact of history means there is a decades-long tradition of ignoring many of the powerful features in modern Bourne family shell script interpreters purely because you can't count on having them everywhere. This still continues to this day, in fact: Bash didn't get associative arrays until version 4 , but you might be surprised how many systems still in use are based on Bash 3. Apple still ships Bash 3 with macOS in 2017 — apparently for licensing reasons — and Unix/Linux servers often run all but untouched in production for a very long time, so you might have a stable old system still running Bash 3, such as a CentOS 5 box. If you have such systems in your environment, you can't use associative arrays in shell scripts that have to run on them. If your answer to that problem is that you only write shell scripts for "modern" systems, you then have to cope with the fact that the last common reference point for most Unix shells is the POSIX shell standard , which is largely unchanged since it was introduced in 1989. There are many different shells based on that standard, but they've all diverged to varying degrees from that standard. To take associative arrays again, bash , zsh , and ksh93 all have that feature, but there are multiple implementation incompatibilities. Your choice, then, is to only use Bash, or only use Zsh, or only use ksh93 . If your answer to that problem is, "so just install Bash 4," or ksh93 , or whatever, then why not "just" install Perl or Python or Ruby instead? That is unacceptable in many cases; defaults matter. None of the Bourne family shell scripting languages support modules . The closest you can come to a module system in a shell script is the . command — a.k.a. source in more modern Bourne shell variants — which fails on multiple levels relative to a proper module system, the most basic of which is namespacing . Regardless of programming language, human understanding starts to flag when any single file in a larger overall program exceeds a few thousand lines. 
The very reason we structure large programs into many files is so that we can abstract their contents to a sentence or two at most. File A is the command line parser, file B is the network I/O pump, file C is the shim between library Z and the rest of the program, etc. When your only method for assembling many files into a single program is textual inclusion, you put a limit on how large your programs can reasonably grow. For comparison, it would be like if the C programming language had no linker, only #include statements. Such a C-lite dialect would not need keywords such as extern or static . Those features exist to allow modularity. POSIX doesn't define a way to scope variables to a single shell script function, much less to a file. This effectively makes all variables global , which again hurts modularity and composability. There are solutions to this in post-POSIX shells — certainly in bash , ksh93 and zsh at least — but that just brings you back to point 1 above. You can see the effect of this in style guides on GNU Autoconf macro writing, where they recommend that you prefix variable names with the name of the macro itself, leading to very long variable names purely in order to reduce the chance of collision to acceptably near zero. Even C is better on this score, by a mile. Not only are most C programs written primarily with function-local variables, C also supports block scoping, allowing multiple blocks within a single function to reuse variable names without cross-contamination. Shell programming languages have no standard library. It is possible to argue that a shell scripting language's standard library is the contents of PATH , but that just says that to get anything of consequence done, a shell script has to call out to another whole program, probably one written in a more powerful language to begin with. Neither is there a widely-used archive of shell utility libraries as with Perl's CPAN . Without a large available library of third-party utility code, a programmer must write more code by hand, so she is less productive. Even ignoring the fact that most shell scripts rely on external programs typically written in C to get anything useful done, there's the overhead of all those pipe() → fork() → exec() call chains. That pattern is fairly efficient on Unix, compared to IPC and process launching on other OSes, but here it's effectively replacing what you'd do with a subroutine call in another scripting language, which is far more efficient still. That puts a serious cap on the upper limit of shell script execution speed. Shell scripts have little built-in ability to increase their performance via parallel execution. Bourne shells have & , wait and pipelines for this, but that's largely only useful for composing multiple programs, not for achieving CPU or I/O parallelism. You're not likely to be able to peg the cores or saturate a RAID array solely with shell scripting, and if you do, you could probably achieve much higher performance in other languages. Pipelines in particular are weak ways to increase performance via parallel execution. It only lets two programs run in parallel, and one of the two will likely be blocked on I/O to or from the other at any given point in time. There are latter-day ways around this, such as xargs -P and GNU parallel , but this just devolves to point 4 above. 
With effectively no built-in ability to take full advantage of multi-processor systems, shell scripts are always going to be slower than a well-written program in a language that can use all the processors in the system. To take that GNU Autoconf configure script example again, doubling the number of cores in the system will do little to improve the speed at which it runs. Shell scripting languages don't have pointers or references . This prevents you from doing a bunch of things easily done in other programming languages. For one thing, the inability to refer indirectly to another data structure in the program's memory means you're limited to the built-in data structures . Your shell may have associative arrays , but how are they implemented? There are several possibilities, each with different tradeoffs: red-black trees , AVL trees , and hash tables are the most common, but there are others. If you need a different set of tradeoffs, you're stuck, because without references, you don't have a way to hand-roll many types of advanced data structures. You're stuck with what you were given. Or, it may be the case that you need a data structure that doesn't even have an adequate alternative built into your shell script interpreter, such as a directed acyclic graph , which you might need in order to model a dependency graph . I've been programming for decades, and the only way I can think of to do that in a shell script would be to abuse the file system , using symlinks as faux references. That's the sort of solution you get when you rely merely on Turing-completeness, which tells you nothing about whether the solution is elegant, fast, or easy to understand. Advanced data structures are merely one use for pointers and references. There are piles of other applications for them , which simply can't be done easily in a Bourne family shell scripting language. I could go on and on, but I think you're getting the point here. Simply put, there are many more powerful programming languages for Unix type systems. This is a huge advantage, that could compensate for the mediocrity of the language itself in some cases. Sure, and that's precisely why GNU Autoconf uses a purposely-restricted subset of the Bourne family of shell script languages for its configure script outputs: so that its configure scripts will run pretty much everywhere. You will probably not find a larger group of believers in the utility of writing in a highly-portable Bourne shell dialect than the developers of GNU Autoconf, yet their own creation is written primarily in Perl, plus some m4 , and only a little bit of shell script; only Autoconf's output is a pure Bourne shell script. If that doesn't beg the question of how useful the "Bourne everywhere" concept is, I don't know what will. So, is there a limit to how complex such programs can get? Technically speaking, no, as your Turing-completeness observation suggests. But that is not the same thing as saying that arbitrarily-large shell scripts are pleasant to write, easy to debug, or fast to execute. Is is possible to write, say, a file compressor/decompressor in pure bash? "Pure" Bash, without any calls out to things in the PATH ? The compressor is probably doable using echo and hex escape sequences, but it would be fairly painful to do. The decompressor may be impossible to write that way due to the inability to handle binary data in shell . You'd end up calling out to od and such to translate binary data to text format, shell's native way of handling data. 
Once you start talking about using shell scripting the way it was intended, as glue to drive other programs in the PATH , the doors open up, because now you're limited only to what can be done in other programming languages, which is to say you don't have limits at all. A shell script that gets all of its power by calling out to other programs in the PATH doesn't run as fast as monolithic programs written in more powerful languages, but it does run. And that's the point. If you need a program to run fast, or if it needs to be powerful in its own right rather than borrowing power from others, you don't write it in shell. A simple video game? Here's Tetris in shell . Other such games are available, if you go looking. there are only very limited debugging tools I would put debugging tool support down about 20th place on the list of features necessary to support programming in the large. A whole lot of programmers rely much more heavily on printf() debugging than proper debuggers, regardless of language. In shell, you have echo and set -x , which together are sufficient to debug a great many problems.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180653/" ] }
297,811
I just installed Mint 18 as a virtual machine using VMware 12. I have the problem that I can't install vmware-tools. At first I tried to install open-vm-tools as is recommended by Mint, but it didn't work, so I uninstalled it and then tried to install the default vmware-tools, but it can't be installed.
Forget VM tools, use: sudo apt-get install open-vm-tools open-vm-tools-desktop Then do a full restart and check that the client screen will resize when the host window resizes.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96404/" ] }
297,844
Sometimes I accidentally tell vim to edit a directory instead of a file in it vim directory-name # what I typedvim directory-name/blah.txt # what I meant to type Instead of immediately giving an error message, vim will open the directory in a "file editor" mode that I personally don't like: How can I make vim refuse to open directories instead? One possible avenue could be by changing some settings in my .vimrc but I don't know what they would be. Another possibility would be to create a wrapper shell alias around vim that checks with something along the lines of test -d "$1" to see if I'm trying to open a file. However, I don't know to make this alias robust so that it can tell apart which command-line arguments are flags and which are file names.
You can put the following lines in your vimrc to quit vim if any of its arguments are a directory: for f in argv() if isdirectory(f) echomsg "vimrc: Cowardly refusing to edit directory " . f quit endifendfor Alternatively, if you only want to quit if all arguments are directories, you can try something like this: let ndirs = 0for f in argv() if isdirectory(f) let ndirs += 1 endifendforif ndirs > 0 && ndirs == argc() echomsg "vimrc: Cowardly refusing to edit directories" quitendif
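The wrapper-function route mentioned in the question can also be made reasonably robust; here is a sketch for ~/.bashrc that only inspects non-option arguments and treats everything after -- as a file name (options that take a separate value are not handled):
vim() {
    local a seen_ddash=
    for a in "$@"; do
        if [ -z "$seen_ddash" ] && [ "$a" = "--" ]; then
            seen_ddash=1
            continue
        fi
        case $a in
            -*) [ -n "$seen_ddash" ] || continue ;;
        esac
        if [ -d "$a" ]; then
            printf 'vim: refusing to edit directory: %s\n' "$a" >&2
            return 1
        fi
    done
    command vim "$@"
}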
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
297,847
Basically the entire question is in the headline: Does ssh send the password over the network? Presuming of course that login via username and password is used. I'm asking because if ssh doesn't send the password over the network, a man in the middle can't get the user's password even if the user adds the alleged host to their known_hosts file. Someone said it has to, so I wrote down a counter example in a comment. Since the question of how else it could possibly work now came up repeatedly, I'm copying that comment here. The server can tell the client which hash to use. [The same one which is used to hash the passwords in the server's shadow file.] The client can then calculate the hash ψ which should be in the server's shadow file but let's call the one on the server ψ'. So both the server and the client know ψ. The client can then pick a random salt σ an send (hash(ψ.σ), σ) (where . is the concatenation operator) to the server. The server then hashes ψ'.σ and checks whether the first element of the tuple it received from the client matches that hash. If it does, the client knows the password.
Yes. The password is sent over the encrypted connection, but it's in plaintext to the remote server. The usual way to authenticate is for the server to calculate a hash of the password and to compare it to a value saved on the server. There are several ways of saving hashes, and with current implementations, the client doesn't know what the server uses. ( see e.g. the crypt man page ). (Even if it did, simply sending a hash of the password would make the hash equivalent to the password anyway.) Also, if the server uses PAM, the PAM modules might implement authentication with just about any method, some of which may require the password in plaintext. Authentication using public keys doesn't send the key to the remote host, however. (Some explanation and links about this in a question on security.SE ) There are also password-based authentication algorithms like SRP , that don't require sending the password in plain text to the other end. Though SRP appears to be only implemented for OpenSSH as an external patch.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
297,884
Is there a way to ensure less clears screen on exit? The opposite of less -X . The screen is not being cleared when I exit a man page in iTerm2, however the screen is cleared when using the default mac terminal. Does anyone have suggestions? $LESS is set to less -R
Normally less "clears the screen" (which probably refers to switching back to the normal screen from the alternate screen) when the terminal description has the appropriate escape sequence in the rmcup capability. You would see a difference if you are using different values of TERM in the two programs. The infocmp program can show differences for the corresponding terminal descriptions. less also attempts to clear the remainder of the screen, but that depends upon whether anything was displayed, and if the output was a terminal (in contrast to a pipe). Aside from the terminal description, some terminal emulators make it optional whether to allow the alternate screen. You may have selected that option at some point. (I'm testing with default configuration, which works as intended).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175460/" ] }
297,932
Given xdg-open and an extension, is there a way to get the application which xdg-open is set to for that particular extension? For example given xdg-open and .jpg the result is eog .
AFAIK the choice of action is based on the file's mimetype rather than its extension. At least on Ubuntu, you should be able to use the query action of xdg-mime to show the default application for a specific mimetype $ xdg-mime query default image/jpegeog.desktop You can check the mimetype for a particular file using xdg-mime query filetype e.g. $ xdg-mime query filetype kqDRdnW.jpgimage/jpeg or using the file command e.g. file --mime-type <file> See man xdg-mime for further usage information.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/297932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25919/" ] }
297,944
I want the following files to be renamed removing the sequence numbers. 01 X.mp3 to X.mp3 02 Add Me In.mp3 to Add Me In.mp3 I am trying with the below rename command. rename -v -n 's/^\d+\s*([a-z]+\.mp3)$/$1\.mp3/' *.mp3 Running this command gives me this error: Using expression: sub { use feature ':5.18'; s/^\d+\s*([a-z]+\.mp3)$/$1\.mp3/ }
I don't see why that error would occur. In fact, I am reasonably certain there were more lines to the error than you show; for one thing, there's no actual error message. However, that regular expression won't actually match either of your example files. You are using [a-z]+\.mp3 which will only match lower case letters and, since you're matching all the way to the extension, it will only match files whose name consists of only lower case letters after the numbers and space you want to remove. You could instead match [a-zA-Z] or use s///i to make the match case insensitive, but it would be a better idea to not match the rest of the word at all. You just want to remove the digits and whitespace from the beginning, so just match those: rename -vn 's/^\d+\s+//' *mp3
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181248/" ] }
297,954
Is it possible to block only a certain subpage of a website instead of the whole domain by modifying the /etc/hosts file? For example I want to block http://example.com/whatever.html instead of the whole domain http://example.com . I tried to put just 0.0.0.0 http://example.com/whatever.html but it does not work.
No. /etc/hosts is used to modify the "hostname -> IP address" mapping. It can not be used for "protocol layer" filtering. If you want to limit specific pages then you need to use something like a proxy server. This can be programmed with specific rules to permit/deny certain pages. Then configure your browser to use that proxy. A simple example of this is Privoxy ( https://www.privoxy.org/ ) but there are lots of tools. EDIT TO ADD: Alternatively a browser plugin (eg "ublock origin" or "adblock plus") can be configured to block access to pages directly inside the browser.
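For illustration, a Privoxy rule for the URL from the question might look roughly like this in its user.action file (a sketch only; check the Privoxy action-file documentation for the exact pattern syntax):
{ +block{blocked by local policy} }
.example.com/whatever.html
Privoxy listens on 127.0.0.1:8118 by default, and the browser's proxy settings are then pointed there.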
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181253/" ] }
297,969
Is the slash ( / ) really part of the name of the Linux root directory? Or is it just a symbol for it? What about /etc and so on? Update Suppose /dev/sda2 is the block device of a Linux root directory. $ sudo debugfs /dev/sda2debugfs 1.44.1 (24-Mar-2018) debugfs: pwd[pwd] INODE: 2 PATH: /[root] INODE: 2 PATH: / debugfs: stat /Inode: 2 Type: directory Mode: 0755 Flags: 0x80000Generation: 0 Version: 0x00000000:00000077User: 0 Group: 0 Project: 0 Size: 4096File ACL: 0Links: 25 Blockcount: 8Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x5b13c9f1:3f017990 -- Sun Jun 3 15:28:57 2018 atime: 0x5b13ca0f:3b3ee380 -- Sun Jun 3 15:29:27 2018 mtime: 0x5b13c9f1:3f017990 -- Sun Jun 3 15:28:57 2018crtime: 0x5aad1843:00000000 -- Sat Mar 17 16:59:39 2018Size of extra inode fields: 32EXTENTS:(0):9249 So there is a directory in there, inode #2, but it hasn't a name.
The POSIX.1-2008 standard says A pathname consisting of a single / shall resolve to the rootdirectory of the process. A null pathname shall not be successfullyresolved. The standard further makes a distinction between filenames and pathnames . / is the pathname of the root directory. The name of the directory is "the root directory", but in the filesystem it is nameless, it does not have a filename. If it had a filename, that name would be a directory entry in the directory above the root directory, and there is no such directory. The character / can never be part of a filename as it is the path separator. For clarity: / is not the name of the root directory, but the path to it, its pathname . /etc is another pathname. It is the absolute path to the etc directory. The name of the directory at that path is etc (its filename is etc ). /usr/local/bin/curl is the pathname of the curl executable file in the same way that /etc is the pathname of the etc directory.
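A small illustration of the distinction with standard tools (output shown in comments; the inode number is filesystem-specific, ext4 shown here):
basename /etc    # etc  -- the filename of the directory at that pathname
basename /       # /    -- the root directory has a pathname but no filename
ls -id /         # e.g. "2 /" on ext4, matching the debugfs output in the question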
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/297969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82589/" ] }
297,981
I to modify everything inside of specific double quotation marks. Example of my work is: <VALUE MAP_ID="1001" MAP="0" MAPNAME="BichonTown" SERVER="0" CHNSERVER="0" NEEDLEVEL="7" TIME="0" WEATHER="0" VEHICLE="1" PVP="0" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="DQ_910" MINIMAP="1001" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1002" MAP="0_000" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="5515" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1003" MAP="0_001" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1004" MAP="0_002" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="221" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1005" MAP="1" MAPNAME="LostParadise" SERVER="1" CHNSERVER="1" NEEDLEVEL="7" TIME="0" WEATHER="0" VEHICLE="1" PVP="0" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="11" MINIMAP="1002" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /> Everything inside the NORECONNECT="" should be set to 0 there are no spaces only letters, numbers or underscores. The result should look like <VALUE MAP_ID="1001" MAP="0" MAPNAME="BichonTown" SERVER="0" CHNSERVER="0" NEEDLEVEL="7" TIME="0" WEATHER="0" VEHICLE="1" PVP="0" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="1001" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1002" MAP="0_000" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1003" MAP="0_001" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1004" MAP="0_002" MAPNAME="TownHall" SERVER="1" CHNSERVER="1" NEEDLEVEL="0" TIME="1" WEATHER="0" VEHICLE="0" PVP="1" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="0" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /><VALUE MAP_ID="1005" MAP="1" MAPNAME="LostParadise" SERVER="1" CHNSERVER="1" NEEDLEVEL="7" TIME="0" WEATHER="0" VEHICLE="1" PVP="0" MINE="0" CONTROL="" ENTERLV="0" ENTERQUEST="0" ENTERITEM="0" NORECONNECT="0" MINIMAP="1002" VER="200" APPLY_ROW="0" MAP_EXP_RATIO="0" MAP_DROP_RATIO="0" MAP_MONEY_RATIO="0" ORDERLIST="0" /> How can I achieve this?
With the assumption that there's no embedded " character inside this field then this can be done with a tool like sed sed 's/NORECONNECT="[^"]*"/NORECONNECT="0"/' The first expression means match NORECONNECT=" followed by zero or more non- " characters followed by a " " So this will match things like NORECONNECT="foo"NORECONENCT="bar"NORECONNECT="" And then it replace that part with the string NORECONNECT="0" EDIT: If the word appears more than once on a line then add a g to the end: sed 's/NORECONNECT="[^"]*"/NORECONNECT="0"/g'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/297981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180321/" ] }
297,982
I typed help while I was in the GDB but didn't find anything about step-into, step-over and step-out. I put a breakpoint in an Assembly program in _start ( break _start ). Afterwards I typed next and it finished the debugging. I guess it was because it finished _start and didn't step-into as I wanted.
help running provides some hints: There are step and next instructions (and also nexti and stepi ). (gdb) help nextStep program, proceeding through subroutine calls.Usage: next [N]Unlike "step", if the current source line calls a subroutine,this command does not enter the subroutine, but instead steps overthe call, in effect treating it as a single source line. So we can see that step steps into subroutines, but next will step over subroutines. The step and stepi (and the next and nexti ) are distinguished by "line" or "instruction" increments. step -- Step program until it reaches a different source linestepi -- Step one instruction exactly Related is finish : (gdb) help finishExecute until selected stack frame returns.Usage: finishUpon return, the value returned is printed and put in the value history. A lot more useful information is at https://sourceware.org/gdb/onlinedocs/gdb/Continuing-and-Stepping.html
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/297982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176824/" ] }
298,012
I came upon this strange behaviour in my bash script. #!/bin/bashV=aalias $V="echo test"echo $(a) #returns 'test'echo $($V) #returns ...'a: not found' Is there any way to emulate the former behaviour with the variable?
Aliases are only expanded if the command appears directly in the code, without any expansion. Writing things like \a , $V , $(echo a) , etc. suppresses alias lookup. In addition, bash (unlike other shells) doesn't expand aliases in scripts by default anyway, so a actually does not run the alias in bash. Use a function instead of an alias. You'll need to use the original name to define the function. V=aa () { echo test; }"$V" # prints test (There are other ways to do something like what you want by using eval , but don't use eval unless you know exactly what you're doing. Quoting things correctly with eval is tricky.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181280/" ] }
298,047
The challenge of my task is that the file.txt can be in subfolder or in sub-sub folders. The general structure looks like this: folder |___subfolder |_____file.txt | |___subfolder |____subfolder |_______file.txt etc My previous code does not deal with the sub-sub folder and I am not sure how to add in this part (since some sub folder does not have any subsequent folders) for dir in ./*/ ; do if [ -d "$dir" ]; then cd $d; for subdir in ./*/; do cd $subdir && $(sed command file.txt) && cd ..; done cd .. fidone I have also learned there's a one line method of using find , something like: find . -name 'file.txt' | sed command file.txt Apparently this one doesn't work as find only returns the dir to my file.txt
Try: find . -name 'file.txt' -exec sed command {} + This finds all files named file.txt that are in subdirectories of . and runs sed command against those files. If you want sed to modify those files in place, then add the -i option. Although -exec ... + is now required by POSIX (hat tip: jordanm), some people may be using old versions of BSD find that do not support + . If you have one of those, then use (hat tip: unxnut): find . -name 'file.txt' -exec sed command {} \; More Secure Alternative If rapid changes are made to the directory structure, -exec can be subject to a race-condition. For greater security, use -execdir (hat tip: unxnut): find . -name 'file.txt' -execdir sed command {} + Note that, if you have the current directory in your PATH , then -execdir will refuse to run.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181304/" ] }
298,110
Whenever I want to use command yum install <packagename> I get error: No package available For example, [root@cpanel1 etc]# yum install autosshLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: centos.t-2.net * extras: centos.t-2.net * updates: centos.t-2.netNo package autossh available.Error: Nothing to do[root@cpanel1 etc]# How do I make it work?
These steps might help you, yum clean all & yum clean metadata Check the files in /etc/yum.repos.d and make sure that they don't all have enabled = 0 for each repo (there may be more than one per file). Finally you would be able to do yum update and search for desired packages.
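For the second step, something along these lines shows quickly which repositories are enabled (paths are the usual CentOS defaults):
grep -H 'enabled' /etc/yum.repos.d/*.repo
yum repolist all    # lists every repo together with its enabled/disabled status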
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176074/" ] }
298,139
For example, I have a csv file which looks like a1, b1, c1, d1a2, b2, c2, d2a3, b3, c3, d3 What I want to do is to replace the first comma , with the semicolon ; . The position of first comma can be variable ( a in the rows n and m can have different lengths). Finally my file shall look like a1; b1, c1, d1a2; b2, c2, d2a3; b3, c3, d3 The other commas have to remain. Can somebody please tell me the most simple solution? PS my solution doesn't work: sed '/s/,/;/g' file.csv
The g in: sed 's/,/;/g' is for globally , that is to substitute all occurrences of , with ; . If you want to do only one substitution per line, take off the g : sed 's/,/;/' And for completeness: You can also specify which occurrence to substitute. For instance, to substitute only the second occurrence: sed 's/,/;/2' With GNU sed , you can also substitute all occurrences starting from the second one (in effect, all but the first one) with: sed 's/,/;/2g' To perform two substitutions, in this case: sed 's/,/;/;s/,/;/' Where it gets more complicated is when the pattern can match the substitution (or parts of it), for instance when substituting , with <,> . sed has no built-in mechanism to address that. You may want to use perl instead in that case: perl -pe '$i = 0; s/,/$i++ < 2 ? "<,>" : $&/ge' perl -pe is perl 's sed mode (note that the regex syntax is different). With the e flag of the s/// operator, the replacement is considered as code. There, we replace , with <,> only when our incremented counter is < 2. Otherwise, we replace the , with itself ( $& actually referring to the matched string like & in sed 's s command). You can generalise that for a range or set of substitutions. Like for 3 rd to 5 th and 7 th to 9 th : perl -pe '$i = 0; s/,/$i++; $i >=3 && $i <= 5 || $i >= 7 && $i <= 9 ? "<,>" : $&/ge' To replace only the first occurrence in the whole input (as opposed to in each line ): sed -e 's/,/;/;t done' -e b -e :done -e 'n;b done' That is, upon the first successful substitution, go into a loop that just prints the rest of the input. With GNU sed , you can use the pseudo address 0 : sed '0,/,/s//;/' Note I suppose it's a typo, but the sed '/s/,/;/g' command you wrote in your question is something completely different. That's doing: sed '/start/,/end/g' where start is s and end is ; . That is, applying the g command (replace the pattern space with the content of the hold space (empty here as you never hold anything)) for sections of the file in between one that contains s and the next one that contains ; .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82378/" ] }
298,141
I'm trying to mount a partition with the -o option, however I get this error root@blackbox:~# mount /dev/sda1 /media/ownclouddrive -o uid=33,gid=33mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.[ 365.432693] EXT4-fs (sda1): Unrecognized mount option "uid=33" or missing value If I check my /etc/passwd I can see the user there www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin Any idea why? Thanks
You are getting that error message because you specify mount options that don’t apply to the filesystem on the device. The mount options uid= and gid= are not valid for all available filesystems. The ones that support proper permissions usually don’t accept them, as is the case for that ext4 device you’re attempting to mount. For filesystems with limited functionality like VFAT, the options uid and gid allow mapping the ownership of the entire contents of a mounted filesystem to a single local user. Similar options exist for other properties like the umask. Note that this is not the proper way to grant permissions to the contents, but rather a workaround to integrate certain FS into a unixoid environment. On an ext4 partition like the one you’re attempting to mount, just change the ownership directly: mount -t ext4 /dev/sda1 /media/ownclouddrivechown -R 33:33 /media/ownclouddrive/*
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138120/" ] }
298,147
I'm trying to enable only certain options when using the read command, and to exit the script if a wrong possibility was entered. Tried many possibilities (array, variables, syntax change), but I'm still stuck with my initial problem. How do I test the input of the user and allow \ disallow to run the rest of the script? #!/bin/bashred=$(tput setaf 1)textreset=$(tput sgr0) echo -n 'Please enter requested region > 'echo 'us-east-1, us-west-2, us-west-1, eu-central-1, ap-southeast-1, ap-northeast-1, ap-southeast-2, ap-northeast-2, ap-south-1, sa-east-1'read textif [ -n $text ] && [ "$text" != 'us-east-1' -o us-west-2 -o us-west-1 -o eu-central-1 -o ap-southeast-1 -o ap-northeast-1 -o ap-southeast-2 -o ap-northeast-2 -o ap-south-1 -o sa-east-1 ] ; then echo 'Please enter the region name in its correct form, as describe above'elseecho "you have chosen ${red} $text ${textreset} region."AWS_REGION=$textecho $AWS_REGIONfi
Why don't you use case? case $text in us-east-1|us-west-2|us-west-1|eu-central-1|ap-southeast-1|etc) echo "Working" ;; *) echo "Invalid option: $text" ;;esac
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181400/" ] }
298,199
I've noticed you can't really count on seq(1) being available on anything but GNU systems. What's a simple reimplementation of seq(1) I can bring with me written in POSIX (not bash) shell? EDIT: Note that I intend to use it on at least various BSD's, Solaris, and Mac OS X.
According to the open group POSIX awk supports BEGIN , therefore it can be done in awk : awk -v MYEND=6 'BEGIN { for(i=1;i<=MYEND;i++) print i }' Where -v MYEND=6 would stand for the assignment as in the first argument to seq . In other words, this works too: seq() { end=$1 awk -v end=$end 'BEGIN { for( i = 1; i <= end; i++) print i }'} Or even with the three variables start, increment and end. (This doesn't support a negative increment nor floats): seq() { if [ "$#" = 1 ]; then start=1 incr=1 end=$1 elif [ "$#" = 2 ]; then start=$1 incr=1 end=$2 elif [ "$#" = 3 ]; then start=$1 incr=$2 end=$3 else echo "error: invalid number of arguments" >&2 return 1 fi if ! [ "$incr" -ge 1 ]; then echo "error: invalid increment (must be >= 1)" >&2 return 1 fi awk -v start=$start -v incr=$incr -v end=$end ' BEGIN { for( i = start; i <= end; i += incr) print i }'} Extra Solaris note: On Solaris /usr/bin/awk is not POSIX compliant, you need to use either nawk or /usr/xpg4/bin/awk on Solaris. On Solaris, you probably want to set /usr/xpg4/bin early in PATH if you are running a POSIX compliant script. Reference answer: awk hangs on Solaris
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181278/" ] }
298,281
My goal is to be able to develop for embedded Linux. I have experience on bare-metal embedded systems using ARM. I have some general questions about developing for different cpu targets. My questions are as below: If I have an application compiled to run on a ' x86 target, linux OS version x.y.z ', can I just run the same compiled binary on another system ' ARM target, linux OS version x.y.z '? If above is not true, the only way is to get the application source code to rebuild/recompile using the relevant toolchain 'for example, arm-linux-gnueabi'? Similarly, if I have a loadable kernel module (device driver) that works on a ' x86 target, linux OS version x.y.z ', can I just load/use the same compiled .ko on another system ' ARM target, linux OS version x.y.z '? If above is not true, the only way is to get the driver source code to rebuild/recompile using the relevant toolchain 'for example, arm-linux-gnueabi'?
No. Binaries must be (re)compiled for the target architecture, and Linux offers nothing like fat binaries out of the box. The reason is because the code is compiled to machine code for a specific architecture, and machine code is very different between most processor families (ARM and x86 for instance are very different). EDIT: it is worth noting that some architectures offer levels of backwards compatibility (and even rarer, compatibility with other architectures); on 64-bit CPU's, it's common to have backwards compatibility to 32-bit editions (but remember: your dependent libraries must also be 32-bit, including your C standard library, unless you statically link ). Also worth mentioning is Itanium , where it was possible to run x86 code (32-bit only), albeit very slowly; the poor execution speed of x86 code was at least part of the reason it wasn't very successful in the market. Bear in mind that you still cannot use binaries compiled with newer instructions on older CPU's, even in compatibility modes (for example, you cannot use AVX in a 32-bit binary on Nehalem x86 processors ; the CPU just doesn't support it. Note that kernel modules must be compiled for the relevant architecture; in addition, 32-bit kernel modules will not work on 64-bit kernels or vice versa. For information on cross-compiling binaries (so you don't have to have a toolchain on the target ARM device), see grochmal's comprehensive answer below.
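To check what architecture an existing binary or kernel module was built for, and what the current machine is, something like this works (the paths are examples only):
file /bin/ls              # e.g. "ELF 64-bit LSB executable, x86-64, ..."
file /path/to/module.ko   # kernel modules are architecture-specific ELF objects too
uname -m                  # architecture of the running system, e.g. x86_64 or armv7l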
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181479/" ] }
298,292
My user id kranthi has been created and added to sudoers group. I've made the following changes in /etc/ssh/sshd_config : PasswordAuthentication YesAllowUsers kranthi In order to allow a user to ssh the user should be added to AllowUsers . Port 22 is added to the sshd_config file. service sshd restart was given. Why cannot I SSH the server using putty ? What's missing ? I get access denied after entering password. After recommendation in comment section when I type ssh -l kranthi 127.0.0.1 I got the following The following is the screenshot from the console. Still I cannot SSH the server using the user kranthi.
No. Binaries must be (re)compiled for the target architecture, and Linux offers nothing like fat binaries out of the box. The reason is because the code is compiled to machine code for a specific architecture, and machine code is very different between most processor families (ARM and x86 for instance are very different). EDIT: it is worth noting that some architectures offer levels of backwards compatibility (and even rarer, compatibility with other architectures); on 64-bit CPU's, it's common to have backwards compatibility to 32-bit editions (but remember: your dependent libraries must also be 32-bit, including your C standard library, unless you statically link ). Also worth mentioning is Itanium , where it was possible to run x86 code (32-bit only), albeit very slowly; the poor execution speed of x86 code was at least part of the reason it wasn't very successful in the market. Bear in mind that you still cannot use binaries compiled with newer instructions on older CPU's, even in compatibility modes (for example, you cannot use AVX in a 32-bit binary on Nehalem x86 processors ; the CPU just doesn't support it. Note that kernel modules must be compiled for the relevant architecture; in addition, 32-bit kernel modules will not work on 64-bit kernels or vice versa. For information on cross-compiling binaries (so you don't have to have a toolchain on the target ARM device), see grochmal's comprehensive answer below.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178935/" ] }
298,301
Is there a good clean conventional way to do this? For example, if there are any ".gz" files in a certain directory, I want to unzip them. But if there aren't, I don't want to see any error. If I use gzip -d /mydir/*.gz , I get the error: gzip: /mydir/*.gz: No such file or directory If I first shopt -s nullglob and then gzip -d /mydir/*.gz , I get the following: gzip: compressed data not read from a terminal. Use -f to force decompression.For help, type: gzip -h I have one method I know will work, which I'm posting as an answer. I'm wondering if there's a better/cleaner way. POSIX compatibility is a bonus, but not required.
With bash : shopt -s nullglobfiles=(/mydir/*.gz)((${#files[@]} == 0)) || gzip -d -- "${files[@]}" With zsh : files=(/mydir/*.gz(N))(($#files == 0)) || gzip -d -- $files Note that in zsh , without (N) , like in pre-Bourne shells, csh or tcsh, if the glob doesn't match, the command is not run, you'd only do the above to avoid the resulting error message (of no match found as opposed to gzip failing on the expanded glob in the case of bash or other Bourne-like shells). You can achieve the same result with bash with shopt -s failglob . In zsh , a failing glob is a fatal error that causes the shell (when not interactive) to exit. You can prevent your script from exiting in that case either by using a subshell or using zsh error catching mechanism ( { try-block; } always { error-catching; } ), (or by setting the nonomatch (to work like sh ), nullglob or noglob option of course, though I wouldn't recommend that): $ zsh -c 'echo zz*; echo not output'zsh:1: no matches found: zz*$ zsh -c '(echo zz*); echo output'zsh:1: no matches found: zz*output$ zsh -c '{echo zz*;} always {TRY_BLOCK_ERROR=0;}; echo output'zsh:1: no matches found: zz*output$ zsh -o nonomatch -c 'echo zz*; echo output'zz*output With ksh93 ksh93 eventually added a mechanism similar to zsh 's (N) glob qualifier to avoid having to set a nullglob option globally: files=(~(N)/mydir/*.gz)((${#files[@]} == 0)) || gzip -d -- "${files[@]}" POSIXly Portably in POSIX sh , where non-matching globs are passed unexpanded with no way to disable that behaviour (the only POSIX glob related option is noglob to disable globbing altogether), the trick is to do something like: set -- /mydir/[*].gz /mydir/*.gzcase $#$1$2 in '2/mydir/[*].gz/mydir/*.gz') : no match;; *) shift; gzip -d -- "$@"esac The idea being that if /mydir/*.gz doesn't match, then it will expand to itself ( /mydir/*.gz ). However, it could also expand to that if there was one file actually called /mydir/*.gz , so to differentiate between the cases, we also use the /mydir/[*].gz glob that would also expand to /mydir/*.gz if there was a file called like that. As that's pretty awkward, you may prefer using find in those cases: find /mydir/. ! -name . -prune ! -name '.*' \ -name '*.gz' -type f -exec gzip -d {} + The ! -name . -prune is to not look into subdirectories (some find implementations have -depth 1 or -mindepth 1 -maxdepth 1 as an equivalent). ! -name '.*' is to exclude hidden files like globs do. A benefit is that it still works if the list of files is too big to fit in the limit of the size of arguments to an executed command ( find will run several gzip commands if need to avoid that, ksh93 and zsh also have mechanisms to work around that). Another benefit is that you will get error messages if find cannot read the content of /mydir or can't determine the type of the files (globs would just silently ignore the problem and act as if the corresponding files don't exist). A small down side is that you lose the exact value of gzip 's exit status (if any one gzip invocation fails with a non-zero exit status, find will still exit with a non-zero exit status (though not necessarily the same) though, so that's good enough for most use cases). Another benefit is that you can add the -type f to avoid trying to uncompress directories or fifos/devices/sockets... whose name ends in .gz . Except in zsh ( *.gz(.) for regular files only), globs cannot filter by file types, you'd need to do things like: set --for f in /mydir/*.gz [ -f "$f" ] && [ ! 
-L "$f" ] && set -- "$@" "$f"done[ "$#" -eq 0 ] || gzip -d -- "$@"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
298,312
I have an Config.yml file i want to update the values like below by using shell script. current file data servers: - uri: http://localhost:5550/service/mgmt/current- displayName: server1- username: user- password: welcome- domains:--- default--- domain1- useBulkApi: true Required output should be like below: servers: - uri: https://hostname1:5550/service/mgmt/current- displayName: instance1- username: xx- password: xx- domains:--- default--- domain1- useBulkApi: true hostname, user, password and domain name will changes for each URI. These values are coming from script. I have to update at lease 3 server details and each server has different URI, hostname, user, password and domain.
With bash : shopt -s nullglobfiles=(/mydir/*.gz)((${#files[@]} == 0)) || gzip -d -- "${files[@]}" With zsh : files=(/mydir/*.gz(N))(($#files == 0)) || gzip -d -- $files Note that in zsh , without (N) , like in pre-Bourne shells, csh or tcsh, if the glob doesn't match, the command is not run, you'd only do the above to avoid the resulting error message (of no match found as opposed to gzip failing on the expanded glob in the case of bash or other Bourne-like shells). You can achieve the same result with bash with shopt -s failglob . In zsh , a failing glob is a fatal error that causes the shell (when not interactive) to exit. You can prevent your script from exiting in that case either by using a subshell or using zsh error catching mechanism ( { try-block; } always { error-catching; } ), (or by setting the nonomatch (to work like sh ), nullglob or noglob option of course, though I wouldn't recommend that): $ zsh -c 'echo zz*; echo not output'zsh:1: no matches found: zz*$ zsh -c '(echo zz*); echo output'zsh:1: no matches found: zz*output$ zsh -c '{echo zz*;} always {TRY_BLOCK_ERROR=0;}; echo output'zsh:1: no matches found: zz*output$ zsh -o nonomatch -c 'echo zz*; echo output'zz*output With ksh93 ksh93 eventually added a mechanism similar to zsh 's (N) glob qualifier to avoid having to set a nullglob option globally: files=(~(N)/mydir/*.gz)((${#files[@]} == 0)) || gzip -d -- "${files[@]}" POSIXly Portably in POSIX sh , where non-matching globs are passed unexpanded with no way to disable that behaviour (the only POSIX glob related option is noglob to disable globbing altogether), the trick is to do something like: set -- /mydir/[*].gz /mydir/*.gzcase $#$1$2 in '2/mydir/[*].gz/mydir/*.gz') : no match;; *) shift; gzip -d -- "$@"esac The idea being that if /mydir/*.gz doesn't match, then it will expand to itself ( /mydir/*.gz ). However, it could also expand to that if there was one file actually called /mydir/*.gz , so to differentiate between the cases, we also use the /mydir/[*].gz glob that would also expand to /mydir/*.gz if there was a file called like that. As that's pretty awkward, you may prefer using find in those cases: find /mydir/. ! -name . -prune ! -name '.*' \ -name '*.gz' -type f -exec gzip -d {} + The ! -name . -prune is to not look into subdirectories (some find implementations have -depth 1 or -mindepth 1 -maxdepth 1 as an equivalent). ! -name '.*' is to exclude hidden files like globs do. A benefit is that it still works if the list of files is too big to fit in the limit of the size of arguments to an executed command ( find will run several gzip commands if need to avoid that, ksh93 and zsh also have mechanisms to work around that). Another benefit is that you will get error messages if find cannot read the content of /mydir or can't determine the type of the files (globs would just silently ignore the problem and act as if the corresponding files don't exist). A small down side is that you lose the exact value of gzip 's exit status (if any one gzip invocation fails with a non-zero exit status, find will still exit with a non-zero exit status (though not necessarily the same) though, so that's good enough for most use cases). Another benefit is that you can add the -type f to avoid trying to uncompress directories or fifos/devices/sockets... whose name ends in .gz . Except in zsh ( *.gz(.) for regular files only), globs cannot filter by file types, you'd need to do things like: set --for f in /mydir/*.gz [ -f "$f" ] && [ ! 
-L "$f" ] && set -- "$@" "$f"done[ "$#" -eq 0 ] || gzip -d -- "$@"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179131/" ] }
298,329
I know I can have less show me the status line with = . Is there a way to have it displayed constantly and updated as I scroll through the file? When I use man it's actually done this way but I don't know how it's configured.
If you want to change the prompt (as it's called), -P is probably what you want (quote from the manual ): -Pprompt or --prompt=prompt Provides a way to tailor the three prompt styles to your own preference. -Ps followed by a string changes the default (short) prompt to that string. -Pm changes the medium (-m) prompt. -PM changes the long (-M) prompt. [...] See the section on PROMPTS for more details. There's a bunch of variables you can use presented in the section on prompts. On my system, the = prompt displays lines and bytes, so let's set the $LESS variable to show lines and bytes that are visible on the screen in the short (default) prompt: $ LESS='-Pslines %lt-%lb (%Pt-%Pb \%) bytes %bt-%bb file %f' ; export LESS$ less foo displays a prompt like lines 1-44 (1-53 %) bytes 0-2498 file foo ( %l , %P , %b for lines, percentage and bytes, trailing t and b for "top" and "bottom" of screen. % , ? , : , . and \ are special and need to be escaped.) The default prompt also has conditionals to not show fields that are unknown, and also to show (END) instead of 100% at the end of file. As an example, the latter can be done with ?e(END):%pB\%. .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21305/" ] }
298,361
Let's say I want the man page related to luks. man luks doesn't work, is there a quick way for me to find the man pages using the keyword luks from bash ?
You're looking for apropos ; on my system apropos luks points me to cryptsetup(8) , luksformat(8) and a number of other relevant manpages. apropos , which is equivalent to man -k , looks in the installed manpages' names and descriptions for the search text given on its command line. The search text can include regular expressions or shell-style globs (with apropos , using the -r or -w options; -r is the default). man -K allows searching in all the contents of all the installed manpages. This takes longer than apropos or man -k . (Thanks to Stephen Harris and clusterdude for the extra clarification.)
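For example, with man-db's apropos the match can be narrowed with a regular expression or a shell-style glob (option names may differ on other implementations):
apropos -r '^luks'
apropos -w 'luks*'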
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298361", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
298,377
I have a file containing population information for a bunch of towns. I have another file that is a list of the names of a subset of those towns. I want to select the population information from the first file using the second file. How would I do this? Examples: File 1: ma-towns.txt Acton Town Middlesex Open town meeting 21,924 1735 Acushnet Town Bristol Open town meeting 10,303 1860 Adams Town Berkshire Representative town meeting 8,485 1778 Agawam City[4] Hampden Mayor-council 28,438 1855 Alford Town Berkshire Open town meeting 494 1773 Amesbury City Essex Mayor-council 16,283 1668 Amherst Town Hampshire Representative town meeting 37,819 1775 File 2: town-list.txt Acton Adams Agawam Desired output would be Acton Town Middlesex Open town meeting 21,924 1735 Adams Town Berkshire Representative town meeting 8,485 1778 Agawam City[4] Hampden Mayor-council 28,438 1855 Basically, as said generally, extract the line if it contains the string in one of the lines of file 2.
grep -f <(sed 's/.*/\^&\\>/' town-list.txt) ma-towns.txt Explanation: grep -f file reads file for a list of patterns to match against. We are searching in the ma-towns.txt list, using patterns from town-list.txt . Each separate line is treated as a new pattern, i.e. a new search term. However, that's not quite enough, so I've included a sed to format the search terms like this: ^Acton\>^Adams\>^Agawam\> The ^ makes grep only match that pattern at the start of a line, and the \> makes grep only match if the word ends at that point. Together this ensures that the search term only looks at the beginning of the line (where the town names are), and that the search term must end where the town name ends. The sed command itself runs a s (substitute) command, of the form s/search/replace/ . The search term .* matches a whole line. The replacement, \^&\\> , replaces it with a literal ^ character, followed by the original line, followed by the text \> . What this answer does that the other does not: Handles town names beginning with a dash or containing backslashes (which is unlikely, but if the input is taken from a user you don't want them to be able to break your scripts in unpredictable ways). Note that both answers treat town names as a regex rather than a literal search term. Outputs the towns in the original order as specified in ma-towns.txt Performs better Searches the beginning of the line for the town name, not just anywhere in the line Does not match a town if only a substring matches (e.g. Waterloo will not match Waterlooville )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/174308/" ] }
298,382
So, I've been using Linux for a few years now, and I really should know this answer, but I'm having trouble finding it. Specifically I've been using Debian based distro's....mostly Ubuntu. If I have a server, that has more than three users, how do I set a different set of permissions to a file for each user. For example: If I have a file with these permissions and ownership: rwx rw_ r__ user1:group1 file1.txt and I have 3 users with these desired permissions.... user1 rwx user2 rw_ user3 r__ All I have to do is have user1 own the file, user2 be in group1 , and user3 can be neither -- correct? But, what if I have a user4 and user5 . user4 _wx user5 __x How would I set that up? I haven't had to do this before, but I was asked that question by a Windows admin, and I honestly couldn't answer.
Traditional unix permissions only allow user, group, other permissions as you've found. These can result in some awkward combination of groups needing to be created... So a new form of ACL (Access Control Lists) were tacked on. This allows you to specify multiple users and multiple groups with different permissions. These are set with the setfacl command and read with getfacl $ setfacl -m u:root:r-- file.txt$ setfacl -m u:bin:-wx file.txt $ setfacl -m u:lp:--x file.txt $ getfacl file.txt# file: file.txt# owner: sweh# group: swehuser::rw-user:root:r--user:bin:-wxuser:lp:--xgroup::r--mask::rwxother::r-- You can easily tell if a file has an ACL by looking at the ls output: $ ls -l file.txt-rw-rwxr--+ 1 sweh sweh 0 Jul 26 10:33 file.txt The + at the end of the permissions indicates an ACL.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67988/" ] }
298,421
I'm setting up an automation process where I am deleting files in a directory which contains sub-directories. I only want to delete the files in the directory, and want to keep the sub-directories intact. So right now I am just using rm * to delete the files in that directory. However, this command throws the message: cannot remove 'dir': Is a directory . I know I'm being nit-picky, but I don't want that message to repeatedly appear in my logs. Is there a better command I can use for deletion or a way that I can tell rm to ignore the sub-directories?
You can just throw away the error messages: rm * 2>/dev/null That'll throw away all errors. If you want to see other potential errors then we can do something more complicated: rm * 2>&1 | grep -v 'cannot remove .*: Is a directory' In this way other errors will still be logged.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172048/" ] }
298,499
Im trying to install Jekyll on El Capitan, however I do get permissions error, as you can see below. I am logged in as a root user. Linards:~ Berzins$ sudo gem install jekyllPassword:Ignoring psych-2.0.15 because its extensions are not built. Try: gem pristine psych --version 2.0.15Ignoring json-1.8.3 because its extensions are not built. Try: gem pristine json --version 1.8.3Fetching: jekyll-3.1.6.gem (100%)ERROR: While exexcuting gem ... (Errno::EPERM) Operation not permitted - /usr/bin/jekyllLinards:~ Berzins$ gem pristine psych --version 2.0.15ERROR: While executing gem ... (Gem::FilePermissionError) You don't have write permissions for the /Library/Ruby/Gems/2.0.0 directory.Linards:~ Berzins$ ls -ltotal 8drwxr-xr-x 94 Berzins staff 3196 27 Mar 19:08 Applicationsdrwx------+ 34 Berzins staff 1156 26 Jul 22:41 Desktopdrwx------+ 16 Berzins staff 544 7 Jul 21:58 Documentsdrwx------+ 12 Berzins staff 408 23 Jul 20:58 Downloadsdrwx------@ 36 Berzins staff 1224 26 Jan 2015 Google Drivedrwx------@ 60 Berzins staff 2040 7 Jul 21:58 Library-rw-r--r--@ 1 Berzins staff 724 8 Nov 2014 Linards Berzins.downsizelicensedrwx------+ 3 Berzins staff 102 25 Aug 2014 Moviesdrwx------+ 7 Berzins staff 238 13 Feb 22:30 Musicdrwx------+ 20 Berzins staff 680 16 Jul 21:03 Picturesdrwxr-xr-x+ 6 Berzins staff 204 23 Sep 2015 Publicdrwxr-xr-x 5 Berzins staff 170 9 Apr 20:53 WebstormProjectsdrwxr-xr-x 2 Berzins staff 68 18 Nov 2015 node_modulesdrwxr-xr-x 4 Berzins staff 136 19 May 21:55 sitesdrwxr-xr-x 25 Berzins staff 850 30 Sep 2015 veltaberzina.comdrwxr-xr-x 6 Berzins staff 204 18 Nov 2015 version_controlLinards:~ Berzins$ chmod 755 LibraryLinards:~ Berzins$ sudo chmod 777 /LibraryPassword:chmod: Unable to change file mode on /Library: Operation not permitted Any advice appreciated. UPDATE: After suggested commands - sudo chflags -R nouchg /Library and ls -le / and got the output: Linards:~ Berzins$ ls -le /total 61drwxrwxr-x+ 108 root admin 3672 26 Jul 22:53 Applications 0: group:everyone deny deletedrwxr-xr-x 62 root wheel 2108 1 May 18:43 Librarydrwxr-xr-x@ 2 root wheel 68 1 May 18:34 Networkdrwxr-xr-x@ 4 root wheel 136 1 May 18:29 System 0: group:everyone deny deletelrwxr-xr-x 1 root wheel 49 25 Aug 2014 User Information -> /Library/Documentation/User Information.localizeddrwxr-xr-x 6 root admin 204 20 Jun 09:20 Usersdrwxrwxrwt@ 5 root admin 170 26 Jul 23:50 Volumes 0: group:everyone deny add_file,add_subdirectory,directory_inherit,only_inheritdrwxr-xr-x@ 39 root wheel 1326 12 Mar 08:08 bindrwxrwxr-t@ 2 root admin 68 1 May 18:34 coresdr-xr-xr-x 3 root wheel 4316 29 May 11:59 devlrwxr-xr-x@ 1 root wheel 11 1 May 18:32 etc -> private/etcdr-xr-xr-x 2 root wheel 1 23 Jul 21:03 home-rw-r--r--@ 1 root wheel 313 2 Aug 2015 installer.failurerequestsdr-xr-xr-x 2 root wheel 1 23 Jul 21:03 netdrwxr-xr-x@ 6 root wheel 204 1 May 18:34 privatedrwxr-xr-x@ 59 root wheel 2006 1 May 18:32 sbin-rw-rw-rw- 1 Berzins wheel 586 25 Jul 21:46 sockets.loglrwxr-xr-x@ 1 root wheel 11 1 May 18:32 tmp -> private/tmpdrwxr-xr-x@ 12 root wheel 308 1 May 18:43 usrlrwxr-xr-x@ 1 root wheel 11 1 May 18:32 var -> private/var
Recent versions of Mac OS X have what's known as System Integrity Protection, aka "SIP", aka "Rootless". It basically makes parts of the file system read-only to everybody , including root. You may have bumped into that. https://en.wikipedia.org/wiki/System_Integrity_Protection The intent is to prevent mistakes and malware from modifying your base operating system. See /System/Library/Sandbox/rootless.conf for a list of directories protected under SIP. Your simplest solution is to install Jekyll under /usr/local instead, if you can.
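To see whether SIP is active and whether a path is protected (macOS-specific commands; csrutil status works from a normal shell, while changing the setting requires Recovery mode):
csrutil status
ls -ldO /usr/bin    # the "restricted" flag marks SIP-protected locations
A commonly suggested workaround for the gem case is to keep the binstubs under /usr/local, for example sudo gem install -n /usr/local/bin jekyll (see gem help install for the --bindir option).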
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107902/" ] }
298,504
I am on Debian wheezy and attempted to upgrade to jessie as follows: sudo apt-get updatesudo apt-get upgradesudo apt-get dist-upgrade These operations completed successfully and I rebooted, but the system is still wheezy: ~: cat /etc/*releasePRETTY_NAME="Debian GNU/Linux 7 (wheezy)"NAME="Debian GNU/Linux"VERSION_ID="7"VERSION="7 (wheezy)"ID=debianANSI_COLOR="1;31"HOME_URL="http://www.debian.org/"SUPPORT_URL="http://www.debian.org/support/"BUG_REPORT_URL="http://bugs.debian.org/"~: sudo apt-get dist-upgrade[sudo] password for abc:Reading package lists... DoneBuilding dependency treeReading state information... DoneCalculating upgrade... Done0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. How can I accomplish the distribution upgrade? I do not have any custom inits or other custom configuration that would conflict with jessie.
apt-get dist-upgrade does nothing because your system is already up-to-date… for wheezy. You've instructed your system to follow wheezy, and that's what it does. To upgrade to another release, you need to change your package sources to point to that other release. Package sources are declared in the file /etc/apt/sources.list . Edit this file and change all references to wheezy into jessie . Also edit files under /etc/apt/sources.list.d in the same way, if you have any. You can make upgrades follow releases automatically by writing stable instead of e.g. wheezy , but this is not recommended because you'll get a whooping big upgrade each time a new stable release comes out, whether you're ready or not. Using moving release targets is mostly useful for testing . Once you've updated /etc/apt/sources.list , run apt-get update to read the list of available packages for the release that you are now targeting, then apt-get dist-upgrade to perform the upgrade. This is covered in the upgrade notes under “Preparing sources for APT” . It's a good idea to review the upgrade notes before you perform the upgrade. (Switch to the right architecture if you aren't on a 32-bit PC.)
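With the stock single sources.list, the edit itself can be scripted, for example (review the file afterwards before upgrading):
sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/*.list    # only if such files exist
sudo apt-get update && sudo apt-get dist-upgrade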
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
298,514
Intel "Core-2" processor (refurbished), Dell box, 2GiB RAM. I had all the everyday applications placed on the bottom 'panel.' Then I installed several new packages using Aptitude. Afterward, about half of the icons on the panel had disappeared (or become invisible). The launchers are still there, and work; only the display is wrong.
apt-get dist-upgrade does nothing because your system is already up-to-date… for wheezy. You've instructed your system to follow wheezy, and that's what it does. To upgrade to another release, you need to change your package sources to point to that other release. Package sources are declared in the file /etc/apt/sources.list . Edit this file and change all references to wheezy into jessie . Also edit files under /etc/apt/sources.list.d in the same way, if you have any. You can make upgrades follow releases automatically by writing stable instead of e.g. wheezy , but this is not recommended because you'll get a whooping big upgrade each time a new stable release comes out, whether you're ready or not. Using moving release targets is mostly useful for testing . Once you've updated /etc/apt/sources.list , run apt-get update to read the list of available packages for the release that you are now targeting, then apt-get dist-upgrade to perform the upgrade. This is covered in the upgrade notes under “Preparing sources for APT” . It's a good idea to review the upgrade notes before you perform the upgrade. (Switch to the right architecture if you aren't on a 32-bit PC.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44128/" ] }
298,519
It seems that whenever I create a file with touch the permissions are set to: -rw-r--r-- . Is there some way that I can configure the permissions with touch or does this have to be done after with a different command?
You can modify your umask to allow (for most implementations) more read/write privileges, but not executable, since generally the requested permissions are 0666 . If your umask is 022 , you'll see touch make a 0644 file. Interestingly, POSIX describes this behavior in terms of creat : If file does not exist: The creat() function is called with the following arguments: The file operand is used as the path argument. The value of the bitwise-inclusive OR of S_IRUSR , S_IWUSR , S_IRGRP , S_IWGRP , S_IROTH , and S_IWOTH is used as the mode argument. and it is only by following the links to creat , then to open , noticing the mention of umask and back-tracking to open (and creat ) to verify that umask is supposed to affect touch . For umask to affect only the touch command, use a subshell: (umask 066; touch private-file)(umask 0; touch world-writable-file)touch file-as-per-current-umask (note that in any case, if the file existed beforehand, touch will not change its permissions, just update its timestamps).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171477/" ] }
298,538
I'm looking for an incremental backup tool to use on a running, disk based linux system, such as ubuntu or the like, and a tool which is freeware. Clonezilla can do accurate backups but only when the system is idle. Acronis for linux can do accurate backups on a running system, but it's not freeware. There are methods that Acronis and BTRFS use to capture file system changes as it runs, so as to emulate the effect of stopping the OS while it is working. I'm looking for something that does this. rsync, dump and many other ill-suited tools are suggested and even used for this purpose, but they can not be trusted to accurately capture a running OS. rsync, is fine when used on a static file-system, but not on a multi-threaded running file-system. It is surprising to me how many are under the belief that one can do an accurate backup of a running file system just by copying it in some fashion or another. Having built a small multi-tasking engine some years ago I am well aware of the danger of one task polluting another's work. Only if a backup runs as an atomic task, with all other tasks stopped while it works, can it be assured to capture a 100% accurate restorable backup. There is nothing worse than having a backup that you rely upon, and believe will save you, only to have it corrupt when you try to use it. I need this for a plain old desktop linux, and not for a virtual setup.
As you mentioned, BTRFS can do this. This is how I regularly backup my laptop (which has an uptime of 9 weeks, 5 days as I type this). Within my BTRFS filesystem, I have subvolumes. The way that you split your data into subvolumes and how you nest them is unimportant here, so long as you aren't using the root of the filesystem to store data that you want to back up. The following commands are to illustrate syntax and possibilities, I recommend wrapping them up in a script that runs as a cronjob or systemd.timer. To snapshot a subvolume: btrfs subvolume snapshot -r <source> <dest> To serialise a snapshot: btrfs send <snapshot> To serialise a snapshot relative to an older one (i.e. differential): btrfs send -p <start> <end> To generate a diff, compressing on the fly, and sending to backup server, with "progress" monitoring: btrfs send -p <start> <end> | \ pv -bart | \ pbzip2 --best | \ ssh [email protected] "cat > /backups/name.bz2" To do similar, but re-create the BTRFS subvolumes on the backup server rather than just compressed BTRFS streams: btrfs send -p <start> <end> | \ pv -bart | \ pbzip2 --best | \ ssh [email protected] "pbzip2 -d | \ btrfs receive <target>" To restore, apply your snapshots in order to a new BTRFS filesystem, via btrfs receive . Here is more information about BTRFS Incremental Backups
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181039/" ] }
298,548
In bash I can use curly braces to pattern match several files, e.g. ls *.{dot,svg,err} If there's no file for a particular extension, I get a warning, but the remaining files will be listed. ls: cannot access '*.err': No such file or directory However , in zsh I get an error instead and no output is produced for the existing files. zsh: no matches found: *.err This also applies for removing, where attempting to remove files with this: rm -f *.{dot,svg,err} Will remove all matching files in bash without any error or warning, but in zsh it will err out and not remove anything at all. Is there a way to make zsh behave in the same way as bash , or is there another way to remove/list the existing files with similar ease¹? ¹Obviously I can do something like find . -name '*.dot' -or -name '*.svg' -or -name '.err' | xargs rm but that's not particularly easy/practical.
You don't get a warning in bash , you get an error by ls (you'll find that ls exit status is non-0 which indicates it has failed). In both zsh and bash , {...} is not a globbing operator, it's an expansion that occurs before globbing. In: ls -d -- *.{dot,svg,err} (you forgot the -d and -- btw), the shell expands the {...} first: ls -d -- *.dot *.svg *.err and then does the glob. bash like most Bourne-like shells has that misfeature that non-matching globs are passed as-is. While on zsh , a non-matching glob is an error. See how rm -f [ab].c in bash could delete the [ab].c file if there was no file called a.c nor b.c . In zsh , you'd get a no match error instead. See the failglob option in bash to get a similar behaviour. ls -d -- *.{dot,svg,err}(N) in zsh would enable the nullglob option on all 3 globs, so that if the globs don't match, they are removed, but that's probably not what you want, because if none of the globs match any file, the command will become: ls -d -- Which will list . (the current directory) instead. Best here is to use one glob that matches files with either of the 3 extensions: ls -d -- *.(dot|svg|err) That will give a sorted list of files to ls , ls will be run unless there's no file found matching that one pattern. You also have the option to enable the sh / bash (bogus IMO) behaviour with emulate sh or with unsetopt nomatch . A slightly better approach is to enable the csh behaviour (which was also the behaviour of Unix shells before the Bourne shell was released): setopt cshnullglob With that option, the command is cancelled only if all the globs on the command line fail to match. If at least one matches, all the ones that don't match are removed so: ls -d -- *.{dot,svg,err} Will expand the dot , svg and err in turn, omitting the missing ones. If you want to compare the effect on the order of the arguments of the different approaches, you need a command that (contrary to ls ) doesn't sort them before displaying. With GNU ls , you can pass the -U option for that, or since ls does only print its arguments here, just use printf '%s\n' *... instead.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148823/" ] }
298,556
I have the alias: alias gwanip="echo $(dig +short myip.opendns.com @resolver1.opendns.com)" . This works fine except I just realized that it executes the command dig every time bashrc is sourced. How do I keep the functionality of it (i.e. echo my IP upon calling gwanip) but without it running every time bashrc is sourced?
Escape the dollar sign, or remove the useless echo $( ... ) and just have alias gwanip="dig +short myip.opendns.com @resolver1.opendns.com" or use a shell function instead: function gwanip { dig +short myip.opendns.com @resolver1.opendns.com} With shell functions, you could even create a more generic shortdig function and call that. function shortdig { dig +short "$1" "$2"}function gwanip { shortdig "myip.opendns.com" "@resolver1.opendns.com"} The bash manual contains the statement For almost every purpose, aliases are superseded by shell functions.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298556", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
298,568
I use a little script that copies (using cp ) a list of config files to another device. I wonder how to quickly check whether the files have been modified since last time before copying, so that I don't exhaust the device for nothing.
Let's say you have a config directory locally that corresponds to a config directory on your device, and that you only make changes locally before syncing them to the device, then rsync is a good tool to perform the sync. To sync the local directory to the device's directory: $ rsync -av config/ [email protected]:path/to/config/ To delete files on the device that are not any longer present in the local config directory, add the --delete flag to rsync : $ rsync -av --delete config/ [email protected]:path/to/config/ Swap config/ and [email protected]:path/to/config/ to instead back up the config directory from the device to a local directory (it was slightly unclear what direction you wanted to go in the question).
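If you want to fold that into your existing copy script, a minimal sketch could look like this (user, host and paths are made up; rsync's default quick check compares size and modification time, so unchanged files are skipped cheaply):

    #!/bin/sh
    # Push local config changes to the device, previewing first -- adjust paths/host.
    set -eu
    SRC=./config/
    DEST=user@192.168.1.50:/etc/myapp/

    # Show what would be transferred or deleted, without touching anything
    rsync -avn --delete "$SRC" "$DEST"

    printf 'Apply these changes? [y/N] '
    read -r answer
    if [ "$answer" = y ]; then
        rsync -av --delete "$SRC" "$DEST"
    fi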
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
298,589
I would like to allocate a certain number of CPU cores to a specific process for performance improvement. How can I do this?
The usual tool for this is taskset from util-linux, which sets a process's CPU affinity, i.e. the set of cores the scheduler is allowed to run it on. To start a program restricted to cores 0-3: taskset -c 0-3 mycommand . To change a process that is already running: taskset -cp 0-3 <pid> (and taskset -cp <pid> on its own shows the current affinity). Keep in mind that affinity only limits where the process may run; it does not reserve those cores exclusively. If you also want to keep other tasks off them, look at the cpuset cgroup controller, the CPUAffinity= directive for systemd services, or the isolcpus kernel boot parameter.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181739/" ] }
298,590
Using /bin/find /root -name '*.csv' returns: /root/small_devices.csv/root/locating/located_201606291341.csv/root/locating/located_201606301411.csv/root/locating/g_cache.csv/root/locating/located_201606291747.csv/root/locating/located_201607031511.csv/root/locating/located_201606291746.csv/root/locating/located_201607031510.csv/root/locating/located_201606301412.csv/root/locating/located_201606301415.csv/root/locating/located_201607031512.csv I don't actually want all the files under /root/locating/ , so the expected output is simply /root/small_devices.csv . Is there an efficient way of using `find' non-recursively? I'm using CentOS if it matters.
You can do that with -maxdepth option: /bin/find /root -maxdepth 1 -name '*.csv' As mentioned in the comments, add -mindepth 1 to exclude starting points from the output. From man find : -maxdepth levels Descend at most levels (a non-negative integer) levels of directories below the starting-points. -maxdepth 0 means only apply the tests and actions to the starting-points themselves. -mindepth levels Do not apply any tests or actions at levels less than levels (a non-negative integer). -mindepth 1 means process all files except the starting-points.
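A few related variations on the same illustrative /root path:

    find /root -mindepth 1 -maxdepth 1 -name '*.csv'          # only entries directly under /root
    find /root -maxdepth 2 -name '*.csv'                      # /root plus one level of subdirectories
    find /root -mindepth 1 -maxdepth 1 -name '*.csv' -exec ls -l {} +   # act on the matches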
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/298590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156022/" ] }
298,626
I would like to have a service which behaves differently on first run and on restart of the service. Is this possible with systemd? (I use systemd in my embedded OS.) I tried with ExecReload and ExecStart, but ExecReload is run only when I use the command "systemctl restart". On the other hand ExecStart is run after a service restart (I have Restart=on-failure and RestartSec=5).
You could use systemctl set-environment to push some values into future runs of the service. For example, with a unit: [Unit]Description=testing[Service]Type=oneshotExecStart=/my/command myarg1 ${MYDONE}ExecStart=/usr/bin/systemctl set-environment MYDONE=1[Install] On the first systemctl start <unit> the last arg passed to /my/command will be '' and MYDONE will not be in the environment. On later starts, the last arg will be 1 and MYDONE=1 will be in the environment.
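For completeness, here is a sketch of what the hypothetical /my/command from the unit above could do with that variable; a value set with systemctl set-environment lives in the service manager until the next reboot, which is what makes it usable as a "first run since boot" marker:

    #!/bin/sh
    # Branch on whether a previous run already called "systemctl set-environment MYDONE=1".
    if [ -z "${MYDONE:-}" ]; then
        echo "first start since boot: running one-time setup"
        # ... first-run work goes here ...
    else
        echo "restarted service: skipping one-time setup"
        # ... restart work goes here ...
    fi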
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181765/" ] }
298,644
I am using Debian stretch (systemd).I was running the rsyslog daemon in foreground using /usr/sbin/rsyslogd -n and I did a Ctrl + Z to stop it. The state of the process changed to Tl (stopped, threaded).I issued multiple kill -15 <pid> commands to the process, and the state of the process was the same: Tl . Once I did an fg , it died. I have 3 questions. Why was the SIGSTOP -ed process not responding to SIGTERM ? Why does the kernel keeps it in the same state? Why did it get killed the moment it received the SIGCONT signal? If it was because of the previous SIGTERM signal, where was it kept until the process resumed?
SIGSTOP and SIGKILL are two signals that cannot be caught and handled by a process. SIGTSTP is like SIGSTOP except that it can be caught and handled. The SIGSTOP and SIGTSTP signals stop a process in its tracks, ready for SIGCONT . When you send that process a SIGTERM , the process isn't running and so it cannot run the code to exit. (There are also SIGTTIN and SIGTTOU , which are signals generated by the TTY layer when a backgrounded job tries to read or write to the terminal. They can be caught but will otherwise stop (suspend) the process, just like SIGTSTP . But I'm now going to ignore those two for the remainder of this answer.) Your Ctrl Z sends the process a SIGTSTP , which appears not to be handled specially in any way by rsyslogd , so it simply suspends the process pending SIGCONT or SIGKILL . The solution here is also to send SIGCONT after your SIGTERM so that the process can receive and handle the signal. Example: sleep 999 &# Assume we got PID 456 for this processkill -TSTP 456 # Suspend the process (nicely)kill -TERM 456 # Terminate the process (nicely). Nothing happenskill -CONT 456 # Continue the process so it can exit cleanly The documentation for the GNU C Library explains this quite well, I think (my highlighting): While a process is stopped, no more signals can be delivered to it until it is continued , except SIGKILL signals and (obviously) SIGCONT signals. The signals are marked as pending, but not delivered until the process is continued. The SIGKILL signal always causes termination of the process and can’t be blocked, handled or ignored. You can ignore SIGCONT , but it always causes the process to be continued anyway if it is stopped. Sending a SIGCONT signal to a process causes any pending stop signals for that process to be discarded. Likewise, any pending SIGCONT signals for a process are discarded when it receives a stop signal
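To actually watch the pending signal being held and then delivered, a variation on that example from a bash prompt:

    sleep 999 &
    pid=$!
    kill -TSTP "$pid"                 # process is now stopped
    kill -TERM "$pid"                 # SIGTERM stays pending...
    ps -o pid,stat,comm -p "$pid"     # ...state is still "T" (stopped)
    kill -CONT "$pid"                 # pending SIGTERM is delivered, sleep exits
    wait "$pid"; echo "exit status: $?"   # 143 = 128 + 15 (terminated by SIGTERM)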
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/298644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71917/" ] }
298,670
For GNU sed v4.2.2-7, info sed says: '/REGEXP/M''\%REGEXP%M' The 'M' modifier to regular-expression matching is a GNU 'sed' extension which directs GNU 'sed' to match the regular expression in 'multi-line' mode. The modifier causes '^' and '$' to match respectively (in addition to the normal behavior) the empty string after a newline, and the empty string before a newline. There are special character sequences ('\`' and '\'') which always match the beginning or the end of the buffer. In addition, the period character does not match a new-line character in multi-line mode. There's no example given. Upon testing, it's not obvious what this /M suffix actually does . It seems to behave like no /M at all. So what's a simple significant usage of /M ? Where "simplest" means "hello world" simple, nothing that requires very much additional knowledge of other programs, and "signficant" means it should do something noticeable with '/M' that can't be done if it were missing. Such as, for example, an instance of: seq 10 | sed -n '<code>;/<some regexp>/Mp' ...that behaves differently from: seq 10 | sed -n '<code>;/<some regexp>/p'
That's the equivalent of the m flag in the perl regexp operators, or using (?m) in perl regexps, or PCREs, (though gsed 's M flag would also remove the s perl flag, as without M , sed 's . matches newline, while with perl , you need the s flag for . to match newline). These flags only come into play when the pattern space contains more than one line, such as when using -z , (to read NUL delimited records), or when adding lines to the pattern space with commands like G , N or s . $ seq 3 | sed 'N;s/$/<foo>/g'12<foo>3$ seq 3 | sed 'N;s/$/<foo>/Mg'1<foo>2<foo>3 After N , the pattern space contains 1<newline>2 . Without M , $ only matches at the end of the pattern space, (after 2 ); with M , $ matches both at the end of the first line in that pattern space, (after 1 , but before the newline), and at the end the pattern space, (after 2 ).
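A similar demonstration for ^ and for the \` anchor mentioned in the quoted documentation (GNU sed; the two N commands pull three lines into one pattern space):

    seq 3 | sed 'N;N;s/^/> /Mg'     # M: ^ also matches after each newline  ->  "> 1", "> 2", "> 3"
    seq 3 | sed 'N;N;s/^/> /g'      # without M, ^ only matches the buffer start  ->  "> 1", "2", "3"
    seq 3 | sed 'N;N;s/\`/> /Mg'    # \` always means start of buffer, even with M  ->  "> 1", "2", "3"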
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165517/" ] }
298,685
Working on OS X Yosemite (v10.11.5), I'm unable to mount a Debian installer ISO (debian-8.5.0-amd64-CD-1.iso on the Debian download page ). I get this error... $ hdiutil mount debian-8.5.0-amd64-CD-1.iso hdiutil: mount failed - no mountable file systems As a workaround, I can mount the CD in a Linux VM by following the "Create copy of image" steps in the Debian installer docs . Is it possible to read the file contents directly from a Mac, without the aid of a Linux VM? Ultimately what I'm trying to do is write a script that can download the latest stable ISO, edit the ISO with preseed data, and then perform an automated install of Debian on a VM. This is mostly for learning purposes, but might be useful down the road.
Figured it out. It's a two step process. Step 1. Attaching as a block device # the '-nomount' option avoids the 'mount failed' error$ hdiutil attach -nomount debian-8.5.0-amd64-CD-1.iso /dev/disk2 Apple_partition_scheme /dev/disk2s1 Apple_partition_map /dev/disk2s2 Apple_HFS # verify disk is a block device (indicated by 'b' at line start)$ ls -l /dev/disk2br--r----- 1 amorphid staff 1, 5 Jul 27 19:41 /dev/disk2 Step 1b. (Big Sur) Load the CD9660 kernel extension # Load the kext modulesudo kmutil load -p /System/Library/Extensions/cd9660.kext Step 2. Mount the disk with cd9660 (aka ISO9660) file system # create mount point$ mkdir -p /tmp/debian-installer# mount the disk$ mount -t cd9660 /dev/disk2 /tmp/debian-installer# see da filez!$ ls -l /tmp/debian-installertotal 2296-r--r--r-- 1 root wheel 9468 Jun 4 09:24 README.html-r--r--r-- 1 root wheel 185525 Jun 1 00:52 README.mirrors.html-r--r--r-- 1 root wheel 100349 Jun 1 00:52 README.mirrors.txt-r--r--r-- 1 root wheel 461 Jun 4 08:37 README.source-r--r--r-- 1 root wheel 6000 Jun 4 09:24 README.txt-r--r--r-- 1 root wheel 146 Jun 4 08:37 autorun.infdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 bootdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 csslr-xr-xr-x 1 root wheel 1 Jun 4 08:37 debian -> .dr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 distsdr-xr-xr-x 1 root wheel 4096 Jun 4 08:37 docdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 efidr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 firmware-r--r--r-- 1 root wheel 180335 Jun 2 03:18 g2ldr-r--r--r-- 1 root wheel 8192 Jun 2 03:18 g2ldr.mbrdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 installdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 install.amddr-xr-xr-x 1 root wheel 4096 Jun 4 08:37 isolinux-r--r--r-- 1 root wheel 275432 Jun 4 09:24 md5sum.txtdr-xr-xr-x 1 root wheel 4096 Jun 4 08:37 picsdr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 pool-r--r--r-- 1 root wheel 368480 Jun 2 03:18 setup.exedr-xr-xr-x 1 root wheel 2048 Jun 4 08:37 tools-r--r--r-- 1 root wheel 233 Jun 4 08:37 win32-loader.ini Step 3. Unmount the disk # this will fail if the disk is being used$ umount /dev/disk2 Step 4. Detach the disk $ hdiutil detach /dev/disk2"disk2" unmounted."disk2" ejected.
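If you do this often, the steps can be collected into a small helper script; this is an untested sketch, and the ISO name, mount point and use of sudo are assumptions to adapt:

    #!/bin/sh
    # Attach an ISO without mounting, mount it as cd9660, then clean up.
    set -eu
    ISO=debian-8.5.0-amd64-CD-1.iso
    MNT=/tmp/debian-installer

    # First column of the first line of hdiutil output is the whole-disk device node
    DEV=$(hdiutil attach -nomount "$ISO" | awk 'NR==1 {print $1}')
    mkdir -p "$MNT"
    sudo mount -t cd9660 "$DEV" "$MNT"

    ls -l "$MNT"        # ...or copy out files, check md5sum.txt, etc.

    sudo umount "$MNT"
    hdiutil detach "$DEV"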
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/298685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181811/" ] }
298,698
I have several remote systems, and one of them, a linode running debian, is very slow to ssh into - it takes approximately 20-25 seconds every time. This seems to have happened relatively recently. I have tried setting GSSAPIAuthentication to no or to yes as suggested in several answers to similar questions, and it doesn't make a difference. It also doesn't make any difference if I login using the fqdn or the ip address. I have the same delay sshing from either my local linux box or my local Macintosh. I have no such delay sshing from the linode to the local linux box. I have another remote system using the same version of Debian and I can ssh into it in 2 seconds. The only difference between the /etc/ssh/sshd_config files on the two Debian boxes is that the fast one doesn't allow passwords and also specifies a list of allowed ciphers. If I login using ssh -vvv root@linode , the delay happens at the part marked with >>>>>> debug2: key: /root/.ssh/id_ecdsa ((nil))debug2: key: /root/.ssh/id_ed25519 ((nil))debug3: send packet: type 5debug3: receive packet: type 6debug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug3: send packet: type 50>>>>>>debug3: receive packet: type 51debug1: Authentications that can continue: publickey,passworddebug3: start over, passed a different list publickey,passworddebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /root/.ssh/id_rsadebug3: send_pubkey_test (This is only a partial log - full log available on request) I can't find anything about the login in /var/log/auth.log or /var/log/syslog during the delay time - afterwards I just get Jul 27 13:46:43 linode sshd[23049]: Accepted publickey for root from 199.241.27.237 port 51464 ssh2: RSA 89:08:ef:44:48:a4:84:b7:0a:de:14:65:1b:d9:86:f8Jul 27 13:46:43 linode sshd[23049]: pam_unix(sshd:session): session opened for user root by (uid=0)Jul 27 13:46:43 linode systemd-logind[3235]: New session 10361 of user root.
If creating the connection is slow, but the speed is normal once it is established, the most likely cause is that the server is doing a reverse DNS lookup of the client and that lookup is failing or timing out. In general, when debugging this, you can also try to log in from two terminals: in the first session, watch the sshd log on the server while you try to log in from the second. That gives you more information about what the server is doing (or waiting for). You can gather evidence that the reverse DNS lookup is the cause by setting one, or both, of the following in /etc/ssh/sshd_config : UseDNS no and UsePAM no , and see if that speeds up creating the connection. If it does, you can often leave things that way until the underlying problem is solved (if you care about that). If this is a reverse DNS lookup problem, it depends on the DNS server that the machine you log in to is using. According to Wikipedia, not all IP addresses have a reverse entry, as this is not an actual standards requirement. But more likely this is some configuration issue.
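Assuming the server is a Debian box with systemd, as in the question, applying and checking the UseDNS change could look like the following (the service unit is named sshd rather than ssh on some other distributions):

    # On the server, after putting "UseDNS no" in /etc/ssh/sshd_config:
    sudo sshd -t                     # sanity-check the config before restarting
    sudo systemctl restart ssh

    # Watch the auth log while a second terminal tries to connect:
    sudo tail -f /var/log/auth.log

    # From the client, time how long session setup takes before and after:
    time ssh root@linode true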
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11163/" ] }
298,702
I am using a tmux environment and oftentimes I have to run the same python script in 4 different panes (top 4 in the image) with the same command line arguments. Is there a way I can execute the script in each shell by executing the command in just one? I am aware of this discussion but they suggest using a different terminal environment; I am looking for something that can be done using tmux or shell scripting. The four different shells are ssh sessions to 4 different VMs.
No need for any tools. tmux can handle this: just open up the panes, ssh to the individual servers, and then Ctrl - B followed by :setw synchronize-panes and all input gets synchronized to all visible panes. Re-type this or add "off" to the command to leave.
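If you use this a lot it is worth scripting or binding to a key; these commands are illustrative, and they rely on reasonably recent tmux versions where omitting the on/off value toggles a flag option:

    tmux setw synchronize-panes on        # turn it on for the current window
    tmux setw synchronize-panes off       # ...and off again
    tmux bind S setw synchronize-panes    # from now on, Prefix+S toggles it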
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103422/" ] }
298,706
I'd like to write a function that I can call from a script with many different variables. For some reason I'm having a lot of trouble doing this. Examples I've read always just use a global variable but that wouldn't make my code much more readable as far as I can see. Intended usage example: #!/bin/bash # myscript.sh var1=$1 var2=$2 var3=$3 var4=$4 add(){ result=$para1 + $para2 } add $var1 $var2 add $var3 $var4 # end of the script. Invocation: ./myscript.sh 1 2 3 4 I tried using $1 and such in the function, but then it just takes the global one the whole script was called from. Basically what I'm looking for is something like $1 , $2 and so on but in the local context of a function. Like you know, functions work in any proper language.
To call a function with arguments: function_name "$arg1" "$arg2" The function refers to passed arguments by their position (not by name), that is $1, $2, and so forth. $0 is the name of the script itself. Example: #!/bin/bashadd() { result=$(($1 + $2)) echo "Result is: $result"}add 1 2 Output ./script.sh Result is: 3
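Extending that example slightly with argument checking and a local variable, so that result does not leak into the caller's scope (still just an illustrative sketch):

    #!/bin/bash
    add() {
        if [ "$#" -ne 2 ]; then
            echo "usage: add NUM NUM" >&2
            return 1
        fi
        local result=$(( $1 + $2 ))
        echo "Result is: $result"
    }

    add "$1" "$2"
    add "$3" "$4"
    # ./script.sh 1 2 3 4  ->  Result is: 3  /  Result is: 7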
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/298706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181822/" ] }
298,715
I have a file like this: < Dec 2, 2015 2:51:49 PM EST> <Error> <HTTP> <cphypprod1v..com> <AnalyticProviderServices0> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel> <> <1449085909249> <BEA-101017> <[ServletContext@462961596[app:bea_wls_deployment_internal module:bea_wls_deployment_internal.war path:/bea_wls_deployment_internal spec-version:null]] Root ServletException.java.lang.OutOfMemoryError: GC overhead limit exceeded >< Dec 2, 2015 2:51:49 PM EST> <Warning> <RMI> <cphypprod1v.sherwin.com> <AnalyticProviderServices0> <[STANDBY] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1449085909264> < BEA-080003> < RuntimeException thrown by rmi server: javax.management.remote.rmi.RMIConnectionImpl.invoke (Ljavax.management.ObjectName;Ljava.lang.String;Ljava.rmi.Marshal > I need to modify it so it looks like: < Dec 2, 2015 2:51:49 PM EST> <Error> <HTTP> <cphypprod1v..com> <AnalyticProviderServices0> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel> <><1449085909249> <BEA-101017> <[ServletContext@462961596[app:bea_wls_deployment_internal module:bea_wls_deployment_internal.war path:/bea_wls_deployment_internal spec-version:null]] Root ServletException. java.lang.OutOfMemoryError: GC overhead limit exceeded >< Dec 2, 2015 2:51:49 PM EST> <Warning> <RMI> <cphypprod1v.sherwin.com> <AnalyticProviderServices0> <[STANDBY] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> < 1449085909264> < BEA-080003> < RuntimeException thrown by rmi server: javax.management.remote.rmi.RMIConnectionImpl.invoke (Ljavax.management.ObjectName;Ljava.lang.String;Ljava.rmi.Marshal > Everything between < and > must be on the same line. How can I do this?
With awk awk 'BEGIN{RS=">\n+";ORS=">\n";FS="\n"} {$1=$1}1' yourfile< Jan 20, 2016 11:58:09 AM EST Test1 Sample Test1 >< Jan 20, 2016 11:58:09 AM EST Sample Test It is not T1 T2 > If you want a blank line between each output, you can add an extra \n to the ORS i.e. awk 'BEGIN{RS=">\n+";ORS=">\n\n";FS="\n"} {$1=$1}1' yourfile (although this may also add a trailing blank line at the end of the file).
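To see what the record and field juggling does, here is the same program run on a tiny made-up input (GNU awk is assumed, since a regular-expression RS is a gawk extension):

    printf '<a\nb>\n<c>\n' | awk 'BEGIN{RS=">\n+";ORS=">\n";FS="\n"} {$1=$1}1'
    # <a b>
    # <c>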
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/298715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181653/" ] }