source_id int64 (1 to 74.7M) | question stringlengths (0 to 40.2k) | response stringlengths (0 to 111k) | metadata dict
---|---|---|---|
228,824 | I have a string /tmp/testing which I want to replace with \/tmp\/testing sed works fine as follows:
echo testdir=/tmp/testing | sed 's/\//\\\//g'
testdir=\/tmp\/testing
However if I try and put this into a variable which I want to use later I get this error:
myvar=`echo testdir=/tmp/testing | sed 's/\//\\\//g'`
sed: -e expression #1, char 9: unknown option to `s'
Any ideas please? | Try this
$ echo testdir=/tmp/testing | sed 's/\//\\\//g'
testdir=\/tmp\/testing
In variable
var=$(echo testdir=/tmp/testing | sed 's/\//\\\//g') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133570/"
]
} |
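The reason the backtick form fails while $( ) works is that old-style backquote substitution does an extra round of backslash processing before the command runs, so the sed script loses one backslash; command substitution with $( ) does no such rewriting. A minimal sketch (variable names are illustrative):
```sh
# Inside `...`, \\ collapses to \, so sed is handed s/\//\\//g and complains
# about the stray delimiter at char 9; inside $(...) the script arrives intact.
bad=`echo testdir=/tmp/testing | sed 's/\//\\\//g'`    # sed: unknown option to `s'
good=$(echo testdir=/tmp/testing | sed 's/\//\\\//g')  # testdir=\/tmp\/testing

# Using a different sed delimiter avoids escaping the slashes in the pattern:
alt=$(echo testdir=/tmp/testing | sed 's|/|\\/|g')     # testdir=\/tmp\/testing
```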
228,888 | The POSIX docs here and here refer to "mandatory utilities", but I can't find any listing of such utilities. Is there one somewhere in the POSIX docs? Granted, the links given above point to older version of the docs. Maybe the nomenclature has changed since then (E.g., maybe what used to be called "mandatory utilities" are now called "required utilities", or "obligatory utilities", or "core utilities", etc.) or the mandatory/optional distinction has been dropped altogether? Clarifications welcome. | From one of the sections that you cite : Optional utilities that are present only on systems supporting the associated option; see Codes for information on the options in this volume of IEEE Std 1003.1-2001 The mandatory utilities are the ones that are not marked as optional. For example basename has no annotation to indicate that it's optional, so it's mandatory. alias is annotated as UP , so it's only mandatory if an implementation claims to include the User Portability Utilities option. command is mandatory, but the -v and -V options are not unless the implementation claims to include the User Portability Utilities option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
228,894 | I tried the following to no avail and couldn't find any documentation.
read Option
case $Option in
  [1] ) echo "1" ;;
  [2] ) echo "2" ;;
  [$'\n'] ) echo "LF" ;;
  [0] ) exit ;;
  * ) echo "Invalid input" ;;
esac
The following works though.
$ x="
> "
$ if [ "$x" = $'\n' ]; then echo "newline"; fi
newline | Since read only reads a single line of input, sans newline, you just want to check for the empty string:
read Option
case $Option in
  1 ) echo "1" ;;
  2 ) echo "2" ;;
  "" ) echo "LF" ;;
  0 ) exit ;;
  * ) echo "Invalid input" ;;
esac
[1] and 1 (and similarly for the other one-character classes) match the same strings as patterns. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36191/"
]
} |
228,925 | My TeX Live installation has a file called .texlive2015 and was originally set to drwx------T . I changed it, and I want to change it back, but I can't get the 'T' back. How do I get it back? | The T flag is a special version of the expected t . Usually t sits with execute x , but if the execute bit is not set for others then the t is flagged up as a capital.
touch f
chmod u=rwx,go=rx f   # "-rwxr-xr-x 1 roaima 0 Sep 10 23:13 f"
chmod +t f            # "-rwxr-xr-t 1 roaima 0 Sep 10 23:13 f"
chmod o-x f           # "-rwxr-xr-T 1 roaima 0 Sep 10 23:13 f"
chmod u=rwx,go=,+t f  # "-rwx-----T 1 roaima 0 Sep 10 23:13 f" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3127/"
]
} |
228,949 | I have a directory, dir1 which contains many files whose names end in either .jpg or .png . I want to copy all the .png files to dir2 which is empty. This command works: find dir1 -name '*.png' -exec cp {} dir2 \; but this command doesn't: find dir1 -name '*.png' -exec cp {} dir2 +find: missing argument to `-exec' I also tried: find dir1 -name '*.png' -exec cp {} -t dir2 +find: missing argument to `-exec' and: find dir1 -name '*.png' -exec cp {} dir2 \+find: missing argument to `-exec' After looking at this page , I even tried: find dir1 -name '*.png' -exec cp {} dir2 {} +find: Only one instance of {} is supported with -exec ... + This page says that: -exec {} + was added in [version] 4.2.12 in 2005 My version of find is 4.4.2. What am I doing wrong? | Thanks to 'steeldriver', I've worked out that the answer is because POSIX specification forbids anything from being between {} and + after -exec . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/228949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
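For completeness, two forms that do work with the + terminator, since {} has to sit immediately before it. The first assumes GNU cp (the -t option is not POSIX); the second is a portable sketch that batches the files through a small inline shell:
```sh
find dir1 -name '*.png' -exec cp -t dir2 -- {} +               # GNU cp only
find dir1 -name '*.png' -exec sh -c 'cp -- "$@" dir2' sh {} +  # portable fallback
```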
228,981 | I want a bash script that shows the CPU consumption every minute and saves it in a file. The output would be like this:
11/09/2015 10:00: CPU: 60%
11/09/2015 10:01: CPU: 72%
11/09/2015 10:02: CPU: 32%
And so on... Can somebody help me? I can do it with # sar >> Result.txt but it shows the result every 15 minutes instead of every minute. Does anyone know how to fix this? | Put this into a bash script somewhere on your system (/opt for example):
#!/bin/bash
CPU_USAGE=$(top -b -n2 -p 1 | fgrep "Cpu(s)" | tail -1 | awk -F'id,' -v prefix="$prefix" '{ split($1, vs, ","); v=vs[length(vs)]; sub("%", "", v); printf "%s%.1f%%\n", prefix, 100 - v }')
DATE=$(date "+%Y-%m-%d %H:%M:")
CPU_USAGE="$DATE CPU: $CPU_USAGE"
echo $CPU_USAGE >> /opt/cpu_usage.out
Then create a file called cpu_usage under /etc/cron.d/ with the following in it:
*/1 * * * * root /opt/your_script.sh
This should execute the script once per minute, and output the CPU usage in a percentage format on a new line within the specified file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/228981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131319/"
]
} |
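On the sar part of the question: with no arguments, sar just replays the samples that the sysstat cron job has already saved (hence the fixed 10 to 15 minute grid, depending on the distribution's cron entry); giving it an interval makes it sample live. A hedged one-liner alternative:
```sh
# One CPU report every 60 seconds, appended to the log; without a count
# argument sar keeps sampling until it is interrupted.
sar -u 60 >> Result.txt
```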
229,022 | Today I'm learning something about fifo with this article: Introduction to Named Pipes , which mentions cat <(ls -l) . I did some experiments by using sort < (ls -l) , which pops out an error: -bash: syntax error near unexpected token `('` Then I found I misadded an extra space in the command. But, why this extra command will lead to this failure? Why must the redirect symbol be close to the ( ? | Because that's not an < , it's a <() which is completely different. This is called process substitution , it is a feature of certain shells that allows you to use the output of one process as input for another. The > and < operators redirect output to and input from files . The <() operator deals with commands (processes), not files. When you run sort < (ls) You are attempting to run the command ls in a subshell (that's what the parentheses mean), then to pass that subshell as an input file to sort . This, however, is not accepted syntax and you get the error you saw. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/229022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
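A small illustration of the difference (bash, ksh93 and zsh; process substitution is not plain POSIX sh):
```sh
sort <(ls -l)        # works: sort is given a pseudo-file such as /dev/fd/63
echo <(ls -l)        # prints that substituted path instead of sorting anything
sort < /etc/passwd   # plain input redirection expects an existing file name
# sort < (ls -l)     # syntax error: '(' cannot follow a redirection operator
```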
229,034 | So I have a while loop: cat live_hosts | while read host; do \ sortstuff.sh -a "$host" > sortedstuff-"$host"; done But this can take a long time. How would I use GNU Parallel for this while loop? | You don't use a while loop. parallel "sortstuff.sh -a {} > sortedstuff-{}" <live_hosts Note that this won't work if you have paths in your live_hosts (e.g. /some/dir/file ) as it would expand to sortstuff.sh -a /some/dir/file > sortedstuff-/some/dir/file (resulting in no such file or directory ); for those cases use {//} and {/} (see gnu-parallel manual for details): parallel "sortstuff.sh -a {} > {//}/sortedstuff-{/}" <live_hosts | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229034",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117923/"
]
} |
229,048 | I'm trying to set up a new service (under Debian Jessie) that needs to set up some mounts where the network configuration is stored, and thus this service must complete before networking.service starts. I tried the following:
[Unit]
Description=mount/repair remaining filesystems (all persistent fs beyond "/")
#Before=network-pre.target
Before=networking.service
[Service]
Type=oneshot
ExecStart=/opt/intermodul-mounts/start.sh
TimeoutSec=0
RemainAfterExit=yes
[Install]
RequiredBy=networking.service
Using systemd-analyze plot I can see that my service starts, but networking.service starts about 3 seconds earlier. Apparently my config is wrong, but I'm having a hard time finding the problem... Any help greatly appreciated. Update: I currently solved it by changing the service config to start before local-fs.target instead of networking.service :
[Unit]
DefaultDependencies=no
Description=mount/repair remaining filesystems (all persistent fs beyond "/")
Before=local-fs.target
[Service]
Type=oneshot
ExecStart=/opt/intermodul-mounts/start.sh
TimeoutSec=0
RemainAfterExit=yes
[Install]
RequiredBy=local-fs.target
Still, I'd like to understand why my first configuration didn't work as expected...? | network-pre.target is a target that may be used to order services before any network interface is configured. Its primary purpose is for use with firewall services that want to establish a firewall before any network interface is up. It's a passive unit: you cannot start it directly and it is not pulled in by the network management service, but by the service that wants to run before it. You want to use network-pre.target if you want to set something up before the network starts. Services that want to be run before the network is configured should place Before=network-pre.target and also set Wants=network-pre.target to pull it in. You should put these under the [Unit] section:
Before=network-pre.target
Wants=network-pre.target
Reference | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27552/"
]
} |
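Folding the answer's two directives back into the asker's unit might look like the sketch below; the file name and the WantedBy= target are illustrative choices, not taken from the thread:
```ini
# /etc/systemd/system/intermodul-mounts.service (sketch)
[Unit]
Description=mount/repair remaining filesystems (all persistent fs beyond "/")
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/opt/intermodul-mounts/start.sh
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```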
229,049 | Data
1
\begin{document}
3
Code
#!/bin/bash
function getStart {
  local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)"
  echo $START
}
START2=$(getStart)
echo $START2
which returns 2 but I want 3 . Following this answer about How can I add numbers in a bash script, I unsuccessfully changed the end to:
START2=$((getStart+1))
How can you increment a local variable in a Bash script? | I'm getting 2 from your code. Nevertheless, you can use the same technique for any variable or number:
local start=1
(( start++ ))
or (( ++start )) or (( start += 1 )) or (( start = start + 1 )) or just
local start=1
echo $(( start + 1 ))
etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/229049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
229,064 | On Windows one can enforce pressing Ctrl + Alt + Del to fire an interrupt that brings up the login window. When logging onto a console of a Linux computer: How can I tell if this login is a real one or a mocked up on to steal my credentials? | Assuming that you want to be protected against other normal users of thesystem (if the adversary has root access, all bets are off), yourcould in principle use a secure attentionkey : An operating system's Secure Attention Key is a security tool which is provided as protection against trojan password capturing programs. It is an undefeatable way of killing all programs which could be masquerading as login applications. Users need to be taught to enter this key sequence before they log in to the system. ( Linux 2.4.2 Secure Attention Key (SAK) handling, Andrew Morton, 18 March 2001 ) This related U&L question may be of interest: How can I find the Secure Attention Key (SAK) on my system and can I disable it? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23598/"
]
} |
229,069 | I get the error message... could not change directory to "/home/corey/scripts": Permission denied ... when I run the following script ...
#!/bin/bash
sudo -u postgres psql < setup_dev_db.sql
read -rsp $'Press any key to continue...\n' -n 1 key
... the contents of setup_dev_db.sql are executed without any issues, but the error is annoying. Can I get rid of it? | To change to a directory, a user must have the 'x' permission for that directory. I assume you are running the script from '/home/corey/scripts'. When 'sudo -u postgres' changes the current user to 'postgres', it attempts to set the working directory for 'postgres' to the working directory it was called from, generating the error you're seeing. Make sure that the user 'postgres' has permission 'x' for '/home/corey/scripts'. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14143/"
]
} |
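Two hedged ways to act on that diagnosis, using the paths from the question. Either grant the postgres user search permission on the script's directory, or run the command from a directory it can already enter:
```sh
# Option 1: let postgres traverse into the directory (loosens permissions a bit)
chmod o+x /home/corey /home/corey/scripts

# Option 2: keep permissions as they are, just don't start sudo from there
cd /tmp && sudo -u postgres psql < /home/corey/scripts/setup_dev_db.sql
```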
229,107 | I would like to let dig always forget a DNS record.I mean if I do dig yahoo.com then I have a record back in with ttl for 1790 seconds.Even if I have no cache service installed, next time i do the same command, the ttl have lowered.Some how, dig do remember the answer. Is it possible to clear that, so I always get a fresh answer back? | dig doesn’t remember queries. But it makes use of name servers listed in /etc/resolv.conf , unless the server to be queried is specified explicitly. Such servers normally accept recursive queries and have caches for their results. So dig can receive records cached by (intermediate) servers. Use dig +trace … to override this behaviour, forcing it to query an authoritative server. See dig(1) for more information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121503/"
]
} |
229,188 | How can I keep long strings from truncating in terminal? For example if I run journalctl -xn There's a lot of text that I cannot read. I am open to using other programs/tools. | From the journalctl manpage: The output is paged through less by default, and long lines are "truncated" to screen width. The hidden part can be viewed by using the left-arrow and right-arrow keys. Paging can be disabled; see the --no-pager option and the "Environment" section below. If you don't want to constantly be using the left and right arrow keys, simply pipe it directly to less : $ journalctl -xn | less This will wrap lines that are too long for your terminal (the default behavior of less , which journalctl overrides). Or, of course, if you don't mind possibly having to use your terminal's scrollback, you could use no pager at all: $ journalctl -xn --no-pager | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/229188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104015/"
]
} |
229,234 | Having a CSV file like this:
HEADER
"first, column"|"second "some random quotes" column"|"third ol' column"
FOOTER
and looking for a result like:
HEADER
first, column|second "some random quotes" column|third ol' column
in other words, removing "FOOTER" and the quotes at the beginning, at the end, and around | . So far this code works:
sed '/FOOTER/d' csv > csv1 | #remove FOOTER
sed 's/^\"//' csv1 > csv2 | #remove quote at the beginning
sed 's/\"$//' csv2 > csv3 | #remove quote at the end
sed 's/\"|\"/|/g' csv3 > csv4 #remove quotes around pipe
As you see the problem is it creates 4 extra files. Here is another solution, whose goal is to avoid creating extra files and to do the same thing in a single script. It doesn't work very well.
#!/bin/ksh
sed '/begin/, /end/ {
  /FOOTER/d
  s/^\"//
  s/\"$//
  s/\"|\"/|/g
}' csv > csv4 | First of all, as Michael showed, you can just combine all of these into a single command:
sed '/^FOOTER/d; s/^\"//; s/\"$//; s/\"|\"/|/g' csv > csv1
I think some sed implementations can't cope with that and might need:
sed -e '/^FOOTER/d' -e 's/^\"//' -e 's/\"$//' -e 's/\"|\"/|/g' csv > csv1
That said, it looks like your fields are defined by | and you just want to remove " around the entire field, leaving those that are within the field. In that case, you could do:
$ sed '/FOOTER/d; s/\(^\||\)"/\1/g; s/"\($\||\)/\1/g' csv
HEADER
first, column|second "some random quotes" column|third ol' column
Or, with GNU sed :
sed -r '/FOOTER/d; s/(^|\|)"/\1/g; s/"($|\|)/\1/g' csv
You could also use Perl:
$ perl -F"|" -lane 'next if /FOOTER/; s/^"|"$// for @F; print @F' csv
HEADER
first, column|second some random quotes column|third ol' column | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/229234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66371/"
]
} |
229,247 | I have a column of words in which English words are glued to Chinese words like this: abominate******** abhor************* (The stars represent the Chinese alphabet) I want to write a script to separate the English words and put it in another file. Is sth like this possible by script writing? Any suggestion is welcome. | First of all, as Michael showed, you can just combine all of these into a single command: sed '/^FOOTER/d; s/^\"//; s/\"$//; s/\"|\"/|/g' csv > csv1 I think some sed implementations can't cope with that and might need: sed -e '/^FOOTER/d' -e 's/^\"//' -e 's/\"$//' -e 's/\"|\"/|/g' csv > csv1 That said, it looks like your fields are defined by | and you just want to remove " around the entire field, leaving those that are within the field. In that case, you could do: $ sed '/FOOTER/d; s/\(^\||\)"/\1/g; s/"\($\||\)/\1/g' csv HEADERfirst, column|second "some random quotes" column|third ol' column Or, with GNU sed : sed -r '/FOOTER/d; s/(^|\|)"/\1/g; s/"($|\|)/\1/g' csv You could also use Perl: $ perl -F"|" -lane 'next if /FOOTER/; s/^"|"$// for @F; print @F' csv HEADERfirst, column|second some random quotes column|third ol' column | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/229247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133863/"
]
} |
229,275 | I have a sed command that removes comments in a file as, sed -i /^#/d /path/to/file This works but not when the comments are indented/have a preceding space. like #this is a good comment ---- works #this is an indented comment ---- doesn't work How can i change it to remove lines that has # as the first visible character? | Modify your regex so that it allows for leading whitespace. sed -e '/^[ \t]*#/d' This regex will match lines beginning with 0 or more spaces or tabs (in any order), followed by a hash sign. GNU sed also supports symbolic names: sed -e '/^[[:space:]]*/d' Which includes all whitespace characters, including the funny unicode foreign language ones. That's less portable, however. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
229,348 | I've accidentally created a file called > option[value='2016'] How can I delete it? My attempts:
$ rm "> option[value='2016']"
rm: cannot remove ‘> option[value='2016']’: No such file or directory
$ rm \> o*
rm: cannot remove ‘>’: No such file or directory
rm: cannot remove ‘o*’: No such file or directory
$ rm `> o*`
rm: missing operand
Try 'rm --help' for more information.
$ rm \> option*
rm: cannot remove ‘>’: No such file or directory
rm: cannot remove ‘option*’: No such file or directory
$ rm '\> option*'
rm: cannot remove ‘\\> option*’: No such file or directory
$ rm "\> option*"
rm: cannot remove ‘\\> option*’: No such file or directory
File listing:
HAPPY_PLUS_OPTIONS/
o*
op*
> option[value='2016']
> option[value='ALFA ROMEO']
README.md
rspec_conversions/
.rubocop.yml
SAD/
SAD_PLUS_OPTIONS/ | Another option is ls -i , which gives (with the proper inode value)
5233 > option[value='2016']
5689 foo
then
find . -inum 5233 -delete
Optionally (to preview):
find . -inum 5233 -print
You can also add -xdev if there is another filesystem beneath. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
229,354 | Say you want to zero-out a failing hard disk. You want to overwrite as much as possible with zeros. What you don't want is: the process aborts on the first write-error. How to do that? AFAICS, plain dd only provides an option for ignoring read errors. Thus, something like dd if=/dev/zero of=/dev/disk/by-id/lousy-vendor-123 bs=128k is not enough. ddrescue seems to be better at ignoring errors - but what would be the optimal command line with it? My try with GNU ddrescue: ddrescue --verbose --force --no-split /dev/zero /dev/disk/by-id/lousy-vendor-123 | I prefer badblocks in destructive write mode for this. It writes, it continues doing so when it hits errors, and finally it tells you where those errors were, and this information may help you decide what to do next (Will It Blend?). # badblocks -v -b 4096 -t random -o badblocks.txt -w /dev/destroymeChecking for bad blocks in read-write modeFrom block 0 to 2097151Testing with random pattern: doneReading and comparing: donePass completed, 52105 bad blocks found. (0/52105/0 errors) And the block list: # head badblocks.txt20970002097001209700220970032097004 And what's left on the disk afterwards: # hexdump -C /dev/destroyme00000000 be e9 2e a5 87 1d 9e 61 e5 3c 98 7e b6 96 c6 ed |.......a.<.~....|00000010 2c fe db 06 bf 10 d0 c3 52 52 b8 a1 55 62 6c 13 |,.......RR..Ubl.|00000020 4b 9a b8 d3 b7 57 34 9c 93 cc 1a 49 62 e0 36 8e |K....W4....Ib.6.| Note it's not really random data - the pattern is repetitive, so if you skipped 1MiB you'd see the same output again. It will also try to verify by reading the data back in, so if you have a disk that claims to be writing successfully but returns wrong data on readback, it will find those errors too. (Make sure no other processes write to the disk while badblocks is running to avoid false positives.) Of course with a badly broken disk this may take too long: there is no code that would make it skip over defective areas entirely. The only way you could achieve that with badblocks would be using a much larger blocksize. I'm not sure if ddrescue does this any better; it's supposed to do that in the other direction (recover as much data as fast as possible). You can do it manually for dd/ddrescue/badblocks by specifying first/last block... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
229,355 | Every day 10GB text file is downloaded, the file is ~200 million lines and ~1% of the lines are changed the next day. I want to keep daily files as backup, but I'm trying to save disk space by using the CPU. EDIT Currently the best way I found is to keep diff files and rebuild them with patch (how @Simon suggested), for example on 01 Jan download the big file and then for a whole month keep doing only diff diff 01jan.txt 02jan.txt > 02jan.diff; rm 02jan.txt and so on for every day of the month. Is there better way to do this? | I prefer badblocks in destructive write mode for this. It writes, it continues doing so when it hits errors, and finally it tells you where those errors were, and this information may help you decide what to do next (Will It Blend?). # badblocks -v -b 4096 -t random -o badblocks.txt -w /dev/destroymeChecking for bad blocks in read-write modeFrom block 0 to 2097151Testing with random pattern: doneReading and comparing: donePass completed, 52105 bad blocks found. (0/52105/0 errors) And the block list: # head badblocks.txt20970002097001209700220970032097004 And what's left on the disk afterwards: # hexdump -C /dev/destroyme00000000 be e9 2e a5 87 1d 9e 61 e5 3c 98 7e b6 96 c6 ed |.......a.<.~....|00000010 2c fe db 06 bf 10 d0 c3 52 52 b8 a1 55 62 6c 13 |,.......RR..Ubl.|00000020 4b 9a b8 d3 b7 57 34 9c 93 cc 1a 49 62 e0 36 8e |K....W4....Ib.6.| Note it's not really random data - the pattern is repetitive, so if you skipped 1MiB you'd see the same output again. It will also try to verify by reading the data back in, so if you have a disk that claims to be writing successfully but returns wrong data on readback, it will find those errors too. (Make sure no other processes write to the disk while badblocks is running to avoid false positives.) Of course with a badly broken disk this may take too long: there is no code that would make it skip over defective areas entirely. The only way you could achieve that with badblocks would be using a much larger blocksize. I'm not sure if ddrescue does this any better; it's supposed to do that in the other direction (recover as much data as fast as possible). You can do it manually for dd/ddrescue/badblocks by specifying first/last block... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99999/"
]
} |
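The diff/patch bookkeeping sketched in the question can be closed into a loop roughly as follows; this assumes each daily .diff was taken against the previous day's (reconstructed) file, which the question leaves implicit:
```sh
# Keep only 01jan.txt plus one small diff per day ...
diff 01jan.txt 02jan.txt > 02jan.diff && rm 02jan.txt

# ... and rebuild any later day by replaying the diffs in order:
cp 01jan.txt rebuilt.txt
for d in 02jan.diff 03jan.diff 04jan.diff 05jan.diff; do
    patch rebuilt.txt "$d"
done
```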
229,390 | I'd like to use the new codec x265 (libx265) to encode my video collection. For this I created a lovely bash script under linux which works in general very well! But something is strange: I prohibit the output of ffmpeg to echo on my own way. With x264 (the "old" one) everything works fine. But as soon as I use x265 I get always this kind of output on my terminal: x265 [info]: HEVC encoder version 1.7x265 [info]: build info [Linux][GCC 5.1.0][64 bit] 8bppx265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64x265 [info]: Main profile, Level-2.1 (Main tier)x265 [info]: Thread pool created using 2 threadsx265 [info]: frame threads / pool features : 1 / wpp(5 rows)x265 [info]: Coding QT: max CU size, min CU size : 64 / 8x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intrax265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2x265 [info]: Keyframe min / max / scenecut : 25 / 250 / 40x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2x265 [info]: b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 3x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 1.0 / 64 / 1x265 [info]: Rate Control / qCompress : CRF-28.0 / 0.60x265 [info]: tools: rd=3 psy-rd=0.30 signhide tmvp strong-intra-smoothingx265 [info]: tools: deblock sao This is the way I encode my video with ffmpeg: ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet /output/file.mp4 <>/dev/null 2>&1 I thought that the <>/dev/null 2>&1 and the -loglevel quiet will do this but apparently I'm mistaken. How can I solve this problem? Thanks for your help! | Solution You need to add an additional parameter -x265-params log-level= xxxxx , as in ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet -x265-params log-level=quiet \ /output/file.mp4 <>/dev/null 2>&1 Note that, while the FFmpeg option is -loglevel ,the x25 option is log-level , with a - between log and level ;see the x265 Command Line Options documentation. Explanation The FFmpeg command you wrote should have worked(see: ffmpeg documentation );however, it looks like FFmpeg doesn't tell the x265 encoderto use the loglevel you're telling FFmpeg to use. So, assuming you want to the whole FFmpeg command to run quietly(i.e., suppress the messagesfrom both the main FFmpeg program and the x265 encoder),you need to explicitly set the log level options for both of them. Analogously, if you have an FFmpeg command that looks like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params parameter1 = value : parameter2 = value outputfile.xyz You can add the log-level=error optionto the list of x265-params like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params log-level=error: parameter1 = value : parameter2 = value … | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133959/"
]
} |
229,401 | I'd like to swap RETURN (scroll forward N lines, default one window) and SPACE (scroll forward N lines, default 1) in less to get a for me more natural way to page through man pages. I saw years ago a colleague having this setup during a telnet session to a router while he was skimming through a config file, so I don't actually know if this setting was on his SSH client, on the node or wherever. Anyway, I'd like to achieve this to the most extent possible, be it locally or remotely. I checked less man page's key bindings section and found a reference to lesskey . Unfortunately, Darwin doesn't have this program. | Solution You need to add an additional parameter -x265-params log-level= xxxxx , as in ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet -x265-params log-level=quiet \ /output/file.mp4 <>/dev/null 2>&1 Note that, while the FFmpeg option is -loglevel ,the x25 option is log-level , with a - between log and level ;see the x265 Command Line Options documentation. Explanation The FFmpeg command you wrote should have worked(see: ffmpeg documentation );however, it looks like FFmpeg doesn't tell the x265 encoderto use the loglevel you're telling FFmpeg to use. So, assuming you want to the whole FFmpeg command to run quietly(i.e., suppress the messagesfrom both the main FFmpeg program and the x265 encoder),you need to explicitly set the log level options for both of them. Analogously, if you have an FFmpeg command that looks like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params parameter1 = value : parameter2 = value outputfile.xyz You can add the log-level=error optionto the list of x265-params like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params log-level=error: parameter1 = value : parameter2 = value … | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36191/"
]
} |
229,402 | The EFI partition is formatted in ext4 during the setup of debian whereas it should be vfat. I am trying to preseed the install of debian jessie and I can't get it working since the UEFI partition is formatted in ext4 (got information with blkid). I can't get it formatted in vfat. My preseed for partitionning is the following: d-i partman-auto/expert_recipe string \ boot-root :: \ 1 1 1 free \ $gptonly{ } \ $primary{ } \ $bios_boot{ } \ method{ biosgrub } \ . \ 512 100 512 vfat \ $gptonly{ } \ $primary{ } \ method{ efi } \ format{ } \ $lvmignore{ } \ mountpoint{ /boot/efi } \ . \ ... . And I get the following error: "Failed to mount vfat filesystem on /boot/efi"(error message translated from FR, sry) Of course, its an ext4 fs...! Could anybody help? | Solution You need to add an additional parameter -x265-params log-level= xxxxx , as in ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet -x265-params log-level=quiet \ /output/file.mp4 <>/dev/null 2>&1 Note that, while the FFmpeg option is -loglevel ,the x25 option is log-level , with a - between log and level ;see the x265 Command Line Options documentation. Explanation The FFmpeg command you wrote should have worked(see: ffmpeg documentation );however, it looks like FFmpeg doesn't tell the x265 encoderto use the loglevel you're telling FFmpeg to use. So, assuming you want to the whole FFmpeg command to run quietly(i.e., suppress the messagesfrom both the main FFmpeg program and the x265 encoder),you need to explicitly set the log level options for both of them. Analogously, if you have an FFmpeg command that looks like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params parameter1 = value : parameter2 = value outputfile.xyz You can add the log-level=error optionto the list of x265-params like this: ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params log-level=error: parameter1 = value : parameter2 = value … | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133970/"
]
} |
229,408 | I am on Linux Mint 17. I experienced an unexpected software failure. The desktop did not respond to anything. As I am inexperienced, I only managed to switch to the console using CTRL + ALT + F1 and then restart the machine using: reboot Is there a more appropriate procedure? | Update 2018-Apr-15 As of Linux Mint 18 there has been a move to LightDM display manager, which you can restart as follows: sudo service lightdm restart Original Running reboot is a perfectly safe way of doing it. If you just wanted to log out (restart your GUI session), you could run: sudo service mdm restart That will restart the Mint Display Manager, the default display manager under Mint. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
229,424 | I wrote a simple bash script (I'm pretty new to it, you know!) to accept a directory as its argument and print its listing, testing for files and directories. Here's how I approached it: #!/bin/bash# Lists all files and directories (non-recursively) in a specified directoryif [ $# -gt 1 ]; then echo "Error. Please specify only one directory. ($#)" exitfiif [ -z $1 ]; then echo "No directory specified. Exiting." exitfiecho "Listing for $1:"$dirs=`ls $1`echo "Dirs: $dirs" # Just to confirm if all is well# Loop through and printfor i in $dirs;do if [ -f $i ]; then echo "File: $i" elif [ -d $i ]; then echo "Directory: $i" fidone The problem is in my for loop. When I run this script and feed it my home directory, I get this error: ./list_files_and_dirs.sh: line 16: =Calibre: command not found I know I'm making a mistake in command substitution involving variables, but I just don't know what. Someone please help! ================= Update ================= Here's the new (final section) code as per inputs from answers: dirs=`ls "$1"`#echo "Dirs: $dirs" # Just to confirm if all is wellIFS=$'\n'# Loop through and printfor i in $dirs;do if [ -f "$i" ]; then echo "File: $i" elif [ -d "$i" ]; then echo "Directory: $i" fidone | Note: I've assumed you're teaching yourself Bash. Do not use this code in production. find "$directory" -maxdepth 1 -type d will get you the directories, -type f will get you the files. Since it is complaining about line 16, let's have a look at it: $dirs=`ls $1` If you want to assign to a variable, you should not include the $ . I'm guessing you meant: dirs=`ls $1` What's happening now, is this: $dirs is probably empty, so substituted with nothing. The ls command is run, and its output substituted in the 'command'. The first file in your directory is called Calibre , leaving you with the command: =Calibre x y z ... =Calibre is not a valid command, thus the error. However, we're not there yet: this will go wrong if there are spaces in any of the filenames. To fix that, you need to do more: You need to include IFS=$'\n' somewhere before the for loop. This sets the fields separator to the linefeed character, which stops the for loop from splitting up the files on the spaces or tabs (it's technically possible for a file to have a linefeed in its filename as well: you're unlikely to ever encounter such a file and it won't cause serious problems in this case if you do run into such a file, but it's worth being aware of the possibility in case it ever does matter). In order for a filename with spaces not to turn into several different arguments to -f and -d , you need to put quotes $i . (So: [ -f "$i" ] and [ -d "$i" ] . In order to support directories with spaces in them, you should do the same for $1 where it is used. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41746/"
]
} |
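A glob-based variant of the same loop, which avoids parsing ls output and the IFS juggling discussed in the answer (a sketch, not the answer's own code):
```bash
#!/bin/bash
# List the direct children of the directory given as $1 and classify each one.
for i in "$1"/*; do
    if [ -f "$i" ]; then
        echo "File: $i"
    elif [ -d "$i" ]; then
        echo "Directory: $i"
    fi
done
```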
229,431 | I'm new to using SSH and related technologies, so it's very possible I'm not understanding something basic. I'm trying to SSH into a web server (that I own) and the connection is never established due to timeout. ~ $ ssh -vvv example.comOpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011debug1: Reading configuration data /Users/USER/.ssh/configdebug1: Reading configuration data /etc/ssh_configdebug1: /etc/ssh_config line 20: Applying options for *debug1: /etc/ssh_config line 102: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to example.com [123.45.67.89] port 22.debug1: connect to address IPADD port 22: Operation timed outssh: connect to host example.com port 22: Operation timed out My first thought was that I had somehow specified the domain wrong, or that something was wrong with my site. So I tried connecting to the same domain via FTP, and that worked fine (was prompted for user name): ~ $ ftpftp> open(to) example.comConnected to example.com.220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------220-You are user number 2 of 50 allowed.220-Local time is now 12:47. Server port: 21.220-This is a private system - No anonymous login220-IPv6 connections are also welcome on this server.220 You will be disconnected after 15 minutes of inactivity.Name (example.com:USER): So then I thought maybe I was just using SSH wrong. I started watching this tutorial video . At about 1 minute in he does ssh [email protected] and gets a username prompt, but it gives me the same timeout as above. I then tried ssh google.com which does the same. ssh localhost , on the other hand, works fine. So the problem seems to be something to do with SSH requests over a network. My next thought was that it may be a firewall issue. I do have Sophos installed on this machine, but according to my administrator it "should not" block outgoing SSH requests. Can anyone help figure out why this is happening? | That error message means the server to which you are connecting does not reply to SSH connection attempts on port 22. There are three possible reasons for that: You're not running an SSH server on the machine. You'll need to install it to be able to ssh to it. You are running an SSH server on that machine, but on a nonstandard port. You need to figure out on which port it is running; say it's on port 2222, you then run ssh -p 2222 hostname . You are running an SSH server on that machine, and it does use the port on which you are trying to connect, but the machine has a firewall that does not allow you to connect to it. You'll need to figure out how to change the firewall, or maybe you need to ssh from a different host to be allowed in. EDIT : as (correctly) pointed out in the comments, the third is certainly the case; the other two would result in the server sending a TCP "reset" package back upon the client's connection attempt, resulting in a "connection refused" error message, rather than the timeout you're getting. The other two might also be the case, but you need to fix the third first before you can move on. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/229431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133987/"
]
} |
229,440 | I would like to check, in a bash script, on what filesystem type a directory is. The idea is something like if [path] is on a [filesystem] filesystem then filesystem specific commandend if | Use df . You can pass it a path, and it will give you the filesystem information for that path. If you need the filesystem type, use the -T switch, like so: $ df -T testFilesystem Type 1K-blocks Used Available Use% Mounted on/dev/sda2 ext4 182634676 32337180 141020160 19% /home To extract the filesystem type, you can parse it (use the -P switch to avoid df breaking lines if the device part is too long): $ df -PT test | awk 'NR==2 {print $2}'ext4 So you can use that value in an if construct like so: if [ "$(df -PT "$path" | awk 'NR==2 {print $2}')" = "ext4" ] ; then it is an ext4 filesystemfi Beware that the device column can contain spaces (but it's rare), in which case the parsing will fail. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70379/"
]
} |
229,488 | I am wondering if it's possible to get files with sftp , but prevent it from re-downloading files that already exist in the destination folder? | sftp has limited capabilities. Nonetheless, the get command has an option which may do the trick: get -a completes partial downloads, so if a file is already present on the client and is at least as large as the file on the server, it won't be downloaded. If the file is present but shorter, the end of the file will be transferred, which makes sense if the local file is the product of an interrupted download. The easiest way to do complex things over SFTP is to use SSHFS . SSHFS is a filesystem that uses SFTP to make a remote filesystem appear as a local filessytem. On the client, SSHFS requires FUSE , which is available on most modern unices. On the server, SSHFS requires SFTP; if the server allows SFTP then you can use SSHFS with it. mkdir serversshfs server.example.com:/ serverrsync -a server/remote/path /local/path/fusermount -u server Note that rsync over SSHFS can't take advantage of the delta transfer algorithm, because it's unable to compute partial checksums on the remote side. That's irrelevant for a one-time download but wasteful if you're synchronizing files that have been modified. For efficient synchronization of modified files, use rsync -a server:/remote/path /local/path/ , but this requires SSH shell access, not just SFTP access. The shell access can be restricted to the rsync command though. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89322/"
]
} |
229,504 | I have a project where I am downloading and compiling a bunch of files. The assumptions I can make about these files are: there will only be one top level directory; the folder name doesn't necessarily match the tar.xz/gz filename (in other words something-1.3.5a.tar.gz might extract to a folder called something-1.3.5). In order to lessen the amount of typing I have to do I wrote a small script, which among other things does the following: extracts the tar.xz/gz file, then cds into the directory. My current hack is to do something like this:
test1=$(tar -axvf something-1.3.5a.tar.gz)
cd $(echo $test1 | cut -f1 -d" ")
Basically what this does is it captures the output of the extraction, and takes the first line (which is the top level directory) and then cds into it. So, my question is this: is there a cleaner/better way of doing this? | #!/bin/sh
Dir_name=`tar -tzf something-1.3.5a.tar.gz | head -1 | cut -f1 -d"/"`
echo $Dir_name
tar option details:
-t, --list
-z, --gzip, --ungzip  filter the archive through gzip
-f, --force-local     archive file is local even if it has a colon
Here, we are getting the list of files in the tar and taking the first line using the head -1 command, then extracting the first field using the cut command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134034/"
]
} |
229,530 | I can see the difference between /dev/tty and /dev/tty0 by testing the provided method from this question . But I really wonder about the practical usage of those devices (like situations they will be used). | /dev/tty is the controlling tty of the current process , for any process that actually opens this special file. It isn’t necessarily a virtual console device ( /dev/tty n ), and can be a pty , a serial port, etc. If the controlling tty isn’t a virtual console, then the process has not to interact with console devices even if its pseudotty is actually implemented on the system console. E. g. for a shell in a terminal emulator under locally-running X server, said programs form such chain of interactions as: Unix shell ⇕ /dev/pts/2 (≡ /dev/tty for its processes) kernel pty driver ⇕ /dev/ptmx terminal emulator ⇕ X Window protocol X server ⇕ /dev/tty7 (≡ /dev/tty for the server) system console z x c ↿⇂[_̈░░] user Use of /dev/tty by userland programs includes: Write something to the controlling terminal, ignoring all redirections and pipes; Make an ioctl() – see tty_ioctl(4); For example, detach from the terminal (TIOCNOTTY). /dev/tty0 is the currently active (i. e. visible on the monitor) virtual console of the operating system . This special file unlikely is used significantly by system software, but /dev/console is virtually an “alias” for tty0 and /dev/console has much use by syslog daemons and, sometimes, by the kernel itself. Experiment to show the difference: run a root shell on tty3 ( Ctrl + Alt + F3 ) or in a terminal emulator. Now # sleep 2; echo test >/dev/tty then quickly Ctrl + Alt + F2 , wait for two seconds, and Ctrl + Alt +whatever back. Where do you see the output? And now the same test for /dev/tty0 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63649/"
]
} |
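A one-line illustration of the first listed use, writing to the controlling terminal even when stdout and stderr are redirected away:
```sh
{ echo "goes to /dev/null"; echo "still reaches your screen" > /dev/tty; } > /dev/null 2>&1
```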
229,541 | When running ps with the -f option in PuTTY (to see the command corresponding to each process), lines which are longer than the terminal width are not fully visible (they are not wrapped on multiple lines). How can I force line wrapping so that I can see the full commands (on multiple lines, if necessary) when running ps -f ? | If you have a POSIX-conforming ps implementation, you may try ps -f | more Note that we¹ recently changed the behavior and if you have an implementation that follows POSIX issue 7 tc2, you may try: ps -wwf | more ¹ We being the people who have weekly teleconferences to discuss the evolvement of the POSIX standard. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/229541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101052/"
]
} |
229,555 | I would like to disable the Ctrl-Alt-Backspace combination using a command line tool, without root priviliges. I know I can use setxkbmap to en able “zapping” with the option terminate:ctrl_alt_bksp . Further, setxkbmap -option [naming no option] removes all options. Is there a way to unset only one option? | A little bit crutched: remove all options using -option with an empty argument first, then set same options with terminate excluded from the list: setxkbmap -option -option $(setxkbmap -query | sed -n 's/options:\s*\(terminate:[^:]*,\)\?\|,terminate:[^,]*//gp') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45956/"
]
} |
229,622 | I am trying to run a small script which checks two variables to see if they are empty or not. I am getting the correct output, but it also shows me an error about a missing right bracket. I tried using double parentheses as well as round brackets, but that didn't work.
var=""
non="hi"
if ([ -z "$var"] && [ -z "$non"])
then
  echo "both empty"
else
  echo "has data"
fi
OUTPUT:
line 6: [: missing `]'
has data | You need a space between "$non" and ], and you don't need ()'s:
if [ -z "$var" ] && [ -z "$non" ] | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92804/"
]
} |
229,661 | I have used atrpms repo before, but recently I am getting this error whenever I try to update or install something http://dl.atrpms.net/el6Server-x86_64/atrpms/stable/repodata/repomd.xml: [Errno 12] Timeout on http://dl.atrpms.net/el6Server-x86_64/atrpms/stable/repodata/repomd.xml: (28, 'connect() timed out!') . I checked http://dl.atrpms.net/ and that gives timed out error. Does anyone know if that repo has moved anywhere? | Just checked one of their mirrors and the last update there seems to be from December 2014, for Fedora 20 packages. But I didn't find any recent news that it might be down for everybody. Right now their servers seem to time out on connection requests. If you only require some older packages from them (only el6 ?) then you could easily switch the repo configuration /etc/yum.repos.d/atrpms.repo to any of the mirrors, like the one mentioned above or this one: https://www.mirrorservice.org/sites/dl.atrpms.net/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33783/"
]
} |
229,711 | I recently installed CentOS 7 as the sole OS on an Acer Aspire T. There is no GUI, as it is a server with a terminal-only interface. What do I need to do to get CentOS 7 to be able to see and list the available wifi connections? When I use the Network Manager Command Line Tool nmcli, I get the following, which indicates that nmcli has wifi enabled, but that it cannot see any wifi connections: [root@localhost ~]# nmcli general statusSTATE CONNECTIVITY WIFI-HW WIFI WWAN-HW WWAN disconnected none enabled enabled enabled enabled [root@localhost ~]# nmcli connection showNAME UUID TYPE DEVICE [root@localhost ~]# nmcli device statusDEVICE TYPE STATE CONNECTION eno1 ethernet unmanaged -- lo loopback unmanaged -- wlp3s0 wifi unmanaged -- I then checked the firewall config, which shows that ssh is the only open service, as follows: [root@localhost network-scripts]# firewall-cmd --list-allpublic (default, active) interfaces: eno1 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: What do I need to change in order to get CentOS to be able to see the wifi connections? There are available connections. Does the firewall need to change? Or something else? EDIT: I am not able to do the things that @TimS. suggested because the following tools are not pre-installed on the computer, and it is not connected directly to the internet: [root@localhost ~]# ifconfig -a-bash: ifconfig: command not found[root@localhost ~]# lspci -v-bash: lspci: command not found [root@localhost ~]# iw dev-bash: iw: command not found[root@localhost ~]# iwconfig-bash: iwconfig: command not found I am able to open nmtui , but am not sure what parameters to enter to create a new connection. [root@localhost ~]# iw dev -bash: iw: command not found [root@localhost ~]# iwconfig -bash: iwconfig: command not found | When I use the Network Manager Command Line Tool nmcli, I get the following, which indicates that nmcli has wifi enabled, but that it cannot see any wifi connections: Not at all. They only say that you haven't configured any wifi connection. You need to use other commands to check wifi connections and connect to wifi. Make sure NetworkManager supports wifi and manages the wireless device wlp3s0 wifi unmanaged -- This is a problem. If NetworkManager doesn't manage your wireless ethernet controller then you cannot expect it to see wifi networks and connect to them. NetworkManager would normally manage all devices automatically after a fresh boot. You might want to check presence of the wifi package. If you don't have that package installed, you don't have wifi support in NetworkManager. rpm -q NetworkManager-wifi In that case you have to temprarily use an ethernet connection or transfer the RPM via other means. yum install NetworkManager-wifisystemctl restart NetworkManager Connect using nmcli To view available wifi networks: nmcli dev wifi list To connect to a wifi network called TestWifi: nmcli --ask dev wifi connect TestWifi Connect using nmtui I also just successfully tried to view wifi networks in nmtui (not in CentOS but it should work). Choosing Activate new connection was enough to see the list of available wifi networks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
229,715 | I have a file whose first line starts with <?xml I can remove it with sed using /<\?xml/d but if I try and ensure start of line - /^<\?xml/d it doesn't match. However other lines such as <head ... are removed with /^<head/d I also tried /^\<\?xml/d but no match. | Use: sed '/^<?xml/d' filename Under GNU sed, \? means zero or one of the preceding character. (In POSIX sed, \? is undefined.) Since you want to match a literal ? , leave it unescaped. Examples Let's consider this test file: $ cat filename<?xml deleteme<.xml keepme..xml keepme The solution above produces the desired result: $ sed '/^<?xml/d' filename<.xml keepme..xml keepme The first command in the question incorrectly produces no results: $ sed '/<\?xml/d' filename$ This is because it matches all lines which contain xml optionally preceded by < . Since all lines contain xml , they are all deleted. The second command deletes nothing: $ sed '/^<\?xml/d' filename<?xml deleteme<.xml keepme..xml keepme This deletes any line that starts with zero or one < followed immediately by xml . Since the lines always have at least one character between < and xml , no lines are deleted. How to escape characters when in doubt If you are unsure if a character is regex active and you want to deactivate it, the safe thing to do is put it in square brackets: $ sed '/^[<][?]xml/d' filename<.xml keepme..xml keepme Inside [...] , all characters are treated as literal characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
229,745 | I like to use shutdown -h TIME/+DELAY sometimes. However, since the switch to systemd (on Ubuntu), things seem to have changed quite a bit. Apart from the fact that a previous shutdown command no longer prevents running a new one, I can't figure out how to check for the planned shutdown time of a current shutdown process. I used to just run ps aux | grep shutdown to see the planned shutdown time. Now with systemd it just shows something like this: root 5863 0.0 0.0 13300 1988 ? Ss 09:04 0:00 /lib/systemd/systemd-shutdownd How can I check the scheduled shutdown time of such a process? I tried shutdown -k , but instead of only writing a wall message, it seems to also change the scheduled shutdown time to now+1 minute. | Most simple: (and working on Debian/Ubuntu and RedHat) date --date @$(head -1 /run/systemd/shutdown/scheduled |cut -c6-15) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16451/"
]
} |
229,787 | I would like to find an application external to the internet browser that would play only youtube sound. Preferably a very light one, CLI or GUI. | There is youtube-dl that lets you download youtube videos from the cli. There is also a new(ish) tool called mps-youtube, that I haven't personally used, but looks like it does exactly what you want. https://github.com/mps-youtube/mps-youtube Give it a try and let us know if it works MPS is available in Ubuntu repos. Launch the MPS console with mpsyt To search youtube in mps console: /<your_search_term> After searching a term, and then selecting a number, the stream will play sound; there are play/pause, seek, volume options: To see options: mpsyt h More detailed options: mpsyt help search mpsyt help download After searching and then selecting the number of the stream with a command that would show download options: d <number> Playlists can also be searched in the PLS console with pls <search_term> or even simpler //<serch_term> | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/229787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
229,802 | Unfortunately I lost my source code and I just have the output file that was made with gcc in Linux, and I don't have any access to my PC now. Is there any way to convert the output file back to the source file (in C under Linux)? | So you had a cow, but you inadvertently converted it to hamburger, and now you want your cow back. Sorry, it just doesn't work that way. Simply restore the source file from your backups. Ah, you didn't have backups. Unfortunately, the universe doesn't give you a break for that. You can decompile the binary. That won't give you your source code, but it'll give you some source code with the same behavior. You won't get the variable names unless it was a debug binary. You won't get the exact same logic unless you compiled without optimizations. Obviously, you won't get comments. I've used Boomerang to decompile some programs, and the result was more readable than the machine code. I don't know if it's the best tool out there. Anyway, don't expect miracles. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134265/"
]
} |
229,849 | I need to indirectly reference a variable in the bash shell. I basically want to what you can do in make by writing $($(var)) . I have tried using ${$var} which would be the most straight forward solution in bash but then I get this error: bash: ${$var}: bad substitution Is there a way to do this? What I am trying to do is to iterate over all the arguments ( $1 , $2 , $3 , ...) to a program using an iteration variable and I cannot do this without indirection. | If you have var1=foo and foo=bar , you can get bar by saying ${!var1} . However, if you want to iterate over the positional parameters, it's almost certainly better to do for i in "$@"; do # somethingdone | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
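For completeness, a short sketch showing both approaches side by side: bash's ${!i} indirection over the positional parameters, and the plain "$@" loop the answer recommends. The sample arguments are made up.

#!/bin/bash
set -- alpha "bravo charlie" delta      # example positional parameters

# Indirection: walk $1, $2, ... by index
i=1
while [ "$i" -le "$#" ]; do
    printf 'arg %d: %s\n' "$i" "${!i}"
    i=$((i + 1))
done

# Usually better: let the shell iterate for you
for arg in "$@"; do
    printf 'arg: %s\n' "$arg"
done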
229,850 | I'm trying to figure out (1) how much actual space is in use on my server and (2) what will happen if I use more of it . I originally posted a version of this question on ServerFault , but they kicked me over here. Background: I am running a server that uses virtfs (thanks to cPanel), and seeing very high disk usage compared to the amount of data actually uploaded/created by each user. I am getting warning emails telling my that I am using nearly all of my available space. The following is the output of df -h / : Filesystem Size Used Avail Use% Mounted on/dev/simfs 30G 25G 5.9G 81% / I ran du -h / | grep "[0-9][MG]" | sort -n -r to generate a list of the paths using the most space. This was the output: 68G /44G /home43G /home/virtfs11G /home/virtfs/john11G /home/virtfs/paul11G /home/virtfs/george11G /home/virtfs/ringo11G /backup5.3G /usr5.3G /home/virtfs/john/usr5.3G /home/virtfs/paul/usr5.3G /home/virtfs/george/usr5.3G /home/virtfs/ringo/usr5.2G /var5.2G /home/virtfs/john/var5.2G /home/virtfs/paul/var5.2G /home/virtfs/george/var5.2G /home/virtfs/ringo/var4.6G /var/lib4.6G /home/virtfs/john/var/lib4.6G /home/virtfs/paul/var/lib4.6G /home/virtfs/george/var/lib4.6G /home/virtfs/ringo/var/lib4.3G /home/virtfs/paul/usr/local4.2G /usr/local4.2G /home/virtfs/john/usr/local4.2G /home/virtfs/george/usr/local4.2G /home/virtfs/ringo/usr/local3.8G /usr/local/cpanel3.8G /home/virtfs/john/usr/local/cpanel3.8G /home/virtfs/paul/usr/local/cpanel3.8G /home/virtfs/george/usr/local/cpanel3.8G /home/virtfs/ringo/usr/local/cpanel3.0G /var/lib/mysql.orig3.0G /home/virtfs/john/var/lib/mysql.orig3.0G /home/virtfs/paul/var/lib/mysql.orig3.0G /home/virtfs/george/var/lib/mysql.orig3.0G /home/virtfs/ringo/var/lib/mysql.orig2.6G /backup/weekly2.2G /backup/cpbackup2.1G /var/lib/mysql.orig/ringo_demo2.1G /home/virtfs/john/var/lib/mysql.orig/ringo_demo2.1G /home/virtfs/paul/var/lib/mysql.orig/ringo_demo2.1G /home/virtfs/george/var/lib/mysql.orig/ringo_demo2.1G /home/virtfs/ringo/var/lib/mysql.orig/ringo_demo1.9G /cpanel_backups1.7G /backup/monthly1.6G /var/lib/mysql1.6G /home/virtfs/john/var/lib/mysql1.6G /home/virtfs/paul/var/lib/mysql1.6G /home/virtfs/george/var/lib/mysql1.6G /home/virtfs/ringo/var/lib/mysql1.2G /usr/local/cpanel/bin1.2G /home/virtfs/john/usr/local/cpanel/bin1.2G /home/virtfs/paul/usr/local/cpanel/bin1.2G /home/virtfs/george/usr/local/cpanel/bin1.2G /home/virtfs/ringo/usr/local/cpanel/bin1.1G /root (No, my users aren't actually all named for the Beatles...) It looks like nearly all of the disk usage is due to the virtfs redundancy, such as redundant references to system files like /usr/local/cpanel/... . None of my users is actually using as much space as reported. For example, none of them alone uses the full 1.6 GB reported above for /var/lib/mysql . And when I look at cPanel's own reports in the web interface, I see that the disk usage for these accounts ranges from essentially zero to no more than 237 MB: nowhere near the 11 GB reported. So, my questions: How can I determine how much space is actually being used? What happens if I add another, say, 10 GB of data to the server? Will it have some sort of meltdown because df will think I'm using 35 out of 30 GB? Or will everything work just fine because I'm still using less than 30 GB? Please note: this question is not about cPanel; it's about virtfs and what tools I can use to determine my available disk space. | If you have var1=foo and foo=bar , you can get bar by saying ${!var1} . 
However, if you want to iterate over the positional parameters, it's almost certainly better to do for i in "$@"; do # somethingdone | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/229850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54371/"
]
} |
229,857 | I am wondering if there is a filter command to quote input lines. So when piping: line number 1line number 2line number 3 to it, you get: "line number 1""line number 2""line number 3" I need this command to pipe a stream of lines to xargs to make sure that xargs treats line number 1 as one argument and not as three. I am sure that it has many other uses too. Is there such a command? What is it called? | I'd offer Perl's quotemeta function. Not quite what you asked, because it escapes spaces rather than replacing them with quotes. But as a fringe benefit, it also handles other special characters (like * ): perl -nle 'print quotemeta' (Or as noted in the comments, the shorter form: perl -ple '$_=quotemeta' ) Which takes your lines and turns them into: line\ number\ 1line\ number\ 2line\ number\ 3 Which should have the same result - as well as handling: Line number \"`rm -rf *`\" And similar such shenanigans :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
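A small usage sketch for the pipeline described above, feeding the escaped lines to xargs so that each input line stays a single argument (this relies on plain xargs honouring backslash escapes, which GNU xargs does by default):

printf '%s\n' 'line number 1' 'line number 2' 'line number 3' |
    perl -ple '$_ = quotemeta' |
    xargs -n1 echo     # each original line arrives as exactly one argument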
229,931 | Given a (really long) list of zip files, how can you tell the size of them once uncompressed? | You can do that using unzip -Zt zipname which prints a summary directly about the archive content, with total size. Here is an example on its output: unzip -Zt a.zip1 file, 14956 bytes uncompressed, 3524 bytes compressed: 76.4% Then, using awk, you can extract the number of bytes: unzip -Zt a.zip | awk '{print $3}'14956 Finally, put it in a for loop as in Tom's answer: total=0for file in *.zip; do # or whichever files you want (( total += $(unzip -Zt $file |awk '{ print $3 }') ))doneecho $total | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/229931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46600/"
]
} |
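With thousands of archives the per-file command substitution above gets slow; here is a sketch of a single-pass variant. It assumes the Info-ZIP unzip -Zt summary format shown above, where the third field is the uncompressed byte count.

for file in *.zip; do
    unzip -Zt "$file"
done | awk '{ total += $3 } END { print total " bytes uncompressed in total" }'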
229,992 | I have multiple folders which contain subfolders like JAN/ Jan/ FEB/ Feb/ MAR/ Mar/ and so on. I need to move all files from JAN/* to Jan/ , FEB/* to Feb/ and so on. How do I achieve this with a shell script? Edit Thanks to @Costas for pointing me in the right direction. His solution will work with Bash 4 and up. Since I had v3 I ended up using this. for DIR in [A-Z][A-Z]*/do NEWDIR=`echo "$(echo "$DIR" | sed 's/.*/\L&/; s/[a-z]*/\u&/g')"` mv $DIR/* $NEWDIRdone sed script taken from here . | For modern bash (which supports case change): for dir in [A-Z][a-z]*/do mv -t "$dir" ${dir^^}/*done In unsupported versions you free to use tr | sed |… conversion instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/229992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56586/"
]
} |
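For shells older than bash 4 (no ${var^^}), a rough equivalent of the loop above using tr to derive the all-caps directory name; the [ -d ... ] test simply skips months that have no uppercase counterpart:

for dir in [A-Z][a-z]*/; do
    updir=$(printf '%s' "${dir%/}" | tr '[:lower:]' '[:upper:]')
    [ -d "$updir" ] && mv "$updir"/* "$dir"
done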
230,037 | I have deleted my old Linux Mint partition I had installed beside my new current Rafaela one.Thus, I have free space I want to add to my home partition. Above you can see the partition layout: sda4 is the system partition with /boot sda5 is the home partition I want to extend home with the unallocated space, but unfortunately the system partition is inbetween and I would need to move it to the beginning of the unallocated space. Since I got a warning message that the system might not boot anymore, if I move /boot, I would like to know how I can do it without breaking the system.It makes sense that the system cannot boot, if the bootloader cannot find the kernel anymore, so I guess after changing the partition layout I need to chroot on / and regenerate grub. Does anybody know how I can add the unallocated space to home safely? | The boot sector needs to find the boot partition, after that the boot loader goes off the partitions, it doesn't care were they're located at on the drive. I'm assuming you're using gparted live by the screen shot. After resizing you'll need to from the shell: Mount your relocated root partition containing the boot directory if /boot is a separate partition mount it in the root partition mount Mount the /dev to the dev directory in your root mount using the --bind option Chroot into the root mount Run grub-install or liloconfig to reinstall the boot sector | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134422/"
]
} |
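A sketch of what the steps listed above look like on the command line. The device names are assumptions (here the root partition is /dev/sda4 and GRUB goes on /dev/sda); substitute your own layout before running anything.

sudo mount /dev/sda4 /mnt           # 1. mount the relocated root partition (assumed name)
sudo mount --bind /dev /mnt/dev     # 2. expose the device nodes inside the mount
sudo chroot /mnt /bin/bash          # 3. step into the installed system
grub-install /dev/sda               #    (run inside the chroot) rewrite the boot sector
update-grub                         #    (inside the chroot) refresh grub.cfg
exit                                # leave the chroot again
# Note: grub-install may also want /proc and /sys bind-mounted the same way as /dev.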
230,040 | I have a file with multiple columns and have identified lines where specific column values (cols 3-6) have been duplicated using a bash script. Example input: A B C D E F G1 2 T TACA A 3 2 Q3 4 I R 8 2 Q9 3 A C 9 3 P8 3 I R 8 2 Q I can display both instances of the repeated values. The other column values (cols 1, 2 and 7+) can be different between the 2 lines hence the need for me to view both instances. I want to save the unique records and the first instance of the duplicated records after sorting these dups have been sorted on col 5 (any order will do) then col 1 (descending order --> largest value first). Desired ouput: A B C D E F G1 2 T TACA A 3 2 Q9 3 A C 9 3 P8 3 I R 8 2 Q NB: The ordering on final output is not important as it will be resorted later. Making sure the desired rows are present is what matters. My code so far is: tot=$(awk 'n=x[$3,$6]{print n"\n"$0;} {x[$3,$6]=$0;}' oldfilename | wc -l) #counts duplicated records and saves overall count as $totif [ $tot == "0" ] then awk '{print}' oldfilename >> newfilename #if no dups found, all lines saved in new fileelse if awk '(!(n=x[$3,$6]{print n"\n"$0;} {x[$3,$6]=$0;})' oldfilename >> newfilename #if dups found, unique lines in old file saved in new fileelse awk 'n=x[$3,$6]{print n"\n"$0;} {x[$3,$6]=$0;}' oldfilename > tempfile #save dups in tempfile sort -k1,1, -k5,5 tempfile #sort tempfile on cols 1 then 5 (want descending order) fi What I am unable to do is take the first instance of each duplicate and save it in newfile and I still have errors in the above code. Please help. | sort itself should suffice. First sort such that rows are "grouped" by field range 3-6 , records within each group further ordered by fields 5 and 1 . Pipe this to sort -u on 3-6 , this disables last-resort comparison and returns the first record from each 3-6 group. Finally, pipe this to sort , this time by fields 5 and 1 sort -k3,6 -k5,5r -k1,1r file | sort -k3,6 -u | sort -k5,5r -k1,1rA B C D E F G1 2 T TACA A 3 2 Q9 3 A C 9 3 P8 3 I R 8 2 Q | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230040",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90628/"
]
} |
230,047 | Let's say you have a project structure with lots of Makefiles and there is a top level Makefile that includes all the other. How can you list all the possible targets? I know writing make and then tabbing to get the suggestions would generally do the trick, but in my case there are 10000 targets. Doing this passes the results through more and also for some reason scrolling the list results in a freeze. Is there another way? | This is how the bash completion module for make gets its list: make -qp | awk -F':' '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$)/ {split($1,A,/ /);for(i in A)print A[i]}' | sort -u It prints out a newline-delimited list of targets, without paging. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/230047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134429/"
]
} |
230,069 | I have a file with multiple lines which looks like that : brand,model,inches,price dell xps 13 9000 macbook pro 13 13000asus zenbook 13 10500 I want to delete the lines where the price is more than 10000. I want to ask if it is possible by using grep? | You can use the following to get the lines where price is greater than 10000 : $ grep -E '.* [0]*[1-9][0-9]{4,}$' file.txt macbook pro 13 13000asus zenbook 13 10500 If you want to remove those lines add -v : $ grep -vE '.* [0]*[1-9][0-9]{4,}$' file.txt dell xps 13 9000 .* will match all characters upto the last column containing prices [1-9] will match the first digit of the price [0-9]{4,}$ will match 4 or more digits after the first digit so we have a total of five digits meaning 10000 or greater | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134432/"
]
} |
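If the file is genuinely column-oriented, comparing the price numerically is less fragile than matching digit patterns. A sketch with awk, assuming the price is the last whitespace-separated field as in the sample:

awk 'NR == 1 || $NF <= 10000' file.txt    # keep the header and the lines to keep
awk 'NR > 1 && $NF > 10000' file.txt      # or: show only the lines to be deleted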
230,084 | I need to automate some identity deployments, ideally using ssh-copy-id . I'm trying to provide the password through stdin, which is possible on ssh by using the -S flag. I'm aware that I can send additional options to ssh using the -o flag in the ssh-copy-id command however there's no usage examples of this flag in the man page. So I've tried to pass the SSH password for ssh-copy-id through stdin using: $# echo $TMP_PASS | ssh-copy-id -p2222 -i key.pub user@host -o "-S" But all I get is: /bin/ssh-copy-id: ERROR: command-line: line 0: Bad configuration option: -s EDIT: I'm trying to provide the password through stdin, which is possible on ssh by using the -S flag. This statement is wrong. I've actually read this flag from sudo man; | You might want to try installing sshpass, and altering your call to ssh-copy-id : sshpass -p "$TMP_PASS" ssh-copy-id user@host | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77597/"
]
} |
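A variation on the same idea: reading the password from a file or from the environment keeps it out of the process list. Both options exist in sshpass; the file path and port below are only examples.

sshpass -f /path/to/passfile ssh-copy-id -p 2222 -i key.pub user@host
# or keep using the variable, but pass it through the environment:
SSHPASS="$TMP_PASS" sshpass -e ssh-copy-id -p 2222 -i key.pub user@host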
230,090 | I've found out this topic : What is the difference between /opt and /usr/local? And this link : http://www.pathname.com/fhs/pub/fhs-2.3.html To help me understand the usages between /home , /root , /usr/local , /usr/bin and /opt , I still have a question because I'm a little confused understanding the differences between each of them. For a system in which I want to install my applications that have to be used locally by a user, is it better to put the applications in /home or /usr/bin or /root ? Is there a "good practice" that I should know of ? | Well, there are various considerations. You don't put anything in /root . This is for uid 0 and systems administration only; it's often not even traversable by non-root users. Install under /home/<username> if you're an unprivileged user on the machine and you, personally, need to be able to use the software you're installing. If you're the admin, you usually shouldn't mess around with users' homedirs. Install under /usr/local for normal software packages which, for whatever reason, you're installing from source locally (instead of installing through the package manager). This is usually where things get put if you run the standard autoconf ./configure && make && make install incantation from a source tarball. I also put little utilities I've developed locally under /usr/local/bin , if I want them to be universally available. Install under /opt for third-party pre-bundled software (a good example of this is Calibre, if you use their binary installer). This makes a separate directory under /opt for every package you install, and that directory has all the requisites for the package (as opposed to /usr or /usr/local , where binaries for all the packages are under bin , libraries for all the packages are under lib , &c.). In general, if you're writing or packaging software yourself that needs a lot of different components, it might be good to put it here, but it's probably suboptimal to try to install someone else's package there, if it's not their recommendation. That can be a matter of opinion, though. If you're creating a package that users or administrators will install manually, you want either /opt or /usr/local . If you're installing someone else's package, follow their recommendation. If you're packaging something for a distribution (which you probably aren't), use /usr . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81372/"
]
} |
230,119 | I think I've noticed this before but never thought about it much; now I'm curious. > ldd /bin/bash linux-vdso.so.1 => (0x00007fff2f781000) libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007f0fdd9a9000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f0fdd7a5000) libc.so.6 => /lib64/libc.so.6 (0x00007f0fdd3e6000) /lib64/ld-linux-x86-64.so.2 (0x00007f0fddbf6000) Libtinfo is part of ncurses. This is a fedora system, but it is the same on ubuntu, and I notice on raspbian (a debian variant) it also links to libncurses itself. What's the reason for this? I thought everything bash did could be done with libreadline (which curiously, it does not link to). Is this simply a substitute for that? | If you run bash as: LD_DEBUG=bindings bash on a GNU system, and grep for bash.*tinfo in that output, you'll see something like: 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `UP' 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `PC' 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `BC' 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetent' 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetstr' 797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetflag' You can confirm from the output of nm -D /bin/bash that bash is using those symbols from tinfo. Bringing the man page for any of those symbols clarifies what they're for: $ man tgetentNAME PC, UP, BC, ospeed, tgetent, tgetflag, tgetnum, tgetstr, tgoto, tputs - direct curses interface to the terminfo capability database Basically, bash , more likely its readline (libreadline is statically linked in) editor, uses those to query the terminfo database to find out about terminal capabilities so it can run its line editor properly (sending the right escape sequences and identify key presses correctly) on any terminal. As to why readline is statically linked into bash , you have to bear in mind that readline is developed alongside bash by the same person and is included in the source of bash . It is possible to build bash to be linked with the system's installed libreadline , but only if that one is of a compatible version, and that's not the default. You need to call the configure script at compilation time with --with-installed-readline . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25985/"
]
} |
230,123 | I'm working on an assignment for my college course, and one of the questions asks for the command used to create a hard link from one file to another so that they point to the same inode. We were linked a .pdf file to refer to, but it doesn't explain said process. Is it any different from creating a standard hard link? | Hard links are not "between" the files, there's one inode , with >1 entries in various directories all pointing to that one inode. ls -i should show the inodes, then experiment around with ln (hard link) and ln -s (soft or symbolic): $ touch afile$ ln -s afile symbolic$ ln afile bfile$ ls -1 -i afile symbolic bfile7602191 afile7602191 bfile7602204 symbolic$ readlink symbolicafile$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134476/"
]
} |
230,166 | I've recently installed Mint Linux, and when I try to log in in the GUI it gives the following error message: your home directory is listed as /home/username but does not appear to exist Then when I click OK this message appears: User's $HOME/.dmrc file is being ignored And then it tells me that it cannot log me in and forces me to log off. What do I do? | So, let's create the username home folder then. To do that just follow these steps: 1 - On the login menu press Ctrl + Alt + F1 to open the terminal 2 - Log in with your user 3 - Execute the commands sudo mkdir /home/usernamesudo chown username /home/username 4 - Then press Ctrl + Alt + F8 to return to the GUI Hopefully now you can log in :) Edit Thanks to @MariusMatutiae for this additional step When a new user is added, his home directory is endowed with a small number of files and directories, some of them hidden. They can be found in /etc/skel, and be copied over to the new home directory. After you log in for the first time, open a terminal window and type the following command: cp -a /etc/skel/. /home/username This will copy all the files inside skel to the username folder. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134501/"
]
} |
230,195 | Is it possible to emulate (is that the right word?) previous versions of Bash? I am using 4.3.11, and I am curious to know if my scripts are compatible with some earlier versions, but I don't want to actually install an earlier version. I could dig through the changelogs and figure out what features I'm using that are lacking from previous versions, but that seems a bit tedious. I was hoping for some kind of magical command line option or script command, instead (wishful thinking, probably). | No, bash can't emulate older versions of bash. But it's pretty easy to set up a test environment that includes an older version of bash. Installing older version of individual software is tedious if you have to install each software package manually, not to mention resolving the library incompatibilities. But there's an easier solution: install an older distribution . Installing an older distribution, with a consistent set of software including development packages, costs about $1 of hard disk space and maybe an hour to set up the first time. The schroot package makes it easy to install an older (or newer!) Linux distribution that's running on the same system as your normal Linux system. You can easily make a schroot setup where you can run a program in an environment (a chroot ) where the system directories point to the older software, but the home directories are those of the normal environment. I wrote a guide for Debian-based distributions ; you can easily go back to Debian slink (with bash 2.01.01) this way. If you want to test with different Unix variants, different CPU architectures, or very very old software, you can run other OSes in a virtual machine. There's a little more overhead (in RAM, disk space, CPU and maintenance) but it's still very much doable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123270/"
]
} |
230,196 | I have a CSV file ( data.csv ) as below: apple_val, balloon_val, cherry_val, dog_val1 ,5 ,6 ,73 ,19 ,2 ,3 I have a text file ( sentence.txt ) as below: I have apple_val apple(s) and balloon_val balloons. My dog_val dogs were biting the cherry_val cherries. I want my output file ( output.txt ) as below: I have 1 apple(s) and 5 balloons. My 7 dogs were biting the 6 cherries.I have 3 apple(s) and 19 balloons. My 3 dogs were biting the 2 cherries. I used the below script. But my script is specific to the above example. awk -F "," {print $1, $2, $3, $4} data.csv | while read a, b, c,ddo sed -e "s/apple_val/$a/g" -e "s/balloon_val/$b/g" -e "s/dog_val/$d/g" -e "s/cherry_val/$c/g" sentence.txt >> output.txtdone I want to make it generic by reading the first line of the CSV file (the header) and replacing the occurrences of those strings (like apple_val) in the text file. How can I do it? | No, bash can't emulate older versions of bash. But it's pretty easy to set up a test environment that includes an older version of bash. Installing older version of individual software is tedious if you have to install each software package manually, not to mention resolving the library incompatibilities. But there's an easier solution: install an older distribution . Installing an older distribution, with a consistent set of software including development packages, costs about $1 of hard disk space and maybe an hour to set up the first time. The schroot package makes it easy to install an older (or newer!) Linux distribution that's running on the same system as your normal Linux system. You can easily make a schroot setup where you can run a program in an environment (a chroot ) where the system directories point to the older software, but the home directories are those of the normal environment. I wrote a guide for Debian-based distributions ; you can easily go back to Debian slink (with bash 2.01.01) this way. If you want to test with different Unix variants, different CPU architectures, or very very old software, you can run other OSes in a virtual machine. There's a little more overhead (in RAM, disk space, CPU and maintenance) but it's still very much doable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134531/"
]
} |
230,206 | Answering this question caused me to ask another question: I thought the following scripts do the same thing and the second one should be much faster, because the first one uses cat that needs to open the file over and over but the second one opens the file only one time and then just echoes a variable: (See update section for correct code.) First: #!/bin/shfor j in seq 10; do cat inputdone >> output Second: #!/bin/shi=`cat input`for j in seq 10; do echo $idone >> output while input is about 50 megabytes. But when I tried the second one, it was too ,too slow because echoing the variable i was a massive process. I also got some problems with the second script, for example the size of output file was lower than expected. I also checked the man page of echo and cat to compare them: echo - display a line of text cat - concatenate files and print on the standard output But I didn't get the difference. So: Why cat is so fast and echo is so slow in the second script? Or is the problem with variable i ? ( because in the man page of echo it is said it displays "a line of text" and so I guess it isoptimized only for short variables, not for very very long variableslike i . However, that is only a guess.) And why I got problems when I use echo ? UPDATE I used seq 10 instead of `seq 10` incorrectly. This is edited code: First: #!/bin/shfor j in `seq 10`; do cat inputdone >> output Second: #!/bin/shi=`cat input`for j in `seq 10`; do echo $idone >> output (Special thanks to roaima .) However, it is not the point of the problem. Even if the loop occurs only one time, I get the same problem: cat works much faster than echo . | There are several things to consider here. i=`cat input` can be expensive and there's a lot of variations between shells. That's a feature called command substitution. The idea is to store the whole output of the command minus the trailing newline characters into the i variable in memory. To do that, shells fork the command in a subshell and read its output through a pipe or socketpair. You see a lot of variation here. On a 50MiB file here, I can see for instance bash being 6 times as slow as ksh93 but slightly faster than zsh and twice as fast as yash . The main reason for bash being slow is that it reads from the pipe 128 bytes at a time (while other shells read 4KiB or 8KiB at a time) and is penalised by the system call overhead. zsh needs to do some post-processing to escape NUL bytes (other shells break on NUL bytes), and yash does even more heavy-duty processing by parsing multi-byte characters. All shells need to strip the trailing newline characters which they may be doing more or less efficiently. Some may want to handle NUL bytes more gracefully than others and check for their presence. Then once you have that big variable in memory, any manipulation on it generally involves allocating more memory and coping data across. Here, you're passing (were intending to pass) the content of the variable to echo . Luckily, echo is built-in in your shell, otherwise the execution would have likely failed with an arg list too long error. Even then, building the argument list array will possibly involve copying the content of the variable. The other main problem in your command substitution approach is that you're invoking the split+glob operator (by forgetting to quote the variable). 
For that, shells need to treat the string as a string of characters (though some shells don't and are buggy in that regard) so in UTF-8 locales, that means parsing UTF-8 sequences (if not done already like yash does), look for $IFS characters in the string. If $IFS contains space, tab or newline (which is the case by default), the algorithm is even more complex and expensive. Then, the words resulting from that splitting need to be allocated and copied. The glob part will be even more expensive. If any of those words contain glob characters ( * , ? , [ ), then the shell will have to read the content of some directories and do some expensive pattern matching ( bash 's implementation for instance is notoriously very bad at that). If the input contains something like /*/*/*/../../../*/*/*/../../../*/*/* , that will be extremely expensive as that means listing thousands of directories and that can expand to several hundred MiB. Then echo will typically do some extra processing. Some implementations expand \x sequences in the argument it receives, which means parsing the content and probably another allocation and copy of the data. On the other hand, OK, in most shells cat is not built-in, so that means forking a process and executing it (so loading the code and the libraries), but after the first invocation, that code and the content of the input file will be cached in memory. On the other hand, there will be no intermediary. cat will read large amounts at a time and write it straight away without processing, and it doesn't need to allocate huge amounts of memory, just that one buffer that it reuses. It also means that it's a lot more reliable as it doesn't choke on NUL bytes and doesn't trim trailing newline characters (and doesn't do split+glob, though you can avoid that by quoting the variable, and doesn't expand escape sequences though you can avoid that by using printf instead of echo ). If you want to optimise it further, instead of invoking cat several times, just pass input several times to cat . yes input | head -n 100 | xargs cat Will run 3 commands instead of 100. To make the variable version more reliable, you'd need to use zsh (other shells can't cope with NUL bytes) and do it: zmodload zsh/mapfilevar=$mapfile[input]repeat 10 print -rn -- "$var" If you know the input doesn't contain NUL bytes, then you can reliably do it POSIXly (though it may not work where printf is not builtin) with: i=$(cat input && echo .) || exit # add an extra .\n to avoid trimming newlinesi=${i%.} # remove that trailing dot (the \n was removed by cmdsubst)n=10while [ "$n" -gt 0 ]; do printf %s "$i" n=$((n - 1))done But that is never going to be more efficient than using cat in the loop (unless the input is very small). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132907/"
]
} |
230,238 | It seems like every application from the terminal gives warnings and error messages, even though it appears to run fine. Emacs: ** (emacs:5004): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-xxfluS2Izg: Connection refused Evince: ** (evince:5052): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-xxfluS2Izg: Connection refused(evince:4985): Gtk-CRITICAL **: gtk_widget_show: assertion 'GTK_IS_WIDGET (widget)' failed(evince:4985): Gtk-CRITICAL **: gtk_widget_show: assertion 'GTK_IS_WIDGET (widget)' failed Firefox: (process:5059): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed The list goes on. Is this behavior common or is there something wrong with my system? How I fix these issues? | Unfortunately, GTK libraries (used in particular by GNOME) tend to emit a lot of scary-looking messages. Sometimes these messages indicate potential bugs, sometimes they're totally spurious, and it's impossible to tell which is which without delving deep into the code. As an end user, you can't do anything about it. You can report those as bugs (even if the program otherwise behaves correctly, emitting spurious error messages is a bug), but when the program is basically working, these bugs are understandably treated as very low priority. The accessibility warning is a known bug with an easy workaround if you don't use any accessibility feature: export NO_AT_BRIDGE=1 In my experience, Gtk-CRITICAL bugs are completely spurious; while they do indicate a programming error somewhere, they shouldn't be reported to end-users, only to the developer who wrote the program (or the underlying library — often the developer of the program itself can't do anything about it because it's a bug in a library that's called by a library that's called by a library that's used in the program). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/230238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134562/"
]
} |
230,252 | If someone refers to "Windows" then everyone understands that as a generic reference to covers any or all versions of Windows. As for Macs, I have very little personal experience but I assume that "MacOS" is sufficient to do the same. However, when referring to other OS' (see ' UNIX tree ') how should someone make reference to be understood? For example, I'm most familiar with Ubuntu but also am familiar with Mint and Fedora. As I understand it: Ubuntu is a 'flavour' of Debian Debian is 'UNIX-like' UNIX is the 'grandfather' of a whole family of OS': | Terminology is complicated because there are several Unix-like OS kernels and some flavours of non-kernel (user-space) OS software. “Unix-like” or “*nix” – anything derived from original Unix and vaguely resembling it. “Linux”, “GNU/Linux”, a “Linux distribution” – systems based on the Linux kernel. “GNU” – a collection of open-source Unix-like software, excluding the kernel, otherwise sufficient to build an OS. Can run on Linux and other Unix-like kernels. “Debian” – a distribution of open-source operating systems, based on “GNU”, united by its package management system. Variants with Linux, arguably the most important Debian, are called “Debian GNU/Linux”. Not all Debian OS variants are Linux. Ubuntu technically is a modification of Debian, not its flavour. Additional tips: POSIX-oriented (narrower) and POSIX-compliant (broader) – encompasses virtually all modern Unix-likes, but includes also some systems that are not Unixes internally but can run some Unix-like applications. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118052/"
]
} |
230,267 | When my kernel boots, apart from the useful important information, it prints lots of debugging info, such as ....kernel: [0.00000] BIOS-e820: [mem 0x0000000000000000-0x000000000009d3ff] usablekernel: [0.00000] BIOS-e820: [mem 0x000000000009d400-0x000000000009ffff] reservedkernel: [0.00000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved...kernel: [0.00000] MTRR variable ranges enabled:kernel: [0.00000] 0 base 0000000000 mask 7E00000000 write-back...kernel: [0.00000] init_memory_mapping: [mem 0x00100000-0xcf414fff]kernel: [0.00000] [mem 0x00100000-0x001fffff] page 4kkernel: [0.00000] [mem 0x00200000-0xcf3fffff] page 2Mkernel: [0.00000] [mem 0xcf400000-0xcf414fff] page 4k....kernel: [0.00000] ACPI: XSDT 0xD8FEB088 0008C (v01 DELL CBX3 01072009 AMI 10013)kernel: [0.00000] ACPI: FACP 0xD8FFC9F8 0010C (v05 DELL CBX3 01072009 AMI 10013)....kernel: [0.00000] Early memory node rangeskernel: [0.00000] node 0: [mem 0x00001000-0x0009cfff]kernel: [0.00000] node 0: [mem 0x00100000-0xcf414fff]kernel: [0.00000] node 0: [mem 0xcf41c000-0xcfdfcfff]....kernel: [0.00000] ACPI: Local APIC address 0xfee00000kernel: [0.00000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)kernel: [0.00000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) and much much more. I don't see how this can be useful to anybody other than a kernel developer/debugger. I have found, that I can get rid of these by using loglevel=5 as boot parameter. The debugging logs are no longer printed on the terminal, but they are still in dmesg and in syslog . Is it possible to decrease the boot log verbosity globally, so that dmesg and syslog are not flooded by this useless information ? I am using self compiled kernel 3.18 ACEPTED SOLUTION Turns out, putting following lines to /etc/rsyslog.conf solved the problem for me: kern.debug /dev/null& ~ | For syslog You can add following line to /etc/syslog.conf : kern.info; kern.debug /dev/null It will discard kernel .info and .debug messages ( which are discarded with loglevel=5 ) Also, dmesg can be used with option -n to show messages with certain loglevel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
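To complement the syslog rule above, the console verbosity can also be adjusted at runtime with dmesg (util-linux); a couple of hedged examples:

sudo dmesg -n 5                # console gets warnings and anything more severe; less urgent messages are suppressed
sudo dmesg --level=err,warn    # print only errors and warnings from the ring buffer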
230,272 | I have a program that takes data from five government sources and merges them into one large database for my company. I use wget to retrieve the files. However I have discovered that one of the sources changes the name every time it is updated. For example, the last time I got the file it was called myfile150727.flatfile . Today when I tried to run my program I got exit status 8 no such file . When I manually got into the ftp I found that the file is now called myfile150914.flatfile . So obviously the filename is changing based upon the date it was last updated. Can I modify my script to take this fact into account and still automatically download the file? | Yes, but the details depend on how the file's name changes. If it is always today's date, just tell your script to get that: filename=myfile"$(date +%y%m%d)".flatfilewget ftp://example.com/"$filename" Or, if it is not updated daily and there is only one file called myfileWHATEVER.flatfile , get that: wget "ftp://example.com/myfile*.flatfile" If you can have many files with similar names, you could download all of them and then keep only the newest: wget -N "ftp://example.com/myfile*.flatfile"## Find the newest filefor file in myfile*.flatfile; do [[ "$file" -nt "$newest" ]] && newest="$file";done## Delete the restfor file in myfile*.flatfile; do [[ "$file" != "$newest" ]] && rm "$file"done Alternatively, you can extract the date from the file name instead: wget -N "ftp://example.com/myfile*.flatfile"for file in myfile*.flatfile; do fdate=$(basename "${file//myfile}" .flatfile) [[ "$fdate" -gt $(basename "${nfile//myfile}" .flatfile) ]] && nfile="$file"donefor file in myfile*.flatfile; do [[ "$file" = "$nfile" ]] || rm "$file"done Note that the above will keep multiple files if more than one have the same modification date. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134586/"
]
} |
230,308 | I just received a new USB flash drive, and set up 2 encrypted partitions on it. I used dm-crypt (LUKS mode) through cryptsetup . With an additional non-encrypted partition, the drive has the following structure: /dev/sdb1 , encrypted, hiding an ext4 filesystem labelled "Partition 1". /dev/sdb2 , encrypted, hiding another ext4 filesystem, labelled "Partition 2". /dev/sdb3 , clear, visible ext4 filesystem labelled "Partition 3". Because the labels are attached to the ext4 filesystems, the first two remain completely invisible as long as the partitions haven't been decrypted. This means that, in the meantime, the LUKS containers have no labels. This is particularly annoying when using GNOME (automount), in which case the partitions appear as " x GB Encrypted " and " y GB Encrypted " until I decide to unlock them. This isn't really a blocking problem, but it's quite annoying, since I really like my labels and would love to see them appear even when my partitions are still encrypted. Therefore, is there a way to attach labels to dm-crypt+LUKS containers, just like we attach labels to ext4 filesystems? Does the dm-crypt+LUKS header have some room for that, and if so, how may I set a label? Note that I don't want to expose my ext4 labels before decryption, that would be silly. I'd like to add other labels to the containers, which could appear while the ext4 labels are hidden. | For a permanent solution to change the label of the container , use: sudo cryptsetup config /dev/sdb1 --label YOURLABEL Edit: Notice that labeling only works with Luks2 headers. In any case, it is possible to convert a Luks1 header into Luks2 with: sudo cryptsetup convert /dev/sdb1 --type luks2 OBS: Please notice that Luks2 header occupy more space, which can reduce the total number of key slots. Converting Luks2 back to Luks1 is also possible, but there are reports of people who have had problems or difficulties in converting back. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41892/"
]
} |
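After converting and labelling, it can be reassuring to confirm what the header now contains. A quick check (the Label field only appears on LUKS2 headers):

sudo cryptsetup luksDump /dev/sdb1 | grep -E '^(Version|Label)'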
230,325 | I installed Kali Linux 2.0 yesterday on my Zenbook UX303LB and touch-click on my track pad doesn't work. I had the same problem in Ubuntu so it's not a new issue and I just used a mouse as a workaround. However, a couple of minutes ago I discovered that I am able to scroll around a website when touching the track pad with two fingers! Why can my touch pad detect two-finger scrolls but not one simple tap? Maybe I could configure the touch pad to recognize tap clicks? | This also works for me: synclient tapbutton1=1 If you do not have synclient in your distro (e.g. Kali), apt-get install xserver-xorg-input-synaptics , reboot, then try again. See this answer for a way to keep synaptics settings persistent between reboots. But... As Linux users, we usually try to fix problems via the terminal, digging in text files, and all similarly complicated stuff, but the answer is much simpler this time :) Kali Linux has an option to turn "Tap to click" on/off, in /Settings/Mouse & Touchpad. This option is unchecked by default. All you need to do is check it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121113/"
]
} |
230,330 | I copied a snippet of Bash to background an ssh command executed remotely: ssh user@remote <<CMDsome process <&- >log 2>error &CMD What does <&- do? My guess is that it is the same as < /dev/null My next understanding is that the three main file descriptors ( stdin , stdout , stderr ) need to be closed to prevent: The job being backgrounded and the script exiting -- conflictingsomehow? When the terminal closes, all processes that areaccepting stdin from terminal are closed? | <&- is not quite the same thing as < /dev/null . <&- closes fd 0, whereas < /dev/null redirects it from the device /dev/null , which never provides any data and always gives EOF on read. The difference is mostly that a read(2) call from a closed FD (the <&- case) will error with EBADF, whereas a call from a null-redirected FD will return no bytes read (end-of-file condition). If your program never reads from stdin, the distinction doesn't matter. Closing the FDs is good practice if you're backgrounding something, since a backgrounded process will hang if it tries to read anything from TTY. This example doesn't fully handle everything it should, though; ideally there would be a nohup or setsid invocation somewhere, to fully disassociate the background process. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74847/"
]
} |
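A tiny demonstration of the distinction drawn above: both reads fail, but only the closed-descriptor one produces an error (EBADF), while the /dev/null read just sees end-of-file.

bash -c 'read -r line <&-'          # fails noisily: bad file descriptor
bash -c 'read -r line < /dev/null'  # silent: read simply hits EOF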
230,346 | Is there any way to check the usage of the ulimits for a given user? I know that you can change ulimits for a single process when you start it up or for a single shell when running but I want to be able to "monitor" how close a user is to hitting their limits. I am planning on writing a bash script that will report back to statsd the current usage percentage. Specifically, I want to track: open files ( ulimit -n ) max user processes ( ulimit -u ) pending signals ( ulimit -i ) What I want out is the percentage of usage (0-100). | Maybe this helps for the first question: If you know the process IDs (PID) of the specific user you can get the limits for each process with: cat /proc/<PID>/limits You can get the number of opened files for each PID with: ls -1 /proc/<PID>/fd | wc -l And then just compare the value of Max open files with the number of open file descriptors from the second command to get a percentage. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134636/"
]
} |
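Building on the two commands above, a rough sketch of the percentage calculation for a single process (pass the PID as the first argument; no rounding or error handling):

#!/bin/sh
pid=$1
limit=$(awk '/^Max open files/ { print $4 }' "/proc/$pid/limits")   # soft limit
open=$(ls -1 "/proc/$pid/fd" | wc -l)                               # currently open fds
echo "$pid: $open/$limit open files ($((open * 100 / limit))%)"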
230,349 | In my setup, I have two disks that are each formatted in the following way: (GPT)1) 1MB BIOS_BOOT2) 300MB LINUX_RAID 3) * LINUX_RAID The boot partitions are mapped in /dev/md0, the rootfs in /dev/md1. md0 is formatted with ext2, md1 with XFS. (I understand that formatting has to be done on the md devices and not on sd - please tell me if this is wrong). How do I setup GRUB correctly so that if one drive fails, the other will still boot? And by extension, that a replacement drive will automatically include GRUB, too? If this is even possible, of course. | If the two disks are /dev/sda and /dev/sdb , run both grub-install /dev/sda and grub-install /dev/sdb . Then both drives will be able to boot alone. Make sure that your Grub configuration doesn't hard-code disks like (hd0) , but instead searches for the boot and root filesystems' UUIDs. I'm not aware of support in Grub to declare two disks as being in a RAID-1 array so that grub-install would automatically write to both. This means you'll need to run grub-install again if you replace one disk; it's one more thing to do in addition to adding new members to the RAID arrays. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134509/"
]
} |
230,389 | I would like to rename some files to their contents' MD5 sum; for example, if file foo is empty, it should be renamed to d41d8cd98f00b204e9800998ecf8427e . Does it have to be script or can I use something like the rename tool? | Glenn's answer is good; here's a refinement for multiple files: md5sum file1 file2 file3 | # or *.txt, or whatever while read -r sum filename; do mv -v "$filename" "$sum" done If you're generating files with find or similar, you can replace the md5sum invocation with something like find . <options> -print0 | xargs -0 md5sum (with the output also piped into the shell loop). This is taking the output of md5sum , which consists of multiple lines with a sum and then the file it corresponds to, and piping it into a shell loop which reads each line and issues a mv command that renames the file from the original name to the sum. Any files with identical sums will be overwritten; however, barring unusual circumstances (like if you're playing around with md5 hash collisions), that will mean they had the same contents, so you don't lose any data anyway. If you need to introduce other operations on each file, you can put them in the loop, referring to the variables $filename and $sum , which contain the original filename and the MD5 sum respectively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110967/"
]
} |
230,421 | I wrote a simple bash script with a loop for printing the date and ping to a remote machine: #!/bin/bashwhile true; do # *** DATE: Thu Sep 17 10:17:50 CEST 2015 *** echo -e "\n*** DATE:" `date` " ***"; echo "********************************************" ping -c5 $1;done When I run it from a terminal I am not able to stop it with Ctrl+C .It seems it sends the ^C to the terminal, but the script does not stop. MacAir:~ tomas$ ping-tester.bash www.google.com*** DATE: Thu Sep 17 23:58:42 CEST 2015 ***********************************************PING www.google.com (216.58.211.228): 56 data bytes64 bytes from 216.58.211.228: icmp_seq=0 ttl=55 time=39.195 ms64 bytes from 216.58.211.228: icmp_seq=1 ttl=55 time=37.759 ms^C <= That is Ctrl+C press--- www.google.com ping statistics ---2 packets transmitted, 2 packets received, 0.0% packet lossround-trip min/avg/max/stddev = 40.887/59.699/78.510/18.812 ms*** DATE: Thu Sep 17 23:58:48 CEST 2015 ***********************************************PING www.google.com (216.58.211.196): 56 data bytes64 bytes from 216.58.211.196: icmp_seq=0 ttl=55 time=37.460 ms64 bytes from 216.58.211.196: icmp_seq=1 ttl=55 time=37.371 ms No matter how many times I press it or how fast I do it. I am not able to stop it. Make the test and realize by yourself. As a side solution, I am stopping it with Ctrl+Z , that stops it and then kill %1 . What is exactly happening here with ^C ? | What happens is that both bash and ping receive the SIGINT ( bash being not interactive, both ping and bash run in the same process group which has been created and set as the terminal's foreground process group by the interactive shell you ran that script from). However, bash handles that SIGINT asynchronously, only after the currently running command has exited. bash only exits upon receiving that SIGINT if the currently running command dies of a SIGINT (i.e. its exit status indicates that it has been killed by SIGINT). $ bash -c 'sh -c "trap exit\ 0 INT; sleep 10; :"; echo here'^Chere Above, bash , sh and sleep receive SIGINT when I press Ctrl-C, but sh exits normally with a 0 exit code, so bash ignores the SIGINT, which is why we see "here". ping , at least the one from iputils, behaves like that. When interrupted, it prints statistics and exits with a 0 or 1 exit status depending on whether or not its pings were replied. So, when you press Ctrl-C while ping is running, bash notes that you've pressed Ctrl-C in its SIGINT handlers, but since ping exits normally, bash does not exit. If you add a sleep 1 in that loop and press Ctrl-C while sleep is running, because sleep has no special handler on SIGINT, it will die and report to bash that it died of a SIGINT, and in that case bash will exit (it will actually kill itself with SIGINT so as to report the interruption to its parent). As to why bash behaves like that, I'm not sure and I note the behaviour is not always deterministic. I've just asked the question on the bash development mailing list ( Update : @Jilles has now nailed down the reason in his answer ). The only other shell I found that behave similarly is ksh93 (Update, as mentioned by @Jilles, so does FreeBSD sh ). There, SIGINT seems to be plainly ignored. And ksh93 exits whenever a command is killed by SIGINT. You get the same behaviour as bash above but also: ksh -c 'sh -c "kill -INT \$\$"; echo test' Doesn't output "test". That is, it exits (by killing itself with SIGINT there) if the command it was waiting for dies of SIGINT, even if it, itself didn't receive that SIGINT. 
A workaround would be to add a: trap 'exit 130' INT At the top of the script to force bash to exit upon receiving a SIGINT (note that in any case, SIGINT won't be processed synchronously, only after the currently running command has exited). Ideally, we'd want to report to our parent that we died of a SIGINT (so that if it's another bash script for instance, that bash script is also interrupted). Doing an exit 130 is not the same as dying of SIGINT (though some shells will set $? to the same value for both cases), however it's often used to report a death by SIGINT (on systems where SIGINT is 2, which is most). However for bash , ksh93 or FreeBSD sh , that doesn't work. That 130 exit status is not considered as a death by SIGINT and a parent script would not abort there. So, a possibly better alternative would be to kill ourself with SIGINT upon receiving SIGINT: trap ' trap - INT # restore default INT handler kill -s INT "$$"' INT | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30951/"
]
} |
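Putting the final suggestion above back into the original script, so that Ctrl+C interrupts the loop even while ping is running (a sketch; behaviour may still vary slightly between bash versions):

#!/bin/bash
trap ' trap - INT; kill -s INT "$$" ' INT   # die of SIGINT ourselves when interrupted
while true; do
    echo "*** DATE: $(date) ***"
    ping -c5 "$1"
done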
230,464 | I want to make the date command with nice formatting like this: $ date +"%Y-%m-%d %H:%M:%S"2015-09-17 16:51:58 But I want to save this in variable, so I could call from script like this: echo "$(nice_date) [WARNING] etc etc" However it does not work $ nice_date="date +%Y-%m-%d %H:%M:%S"$ echo "$($nice_date)"date: extra operand ‘%H:%M:%S’Try 'date --help' for more information.$ nice_date="date +\"%Y-%m-%d %H:%M:%S\""$ echo "$($nice_date)"date: extra operand ‘%H:%M:%S"’Try 'date --help' for more information.$ nice_date='date +"%Y-%m-%d %H:%M:%S"'$ echo "$($nice_date)"date: extra operand ‘%H:%M:%S"’Try 'date --help' for more information. What is correct way to do this, so that date command to get one correct argument? | The reason your example fails is because of the way the shell's word splitting works. When you run "$($nice_date)" , the shell is executing the date command with two arguments, "+%Y-%m-%d" and "%H:%M:%S" . This fails because the format string for date must be a single argument. The best way to do this is to use a function instead of storing the command in a variable: format_date() { # echo is not needed date "+%Y-%m-%d %H:%M:%S" "$@"}format_dateformat_date -d "2015-09-17 16:51:58"echo "$(format_date) [WARNING] etc etc" If you really wanted to store the command in a variable, you can use an array: nice_date=(date "+%Y-%m-%d %H:%M:%S")# again echo not needed"${nice_date[@]}" -d "2015-09-17 16:51:58" For more details on the complex cases of storing a command in a variable, see BashFAQ 050 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85816/"
]
} |
230,472 | Suppose my non-root 32-bit app runs on a 64-bit system, all filesystems of which are mounted as read-only. The app creates an image of a 64-bit ELF in memory. But due to read-only filesystems it can't dump this image to a file to do an execve on. Is there still a supported way to launch a process from this image? Note: the main problem here is to switch from 32-bit mode to 64-bit, not doing any potentially unreliable hacks . If this is solved, then the whole issue becomes trivial — just make a custom loader. | Yes, via memfd_create and fexecve : int fd = memfd_create("foo", MFD_CLOEXEC);// write your image to fd however you wantfexecve(fd, argv, envp); | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27672/"
]
} |
230,481 | I am using Ubuntu, and the youtube-dl command is working absolutely fine. However, now I want to download only a portion a video that is too long. So I want to download only a few minutes of that video, e.g. from minute 13 to minute 17. Is there any way to do that? | I don't believe youtube-dl alone will do what you want. However you can combine it with a command line utility like ffmpeg. First acquire the actual URL using youtube-dl: youtube-dl -g "https://www.youtube.com/watch?v=V_f2QkBdbRI" Copy the output of the command and paste it as part of the -i parameter of the next command: ffmpeg -ss 00:00:15.00 -i "OUTPUT-OF-FIRST URL" -t 00:00:10.00 -c copy out.mp4 The -ss parameter in this position states to discard all input up until 15 seconds into the video. The -t option states to capture for 10 seconds. The rest of the command tells it to store as an mp4. ffmpeg is a popular tool and should be in any of the popular OS repositories/package managers. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/230481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134732/"
]
} |
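The two steps above can also be combined into one command so the URL never has to be pasted by hand. The video ID is just the example from the answer, and -f best is there because youtube-dl may otherwise print separate video and audio URLs; the timestamps match the minute 13 to minute 17 example from the question.

ffmpeg -ss 00:13:00 \
       -i "$(youtube-dl -f best -g 'https://www.youtube.com/watch?v=V_f2QkBdbRI')" \
       -t 00:04:00 -c copy clip.mp4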
230,630 | Note: While I agree that this question basically is a duplicate of the above question, I feel @alienth's answer (below) is more concise, so I suggest you take a look at it before going to the other question. I periodically backup/image/clone my entire ubuntu system drive to another drive with: dd if=/dev/sda of=/media/disk1/backup.iso It works great when I need to restore the drive after an experiment, drive failure, etc. However I'd now like to mount a partition from within that .iso (i.e. what would have been /dev/sda1 when I was dd'ing the drive). If I'd backed up with: dd if=/dev/sda1 of=/media/disk1/backup.iso then the .iso would be easily mountable using ubuntu's mount volume utility. But the utility doesn't work for an iso of the entire drive. Is there a way to just mount sda1 from the original iso? | You'll need to determine where in the disk image your partition starts. To do so, run the following: sudo parted /media/disk1/backup.iso unit s print The output will look like the following: Model: (file)Disk /tmp/file: 200000sSector size (logical/physical): 512B/512BPartition Table: msdosNumber Start End Size Type File system Flags 1 2048s 199999s 197952s primary You need to take the logical sector size and multiply that by the Start of the partition you'd like to mount. In this case, if I want to mount the first partition, the position would be 2048 * 512 , or 1048576 . You can then mount the partition using a loopback setup, plugging the value determined from above into the offset parameter. mount -o loop,offset=1048576 /media/disk1/backup.iso /mnt/mydisk | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9300/"
]
} |
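A compact sketch of the same procedure with the numbers from the answer turned into shell arithmetic; the start sector and sector size are whatever your own parted output shows, and the paths are assumptions.

    img=/media/disk1/backup.iso
    start_sector=2048   # "Start" column from: sudo parted "$img" unit s print
    sector_size=512     # logical sector size reported by parted
    sudo mkdir -p /mnt/mydisk
    sudo mount -o loop,offset=$(( start_sector * sector_size )) "$img" /mnt/mydisk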
230,634 | I get access to some Xeon machines for checking performance. I want to find out what architecture they are using, such as Haswell, Sandybridge or Ivybridge. Is there a command to find this out? | It's a bit of a cheap workaround, but you could get that info from gcc ! I'll explain: gcc is able to optimize binaries for each subarch with the -march option. Moreover, it is able to detect yours and automatically optimize for your machine with -march=native . Given that, you just have to call gcc with -march=native and ask it what flags it would use; in short: gcc -march=native -Q --help=target|grep march For me it gives -march=bdver1 because my PC runs an AMD Bulldozer processor. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134842/"
]
} |
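As a cross-check to the gcc trick above, the CPU model name usually spells out the family directly; this sketch just reads it from the kernel, no compiler needed (the model string shown in the comment is only an example).

    grep -m1 'model name' /proc/cpuinfo            # e.g. "Intel(R) Xeon(R) CPU E5-2680 v3" (v3 = Haswell)
    gcc -march=native -Q --help=target | grep -- '-march='   # the answer's method, for comparison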
230,654 | I'm been having some weird problems with bash lately. While trying to simplify my script, I came up with this small piece of code: $ o(){ echo | while read -r; do return 0; done; echo $?;}; o0$ o(){ echo | while read -r; do return 1; done; echo $?;}; o1 return should have exited the function without printing $? , shouldn't it? Well, then I checked if I can return from a pipe alone: $ echo | while read -r; do return 1; donebash: return: can only `return' from a function or sourced script The same happens without a while loop: $ foo(){ : | return 1; echo "This should not be printed.";}$ fooThis should not be printed. Is there something I'm missing here? A Google search brought nothing about this! My bash version is 4.2.37(1)-release on Debian Wheezy. | Related: https://stackoverflow.com/a/7804208/4937930 It's not a bug that you cannot exit a script or return from a function by exit or return in subshells. They are executed in another process and not affecting the main process. Besides that, I suppose you are seeing undocumented behaviors of bash on (probably) undefined spec. In a function, no errors are asserted for return at top level of subshell commands and it just behaves like exit . IMHO it's a bash bug for the inconsistent behavior of return depending on whether the main statement is in a function or not. #!/bin/basho() { # Runtime error, but no errors are asserted, # each $? is set to the return code. echo | return 10 echo $? (return 11) echo $? # Valid, each $? is set to the exit code. echo | exit 12 echo $? (exit 13) echo $?}o# Runtime errors are asserted, each $? is set to 1.echo | return 20echo $?(return 21)echo $?# Valid, each $? is set to the exit code.echo | exit 22echo $?(exit 23)echo $? Output: $ bash script.sh 10111213script.sh: line 20: return: can only `return' from a function or sourced script1script.sh: line 22: return: can only `return' from a function or sourced script12223 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9491/"
]
} |
230,670 | I have 2 directories. SOURCE and DESTINATION. I am moving the .csv files from source to destination as mv -f SOURCE/*.csv DESTINATION/ While moving, I want to remove the first and last line from each of the file in destination. Please help me with the command. | You can't move a file, AND edit it at the same time, since moving a file doesn't physically move the data (on the same filesystem), it just moves a pointer to the data. You can copy and convert the data, then delete the original file, or you can edit the original file, then move it. cd SOURCEfor i in *.csvdo awk 'NR>2{print s} {s=$0}' < "$i" > ../DESTINATION/"${i}" rm "${i}"done If you omit the rm line, it gives you the opportunity to verify that everything was converted the way you want, before you delete the source files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42312/"
]
} |
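An alternative sketch of the same move-and-trim step using sed, which reads as "delete line 1 and the last line"; the paths mirror the answer and are assumptions.

    cd SOURCE
    for i in *.csv
    do
      sed '1d;$d' "$i" > ../DESTINATION/"$i" && rm "$i"
    done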
230,673 | I would like to generate a random string (e.g. passwords, user names, etc.). It should be possible to specify the needed length (e.g. 13 chars). What tools can I use? (For security and privacy reasons, it is preferable that strings are generated off-line, as opposed to online on a website.) | My favorite way to do it is by using /dev/urandom together with tr to delete unwanted characters. For instance, to get only digits and letters: tr -dc A-Za-z0-9 </dev/urandom | head -c 13 ; echo '' Alternatively, to include more characters from the OWASP password special characters list : tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 13 ; echo If you have some problems with tr complaining about the input, try adding LC_ALL=C like this: LC_ALL=C tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 13 ; echo | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/230673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55183/"
]
} |
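If this is something you do often, the pipeline above can be wrapped in a tiny helper; the function name and default length are assumptions.

    genstring() {
      local len="${1:-13}"
      LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "$len"
      echo   # trailing newline so the string is easy to copy
    }
    genstring      # 13 characters
    genstring 32   # longer, e.g. for passwords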
230,678 | This happens whether I use ctrl+shift+v or paste from the right-click menu. What could be done to prevent this behavior? | Don't copy multiple lines of text to paste. I can almost guarantee you're simply copying the last part of the line, i.e. the trailing newline. If you're triple-clicking to copy that line of code you're pasting, you're getting the newline at the end of the line. If you want to be sure that is really the problem, copy the entire line except for the last letter/digit, and see if pasting that also includes a newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122833/"
]
} |
230,735 | If I had a running debian system, the following command could be issued to get list of installed packages: dpkg --get-selections > packages.lst But now I have only a full backup of root partition (complete system backup) of the working system and nothing more. How can I generate list of installed packages from these files? | chroot into it, and run dpkg would be the easiest thing. See https://superuser.com/a/417004/20798 for how to get a working /proc , /sys , and /dev inside the chroot. Since you have a working debian system outside the backup, you could probably just use dpkg --admindir=dir --get-selections The dir defaults to /var/lib/dpkg , so put the path to your backup's /var/lib/dpkg . Don't forget that dpkg --get-selections doesn't show which packages were manually installed, and which were only installed to satisfy dependencies (and thus should be auto-removed when no longer needed because newer versions of the packages you actually want have different deps, or because you purge a manually installed package.) I use aptitude, which makes it easy to mark everything as auto-installed, then go through and mark some packages as manually installed until nothing you want to keep is getting auto-removed. Start with big meta-packages, like build-essential , the Debian equivalents of ubuntu-standard and ubuntu-desktop , and stuff like that. In aptitude, hit r to see the reverse-depends of a package (pkgs that depend on it). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65781/"
]
} |
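A minimal sketch of the non-chroot variant described above, assuming the backup of the root partition is available (mounted or extracted) under /mnt/backup.

    backup_root=/mnt/backup   # assumption: where the backup's filesystem tree lives
    dpkg --admindir="$backup_root/var/lib/dpkg" --get-selections > packages.lst
    wc -l packages.lst        # quick sanity check that something was found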
230,742 | What I'm asking is a little bit specific, and might be a different than other autocomplete questions on Unix Stackexchange. Suppose I have a directory that looks like this -rw-r--r-- 1 hlin117 staff 1.1K Sep 19 13:05 doc.aux-rw-r--r-- 1 hlin117 staff 26K Sep 19 13:05 doc.log-rw-r--r-- 1 hlin117 staff 177K Sep 19 13:05 doc.pdf-rw-r--r-- 1 hlin117 staff 13K Sep 19 13:01 doc.tex It makes very little sense to try doing vim doc.pdf , and in the common case, I wouldn't be doing vim doc.log or vim doc.aux . Instead, I'd often do vim doc.tex Unfortunately, tab-autocomplete will suggest to me all 4 files instead of only doc.tex . Is there a way where I could type vim \t , and this would ignore some certain files in my directory? More generally, can I type command X \t , and write some setting where typing command X will ignore files in my directory? FYI: I use zsh. Not sure whether bash and zsh will have similar solutions. | In zsh, with the “new” completion system (i.e. if you have compinit in your .zshrc ), use the file-patterns style . zstyle ':completion:*:*:vim:*' file-patterns '^*.(aux|log|pdf):source-files' '*:all-files' Files matching the pattern *.(aux|log|pdf) will only be completed on the vim command line if there would otherwise be no completion. You can use a pattern for the command name, in particular * to match all commands except the ones that are matched explicitly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96629/"
]
} |
230,800 | I am trying to convert my video library to HEVC format to gain space. I ran the following command on all of the video files in my library: #!/bin/bashfor i in *.mp4;do #Output new files by prepending "X265" to the names avconv -i "$i" -c:v libx265 -c:a copy X265_"$i"done Now, most videos convert fine and the quality is the same as before. However, a few videos which are of very high quality (e.g. one movie print which is of 5GB) loses quality -- the video is all pixelated. I am not sure what to do in this case. Do I need to modify the crf parameter in my command line? Or something else? The thing is, I am doing a bulk conversion. So, I need a method where avconv automatically adjusts whatever parameter needs adjustment, for each video. UPDATE-1 I found that crf is the knob I need to adjust. The default CRF is 28. For better quality, I could use something less than 28. For example: avconv -i input.mp4 -c:v libx265 -x265-params crf=23 -c:a copy output.mp4 However, the problem is that for some videos CRF value of 28 is good enough, while for some videos, lower CRF is required. This is something which I have to check manually by converting small sections of the big videos. But in bulk conversion, how would I check each video manually? Is their some way that avconv can adjust CRF according to the input video intelligently? UPDATE-2 I found that there is a --lossless option in x265: http://x265.readthedocs.org/en/default/lossless.html . However, I don't know how to use it correctly. I tried using it in the following manner but it yielded opposite results (the video was even more pixelated): avconv -i input.mp4 -c:v libx265 -x265-params lossless -c:a copy output.mp4 | From my own experience, if you want absolutely no loss in quality, --lossless is what you are looking for. Not sure about avconv but the command you typed looks identical to what I do with FFmpeg . In FFmpeg you can pass the parameter like this: ffmpeg -i INPUT.mkv -c:v libx265 -preset ultrafast -x265-params lossless=1 OUTPUT.mkv Most x265 switches (options with no value) can be specified like this (except those CLI-only ones, those are only used with x265 binary directly). With that out of the way, I'd like to share my experience with x265 encoding. For most videos (be it WMV, or MPEG, or AVC/H.264) I use crf=23 . x265 decides the rest of the parameters and usually it does a good enough job. However often before I commit to transcoding a video in its entirety, I test my settings by converting a small portion of the video in question. Here's an example, suppose an mkv file with stream 0 being video, stream 1 being DTS audio, and stream 2 being a subtitle: ffmpeg -hide_banner \-ss 0 \-i "INPUT.mkv" \-attach "COVER.jpg" \-map_metadata 0 \-map_chapters 0 \-metadata title="TITLE" \-map 0:0 -metadata:s:v:0 language=eng \-map 0:1 -metadata:s:a:0 language=eng -metadata:s:a:0 title="Surround 5.1 (DTS)" \-map 0:2 -metadata:s:s:0 language=eng -metadata:s:s:0 title="English" \-metadata:s:t:0 filename="Cover.jpg" -metadata:s:t:0 mimetype="image/jpeg" \-c:v libx265 -preset ultrafast -x265-params \crf=22:qcomp=0.8:aq-mode=1:aq_strength=1.0:qg-size=16:psy-rd=0.7:psy-rdoq=5.0:rdoq-level=1:merange=44 \-c:a copy \-c:s copy \-t 120 \"OUTPUT.HEVC.DTS.Sample.mkv" Note that the backslashes signal line breaks in a long command, I do it to help me keep track of various bits of a complex CLI input. 
Before I explain it line-by-line, the part where you convert only a small portion of a video is the second line and the second last line: -ss 0 means seek to 0 second before starts decoding the input, and -t 120 means stop writing to the output after 120 seconds. You can also use hh:mm:ss or hh:mm:ss.sss time formats. Now line-by-line: -hide_banner prevents FFmpeg from showing build information on start. I just don' want to see it when I scroll up in the console; -ss 0 seeks to 0 second before start decoding the input. Note that if this parameter is given after the input file and before the output file, it becomes an output option and tells ffmpeg to decode and ignore the input until x seconds, and then start writing to output. As an input option it is less accurate (because seeking is not accurate in most container formats), but takes almost no time. As an output option it is very precise but takes a considerable amount of time to decode all the stream before the specified time, and for testing purpose you don't want to waste time; -i "INPUT.mkv" : Specify the input file; -attach "COVER.jpg" : Attach a cover art (thumbnail picture, poster, whatever) to the output. The cover art is usually shown in file explorers; -map_metadata 0 : Copy over any and all metadata from input 0, which in the example is just the input; -map_chapters 0 : Copy over chapter info (if present) from input 0; -metadata title="TITLE" : Set the title of the video; -map 0:0 ... : Map stream 0 of input 0, which means we want the first stream from the input to be written to the output. Since this stream is a video stream, it is the first video stream in the output , hence the stream specifier :s:v:0 . Set its language tag to English; -map 0:1 ... : Similar to line 8, map the second stream (DTS audio), and set its language and title (for easier identification when choosing from players); -map 0:2 ... : Similar to line 9, except this stream is a subtitle; -metadata:s:t:0 ... : Set metadata for the cover art. This is required for mkv container format; -c:v libx265 ... : Video codec options. It's so long that I've broken it into two lines. This setting is good for high quality bluray video (1080p) with minimal banding in gradient (which x265 sucks at). It is most likely an overkill for DVDs and TV shows and phone videos. This setting is mostly stolen from this Doom9 post ; crf=22:... : Continuation of video codec parameters. See the forum post mentioned above; -c:a copy : Copy over audio; -c:s copy : Copy over subtitles; -t 120 : Stop writing to the output after 120 seconds, which gives us a 2-minute clip for previewing trancoding quality; "OUTPUT.HEVC.DTS.Sample.mkv" : Output file name. I tag my file names with the video codec and the primary audio codec. Whew. This is my first answer so if there is anything I missed please leave a comment. I'm not a video production expert, I'm just a guy who's too lazy to watch a movie by putting the disc into the player. PS. Maybe this question belongs to somewhere else as it isn't strongly related to Unix & Linux. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/230800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89385/"
]
} |
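To act on the "test a small portion first" advice across a library, a sketch like the following encodes a two-minute sample of one file at several CRF values so you can eyeball which one is acceptable before committing to the full transcode; the CRF list, preset, and file names are assumptions.

    input="INPUT.mkv"
    for crf in 20 23 26 28; do
      ffmpeg -ss 0 -i "$input" -t 120 \
        -c:v libx265 -preset medium -x265-params "crf=${crf}" \
        -c:a copy "sample_crf${crf}.mkv"
    done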
230,832 | How do I tell my zsh to automatically try a command with git in front, if the command is not found? E.g. I want to run $ status and if there is no status in $PATH , my zsh should try git status . | This sounds fragile — you could get into the habit of typing foo instead of git foo , and then one day a new foo command appears and foo no longer invokes git foo — but it can be done. When a command is not found with normal lookup (alias, function, builtin, executable on PATH ), zsh invokes the command_not_found_handler function (if it's defined). This function receives the command and the command's arguments as its arguments. command_not_found_handler () { git "$@"} If you want to do some fancier filtering, the command is in $1 and its arguments can be referred to as "$@[2,$#]" . command_not_found_handler () { if …; then git "$1" "$@[2,$#]" fi} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30769/"
]
} |
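To make the fallback a little less fragile, the handler can be restricted to a whitelist of git subcommands; this zsh sketch uses an arbitrary list, so adjust it to taste.

    command_not_found_handler () {
      case "$1" in
        status|log|diff|add|commit|fetch|pull|push)
          git "$@"            # $1 is the subcommand, the rest are its arguments
          ;;
        *)
          print -u2 "zsh: command not found: $1"
          return 127
          ;;
      esac
    }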
230,862 | Visiting some forum online that discuss about Debian and Xubuntu, I saw some users that add this line in the signature field: ...With no systemd... This line is showed with pride (it seems to me). From Wikipedia : systemd is a suite of system management daemons, libraries, and utilities designed as a central management and configuration platform for the Linux computer operating system. So systemd doesn't seem like a bad thing, so why do people write with pride that they don't use it? Can systemd be dangerous, or just bad for you? | No, it is neither dangerous nor bad for you. You have stumbled upon a little battle of the init wars . I will not get into this in detail but, briefly, the situation is as follows. Linux has been using sysvinit for most of its lifetime. This is old and lacks features and the one thing pretty much everyone agrees on is that it needs to be changed. However, nobody can agree on what it should be changed to. Various alternatives were proposed, including--but not limited to-- the following: systemd upstart Both of these are good in their own way and bad in others. As so often happens in the geek world, the choice of which init system (either one of those two or another) to adopt became something similar to a religious war. So, you happened to come across someone who dislikes systemd and, therefore, is proud of not using it. There are various people who have the opposite opinion and think that systemd is wonderful and everything else awful. Just like there is on any other subject on the wide and wonderful interwebs. Happily, the init wars are simmering down and are now past their prime. Most Linux distributions have decided to switch to systemd . Even Canonical's Ubuntu, despite their being the force behind upstart . So, today, systemd is actually the init system of choice for pretty much all major disrtibutions except Gentoo (image source ): | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110972/"
]
} |
230,872 | I'd like to show the state of xkb on status bar of (preferably any) window manager. State should include LED-indicators (both physical and virtual), modifier state, and both keycode and keysym each time some key is pressed. I've looked at xkbvleds with an intention to retrieve LED-state, but as far as I can see, it does not return any parsable information; just a new window with pre-defined appearance. I don't see a way to use it in the way I want to. I've looked at xev with an intention to retrieve the keycodes and keysyms of pressed keys, but it works only if focus is on specified window. I'd like to monitor keypresses globally, letting them through unmodified. Finally, I've looked at xinput , and it seems to me that I might be able to retrieve keypresses with that. It just looks a rather painful way to achieve what I want (if it would even work). This does not seem that exotic need in my eyes, which makes me think that I'm looking it in a wrong way, and/or missing something more or less obvious. Personally I'm looking for window manager -independent solutions here. | No, it is neither dangerous nor bad for you. You have stumbled upon a little battle of the init wars . I will not get into this in detail but, briefly, the situation is as follows. Linux has been using sysvinit for most of its lifetime. This is old and lacks features and the one thing pretty much everyone agrees on is that it needs to be changed. However, nobody can agree on what it should be changed to. Various alternatives were proposed, including--but not limited to-- the following: systemd upstart Both of these are good in their own way and bad in others. As so often happens in the geek world, the choice of which init system (either one of those two or another) to adopt became something similar to a religious war. So, you happened to come across someone who dislikes systemd and, therefore, is proud of not using it. There are various people who have the opposite opinion and think that systemd is wonderful and everything else awful. Just like there is on any other subject on the wide and wonderful interwebs. Happily, the init wars are simmering down and are now past their prime. Most Linux distributions have decided to switch to systemd . Even Canonical's Ubuntu, despite their being the force behind upstart . So, today, systemd is actually the init system of choice for pretty much all major disrtibutions except Gentoo (image source ): | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/230872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/69124/"
]
} |
230,887 | I understand > /dev/null redirects things to /dev/null which acts like a blackhole. However, I don't understand what < /dev/null means. I saw some script written like this: nohup myprogram > foo.out 2> foo.err < /dev/null & So, what does < /dev/null in the code above mean? here's an example where it's suggested | It ensures that all I/O streams are accounted-for/occupied. This way, the backgrounded process has nothing "tied" to the terminal so you can go about your business without your program trying to read from TTY, which would cause the terminal to hang. In this case, since you're launching the process over ssh from a shell script, it's making sure that the script can move along unencumbered. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72619/"
]
} |
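A tiny sketch of what a program actually sees on stdin when it is started with < /dev/null : read hits end-of-file immediately, so nothing can ever block waiting for terminal input.

    if read -r line < /dev/null; then
      echo "read something: $line"
    else
      echo "EOF on stdin - nothing to read, nothing to wait for"
    fi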
230,899 | I have a server running openldap 2.4.31 in which I store my user and group posix accounts. How can I automatically copy the user and group accounts on the first login so that if the machine disconnects from the ldap server the user can still login? Also, would it be possible to automatically update the password and group membership for the local account if it is updated on the ldap server provided they are connected again? The openldap server is running on ubuntu 14.04 and the other machines are running ubuntu 14.04, CentOS 7 and Arch linux. What would be the common way to solve this in a company network running only linux machines? With windows machines this seems to be solved using active directory and maybe some policies but in a company with centralized login servers and laptops with either linux only or mixed OS I supposed this is done with ldap or radius or both. | It ensures that all I/O streams are accounted-for/occupied. This way, the backgrounded process has nothing "tied" to the terminal so you can go about your business without your program trying to read from TTY, which would cause the terminal to hang. In this case, since you're launching the process over ssh from a shell script, it's making sure that the script can move along unencumbered. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/230899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135031/"
]
} |
231,019 | In Vim, how can I transform these lines of text: stringa1 minuscolostringa2 minuscolostringa33 minuscolostringa44 minuscolo into this: Stringa1 minuscoloStringa2 minuscoloStringa33 minuscolo | The following substitute command converts the first character of a line to uppercase. :%s/^./\u&/g replaces the first character in all lines, while :1,4s/^./\u&/g only acts on lines 1 to 4 ( 1,4 ); change the line range in the command as needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
231,055 | I need to split a file into unique file names. I can do it with sed command eg, sed -n '/scaffold135_/w 135-scaf.txt' input file.txt but it's time consuming so I need a smart way to do it faster. Below is an input sample (the original file has one million lines): scaffold1_115,T,N,N,N,N,A,N,N,N,N,N,N,T,N,T,T,N,A,A,N,N,Ascaffold1_123,A,N,N,N,N,G,N,N,N,N,N,N,A,N,A,A,N,G,G,N,N,Gscaffold1_140,C,N,N,N,N,C,N,N,N,N,N,N,C,N,C,C,N,T,C,N,N,Cscaffold2_161,G,N,N,N,N,G,N,C,N,N,C,N,G,N,G,G,N,G,G,C,N,Gscaffold2_162,C,N,N,N,N,C,N,T,N,N,T,N,C,N,C,C,N,C,C,T,N,Cscaffold2_180,C,N,N,N,N,C,N,T,N,N,C,C,C,T,C,C,T,C,C,C,N,Cscaffold2_194,C,N,N,C,N,C,C,C,C,C,C,C,C,C,T,C,C,C,C,C,N,Cscaffold3_195,G,N,N,G,G,C,G,G,G,G,G,G,C,G,C,G,G,C,C,G,N,Cscaffold3_234,T,N,A,T,A,A,T,T,T,A,T,A,A,T,A,A,T,A,A,T,N,Ascaffold101_282,C,T,T,T,C,C,T,C,T,C,C,C,C,T,C,C,T,C,C,C,N,Cscaffold101_371,T,T,T,T,T,C,T,T,T,T,T,T,T,T,T,T,T,T,T,T,N,Cscaffold101_372,T,T,T,T,C,C,T,T,T,T,T,T,T,T,T,T,T,T,T,T,N,C The lines are unique. I want lines specific to each scafold into a separate file, say all lines that start with scaffold1_ into a file named scaffold1.txt and so on until scaffold10156.txt which contains the lines starting with scaffold10156_ | You should be able to use redirection with awk awk -F'_' '{print > $1".txt"}' file If lines sharing the scaffoldn_ prefix are contiguous, you could do the following to avoid breaching open file handles limit awk -F'_' 'NR == 1 || $1 != prev{if (f) close(f);f=$1".txt"; prev=$1}; {print > f};END{if (f) close(f)}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135126/"
]
} |
231,059 | I am trying to alias an executable in a directory with a space in it. For example: alias myfile="/home/ben/test case/myfile" Now, this is not expanded the way I want (it thinks /home/ben/test is the executable). In bash you can add extra quotes: alias myfile="'/home/ben/test case/myfile'" Sadly, in fish this does not work. What should I do instead? | alias in fish is just a wrapper for the function builtin; it exists for backward compatibility with the POSIX shell. alias in fish doesn't work like POSIX alias . If you want the equivalent of POSIX alias , you must use abbr , which was added in fish 2.2.0 : abbr -a myfile "'/home/ben/test case/myfile'" or: abbr -a myfile "/home/ben/test\ case/myfile" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10813/"
]
} |
231,074 | I removed a file and now I see: $ lstotal 64-rw-rw-r-- 1 502 17229 Sep 17 16:42 page_object_methods.rbdrwxrwxr-x 7 502 238 Sep 18 18:41 ../-rw-rw-r-- 1 502 18437 Sep 18 18:41 new_page_object_methods.rb-rw-r--r-- 1 502 16384 Sep 18 18:42 .nfs0000000000b869e300000001drwxrwxr-x 5 502 170 Sep 21 13:48 ./13:48:11 *vagrant* ubuntu-14 selenium_rspec_conversion and if I try to remove it: $ rm .nfs0000000000b869e300000001rm: cannot remove ‘.nfs0000000000b869e300000001’: Device or resource busy What does this indicate? What should I do | A file can be deleted while it's open by a process. When this happens, the directory entry is deleted, but the file itself (the inode and the content) remain behind; the file is only really deleted when it has no more links and it is not open by any process. NFS is a stateless protocol: operations can be performed independently of previous operations. It's even possible for the server to reboot, and once it comes back online, the clients will continue accessing the files as before. In order for this to work, files have to be designated by their names, not by a handle obtained by opening the file (which the server would forget when it reboots). Put the two together: what happens when a file is opened by a client, and deleted? The file needs to keep having name , so that the client that has it open can still access it. But when a file is deleted, it is expected that no more file by that name exists afterwards. So NFS servers turn the deletion of an open file into a renaming : the file is renamed to .nfs… ( .nfs followed by a string of letters and digits). You can't delete these files (if you try, all that happens is that a new .nfs… appears with a different suffix). They will eventually go away when the client that has the file open closes it. (If the client disappears before closing the file, it may take a while until the server notices.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/231074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
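If you want to know which process is still holding such a file open (so you can close it instead of waiting), a check along these lines usually works; the path is an assumption, and it has to be run on the NFS client that has the file open.

    f=/path/to/.nfs0000000000b869e300000001
    lsof "$f"       # lists the PID/command holding the file open, if any
    fuser -v "$f"   # alternative if lsof is not installed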
231,124 | I just compiled a new kernel and asked myself: What decides during the compilation process which kernel modules are built in the kernel statically? I then deleted /lib/modules , rebooted and found that my system works fine, so it appears all essential modules are statically built in the kernel. Without /lib/modules , the kernel loads 22. With the directory present, it loads 67 modules. | You do this as part of the configuration process, usually when you run make config , make menuconfig or similar. You can set the module as built-in (marked as * ), or modularised (marked as M ). You can see examples of this in a screenshot of make menuconfig , from here : | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/231124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
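After configuration the choices are recorded in the kernel .config, so you can also answer the question after the fact by counting built-in (=y) versus modular (=m) options; the config locations below are common but may differ on your system.

    cfg=/boot/config-"$(uname -r)"   # many distros install the build config here
    grep -c '=y$' "$cfg"             # options compiled into the kernel image
    grep -c '=m$' "$cfg"             # options built as loadable modules
    # If the running kernel was built with CONFIG_IKCONFIG_PROC, the same data is available here:
    zcat /proc/config.gz 2>/dev/null | grep -c '=m$'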
231,138 | I just installed VirtualBox on my Mac, created a new Ubuntu Virtual Machine with "Use an existing virtual hard disk file" of the Cloudera Hadoop disk image. I'm able to start and run the virtual machine, however, I'd prefer to ssh into it from my terminal. The following produces the message "connect to host 127.0.0.1 port 2222: Connection refused": ssh [email protected] -p 2222 I've also tried -p 22 I've also tried using "cloudera" as the user. Is there a VirtualBox setting I need to change to allow SSH? I've also just tried to create a new linux virtual machine without using the Cloudera disk image, and I can't SSH into that either. | I have a Mac on which I had installed VirtualBox. So this is what worked for me: Click on the cloudera image and click Settings. After that click on Network -> Adapter 1 (attached to NAT by default) -> Advanced -> Port Forwarding. Add a new entry (click on + to add) with the following settings: Host Port: 1111, Guest Port: 22; leave the host IP and guest IP blank. Connect from your Mac cmd shell using the following: ssh -p 1111 cloudera@localhost On Ubuntu 18.04, additionally install ssh if necessary (usually signaled by unknown cmd ssh for the previous cmd) and reboot: sudo apt-get install openssh-server | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/231138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135162/"
]
} |
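The same port-forwarding rule can be added from the command line instead of the GUI; this sketch assumes the VM name and reuses the ports from the answer (run it while the VM is powered off).

    VBoxManage modifyvm "cloudera-quickstart" --natpf1 "guestssh,tcp,,1111,,22"
    ssh -p 1111 cloudera@localhost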
231,184 | I'm thinking about switching to mutt for email. However, I have a few requirements. I'd like to be able to store the email offline. I'd like to have email pushed immediately to my local computer as opposed to periodic polling (e.g. using IMAP IDLE). For offline storage, I could use imapoffline or isync . I understand that the latter is more stable. However, to have email pushed on demand, the only option I've found for isync is mswatch . Unfortunately, this requires a program to be installed on the remote email server, which is not possible. Is there a solution that will allow me to use mutt , with offline email storage and instant email delivery? | Unfortunately, the two possibilities suggested in the other answer were imperfect. offlineimap was fairly buggy at the best of times. For example, there is no way to automatically run a script after new mail arrives. fetchmail doesn't synchronise bidirectionally. Instead, the solution that I ended up using was a combination of imapnotify and isync . I configured imapnotify to run a script when new mail is triggered (via IDLE). This script runs mbsync "${channel}:INBOX" depending on which account has mail. Next it runs notmuch new . Finally, it records the number of unread emails to a file as below. The contents of this file is displayed on a panel of my desktop environment. mail_count_file="/home/foo/.cache/new_mail_count"new_count=$(find ~/.mail/*/Inbox/new -type f | wc -l)if [[ $new_count > 0 ]]; then echo $new_count > "$mail_count_file"else if [[ -f "$mail_count_file" ]]; then rm "$mail_count_file" fifi Update imapnotify (nodejs-imapnotify) disconnects regularly with no warnings/errors, and often misses new mail. python-imapnotify also works intermittently. However, goimapnotify works very well in my experience. It rarely drops out, and when it does (e.g. because of network disconnects and/or suspend cycles), it quickly restarts itself without fuss. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
231,213 | I have a file which contains a gene sequence such as: ATGTGGATGGTGGGTTACAATGAAGGTGGTGAGTTCAACATGGCTGATTATCCATTCAGTGGAAGGAAACTAAGGCCTCTCATTCCAAGACCAGTCCCAGTCCCTACTACTTCTCCTAACAGCACTTCAACTATAACTCCTTCCTTAAACCGCATTCATGGTGGCAATGATTTATTTTCACAATATCATCACAATCTGCAGCAGCAAGCATCAGTAGGAGATCATAGCAAGAGATCAGAGTTGAATAATAATAATAATCCATCTGCAGCAGTTGTGGTGAGTTCAAGATGGAATCCAACACCAGAACAGTTAAGAGCACTGGAAGAATTGTATAGAAGAGGAACAAGAACACCTTCTGCTGAGCAAATCCAACAAATAACTGCCCAGCTTAGAAAATTTGGAAAAATTGAAGGCAAAAATGTTTTCTATTGGTTTCAGAATCACAAAGCCAGAGAAAGGCAAAAACGACGGCGTCAAATGGAATCAGCAGCTGCTGAGTTTGATTCTGCTATTGAAAAGAAAGACTTAGGCGCAAGTAGGACAGTGTTTGAAGTTGAACACACTAAAAACTGGCTACCATCTACAAATTCCAGTACCAGTACTCTTCATCTTGCAGAGGAATCTGTTTCAATTCAAAGGTCAGCAGCAGCAAAAGCAGATGGATGGCTCCAATTCGATGAAGCAGAATTACAGCAAAGAAGAAACTTTATGGAAAGGAATGCCACGTGGCATATGATGCAGTTAACTTCTTCTTGTCCTACAGCTAGCATGTCCACCACAACCACAGTAACAACTAGACTTATGGACCCAAAACTCATCAAGACCCATGAACTCAACTTATTCATTTCACCTCACACATACAAAGAAAGAGAAAACGCTTTTATCCACTTAAATACTAGTAGTACTCATCAAAATGAATCTGATCAAACCCTTCAACTTTTCCCAATAAGGAATGGAGATCATGGATGCACTGATCATCATCATCATCATCATAACATTATCAAAGAGACACAGATATCAGCTTCAGCAATCAATGCACCCAACCAGTTTATTGAGTTTCTTCCCTTGAAAAACTGA I am trying to count the number of occurrence of "ATG" substring in the above string (which is only one line without line breaks.) My file contains tens (10s) of these sequences and I want to be able to count how many "ATG" in each sequence. Each sequence is separated from others by an empty line. I tried grep but did not know which options I should use (if at all grep can solve the problem) and I googled for any awk example but I did not find any. | Returns the number of occurrences of ATG in each line: awk -F'ATG' 'NF{print NF-1}' testfile This works for files with one or many lines. Example 1 Consider this test file: $ cat testfilexxATGxxATGATGxxxATGxxxxxATGxxxxATGxxATGxx The code correctly counts the occurrences of ATG: $ awk -F'ATG' 'NF{print NF-1}' testfile223 Example 2 Using the example in the current version of the question: $ cat >file1ATGTGGATGGTGGGTTACAATGAAGGTGGTGAGTTCAACATGGCTGATTATCCATTCAGTGGAAGGAAACTAAGGCCTCTCATTCCAAGACCAGTCCCAGTCCCTACTACTTCTCCTAACAGCACTTCAACTATAACTCCTTCCTTAAACCGCATTCATGGTGGCAATGATTTATTTTCACAATATCATCACAATCTGCAGCAGCAAGCATCAGTAGGAGATCATAGCAAGAGATCAGAGTTGAATAATAATAATAATCCATCTGCAGCAGTTGTGGTGAGTTCAAGATGGAATCCAACACCAGAACAGTTAAGAGCACTGGAAGAATTGTATAGAAGAGGAACAAGAACACCTTCTGCTGAGCAAATCCAACAAATAACTGCCCAGCTTAGAAAATTTGGAAAAATTGAAGGCAAAAATGTTTTCTATTGGTTTCAGAATCACAAAGCCAGAGAAAGGCAAAAACGACGGCGTCAAATGGAATCAGCAGCTGCTGAGTTTGATTCTGCTATTGAAAAGAAAGACTTAGGCGCAAGTAGGACAGTGTTTGAAGTTGAACACACTAAAAACTGGCTACCATCTACAAATTCCAGTACCAGTACTCTTCATCTTGCAGAGGAATCTGTTTCAATTCAAAGGTCAGCAGCAGCAAAAGCAGATGGATGGCTCCAATTCGATGAAGCAGAATTACAGCAAAGAAGAAACTTTATGGAAAGGAATGCCACGTGGCATATGATGCAGTTAACTTCTTCTTGTCCTACAGCTAGCATGTCCACCACAACCACAGTAACAACTAGACTTATGGACCCAAAACTCATCAAGACCCATGAACTCAACTTATTCATTTCACCTCACACATACAAAGAAAGAGAAAACGCTTTTATCCACTTAAATACTAGTAGTACTCATCAAAATGAATCTGATCAAACCCTTCAACTTTTCCCAATAAGGAATGGAGATCATGGATGCACTGATCATCATCATCATCATCATAACATTATCAAAGAGACACAGATATCAGCTTCAGCAATCAATGCACCCAACCAGTTTATTGAGTTTCTTCCCTTGAAAAACTGA This results in: $ awk -F'ATG' 'NF{print NF-1}' file1915 How it works awk implicitly loops through every line of a file. Each line is divided into fields. -F'ATG' This tells awk to use ATG as the field separator. NF{print NF-1} For each non-empty line, this tells awk to print the number of fields minus 1. (On empty lines, the number of fields, NF , is zero. So, the condition NF evaluates to false on these lines, effectively skipping over them.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49075/"
]
} |
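An equivalent per-line count can also be written with awk's gsub(), which returns the number of substitutions it made; this sketch skips empty lines the same way the answer does.

    awk '/./ { print gsub(/ATG/, "") }' testfile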
231,239 | I am running a script which is collecting a log of a server. I need to redirect these logs to a ZIP file. Right now I am collecting data into a text file. How can I redirect it directly to ZIP? | Using UnZip 6.00 of 20 April 2009 , I was able to do this: $ date | zip jeff.zip -$ unzip -l jeff.zipArchive: jeff.zip Length Date Time Name--------- ---------- ----- ---- 29 01-21-2016 13:02 ---------- ------- 29 1 file$ unzip -p jeff.zip | catThu Jan 21 13:02:31 EST 2016$ unzip -p jeff.zip > newfilename.here This uses date as a substitute for your script that collects the log file, to stdout presumably; it sends that stdout to zip, telling it to take its input from stdin instead of a filename (with - ). The contents of the zip file aren't named anything recognizable, but the data is there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135234/"
]
} |
231,244 | I have CentOS and I want install scala by: Installing scala on CentOS . I want to ask if it's safe installing this as root user? The commands are: wget http://www.scala-lang.org/files/archive/scala-2.10.1.tgztar xvf scala-2.10.1.tgzsudo mv scala-2.10.1 /usr/libsudo ln -s /usr/lib/scala-2.10.1 /usr/lib/scalaexport PATH=$PATH:/usr/lib/scala/binscala -version | Using UnZip 6.00 of 20 April 2009 , I was able to do this: $ date | zip jeff.zip -$ unzip -l jeff.zipArchive: jeff.zip Length Date Time Name--------- ---------- ----- ---- 29 01-21-2016 13:02 ---------- ------- 29 1 file$ unzip -p jeff.zip | catThu Jan 21 13:02:31 EST 2016$ unzip -p jeff.zip > newfilename.here This uses date as a substitute for your script that collects the log file, to stdout presumably; it sends that stdout to zip, telling it to take its input from stdin instead of a filename (with - ). The contents of the zip file aren't named anything recognizable, but the data is there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135235/"
]
} |
231,265 | I am wondering about the difference between these two commands (i.e. only the order of their options is different): tar -zxvf foo.tar.gz tar -zfxv foo.tar.gz The first one ran perfectly but the second one said: tar: You must specify one of the `-Acdtrux' or `--test-label' optionsTry `tar --help' or `tar --usage' for more information. And tar with --test-label and -zfxv said : tar (child): xv: Cannot open: No such file or directorytar (child): Error is not recoverable: exiting nowtar: Child returned status 2tar: Error is not recoverable: exiting now Then I looked at the tar manual and realised that all the examples there put the -f switch at the end! AFAICT there is no need for this restriction, or is there? In my view switches should be order-free. | The order of switches is free, but -f has a mandatory argument which is the file that tar will read/write. You could do tar -zf foo.tar.gz -xv and that will work, which meets your requirement of a non-specific switch order. This is how all commands with options that take arguments work. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/231265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135244/"
]
} |
231,273 | Currently I have Linux Mint installed on my PC with a USB hard drive partition mounted as /home . This is working well. If I install a second USB hard drive, is there any chance Linux will get confused between the two, and try mount the second hard drive's partition as /home on boot? That would be bad. Coming from Windows, I've seen it happen often that drive letters are not "remembered" correctly causing all sorts of issues. I guess the main question is: How does Linux actually know which USB hard drive is /dev/sdb and which is /media/misha/my_2nd_drive ? | Usually the location of the USB port (Bus/Device) determines the order it's detected on. However, don't rely on this. Each file system has a UUID which stands for universally unique identifier ( FAT and NTFS use a slightly different scheme, but they also have an identifier that can be used as a UUID). You can rely on the (Linux) UUID to be unique. For more information about UUIDs, see this Wikipedia article . Use the disk UUID as a mount argument. To find out what the UUID is, run this: $ sudo blkid /dev/sdb1 ( blkid needs to read the device, hence it needs root powers, hence the sudo . If you've already become root, the sudo is not needed.) You can then use that UUID in /etc/fstab like this: UUID=7e839ad8-78c5-471f-9bba-802eb0edfea5 /home ext4 defaults 0 2 There can then be no confusion about what disk is to be mounted on /home. For manual mounting you can use /dev/disk/by-uuid/..... | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/231273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133840/"
]
} |
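For a one-off manual mount, the same UUID can be used through the stable symlinks maintained under /dev/disk/by-uuid ; the UUID below is the example one from the answer.

    ls -l /dev/disk/by-uuid/   # see which UUID points at which device node
    sudo mount /dev/disk/by-uuid/7e839ad8-78c5-471f-9bba-802eb0edfea5 /home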
231,314 | How do I configure sshd to 1) require public key and 2) require a password for login? Note that I am not referring to the symmetric encryption of the client's key here. I am referring to a server-side password. Is this possible? | The answer linked in the other answer is really old and many things have changed since then. So once again: If you read through the manual page for sshd_config(5) , there is the option AuthenticationMethods , which takes a list of methods you need to pass before you are granted access. Your required setup is: AuthenticationMethods publickey,password This method should work on all current Linux systems with recent openssh (openssh-6, openssh-7). Older systems The only exception I know about is RHEL 6 (openssh-5.3), which requires setting a different option with the same values (as described in an Information Security answer ): RequiredAuthentications2 pubkey,password | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10911/"
]
} |
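A sketch of applying and sanity-checking the setting; the config path is the usual one, and the service name may be ssh or sshd depending on the distribution.

    echo 'AuthenticationMethods publickey,password' | sudo tee -a /etc/ssh/sshd_config
    sudo sshd -t                    # validate the configuration before restarting
    sudo systemctl restart sshd     # or: sudo systemctl restart ssh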
231,346 | How would I count the number of files in a given directory that the current user has read permissions and write permissions on ? I am starting with: echo "whats the directory you want to check ?"read dir not sure then should I use a find command ? | You need to ask root to get you the list of files (for the ones that are below directories you can access but not read) and then check for the rights: sudo find "$dir" -print0|perl -Mfiletest=access -l -0ne'++$n if-r&&-w}{print+$n' If you don't care about files that are below non-readable directories (but you can still read and write), with GNU find : find "$dir" -writable -readable -printf . | wc -c Note that both check the access permissions (of every type of file including directories), it's not only based on permissions. It should give you the number of files that you would successfully open in read+write mode (without creation). For instance, for symlinks for which permissions are rwxrwxrwx, it only reports those that point to a file that you have read and write permission to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135298/"
]
} |
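Completing the skeleton from the question in plain shell looks roughly like this; unlike the find-based answers it only examines regular files it can list in the top level of the directory.

    #!/bin/bash
    echo "whats the directory you want to check ?"
    read -r dir
    count=0
    for f in "$dir"/*; do
      [ -f "$f" ] && [ -r "$f" ] && [ -w "$f" ] && count=$((count + 1))
    done
    echo "$count files are readable and writable by the current user"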
231,349 | I need to copy all files listed in TXT file from one location to another location. /1132526906_tt_nad87_1.jpg /thumb/t1132526906_tt_nad87_1.jpg/1132526906_tt_nad87_10.jpg /thumb/t1132526906_tt_nad87_10.jpg/1132526906_tt_nad87_11.jpg /thumb/t1132526906_tt_nad87_11.jpg/1132526907_tt_nad87_12.jpg /thumb/t1132526907_tt_nad87_12.jpg/1132526907_tt_nad87_13.jpg /thumb/t1132526907_tt_nad87_13.jpg/1132526908_tt_nad87_14.jpg /thumb/t1132526908_tt_nad87_14.jpg I can create a CSV file and divide source and target with some usable character. Example I would like to copy file 1132526906_tt_nad87_1.jpg from ./ to /thumb/t1132526906_tt_nad87_1.jpg - rename and move. Question Is there some command line command to do it? I found examples but these examples do only copying (without rename). UPDATE I created this script: #!/bin/bashinput="/data/web/web/gallery/data.csv"while IFS=',' read from todo echo "from: $from, to: $to"done < "$input" but no line is "echoed", it seems that data.csv file is not read. Is there something wrong? | You need to ask root to get you the list of files (for the ones that are below directories you can access but not read) and then check for the rights: sudo find "$dir" -print0|perl -Mfiletest=access -l -0ne'++$n if-r&&-w}{print+$n' If you don't care about files that are below non-readable directories (but you can still read and write), with GNU find : find "$dir" -writable -readable -printf . | wc -c Note that both check the access permissions (of every type of file including directories), it's not only based on permissions. It should give you the number of files that you would successfully open in read+write mode (without creation). For instance, for symlinks for which permissions are rwxrwxrwx, it only reports those that point to a file that you have read and write permission to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/231349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135300/"
]
} |
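A minimal sketch of the copy-and-rename loop the question's UPDATE is building toward; the base directory and the comma delimiter are taken from the question and remain assumptions (if no lines are echoed, a different delimiter or CRLF line endings in data.csv are worth checking).

    #!/bin/bash
    input="/data/web/web/gallery/data.csv"   # list of "source,target" pairs
    base="/data/web/web/gallery"             # assumption: both paths are relative to this
    while IFS=',' read -r from to; do
      [ -n "$from" ] && [ -n "$to" ] || continue   # skip malformed or empty lines
      cp -- "$base$from" "$base$to"
    done < "$input"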
231,386 | On Fedora 22, gpg doesn't find gpg-agent: % gpg-agent --daemon % gpg -vvv --use-agent --no-tty --decrypt file.gpg gpg: using character set `utf-8':pubkey enc packet: version 3, algo 1, keyid 3060B8F7271AFBAF data: [4094 bits]gpg: public key is 271AFBAFgpg: using subkey 271AFBAF instead of primary key 50EA64D5gpg: using subkey 271AFBAF instead of primary key 50EA64D5gpg: gpg-agent is not available in this sessiongpg: Sorry, no terminal at all requested - can't get input | Looking at the versions reveals the problem: % gpg-agent --versiongpg-agent (GnuPG) 2.1.7% gpg --version gpg (GnuPG) 1.4.19 The components come from different packages ( gnupg2-2.1.7-1.fc22.x86_64 and gnupg-1.4.19-2.fc22.x86_64 in my case). The solution is to use the gpg2 command instead of gpg . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/231386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24042/"
]
} |