source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k chars) | response (string, 0 to 111k chars) | metadata (dict)
---|---|---|---|
497,509 | Let's say I create 100 files with random text data of size 30MB each. Now I create a zip archive with 0 compression i.e. zip dataset.zip -r -0 *.txt . Now I want to extract just one file from this archive. As described here , there are two ways of unzipping/extracting files from archives: Seek to the end of the file and lookup the central directory. Then use that for fast random access to the file to be extracted.(Amortized O(1) complexity) Look through each local header and extract the one where theres a match.( O(n) complexity) Which method does unzip use? From my experiments it seems like it uses method 2? | When searching for a single file in a large archive, it uses method 1, which you can see using strace : open("dataset.zip", O_RDONLY) = 3ioctl(1, TIOCGWINSZ, 0x7fff9a895920) = -1 ENOTTY (Inappropriate ioctl for device)write(1, "Archive: dataset.zip\n", 22Archive: dataset.zip) = 22lseek(3, 943718400, SEEK_SET) = 943718400read(3, "\340P\356(s\342\306\205\201\27\360U[\250/2\207\346<\252+u\234\225\1[<\2310E\342\274"..., 4522) = 4522lseek(3, 943722880, SEEK_SET) = 943722880read(3, "\3\f\225P\\ux\v\0\1\4\350\3\0\0\4\350\3\0\0", 20) = 20lseek(3, 943718400, SEEK_SET) = 943718400read(3, "\340P\356(s\342\306\205\201\27\360U[\250/2\207\346<\252+u\234\225\1[<\2310E\342\274"..., 8192) = 4522lseek(3, 849346560, SEEK_SET) = 849346560read(3, "D\262nv\210\343\240C\24\227\344\367q\300\223\231\306\330\275\266\213\276M\7I'&35\2\234J"..., 8192) = 8192stat("rand-28.txt", 0x559f43e0a550) = -1 ENOENT (No such file or directory)lstat("rand-28.txt", 0x559f43e0a550) = -1 ENOENT (No such file or directory)stat("rand-28.txt", 0x559f43e0a550) = -1 ENOENT (No such file or directory)lstat("rand-28.txt", 0x559f43e0a550) = -1 ENOENT (No such file or directory)open("rand-28.txt", O_RDWR|O_CREAT|O_TRUNC, 0666) = 4ioctl(1, TIOCGWINSZ, 0x7fff9a895790) = -1 ENOTTY (Inappropriate ioctl for device)write(1, " extracting: rand-28.txt "..., 37 extracting: rand-28.txt ) = 37read(3, "\275\3279Y\206\223\217}\355W%:\220YNT\0\257\260z^\361T\242\2\370\21\336\372+\306\310"..., 8192) = 8192 unzip opens dataset.zip , seeks to the end, then seeks to the start of the requested file in the archive ( rand-28.txt , at offset 849346560) and reads from there. The central directory is found by scanning the last 65557 bytes of the archive; see the code starting here : /*--------------------------------------------------------------------------- Find and process the end-of-central-directory header. UnZip need only check last 65557 bytes of zipfile: comment may be up to 65535, end-of- central-directory record is 18 bytes, and signature itself is 4 bytes; add some to allow for appended garbage. Since ZipInfo is often used as a debugging tool, search the whole zipfile if zipinfo_mode is true. ---------------------------------------------------------------------------*/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/497509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294990/"
]
} |
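To reproduce the experiment described in the entry above, a rough sketch follows; the file names, the 30 MB size and the `head -c 30M` suffix syntax (GNU coreutils) are assumptions taken from the question rather than verified commands, and the trace will write roughly 3 GB of test data to disk:

```bash
# Build an uncompressed archive of 100 files of random data, then watch
# which offsets unzip seeks to when extracting a single member.
for i in $(seq 1 100); do
    head -c 30M /dev/urandom > "rand-$i.txt"
done
zip dataset.zip -r -0 *.txt

# Only the open/seek/read calls are of interest here:
strace -e trace=open,openat,lseek,read unzip dataset.zip rand-28.txt 2>&1 | less
```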
497,526 | I am programming a Linux shell script that will print status banners during its execution only if the proper tool, say figlet , is installed (this is: reachable on system path ). Example: #!/usr/bin/env bashecho "foo"figlet "Starting"echo "moo"figlet "Working"echo "foo moo"figlet "Finished" I would like for my script to work without errors even when figlet is not installed . What could be a practical method ? | My interpretation would use a wrapper function named the same as the tool; in that function, execute the real tool if it exists: figlet() { if command -p figlet >/dev/null 2>&1 then command figlet "$@" else : fi} Then you can have figlet arg1 arg2... unchanged in your script. @Olorin came up with a simpler method: define a wrapper function only if we need to (if the tool doesn't exist): if ! command -v figlet > /dev/null; then figlet() { :; }; fi If you'd like the arguments to figlet to be printed even if figlet isn't installed, adjust Olorin's suggestion as follows: if ! command -v figlet > /dev/null; then figlet() { printf '%s\n' "$*"; }; fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
497,532 | I had two namespaces (NS) in my NVMe SSD (Samsung)and deleted both to create just one,but Ubuntu is not able to recognize the device upon deleting. How do I recover the drive now? Command used to delete: sudo nvme delete-ns /dev/nvme0n1 -n 1 Ubuntu 18.04.1 LTS Kernel 4.15 | My interpretation would use a wrapper function named the same as the tool; in that function, execute the real tool if it exists: figlet() { if command -p figlet >/dev/null 2>&1 then command figlet "$@" else : fi} Then you can have figlet arg1 arg2... unchanged in your script. @Olorin came up with a simpler method: define a wrapper function only if we need to (if the tool doesn't exist): if ! command -v figlet > /dev/null; then figlet() { :; }; fi If you'd like the arguments to figlet to be printed even if figlet isn't installed, adjust Olorin's suggestion as follows: if ! command -v figlet > /dev/null; then figlet() { printf '%s\n' "$*"; }; fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334127/"
]
} |
497,543 | The classic scenario with Operator Precedence, you have a line like : (cd ~/screenshots/ && ls screenshot* | head -n 5) And you don't know if it's parsed ((A && B) | C) or (A && B | C) ... The almost official documentation found here doesn't list the pipe in the list so I cannot simply check in the table. Furthermore in bash, ( is not only for changing the order of operations but creates a subshell , so I'm not 100% sure this lines is the equivalent of the previous line : ((cd ~/screenshots/ && ls screenshot*) | head -n 5) More generally, how to know the AST of a bash line? In python I have a function that gives me the tree so that I can easily double check the order of operation. | cd ~/screenshots/ && ls screenshot* | head -n 5 This is equivalent to cd ~/screenshots && { ls screenshot* | head -n 5 ; } (the braces group commands together without a subshell ). The precedence of | is thus higher (binds tighter) than && and || . That is, A && B | C and A || B | C always mean that only B's output is to be given to C . You can use (...) or { ... ; } to join commands together as a single entity for disambiguation if necessary: { A && B ; } | CA && { B | C ; } # This is the default, but you might sometimes want to be explicit You can test this out using some different commands. If you run echo hello && echo world | tr a-z A-Z then you'll get helloWORLD back: tr a-z A-Z upper-cases its input , and you can see that only echo world was piped into it, while echo hello went through on its own. This is defined in the shell grammar , although not terribly clearly: the and_or production (for && / || ) is defined to have a a pipeline in its body, while pipeline just contains command , which doesn't contain and_or - only the complete_command production can reach and_or , and it only exists at the top level and inside the bodies of structural constructs like functions and loops. You can manually apply that grammar to get a parse tree for a command, but Bash doesn't provide anything itself. I don't know of any shell that does beyond what's used for their own parsing. The shell grammar has a lot of special cases defined only semi-formally and it can be quite a mission to get right. Even Bash itself has sometimes gotten it wrong , so the practicalities and the ideal may be different. There are external parsers that attempt to match the syntax and produce a tree, and of those I will broadly recommend Morbig , which attempts to be the most reliable. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256871/"
]
} |
497,579 | The only command line solution for gapless playback I found so far (working with ALSA and JACK) is moc (»music on console«) . While I'm still searching for a simpler way I was wondering if it is possible to loop an audio file into a new file for a given number of times? Something like: loop-audio infile.flac --loop 32 outfile.flac for repeating infile.flac 32 times into outfile.flac | Sometimes it is just good to know that linux-life can be as easy as imagined, in this case by using SoX (Sound eXchange): sox infile.flac outfile.flac repeat 32 this even works with different file formats like: sox infile.flac outfile.mp3 repeat 32 would loop into a 128 kbps MP3 other bit rates can be set using the option: -C|--compression FACTOR Compression factor for output format getting an 320 kbps MP3 would be obtained with this command: sox infile.flac -C 320 outfile.mp3 repeat 32 and finally a simple gapless playback from the command line with mpv : mpv --loop-file infile.flac or the same even simpler: mpv --loop infile.flac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/497579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
497,585 | [EDITED to reflect answers below] I am looking for a way to create blocks of folders / directories from the command line or a script that will generate a top level folder titled "200000-209999" and then inside of that folder, sub-folders named thusly: 200000-200499200500-200999201000-201499... etc ...... etc ...208500-208999209000-209499209500-209999 The naming is spaced like you see, and then I would want to set up the next batch of top-level/sub-folders, "210000-219999," "220000-229999," etc. [EDIT] I came up with the following script based on the answers below to accomplish exactly what I am looking for. My additions may not be elegant scripting, so if it can be improved upon, let me know. #!/bin/bash## mkfolders.sh## Ask user for starting range of job #'s and create the subsequent# folder hiearchy to contain them all.####clearread -p 'Starting Job# in range: ' jobnummkdir "${jobnum}"-"$((jobnum + 9999))"for start in $(seq $jobnum 500 $((jobnum+9999))); do mkdir "${jobnum}"-"$((jobnum + 9999))"/"${start}"-"$((start + 499))"; doneechoecho Done!echo | Sometimes it is just good to know that linux-life can be as easy as imagined, in this case by using SoX (Sound eXchange): sox infile.flac outfile.flac repeat 32 this even works with different file formats like: sox infile.flac outfile.mp3 repeat 32 would loop into a 128 kbps MP3 other bit rates can be set using the option: -C|--compression FACTOR Compression factor for output format getting an 320 kbps MP3 would be obtained with this command: sox infile.flac -C 320 outfile.mp3 repeat 32 and finally a simple gapless playback from the command line with mpv : mpv --loop-file infile.flac or the same even simpler: mpv --loop infile.flac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/497585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334162/"
]
} |
497,639 | All the other questions on the SE network deal with scenarios where either the date is assumed to be now ( Q ) or where only a date is specified ( Q ). What I want to do is supply a date and time, and then subtract a time from that. Here is what I tried first: date -d "2018-12-10 00:00:00 - 5 hours - 20 minutes - 5 seconds" This results in 2018-12-10 06:39:55 - It added 7 hours. Then subtracted 20:05 minutes. After reading the man and info page of date , I thought I have it fixed with this: date -d "2018-12-10T00:00:00 - 5 hours - 20 minutes - 5 seconds" But, same result. Where does it even get the 7 hours from? I tried other dates as well because I thought maybe we had 7200 leap seconds on that day, who knows lol. But same results. A few more examples: $ date -d "2018-12-16T00:00:00 - 24 hours" +%Y-%m-%d_%H:%M:%S2018-12-17_02:00:00$ date -d "2019-01-19T05:00:00 - 2 hours - 5 minutes" +%Y-%m-%d_%H:%M:%S2019-01-19_08:55:00 But here it becomes interesting. If I omit the time on input, it works fine: $ date -d "2018-12-16 - 24 hours" +%Y-%m-%d_%H:%M:%S2018-12-15_00:00:00$ date -d "2019-01-19 - 2 hours - 5 minutes" +%Y-%m-%d_%H:%M:%S2019-01-18_21:55:00$ date --versiondate (GNU coreutils) 8.30 What am I missing? Update: I've added a Z at the end, and it changed the behaviour: $ date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S2019-01-19_04:00:00 I'm still confused though. There is not much about this in the GNU info page about date. I'm guessing this is a timezone issue, but quoting The Calendar Wiki on ISO 8601 : If no UTC relation information is given with a time representation, the time is assumed to be in local time. Which is what I want. My local time is set correctly too. I'm not sure why date would mess with the timezone at all in this simple case of me supplying a datetime and wanting to subtract something off of it. Shouldn't it subtract the hours from the date string first? Even if it does convert it to a date first and then does the subtraction, if I leave out any subtractions I get exactly what I want: $ date -d "2019-01-19T05:00:00" +%Y-%m-%d_%H:%M:%S2019-01-19_05:00:00 So IF this truly is a timezone issue, where does that madness come from? | That last example should have clarified things for you: timezones . $ TZ=UTC date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S2019-01-19_03:00:00$ TZ=Asia/Colombo date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S 2019-01-19_08:30:00 As the output clearly varies by the timezone, I'd suspect some non-obvious default taken for a time string without a timezone specified. Testing a couple of values, it seems to be UTC-05:00 , though I'm not sure what that is. $ TZ=UTC date -d "2019-01-19T05:00:00 - 2 hours" +%Y-%m-%d_%H:%M:%S%Z2019-01-19_08:00:00UTC$ TZ=UTC date -d "2019-01-19T05:00:00Z - 2 hours" +%Y-%m-%d_%H:%M:%S%Z2019-01-19_03:00:00UTC$ TZ=UTC date -d "2019-01-19T05:00:00" +%Y-%m-%d_%H:%M:%S%Z 2019-01-19_05:00:00UTC It's only used when performing date arithmetic. 
It seems the issue here is that - 2 hours is not taken as arithmetic, but as a timezone specifier : # TZ=UTC date -d "2019-01-19T05:00:00 - 2 hours" +%Y-%m-%d_%H:%M:%S%Z --debugdate: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC-02date: parsed relative part: +1 hour(s)date: input timezone: parsed date/time string (-02)date: using specified time as starting value: '05:00:00'date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02'date: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' = 1547881200 epoch-secondsdate: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns),date: new time = 1547884800 epoch-secondsdate: timezone: TZ="UTC" environment valuedate: final: 1547884800.000000000 (epoch-seconds)date: final: (Y-M-D) 2019-01-19 08:00:00 (UTC)date: final: (Y-M-D) 2019-01-19 08:00:00 (UTC+00)2019-01-19_08:00:00UTC So, not only is no arithmetic being done, there seems to be a daylight savings 1 hour adjustment on the time, leading to a somewhat nonsensical time for us. This also holds for addition: # TZ=UTC date -d "2019-01-19T05:00:00 + 5:30 hours" +%Y-%m-%d_%H:%M:%S%Z --debugdate: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC+05:30date: parsed relative part: +1 hour(s)date: input timezone: parsed date/time string (+05:30)date: using specified time as starting value: '05:00:00'date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=+05:30'date: '(Y-M-D) 2019-01-19 05:00:00 TZ=+05:30' = 1547854200 epoch-secondsdate: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns),date: new time = 1547857800 epoch-secondsdate: timezone: TZ="UTC" environment valuedate: final: 1547857800.000000000 (epoch-seconds)date: final: (Y-M-D) 2019-01-19 00:30:00 (UTC)date: final: (Y-M-D) 2019-01-19 00:30:00 (UTC+00)2019-01-19_00:30:00UTC Debugging a bit more, the parsing seems to be: 2019-01-19T05:00:00 - 2 ( -2 being the timezone), and hours (= 1 hour), with an implied addition. It becomes easier to see if you use minutes instead: # TZ=UTC date -d "2019-01-19T05:00:00 - 2 minutes" +%Y-%m-%d_%H:%M:%S%Z --debugdate: parsed datetime part: (Y-M-D) 2019-01-19 05:00:00 UTC-02date: parsed relative part: +1 minutesdate: input timezone: parsed date/time string (-02)date: using specified time as starting value: '05:00:00'date: starting date/time: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02'date: '(Y-M-D) 2019-01-19 05:00:00 TZ=-02' = 1547881200 epoch-secondsdate: after time adjustment (+0 hours, +1 minutes, +0 seconds, +0 ns),date: new time = 1547881260 epoch-secondsdate: timezone: TZ="UTC" environment valuedate: final: 1547881260.000000000 (epoch-seconds)date: final: (Y-M-D) 2019-01-19 07:01:00 (UTC)date: final: (Y-M-D) 2019-01-19 07:01:00 (UTC+00)2019-01-19_07:01:00UTC So, well, date arithmetic is being done, just not the one that we asked for. ¯\(ツ)/¯ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296862/"
]
} |
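A hedged workaround for the entry above: since GNU date reads a bare "- 2" after the time as a UTC-02 zone, expressing the subtraction with the "ago" keyword avoids the ambiguity. The expected output assumes the relative item is applied to the given local time:

```bash
# Subtract 2 hours without a bare "-" that could be read as a timezone;
# --debug (recent coreutils) shows how the string was actually parsed.
date --debug -d "2019-01-19T05:00:00 2 hours ago" +%Y-%m-%d_%H:%M:%S
# Expected (local time): 2019-01-19_03:00:00
```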
497,666 | I'm trying to delete all files containing a certain text like this: $ find ~/Library/MobileDevice/Provisioning\ Profiles/* -exec grep -l "text to search for" '{}' \; -delete/Users/build/Library/MobileDevice/Provisioning Profiles/06060826-3fb2-4d71-82c6-7b9d309b08d6.mobileprovisionfind: -delete: /Users/build/Library/MobileDevice/Provisioning Profiles/06060826-3fb2-4d71-82c6-7b9d309b08d6.mobileprovision: relative path potentially not safe However, as you can see, it's throwing a warning and then does not delete the file. How can I resolve this error? This is on a Mac. | macOS find is based on an older version of FreeBSD find whose -delete would not remove the files that were given as argument. When you do: find dir/* ... -delete Your shell is expanding that dir/* glob into a list of file paths (excluding the hidden ones, while find itself will not exclude the hidden files it finds in any of those dirs), so find receives something like: find dir/dir1 dir/dir2 dir/file1 dir/file2... ... -delete If dir/file1 matches macOS find 's -delete will refuse to delete it. It will happily delete a dir/dir1/.somefile if it matches though. That was changed in FreeBSD in 2013 , but the change apparently didn't make it to macOS. Here, the work around is easy: use find dir (or find dir/ if you want to allow for dir to be a symlink to a directory and find to descend into it) instead of find dir/* . So, in your case: find ~/Library/MobileDevice/Provisioning\ Profiles/ \ -exec grep -l "text to search for" '{}' \; -delete Or use the more efficient grep -l --null | xargs -0 approach . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/497666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201213/"
]
} |
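A sketch of the grep -l --null | xargs -0 pipeline mentioned at the end of the answer above; it assumes both grep and xargs on the system support NUL-delimited lists (GNU tools and recent macOS versions do):

```bash
# Delete every matching file, passing names NUL-delimited so the space in
# "Provisioning Profiles" is handled safely:
find ~/Library/MobileDevice/Provisioning\ Profiles/ -type f \
    -exec grep -l --null "text to search for" {} + |
    xargs -0 rm -f
```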
497,674 | I'm trying to use grep to show only lines containing either of the two words, if only one of them appears in the line, but not if they are in the same line. So far I've tried grep pattern1 | grep pattern2 | ... but didn't get the result I expected. | A tool other than grep is the way to go. Using perl, for instance, the command would be: perl -ne 'print if /pattern1/ xor /pattern2/' perl -ne runs the command given over each line of stdin, which in this case prints the line if it matches /pattern1/ xor /pattern2/ , or in other words matches one pattern but not the other (exclusive or). This works for the pattern in either order, and should have better performance than multiple invocations of grep , and is less typing as well. Or, even shorter, with awk: awk 'xor(/pattern1/,/pattern2/)' or for versions of awk that don't have xor : awk '/pattern1/+/pattern2/==1' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334246/"
]
} |
497,706 | I thought user=user&password=password will be in the body of the request, but I can't find it. Where is it? Does -v show the complete request including the body? Thanks. $ curl --data "user=user&password=password" -v http://google.com/* Trying 172.217.3.110...* TCP_NODELAY set* Connected to google.com (172.217.3.110) port 80 (#0)> POST / HTTP/1.1> Host: google.com> User-Agent: curl/7.58.0> Accept: */*> Content-Length: 27> Content-Type: application/x-www-form-urlencoded> * upload completely sent off: 27 out of 27 bytes< HTTP/1.1 405 Method Not Allowed< Allow: GET, HEAD< Date: Wed, 30 Jan 2019 14:01:40 GMT< Content-Type: text/html; charset=UTF-8< Server: gws< Content-Length: 1589< X-XSS-Protection: 1; mode=block< X-Frame-Options: SAMEORIGIN< <!DOCTYPE html><html lang=en> <meta charset=utf-8> <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width"> <title>Error 405 (Method Not Allowed)!!1</title> <style> *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px} </style> <a href=//www.google.com/><span id=logo aria-label=Google></span></a> <p><b>405.</b> <ins>That’s an error.</ins> <p>The request method <code>POST</code> is inappropriate for the URL <code>/</code>. <ins>That’s all we know.</ins>* Connection #0 to host google.com left intact | The description of curl ’s -v option says -v , --verbose Makes curl verbose during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl. If you only want HTTP headers in the output, -i , --include might be the option you're looking for. If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead. So -v shows headers (in addition to the response body, which curl shows anyway), and you need --trace to see the bodies: curl --data "user=user&password=password" --trace google.log http://google.com/ will output detailed logs in google.log . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
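As a follow-up to the entry above, --trace-ascii accepts "-" as the file name, which writes the trace, including the request body, straight to stdout:

```bash
# Same POST as above, but with the full request/response trace on stdout:
curl --data "user=user&password=password" --trace-ascii - http://google.com/
```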
497,897 | By default, i3 ignores the Print Screen key available in most keyboards. How can it be activated? | Everything inside i3 needs to be bound and just a minimal set of keys is added/generated inside the default config. Some keys that are not letters can be represented with its keycodes or keysyms. More about this subject here: i3 User’s Guide - 4.3. Keyboard bindings Printscreen is the Print keysym. I personally use gnome-screenshot to that task, since it can crop images, making life easier. Add the following lines to your .config/i3/config or any config file you are using as the i3wm main config file. #interactive screenshot by pressing printscreenbindsym Print exec gnome-screenshot -i #crop-area screenshot by pressing Mod + printscreenbindsym $mod+Print exec gnome-screenshot -a Some people like to use scrot . That is up to you to decide :) . Example: bindsym Print exec scrot $HOME/Images/`date +%Y-%m-%d_%H:%M:%S`.png | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/497897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56887/"
]
} |
497,912 | I'm trying to implement code that enumerate all existing TCP connections per process (similar to netstat -lptn ).I prefer to implement it myself and not to rely on netstat .In order to do that, I'm parsing data from /proc/<PID>/net/tcp . I saw that a number of TCP connections are listed under /proc/<PID>/net/tcp but not listed by netstat -lptn command. For example I see that /proc/1/net/tcp and /proc/2/net/tcp have several TCP connections (tried on Ubuntu 16).As I understand, /proc/1/net/tcp is related to the /sbin/init process which should not have any TCP connection.The /proc/2/net/tcp is related to kthreadd which also should not have any TCP connection. | There are many misunderstandings in your approach. I'll go over them one by one. Sockets are not associated with a specific process. When a socket is created its reference count is 1. But through different methods such as dup2 , fork , and file descriptor passing it's possible to create many references to the same socket causing its reference count to increase. Some of these references can be from an open file descriptor table, which itself can be used by many threads. Those threads may belong to the same thread group (PID) or different thread groups. When you use the -p flag for netstat it will enumerate the sockets accessible to each process and try to find a process for each known socket. If there are multiple candidate processes, there is no guarantee that it shows the process you are interested in. /proc/<PID>/net/tcp does not only list sockets related to that process. It lists all TCPv4 sockets in the network namespace which that process belongs to. In the default configuration all processes on the system will belong to a single network namespace, so you'll see the same result with any PID. This also explains why a thread/process which doesn't use networking has contents in this file. Even if it doesn't use networking itself it still belongs to a network namespace in which other processes may use networking. /proc/<PID>/net/tcp contains both listening and connected sockets. When you pass -l to netstat it will show you only listening sockets. To match the output closer you'd need -a rather than -l . /proc/<PID>/net/tcp contains only TCPv4 sockets. You need to use /proc/<PID>/net/tcp6 as well to see all TCP sockets. If you are only interested in sockets in the same namespace as your own process you don't need to iterate through different PIDs. You can instead use /proc/net/tcp and /proc/net/tcp6 since /proc/net is a symlink to /proc/self/net . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/497912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18227/"
]
} |
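A minimal parsing sketch for the entry above, assuming the usual /proc/net/tcp column layout (a header line followed by one socket per line):

```bash
# Print local address, remote address and state for every TCPv4 socket in
# the current network namespace. Values are hex; the IPv4 address appears
# byte-reversed on little-endian machines, and state 0A means LISTEN.
awk 'NR > 1 { print $2, $3, $4 }' /proc/net/tcp

# Repeat with /proc/net/tcp6 to cover TCPv6 sockets as well.
awk 'NR > 1 { print $2, $3, $4 }' /proc/net/tcp6
```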
497,956 | I have about a thousand files that all look something like this: 20091208170014.nc 20091211150704.nc 20091214131328.nc 20091217111953.nc 20091220092643.nc 20091223073308.nc 20091226053932.nc 20091229034557.nc20091208171946.nc 20091211152610.nc The first eight are the date, the last 6 are consecutive numbers, but the difference between those numbers isn't the same between the files. I want the last six numbers to be consecutive and always with the same step. For example: 20091208000001.nc 20091211000002.nc 20091214000003.nc 20091217000004.nc 20091220000005.nc 20091223000006.nc 20091226000007.nc 20091229000008.nc20091208000009.nc 20091211000010.nc I checked several questions on this site using mmv and others like https://www.ostechnix.com/how-to-rename-multiple-files-at-once-in-linux/ but none of them can explain to me how to have consecutive numbers in my name. To differentiate this from the question Batch rename files to a sequential numbering , the sequential ordering must be based on the last six digits, ignoring the embedded date in the first eight characters completely.. | There are many misunderstandings in your approach. I'll go over them one by one. Sockets are not associated with a specific process. When a socket is created its reference count is 1. But through different methods such as dup2 , fork , and file descriptor passing it's possible to create many references to the same socket causing its reference count to increase. Some of these references can be from an open file descriptor table, which itself can be used by many threads. Those threads may belong to the same thread group (PID) or different thread groups. When you use the -p flag for netstat it will enumerate the sockets accessible to each process and try to find a process for each known socket. If there are multiple candidate processes, there is no guarantee that it shows the process you are interested in. /proc/<PID>/net/tcp does not only list sockets related to that process. It lists all TCPv4 sockets in the network namespace which that process belongs to. In the default configuration all processes on the system will belong to a single network namespace, so you'll see the same result with any PID. This also explains why a thread/process which doesn't use networking has contents in this file. Even if it doesn't use networking itself it still belongs to a network namespace in which other processes may use networking. /proc/<PID>/net/tcp contains both listening and connected sockets. When you pass -l to netstat it will show you only listening sockets. To match the output closer you'd need -a rather than -l . /proc/<PID>/net/tcp contains only TCPv4 sockets. You need to use /proc/<PID>/net/tcp6 as well to see all TCP sockets. If you are only interested in sockets in the same namespace as your own process you don't need to iterate through different PIDs. You can instead use /proc/net/tcp and /proc/net/tcp6 since /proc/net is a symlink to /proc/self/net . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/497956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334419/"
]
} |
497,985 | I'm trying to find the most efficient way to iterate through certain values that are a consistent number of values away from each other in a space separated list of words(I don't want to use an array). For example, list="1 ant bat 5 cat dingo 6 emu fish 9 gecko hare 15 i j" So I want to be able to just iterate through list and only access 1,5,6,9 and 15. EDIT: I should have made it clear that the values I'm trying to get from the list don't have to be different in format from the rest of the list. What makes them special is solely their position in the list(In this case, position 1,4,7...). So the list could be 1 2 3 5 9 8 6 90 84 9 3 2 15 75 55 but I'd still want the same numbers. And also, I want to be able to do it assuming I don't know the length of the list. The methods I've thought of so far are: Method 1 set $listfound=falsefind=9count=1while [ $count -lt $# ]; do if [ "${@:count:1}" -eq $find ]; then found=true break fi count=`expr $count + 3`done Method 2 set listfound=falsefind=9while [ $# ne 0 ]; do if [ $1 -eq $find ]; then found=true break fi shift 3done Method 3 I'm pretty sure piping makes this the worst option, but I was trying to find a method that doesn't use set, out of curiosity. found=falsefind=9count=1num=`echo $list | cut -d ' ' -f$count`while [ -n "$num" ]; do if [ $num -eq $find ]; then found=true break fi count=`expr $count + 3` num=`echo $list | cut -d ' ' -f$count`done So what would be most efficient, or am I missing a simpler method? | Pretty simple with awk . This will get you the value of every fourth field for input of any length: $ awk -F' ' '{for( i=1;i<=NF;i+=3) { printf( "%s%s", $i, OFS ) }; printf( "\n" ) }' <<< $list1 5 6 9 15 This works be leveraging built-in awk variables such as NF (the number of fields in the record), and doing some simple for looping to iterate along the fields to give you the ones you want without needing to know ahead of time how many there will be. Or, if you do indeed just want those specific fields as specified in your example: $ awk -F' ' '{ print $1, $4, $7, $10, $13 }' <<< $list1 5 6 9 15 As for the question about efficiency, the simplest route would be to test this or each of your other methods and use time to show how long it takes; you could also use tools like strace to see how the system calls flow. Usage of time looks like: $ time ./script.shreal 0m0.025suser 0m0.004ssys 0m0.008s You can compare that output between varying methods to see which is the most efficient in terms of time; other tools can be used for other efficiency metrics. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/497985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334513/"
]
} |
498,027 | Let's assume I have a log file that contains exceptions as shown java.lang.NullPointerException blablaABC.Exception blalabbladogchacecat.Exception yadayada I want to be able to output each line from beginning and up until (including) "Exception" desired output: java.lang.NullPointerExceptionABC.Exceptiondogchacecat.Exception How do I do this using any GNU tool (grep, awk, sed)?Thank you! | Using grep : grep -o '.*Exception' file -o, --only-matching Prints only the matching part of the lines. '.*Exception' This will match between 0 and unlimited occurrences of any character (except for line terminators) up until the word "Exception" In order to get the behavior you mentioned in the comment (pull the string before and including Exception up until any leading whitespace) you can use extended or perl regex to use the \S control character (any non-whitespace character): grep -oE '\S+Exception' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334552/"
]
} |
498,033 | Here is an example showing the ' InRelease ' and ' Release ' line suffixes: # apt updateHit:1 http://security.debian.org stretch/updates InReleaseIgn:2 http://dl.google.com/linux/chrome/deb stable InRelease Hit:3 http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.2/debian stretch InRelease Hit:4 http://mirrors.ocf.berkeley.edu/debian stretch-updates InRelease Ign:5 http://dl.google.com/linux/earth/deb stable InRelease Ign:6 http://ftp.uk.debian.org/debian stretch InRelease Hit:7 http://dl.google.com/linux/chrome/deb stable Release Hit:8 http://ftp.debian.org/debian stretch-backports InRelease Hit:9 http://dl.google.com/linux/earth/deb stable Release ... I know what a Release is. Although a web search for InRelease didn't turn up much. | InRelease files are equivalent to Release files with the exception that they contain an inline GPG signature, whereas validating Release files requires downloading a separate Release.gpg file. Having the signature in line avoids race conditions when downloading. This FTP master meeting summary contains this feature’s announcement, with a brief description. Since the title asks about them too, I’ll mention that Release files contain distribution metadata and the checksums of the index files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/498033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181039/"
]
} |
498,089 | I need to apply a patch to a file in a complicated directory and symlink scenario. No matter what I try, I can't figure out how to massage the patch arguments so that it would find the desired file. Is there a way to completely circumvent the problem and just EXPLICITLY TELL patch to which file it should apply the patch, ignoring the path/filename in the .patch file? | If the patch only contains changes to a single file, you should be able to tell patch to apply those changes to a file of your choice by specifying it before the patch name: patch myfile withthis.patch will apply withthis.patch to myfile , ignoring the file name in the patch. Quoting the man page : The names of the files to be patched are usually taken from the patch file, but if there's just one file to be patched it can be specified on the command line as originalfile . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/498089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142097/"
]
} |
498,162 | I am looking for a linux command (like ls , time or anything similar) that spawns multiple processes. Anything which is to be run from a command line and not a shell script. The reason is I want to see parent-child relationship on the htop and with different Process IDs. Thanks | The & command separator will do this for you. Use it carefully and wisely, but here is a simple way to see process relationships: $ sleep 5 & pstree -p $$[1] 13369bash(13337)─┬─pstree(13370) └─sleep(13369) The [1] 13369 shows that sleep (which has PID 13369), has been put into the background as Job #1. $$ returns to the shell the PID of itself, so we feed that into pstree to show the process tree with a root of our shell's PID, to show all child processes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334683/"
]
} |
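Building on the answer above, backgrounding a few long-running commands from one shell gives several children to inspect at once; the 60-second sleeps are arbitrary:

```bash
# Spawn three children of the current shell, then show the tree:
sleep 60 & sleep 60 & sleep 60 &
pstree -p $$
# Or locate the shell's PID in htop and toggle tree view with F5.
```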
498,169 | I'm trying to create a ~/.ssh/config entry for the following scenario: I have a linux box in a private network somewhere, a vps server which is accessible from anywhere and my local macbook . The linux box is behind firewalls but I should still be able to SSH into the box from my local macbook. My current solution which works is the following: The linux box starts a reverse tunnel to the vps via: ssh -R 15000:localhost:22 vps-user@vps and then from my local macbook I start a tunnel to the vps via: ssh -L 12345:localhost:1500 vps-user@vps and then I start another ssh command from where I can directly ssh into the linux box which would else be hidden behind firewals etc.: ssh linuxbox-user@localhost -p 12345 So first of all, this all works pretty reliably (if you have any easier way to do this, let me know - it seems pretty cumbersome on the macbook side). How would I create a ~/.ssh/config entry to, in the best case scenario, just write: ssh linuxbox and be done with it? I've tried using the LocalForward option, which allows me to atleast make an alias for the ssh -L ... command but it still requires the second ssh command. I've also tried the ProxyCommand option but to no luck, maybe just wrong configuration? | The easiest way is probably to avoid the local forwarding, that appears unnecessary in your case, and leverage the ProxyJump directive, which lets you specify one or more jump proxies (i.e. one or more intermediate host(s) you connect to and from which you reach your target host). You will need two connections: The remote forwarding you are already establishing, from your linux box to the vps: ssh -R 15000:localhost:22 vps-user@vps I'm assuming that you can connect to the vps on port 22 (as it seems implied in your question). This will let the vps forward connections it receives on port 15000 to port 22 of your linux box. A connection from your local MacBook to the vps: ssh -J vps-user@vps -p 15000 linuxbox-user@localhost -J is a shortcut to specify a ProxyJump directive (refer to the manual page for ssh(1) ). Again, it is implied that you can connect to the vps on port 22 . This command will connect your local MacBook to port 22 on the vps and, from there, to port 15000 on the same vps (where the linux box is listening), giving you a login to your linux box with no further connections needed. The corresponding .ssh/config files would be: On your linux box: Host vps RemoteForward 15000 localhost:22 User vps-user Which allows you to just type: ssh vps On your local MacBook: Host linuxbox ProxyJump vps-user@vps Hostname localhost Port 15000 User linuxbox-user Which leaves you with the need to just issue: ssh linuxbox If you are using public key authentication (as you are presumably doing, at least to allow the unattended setup of the remote forwarding), you can also add the IdentityFile directive to both your .ssh/config files to remove the need for typing passwords. If some conditions are met, you might also be able to directly connect your local MacBook to port 15000 on the vps, avoiding the need for local forwarding or a proxy jump. Namely, the conditions are: Port 15000 on the vps is not filtered by a firewall. You set GatewayPorts yes in sshd 's configuration on the vps (usually in /etc/ssh/sshd_config ). This setting defaults to no and determines if remote forwarding can bind a listening port to other addresses than the loopback one (hence, an IP address that is not 127.0.0.1 or ::1 ). Refer to the manual page sshd_config(5) for further information. 
In this scenario, you would just need these two commands: The reverse forwarding from your linux box to the vps (note the *: preceding the remote port, which stands for "listen on any address"): ssh -R *:15000:localhost:22 vps-user@vps A simple connection from your MacBook to the vps: ssh -p 15000 -l linuxbox-user vps Which translate into the following .ssh/config files: On your linux box (again, note the *: preceding the remote port): Host vps RemoteForward *:15000 localhost:22 User vps-user On your local MacBook: Host linuxbox Hostname vps Port 15000 User linuxbox-user Note, however, that this way you will expose your linux box to the internet , which most likely is something you don't want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334697/"
]
} |
498,393 | I use inotifywait for event trigger which put file. When many files are watched by inotifywait , when max_user_watches is exceeded, the following error occurs. Terminating since out of inotify watches.#012Consider increasing /proc/sys/fs/inotify/max_user_watches It is necessary to tune /proc/sys/fs/inotify/max_user_watches , but is it possible to check the current file watch number? Is there a way to check like file-nr in file descriptor? | I cobbled together this little script based on @mosvy's answer. Since the initial conception, it has since seen quite a few improvements (stability on older systems, total count, speed). On most normal machines, running it should take a less than 100 ms. INOTIFY WATCH COUNT PID USER COMMAND-------------------------------------- 3044 3933 myuser node /usr/local/bin/tsserver 2965 3941 myuser /usr/local/bin/node /home/myuser/.config/coc/extensions/node_modules/coc-tsserver/bin/tsserverForkStart /hom 979 3954 myuser /usr/local/bin/node /home/myuser/.config/coc/extensions/node_modules/coc-tsserver/node_modules/typescript/li 1 3899 myuser /usr/local/bin/node --no-warnings /home/myuser/dev/dotfiles/common-setup/vim/dotvim/plugged/coc.nvim/build/i 6989 WATCHES TOTAL COUNT 2023 Update: use a native version Michael Sartain has recreated this functionality (and added several improvements) as a native (C++) binary ( inotify-info ), so if you can spare a few seconds to do the build step, his project essentially makes my script redundant, as it is better in every way. Superfast! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/498393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
498,396 | When I open bash and press the up arrow I see the last command I have typed. Continues presses of the up arrow will show a series of commands typed on the past. How do I find one specific command from that list, instead of having to press the up arrow n times? | Press CTRL+R and start typing. Press CTRL+R again to get the next found. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
498,495 | I'm trying to understand how inode numbers (as displayed by ls -i ) work with ext4 partitions. I'm trying to understand whether they are a construct of the linux kernel and mapped to inodes on disk, or if they actually are the same numbers stored on disk. Questions: Do inode numbers change when a computer is rebooted? When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions. Can inode numbers be recycled without rebooting or re-mounting partitions? Why I'm asking... I want to create a secondary index on a USB hard drive with 1.5TB of data and around 20 million files (filenames). Files range from 10s of bytes to 100s of GB. Many of them are hard linked multiple times, so a single file (blob on disk) might have anything up to 200 file names. My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links. Now as a single exercise, I think I can create a database of every file on disk, it's shasum, permissions etc... Once built, detecting duplication should be trivial. Bit I need to be certain I am using the right unique key. Filenames are inappropriate due to the large number of existing hard links. My hope is that I can use inode numbers. What I would like to understand is whether or not the inode number us going to change when I next reboot my machine. Or if they are even more volatile (will they change while I'm building my database?) All the documentation I read fudges the distinction between inode numbers as presented by the kernel and inodes on disk. Whether or not these are the same thing is unclear based on the articles I've already read. | I'm trying to understand how inode numbers (as displayed by ls -i) work with ext4 partitions. Essentially, inode is a reference for a filesystem(!), a bridge between actual data on disk (the bits and bytes) and name associated with that data ( /etc/passwd for instance). Filenames are organized into directories, where directory entry is filename with corresponding inode. Inode then contains the actual information - permissions, which blocks are occupied on disk, owner, group, etc. In How are directory structures stored in UNIX filesystem , there is a very nice diagram, that explains relation between files and inodes a bit better: And when you have a file in another directory pointing to the same inode number, you have what is known as hard link. Now, notice I've emphasized that inode is reference specific to filesystem, and here's the reason to be mindful of that: The inode number of any given file is unique to the filesystem, but not necessarily unique to all filesystems mounted on a given host. When you have multiple filesystems, you will see duplicate inode numbers between filesystems, this is normal. This is in contrast to devices . You may have multiple filesystems on the same device, such as /var filesystem and / , and yet they're on the same drive. Now, can inode number change? Sort of. Filesystem is responsible for managing inodes, so unless there's underlying issues with filesystem, inode number shouldn't change. In certain tricky cases, such as vim text editor , renames the old file, then writes a new file with the original name, if it thinks it can re-create the original file's attributes. If you want to reuse the existing inode (and so risk losing data, or waste more time making a backup copy), add set backupcopy yes to your .vimrc. 
The key point to remember is that where data might be the same to the user, under the hood it actually is written to new location on disk, hence the change in inode number. So, to make things short: Do inode numbers change when a computer is rebooted? Not unless there's something wrong with filesystem after reboot 2.When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions. Yes, since two different partitions will have different filesystems. I don't know a lot about LVM , but under that type of storage management two physical volumes could be combined into single logical volume, which would in my theoretical guess be the case where ls - would produce one inode per file Can inode numbers be recycled without rebooting or re-mounting partitions? The filesystem does that when a file is removed( that is , when all links to file are removed, and there's nothing pointing to that inode). My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links. Well, detecting duplication can be done via md5sum or other checksum command. In such case you're examining the actual data, which may or may not live under different inodes on disk. One example is from heemayls answer : find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
498,509 | I have an array of paths to files which each have several lines of text. I'd like to produce an array that is populated with the first line of each file processed like so: # this.txt first line is [Test this]# another.txt first line is [Test another]paths=( ./this/path/this.txt ./another/path/another.txt)for i in ${paths[@]}; do read -r line < $i lines+=$linedone At most I've only gotten one value in my array. I can't seem to get the array I'm looking for out of the for loop. I've tried many variations and having a hard time figuring out where I'm going wrong. | You wanted lines+=("$line") +=WORD is string concatenation (or addition). A compound assignment +=(...) appends the values to the array. You probably also want to quote all your variable expansions here - the line definitely needs it if the line might contain whitespace, but you could have issues elsewhere as well. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/498509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79046/"
]
} |
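Putting the answer above back into the original loop, a corrected version (the paths are the ones from the question) could look like this:

```bash
paths=(
    ./this/path/this.txt
    ./another/path/another.txt
)
lines=()
for i in "${paths[@]}"; do
    IFS= read -r line < "$i"   # first line of each file
    lines+=("$line")           # compound assignment appends to the array
done
printf '%s\n' "${lines[@]}"
```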
498,590 | I have a large files (800.000 records) and I want to split this into different files of 20.000 records each. This one I can do, but my next problem is that I want to know if it's possible to automatically generate the new files? Example: file1 contains 800.000 records. First I get 20000 records out of it and move to another file, and then I remove the \r characters. sed -n '1,20000p;20001q' file1 > file1_1sed -e 's/\r//g' file1_1 > file1 Is it possible to do something in a loop? or do I have to write this 40 times? The number of records is variable, today it contains 800.000 records, but tomorrow it can contain 789.123 of 812.321 records. Do I have to give an 'end number' with the sed-command? Thank you all for your answers!! | Romeo Ninov already gave you The Right Answer™ : use split. But to answer the general case about sed , you could do the same thing with: i=1;filelen=$(wc -l < file1)while [[ $i -le $filelen ]]; do sed -n "s/\r//;$i,$((i+19999))p;$(($i+20000))q;" file1 > file1.$i; ((i+=20000)); done That saves each set of 20000 lines in a new file. If you really want to do what your question shows and only keep the 1st 20000 lines, it is much simpler: sed -i 's/\r//; 200001q' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264389/"
]
} |
498,699 | I want to prompt the user to input a URL, but it can only contain A-Z , a-z , 0-9 , & , . , / , = , _ , - , : , and ? . So, for example: Enter URL:$ http://youtube.com/watch?v=1234df_AQ-xThat URL is allowed.Enter URL:$ https://unix.stackexchange.com/$FAKEurl%123That URL is NOT allowed. This is what I've come up with so far, but it doesn't seem to work correctly: if [[ ! "${URL}" == *[abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890\-\_\/\&\?\:\.\=]* ]]; then echo "That URL is NOT allowed."else echo "That URL is allowed."fi Please note that the URLs I provided in the example are just examples. This script needs to work with all possible user input; it just can't contain characters other than the ones I specified earlier. Using bash 3.2.57(1)-release under macOS High Sierra 10.13.6. | You were close. You want to check whether the URL contains at least one of the disallowed characters (and then report it as invalid), not at least one of the allowed character. You can negate a character set in a bracket expression with ! ( ^ also works with bash and a few other shells). In any case, you were right to explicitly list characters individually, using ranges like a-z , A-Z , 0-9 would only work (at matching only the 26+26+10 characters you listed) in the C locale . In other locales they could match thousands of other characters or even collating elements made of more than one character (the ones that sort between A and Z like É for instance for A-Z ). case $URL in ("" | *[!abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890_/\&?:.=-]*) echo >&2 "That URL is NOT allowed.";; (*) echo "That URL is allowed.";;esac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335196/"
]
} |
498,857 | I work in a relatively heterogeneous environment where I may be running different versions of Bash on different HPC nodes, VMs, or my personal workstation. Because I put my login scripts in a Git repo, I would like use the same(ish) .bashrc across the board, without a lot of "if this host, then..."-type messiness. I like the default behavior of Bash ≤ 4.1 that expands cd $SOMEPATH into cd /the/actual/path when pressing the Tab key. In Bash 4.2 and above, you would need to shopt -s direxpand to re-enable this behavior, and that didn't become available until 4.2.29 . This is just one example, though; another, possibly related shopt option, complete_fullquote (though I don't know exactly what it does) may have also changed default behavior at v4.2. However, direxpand is not recognized by earlier versions of Bash, and if I try to shopt -s direxpand in my .bashrc , that results in an error message being printed to the console every time I log in to a node with an older Bash: -bash: shopt: direxpand: invalid shell option name What I'd like to do is wrap a conditional around shop -s direxpand to enable that option on Bash > 4.1 in a robust way, without chafing the older versions of Bash ( i.e. , not just redirecting the error output to /dev/null ). | Check if direxpand is present in the output of shopt and enable it if it is: shopt | grep -q '^direxpand\b' && shopt -s direxpand | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/498857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278323/"
]
} |
498,953 | The shell script I'm trying to use keeps giving this error: $ ./script.sh: line 2: [: missing `]' grep: ]: No such file or directory The line is part of a section trying to check if a particular process is going to have a file locked: COUNTER=0while [ ps aux | grep "[r]elayevent.sh" ] && [ "$COUNTER" -lt 10 ]; do sleep 3 let COUNTER+=1done Obviously I've checked that the brackets all pair up correctly - which looks fine to me. Also the common white space around the condition issue doesn't apply. What am I missing here? | The error is that you should remove first [ because of you want to check the exit status then use command directly. The Wiki pages of the Shellcheck tool have an explanation for this ( issue SC1014 ): [ .. ] is not part of shell syntax like if statements. It is not equivalent to parentheses in C-like languages, if (foo) { bar; } , and should not be wrapped around commands to test. [ is just regular command, like whoami or grep , but with a funny name (see ls -l /bin/[ ). It's a shorthand for test . If you want to check the exit status of a certain command, use that command directly. If you want to check the output of a command, use "$(..)" to get its output, and then use test or [ / [[ to do a string comparison: Also use ps aux | grep -q "[r]elayevent.sh" so that you will get the exit status silently instead of printing anything to stdout . Or you can use pgrep and direct it's output to /dev/null . Use second condition first because it will be more efficient for the last case. So final script will be like: #!/bin/bashCOUNTER=0while [ "$COUNTER" -lt 10 ] && ps aux | grep -q "[r]elayevent.sh" ; do sleep 3 let COUNTER+=1done Or #!/bin/bashCOUNTER=0while [ "$COUNTER" -lt 10 ] && pgrep "[r]elayevent.sh" >/dev/null ; do sleep 3 let COUNTER+=1done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335423/"
]
} |
498,960 | How to skip the first line to print columns in some format and after some arithmetic operation below? Is it possible with next? Input #filename4e+06 5e+06 6e+06 5e+06 5e+06 6e+06 Code: BEGIN { CONVFMT="%0.17f" }function t(n, s) {s=index(n,".")return (s ? substr(n,1,s+2) : n)}FR>1 {print t($1-1000),t($2)} | The error is that you should remove first [ because of you want to check the exit status then use command directly. The Wiki pages of the Shellcheck tool have an explanation for this ( issue SC1014 ): [ .. ] is not part of shell syntax like if statements. It is not equivalent to parentheses in C-like languages, if (foo) { bar; } , and should not be wrapped around commands to test. [ is just regular command, like whoami or grep , but with a funny name (see ls -l /bin/[ ). It's a shorthand for test . If you want to check the exit status of a certain command, use that command directly. If you want to check the output of a command, use "$(..)" to get its output, and then use test or [ / [[ to do a string comparison: Also use ps aux | grep -q "[r]elayevent.sh" so that you will get the exit status silently instead of printing anything to stdout . Or you can use pgrep and direct it's output to /dev/null . Use second condition first because it will be more efficient for the last case. So final script will be like: #!/bin/bashCOUNTER=0while [ "$COUNTER" -lt 10 ] && ps aux | grep -q "[r]elayevent.sh" ; do sleep 3 let COUNTER+=1done Or #!/bin/bashCOUNTER=0while [ "$COUNTER" -lt 10 ] && pgrep "[r]elayevent.sh" >/dev/null ; do sleep 3 let COUNTER+=1done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/498960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335430/"
]
} |
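To address the awk question directly: the usual idiom for skipping the first line is NR>1 (FR>1 in the question looks like a typo for NR>1), or an explicit next on the first record. A minimal sketch, assuming the input layout shown in the question (input.txt is a placeholder for the real data file):

awk 'BEGIN { CONVFMT="%0.17f" }
     NR == 1 { next }                   # skip the "#filename" header line
     { print t($1 - 1000), t($2) }
     function t(n, s) { s = index(n, "."); return (s ? substr(n, 1, s + 2) : n) }' input.txt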
499,022 | I would like to load a custom kernel module upon startup on my system (Debian 9). The vermagic string of this module does not exactly match my kernel version, but I can load it using modprobe -f module_name or insmod -f /path/to/module and it seems to work fine. If I just add the name of the module to /etc/modules-load.d/modules.conf it does not work, systemctl shows that systemd-modules-load.service gets an error upon trying to load the module. Can I tell systemd to force load the module? | You should be able to override the install behaviour using a configuration file in /etc/modprobe.d , for example /etc/modprobe.d/module_name.conf : install module_name /sbin/modprobe -i -f module_name This instructs the module loading code to run /sbin/modprobe -i -f module_name when a request is made to install module_name . -i tells modprobe to ignore install directives when processing the command (otherwise we’d end up with a loop). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335490/"
]
} |
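To sanity-check the override before relying on it at boot (module_name is a placeholder for the real module name):

modprobe -c | grep module_name      # the install line from /etc/modprobe.d should show up here
sudo modprobe module_name           # should now force-load the module via the install rule
lsmod | grep module_name            # confirm it is loaded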
499,081 | I like to use the following format in scripts for commands with a lot of parameters (for readability): docker run \ --rm \ -u root \ -p 8080:8080 \ -v jenkins-data:/var/jenkins_home \ -v /var/run/docker.sock:/var/run/docker.sock \ -v "$HOME":/home \ jenkinsci/blueocean But, sometimes I'd like to comment one of these parameters out like: # -p 8080:8080 This doesn't work, as the EOL is interpreted as return and the command fails. Tried this too: \ # -p 8080:8080 which also didn't work. Question: Is there a way to comment out the parameter, so it's still on its own line, but I'd be able to execute the script? | You could substitute an empty command substitution: docker run \ --rm \ -u root \ $(: -p 8080:8080 ) \ -v jenkins-data:/var/jenkins_home \ -v /var/run/docker.sock:/var/run/docker.sock \ -v "$HOME":/home \ jenkinsci/blueocean | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/499081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23875/"
]
} |
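If editing the script layout is acceptable, another common pattern is to collect the options in a bash array, where any line can be commented out with a plain # (a sketch, not part of the answer above):

args=(
  --rm
  -u root
  # -p 8080:8080
  -v jenkins-data:/var/jenkins_home
  -v /var/run/docker.sock:/var/run/docker.sock
  -v "$HOME":/home
)
docker run "${args[@]}" jenkinsci/blueocean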
499,119 | I have following files, all of which contain minified JavaScipt code. Each File ends in a comment: Folder structure |--static |--js |--1.1001bbaf.chunk.js |--runtime~main.229c360f.js |--main.57f2973b.chunk.js 1.1001bbaf.chunk.js (window.webpackJsonp=window.webpackJsonp||[]).push .....//# sourceMappingURL=1.1001bbaf.chunk.js.map runtime~main.229c360f.js !function(e){function r(r){for .....//# sourceMappingURL=runtime~main.229c360f.js.map main.57f2973b.chunk.js (window.webpackJsonp=window.webpackJsonp||[]).push .....//# sourceMappingURL=main.57f2973b.chunk.js.map My requirement is to flush the contents of all the files in a single file main.js , such that the content is appended and not overwritten. I tried the following solution: cat static/js/*.js >> main.js Works well, but it appends the content of second file at the end of first, that ends in a comment. Something like this: //#sourceMappingURL=1.1001bbaf.chunk.js.map(window.webpackJsonp=window.webpackJsonp||[]).push ..... Now the entire line is a comment in vim text editor and everything beginning from this line is a comment in Atom text editor. The default behaviour of my Vim text editor is that if I press o in command mode and if the last line was comment //... , the new line begins with // . I don't know exactly how to deal with the files ending in commented lines. I can think of things like: appended contents of next file beginning on a new line or to delete the last line comment. Restriction is to use bash only. | The last line of your file doesn't seem to end with a newline. That's why the last line of one file gets combined with the first line of the next file. You could try to append a newline after every file with this for file in static/js/*.jsdo cat "$file" echodone >> main.js The script doesn't check if a file ends with a newline or not. If a file ends with a newline it will be followed by an empty line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211144/"
]
} |
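If the unconditional echo leaves unwanted blank lines, a variant that only appends a newline when a file does not already end in one (a sketch):

for file in static/js/*.js; do
  cat "$file"
  [ -n "$(tail -c 1 "$file")" ] && echo   # tail -c 1 expands to empty only if the file ends in a newline
done >> main.js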
499,144 | I have the following code: debug=$?function a { su - javi -c "uptime" return $debug}function b { su - javi -c "cat /etc/redhat-release" return $debug}function c { su - javi -c "cat /etc/redhat-release" return $debug}case $debug in0) a if [ $debug == 0 ]; then b echo "se ejcuta la funcion" elif [ $debug == 0 ] c elif[].... <-----this fi;;1) echo "se ha producido un error";;esac Is there any way to debug by removing the if ??I want them to go running a function if it ends well that jumps to the other function and if it does not end well that it leaves the escript, that with 5 functions | The last line of your file doesn't seem to end with a newline. That's why the last line of one file gets combined with the first line of the next file. You could try to append a newline after every file with this for file in static/js/*.jsdo cat "$file" echodone >> main.js The script doesn't check if a file ends with a newline or not. If a file ends with a newline it will be followed by an empty line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322748/"
]
} |
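For the flow the question seems to describe (run a, then b, then c, and leave the script as soon as one of them fails), a simpler pattern than nested if/elif is to test each call's own exit status. A sketch; it assumes each function ends with return $? after its su command rather than returning a value captured earlier:

a || { echo "se ha producido un error"; exit 1; }
b || { echo "se ha producido un error"; exit 1; }
c || { echo "se ha producido un error"; exit 1; }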
499,190 | Manpage of ss says: FILTER := [ state TCP-STATE ] [ EXPRESSION ] Please take a look at the official documentation (Debian package iproute-doc) for details regarding filters. What does that mean? I can't find anything under /usr/share/doc/iproute2-doc/ . $ ls /usr/share/doc/iproute2-doc/ss.htmlls: cannot access '/usr/share/doc/iproute2-doc/ss.html': No such file or directory$ ls /usr/share/doc/iproute2-doc/actions changelog.Debian.gz copyright examples README README.decnet README.devel README.distribution.gz README.iproute2+tc README.lnstat Is the document also online somewhere for browsing? Thanks. | The documentation is available in the Debian 9 package but was removed in later releases because it was outdated. The manpage is supposed to be the complete documentation now. (But it doesn’t have much to say about the details of filters.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
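For what it's worth, the state/expression filters the manpage hints at look like this in practice (illustrative examples, not taken from the removed documentation):

ss -t state established '( dport = :443 or sport = :443 )'
ss -t state listening
ss -nt dst 192.168.1.0/24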
499,203 | I have a vpn service unit for which I can view the logs with... journalctl -u vpn I also have a script that interacts with the vpn manually and is logged to journal with... exec > >(systemd-cat -t vpn.sh) 2>&1 and I can view the logs with... journalctl -t vpn.sh I tried viewing both logs with... journalctl -u vpn -t vpn.sh but it didn't work. Is there a way to view both logs at the same time? Or is it possible to set the identifier ( -t vpn.sh ) in the vpn service unit file to match the identifier of my script ( vpn.sh ). | TL;DR: This will work: $ journalctl _SYSTEMD_UNIT=vpn.service + SYSLOG_IDENTIFIER=vpn.sh You can use + to connect two sets of connections and look for journal log lines that match either expression. (This is documented in the man page of journalctl.) In order to do that, you need to refer to them by their proper field names (the flags -u and -t are shortcuts for those.) You can look at systemd.journal-fields(5) for the documentation of the field names. (That page will also explain why one of them has a leading underscore and the other one doesn't.) For _SYSTEMD_UNIT you will need an exact match, including the .service suffix (the -u shortcut is smart and will find the exact unit name when translating it to a query by field.) Putting it all together, you'll get the command above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168989/"
]
} |
499,229 | I have a directory that contains, among other files, 3 named pipes: FIFO, FIFO1, and FIFO11. If I try something like grep mypattern * in this directory, grep hangs forever on the named pipes, so I need to exclude them. Unexpectedly, grep --exclude='FIF*' mypattern * does not solve the problem; grep still hangs forever. However, grep -r --exclude='FIF*' mypattern . does solve the hanging problem (albeit with the undesired side effect of searching all the subdirectories). I did some testing that shows that grep --exclude ='FIF*' mypattern * works as expected if FIFO etc. are regular files, not named pipes. Questions: Why does grep skip --excludes in both cases if they're regular files, and skips --excluded named pipes in the recursive case, but doesn't skip named pipes in the non-recursive case? Is there another way to format the exclusion that will skip these files in all cases? is there a better way to accomplish what I'm after? (EDIT: I just discovered the --devices=skip flag in grep, so that's the answer to this part ... but I'm still curious about the first two parts of the question) | It seems grep still opens files even if the regex tells it to skip them: $ lltotal 4.0Kp-w--w---- 1 user user 0 Feb 7 16:44 pip-fifo--w--w---- 1 user user 4 Feb 7 16:44 pip-filelrwxrwxrwx 1 user user 4 Feb 7 16:44 pip-link -> file (Note: none of these have read permissions.) $ strace -e openat grep foo --exclude='pip*' pip-file pip-link pip-fifoopenat(AT_FDCWD, "pip-file", O_RDONLY|O_NOCTTY) = -1 EACCES (Permission denied)grep: pip-file: Permission deniedopenat(AT_FDCWD, "pip-link", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)grep: pip-link: No such file or directoryopenat(AT_FDCWD, "pip-fifo", O_RDONLY|O_NOCTTY) = -1 EACCES (Permission denied)grep: pip-fifo: Permission denied+++ exited with 2 +++ Granting read permissions, it appears that it doesn't try to read them after opening if they are excluded: $ strace -e openat grep foo --exclude='pip*' pip-file pip-link pip-fifoopenat(AT_FDCWD, "pip-file", O_RDONLY|O_NOCTTY) = 3openat(AT_FDCWD, "pip-link", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)grep: pip-link: No such file or directoryopenat(AT_FDCWD, "pip-fifo", O_RDONLY|O_NOCTTY^Cstrace: Process 31058 detached <detached ...>$ strace -e openat,read grep foo --exclude='pip*' pip-fileread(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\25\0\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\r\0\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260`\0\0\0\0\0\0"..., 832) = 832openat(AT_FDCWD, "pip-file", O_RDONLY|O_NOCTTY) = 3+++ exited with 1 +++$ strace -e openat,read grep foo --exclude='pipe*' pip-fileread(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\25\0\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\r\0\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260`\0\0\0\0\0\0"..., 832) = 832openat(AT_FDCWD, "pip-file", O_RDONLY|O_NOCTTY) = 3read(3, "foo\n", 32768) = 4fooread(3, "", 32768) = 0+++ exited with 0 +++ And since openat wasn't called with O_NONBLOCK , the opening itself hangs, and grep doesn't reach the part where it excludes it from reading. 
Looking at the source code, I believe the flow is like this: If not recursive, call grep_command_line_arg on each file. That calls grepfile if not on stdin. grepfile calls grepdesc after opening the file. grepdesc checks for excluding the file . When recursive: grepdirent checks for excluding the file before calling grepfile , so the failing openat never happens. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335631/"
]
} |
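Two practical workarounds that follow from the above, for the non-recursive case (sketches):

grep -D skip mypattern *                                        # -D is the short form of --devices=skip; it also covers FIFOs and sockets
find . -maxdepth 1 -type f -exec grep mypattern /dev/null {} +  # hand grep regular files only

The /dev/null argument is only there to force grep to print file names even when find passes a single file.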
499,333 | I have Linux Mint 19 Tara and today I installed new updates of packages which you can see below in the picture. After that during sudo usage sudo shows * instead of nothing. Sudo works but this is strange, after I hit enter, stars will disappear. $ sudo echo[sudo] password for matej: ********* Another strange thing is with console, at first open I can write my username but password gets confirmed without my interference. So I can't login to console. | After @JeffSchaller directed me to this password feedback I found that in /etc/sudoers.d is new file named 0pwfeedback with content Defaults pwfeedback . After removing this file, problem with stars in sudo was solved. Second problem with login to console is known: ubuntu bugs but I am still trying figure out how to solve it. Edit 6.3.2019: It looks like kernel 4.15.0-46 solved problem with console login. I can now normally login into console and password input is not automatically entered. $ uname -r4.15.0-46-generic$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303077/"
]
} |
499,374 | when logging into my VPS with ssh keys, I get this: Command '' not found, but can be installed with:sudo apt install libpam-mount ... sudo apt install nmh virtualenvwrapper.sh: There was a problem running the initialization hooks.If Python could not import the module virtualenvwrapper.hook_loader,check that virtualenvwrapper has been installed forVIRTUALENVWRAPPER_PYTHON= and that PATH isset properly. Here's my .bashrc variables: export WORKON_HOME=~/Envsource /usr/local/bin/virtualenvwrapper.shexport VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 What I've tried Sourcing .bashrc , ~/.profile and /usr/local/bin/virtualenvwrapper.sh (no errors) Upgrading virtualenvwrapper with pip3 --upgrade (latest) Also, my virtualenv's work perfectly. | In lines 47-51 of the virtualenvwrapper.sh script, it first checks to see if the environment variable VIRTUALENVWRAPPER_PYTHON is set, and if not, it sets it in line 50 : VIRTUALENVWRAPPER_PYTHON="$(command \which python)" The problem is that newer versions of Ubuntu (18.04+) no longer come with the binary python installed, only python3 . Change python to python3 in line 50 of the script and you're all set ;) Otherwise, in .bashrc , you need to first set VIRTUALENVWRAPPER_PYTHON and then source the script: VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3source /usr/local/bin/virtualenvwrapper.sh | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335788/"
]
} |
499,375 | POSIX and GNU have their syntax styles for options. For all the commands that I have seen, they accept option-like inputs as command line arguments. Is it uncommon for a program to accept option-like inputs from stdin (and therefore to use getopt to parse option-like stdin inputs) ? Something like: $ ls -l -rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png-rw-rw-r-- 1 t t 69401 Feb 6 14:35 proxy.png$ myls> -l -rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png-rw-rw-r-- 1 t t 69401 Feb 6 14:35 proxy.png> -l fetch.png-rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png Why is it uncommon for stdin inputs to have option-like inputs, while common for command line arguments? From expressive power point of view (similar to regular languages vs context free languages), are nonoption like inputs and option like inputs equivalent? Let's not emphasize on shell expansions, because we never expect (or need) something like shell expansion happen on stdin input (nonoption like). Thanks. Similar questions for nonoption like inputs: https://stackoverflow.com/questions/54584124/how-to-parse-non-option-command-line-arguments-and-stdin-inputs (The post is removed due to no response and downvotes. but still accessible if you have sufficient reputations) | tl;dr This is very similar to wanting to use ls in scripts ( 1 , 2 ), quoting only when necessary , or crossing the streams (which is almost literally what this does, by using stdin for two completely orthogonal things). It is a bad idea. There are several problems with such an approach: Your tool will have to handle all of the expansions which a shell already handles for you: brace expansion tilde expansion parameter and variable expansion command substitution arithmetic expansion word splitting filename expansion, including globbing as @StephenHarris points out If your tool does not handle any expansions ( as you suggest , contrary to the example you've given which clearly has to do word splitting to not treat -l fetch.png as a single parameter) you will have to explain to developers at very great length why none of -l "$path" , -l ~/Downloads and -l 'my files' do what they expect. When your tool handles expansions differently from shells (which it will do, because shells handle expansions differently and nobody can afford to detect which shell you're running in and supporting all of them forever), you've just added a new syntax which needs to be learned, remembered and used by everyone using your tool. Standard input is no longer just the tool input. Based on The Unix Philosophy , it can no longer trivially work together with other tools, because those other tools will have to do special things to pass input to your tool. There is already a convention used by every major tool. Breaking convention in such a fundamental way would be a really bad idea in terms of usability. Mixing options and input cannot be done safely for arbitrary input, unless you go to the considerable extra trouble of encoding or escaping one of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
499,409 | How can I adjust fan speed according to hard drive temperature via Fancontrol? | I finally found a simple script to control fan speed according to hard drive temperature via Fancontrol , Hddtemp , and Lm-sensors . In the following script, “ /dev/sda ” is the hard disk to be monitored, and “ /Fancontrol/Hddtemp ” is the output file to be read by Fancontrol. Press Ctrl + Alt + T to open Terminal and run the following command to check whether “ /dev/sda ” is the correct one: sudo hddtemp /dev/sd[a-z] Use only the one supported by Hddtemp, which will display the temperature rather than “S.M.A.R.T. not available”. Replace “ /dev/sda ” with the correct one in the script if necessary. If you have not yet configured Fancontrol, see this page , this page , and this page and run the following commands one by one (restart Linux after running the first one): sudo sensors-detect watch sensors sudo pwmconfig sudo service fancontrol start Then, go through the procedure below: (1) Run the following command to create a script file. sudo mkdir -p "/Fancontrol/" & sudo xed /Fancontrol/HDD_temp (2) Copy the following script into the file and save it. #!/bin/bashFile=/Fancontrol/Hddtempwhile truedo temperature=$(sudo hddtemp -n /dev/sda)echo $(($temperature * 1000)) > "$File"sleep 30 done (3) Run the following command to make it executable. sudo chmod +x /Fancontrol/HDD_temp (4) Run the following command to create a service file. sudo xed /lib/systemd/system/HDD_temp.service (5) Copy the following lines into the file and save it. [Service] ExecStart=/Fancontrol/HDD_temp [Install] WantedBy=multi-user.target (6) Run the following commands one by one: sudo chmod 664 /lib/systemd/system/HDD_temp.service sudo systemctl daemon-reload sudo systemctl start HDD_temp.service sudo systemctl enable HDD_temp.service Then, the script “ HDD_temp ” will be run as a system service at Linux startup. (7) Run the following command to edit “ fancontrol ”, the configuration file. sudo xed /etc/fancontrol Find the line that begins with “ FCTEMPS ”. For example: FCTEMPS=hwmon1/pwm1=hwmon1/temp1_input On that line, “ hwmon1/temp1_input ” is the temperature (e.g. the chipset temperature) currently read by Fancontrol. Replace it with “ /Fancontrol/Hddtemp ”, and the line will become: FCTEMPS=hwmon1/pwm1=/Fancontrol/Hddtemp Save the file and run the following command to restart Fancontrol. sudo service fancontrol restart Then, the fan controlled by “ hwmon1/pwm1 ” will respond to “ /Fancontrol/Hddtemp ”, the hard disk temperature. Note that "HDD_temp" and "Hddtemp" are the script file and output file respectively. Don't confuse them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/332937/"
]
} |
499,415 | I have a bash script which contains awscli as well. I am trying to print a variable which is created in a for loop. The variable that I am trying to print contains $ sign because of for loop. I couldn't print the value. Below I am sharing the script. The output of this script is only numbers which is generated in the for loop. I want to print the value which is generated in the command. #!/bin/bashdeclare -i counter=11declare -i counter2=14for i in {1..2}do declare v1$i=$(aws iam get-group --group-name VideoEditors | awk -v counter1=$counter 'NR==counter1' | awk -F\" '{print $4}') counter=$counter+7 declare v2$i=$(aws iam get-group --group-name VideoEditors | awk -v counter3=$counter2 'NR==counter3' | awk -F\" '{print $4}') counter2=$counter2+7 echo $v1$i echo $v2$idone | I finally found a simple script to control fan speed according to hard drive temperature via Fancontrol , Hddtemp , and Lm-sensors . In the following script, “ /dev/sda ” is the hard disk to be monitored, and “ /Fancontrol/Hddtemp ” is the output file to be read by Fancontrol. Press Ctrl + Alt + T to open Terminal and run the following command to check whether “ /dev/sda ” is the correct one: sudo hddtemp /dev/sd[a-z] Use only the one supported by Hddtemp, which will display the temperature rather than “S.M.A.R.T. not available”. Replace “ /dev/sda ” with the correct one in the script if necessary. If you have not yet configured Fancontrol, see this page , this page , and this page and run the following commands one by one (restart Linux after running the first one): sudo sensors-detect watch sensors sudo pwmconfig sudo service fancontrol start Then, go through the procedure below: (1) Run the following command to create a script file. sudo mkdir -p "/Fancontrol/" & sudo xed /Fancontrol/HDD_temp (2) Copy the following script into the file and save it. #!/bin/bashFile=/Fancontrol/Hddtempwhile truedo temperature=$(sudo hddtemp -n /dev/sda)echo $(($temperature * 1000)) > "$File"sleep 30 done (3) Run the following command to make it executable. sudo chmod +x /Fancontrol/HDD_temp (4) Run the following command to create a service file. sudo xed /lib/systemd/system/HDD_temp.service (5) Copy the following lines into the file and save it. [Service] ExecStart=/Fancontrol/HDD_temp [Install] WantedBy=multi-user.target (6) Run the following commands one by one: sudo chmod 664 /lib/systemd/system/HDD_temp.service sudo systemctl daemon-reload sudo systemctl start HDD_temp.service sudo systemctl enable HDD_temp.service Then, the script “ HDD_temp ” will be run as a system service at Linux startup. (7) Run the following command to edit “ fancontrol ”, the configuration file. sudo xed /etc/fancontrol Find the line that begins with “ FCTEMPS ”. For example: FCTEMPS=hwmon1/pwm1=hwmon1/temp1_input On that line, “ hwmon1/temp1_input ” is the temperature (e.g. the chipset temperature) currently read by Fancontrol. Replace it with “ /Fancontrol/Hddtemp ”, and the line will become: FCTEMPS=hwmon1/pwm1=/Fancontrol/Hddtemp Save the file and run the following command to restart Fancontrol. sudo service fancontrol restart Then, the fan controlled by “ hwmon1/pwm1 ” will respond to “ /Fancontrol/Hddtemp ”, the hard disk temperature. Note that "HDD_temp" and "Hddtemp" are the script file and output file respectively. Don't confuse them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335828/"
]
} |
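Separately from the answer above (which does not touch the printing problem), note that echo $v1$i expands $v1 and $i as two independent variables, so it cannot print a variable literally named v1<i>. Bash needs indirect expansion, or better, an array; a minimal sketch reusing the question's own pipeline:

var="v1$i"
echo "${!var}"        # indirect expansion: prints the value of the dynamically named variable

# or avoid dynamic names entirely:
v1[$i]=$(aws iam get-group --group-name VideoEditors | awk -v line="$counter" 'NR==line' | awk -F\" '{print $4}')
echo "${v1[$i]}"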
499,485 | I have a Debian (Buster) laptop with 8 GB RAM and 16GB swap. I'm running a very long running task. This means my laptop has been left on for the past six days while it churns through. While doing this I periodically need to use my laptop as a laptop. This shouldn't be a problem; the long running task is I/O bound, working through stuff on a USB hard disk and doesn't take much RAM (<200 MB) or CPU (<4%). The problem is when I come back to my laptop after a few hours, it will be very sluggish and can take 30 minutes to come back to normal. This is so bad that crash-monitors flag their respective applications as having frozen (especially browser windows) and things start incorrectly crashing out. Looking on the system monitor, of the 2.5 GB used around half gets shifted into swap. I've confirmed this is the problem by removing the swap space ( swapoff /dev/sda8 ). If I leave it without swap space it comes back to life almost instantly even after 24 hours. With swap, it's practically a brick for the first five minutes having been left for only six hours. I've confirmed that memory usage never exceeds 3 GB even while I'm away. I have tried reducing the swappiness ( see also: Wikipedia ) to values of 10 and 0 , but the problem still persists. It seems that after a day of inactivity the kernel believes the entire GUI is no longer needed and wipes it from RAM (swaps it to disk). The long running task is reading through a vast file tree and reading every file. So it might be the kernel is confused into thinking that caching would help. But on a single sweep of a 2 TB USB HD with ~1 billion file names, an extra GB RAM isn't going to help performance much. This is a cheap laptop with a sluggish hard drive. It simply can't load data back into RAM fast enough. How can I tell Linux to only use swap space in an emergency? I don't want to run without swap. If something unexpected happens, and the OS suddenly needs an extra few GBs then I don't want tasks to get killed and would prefer start using swap. But at the moment, if I leave swap enabled, my laptop just can't be used when I need it. The precise definition of an "emergency" might be a matter for debate. But to clarify what I mean: An emergency would be where the system is left without any other option than to swap or kill processes. What is an emergency? - Do you really have to ask?... I hope you never find yourself in a burning building! It's not possible for me to define everything that might constitute an emergency in this question. But for example, an emergency might be when the kernel is so pushed for memory that it has start killing processes with the OOM Killer . An emergency is NOT when the kernel thinks it can improve performance by using swap. Final Edit: I've accepted an answer which does precisely what I've asked for at the operating system level. Future readers should also take note of the answers offering application level solutions. | One fix is to make sure the memory cgroup controller is enabled (I think it is by default in even half-recent kernels, otherwise you'll need to add cgroup_enable=memory to the kernel command line). Then you can run your I/O intensive task in a cgroup with a memory limit, which also limits the amount of cache it can consume. If you're using systemd, you can set +MemoryAccounting=yes and either MemoryHigh / MemoryMax or MemoryLimit (depeneds on if you're using cgroup v1 or v2) in the unit, or a slice containing it. If its a slice, you can use systemd-run to run the program in the slice. 
Full example from one of my systems for running Firefox with a memory limit. Note this uses cgroups v2 and is set up as my user, not root (one of the advantages of v2 over v1 is that delegating this to non-root is safe, so systemd does it). $ systemctl --user cat mozilla.slice # /home/anthony/.config/systemd/user/mozilla.slice[Unit]Description=Slice for Mozilla appsBefore=slices.target[Slice]MemoryAccounting=yesMemoryHigh=5GMemoryMax=6G$ systemd-run --user --slice mozilla.slice --scope -- /usr/bin/firefox &$ systemd-run --user --slice mozilla.slice --scope -- /usr/bin/thunderbird & I found to get the user one working I had to use a slice. System one works just by putting the options in the service file (or using systemctl set-property on the service). Here is an example service (using cgroup v1), note the last two lines. This is part of the system (pid=1) instance. [Unit]Description=mount S3QL filesystemRequires=network-online.targetAfter=network-online.target[Install]WantedBy=multi-user.target[Service]Type=forkingUser=s3ql-userGroup=s3ql-userLimitNOFILE=20000ExecStartPre=+/bin/sh -c 'printf "S3QL_CACHE_SIZE=%%i\n" $(stat -c "%%a*%%S*.90/1024" -f /srv/s3ql-cache/ | bc) > /run/local-s3ql-env'ExecStartPre=/usr/bin/fsck.s3ql --cachedir /srv/s3ql-cache/fs1 --authfile /etc/s3ql-authinfo --log none «REDACTED»EnvironmentFile=-/run/local-s3ql-envExecStart=/usr/bin/mount.s3ql --keep-cache --cachedir /srv/s3ql-cache/fs1 --authfile /etc/s3ql-authinfo --cachesize ${S3QL_CACHE_SIZE} --threads 4ExecStop=/usr/bin/umount.s3ql /mnt/S3QL/TimeoutStopSec=2mMemoryAccounting=yesMemoryLimit=1G Documentation is in systemd.resource-control(5) . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/499485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
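For an already-running service the same limits can be applied on the fly with systemctl set-property, as mentioned in passing above (the property names shown are the cgroup-v2 ones, and myservice.service is a placeholder):

sudo systemctl set-property --runtime myservice.service MemoryAccounting=yes MemoryHigh=1G MemoryMax=2G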
499,612 | I have a shell variable for example. a="big little man". How do I use regex in bash print out the variable with only the middle word capitalized? (big LITTLE man) I can do it by separating the variable into 3 variables and then expanding them in echo. For eg first=${a%* } , etc. But how do I do it in one single go with one regex? Is it possible to do it in a single line? Using the capitalize operators (^) | sed Assuming you're using GNU sed: $ sed -E 's/(\w+) (\w+) (\w+)/\1 \U\2\E \3/' <<< 'big little man'big LITTLE man This command makes use of the GNU specific sequences \U and \E which turn the subsequent characters into upper case and cancel case conversion respectively. awk While not operating on regular expressions, awk provides another convenient way to capitalize a single word: $ awk '{print($1, toupper($2), $3)}' <<< 'big little man'big LITTLE man bash Although Bash on its own does not have regular expression based conversions, you can still achieve partial capitalization by treating your string as an array, e.g. $ (read -a words; echo "${words[0]} ${words[1]^^} ${words[2]}") <<< 'big little man'big LITTLE man Here ^^ converts the second element of our array (i.e. the second word) to upper case. The feature was introduced in Bash 4. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335997/"
]
} |
499,631 | My VirtualBox filesystem looks like: # dfFilesystem 1K-blocks Used Available Use% Mounted on/dev/sda2 29799396 5467616 22795012 20% /devtmpfs 1929980 0 1929980 0% /devtmpfs 1940308 12 1940296 1% /dev/shmtmpfs 1940308 8712 1931596 1% /runtmpfs 1940308 0 1940308 0% /sys/fs/cgroup/dev/sdb 31441920 1124928 30316992 4% /srv/node/d1/dev/sdc 31441920 49612 31392308 1% /srv/node/d2/dev/sdd 31441920 34252 31407668 1% /srv/node/d3/dev/sda1 999320 253564 676944 28% /boottmpfs 388064 0 388064 0% /run/user/0 Disks /dev/sdb , /dev/sdc , /dev/sdd are VDI data disks. I removed some data from them (not everything) and would like to use zerofree to compress them afterwards.Looks like I can't use zerofree on those disks. Here is an execution: # zerofree -v /dev/sdbzerofree: failed to open filesystem /dev/sdb Is it possible to use zerofree on such disks? If not, is there any alternative solution? I need to keep the existing data on those disks, but use zerofree (or anything else) to fill removed data with zeros. | I didn't find the answer on how to use zerofree on such disks but I found an alternative solution which works well. Mount your disk somewhere (in my case 3 disks are mounted to locations: /srv/node/d1 , /srv/node/d2 , /srv/node/d3 ). Enter the directory where your disk is mounted ( cd /srv/node/d1 ). Perform the command: dd if=/dev/zero of=zerofillfile bs=1M Remove the a created file: rm -f zerofillfile Perform the above operations for all disks. P.S. not related to this question, but for virtual box disk compaction, use the command after performing the above commands: VBoxManage modifyhd --compact /path/to/my/disks/disk1.vdi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164881/"
]
} |
499,649 | When running cat /proc/meminfo , you get these 3 values at the top: MemTotal: 6291456 kBMemFree: 4038976 kBCached: 1477948 kB As far as I know, the "Cached" value is disk caches made by the Linux system that will be freed immediately if any application needs more RAM, thus Linux will never run out of memory until both MemFree and Cached are at zero. Unfortunately, "MemAvailable" is not reported by /proc/meminfo, probably because it is running in a virtual server. (Kernel version is 4.4) Thus for all practical purposes, the RAM available for applications is MemFree + Cached. Is that view correct? | That view can be very misleading in a number of real-world cases. The kernel now provides an estimate for available memory, in the MemAvailable field. This value is significantly different from MemFree + Cached . /proc/meminfo: provide estimated available memory [kernel change description, 2014] Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up "free" and "cached", which was fine ten years ago, but is pretty much guaranteed to be wrong today. It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files. Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the "low" watermarks from /proc/zoneinfo. However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory. It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place. ... Documentation/filesystems/proc.txt: ... MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system. 1. MemAvailable details As it says above, tmpfs and other Shmem memory cannot be freed, only moved to swap. Cached in /proc/meminfo can be very misleading, due to including this swappable Shmem memory. If you have too many files in a tmpfs, it could be occupying a lot of your memory :-). Shmem can also include some graphics memory allocations , which could be very large. MemAvailable deliberately does not include swappable memory. Swapping too much can cause long delays. You might even have chosen to run without swap space, or allowed only a relatively limited amount. I had to double-check how MemAvailable works. At first glance, the code did not seem to mention this distinction. /* * Not all the page cache can be freed, otherwise the system will * start swapping. Assume at least half of the page cache, or the * low watermark worth of cache, needs to stay. */pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];pagecache -= min(pagecache / 2, wmark_low);available += pagecache; However, I found it correctly treats Shmem as "used" memory. 
I created several 1GB files in a tmpfs. Each 1GB increase in Shmem reduced MemAvailable by 1GB. So the size of the "file LRU lists" does not include shared memory or any other swappable memory. (I noticed these same page counts are also used in the code that calculates the "dirty limit" ). This MemAvailable calculation also assumes that you want to keep at least enough file cache to equal the kernel's "low watermark". Or half of the current cache - whichever is smaller. (It makes the same assumption for reclaimable slabs as well). The kernel's "low watermark" can be tuned, but it is usually around 2% of system RAM . So if you only want a rough estimate, you can ignore this part :-). When you are running firefox with around 100MB of program code mapped in the page cache, you generally want to keep that 100MB in RAM :-). Otherwise, at best you will suffer delays, at worst the system will spend all its time thrashing between different applications. So MemAvailable is allowing a small percentage of RAM for this. It might not allow enough, or it might be over-generous. "The impact of those factors will vary from system to system". For many PC workloads, the point about "lots of files" might not be relevant. Even so, I currently have 500MB reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache (over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command df -x / | sort -n , but e.g. Gnome Disk Usage Analyzer would do the same thing. 2. [edit] Memory in control groups So-called "Linux containers" are built up from namespaces , cgroups , and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build containers like this and sell them as "virtual servers" :-). Hosting servers may also build "virtual servers" using features which are not in mainline Linux. OpenVZ containers pre-date mainline cgroups by two years, and may use "beancounters" to limit memory. So you cannot understand exactly how those memory limits work if you only read documents or ask questions about the mainline Linux kernel. cat /proc/user_beancounters shows current usage and limits. vzubc presents it in a slightly more friendly format. The main page on beancounters documents the row names. Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case? The interface for this differs in a number of ways, depending if you use cgroup-v1 or cgroup-v2 . My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat . The file shows various fields including total_rss , total_cache , total_shmem . shmem, including tmpfs, counts towards the memory limits. I guess you can look at total_rss as an inverse equivalent of MemFree . And there is also the file memory.kmem.usage_in_bytes , representing kernel memory including slabs. (I assume memory.kmem. also includes memory.kmem.tcp. and any future extensions, although this is not documented explicitly). There are not separate counters to view reclaimable slab memory. The document for cgroup-v1 says hitting the memory limits does not trigger reclaim of any slab memory. 
(The document also has a disclaimer that it is "hopelessly outdated", and that you should check the current source code). cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat file. All the fields sum over child cgroups, so you don't need to look for total_... fields. There is a file field, which means the same thing cache did. Annoyingly I don't see an overall field like rss inside memory.stat ; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory. Linux cgroups do not automatically virtualize /proc/meminfo (or any other file in /proc ), so that would show the values for the entire machine. This would confuse VPS customers. However it is possible to use namespaces to replace /proc/meminfo with a file faked up by the specific container software . How useful the fake values are, would depend on what that specific software does. systemd believes cgroup-v1 cannot be securely delegated e.g. to containers. I looked inside a systemd-nspawn container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand the contained systemd does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it. I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, hopefully you will be delegated permission so you can enable memory accounting in systemd (or equivalent). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/499649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112631/"
]
} |
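On kernels that do not expose MemAvailable (as in the question), the kernel change description quoted above suggests a rough approximation from the fields that are present; a sketch that ignores the low-watermark correction:

awk '/^(MemFree|Active[(]file[)]|Inactive[(]file[)]|SReclaimable):/ { sum += $2 }
     END { printf "MemAvailable (approx): %d kB\n", sum }' /proc/meminfo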
499,694 | Typically an easy diagnostic approach to checking an application server is running is to run telnet again the host and port: telnet somehost port The issue is that some operating systems, such as macOS now make the tool unavailable by default. For this reason, instead of trying to see how to install telnet, I am curious to know if there are any other CLI approaches to check a server is listening, without needing special privileges? Just to clarify I am looking for solutions that are as quick to use on any system as telnet, which is achievable in 5 seconds. Coding a solution doesn’t really offer a quick access approach. | You can try several ways to check if something listen on particular port: With wget / curl wget your_IP:port With netstat netstat -an|grep LISTEN|grep :port With lsof lsof -i :port With netcat nc -vz your_IP port With /proc filesystem (probably will work only on linux)( explained here ) With ss ss|grep LISTEN|grep :port With nmap nmap -sS -O -pport your_IP EDIT1 Also (almost) every ssh,http,ftp client can be used, but sometime will be hard to understand if port is closed by firewall or not available. EDIT2 Found in this Q/A sample way to use cat and echo to do the job: true &>/dev/null </dev/tcp/127.0.0.1/$PORT && echo open || echo closed or with only exec command (if you do not see error port is open): exec 6<>/dev/tcp/your_IP/port And I found a way to use only awk to do the job (original here ): awk -v port=your_port 'function hextodec(str,ret,n,i,k,c){ ret = 0 n = length(str) for (i = 1; i <= n; i++) { c = tolower(substr(str, i, 1)) k = index("123456789abcdef", c) ret = ret * 16 + k } return ret}function getIP(str,ret){ ret=hextodec(substr(str,index(str,":")-2,2)); for (i=5; i>0; i-=2) { ret = ret"."hextodec(substr(str,i,2)) } ret = ret":"hextodec(substr(str,index(str,":")+1,4)) return ret} NR > 1 {{local=getIP($2);remote=getIP($3) }{ if (remote ~ "0:0" && local ~ ":"port) print local}}' /proc/net/tcp EDIT3 As mentioned in to comment some of the methods, especially based on /dev filesystem may bot work in your environment | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27460/"
]
} |
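One more variant of the /dev/tcp trick above that keeps the "five second" feel of telnet by bounding the wait (your_IP and port are placeholders):

timeout 5 bash -c 'cat < /dev/null > /dev/tcp/your_IP/port' && echo open || echo "closed or filtered"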
499,716 | I would like to know how Linux, such us Ubuntu, "know" how and what drivers to install when installing it from scratch.For example, I buy a new computer without any system and I install Ubuntu. Inside my PC I have a GPU, HDDs, etc., also some peripherals, like mouse, keyboard, etc. Obviously, a fresh install does not have drivers needed for the system to control and communicate with the hardware so how does Ubuntu "know" what drivers to install/download and how does it do that? | (Based on Google-cached copy of http://people.skolelinux.org/pere/blog/Modalias_strings___a_practical_way_to_map__stuff__to_hardware.html by Petter Reinholdtsen.) In the hardware, there are certain standard device identifiers that can be accessed as long as you know the standard access method for that particular I/O bus or subsystem, without having any further knowledge about the actual device. In Linux, these identifiers are used to build up modalias strings , which are then used to find the correct driver for each device. The source code of each driver module can include MODULE_DEVICE_TABLE structures, which are used by the depmod command to create module alias wildcard entries that will match the modalias strings of the hardware supported by that particular module. When the kernel detects a piece of hardware with no matching driver loaded yet, it will create a modalias string from the identifiers of the hardware, and use it to request a module to be autoloaded. The modprobe command will then use the /lib/modules/$(uname -r)/modules.alias[.bin] file created by depmod to see if a matching module exists. If it does, that module is loaded and gets to probe the hardware for further details if necessary. For example, I have a DVB TV card: $ lspci -v -nn -s 07:00.007:00.0 Multimedia video controller [0400]: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder [14f1:8852] (rev 04) Subsystem: Hauppauge computer works Inc. CX23885 PCI Video and Audio Decoder [0070:6a28] This results in a modalias string like this: pci:v000014F1d00008852sv00000070sd00006A28bc04sc00i00 The cx23885 module has these aliases based on MODULE_DEVICE_TABLE in its source code: # modinfo cx23885...alias: pci:v000014F1d00008880sv*sd*bc*sc*i*alias: pci:v000014F1d00008852sv*sd*bc*sc*i*... When the kernel detects the card, it effectively runs the modprobe pci:v000014F1d00008852sv00000070sd00006A28bc04sc00i00 command. The second alias of the cx23885 module matches, and so that module gets loaded. PCI/PCI-X/PCIe bus devices This is the "PCI subtype". It uses modalias strings like this: pci:v00008086d00002770sv00001028sd000001ADbc06sc00i00 This can be decoded as follows: v 00008086 (vendor)d 00002770 (device)sv 00001028 (subvendor)sd 000001AD (subdevice)bc 06 (bus class)sc 00 (bus subclass)i 00 (interface) With lspci -nn , you can see the class, subclass, vendor and device IDs. If you add the -v option, you can also see the subvendor:subdevice IDs. USB devices With USB devices, the modalias strings look like this: usb:v1D6Bp0001d0206dc09dsc00dp00ic09isc00ip00 This unpacks to: v 1D6B (device vendor)p 0001 (device product)d 0206 (bcddevice)dc 09 (device class)dsc 00 (device subclass)dp 00 (device protocol)ic 09 (interface class)isc 00 (interface subclass)ip 00 (interface protocol) With the lsusb command, you can see the vendor and product IDs. If you use the -v option, you can see the other IDs too. 
ACPI devices These use the ACPI PNP identifiers, prefixed with acpi: and separated with colons: acpi:IBM0071:PNP0511: DMI devices This can be a very long modalias string: dmi:bvnIBM:bvr1UETB6WW(1.66):bd06/15/2005:svnIBM:pn2371H4G:pvrThinkPadX40:rvnIBM:rn2371H4G:rvrNotAvailable:cvnIBM:ct10:cvrNotAvailable: This unpacks to: bvn IBM (BIOS vendor)bvr 1UETB6WW(1.66) (BIOS version)bd 06/15/2005 (BIOS date)svn IBM (system vendor)pn 2371H4G (product name)pvr ThinkPadX40 (product version)rvn IBM (board vendor)rn 2371H4G (board name)rvr NotAvailable (board version)cvn IBM (chassis vendor)ct 10 (chassis type)cvr NotAvailable (chassis version) SerIO devices, i.e. mostly PS/2 mice The modalias string will look like this: serio:ty01pr00id00ex00 The values here are: ty 01 (type)pr 00 (prototype)id 00 (id)ex 00 (extra) Other bus/device types There are many other bus types recognized by the Linux kernel. Studying the contents of the kernel source file file2alias.c might be helpful in deciphering the meaning of the components of each type of modalias string. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240420/"
]
} |
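You can watch this mechanism on a live system; for example (the device paths and the cx23885 module are the ones used in the answer above):

cat /sys/bus/pci/devices/*/modalias | head -n 3   # modalias strings the kernel generates for PCI devices
modinfo -F alias cx23885 | head -n 3              # wildcard aliases that driver advertises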
499,729 | I have read this answer but don't know how to add the following line into my sudoers file. matthew ALL=(ALL) NOPASSWD: /usr/sbin/service fancontrol start I ran " sudo visudo ", and a " /etc/sudoers.tmp " window popped up. Is " /etc/sudoers.tmp " the correct file into which the line should be added? If so, under which line should I add the lines? How can I save it? I cannot find a "Save" option there. I aim to run " sudo service fancontrol start " without a password. GNU nano 2.9.3 /etc/sudoers.tmp ## This file MUST be edited with the 'visudo' command as root. | visudo is a command provided for editing the sudoers file in a safe way. To quote its manual page : visudo edits the sudoers file in a safe fashion, analogous to vipw(8). visudo locks the sudoers file against multiple simultaneous edits, provides basic sanity checks, and checks for parse errors. The /etc/sudoers.tmp file is lock file used by visudo . Your changes are written to this temporary file so that visudo can carry out its checks. If everything checks out okay, the main /etc/sudoers file will be modified accordingly. So when you run sudo visudo , a command line editor pops up so that you can edit the file. In your case, this editor appears to be GNU nano . In nano, you can navigate to the bottom of the file using arrow keys (or the Page Down key), and then paste the lines you want to include. Once your changes are done, you can exit the editor with Ctrl + X and choose the 'Y' option to save the file (you'll be asked to confirm the filename - just hit Enter). Your sudoers file should now be updated. You can use a pager like less to read the file and confirm that for yourself (the command to do that is sudo less /etc/sudoers ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/499729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/332937/"
]
} |
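After saving, the rule can be verified without opening the editor again:

sudo visudo -c                          # syntax-check all sudoers files
sudo -l -U matthew | grep fancontrol    # list matthew's rules and confirm the NOPASSWD entry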
499,792 | In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses? if a hostname can be resolved in /etc/hosts , does DNS apply after /etc/hosts to resolve the hostname or treat the resolved IP address by /etc/hosts as a "hostname" to resolve recursively? In my browser (firefox and google chrome), when I add to /etc/hosts : 127.0.0.1 google.com www.google.com typing www.google.com into the address bar of the browsers andhitting entering won't connect to the website. After I remove thatline from /etc/hosts , I can connect to the website. Does it meanthat /etc/hosts overrides DNS for resolving hostnames? After I re-add the line to /etc/hosts , I can still connect to thewebsite, even after refreshing the webpage. Why doesn't /etc/hosts apply again, so that I can't connect to the website? Thanks. | This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system: hosts: files mdns4_minimal [NOTFOUND=return] dns Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And as you can imagine whichever comes first wins . Also, see man 5 nsswitch.conf to get more idea on this. As an aside, to follow the NSS host resolution orderings, use getent with hosts as database e.g.: getent hosts example.com | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/499792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
499,840 | I teach an Intro to UNIX/Linux course at a local college and one of my students asked the following question: Why are some of the files in my directory colored white and others are gray? Are the white ones the ones I created today and the gray are existing files? As I looked into this I first thought the answer would be in the LS_COLORS variable, but further investigation revealed that the color listings were different when using the -l switch versus the -al switch with the ls command. See the following screen shots: Using ls -l the file named '3' shows as white but using the -al switch the same file shows a gray. Is this a bug in ls or does anyone know why this is happening? | It looks as if your prompt-string ( $PS1 ) is setting the bold attribute on characters to make the colors nicer, and not unsetting it. The output from ls doesn't know about this, and does unset bold. So after the first color output of ls , everything looks dimmer. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/499840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336198/"
]
} |
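If the diagnosis above is right, the usual fix is to make sure every colour escape in PS1 is followed by a reset; a sketch of what that looks like (the colours themselves are just an example):

PS1='\[\e[1;32m\]\u@\h:\w\$\[\e[0m\] '   # the trailing \e[0m resets bold/colour before command output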
499,897 | I have this regex: (?<=prefix).*$ which returns any character following string "prefix" and it works fine on any online regex engines (e.g. https://regex101.com ). The problem is when I use that regex in bash: grep '(?<=prefix).*$' <<< prefixSTRING it does not match anything. Why that regex does not work with grep? | You seem to have defined the right regex, but not set the sufficient flags in command-line for grep to understand it. Because by default grep supports BRE and with -E flag it does ERE. What you have (look-aheads) are available only in the PCRE regex flavor which is supported only in GNU grep with its -P flag. Assuming you need to extract only the matching string after prefix you need to add an extra flag -o to let know grep that print only the matching portion as grep -oP '(?<=prefix).*$' <<< prefixSTRING There is also a version of grep that supports PCRE libraries by default - pcregrep in which you can just do pcregrep -o '(?<=prefix).*$' <<< prefixSTRING Detailed explanation on various regex flavors are explained in this wonderful Giles' answer and tools that implement each of them | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/499897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336253/"
]
} |
499,901 | I have a following bash script and it is being executed using popen(/path/to/script, 'r+') from C code and saving result back (Ex: DA00000);for this, i need to export a bash script file. I should not use a seperate bash script. How to write below script in a single line itself as command? So, i can use this command directly ( Ex:fp = popen(command "r")) in C code. #!/bin/shurl="$(grep 0x017a /sys/bus/pci/devices/*/device)"addr="$(echo $url | cut -d/ -f6)"str="$(head -n1 /sys/bus/pci/devices/$addr/resource | cut -d ' ' -f 1)"result="${str:${#str} - 8}"echo $result | You seem to have defined the right regex, but not set the sufficient flags in command-line for grep to understand it. Because by default grep supports BRE and with -E flag it does ERE. What you have (look-aheads) are available only in the PCRE regex flavor which is supported only in GNU grep with its -P flag. Assuming you need to extract only the matching string after prefix you need to add an extra flag -o to let know grep that print only the matching portion as grep -oP '(?<=prefix).*$' <<< prefixSTRING There is also a version of grep that supports PCRE libraries by default - pcregrep in which you can just do pcregrep -o '(?<=prefix).*$' <<< prefixSTRING Detailed explanation on various regex flavors are explained in this wonderful Giles' answer and tools that implement each of them | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/499901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335799/"
]
} |
499,913 | Why is this working: mkdir /dir/test{1,2,3} and this not?
{chown httpd,chmod 700} /dir/test1
-bash: {chown: command not found
My Bash version is: GNU bash, version 4.2.46(2)-release | Your brace expansion is not valid. A brace expansion must be one word in the shell. A word is a string delimited by unquoted spaces (or tabs or newlines, by default), and the string {chown httpd,chmod 700} consists of the three separate words {chown , httpd,chmod and 700} and would not be recognised as a brace expansion. Instead, the shell would interpret the line as a {chown command, executed with the arguments httpd,chmod , 700} and /dir/test1 . The simplest way to test this is with echo :
$ echo {chown httpd,chmod 700} /dir/test1
{chown httpd,chmod 700} /dir/test1
$ echo {"chown httpd","chmod 700"} /dir/test1
chown httpd chmod 700 /dir/test1
Note that even if your brace expansion had worked, the command would have been nonsensical. Just write two commands:
chown httpd /dir/test1
chmod 700 /dir/test1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/499913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41449/"
]
} |
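A short sketch of what the question above was presumably after, applying both chown and chmod to several test directories (the httpd user and the paths are taken from the question; adjust as needed):
# Create the directories first
mkdir /dir/test{1,2,3}
# Then apply both commands to each one; a loop keeps every command a separate word
for d in /dir/test{1,2,3}; do
    chown httpd "$d"
    chmod 700 "$d"
done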
499,938 | What I have Host: Windows 10 Version 1803 Guest: 4.19.20-1-MANJARO VirtualBox Version 6.0.4r128413 What I have tried: using the VirtualBox geustaddition iso 1.1. from toolbar Devices > Insert guestaddition image ... 1.2. cd /run/media/foobar/VBox_GAs-6.0.4 1.3. sudo sh autorun.sh or simply by sudo sh VBoxLinuxAdditions.run leading to the error: This system is currently not set up to build kernel modules. Please install the Linux kernel "header" files matching the current kernel for adding new hardware support to the system. VirtualBox Guest Additions: modprobe vboxsf failed 1.4. So I tried solving the problem by installing the Linux kernel header files as mentioned here : 1.4.1 find the Linux kernel by mhwd-kernel -li which in my case is linux419 1.4.2. Then sudo pacman -S linux419-kernel 1.4.3. then following the step one in original post and reboot. This solves the resolution problem but every time I reboot I have to wait for 5-6 minutes showing the message: A stop job is running for vboxadd.service … 1.4.4. Tried the sudo systemctl stop vboxadd and sudo systemctl disable vboxadd from here but then it reverts the resolution back. 1.4.5. tried uninstalling the guest additions by sudo sh VBoxLinuxAdditions.run uninstall and then following step 2 whish was also not successful! using the Manjaro repository as suggested on their wiki : 2.1. sudo pacman -Syu virtualbox-guest-utils leading to There are 11 providers available for VIRTUALBOX-HOST-MODULES: :: Repository extra linux316-virtualbox-guest-modules … :: Repository community linux-rt-lts-manjaro-virtualbox-guest-modules 2.2. from here running mhwd-kernel -li indicates that should go for linux419 , or use sudo pacman -S linux419-virtualbox-guest-modules instead. but then I get the error: error failed to commit transaction (conflicting files) virtualbox guest utils exists in filesystem vboxclient 2.3. as suggested here I tried sudo pacman -S --force and finished the installation and rebooted. But nothing changes except that I get this notification: | I recently faced the same issue, and after some research I came up with the solution that doesn't require to use VBoxVGA adapter and reinstall Manjaro.The TL;DR version is, you needed to install linux419-headers , not linux419-kernel . System specs Host : Windows 10 1809 Pro 64 bit Guest : Manjaro KDE 18.0.4 64 bit with 4.19.34-1-MANJARO kernel Virtualization : VirtualBox 6.0.6 r130049 (Qt5.6.2) Steps Do full system update: sudo pacman -Syyu Install gcc , make and Linux kernel "header" files for the current kernel version (which can be found via uname -r command, e.g. linux419-headers – I tried to provide a uniform command using sed and grep functionality): sudo pacman -S gcc make linux$(uname -r|sed 's/\W//g'|cut -c1-2)-headers Reboot: sudo reboot Mount the ISO via Devices → Guest Additions CD Image… and open the terminal there. Run sudo sh VBoxLinuxAdditions.run Reboot: sudo reboot At this point Manjaro should work fine with the new VMSVGA controller adjusting screen resolution on the fly (make sure you allocated enough video memory (128 Mb) and enabled acceleration in VM settings beforehand), seamlessly share buffer and allow drag-and-drop. Update Recent kernel update from 4.19.34-1-MANJARO to 5.0.9-2-MANJARO didn't affect functionality of Guest Additions and no additional tweaking was required: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167505/"
]
} |
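A hedged sketch of the preparation step from the answer above as one script. The kernel-version-to-package mapping is an assumption based on Manjaro's linuxXY naming and may need adjusting, and the ISO mount path is taken from the question:
# Install build tools plus the header package matching the running kernel,
# e.g. 4.19.x -> linux419-headers (assumption: Manjaro's naming scheme)
ver=$(uname -r | cut -d. -f1,2 | tr -d .)   # "4.19.34-1-MANJARO" -> "419"
sudo pacman -Syu --needed gcc make "linux${ver}-headers"
# Reboot, mount the Guest Additions ISO from the Devices menu, then:
sudo sh /run/media/"$USER"/VBox_GAs-*/VBoxLinuxAdditions.run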
499,951 | I have a table as of below fruits shopname Apple x1orange x1banana x2Apple x3orange x2banana x3 I want to group all rows based on column 1 and replace the duplicates with empty space. It will look like below. fruits shopname Apple x1 x3banana x2 x3orange x1 x2 I know we can remove duplicates with uniq command.but here I want to group them and replace duplicates with empty space. | I recently faced the same issue, and after some research I came up with the solution that doesn't require to use VBoxVGA adapter and reinstall Manjaro.The TL;DR version is, you needed to install linux419-headers , not linux419-kernel . System specs Host : Windows 10 1809 Pro 64 bit Guest : Manjaro KDE 18.0.4 64 bit with 4.19.34-1-MANJARO kernel Virtualization : VirtualBox 6.0.6 r130049 (Qt5.6.2) Steps Do full system update: sudo pacman -Syyu Install gcc , make and Linux kernel "header" files for the current kernel version (which can be found via uname -r command, e.g. linux419-headers – I tried to provide a uniform command using sed and grep functionality): sudo pacman -S gcc make linux$(uname -r|sed 's/\W//g'|cut -c1-2)-headers Reboot: sudo reboot Mount the ISO via Devices → Guest Additions CD Image… and open the terminal there. Run sudo sh VBoxLinuxAdditions.run Reboot: sudo reboot At this point Manjaro should work fine with the new VMSVGA controller adjusting screen resolution on the fly (make sure you allocated enough video memory (128 Mb) and enabled acceleration in VM settings beforehand), seamlessly share buffer and allow drag-and-drop. Update Recent kernel update from 4.19.34-1-MANJARO to 5.0.9-2-MANJARO didn't affect functionality of Guest Additions and no additional tweaking was required: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336296/"
]
} |
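The question above (grouping shop names by fruit) can be handled with awk; a minimal sketch, assuming the input file is called fruits.txt and has the one-line header shown:
# Collect every shopname seen for each fruit, then print one line per fruit
awk 'NR > 1 { shops[$1] = shops[$1] " " $2 }
     END    { for (f in shops) print f shops[f] }' fruits.txt | sort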
499,982 | I understand where to find the logs, but I am not always sure what they mean . And I can't exactly find a comprehensive guide on sshd logs explaining what they mean. I am particularly concerned with this set of log attempts: Feb 03 01:08:47 malan-server sshd[8110]: Invalid user centos from 193.106.58.90 port 34574Feb 03 01:08:47 malan-server sshd[8110]: pam_tally(sshd:auth): pam_get_uid; no such userFeb 03 01:08:47 malan-server sshd[8110]: pam_unix(sshd:auth): check pass; user unknownFeb 03 01:08:47 malan-server sshd[8110]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.106.58.90Feb 03 01:08:48 malan-server sshd[8110]: Failed password for invalid user centos from 193.106.58.90 port 34574 ssh2Feb 03 01:08:49 malan-server sshd[8110]: Connection closed by invalid user centos 193.106.58.90 port 34574 [preauth]Feb 03 01:14:30 malan-server sshd[8114]: Invalid user centos from 193.106.58.90 port 39249Feb 03 01:14:30 malan-server sshd[8114]: pam_tally(sshd:auth): pam_get_uid; no such userFeb 03 01:14:30 malan-server sshd[8114]: pam_unix(sshd:auth): check pass; user unknownFeb 03 01:14:30 malan-server sshd[8114]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.106.58.90Feb 03 01:14:32 malan-server sshd[8114]: Failed password for invalid user centos from 193.106.58.90 port 39249 ssh2Feb 03 01:14:34 malan-server sshd[8114]: Connection closed by invalid user centos 193.106.58.90 port 39249 [preauth]Feb 03 01:20:18 malan-server sshd[8118]: Invalid user centos from 193.106.58.90 port 43934Feb 03 01:20:18 malan-server sshd[8118]: pam_tally(sshd:auth): pam_get_uid; no such userFeb 03 01:20:18 malan-server sshd[8118]: pam_unix(sshd:auth): check pass; user unknownFeb 03 01:20:18 malan-server sshd[8118]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.106.58.90Feb 03 01:20:20 malan-server sshd[8118]: Failed password for invalid user centos from 193.106.58.90 port 43934 ssh2Feb 03 01:20:22 malan-server sshd[8118]: Connection closed by invalid user centos 193.106.58.90 port 43934 [preauth]Feb 03 01:26:06 malan-server sshd[8121]: Invalid user centos from 193.106.58.90 port 48611Feb 03 01:26:06 malan-server sshd[8121]: pam_tally(sshd:auth): pam_get_uid; no such userFeb 03 01:26:06 malan-server sshd[8121]: pam_unix(sshd:auth): check pass; user unknownFeb 03 01:26:06 malan-server sshd[8121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.106.58.90Feb 03 01:26:08 malan-server sshd[8121]: Failed password for invalid user centos from 193.106.58.90 port 48611 ssh2Feb 03 01:26:08 malan-server sshd[8121]: Connection closed by invalid user centos 193.106.58.90 port 48611 [preauth] There are plenty that day from that same IP address, 193.106.58.90 in Kiev, Ukraine . 
Another set of scary looking logs are these: Feb 04 19:58:29 malan-server sshd[9725]: Bad protocol version identification 'RFB 003.003' from 142.44.253.51 port 36772Feb 04 23:47:52 malan-server sshd[9762]: Bad protocol version identification 'REMOTE HI_SRDK_DEV_GetHddInfo MCTP/1.0' from 162.207.145.58 port 48248Feb 05 06:40:36 malan-server sshd[9836]: Bad protocol version identification 'REMOTE HI_SRDK_DEV_GetHddInfo MCTP/1.0' from 186.4.174.94 port 34515Feb 05 07:59:13 malan-server sshd[9850]: Bad protocol version identification 'GET / HTTP/1.1' from 209.17.97.34 port 43944Feb 05 09:09:48 malan-server sshd[9863]: Bad protocol version identification 'REMOTE HI_SRDK_DEV_GetHddInfo MCTP/1.0' from 98.150.93.187 port 60182Feb 05 14:09:45 malan-server sshd[9911]: Did not receive identification string from 191.232.54.97 port 63982Feb 05 14:09:45 malan-server sshd[9912]: Bad protocol version identification '\003' from 191.232.54.97 port 64044Feb 05 14:09:45 malan-server sshd[9913]: Bad protocol version identification '\003' from 191.232.54.97 port 64136Feb 05 14:33:37 malan-server sshd[9919]: Bad protocol version identification '' from 198.108.67.48 port 56086 What do these mean? I understand that the Internet is a big bad mean scary place where public-facing IP addresses constantly get bombarded with bot-attacks. But I have my router configured to forward connections on port 9000 to my server's port 22, so I am not entirely sure how there are still bot-attacks. It seemed unlikely to me that they would be port scanning all 65,535 possible ports. I'll write a list of questions: Did I just choose a port that's too easy to guess? What would be a better port number? What do the port numbers in these sshd logs even mean? How can they have access to port 44493 if my router is only configured to forward port 9000 to port 22? It seems obvious to me that the port number listed is not the same thing as the outward-facing computer port, because I only access through port 9000, yet the port number listed for my own external logins is not 9000. What does [preauth] mean? What does Bad protocol version identification 'REMOTE HI_SRDK_DEV_GetHddInfo MCTP/1.0' from 162.207.145.58 port 48248 mean? | There is no good port to use, only good SSH configurations. If you disable password-based logins and only allow key-based authentication, you won’t risk much from such brute-forcing attempts. You could add port-knocking, but that’s security by obscurity. The port numbers listed on the right of the logs are the source ports; these are dynamically allocated and are on the source system, not the target system. [preauth] means that the logged event happened before the connection was authenticated — i.e. in this case that the connection is closed before being authenticated. All the logs from your second set of logs correspond to non-SSH traffic sent to your dæmon. You’ll see this happen quite a lot, especially since you’re listening on a non-standard port — various scanners will send requests without knowing what is listening on the other end. Scanning large portions of the Internet, on a variety of ports, doesn’t take very long if you have well-connected systems to scan from, or a large number of compromised hosts in a botnet. See massscan for an example of a mass-scanning tool. There are also lists of known-open IP addresses and ports which are circulated; so all it takes is for one scan to find your open port 9000. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/499982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
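To follow up on the answer above, a rough sketch for summarising those "Invalid user" attempts per source address. The journalctl unit name varies by distribution (sshd here is an assumption), and the awk field positions assume the log format shown in the question:
# Count failed "Invalid user" attempts per source IP, busiest first
journalctl -u sshd --since today \
  | awk '/Invalid user/ { print $(NF-2) }' \
  | sort | uniq -c | sort -rn | head
# And the usual hardening in /etc/ssh/sshd_config, per the answer's advice:
#   PasswordAuthentication no
#   PermitRootLogin prohibit-password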
500,025 | What is the NixOS-way to put a configuration file in /etc ? E.g. I want to drop /etc/nanorc . I found some forum entries talking about programming it into /etc/nixos/configuration.nix , but could not find any documentation about that... | To create a file in /etc on NixOS, use environment.etc in configuration.nix . Here's an example:
environment.etc = {
  # Creates /etc/nanorc
  nanorc = {
    text = ''
      whatever you want to put in the file goes here.
    '';
    # The UNIX file mode bits
    mode = "0440";
  };
}; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9417/"
]
} |
500,050 | hostname is used to display the system's DNS name, and to display or set its hostname or NIS domain name. Does a computer system (Linux) only have one host name? In virtual hosting, several host names can be resolved to different root directories in a web server. If a computer system (Linux) can only have one host name, how is virtual hosting possible? Thanks. | Yes, and no. The are two distinct things called hostnames. The "internal" hostname is basically a string kept by the kernel. This is the one returned by the hostname command (or the gethostname() call) and it's unique within a system (*) . It's mostly used when a program wants to output some identifier for the system it's running on. E.g. \h in Bash's PS1 expands to the hostname. Similarly, syslog-style logfiles also include the hostname on log entries. (* Though as Stephen Kitt comments, namespaces can be used to show different hostnames to processes on the same system. That's mostly used for containers, which try to act like they're distinct systems.) Then there's also DNS names that are used by other systems to look up the IP address of another. There might be more than one DNS name that point to the same IP address, and so the same host. The internal hostname and the DNS names don't need to be the same. Suppose someone has a webserver they've decided to call orange (*) , with the IP address 192.0.2.9 . It could serve two different domains and the DNS would be set up to have www.example.org and www.example.com both point to 192.0.2.9 , while the internal hostname of the system might be orange.example.org or just orange . In that case, the DNS setup would usually also have a reverse lookup on 192.0.2.9 point back to the name orange.example.org , but there's nothing to force that. (* because they like to name their servers after fruit. Someone might use webserver1 or such, but the point is that it doesn't need to be named after one of the actual domains.) In addition to that, virtual hosting requires that the browser tell the web server the name of the site it tried to access. Otherwise the server would not know which virtual site the client tried to reach. HTTP has the Host header for that. What muddies the distinction between a DNS name and the internal hostname is the mDNS protocol (implemented e.g. by the avahi daemon ) and other discovery protocols. mDNS makes it possible for hosts to query all otherhosts on the same network for name information, and to make their own hostnames visible on other hostswithout explicitly setting them up in DNS. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
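A small sketch illustrating the distinction drawn in the answer above between the kernel's hostname and DNS names, plus the Host header that makes virtual hosting work (the names and the 192.0.2.9 address come from the answer's example and are not real):
# The single "internal" hostname kept by the kernel
hostname
# Several DNS names may resolve to the same address
getent hosts www.example.org www.example.com
# Virtual hosting: the client tells the web server which site it wants
curl -H 'Host: www.example.com' http://192.0.2.9/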
500,072 | Kitty is a terminal for Linux. How do I copy and paste with it? CTRL + c does not work, and there is no option on right click. Right-click also doesn't work for copy. | Kitty You need to use a capital C . To copy and paste in Kitty: select text, press CTRL + SHIFT + C , then paste with SHIFT + INSERT in any app. There is no method to copy with the cursor. Vim Because things like VIM keep coming up: if you have Neovim installed (which is basically a better vim), you can easily copy to your Xorg/Wayland buffer by selecting into the + buffer with "+y . This has the advantage of skipping over things that should not be copied, like hints. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
500,075 | I have two regular expressions i.e. command1 and command2 where i need to combine both expressions into a single expression using | for that command1 output should be passed to next expression. command1: grep 0x017a /sys/bus/pci/devices/*/device | cut -d/ -f6>> Output : 00:00:01 command 2: head -n1 /sys/bus/pci/devices/00:00:01/resource | cut -d ' ' -f 1 | tail -c 9 How to use command1 output (00:00:01) into command2 and combine into a single expression? | Kitty You need to use a capital C . To copy and paste in Select text Press CTRL + SHIFT + C Paste with SHIFT + INSERT in any app. There is no method to copy with the cursor. Vim Because things like VIM keep coming up. If you have Neovim installed (which is basically better vim). You can easily copy to your Xorg/Wayland buffer by selecting into the + buffer with "+y . This has the advantage of skipping over things that should not be copied, like hints, | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335799/"
]
} |
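For the question above, a sketch of one way to fold command1 into command2 with command substitution, so the whole thing becomes a single expression suitable for popen() (the paths and the 0x017a device ID are taken from the question):
# Single expression: command1 supplies the PCI address consumed by command2
head -n1 "/sys/bus/pci/devices/$(grep 0x017a /sys/bus/pci/devices/*/device | cut -d/ -f6)/resource" \
  | cut -d ' ' -f 1 | tail -c 9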
500,132 | I have a file called - . I want to display its contents. One way is to do cat ./- since cat - reads from standard input. However , why are cat "-" and cat '-' also interpreted by the shell as cat - ? | The shell removes any quotes before cat sees them. So cat - and cat "-" and cat '-' all get passed through as the array ['cat', '-'] after whitespace tokenization, wildcard expansion, and quote removal by the shell. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336431/"
]
} |
500,225 | I'm trying to use flatpak enter to enter a sandboxed Steam client. To get a PID or instance ID I do like so: $ flatpak psInstance PID Application Runtime2581746118 4294 com.valvesoftware.Steam org.freedesktop.Platform However, doing flatpak enter as root doesn't work: # flatpak enter 4294 basherror: 4294 is neither a pid nor an application or instance ID# flatpak enter 2581746118 basherror: 2581746118 is neither a pid nor an application or instance ID# flatpak enter com.valvesoftware.Steam basherror: com.valvesoftware.Steam is neither a pid nor an application or instance ID Also, using tab completion after flatpak enter only shows command line options, rather than any argument to enter . | The problem is you need to be root to use flatpak enter because it requires entering various container namespaces. What makes that more complex is that sudo changes environment variables making flatpak unaware of your application instances. That results in this rather non-obvious usage: sudo -E flatpak enter instance-id /bin/bash | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41033/"
]
} |
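Building on the answer above, a hedged sketch that pulls the instance ID out of flatpak ps (parsing the column layout shown in the question) and enters that sandbox:
# Grab the instance ID for the Steam sandbox from the output of `flatpak ps`
instance=$(flatpak ps | awk '$3 == "com.valvesoftware.Steam" { print $1 }')
# -E keeps the user environment so flatpak can still see its own instances
sudo -E flatpak enter "$instance" /bin/bash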
500,243 | Want to list all python test scripts that contain "def test" This command line did not work while individual command works find . -name "*.py" | grep "def test" | find . -name '*.py' -exec grep -l 'def test' {} \; or find . -name '*.py' -exec grep -l 'def test' {} + The second version will result in fewer invocations of grep by specifying sets of files as arguments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336533/"
]
} |
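As a variation on the answer above, the same search can be written with xargs; the -print0/-0 pair is only needed if file names may contain whitespace:
# List python files that define at least one test function
find . -name '*.py' -print0 | xargs -0 grep -l 'def test'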
500,254 | I am writing a bash shell script at the moment and one of the specifications is that I need to print an error message if the first argument, $1, is not -e or -d, this is what I have done: if [ "$1" != "-e" ] || [ "$1" != "-d" ]; then echo "Error: First argument must be either -e or -d" exit 1fi this does not seem to work however, as even if i used -e or -d for the first argument, this error message is still printed. What am I doing wrong and how do I fix this? | find . -name '*.py' -exec grep -l 'def test' {} \; or find . -name '*.py' -exec grep -l 'def test' {} + The second version will result in fewer invocations of grep by specifying sets of files as arguments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336541/"
]
} |
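The question above is really about boolean logic: with ||, at least one of the two inequalities is always true, so the error always fires. A minimal sketch of two working variants (&& instead of ||, or a case statement):
# Variant 1: both tests must fail for the argument to be invalid
if [ "$1" != "-e" ] && [ "$1" != "-d" ]; then
    echo "Error: First argument must be either -e or -d" >&2
    exit 1
fi
# Variant 2: a case statement reads a little more naturally
case "$1" in
    -e|-d) ;;  # fine, continue
    *) echo "Error: First argument must be either -e or -d" >&2; exit 1 ;;
esac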
500,255 | Today I just noticed that my process IDs are very high, in the 400,000 (i.e 449624). When I run ps -ef | more , that's when I noticed it. Is that normal or does that indicate a problem? Otherwise the scripts are running fine. I am using Redhat 7.3 x64 bit. One other thing I noticed is that we also have Redhat 7.2 and the pids are not that high, just on newer OS. Why would that be? Does it mean it's OS related and normal? I don't have that kernel_pid_max in my sysctl.conf . I ran cat /proc/sys/kernel/pid_max and I see 458752 . | From the proc documentaton : On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million). You can see the with cat /proc/sys/kernel/pid_max . You can also query this with sysctl . sudo sysctl -a | grep kernel.pid_max Or: sysctl -n kernel.pid_max Modify /etc/sysctl.conf to change the value permanently and reload with sysctl -p . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335368/"
]
} |
500,314 | echo $GOPATH will print: /mnt/star/program/go/package:/mnt/star/git_repository/workspace/go_workplace There are 2 directories, I want to append the first directory's sub-directory bin/ to $PATH . If I write $PATH=$PATH:$GOPATH/bin , then actually it appends 2 directories to $PATH : /mnt/star/program/go/package This only contains directories, it should be /mnt/star/program/go/package/bin . /mnt/star/git_repository/workspace/go_workplace/bin This actually shouldn't be added to $PATH . BTW, there are cases that $GOPATH only contains one directory, so that simply appending $GOPATH/bin will work. I am looking for a solution that fits both cases. So, how do I write this in a bash config file? | You can use: PATH="$PATH:${GOPATH%%:*}/bin" Or PATH="$PATH:${GOPATH%:*}/bin" Both will work because there can be at most one : . It will remove the part after : . So, in your first case, it will remove the second directory and in your second case, there will be no pattern like :* , so there will be no change in the directory name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78405/"
]
} |
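Extending the answer above to the general case of any number of GOPATH entries, a small bash sketch that appends every entry's bin directory to PATH:
# Split $GOPATH on ':' and add each entry's bin/ to PATH
IFS=: read -ra _gopaths <<< "$GOPATH"
for _d in "${_gopaths[@]}"; do
    PATH="$PATH:$_d/bin"
done
unset _gopaths _d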
500,460 | I am running Debian 9 (Stretch) with QEMU/KVM hosting a Windows 10 - 1809 guest. Using Spice for graphics. I installed SPICE Guest Tools 0.141 and am able to copy/paste files and text from my Debian host to my Windows guest. However, I am unable to copy from the Windows guest to the host. I have tried reinstalling Spice Tools on the guest. I have checked that the Channel spice has been added and it looks good to my untrained eye. I checked the guest log files for errors, with no luck. | I made it work with Debian 10 host, and a Windows 10 guest, in both directions. install virt-managerinstall the spice-guest-tools in windows (it has a non-costly license on http://spice-space.org/ ) find the details tab for the VMput the video qxl to qxl (other may work but slower)bottom left, click add hardware, add a channel, and put the spicevmc type with the redhat name. This is very important for the clipboard to work. You must restart virt-manager. It is also important you shut down the OS of the VM. you can also use the option virt-manager --debug to see logs when you copy paste. Here can be found more details: https://blogs.nologin.es/rickyepoderi/index.php?/archives/87-Copy-n-Paste-in-KVM.html thanks to redhat that provided all the drivers Since my answer was popular, I would like also to share how to share a folder.I don't think it is supported for linux kernel older than 4.19. But it works on 4.19. You need to be careful that you do not share a folder with the whole internet wihtout password. But you need to check this yourself. Use virt-manager to share files between Linux host and Windows guest? You set a folder as shared by right clicking on Windows. linux with Nautilus can connect to the smb://IP_WINDOWS. But it is better to use the shell it is more stable. enable firmware rules on windowsOpen Control Panel, click System and Security, and then click Windows Firewall. In the left pane, click Advanced settings, and in the console tree, click Inbound Rules.Under Inbound Rules, locate the rules File and Printer Sharing (NB-Session-In) and File and Printer Sharing (SMB-In).For each rule, right-click the rule, and then click Enable Rule. find ips using ipconfig and ifconfig remove password protection for smb https://pureinfotech.com/setup-network-file-sharing-windows-10/ It is important to deactivate authenticate for all networks in the network configuration in windows, accessible from file sharing. then the folder must be created from scratch to make sure it works See in particular the section "How to share files over the network without needing a password" at the pureinfotech.com link above. If you make the public network have free access without password, it may be a security risk (do not put your credit card number in the shared folder yet). But it will work. You can expand upon those instructions. I don't think a VM inside linux is easily accessible from the public network, but maybe. 
-- And this is how to mount it:
sudo mount -t cifs //192.168.1.123/Users/MrHappy/Desktop/repos /media/vm -o user=externo,password=asd,uid=1000,gid=1000,mfsymlinks
or add this to /etc/fstab:
//192.168.1.123/Users/MrHappy/Desktop/repos /media/vm cifs user=externo,password=asd,uid=1000,gid=1000,mfsymlinks
and then one can mount using:
sudo mount /media/vm
It is important to replace the gid and uid with the ones of the Linux machine, using "id -g user" and "id -u user". The uid and gid are there so that not only root but also the user has access; the option mfsymlinks enables symlinks to work. Before you shut down the host computer, you should run this or the mount point gets stuck:
sudo umount -a -t cifs -l
It is better to do it a few minutes before shutting down the computer. If you want to make a plug-and-play USB microphone (such as an Audio-Technica) work in a Windows guest, you just need to add a "USB redirection" module in virt-manager, and perhaps set the hardware USB device to USB 3 if the socket is blue for USB 3. lsusb -v can confirm that the host finds the device. Windows Device Manager should then see the device. Try to unplug and replug. Do not add the specific USB name in virt-manager or it crashes. Sometimes you may need to unplug and replug the microphone in Windows, and open Settings/System/Sound to see the microphone appear. On a working PC, I was using the Intel integrated graphics and not an AMD/NVIDIA card. I had tearing for videos inside the VM. I removed it by activating TearFree in the Intel driver. You can check that TearFree is enabled by running "grep -i tear /var/log/Xorg.0.log" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301936/"
]
} |
500,536 | https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html#Description says Additionally, systemd-resolved provides a local DNS stub listener on IP address 127.0.0.53 on the local loopback interface. Programs issuing DNS requests directly, bypassing any local API may be directed to this stub, in order to connect them to systemd-resolved. How shall I understand the format of `/etc/resolv.conf`? says the DNS server and resolver ("stub resolver") can be different, you can pass DNS requests to 127.0.0.53 which pass them to your router for actual DNS (eg it could handle local hosts but pass requests for remote hosts on for full DNS). What are DNS server, resolver and stub resolver? I also heard of two kinds of DNS servers (one is called "resolver", and the other I forgot). What do the two kinds mean? | Confusing names that people often get wrong. In the terminology of RFC 1034 , there are "resolvers" and there are "name servers". "resolver" describes the entire subsystem that user programs use to access "name servers", without regard to any particular architecture. It's the subsystem that queries one or more "name servers" for the data that they publish and pieces together from those data a final answer for the querying application, in the manner described in RFC 1034 § 5.3.3 . A "resolver" is the overall subsystem that does query resolution . The RFC theoretically allows, because it isn't intended to be Unix-centric , systems where all of the query resolution mechanism is potentially in some form of shared subsystem that runs inside each individual applications program. In RFC 1034 terminology, a "stub resolver" is what is generally employed in the Unix and Linux world: a fairly dumb DNS client library running in the application processes, talking the same DNS/UDP and DNS/TCP protocols to an external program running as another process, that actually does the grunt work of query resolution by making back-end transactions and building up the front-end response from them. "resolver" is such a confusing term, and so often used contrary to the RFCs, that years ago I took to explaining the DNS to people using terminology borrowed from HTTP: proxy servers , content servers , and client libraries linked into applications. The DNS client library in your applications is almost always going to be the BIND DNS client library from ISC. Most C libraries on Unix and Linux systems incorporate the BIND DNS client library. Several other DNS client libraries exist, though, and a minority of applications use them instead. The DNS client library does name qualification and finds out what DNS proxy server(s) to talk to, in the manners described in further reading. The initial DNS proxy server is, in this particular setup, systemd-resolved listening on 127.0.0.53. Other Unix and Linux softwares that perform this rôle include Daniel J. Bernstein's dnscache , unbound , dnsmasq , the PowerDNS Recursor, the MaraDNS Recursor ("Deadwood"), and so forth. I personally have a local instance of modified dnscache (that can inherit its listening sockets) on every machine listening on 127.0.0.1, which is also the default place that the BIND DNS client library expects a proxy DNS server to be, in the absence of explicit configuration. systemd-resolved talks to other proxy DNS servers, which may well talk to yet further proxy servers, forwarding the query along a chain until the query reaches a resolving proxy server. 
By default, as the systemd people ship things and unless the person who built the binary package or the local system administrator changes it, the resolving proxy DNS server will be a server run by Google as part of Google Public DNS, and there will be a chain of forwarding proxy DNS servers of length 1. If the system administrator has configured systemd-resolved to use other proxy DNS servers instead of Google's, the chain will be longer. Examples of such configuration include (in a rough best-to-worst order) using a local resolving proxy DNS server on the LAN, using a proxy DNS server that is running in a router/gateway at the edge of the LAN, or using a third-party proxy DNS server that is out on Internet at large. The resolving proxy DNS server at the far end of the chain does the grunt work of query resolution, querying content DNS servers around the world as needed for data which it stitches together to form the final answer, which is then returned back along the chain of proxy DNS servers, including systemd-resolved at the near end of that chain, to the DNS client library in the applications. In RFC 1034 terms, for contrast, the "resolver" here is in fact a huge black box encompassing the BIND DNS Client library, systemd-resolved , and Google Public DNS, because it is defined by the RFC as having "user programs" on one side and content DNS servers (providing referrals and database information "directly") on the other. People often will mis-use the term, sometimes because they misunderstand the RFC 1034 architecture-neutral concept of a "resolver" to be the same as one single Unix or Linux server program, which it is not. HTTP terminology does not have the huge black box. Further reading Jonathan de Boyne Pollard (2000). "content" and "proxy" DNS servers. Frequently Given Answers. Jonathan de Boyne Pollard (2004). What DNS query resolution is . Frequently Given Answers. Jonathan de Boyne Pollard (2017). What DNS name qualification is . Frequently Given Answers. Jonathan de Boyne Pollard (2003). Whence one obtains proxy DNS service . Frequently Given Answers. Jonathan de Boyne Pollard (2018). " The dnscache , tinydns , and axfrdns services ". nosh Guide . Softwares. https://unix.stackexchange.com/a/449092/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
500,549 | I have a file whose content is similar to the following one. 000.20000 I need to remove all the lines with a single zero. I was thinking to use grep -v "0" , but this removes also the line containing 0.2. I saw I could use the -w option, but this doesn't seem to work either. How can I remove all the lines containing just a single 0 and keep all those lines starting with a 0? | grep -vx 0 From man grep : -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. -w fails because the first 0 in 0.02 is considered a "word", and hence this line is matched. This is because it is followed by a "non-word" character. You can see this if you run the original command without -v , i.e. grep -w "0" . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/500549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
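Two equivalent sketches for the answer above, in case -x is not available or an anchored pattern is preferred:
# Anchors make the regex match the whole line, same effect as -x
grep -v '^0$' file
# Or with awk, comparing the whole line as a string
awk '$0 != "0"' file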
500,572 | I want to create random unique numbers (UUIDs) as the following node.id=ffffffff-ffff-ffff-ffff-ffffffffffff First I tried this $ rndnum=` echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM"-"echo $RANDOM`$ echo $rndnum30380-echo 21875-echo 14791-echo 32193-echo 11503 What is the right way to create the following (where f is any number)? ffffffff-ffff-ffff-ffff-ffffffffffff | On Linux, the util-linux / util-linux-ng package offers a command to generate UUIDs: uuidgen . $ uuidgen5528f550-6559-4d61-9054-efb5a16a4de0 To quote the manual : The uuidgen program creates (and prints) a new universally unique identifier (UUID) using the libuuid (3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future. There are two types of UUIDs which uuidgen can generate: time-based UUIDs and random-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these two UUID types by using the -r or -t options. Addendum: The OP had provided a link in the comments to the documentation for Presto DB . After a bit of searching, I found this related discussion where it is explicitly mentioned that the node.id property is indeed a UUID. Adding the information provided by frostschutz in a comment: As an alternative to the uuidgen / libuuid approach, you can make use of an interface exposed by the Linux kernel itself to generate UUIDs: $ cat /proc/sys/kernel/random/uuid00db2531-365c-415c-86f7-503a35fafa58 The UUID is re-generated on each request. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/500572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
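A short sketch tying the answer above back to the node.id property mentioned in the question, with the /proc interface as a fallback when uuidgen is missing (the node.properties path is an assumption — adjust it to your Presto installation):
# Prefer uuidgen, fall back to the kernel's interface
uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
printf 'node.id=%s\n' "$uuid" >> etc/node.properties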
500,578 | Example on hyper terminal: I press alt + h , the script should check if hyper is already running in the background. If yes, it should open already the opened window. If not, open a new window. OS: Ubuntu 18.04 | On Linux, the util-linux / util-linux-ng package offers a command to generate UUIDs: uuidgen . $ uuidgen5528f550-6559-4d61-9054-efb5a16a4de0 To quote the manual : The uuidgen program creates (and prints) a new universally unique identifier (UUID) using the libuuid (3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future. There are two types of UUIDs which uuidgen can generate: time-based UUIDs and random-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these two UUID types by using the -r or -t options. Addendum: The OP had provided a link in the comments to the documentation for Presto DB . After a bit of searching, I found this related discussion where it is explicitly mentioned that the node.id property is indeed a UUID. Adding the information provided by frostschutz in a comment: As an alternative to the uuidgen / libuuid approach, you can make use of an interface exposed by the Linux kernel itself to generate UUIDs: $ cat /proc/sys/kernel/random/uuid00db2531-365c-415c-86f7-503a35fafa58 The UUID is re-generated on each request. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/500578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336846/"
]
} |
500,626 | I'm trying to experiment with shared objects and found the below snippet on http://www.gambas-it.org/wiki/index.php?title=Creare_una_Libreria_condivisa_(Shared_Library)_.so gcc -g -shared -Wl,-soname,libprimo.so.0 -o libprimo.so.0.0 primo.o -lc I browsed trough the manpages and online, but I didn't find what the -lc switch does, can someone tell me? | The option is shown as " -l_library_ " (no space) or " -l _library_ " (with a space) and c is the library argument, see https://linux.die.net/man/1/gcc -lc will link libc ( -lfoobar would link libfoobar etc.) General information about options and arguments UNIX commands often accept option arguments with or without whitespace. If you have an option o which takes an argument arg you can write -o arg or -oarg . On the other hand you can combine options that don't take an argument, e.g. -a -b -c or -abc . When you see -lc you can only find out from the documentation (man page) if this is the combination of options -l and -c or option -l with argument c or a single option -lc . See also https://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html Note: gcc is an exception from this general concept. You cannot combine options for gcc . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/500626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335248/"
]
} |
500,642 | inetd can make several programs with stdin input and stdout output work like programs with input and output from and to sockets, and monitor their listening sockets simultaneously. Is there a simpler program than inetd which just works for a single program: make a single program with stdin input and stdout output work like a program with input and output from and to sockets? Thanks. | Nmap’s Ncat can do this, with its -c or -e options: nc -l -c bc will listen on the default port (31337) and, when a connection is established, run bc with its standard input and output connected to the socket. nc localhost 31337 will then connect to a “remote” bc and you can then enter bc expressions and see their result. socat can do this too (thanks Hermann ): socat tcp-listen:31337,reuseaddr,fork EXEC:bc | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
500,662 | I have a problem with redirections : $ which python3 gives me /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 and $ ls -l /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 gives me lrwxr-xr-x 1 root admin 9 5 fév 18:30 /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 -> python3.7 but which python3 | ls -l don't gives me the same result. Do you know why ? And what is the right command for redirection ? I'm using OSX. I have to say that the following question pass the output of previous command to next as an argument may be the same as this one, but if I look the answers that were given, there I'm lost. To be useful, they require more advanced knowledge or study than those given here. | ls does not take input from standard in, but only from arguments: Try ls -l "$(which python3)" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335968/"
]
} |
500,725 | I'm trying to pipe the output of a grep search into the input of another grep.Such as: grep search_query * | grep -v but_not_this But the second grep is not using the output of the previous search. It looks like the second grep is just using * instead. For example, grep lcov * tst/bits/Module.mk:21:$(call func_report_lcov)tst/drivers/Module.mk:27:$(call func_report_lcov) But when I want to filter out the results containing "call", grep lcov * | grep -v call... Grep gives me every single line in my workspace that doesn't contain "call". Environment Info: This is happening in both bash and fish I have aliased the grep command like so alias grep='grep -nR --color=always' Anything else I might be missing? | The alias is what is causing it. From man grep , the -R option causes grep to "read all files under each directory, recursively". Hence, the part after the pipe ignores the output from the first grep , and instead grep s through all files recursively from the current directory. You can bypass the alias and use vanilla grep with \grep . Hence the following should give you what you expect. grep lcov * | \grep -v call However, I personally think that putting -R in the alias is confusing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336965/"
]
} |
500,761 | Why is the Nginx webserver called a "reverse proxy"? I know any "proxy" to be a "medium" and this touches a more basic question of "how can a medium be forward or reverse". | A typical "forward" proxy (commonly just called "proxy") is used to allow internal clients to reach out to external sites. For example, a corporation may have desktop users who want to reach the internet, but firewalls block them. The users can configure their browser to reach a proxy server, which will make the connection for them. A "reverse" proxy allows external clients to reach in to internal sites. For example, a corporation may run a dozen different web sites behind a firewall. A reverse proxy would be programmed so that incoming requests for "site1.corporate.example.com" will be forwarded to the real web server for that site. In this way the corporation only needs to expose one real web server. There are many use cases for both forward and reverse proxies. nginx is a web server, similar to apache and IIS . Like many web servers it can be configured to work in forward proxy mode or reverse proxy mode. The phrase "nginx reverse proxy" means the nginx server configured as a reverse proxy. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
500,796 | On recent Linux based operating systems there is no ifconfig or traceroute . Some functionality has been incorporated into the ip utility (see here for examples), but I have not found a replacement for the traceroute command. I know that I can do yum install net-tools or yum install traceroute when I am on CentOS or RHEL, but our servers come preinstalled without that command, and while we are allowed to sudo certain commands, installing additional software is always a problem | Try the "tracepath" command, if it is available. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59934/"
]
} |
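Expanding slightly on the answer above — tracepath ships with the iputils package on most modern distributions, so a small wrapper can fall back to it when traceroute is absent:
# Use traceroute when present, otherwise tracepath (-n skips DNS lookups)
trace() {
    if command -v traceroute >/dev/null 2>&1; then
        traceroute "$@"
    else
        tracepath -n "$@"
    fi
}
trace 8.8.8.8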
500,828 | Let's say we have 2 integers in a bash script: value1=5value2=3 Then why do we need to use double quotes in case of a test ? For example: if [[ "$value1" -eq "$value2" ]] Why not just use the following ? if [[ $value1 -eq $value2 ]] To me, the double quotes don't make any sense. | You don't actually need the quotes here. This is one of the very few cases where it is safe to use a variable unquoted. You can confirm this with set -x : $ var1=""$ var2="3"$ set -x$ if [[ $var1 -eq $var2 ]]; then echo "match!"; else echo "no match!"; fi+ [[ '' -eq 3 ]]+ echo 'no match!'no match!$ if [[ "$var1" -eq "$var2" ]]; then echo "match!"; else echo "no match!"; fi+ [[ '' -eq 3 ]]+ echo 'no match!'no match! As you can see above, the quoted and unquoted versions of the test are resolved to the exact same thing by bash. The same should be true for zsh and, I think, any other shell that supports the [[ ]] operator. Note that this is not the case with the more portable [ ] : $ if [ $var1 -eq $var2 ]; then echo "match!"; else echo "no match!"; fi+ '[' -eq 3 ']'sh: [: -eq: unary operator expected+ echo 'no match!'no match! The [ ] construct, unlike the [[ ]] one, does require quoting. Some useful links to learn more about when and why quoting is required: Why does my shell script choke on whitespace or other special characters? Security implications of forgetting to quote a variable in bash/POSIX shells When is double-quoting necessary? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/500828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337045/"
]
} |
500,887 | I want to automatically partition a block device with sfdisk . This might be an SD card, a hard disk, SATA or NVME device. Initially I thought that sfdisk requires these names and thus I was looking to generate them correctly but apparently one can leave them out anyway. :) Unlike the traditional ATA and SATA devices that have partition names simply appended to the device name (e.g., /dev/sda1 for the first partition of block device sda ) there exists another scheme for block devices that are flash-based and use other drivers. These add a p between the device and partition name (e.g. /dev/mmcblk0p1 for the first partition of mmcblk0 ). Unfortunately I have not found any kernel documentation on these details. Given a block device (e.g., /dev/mmcblk0 ) how do I decide if the respective (yet non-existing) partitions will be named with a p or not (e.g., /dev/mmcblk0p1 or /dev/mmcblk01 )? | If the device name ends with a digit, the kernel adds a 'p' symbol to separate the partition number from the device name.
/dev/sda -> /dev/sda1
/dev/mmcblk0 -> /dev/mmcblk0p1
For details see the disk_name function in the Linux kernel sources (linux/block/partition-generic.c):
if (isdigit(hd->disk_name[strlen(hd->disk_name)-1]))
        snprintf(buf, BDEVNAME_SIZE, "%sp%d", hd->disk_name, partno);
else
        snprintf(buf, BDEVNAME_SIZE, "%s%d", hd->disk_name, partno); | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41065/"
]
} |
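The rule quoted from the kernel in the answer above is easy to mirror in shell; a sketch of a helper that predicts the partition device name before the partition exists:
# Print the name the kernel will give partition $2 of device $1
part_name() {
    case "$1" in
        *[0-9]) printf '%sp%s\n' "$1" "$2" ;;   # name ends in a digit -> insert 'p'
        *)      printf '%s%s\n'  "$1" "$2" ;;
    esac
}
part_name /dev/mmcblk0 1   # -> /dev/mmcblk0p1
part_name /dev/sda 1       # -> /dev/sda1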
500,999 | From How can I extract only the pid column and only the pathname column in the lsof output? awk '{ for (i=9; i<=NF; i++) { if ($i ~ "string" && $1 != "wineserv" && $5 == "REG" && $NF ~ "\.pdf") { $1=$2=$3=$4=$5=$6=$7=$8="" print }}}' The regex "\.pdf" matches /.../pdf.../... in gawk, but not in mawk. I wonder why? Thanks. | I don't think it's about the regex, but about how the double-quoted string is handled. C-style escapes (like \n ) are interpreted in awk strings, and gawk and mawk treat invalid escapes differently: $ mawk 'BEGIN { print "\."; }'\.$ gawk 'BEGIN { print "\."; }'gawk: cmd. line:1: warning: escape sequence `\.' treated as plain `.'. That is, mawk seems to leave the backslash as-is, while gawk removes it (and complains, at least in my version). So, the actual regexes used are different : in gawk the regex is .pdf , which of course matches /pdf , since the dot matches any single character, while in mawk your regex is \.pdf , where the dot is escaped and matched literally. GNU awk's manual explicitly mentions it's not portable to use a backslash before a character with no defined backslash-escape sequence (see the box "Backslash Before Regular Characters"): If you place a backslash in a string constant before something that is not one of the characters previously listed, POSIX awk purposely leaves what happens as undefined. There are two choices: Strip the backslash out This is what BWK awk and gawk both do. For example, "a\qc" is the same as "aqc" . Leave the backslash alone Some other awk implementations do this. In such implementations, typing "a\qc" is the same as typing "a\\qc" . I assume you want the dot to be escaped in the regex, so the safe ways are either $NF ~ "\\.pdf" , or $NF ~ /\.pdf/ (since with the regex literal /.../ , the escapes aren't "double processed"). The POSIX text also notes the double processing of the escapes: If the right-hand operand [of ~ or !~ ] is any expression other than the lexical token ERE, the string value of the expression shall be interpreted as an extended regular expression, including the escape conventions described above. Note that these same escape conventions shall also be applied in determining the value of a string literal (the lexical token STRING), and thus shall be applied a second time when a string literal is used in this context. So, this works in both gawk and mawk: $ ( echo .pdf; echo /pdf ) | awk '{ if ($0 ~ "\\.pdf") print " match: " $0; else print "no match: " $0; }' match: .pdfno match: /pdf as does this: $ ( echo .pdf; echo /pdf ) | awk '{ if ($0 ~ /\.pdf/) print " match: " $0; else print "no match: " $0; }' match: .pdfno match: /pdf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/500999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
501,128 | In my terminal it printed out a seemingly random number 127 . I think it is printing some variable's value and to check my suspicion, I defined a new variable v=4 . Running echo $? again gave me 0 as output. I'm confused as I was expecting 4 to be the answer. | From man bash : $? Expands to the exit status of the most recently executed foreground pipeline. echo $? will return the exit status of the last command. You got 127 , which is the exit status of the last executed command, and it indicates that the command exited with some error (most probably). Commands on successful completion exit with an exit status of 0 (most probably). The last command gave output 0 since the command on the previous line (the assignment v=4 ) finished without an error. If you execute the commands
v=4
echo $v
echo $?
You will get output as:
4 (from echo $v)
0 (from echo $?)
Also try:
true
echo $?
You will get 0 .
false
echo $?
You will get 1 .
The true command does nothing, it just exits with a status code 0 ; and the false command also does nothing, it just exits with a status code indicating failure (i.e. with status code 1 ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/501128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291388/"
]
} |
501,153 | This has been happening for a while now about a month. I thought it would be fix with new updates but it didn't. The file /var/log/Xorg.0.log.old has this last few lines before the crash [574.086] (II) NVIDIA(GPU-0): Deleting GPU-0[574.087] (WW) xf86CloseConsole: KDSETMODE failed: Input/output error[574.087] (WW) xf86CloseConsole: VT_GETMODE failed: Input/output error[574.087] (WW) xf86CloseConsole: VT_ACTIVATE failed: Input/output error[574.088] (II) Server terminated successfully (0). Closing log file. I tried opening up the virtual terminal but won't let me type (It's stuck)So how can I fix it? | From man bash : $? Expands to the exit status of the most recently executed foreground pipeline. echo $? will return the exit status of last command. You got 127 that is the exit status of last executed command exited with some error (most probably). Commands on successful completion exit with an exit status of 0 (most probably). The last command gave output 0 since the echo $v on the line previous finished without an error. If you execute the commands v=4echo $vecho $? You will get output as: 4 (from echo $v)0 (from echo $?) Also try: trueecho $? You will get 0 . falseecho $? You will get 1 . The true command does nothing, it just exits with a status code 0 ; and the false command also does nothing, it just exits with a status code indicating failure (i.e. with status code 1 ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/501153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337350/"
]
} |
501,156 | I just installed the colored-man-pages zsh plugin. It works well, but I have an ugly color output on the bottom message: What is the proper way to personalize the color of the plugins without overwriting everything? It seems the color are set up directly during the plugin activation . Or maybe it's a bug with my system, fixable with an another way? Indeed, it looks weird to have this default unreadable color output. I run under Ubuntu 18.10 and gnome-terminal. | The format of man pages ( groff ) doesn't allow colors explicitly, but utilizes a few text decorations like bold or underlines, which in turn can be re-interpreted by a viewer to show colors. And this is exactly what linked plugin is doing, so I suggest to remove this plugin and instead set the colors directly in .zshrc via LESS_TERMCAP variables (I assume you are using less as you man pager and so does this plugin). Here is the list of variables with description: export LESS_TERMCAP_mb=$'\e[6m' # begin blinkingexport LESS_TERMCAP_md=$'\e[34m' # begin boldexport LESS_TERMCAP_us=$'\e[4;32m' # begin underlineexport LESS_TERMCAP_so=$'\e[1;33;41m' # begin standout-mode - info boxexport LESS_TERMCAP_me=$'\e[m' # end modeexport LESS_TERMCAP_ue=$'\e[m' # end underlineexport LESS_TERMCAP_se=$'\e[m' # end standout-mode The list of color codes can be found with this script: #!/bin/bashecho "PALETTE OF 8 COLORS (bold, high intensity, normal, faint)"for i in {30..37}; do printf "\e[1;${i}m1;%-2s \e[m" "$i"; done; echofor i in {90..97}; do printf "\e[${i}m%+4s \e[m" "$i"; done; echofor i in {30..37}; do printf "\e[${i}m%+4s \e[m" "$i"; done; echofor i in {30..37}; do printf "\e[2;${i}m2;%-2s \e[m" "$i"; done;echo -e "\n\n\nPALETTE OF 256 COLORS (only normal)"j=8for i in {0..255}; do [[ $i = 16 ]] && j=6 [[ $i = 232 ]] && j=8 printf "\e[38;5;${i}m38;5;%-4s\e[m" "${i}" (( i>15 && i<232 )) && printf "\e[52C\e[1;38;5;${i}m1;38;5;%-4s\e[52C\e[m\e[2;38;5;${i}m2;38;5;%-4s\e[m\e[126D" "${i}" "${i}" [[ $(( $(( $i - 15 )) % $j )) = 0 ]] && echo [[ $(( $(( $i - 15 )) % $(( $j * 6 )) )) = 0 ]] && echodoneexit 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173927/"
]
} |
501,260 | When I type: nmcli con show wlan0 One of the settings is: 802-11-wireless.band: bg Where is this setting stored on disk? It isn't in: /etc/sysconfig/network-scripts/ifcfg-wlan0 I've grepped everything in lib, var, etc, and usr and I haven't been able to find it. | NetworkManager supports various plugins, which can define new storage locations for configuration information. The currently enabled plugins can be found in /etc/NetworkManager/NetworkManager.conf : [main]plugins=ifupdown,keyfile The generic default plugin is keyfile , which stores configurations in /etc/NetworkManager/system-connections directory, in files similar to Windows .ini files. Other plugins may be distribution-specific: Fedora and RedHat use ifcfg-rh , which will both read and write /etc/sysconfig/network-scripts/ifcfg-* files. Debian and Ubuntu use ifupdown , which is a read-only plugin: it reads /etc/network/interfaces but does not make any changes to it. Any configuration changes you make through NetworkManager will be saved using the keyfile plugin instead. SuSE apparently used to have ifcfg-suse , but it seems to be deprecated. Other distributions may have their own plugins. Having said that, the 802-11-wireless.band setting probably gets its default value from the WiFi NIC capabilities reported by the driver. It would be saved only if you wanted to explicitly restrict the NIC to only some types of WiFi networking. If your WiFi NIC can only transmit in the 2.4 GHz band, you cannot add a or ac capabilities by just reconfiguring the software or the driver: it would require a new radio module and a new antenna tuned for the 5.0 GHz band. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
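A quick way to check whether the band value is actually stored anywhere, following the answer above (a small sketch; wlan0 is the connection name from the question and the keyfile path is the default location described above):

sudo grep -r 'band=' /etc/NetworkManager/system-connections/
nmcli connection show wlan0 | grep -i 802-11-wireless.band

If the grep finds nothing while nmcli still reports bg, the value is coming from defaults/driver capabilities rather than from a file on disk, as the answer suggests.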
501,266 | I am having trouble understanding what is happening when I try to write to a file descriptor? It appears to be overwriting the original contents? Is this expected behaviour? I have replicated this in the example below: $ echo "The quick brown fox ..." > example.txt $ echo "The quick brown fox ..." >> example.txt$ cat example.txtThe quick brown fox ... The quick brown fox ...$ exec 88<>example.txt$ cat example.txtThe quick brown fox ... The quick brown fox ...$ echo "jumped" >&88 $ cat example.txtjumped ck brown fox ... The quick brown fox ...$ echo "jumped" >&88 $ cat example.txtjumped jumped n fox ... The quick brown fox ... | Because you hadn't done any reads on descriptor 88, the current seek position was "0", and so the write took place at that point. If, instead, you'd read the file before then, then appends happen: bash-4.2$ cat <&88The quick brown fox ...The quick brown fox ...bash-4.2$ echo hello >&88bash-4.2$ cat example.txt The quick brown fox ...The quick brown fox ...hellobash-4.2$ echo more >&88bash-4.2$ cat example.txt The quick brown fox ...The quick brown fox ...hellomore | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/501266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337408/"
]
} |
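If the intent in the question above was to append rather than overwrite, opening the descriptor in append mode sidesteps the seek-position issue entirely — a minimal sketch reusing the file and descriptor number from the question:

$ echo "The quick brown fox ..." > example.txt
$ exec 88>>example.txt      # append mode: every write goes to the end of the file
$ echo "jumped" >&88
$ echo "jumped again" >&88
$ cat example.txt
The quick brown fox ...
jumped
jumped again
$ exec 88>&-                # close the descriptor when done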
501,282 | I would like to set up my desktop computer (which is actually a server for the KVM guests I do my actual work in) to have a redundant root installation. If one drive dies I want to quickly get back to work without doing a full restore from backup, nor a system reinstall and reset all my settings and preferences. I thought that the way to do this would be RAID1, but the deeper I dig into it, the more I realize that RAID1 is not a 'set-it-and-forget-it' solution. Oh, and I want it to be UEFI boot. Last time I tried a software RAID1 install (which I set up using the Ubuntu Server installer), something got corrupted and I ended up with a GRUB rescue screen and could not for the life of me figure out how to get it to boot from the mirror drive. For all I know, the boot sector on both was corrupted due to the corruption replicating between drives. Obviously this defeats the purpose of having a RAID1 boot for the purpose of decreased downtime. I was thinking that maybe I should put the EFI partition on a USB drive and keep it backed up for quick and easy replacement (while having the root partition in RAID1), but I am worried that I might now always know then the EFI partition has changed and therefore will not know when to back it up. I was also thinking to do ZFS-on-root, in the thought that the bitrot protection and snapshotting might be more useful in preventing situations like the one above. But it seems that ZFS on root is not recommended for Ubuntu, and the status of ZFS on Linux in general seems to be in question now due to a certain Linux Kernel programmer's stated lack of tolerance for ZFS. I wonder if this might be a good approach but I know nothing about this whole MAAS thing and have no idea whether it is relevant to my use case. The last thing I was thinking was to just do a regular one-drive install and then every week or so dd it to a spare drive, so that if disaster strikes I can at least recover my settings and installation from a week ago or less. But wouldn't dding an SSD every week be really hard on it? I have found countless tutorials about RAID and ZFS, but so far have not found anything that clearly explains to pros and cons of my options with respect to the goal stated above. Advice or links to explanations would be greatly appreciated! | Because you hadn't done any reads on descriptor 88, the current seek position was "0", and so the write took place at that point. If, instead, you'd read the file before then, then appends happen: bash-4.2$ cat <&88The quick brown fox ...The quick brown fox ...bash-4.2$ echo hello >&88bash-4.2$ cat example.txt The quick brown fox ...The quick brown fox ...hellobash-4.2$ echo more >&88bash-4.2$ cat example.txt The quick brown fox ...The quick brown fox ...hellomore | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/501282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124211/"
]
} |
501,309 | Deployment: VM -- (eth0)RPI(wlan0) -- Router -- ISP ^ ^ ^ ^ DHCP Static DHCP GW NOTE: RPI hostname: gateway • The goal was to make VMs accessible from the outside the network. Accomplished, according to the tutorial https://www.youtube.com/watch?v=IAa4tI4JrgI , via the Port Forwarding on Router and RPI, by installing dhcpcd and configuring iptables on RPI. • Here is my interfaces , where I have commented out the auto wlan0, in attempt to fix the issue (before, it was uncommented, and was still the same thing...) # interfaces(5) file used by ifup(8) and ifdown(8)# Please note that this file is written to be used with dhcpcd# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'# Include files from /etc/network/interfaces.d:source-directory /etc/network/interfaces.d#auto wlan0iface wlan0 inet dhcpwpa-ssid FunBox-84A8wpa-psk 7A73FA25C43563523D7ED99A4D#auto eth0allow-hotplug eth0iface eth0 inet static address 192.168.2.1 netmask 255.255.255.0 network 192.168.2.0 broadcast 192.168.2.255 • Here is the firewall.conf used by the iptables : # Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019*nat:PREROUTING ACCEPT [86:11520]:INPUT ACCEPT [64:8940]:OUTPUT ACCEPT [71:5638]:POSTROUTING ACCEPT [37:4255]-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 170 -j DNAT --to-destination 192.168.2.83:22-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 171 -j DNAT --to-destination 192.168.2.83:443-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 3389 -j DNAT --to-destination 192.168.2.66:3389-A POSTROUTING -o wlan0 -j MASQUERADECOMMIT# Completed on Sun Feb 17 20:01:56 2019# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019*filter:INPUT ACCEPT [3188:209284]:FORWARD ACCEPT [25:2740]:OUTPUT ACCEPT [2306:270630]-A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT-A FORWARD -i eth0 -o wlan0 -j ACCEPTCOMMIT# Completed on Sun Feb 17 20:01:56 2019# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019*mangle:PREROUTING ACCEPT [55445:38248798]:INPUT ACCEPT [3188:209284]:FORWARD ACCEPT [52257:38039514]:OUTPUT ACCEPT [2306:270630]:POSTROUTING ACCEPT [54565:38310208]COMMIT# Completed on Sun Feb 17 20:01:56 2019# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019*raw:PREROUTING ACCEPT [55445:38248798]:OUTPUT ACCEPT [2306:270630]COMMIT# Completed on Sun Feb 17 20:01:56 2019 • iptables -L : pi@gateway:/etc$ sudo iptables -LChain INPUT (policy ACCEPT)target prot opt source destinationChain FORWARD (policy ACCEPT)target prot opt source destinationACCEPT all -- anywhere anywhere state RELATED,ESTABLISHEDACCEPT all -- anywhere anywhereChain OUTPUT (policy ACCEPT)target prot opt source destination • Here is the dhcpcd.conf : # A sample configuration for dhcpcd.# See dhcpcd.conf(5) for details.# Allow users of this group to interact with dhcpcd via the control socket.#controlgroup wheel# Inform the DHCP server of our hostname for DDNS.hostname# Use the hardware address of the interface for the Client ID.clientid# or# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.# Some non-RFC compliant DHCP servers do not reply with this set.# In this case, comment out duid and enable clientid above.#duid# Persist interface configuration when dhcpcd exits.persistent# Rapid commit support.# Safe to enable by default because it requires the equivalent option set# on the server to actually work.option rapid_commit# A list of options to request from the DHCP server.option domain_name_servers, domain_name, domain_search, 
host_nameoption classless_static_routes# Most distributions have NTP support.option ntp_servers# Respect the network MTU. This is applied to DHCP routes.option interface_mtu# A ServerID is required by RFC2131.require dhcp_server_identifier# Generate Stable Private IPv6 Addresses instead of hardware based onesslaac private# Example static IP configuration:#interface eth0#static ip_address=192.168.0.10/24#static ip6_address=fd51:42f8:caae:d92e::ff/64#static routers=192.168.0.1#static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1# It is possible to fall back to a static IP if DHCP fails:# define static profile#profile static_eth0#static ip_address=192.168.1.23/24#static routers=192.168.1.1#static domain_name_servers=192.168.1.1# fallback to static profile on eth0#interface eth0#fallback static_eth0denyinterfaces eth0host Accountant {hardware ethernet 10:60:4b:68:03:21;fixed-address 192.168.2.83;}host Accountant1 {hardware ethernet 00:0c:29:35:95:ed;fixed-address 192.168.2.66;}host Accountant3 {hardware ethernet 30:85:A9:1B:C4:8B;fixed-address 192.168.2.70;} • The error message, that I am not able to figure out: root@gateway:/home/pi# systemctl restart dhcpcdWarning: dhcpcd.service changed on disk. Run 'systemctl daemon-reload' to reload units.Job for dhcpcd.service failed because the control process exited with error code.See "systemctl status dhcpcd.service" and "journalctl -xe" for details.root@gateway:/home/pi# systemctl status dhcpcd● dhcpcd.service - dhcpcd on all interfaces Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/dhcpcd.service.d └─wait.conf Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 6s ago Process: 775 ExecStart=/usr/lib/dhcpcd5/dhcpcd -q -w (code=exited, status=6)Feb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces...Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfacesFeb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use aFeb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static addressFeb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces.Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state.Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'.Warning: dhcpcd.service changed on disk. 
Run 'systemctl daemon-reload' to reload units.root@gateway:/home/pi# root@gateway:/home/pi# systemctl daemon-reloadroot@gateway:/home/pi# systemctl status dhcpcd● dhcpcd.service - dhcpcd on all interfaces Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/dhcpcd.service.d └─wait.conf Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 1min 23s agoFeb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces...Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfacesFeb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use aFeb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static addressFeb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces.Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state.Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'.root@gateway:/home/pi# • gateway version: pi@gateway:/etc$ cat os-releasePRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"NAME="Raspbian GNU/Linux"VERSION_ID="9"VERSION="9 (stretch)"ID=raspbianID_LIKE=debian Questions: 1) What does the error message Not running dhcpcd because /etc/network/interfaces defines some interfaces that will use a DHCP client or static address mean? How to fix it, according to my config above? 2) Why hosts are not getting assigned the IP address according to my dhcpcd.conf, except the host Accountant , which is always getting the same IP, which I want, even if comment out the binding...? How to fix it, in order to be able to bind more than one hosts MAC with IP? 3) What does this notation mean: #auto eth0allow-hotplug eth0iface eth0 inet static address 192.168.2.1 netmask 255.255.255.0 network 192.168.2.0 broadcast 192.168.2.255 What are the notation rules for the interfaces file in Linux? | Question 1.) Sorry, it looks like you've misunderstood a few things. dhcpcd is a DHCP client daemon, which is normally started by NetworkManager or ifupdown , not directly by systemd . It is what will be handling the IP address assignment for your wlan0 . You can use dhcpcd as started by systemd if you wish, however that will require disabling all the normal network interface configuration logic (i.e. /etc/network/interfaces must be empty of non-comment lines) of the distribution and replacing it with your own custom scripting wherever necessary. That is for special uses only; if you're not absolutely certain you should do that, you shouldn't. dhcpcd will never serve IP addresses to any other hosts. This part you added to dhcpcd.conf looks like it would belong to the configuration file of ISC DHCP server daemon, dhcpd (yes it's just one-letter difference) instead: host Accountant {hardware ethernet 10:60:4b:68:03:21;fixed-address 192.168.2.83;}host Accountant1 {hardware ethernet 00:0c:29:35:95:ed;fixed-address 192.168.2.66;}host Accountant3 {hardware ethernet 30:85:A9:1B:C4:8B;fixed-address 192.168.2.70;} But if you are following the YouTube tutorial you mentioned, you might not even have dhcpd installed, since dnsmasq is supposed to do that job. 
As far as I can tell, the equivalent syntax for dnsmasq.conf would be: dhcp-host=10:60:4b:68:03:21,192.168.2.83,Accountantdhcp-host=00:0c:29:35:95:ed,192.168.2.66,Accountant1dhcp-host=30:85:A9:1B:C4:8B,192.168.2.70,Accountant3 Disclaimer: I haven't actually used dnsmasq , so this is based on just quickly Googling its man page. Question 2.) In the tutorial you mentioned, dnsmasq was supposed to act as a DHCP server on eth0 . You did not say anything about it, so I don't know whether it was running or not. If not, the one client that was always getting the same IP might have been simply falling back to a previously-received old DHCP lease that wasn't expired yet. Yes, DHCP clients may store a DHCP lease persistently and keep using it if a network doesn't seem to have a working DHCP server available. Question 3.): /etc/network/interfaces is a classic Debian/Ubuntu style network interface configuration file. Use man interfaces to see documentation for it, or look here. In Debian, *Ubuntu, Raspbian etc., NetworkManager will have a plug-in that will read /etc/network/interfaces but won't write to it. If NetworkManager configuration tools like nmcli , nmtui or GUI-based NetworkManager configuration tools of your desktop environment of choice are used, the configuration would be saved to files in /etc/NetworkManager/system-connections/ directory instead. If NetworkManager is not installed, the /etc/network/interfaces file is used by the ifupdown package, which includes the commands ifup and ifdown . The package also includes a system start-up script that will run ifup -a on boot, enabling all network interfaces that have auto <interface name> in /etc/network/interfaces . There is also an udev rule which will run ifup <interface name> if a driver for a new network interface gets auto-loaded and /etc/network/interfaces has an allow-hotplug <interface name> line for it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337042/"
]
} |
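For completeness, a minimal dnsmasq.conf fragment that would actually serve DHCP on eth0 with the fixed leases shown in the answer — a sketch only; the dhcp-range, lease time and router/DNS options are assumptions to adapt for the 192.168.2.0/24 network from the question:

# /etc/dnsmasq.conf (fragment)
interface=eth0
dhcp-range=192.168.2.100,192.168.2.200,12h
dhcp-option=option:router,192.168.2.1
dhcp-option=option:dns-server,192.168.2.1
dhcp-host=10:60:4b:68:03:21,192.168.2.83,Accountant
dhcp-host=00:0c:29:35:95:ed,192.168.2.66,Accountant1
dhcp-host=30:85:A9:1B:C4:8B,192.168.2.70,Accountant3

After systemctl restart dnsmasq, issued leases can be checked in /var/lib/misc/dnsmasq.leases on Raspbian/Debian.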
501,316 | I receive the error No space left on device from bash upon filename tab completion. As I try to spot what has eaten up my space, as suggested on similar question, I'm confused by df -ah results.My pc has 220GB disk and it seems only 66GB are used. Where is the problem? lot of docker images in var/lib? should I move them somewhere else? Filesystem Size Used Avail Use% Mounted onsysfs 0 0 0 - /sysproc 0 0 0 - /procudev 7,8G 0 7,8G 0% /devdevpts 0 0 0 - /dev/ptstmpfs 1,6G 11M 1,6G 1% /run/dev/nvme0n1p7 34G 32G 0 100% /securityfs 0 0 0 - /sys/kernel/securitytmpfs 7,8G 26M 7,8G 1% /dev/shmtmpfs 5,0M 4,0K 5,0M 1% /run/locktmpfs 7,8G 0 7,8G 0% /sys/fs/cgroupcgroup 0 0 0 - /sys/fs/cgroup/systemdpstore 0 0 0 - /sys/fs/pstoreefivarfs 0 0 0 - /sys/firmware/efi/efivarscgroup 0 0 0 - /sys/fs/cgroup/cpusetcgroup 0 0 0 - /sys/fs/cgroup/net_cls,net_priocgroup 0 0 0 - /sys/fs/cgroup/pidscgroup 0 0 0 - /sys/fs/cgroup/rdmacgroup 0 0 0 - /sys/fs/cgroup/freezercgroup 0 0 0 - /sys/fs/cgroup/devicescgroup 0 0 0 - /sys/fs/cgroup/memorycgroup 0 0 0 - /sys/fs/cgroup/blkiocgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacctcgroup 0 0 0 - /sys/fs/cgroup/perf_eventcgroup 0 0 0 - /sys/fs/cgroup/hugetlbsystemd-1 - - - - /proc/sys/fs/binfmt_miscmqueue 0 0 0 - /dev/mqueuedebugfs 0 0 0 - /sys/kernel/debughugetlbfs 0 0 0 - /dev/hugepagesconfigfs 0 0 0 - /sys/kernel/configfusectl 0 0 0 - /sys/fs/fuse/connections/dev/loop2 54M 54M 0 100% /snap/core18/677/dev/loop3 148M 148M 0 100% /snap/skype/66/dev/loop1 202M 202M 0 100% /snap/hiri/53/dev/loop4 92M 92M 0 100% /snap/core/6259/dev/loop6 43M 43M 0 100% /snap/gtk-common-themes/701/dev/loop7 136M 136M 0 100% /snap/chromium/490/dev/loop5 227M 227M 0 100% /snap/pycharm-community/83/dev/loop8 165M 165M 0 100% /snap/noson/160/dev/loop10 142M 142M 0 100% /snap/skype/51/dev/loop12 139M 139M 0 100% /snap/skype/54/dev/loop13 91M 91M 0 100% /snap/core/6350/dev/loop9 202M 202M 0 100% /snap/hiri/56/dev/loop11 271M 271M 0 100% /snap/pycharm-community/108/dev/loop16 477M 477M 0 100% /snap/libreoffice/100/dev/loop14 179M 179M 0 100% /snap/noson/175/dev/loop20 478M 478M 0 100% /snap/libreoffice/80/dev/loop19 144M 144M 0 100% /snap/chromium/566/dev/loop23 54M 54M 0 100% /snap/core18/594/dev/loop21 35M 35M 0 100% /snap/gtk-common-themes/818/dev/loop25 271M 271M 0 100% /snap/pycharm-community/112/dev/loop26 193M 193M 0 100% /snap/hiri/42/dev/nvme0n1p9 173G 16G 149G 10% /home/dev/nvme0n1p1 496M 64M 433M 13% /boot/efibinfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misctmpfs 1,6G 0 1,6G 0% /run/user/0tmpfs 1,6G 60K 1,6G 1% /run/user/1000gvfsd-fuse 0 0 0 - /run/user/1000/gvfs/dev/loop27 54M 54M 0 100% /snap/core18/719/dev/loop24 91M 91M 0 100% /snap/core/6405/dev/loop0 35M 35M 0 100% /snap/gtk-common-themes/1122/dev/loop22 94M 94M 0 100% /snap/noson/179/dev/loop17 147M 147M 0 100% /snap/chromium/595/dev/loop18 484M 484M 0 100% /snap/libreoffice/104overlay - - - - /var/lib/docker/overlay2/0d3c09bca9a7835f9c9b51114c9c15d08b127dbf7eacc53f7dccaa9f79c9885e/mergedoverlay - - - - /var/lib/docker/overlay2/085ba20c74d4078feda19d9a71ce2b04810ea22ac1e074f451a81a6d60d80c10/mergedoverlay - - - - /var/lib/docker/overlay2/08de193cf78bd7284cd8b2254c727a859039b34bfc1288c579d964f4b2029f45/mergedshm - - - - /var/lib/docker/containers/b32add46586fc18218271486f137687430272cabed1e8090be8d4301d7cb3368/mounts/shmshm - - - - /var/lib/docker/containers/d8c1e0688afd134667083c1fcf467ec2b252fc8e6457ed1635a70642ce536f77/mounts/shmshm - - - - 
/var/lib/docker/containers/80982ee282be06f4dd5cc8e3fbad5ebe215e9257db61f9e1cac87bbc6543f058/mounts/shmnsfs - - - - /run/docker/netns/4b8d6fe23d4cnsfs - - - - /run/docker/netns/43a0f6dd8034nsfs - - - - /run/docker/netns/9e2e1b56b0deoverlay - - - - /var/lib/docker/overlay2/71d932f5ea853341431835774f3ecd4d7ab909020fdef3343048fcdf75401ebc/mergedshm - - - - /var/lib/docker/containers/2aa2003301c5e4e63182376b1bce08e34188835c59794214070b6cc6a576e8e7/mounts/shmnsfs - - - - /run/docker/netns/9eb16a7792abtracefs - - - - /sys/kernel/debug/tracingdu -shc /* | sort -h3,9M /lib3211M /run13M /bin13M /sbin16M /etc161M /root214M /boot761M /lib5,4G /usr16G /home17G /snap28G /var66G total | Well, look at the list. The essential partitions you have are these two: Filesystem Size Used Avail Use% Mounted on/dev/nvme0n1p7 34G 32G 0 100% //dev/nvme0n1p9 173G 16G 149G 10% /home I.e., you have a separate /home of ~170 GB, but everything else (including /var ) is in / , and that's only 34 GB. Repartitioning an installed system is likely to be hard (unless you were using LVM, which you aren't), but you could try and see if you have some large data sets that could be moved to /home , like those images. You can, also, run symlinks from /var/whatever to e.g. /home/var/whatever , (or a similar bind mount) so that the data is still visible in the expected place under /var . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40884/"
]
} |
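If the Docker data turns out to be the bulk of the 32G used on /, one hedged way to relocate it to the large /home partition with a bind mount, as the answer suggests — a sketch; the target path is an arbitrary choice and the Docker daemon must be stopped while copying:

systemctl stop docker
mkdir -p /home/var-lib-docker
rsync -aHAX /var/lib/docker/ /home/var-lib-docker/    # -X keeps the xattrs that overlay2 relies on
mv /var/lib/docker /var/lib/docker.old                # keep until verified
mkdir /var/lib/docker
echo '/home/var-lib-docker /var/lib/docker none bind 0 0' >> /etc/fstab
mount /var/lib/docker
systemctl start docker
# once containers are confirmed working:  rm -rf /var/lib/docker.old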
501,319 | My environment is xubuntu 18.04 64-bit. I use this to set desktop's color: xfconf-query -c xfce4-desktop -p /backdrop/screen0/monitor0/workspace0/color1 -n -t int -t int -t int -t int -s 19018 -s 37008 -s 55769 -s 65535 Then I see the color is black. Where am I wrong? | Well, look at the list. The essential partitions you have are these two: Filesystem Size Used Avail Use% Mounted on/dev/nvme0n1p7 34G 32G 0 100% //dev/nvme0n1p9 173G 16G 149G 10% /home I.e., you have a separate /home of ~170 GB, but everything else (including /var ) is in / , and that's only 34 GB. Repartitioning an installed system is likely to be hard (unless you were using LVM, which you aren't), but you could try and see if you have some large data sets that could be moved to /home , like those images. You can, also, run symlinks from /var/whatever to e.g. /home/var/whatever , (or a similar bind mount) so that the data is still visible in the expected place under /var . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337466/"
]
} |
501,329 | From running help . or help source Execute commands from a file in the current shell. Read and execute commands from FILENAME in the current shell. The entries in $PATH are used to find the directory containing FILENAME. From my point of view, it seems like the dot command (or the source command) is simply running a shell script in the current shell context (instead of spawning another shell). Question : why doesn't . (or source ) requires the file to be executable like when you run a normal script? | Lets say I have a shell script ( my-script.sh )starting with: #!/bin/sh If the script has execute permissions set then I can run the script with: ./my-script.sh In this case you are ultimately asking the kernel to run my-script.sh as a program, and the kernel (program loader) will check permissions first, and then use /bin/sh ./my-script.sh to actually execute your script. But the shell ( /bin/sh ) does not care about execute permissions and doesn't check them. So if you call this ... /bin/sh ./my-script.sh ... The kernel is never asked to run my-script.sh as a program. The kernel (program loader) is only asked to run /bin/sh . So the execute permissions will never me checked. That is, you don't need execute permission to run a script like this. To answer your question: The difference between you calling ./my-script.sh and . ./my-script.sh inside another script is exactly the same. In the first, you are asking the kernel to run it as a program, in the second, you are asking your current shell to read commands from the script and the shell doesn't need (or care about) execute permissions to do this. Further reading: Running scripts as programs is surprising behaviour when you think about it. They are not written in machine code. I would read up on why this works; start with reading up on the shebang ( #! ) https://en.wikipedia.org/wiki/Shebang_(Unix) Running scripts with the dot notation is necessary to share variables. All other mechanisms for running start a new shell "context", meaning that any variables set in the called script will not be passed back to the calling script. Bash documentation is a little lite, but it's here: https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311201/"
]
} |
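A tiny demonstration of both points above — no execute bit is needed for sourcing, and sourced variables stay in the current shell (a sketch; set_var.sh is a hypothetical file):

$ printf 'MY_VAR=hello\n' > set_var.sh
$ chmod -x set_var.sh
$ ./set_var.sh
bash: ./set_var.sh: Permission denied
$ . ./set_var.sh             # read by the current shell: no permission check, no new process
$ echo "$MY_VAR"
hello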
501,341 | I have centos 7.6 & installed squid 4.5 on it. sudo yum -y install squid I followed this link for Basic Authentication . Without authentication squid works fine. Here is squid.conf after adding # Basic Authentication part : ## Recommended minimum configuration:## Example rule allowing access from your local networks.# Adapt to list your (internal) IP networks from where browsing# should be allowedacl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machinesacl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)acl localnet src fc00::/7 # RFC 4193 local private network rangeacl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machinesacl SSL_ports port 443acl Safe_ports port 80 # httpacl Safe_ports port 21 # ftpacl Safe_ports port 443 # httpsacl Safe_ports port 70 # gopheracl Safe_ports port 210 # waisacl Safe_ports port 1025-65535 # unregistered portsacl Safe_ports port 280 # http-mgmtacl Safe_ports port 488 # gss-httpacl Safe_ports port 591 # filemakeracl Safe_ports port 777 # multiling httpacl CONNECT method CONNECT## Recommended minimum Access Permission configuration:## Deny requests to certain unsafe portshttp_access deny !Safe_ports# Deny CONNECT to other than secure SSL ports# http_access deny CONNECT !SSL_ports# Only allow cachemgr access from localhosthttp_access allow localhost managerhttp_access deny manager# We strongly recommend the following be uncommented to protect innocent# web applications running on the proxy server who think the only# one who can access services on "localhost" is a local user#http_access deny to_localhost## INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS## Example rule allowing access from your local networks.# Adapt localnet in the ACL section to list your (internal) IP networks# from where browsing should be allowedhttp_access allow localnethttp_access allow localhost# Basic Authenticationauth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwdauth_param basic children 5auth_param basic realm Squid Basic Authenticationauth_param basic credentialsttl 2 hoursacl auth_users proxy_auth REQUIREDhttp_access allow auth_users# allow all requests acl all src 0.0.0.0/0http_access allow all# And finally deny all other access to this proxyhttp_access deny all# Squid normally listens to port 3128http_port 3128# Uncomment and adjust the following to add a disk cache directory.#cache_dir ufs /var/spool/squid 100 16 256# Leave coredumps in the first cache dircoredump_dir /var/spool/squid## Add any of your own refresh_pattern entries above these.#refresh_pattern ^ftp: 1440 20% 10080refresh_pattern ^gopher: 1440 0% 1440refresh_pattern -i (/cgi-bin/|\?) 0 0% 0refresh_pattern . 0 20% 4320 Please see # Basic Authentication part. The problem is : /usr/lib64/squid/basic_ncsa_auth file not exist. Where is that file? How can i fix this problem? 
- Edit after comment - Here is result for yum info squid Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.softaculous.com * epel: mirror.wiuwiu.de * extras: mirror.alpix.eu * updates: centosmirror.netcup.netInstalled PackagesName : squidArch : x86_64Epoch : 7Version : 4.5Release : 1.el7Size : 10 MRepo : installedFrom repo : squidSummary : The Squid proxy caching serverURL : http://www.squid-cache.orgLicense : GPLv2+ and (LGPLv2+ and MIT and BSD and Public Domain)Description : Squid is a high-performance proxy caching server for Web clients, : supporting FTP, gopher, and HTTP data objects. Unlike traditional : caching software, Squid handles all requests in a single, : non-blocking, I/O-driven process. Squid keeps meta data and especially : hot objects cached in RAM, caches DNS lookups, supports non-blocking : DNS lookups, and implements negative caching of failed requests. : : Squid consists of a main server program squid, a Domain Name System : lookup program (dnsserver), a program for retrieving FTP data : (ftpget), and some management and client tools. | It appears that you aren't using the CentOS packaged squid, but the ones packaged here . It might have helped if you had mentioned that in your question. If you look at the repo, it appears there is a squid-helpers package that includes /usr/lib64/squid/basic_ncsa_auth . EDIT: if it wasn't clear, yum install squid-helpers would solve your problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88327/"
]
} |
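After installing squid-helpers, the /etc/squid/passwd file referenced in squid.conf still has to be created; a minimal sketch (assumes htpasswd from the httpd-tools package and a placeholder user name):

yum -y install httpd-tools                 # provides htpasswd
htpasswd -c /etc/squid/passwd proxyuser    # -c creates the file and prompts for a password
chown squid:squid /etc/squid/passwd
chmod 640 /etc/squid/passwd
/usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd   # sanity check: type "proxyuser <password>", expect "OK"
systemctl restart squid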
501,486 | I have a txt file that contains some numbers like this: 1 2 3 4 5 And I have another txt file that contains the same number of lines, but with other numbers: 6 7 8 9 10 I want to add them together, namely 1+6, 2+7, 3+8, etc. How do I write the script? By the way, I've got a variety of answers so far, and only after I tried them on my files did I realise some of the methods can't deal with decimals. Some of my files contain decimals, and I need to be accurate, so if you would like to add an answer, could you show a method that can calculate decimals accurately? Thanks. | This is a basic task many tools can solve; the paste + awk combo seems exceptionally handy:
$ paste file1 file2 | awk '{$0=$1+$2}1'
7
9
11
13
15 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/501486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/336387/"
]
} |
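Since the question explicitly asks about decimals: awk works in floating point and can round on output, so for exact decimal sums a paste + bc pipeline is a safe alternative (sketch, same files as above):

$ paste -d+ file1 file2 | bc
7
9
11
13
15

bc keeps the full decimal scale of the inputs, so lines like 1.25 and 6.105 add up exactly.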
501,500 | I'd like to do something along the lines of mkfs -t btrfs filedrivemount filedrive /media/fuse Without specifying a particular size, I'd like to be able to have the file grow in size as I write files into the mounted filesystem, and shrink when files are deleted. Is there some mechanism for this? I have also seen this question and I am aware that it could in theory be managed manually, but my question has nothing to do with ecryptfs and focuses on the automatic aspect. I do not particularly care about the filesystem type inside the file. I am also aware of existing systems that do something similar: specifically VirtualBox's Dynamic Disks. If there is some way of using those - or something akin to them - without actually running a virtual machine, I would also be happy with that. | Actually, something like it is already possible A filesystem needs to have a defined maximum size. But as the filesystem-as-file can be sparse , that size can be an arbitrary number which doesn't have much to do with how much space the filesystem-as-file takes up on the underlying filesystem. If you can accept setting an arbitrary maximum size limit (which can be much greater than the actual size of the underlying filesystem) for the filesystem-as-file, you can create a sparse file and a filesystem on it right now: /tmp# df -h .Filesystem Size Used Avail Use% Mounted on<current filesystem> 20G 16G 3.0G 84% //tmp# dd if=/dev/null bs=1 seek=1024000000000 of=testdummy0+0 records in0+0 records out0 bytes copied, 0.000159622 s, 0.0 kB/s/tmp# ll testdummy-rw-r--r-- 1 root root 1024000000000 Feb 19 08:24 testdummy/tmp# ll -h testdummy-rw-r--r-- 1 root root 954G Feb 19 08:24 testdummy Here, I've created a file that appears to be a whole lot bigger than the filesystem it's stored onto... /tmp# du -k testdummy0 testdummy ...but so far it does not actually take any disk space at all (except for the inode and maybe some other metadata). It would be perfectly possible to losetup it, create a filesystem on it and start using it. Each write operation that actually writes data to the file would cause the file's space requirement to grow. In other words, while the file size as reported by ls -l would stay that arbitrary huge number all the time, the actual space taken by the file on the disk as reported by du would grow. And if you mount the filesystem-as-file with a discard mount option, shrinking can work automatically too: /tmp# losetup /dev/loop0 testdummy/tmp# mkfs.ext4 /dev/loop0/tmp# mount -o discard /dev/loop0 /mnt/tmp# du -k testdummy 1063940 testdummy/tmp# df -h /mntFilesystem Size Used Avail Use% Mounted on/dev/loop0 938G 77M 890G 1% /mnt/tmp# cp /boot/initrd.img /mnt/tmp# du -k testdummy 1093732 testdummy/tmp# rm /mnt/initrd.img/tmp# du -k testdummy1063944 testdummy Automatic shrinking requires: 1.) that the filesystem type of the filesystem-as-file supports the discard mount option (so that the filesystem driver can tell the underlying system which blocks can be deallocated) 2.) and that the filesystem type of the underlying filesystem supports "hole punching", i.e. the fallocate(2) system call with the FALLOC_FL_PUNCH_HOLE option (so that the underlying filesystem can be told to mark some of the previously-allocated blocks of the filesystem-as-file as sparse blocks again) 3.) and that you're using kernel version 3.2 or above, so that the loop device support has the necessary infrastructure for this. 
https://outflux.net/blog/archives/2012/02/15/discard-hole-punching-and-trim/ If you're fine with less immediate shrinking, you could just periodically run fstrim on the filesystem-as-file instead of using the discard mount option. If the underlying filesystem is very busy, avoiding immediate shrinking might help minimizing the fragmentation of the underlying filesystem. The problem with this approach is that if the underlying filesystem becomes full, it won't be handled very gracefully. If there is no longer space in the underlying filesystem, the filesystem-as-file will start receiving errors when it's trying to replace sparse "holes" with actual data, even as the filesystem-as-file would appear to have some unused capacity left. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/501500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337616/"
]
} |
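Two small conveniences on top of the recipe above (a sketch; the size and mount point are arbitrary): truncate creates the sparse backing file a little more directly than dd, and a periodic fstrim gives the deferred shrinking mentioned at the end of the answer.

truncate -s 1T testdummy       # sparse file, no blocks allocated yet
losetup /dev/loop0 testdummy
mkfs.ext4 /dev/loop0
mount /dev/loop0 /mnt          # without -o discard
# use the filesystem, delete some files, then:
fstrim -v /mnt                 # run on demand or from cron to punch the holes back
du -k testdummy                # actual usage shrinks after the trim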
501,602 | I have a script which finds files in the directory specified by user. #!/bin/bash# make sure about the correct inputif [ -z $1 ]then echo "Usage: ./script_name.sh path/to/directory"else DIR=$1 if [ $DIR = '.' ] then echo "Find files in the directory $PWD" else echo "Find files in the directory $DIR" fi find $DIR -type f -exec basename {} \; fi if I input $ ./script_name.sh . script gives me correct substitution ./ to $PWD and shows (for example) $ Find files in the directory /root/scripts But I can't make a decision how to substitute ../ to the name of the directory immediately above in the hierarchy. If I input $ ./script_name.sh .. script gives me the output $ Find files in the directory .. Does anybody know how to substitute ../ to the actual name of the directory? | GNU coreutils has the realpath command that does just that. /tmp/a$ realpath ../tmp Though note that if the path contains symlinks, it will also resolve those: /tmp/b/c$ realpath ../tmp/x/y (Here, /tmp/b was a symlink to /tmp/x/y/ ) This may be different from what the shell does with cd .. . E.g. cd ../.. from /tmp/b/c in Bash shows the new path as /tmp/ , not as /tmp/x . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/501602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325226/"
]
} |
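Applied to the script in the question, the substitution can happen once near the top (a minimal sketch using realpath as described above; remember that it also resolves symlinks):

#!/bin/bash
if [ -z "$1" ]; then
    echo "Usage: ./script_name.sh path/to/directory"
    exit 1
fi
DIR=$(realpath "$1")          # ".", ".." and relative paths become absolute
echo "Find files in the directory $DIR"
find "$DIR" -type f -exec basename {} \;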
501,659 | awk '{ for (i = 1; i <= NF; i++) sum[i]+=$i} END{for (i in sum) print sum[i]}' file1 > file2 This helps sum all record-wise but a similar scheme wouldn't help do a column-wise sum (maybe). How to generalize column-wise addition to n columns?
cat file1
23 46 45
45 57 58
56 78 74
cat file2
114
160
208 | You want to compute the sum of the fields for each record, so it's just: awk '{sum = 0; for (i = 1; i <= NF; i++) sum += $i; print sum}' < file1 > file2 The curly braces begin an action statement that is executed on every line of the input; there is no preceding condition that would limit its execution to lines that satisfy such a condition. On each line: Initialize a sum variable to zero. Loop through the fields, starting at field #1 and ending at the last field (the special variable NF ), and increment sum by the value of that field ( $i ). Print the value of the sum variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271902/"
]
} |
501,828 | In a bash script I need various values from /proc/ files. Until now I have dozens of lines grepping the files directly like that: grep -oP '^MemFree: *\K[0-9]+' /proc/meminfo In an effort to make that more efficient I saved the file content in a variable and grepped that: a=$(</proc/meminfo)echo "$a" | grep -oP '^MemFree: *\K[0-9]+' Instead of opening the file multiple times this should just open it once and grep the variable content, which I assumed would be faster – but in fact it is slower: bash 4.4.19 $ time for i in {1..1000};do grep ^MemFree /proc/meminfo;done >/dev/nullreal 0m0.803suser 0m0.619ssys 0m0.232sbash 4.4.19 $ a=$(</proc/meminfo)bash 4.4.19 $ time for i in {1..1000};do echo "$a"|grep ^MemFree; done >/dev/nullreal 0m1.182suser 0m1.425ssys 0m0.506s The same is true for dash and zsh . I suspected the special state of /proc/ files as a reason, but when I copy the content of /proc/meminfo to a regular file and use that the results are the same: bash 4.4.19 $ cat </proc/meminfo >meminfobash 4.4.19 $ time for i in $(seq 1 1000);do grep ^MemFree meminfo; done >/dev/nullreal 0m0.790suser 0m0.608ssys 0m0.227s Using a here string to save the pipe makes it slightly faster, but still not as fast as with the files: bash 4.4.19 $ time for i in $(seq 1 1000);do <<<"$a" grep ^MemFree; done >/dev/nullreal 0m0.977suser 0m0.758ssys 0m0.268s Why is opening a file faster than reading the same content from a variable? | Here, it's not about opening a file versus reading a variable's content but more about forking an extra process or not. grep -oP '^MemFree: *\K[0-9]+' /proc/meminfo forks a process that executes grep that opens /proc/meminfo (a virtual file, in memory, no disk I/O involved) reads it and matches the regexp. The most expensive part in that is forking the process and loading the grep utility and its library dependencies, doing the dynamic linking, open the locale database, dozens of files that are on disk (but likely cached in memory). The part about reading /proc/meminfo is insignificant in comparison, the kernel needs little time to generate the information in there and grep needs little time to read it. If you run strace -c on that, you'll see the one open() and one read() systems calls used to read /proc/meminfo is peanuts compared to everything else grep does to start ( strace -c doesn't count the forking). In: a=$(</proc/meminfo) In most shells that support that $(<...) ksh operator, the shell just opens the file and read its content (and strips the trailing newline characters). bash is different and much less efficient in that it forks a process to do that reading and passes the data to the parent via a pipe. But here, it's done once so it doesn't matter. In: printf '%s\n' "$a" | grep '^MemFree' The shell needs to spawn two processes, which are running concurrently but interact between each other via a pipe. That pipe creation, tearing down, and writing and reading from it has some little cost. The much greater cost is the spawning of an extra process. The scheduling of the processes has some impact as well. You may find that using the zsh <<< operator makes it slightly quicker: grep '^MemFree' <<< "$a" In zsh and bash, that's done by writing the content of $a in a temporary file, that is less expensive than spawning an extra process, but will probably not give you any gain compared to getting the data straight off /proc/meminfo . 
That's still less efficient than your approach that copies /proc/meminfo on disk, as the writing of the temp file is done at each iteration. dash doesn't support here-strings, but its heredocs are implemented with a pipe that doesn't involve spawning an extra process. In: grep '^MemFree' << EOF $a EOF The shell creates a pipe, forks a process. The child executes grep with its stdin as the reading end of the pipe, and the parent writes the content at the other end of the pipe. But that pipe handling and process synchronisation is still likely to be more expensive than just getting the data straight off /proc/meminfo . The content of /proc/meminfo is short and takes not much time to produce. If you want to save some CPU cycles, you want to remove the expensive parts: forking processes and running external commands. Like: IFS= read -rd '' meminfo < /proc/meminfomemfree=${meminfo#*MemFree:}memfree=${memfree%%$'\n'*}memfree=${memfree#"${memfree%%[! ]*}"} Avoid bash though whose pattern matching is very ineficient. With zsh -o extendedglob , you can shorten it to: memfree=${${"$(</proc/meminfo)"##*MemFree: #}%%$'\n'*} Note that ^ is special in many shells (Bourne, fish, rc, es and zsh with the extendedglob option at least), I'd recommend quoting it. Also note that echo can't be used to output arbitrary data (hence my use of printf above). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/501828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246819/"
]
} |
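In the same spirit, a bash-only sketch that reads /proc/meminfo once and pulls out several fields without forking any external command (which fields to keep is up to you):

while IFS=': ' read -r key value _; do
    case $key in
        MemTotal)     memtotal=$value ;;
        MemFree)      memfree=$value ;;
        MemAvailable) memavail=$value ;;
    esac
done < /proc/meminfo
echo "total: $memtotal kB  free: $memfree kB  available: $memavail kB"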
501,862 | I have vim as the default editor on my Mac and every time I run commands on the Mac terminal, it automatically opens "vim". How can I set up "nano" instead and make sure the terminal will open "nano" every time it is needed? | Set the EDITOR and VISUAL environment variables to nano . If you use bash , this is easiest done by editing your ~/.bashrc file and adding the two following lines:
export EDITOR=nano
export VISUAL="$EDITOR"
to the bottom of the file. If the file does not exist, you may create it. Note that macOS users should probably modify the ~/.bash_profile file instead, as the abovementioned file is not used by default when starting a bash shell on this system. If you use some other shell, modify that shell's startup files instead (e.g. ~/.zshrc for zsh ). You should set both variables as some tools use one, and others may use the other. You will need to restart your terminal to have the changes take effect. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/501862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337902/"
]
} |
501,982 | I've been using my Raspberry Pi as a music file server, but I'm not happy with it. The current setup uses a samba server on my RPi with a WD Passport USB drive formatted as vfat . This serves as the library for my Sonos music system: Sonos mounts the drive, and lists all of the music it finds there in a menu for me to choose from. The Sonos interface seems to operate smoothly with the RPi Samba server most of the time. However, it does not work so smoothly with my MacOS. I use my Mac to maintain the music library, and there are two primary niggling issues: user permissions must be changed through the Samba configuration to enable deletions and additions in the music library browsing the Samba music share on my RPi in the Mac's Finder app reveals numerous "omissions & artifacts", whereas browsing a folder with exactly the same content on the NetgearNAS with CIFS is flawless; see the figure below. A friend uses a NetgearNAS as the fileserver for his Sonos system. It works very reliably, and the "artifacts" do not show up in Finder. His NetgearNAS is configured to use CIFS (and only CIFS). I'd like to try CIFS on my RPi, but my research so far has only added to my confusion. Finally, my questions: SMB and CIFS seem to be closely related, but are they "the same thing"? If not, what are the differences? Some sources refer to CIFS as a file system (in the sense that ext4 , FAT32 , etc. are file systems), while others refer to it as a networking protocol. As there is no CIFS extension for mkfs , references that refer to CIFS as a file system would seem to be misleading - or am I missing something? If CIFS is only a networking protocol, is it limited to a specific file system?; i.e. may one use FAT32 or ext4 with CIFS? Does the file system used with CIFS affect its use as a cross-platform server protocol? | vfat is a very limited filesystem, it's completely unsuitable to any networked use and any multiuser environment (it's the ancient MS-DOS filesystem). In short, don't use VFAT for anything but USB thumb drives, for maximum compatibility. Basically all of your problems come from the fact that MacOS tries to store extended attributes, file rights, etc through SMB/CIFS to VFAT that doesn't support any of that, nor does it support very long file names, or UTF-8 file names, or anything of interest to modern users. Just use a real, normal Linux filesystem on your USB drive (ext4, xfs) and all will be fine and dandy. That will certainly solve the problems with missing files, wrong rights and permissions, artifacts, etc. Regarding the other questions: SMB and CIFS are different names for the same thing, the Microsoft Network Filesystem ( Server Message Block protocol ). There is some confusion here, because CIFS actually was the first version of it (SMB version 1.0), and was superseded by newer versions (SMB v. 2.0, 3.0, 3.1, ...). However on Linux, for some reason the first version was called 'smbfs', and the newer ones 'cifs'. Anyway nowadays on both Linux and MacOS this doesn't make any difference, both are interchangeable. SMB/CIFS is a network file system. It has no direct relation at all with a block file system. It's a file system in the sense that it provides a file abstraction and its common I/O modes; however you can use any network file system (NFS, SMB, WebDAV, AFP...) to share data from any block file system (FAT32, ext4, HFS+, xfs, NTFS, ZFS...). 
Different block file systems provide different features (direct IO, POSIX ACLs, Windows ACLs, extended attributes, file streams, "hollow" files, file versioning, metadata versioning, subvolumes, volume snapshots...). Different network file systems provide also different features. The way the features from the network file system maps onto the underlying block file system vary greatly and are an unending source of confusion, pain, and bugs. For instance CIFS, originating from Windows, by default uses Windows ACLs, which unfortunately doesn't map one-on-one to the POSIX ACLs of most Unix/Linux filesystems. Nowadays Samba works around this by using extended attributes to store actual Windows ACLs, however if the underlying block file system doesn't support xattr , you'll have problems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/501982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286615/"
]
} |
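A hedged outline of the reformat suggested above — the device name /dev/sda1, the mount point and the pi user are placeholders; verify the device with lsblk first and copy the music off the drive beforehand, since mkfs erases it:

lsblk -f                                   # identify the USB drive partition
sudo umount /dev/sda1
sudo mkfs.ext4 -L music /dev/sda1
sudo mkdir -p /mnt/music
sudo mount /dev/sda1 /mnt/music
sudo chown -R pi:pi /mnt/music             # so the Samba user can write
# then point the share's "path =" in smb.conf at /mnt/music and copy the library back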
502,016 | In the past, from time to time, while developing Linux software, I have noticed that some man pages which deal with developer documentation are missing on my systems. This happens mainly on Debian systems. For example, yesterday I needed to use the fls() function, but man fls did not lead to anywhere, and despite researching and trying this and that, I still have no idea what I could do to install those missing man pages. I believe that this problem is related to the fact that some functions which are available originally have been implemented on other platforms (for example, fls() seems to come from BSD . However, this finding did not help; there is no special POSIX or BSD developer documentation in Debian (at least, I couldn't find it). Until now, I have worked around the problem by googling for man xxx , which worked (i.e. let me find the respective man page) every time. But this is crude and unsatisfying and makes me dependent on online services, so I think it's time to solve the problem. How to install all of such documentation in recent Debian versions (notably the man pages which are not in the package glibc-doc )? | You should install manpages-dev , which provides manpages for system calls and a number of library functions, and the -dev and (if any) -doc packages for the libraries you’re developing with. For kernel functions you should install linux-manual-4.9 (or whichever version is appropriate); this is where you’ll find man 9 fls . To find manpages in general, install apt-file , update the indexes ( apt update ), then search for the manpage you want: apt-file search -x man./fls\\. (the -x option tells apt-file to interpret the argument as a Perl regex). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/502016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210810/"
]
} |
502,061 | Been getting errors on my scripts when using this operator. I've some online documentation and double equals should supposedly work. Any ideas? Zsh reference guide: http://zsh.sourceforge.net/Doc/Release/Conditional-Expressions.html Script: #!/bin/zshif [ $_user == "root" ]; then echo "root"else echo "not root"fi Running it: $ ./script.sh./script.sh:3: = not found | Simple answer: a == is a logical operator only inside [[ … ]] constructs. $ [[ one == one ]] && echo "yes"yes And it works also in ksh and bash. When used outside a [[ … ]] construct a =cmd becomes a filename expansion operator but only in zsh $ echo ==zsh: = not found That is what happens inside the simpler [ … ] construct you used. But the correct way to test that the user running the shell script is root is to do: if (( EUID == 0 )); then echo "root"else echo "not root"fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/502061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65536/"
]
} |
502,065 | I have to copy files on a machine. And the data is immensely large. Now servers need to serve normally, and there are usually a particular range of busy hours on those.So is there a way to run such commands in a way that if server hits busy hours, it pauses process, and when it gets out of that range, it resumes it? Intended-Result cp src dstif time between 9:00-14:00 pause processAfter 14:00 resume cp command. | You can pause execution of a process by sending it a SIGSTOP signal and then later resume it by sending it a SIGCONT . Assuming your workload is a single process (doesn't fork helpers running in background), you can use something like this: # start copy in background, store pidcp src dst &echo "$!" >/var/run/bigcopy.pid Then when busy time starts, send it a SIGSTOP : # pause execution of bigcopykill -STOP "$(cat /var/run/bigcopy.pid)" Later on, when the server is idle again, resume it. # resume execution of bigcopykill -CONT "$(cat /var/run/bigcopy.pid)" You will need to schedule this for specific times when you want it executed, you can use tools such as cron or systemd timers (or a variety of other similar tools) to get this scheduled. Instead of scheduling based on a time interval, you might choose to monitor the server (perhaps looking at load average, CPU usage or activity from server logs) to make a decision of when to pause/resume the copy. You also need to manage the PID file (if you use one), make sure your copy is actually still running before pausing it, probably you'll want to clean up by removing the PID file once the copy is finished, etc. In other words, you need more around this to make a reliable, but the base idea of using these SIGSTOP and SIGCONT signals to pause/resume execution of a process seems to be what you're looking for. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/502065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117409/"
]
} |
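For the specific 9:00–14:00 busy window in the question, the scheduling part can be two root crontab entries built on the PID file above (a sketch; the test guards against the copy having already finished):

# crontab -e (root)
0 9  * * *  [ -f /var/run/bigcopy.pid ] && kill -STOP "$(cat /var/run/bigcopy.pid)"
0 14 * * *  [ -f /var/run/bigcopy.pid ] && kill -CONT "$(cat /var/run/bigcopy.pid)"

A gentler alternative for copies that must keep running is to throttle instead of pause, e.g. with ionice -c3 or rsync --bwlimit.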