source_id | question | response | metadata
---|---|---|---|
549,070 | I am following Cross-Compiled Linux From Scratch - Embedded. In section 3.3, "Setting Up the Environment", PS1 is declared in .bash_profile. When I log in with su - clfs, both .bash_profile and .bashrc get executed. When I log in with su clfs, only .bashrc gets executed. In both cases, PS1 never updates the terminal. When I put PS1 in .bashrc, it does update the terminal. The CLFS book has put it in .bash_profile:

cat > ~/.bash_profile << "EOF"
exec env -i HOME=${HOME} TERM=${TERM} PS1='\u:\w\$ ' /bin/bash
EOF

What is the suitable location for PS1? | The PS1 shell variable should be set in ~/.bashrc for the bash shell, as that is the initialisation file that is read for interactive shell sessions. Note that this variable is a shell variable, not an environment variable (it does not make sense to let child processes inherit its value, and it's only the current shell that uses it). It therefore does not need to be exported with export. Related: What is the purpose of .bashrc and how does it work? You shouldn't need to start bash from any of the shell's startup files. Starting a particular shell from ~/.profile (or the corresponding file related to your login shell) may possibly be warranted if the system that you're running on does not allow you to change your login shell. Care should be taken not to start the other shell if that is the shell already executing the file, though, or you may end up in an infinite loop of sorts. The exec code that you add to your ~/.bash_profile should never be needed. I suppose it's a way of getting ~/.bashrc to be parsed (it starts an interactive shell, and interactive bash shells read ~/.bashrc). A better way of doing that would be to have one of the files source the other, for example using this in ~/.bash_profile:

if [[ -f $HOME/.bashrc ]]; then
    source "$HOME/.bashrc"
fi

Then set PS1 in ~/.bashrc (there should be no need to touch HOME or TERM). The other thing that the command does is to clean out all other environment variables using env -i. Unless you have very specific reasons to do this, you should not do that from your ordinary shell startup files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/549070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211284/"
]
} |
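A minimal pair of startup files implementing the arrangement described in the answer above might look like this. This is a sketch: the files are written to a throwaway directory (the `demo` variable is just scratch space for the demonstration) so real dotfiles are untouched, and the login shell is pointed at it via HOME.

```shell
# Sketch of the arrangement described above: ~/.bash_profile sources
# ~/.bashrc, and PS1 is set in ~/.bashrc. Written to a throwaway
# directory so real dotfiles are untouched.
demo=$(mktemp -d)

cat > "$demo/.bash_profile" <<'EOF'
# read by login shells; hand everything off to .bashrc
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
EOF

cat > "$demo/.bashrc" <<'EOF'
# read by interactive shells; PS1 is a shell variable, no export needed
PS1='\u:\w\$ '
EOF

# A login shell with HOME pointed at the demo dir picks up PS1 via the chain:
ps1_seen=$(HOME=$demo bash -l -c 'printf %s "$PS1"')
echo "PS1 is: $ps1_seen"
rm -rf "$demo"
```

Note that `bash -l -c` reads the profile files even though it is non-interactive, which is enough to show the sourcing chain working.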
549,149 | I'm investigating a very strange effect on some Beagle Bone Black (BBB) boards. We're seeing occasional jumps of a few months in the system clock which always correlate with systemd-timesyncd updating the system clock. We see 2 to 3 of these a week across a fleet of 2000 devices in diverse locations. We've spent a lot of time checking SNTP but that appears to be behaving normally. We've finally come up with a hardware issue with the on-board real time clock that can cause it to randomly jump 131072 seconds (36 hours) due to electronic noise. This doesn't immediately sit right; the reported time jump is quite specific and much less than we've observed, but deeper reading on the issue suggests jumps may be more random and may even go backwards. My question is: how does Linux use a real time clock to maintain the system clock? I want to know if an error with the real time clock would only present itself in the system clock when a timesync agent (ntpd or systemd-timesyncd) updates. Is there any direct link between the system clock and an RTC, or is it only used by an agent? Note: in the first paragraph I mentioned that we're seeing jumps of a few months in the system clock which always correlate with systemd-timesyncd updating the system clock. By this I mean that the very first syslog message after a time jump is a "Time has been changed" syslog message:

grep 'Time has been changed' /var/log/syslog
Oct 2 23:53:33 hostname systemd[1]: Time has been changed
Nov 21 00:07:05 hostname systemd[1]: Time has been changed
Nov 21 00:05:17 hostname systemd[1]: Time has been changed
Nov 21 00:03:29 hostname systemd[1]: Time has been changed
Nov 21 00:01:43 hostname systemd[1]: Time has been changed
Oct 3 02:07:20 hostname systemd[1]: Time has been changed
Oct 3 06:37:04 hostname systemd[1]: Time has been changed

To the best of my knowledge the only thing that emits these messages is systemd-timesyncd (see source code).
Obviously if anyone else knows of other regular systemd syslog messages matching these I'm open to suggestions. | Thanks very much to sourcejedi for this answer. This really led me to find the right answer. Answer to the question "How does Linux use a real time clock to maintain the system clock?": it does so only once, during boot. It will not query the RTC again until the next reboot. This is configurable, but it will do so by default on most kernel builds. "I want to know if an error with the real time clock would only present itself in the system clock when a timesync agent (ntpd or systemd-timesyncd) updates": unless the system is rebooted, the time in the RTC is unlikely to get into the system clock at all. Some agents like ntpd can be configured to use an RTC as a time source but this is not usually enabled by default. It's inadvisable to enable it unless you know the RTC is a very good time source. "Is there any direct link between the system clock and an RTC?": it appears the time is copied the other way; the RTC is periodically updated with the system time. As per sourcejedi's answer, this is done by the kernel if CONFIG_RTC_SYSTOHC is set. This can be tested: set the RTC

# hwclock --set --date='18:28'

Then check the RTC time every few minutes with:

# hwclock

The result of this will be that the system time will not change at all, and the RTC will eventually revert to the system time. The cause of time jumps on the BBB: as sourcejedi pointed out, the messages were not being triggered by systemd-timesyncd. They were being triggered by connman. The evidence was (should be) a spurious log message in /var/log/syslog:

Oct 3 00:10:37 hostname connmand[1040]: ntp: adjust (jump): -27302612.028018 sec
...
Nov 21 00:07:05 hostname systemd[1]: Time has been changed

Prior to version 1.37, connman is hard coded to promiscuously poll the default gateway for the time. It does not need to be DHCP configured to do this, and if connman's NTP client is enabled (it is by default) then it will do this regardless of any other configuration. In our case some home routers were actually responding to these NTP requests, but the results were highly unreliable. Particularly where the router was rebooted, it continued to hand out the time without actually knowing the correct time. For example, we know that at least one version of the BT Home Hub 5 will, when rebooted, default to 21st November 2018 and give out this date over NTP. Its own NTP client will then correct the problem but there's a window where it hands out 21st November 2018. That is, this issue was ultimately caused by our customers rebooting their router and connman just accepting this time. I'll express my frustration here: it seems the belligerence of some has left this "feature" in connman for far too long. It was reported as a problem as early as 2015. And it's a really well hidden "feature". There are no timeservers configured and no log message to explain what connman is doing, or documentation as to why. If your test rigs have no NTP server on the default gateway you'll never see this in testing. How to fix: we are looking at two options which both appear to work. Remove connman completely; it seems the network works just fine without it, and we've not yet found the reason for it being there in the first place:

apt-get remove connman

Or disable NTP in connman by editing /var/lib/connman to include:

[global]
TimeUpdates=manual | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
549,165 | I have the following code.

read -p "Enter a word: " word
case $word in
    [aeiou]* | [AEIOU]*) echo "The word begins with a vowel." ;;
    [0-9]*) echo "The word begins with a digit." ;;
    *[0-9]) echo "The word ends with a digit." ;;
    [aeiou]* && [AEIOU]* && *[0-9]) echo "The word begins with vowel and ends with a digit." ;;
    ????) echo "You entered a four letter word." ;;
    *) echo "I don't know what you've entered," ;;
esac

When I run this:

Enter a word: apple123
case2.sh: line 10: syntax error near unexpected token `&&'
case2.sh: line 10: ` [aeiou]* && [AEIOU]* && *[0-9])'

It looks like the case statement doesn't support an AND operator, and I also believe the && operator in my case statement above is logically incorrect. I understand that we can use if-else to check if the input starts with a vowel and ends with a digit. But I am curious if case has any built-in AND operator. | You are correct in that the standard definition of case does not allow for an AND operator in the pattern. You're also correct that trying to say "starts with a lower-case vowel AND starts with an upper-case vowel" would not match anything. Note also that you have your patterns and explanations reversed for the begins/ends-with-a-digit tests -- using a pattern of [0-9]* would match words that begin with a digit, not end with a digit. One approach would be to combine your tests into the same pattern, most restrictive first:

case $word in
    ([AaEeIiOoUu]??[0-9]) echo it is four characters long and begins with a vowel and ends with a digit;;
    ([AaEeIiOoUu]*[0-9]) echo it is not four characters long, begins with a vowel and ends with a digit;;
# ...
esac

Another (lengthy!) approach would be to nest your case statements, building up appropriate responses each time. Does it begin with a vowel, yes or no? Now, does it end in a digit, yes or no? This would get unwieldy quickly, and annoying to maintain. Another approach would be to use a sequence of case statements that builds up a string (or array) of applicable statements; you could even add * catch-all patterns to each if you wanted to provide "negative" feedback ("word does not begin with a vowel", etc).

result=""
case $word in
    [AaEeIiOoUu]*) result="The word begins with a vowel." ;;
esac
case $word in
    [0-9]*) result="${result} The word begins with a digit." ;;
esac
case $word in
    *[0-9]) result="${result} The word ends with a digit." ;;
esac
case $word in
    ????) result="${result} You entered four characters." ;;
esac
printf '%s\n' "$result"

For example:

$ ./go.sh
Enter a word: aieee
The word begins with a vowel.
$ ./go.sh
Enter a word: jeff42
 The word ends with a digit.
$ ./go.sh
Enter a word: aiee
The word begins with a vowel. You entered four characters.
$ ./go.sh
Enter a word: 9arm
 The word begins with a digit. You entered four characters.
$ ./go.sh
Enter a word: arm9
The word begins with a vowel. The word ends with a digit. You entered four characters.

Alternatively, bash extended the syntax of the case statement to allow for multiple patterns to be selected, if you end the pattern(s) with ;;& :

shopt -s nocasematch
case $word in
    [aeiou]*) echo "The word begins with a vowel." ;;&
    [0-9]*) echo "The word begins with a digit." ;;&
    *[0-9]) echo "The word ends with a digit." ;;&
    ????) echo "You entered four characters." ;;
esac

Note that I removed the * catch-all pattern, since that would match anything and everything when falling through the patterns this way. Bash also has a shell option called nocasematch, which I set above, that enables case-insensitive matching of the patterns. That helps reduce redundancy -- I removed the | [AEIOU]* part of the pattern. For example:

$ ./go.sh
Enter a word: aieee
The word begins with a vowel.
$ ./go.sh
Enter a word: jeff42
The word ends with a digit.
$ ./go.sh
Enter a word: aiee
The word begins with a vowel.
You entered four characters.
$ ./go.sh
Enter a word: 9arm
The word begins with a digit.
You entered four characters.
$ ./go.sh
Enter a word: arm9
The word begins with a vowel.
The word ends with a digit.
You entered four characters. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/549165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277542/"
]
} |
549,207 | My syslog is chock-full of the following:

Oct 28 23:35:01 myhost CRON[17705]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 28 23:45:01 myhost CRON[18392]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)

and also some

Oct 28 23:59:01 myhost CRON[19251]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 60 2)

Now, obviously, these come from cron jobs, in /etc/cron.d/sysstat:

# Activity reports every 10 minutes everyday
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
# Additional run at 23:59 to rotate the statistics file
59 23 * * * root command -v debian-sa1 > /dev/null && debian-sa1 60 2

Do I need to have this run so frequently? It doesn't seem to do much when I run it manually. Can I/should I just turn off the cron job, or uninstall sysstat? | These commands are, indeed, part of the sysstat package. It's intended for performance monitoring; and specifically, sar is the system activity report: a Unix System V-derived system monitor command used to report on various system loads, including CPU activity, memory/paging, interrupts, device load, network and swap space utilization. sar uses the /proc filesystem for gathering information. So, running this command does not actually do anything which helps your system's health or stability; it's just statistics-gathering. With this in mind, you have three options. Uninstall sysstat, as @wurtel suggests. You indicate you're not even able to see the gathered statistics, so obviously you're not really using this facility. That means you probably don't need such monitoring in the first place. Or move cron output into a separate file from /var/log/messages, e.g. into /var/log/cron. If you're using rsyslog for logging, which you likely are given that it's the default on Devuan, all you need to do is un-comment a line intended for just this purpose in /etc/rsyslog.conf:

#cron.*                         /var/log/cron.log

just remove the initial #; and remove cron from what goes into /var/log/syslog, i.e. replace this:

*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    cron,daemon.none;\
    mail,news.none          -/var/log/messages

with this:

*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    daemon.none;\
    mail,news.none          -/var/log/messages

In case you don't care to see cron job logging if there were no errors, @binarym suggests limiting the logging to error or warning messages. With rsyslog, that means replacing this:

*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    cron,daemon.none;\
    mail,news.none          -/var/log/messages

with this:

*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    daemon.none;\
    mail,news.none          -/var/log/messages
*.=warn;*.=err\
    cron                    -/var/log/messages

in the default /etc/rsyslogd.conf. (Although, frankly, I don't understand why .=err isn't there in the first place.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
549,532 | telnet 8.8.8.8 8888 displays Trying... I was expecting that this would be refused immediately. Background: when we have an NGINX reverse proxy server, it would be great if it detected immediately that the backend is not there. | The TCP stack decides how to respond to a connection, based on a set of rules (which could be at the firewall level). You can REJECT the connection packet (SYN), but you can also DROP it. Dropping it makes sense because of port scanning, for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23987/"
]
} |
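The REJECT vs DROP behaviour described above can be made concrete with firewall rules. The rules below are a hypothetical sketch (the port is taken from the question, the rest is illustrative, and applying them requires root), not something stated in the answer itself:

```shell
# Client gets an immediate "Connection refused" (a TCP RST is sent back):
iptables -A INPUT -p tcp --dport 8888 -j REJECT --reject-with tcp-reset

# Client hangs in telnet's "Trying..." until its own timeout (SYN silently discarded):
iptables -A INPUT -p tcp --dport 8888 -j DROP
```

A closed port on a reachable host normally behaves like REJECT (the kernel answers with RST), which is why a dead backend on a live machine is usually noticed quickly; DROP-style filtering or an unreachable host is what produces long "Trying..." hangs.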
549,694 | I want to catch if a variable is multiline in a case statement in POSIX shell (dash). I tried this:

q='
'
case "$q" in
    *$'\n'*) echo nl;;
    *) echo NO nl;;
esac

It returns nl in zsh but NO nl in dash. Thanks. | The dash shell does not have C-strings ( $'...' ). C-strings are an extension to the POSIX standard. You would have to use a literal newline. This is easier (and looks nicer) if you store the newline in a variable:

#!/bin/dash
nl='
'
for string; do
    case $string in
        *"$nl"*) printf '"%s" contains newline\n' "$string" ;;
        *) printf '"%s" does not contain newline\n' "$string"
    esac
done

For each command line argument given to the script, this detects whether it contains a newline or not. The variable used in the case statement ( $string ) does not need quoting, and the ;; after the last case label is not needed. Testing (from an interactive zsh shell, which is where the dquote> secondary prompt comes from):

$ dash script.sh "hello world" "hello
dquote> world"
"hello world" does not contain newline
"hello
world" contains newline | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/549694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370904/"
]
} |
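As a side sketch (not part of the original answer): instead of embedding a literal newline in the script source, any POSIX shell can also build the newline with printf, which survives reformatting tools that might eat the literal newline:

```shell
# $(...) strips trailing newlines, so append a sentinel character and remove it.
nl=$(printf '\nx'); nl=${nl%x}

string="hello
world"

case $string in
    *"$nl"*) result="contains newline" ;;
    *) result="no newline" ;;
esac
echo "$result"
# → contains newline
```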
549,741 | I have repeatedly encountered a problem with several programs that use open/save file dialogues. Upon initiating these by trying to open or to save a file, the program freezes for about 10 seconds and then crashes. With LibreOffice, for example, I get the following error message when it is started from a terminal:

Error creating proxy: Error calling StartServiceByName for org.gtk.vfs.UDisks2VolumeMonitor: Timeout was reached (g-io-error-quark, 24)
(soffice:1466): GLib-GIO-ERROR **: 19:11:38.289: Settings schema 'org.gtk.Settings.FileChooser' does not contain a key named 'show-type-column'
Fatal exception: Signal 5
Stack: A stack trace follows.

I have read about a similar problem on AskUbuntu.SE, but the solution (multiple versions of /usr/share/glib-2.0/schemas/org.gtk.Settings.FileChooser.gschema.xml) does not apply to me. The file seems to have the appropriate contents (to me). Excerpt about the key mentioned in the error:

<key name='show-type-column' type='b'>
  <default>true</default>
  <summary>Show file types</summary>
  <description>Controls whether the file chooser shows a column with file types.</description>
</key>

How do I fix this problem? | I experienced this bug in the 1.0.1 AppImage of Inkscape. Mike Nealy gives an explanation and workaround in a bug report here. I've copied his workaround below: Simply updating the schema to contain show-type-column isn't enough. Downloading the newer schema file from https://gitlab.gnome.org/GNOME/gtk/-/blob/c925221aa804aec344bdfec148a17d23299b6c59/gtk/org.gtk.Settings.FileChooser.gschema.xml and installing it in /usr/share/glib-2.0/schemas/org.gtk.Settings.FileChooser.gschema.xml, running "glib-compile-schemas ." in that directory and using Alt-F2 r to restart gnome-shell seems to resolve the issue with Inkscape 1.0.1. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252428/"
]
} |
549,875 | I have several user input statements like:

read -r -p "Do u want to include this step (y) or not (n) (y/N)?" answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
    ...
fi

I am looking for a way to automatically answer yes to all these questions. Imagine a non-interactive session where the user invokes the script with a --yes option. No further stdin input. The only way I can think of right now is adding another condition to each if statement. Any thoughts? | If you use read only for these questions, and the variable is always called answer, replace read with a function:

# parse options, set "$yes" to y if --yes is supplied
if [[ $yes = y ]]
then
    read () { answer=y; }
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213080/"
]
} |
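Building on the read-override idea above, a fuller sketch of the --yes pattern might look like the script below. The helper names (confirm, yes_to_all) are made up for illustration, and the demo forces --yes via `set --` so it runs without any stdin at all:

```shell
#!/bin/sh
# Demo: pretend the script was invoked as ./script --yes
set -- --yes

yes_to_all=n
for arg in "$@"; do
    case $arg in
        --yes) yes_to_all=y ;;
    esac
done

# Wrap the prompt so every call honours the flag.
confirm() {
    if [ "$yes_to_all" = y ]; then
        answer=y
    else
        printf '%s (y/N) ' "$1"
        read -r answer
    fi
}

confirm "Do you want to include this step?"
if [ "$answer" = y ]; then
    echo "step included"
fi
```

Run with --yes, this prints "step included" without ever touching stdin; without the flag, confirm falls back to the interactive prompt.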
549,933 | I have a file in the following format:

----------------------------------------
  Name: cust foo
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170721085748Z
----------------------------------------
  Name: cust xyz
  mail: [email protected]
  Account Lock: TRUE
  Last Password Change: 20181210131249Z
----------------------------------------
  Name: cust bar
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170412190854Z
----------------------------------------
  Name: cust abc
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20191030080405Z
----------------------------------------

I want to change the Last Password Change date format to YYYY-MM-DD but am not sure how to do that with sed or awk, or whether there is another method. I could loop over it and use the date -d option, but I'm not sure if there is an easier way to do it with a regex. | Since all you're doing is adding two dashes and dropping some extra characters, there's not much need for date.

$ sed -Ee 's/(Last Password Change: )(....)(..)(..).*Z/\1\2-\3-\4/' < foo.txt
...
  Name: cust foo
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 2017-07-21
...

For a stricter pattern, the dots (that match any character) could be replaced with [0-9] to only match digits. \1 etc. in the replacement of course expand to whatever the patterns in parentheses matched. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29656/"
]
} |
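As a quick sanity check of the sed command from the answer above on a single sample line (it needs -E for extended regular expressions, which both GNU and BSD sed accept):

```shell
# Run the answer's substitution against one sample record line.
echo '  Last Password Change: 20170721085748Z' |
    sed -E 's/(Last Password Change: )(....)(..)(..).*Z/\1\2-\3-\4/'
# →   Last Password Change: 2017-07-21
```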
549,954 | I want to write a script that echoes its $@ to stdout verbatim, including any potential double quotes, without a newline. What I have tried:

> cat ./script1
#! /usr/bin/bash
printf "%s %s" $0 $@
echo -n $0 $@
echo -n "$0 $@"
>
> ./script1 a "bc d"
./script1 abc d./script1 a bc d./script1 a bc d
>

I would like my script to run as follows:

> ./script1 a "bc d"
./script1 a "bc d"
>

Is there a way to do this? | Your positional parameters do not contain any double quotes in your example:

./script1 a "bc d"

The double quotes there are simply to tell the shell that no word splitting should be performed on the space between bc and d. If you want your second parameter to contain literal quotes you need to escape them to tell the shell they are literals:

./script1 a '"bc d"'

or

./script1 a \"bc\ d\"

You can use printf to print them in a format that can be reused as shell input (escaped):

$ set -- a "bc d"
$ printf '%q\n' "$@"
a
bc\ d

As you can see, this only escapes the space though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/549954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
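If the goal is specifically the quoted output shown in the question (quotes around multi-word arguments), a small helper can reconstruct that. This is a sketch: `requote` is a made-up name, and the quoting style is deliberately simplistic; it wraps arguments containing spaces in double quotes but does not escape embedded quotes or other special characters:

```shell
# Print arguments back, double-quoting any that contain a space.
requote() {
    out=""
    for a in "$@"; do
        case $a in
            *" "*) out="$out \"$a\"" ;;
            *) out="$out $a" ;;
        esac
    done
    printf '%s' "${out# }"   # strip the leading separator; no trailing newline
}

requote ./script1 a "bc d"
# → ./script1 a "bc d"
```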
550,031 | I run dd to create a bootable Ubuntu stick but it won't make it bootable. Instead it returns instantly without creating anything as far as I can see. When I point it at the partition, sda1, it writes data to it but the USB won't boot the system. Also, sudo fdisk -l does not list the USB but lsblk does. How do I make a bootable USB with dd?

[I] ➜ uname --all
Linux artpc 5.3.7-arch1-1-ARCH #1 SMP PREEMPT Fri Oct 18 00:17:03 UTC 2019 x86_64 GNU/Linux

~ [I] ➜ lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    1  14.7G  0 disk
└─sda1          8:1    1  14.7G  0 part
nvme0n1       259:0    0   477G  0 disk
├─nvme0n1p1   259:1    0   680M  0 part  /boot
├─nvme0n1p2   259:2    0 475.3G  0 part
│ └─cryptroot 254:0    0 475.3G  0 crypt /
└─nvme0n1p4   259:3    0   990M  0 part

~ [I] ➜ sudo fdisk -l
[sudo] password for art:
Disk /dev/nvme0n1: 476.96 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: KXG60ZNV512G NVMe TOSHIBA 512GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 246817B2-7F93-4723-8F53-B499C07511A3

Device              Start        End   Sectors   Size Type
/dev/nvme0n1p1       2048    1394687   1392640   680M EFI System
/dev/nvme0n1p2    1394688  998158335 996763648 475.3G Linux filesystem
/dev/nvme0n1p4  998158336 1000185855   2027520   990M Windows recovery environment

Disk /dev/mapper/cryptroot: 475.29 GiB, 510326210560 bytes, 996730880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

~ took 5s ~ [N] ➜ sudo dd if=/home/art/Downloads/TriblerDownloads/ubuntu-19.10-desktop-amd64.iso of=/dev/sda bs=4M status=progress
587+1 records in
587+1 records out
2463842304 bytes (2.5 GB, 2.3 GiB) copied, 0.728635 s, 3.4 GB/s

~ [I] ➜ pgrep dd -l
# No dd here.

Update, dmesg:

[167395.353737] usb 2-1: new SuperSpeed Gen 1 USB device number 8 using xhci_hcd
[167395.376079] usb 2-1: New USB device found, idVendor=8564, idProduct=1000, bcdDevice=11.00
[167395.376084] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[167395.376088] usb 2-1: Product: Mass Storage Device
[167395.376091] usb 2-1: Manufacturer: JetFlash
[167395.376094] usb 2-1: SerialNumber: 25KD7JEKLN6J409K
[167395.379692] usb-storage 2-1:1.0: USB Mass Storage device detected
[167395.380037] scsi host3: usb-storage 2-1:1.0
[167396.745065] scsi 3:0:0:0: Direct-Access     JetFlash Transcend 16GB   1100 PQ: 0 ANSI: 6
[167396.746488] sd 3:0:0:0: [sda] 30851072 512-byte logical blocks: (15.8 GB/14.7 GiB)
[167396.747105] sd 3:0:0:0: [sda] Write Protect is off
[167396.747111] sd 3:0:0:0: [sda] Mode Sense: 43 00 00 00
[167396.747634] sd 3:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[167396.751767] sda: sda1
[167396.754816] sd 3:0:0:0: [sda] Attached SCSI removable disk

USB type: USB 3.1 Gen 1 port. It is a Dell Latitude 5401. Tried two USB flash drives; neither works. Update 2:

ls -l /dev/sda*
-rw-r--r-- 1 root root 2463842304 Nov 2 16:48 /dev/sda
brw-rw---- 1 root disk 8, 1 Nov 2 17:03 /dev/sda1 | You've got a file as /dev/sda, not a device, so when you write to /dev/sda you're overwriting the file. With your NVMe disk this explains why the writing speed is so high. Remove the file /dev/sda, unplug and replug the USB stick. Check that /dev/sda is now a block device (the first character from ls -l is b) rather than a file (first character -), like this:

brw-rw---- 1 root disk 8, 0 Nov 2 17:03 /dev/sda
brw-rw---- 1 root disk 8, 1 Nov 2 17:03 /dev/sda1

How did this happen? It's possible you first tried to write to the device before it had been plugged in, so the device node hadn't yet been created. Thereafter the presence of the file prevented the device from being created. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366609/"
]
} |
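The sanity check described in the 550,031 answer above (is the target really a block device, or a stray file?) can be scripted as a guard before running dd. This is a hedged sketch of my own, not part of the original answer; the function name is invented:

```shell
# Refuse to write unless the target is a real block device.
# Pass your intended dd target (e.g. the USB stick's device node).
check_dd_target() {
    if [ -b "$1" ]; then
        echo "$1 is a block device - OK to write"
    else
        echo "$1 is NOT a block device - refusing to write" >&2
        return 1
    fi
}

# A plain regular file (like the stray /dev/sda in the question) is rejected
tmpfile=$(mktemp)
check_dd_target "$tmpfile" || echo "guard worked"
rm -f "$tmpfile"
```

Running `check_dd_target /dev/sdX && dd if=image.iso of=/dev/sdX ...` would then have caught the problem before any data was written.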
550,074 | I’ve just moved from Stretch to Buster with a Cinnamon desktop. This more or less does everything I need except that I also load stable firefox (using snap), AirVPN and Tor from the buster-backport repository. Normally there aren’t any problems except this time the Tor screen (after using the launcher) is black and unresponsive. I’ve done a clean install and then just installed Tor but the problem remains. Can anybody help? Thank you. | Answer form Debian bugs tracker : bug #942901 edit your /etc/apparmor.d/local/torbrowser.Browser.firefox and add the following line: owner /{dev,run}/shm/org.mozilla.*.* rw, Also add exactly the same line to /etc/apparmor.d/local/torbrowser.Browser.plugin-container . Then: sudo systemctl restart apparmor The Debian bug tracker states: Message #5 Tor Browser 9.0 shows only black screens because the default apparmor profile does not allow write access to /dev/shm/org.mozilla.ipc. . like it does for /dev/shm/org.chromium.* and I was able to fix this issue by adding this workaround: ==> /etc/apparmor.d/local/torbrowser.Browser.firefox <== owner /{dev,run}/shm/org.mozilla.*.* rw, | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380237/"
]
} |
550,075 | I'm trying to update a software sdman in my macOs But I get bash outadted version error. rajkumar.natarajan$ sdk update An outdated version of bash was detected on your system! We recommend upgrading to bash 4.x, you have: 3.2.57(1)-release Need to use brute force to replace candidates... But bash version is already latest. rajkumar.natarajan$ which bash /usr/local/bin/bash rajkumar.natarajan$ bash --version GNU bash, version 5.0.11(1)-release (x86_64-apple-darwin16.7.0)Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. | sdman is looking at /bin/bash which is version 3.2.57 because macos cannot ship with anything newer than it. The reason that Apple includes such an old version of Bash in its operating system has to do with licensing. Since version 4.0 (successor of 3.2), Bash uses the GNU General Public License v3 (GPLv3), which Apple does not (want to) support. You can find some discussions about this here and here. Version 3.2 of GNU Bash is the last version with GPLv2, which Apple accepts, and so it sticks with it. source You have installed a modern version of bash but you haven't replaced /bin/bash with it. I don't really recommend doing that as it could potentially break some legacy scripts/programs (unlikely but possible). My recommendation is to ignore that warning. It's warning you because bash v3.2 normally would suffer from the shellshock vulnerability, however Apple has patched this in their version of Bash v3.2. If you are unable to ignore that warning and really want to risk updating your /bin/bash I would move it to a backup location and symlink your new bash to it. 
sudo mv /bin/bash /bin/bash.bak
sudo ln -s /usr/local/bin/bash /bin/bash
But in order to do this you will need to bypass SIP. To enable or disable System Integrity Protection, you must boot to Recovery OS and run the csrutil(1) command from the Terminal. Boot to Recovery OS by restarting your machine and holding down the Command and R keys at startup. Launch Terminal from the Utilities menu. Enter the following command: $ csrutil disable After enabling or disabling System Integrity Protection on a machine, a reboot is required. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253851/"
]
} |
550,092 | I am writing a bash script to copy/move a folder called "folder" to a directory that already contains "folder" and I would like the contents to be merged. I am attempting to use a solution from this question: Merging folders with mv? cp -rl source/folder destinationrm -r source/folder If i type the first line in the terminal, source "folder" and destination "folder" are merged as expected. However, When i run the script with the line in it, instead of merging the folders the destination now contains two folders; "folder" and "blank", where "blank" has the contents of the source "folder" in it. | sdman is looking at /bin/bash which is version 3.2.57 because macos cannot ship with anything newer than it. The reason that Apple includes such an old version of Bash in its operating system has to do with licensing. Since version 4.0 (successor of 3.2), Bash uses the GNU General Public License v3 (GPLv3), which Apple does not (want to) support. You can find some discussions about this here and here. Version 3.2 of GNU Bash is the last version with GPLv2, which Apple accepts, and so it sticks with it. source You have installed a modern version of bash but you haven't replaced /bin/bash with it. I don't really recommend doing that as it could potentially break some legacy scripts/programs (unlikely but possible). My recommendation is to ignore that warning. It's warning you because bash v3.2 normally would suffer from the shellshock vulnerability, however Apple has patched this in their version of Bash v3.2. If you are unable to ignore that warning and really want to risk updating your /bin/bash I would move it to a backup location and symlink your new bash to it. sudo mv /bin/bash /bin/bash.baksudo ln -s /usr/local/bin/bash /bin/bash But in order to do this you will need to bypass SIP To enable or disable System Integrity Protection, you must boot to Recovery OS and run the csrutil(1) command from the Terminal. 
Boot to Recovery OS by restarting your machine and holding down the Command and R keys at startup. Launch Terminal from the Utilities menu. Enter the following command: $ csrutil disable After enabling or disabling System Integrity Protection on a machine, a reboot is required. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380252/"
]
} |
550,122 | I just installed CentOS GNU/Linux (version 8 build 1905) on a machine; this wasn't my choice of distro - I'm a Debian man myself. Anyway, when I SSH into this machine (as a non-root user), it tells me: Activate the web console with: systemctl enable --now cockpit.socket What will this web console have? On which port will it listen, and for whom? Can non-root users simply activate it when they want to? I'm somewhat baffled by this, as I'm not used to CentOS. | See the fine manual at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/getting-started-with-the-rhel-8-web-console_system-management-using-the-rhel-8-web-console What will this web console have? The RHEL web console enables you a wide range of administration tasks, including: Managing servicesManaging user accountsManaging and monitoring system servicesConfiguring network interfaces and firewallReviewing system logsManaging virtual machinesCreating diagnostic reportsSetting kernel dump configurationConfiguring SELinuxUpdating softwareManaging system subscriptions On which port will it listen, and for whom? Port 9090. For all users. Can non-root users simply activate it when they want to? Root privs needed to activate it. Such privs not needed to log onto it, once it's activated. So, give it a whirl, I'd suggest, see if it's useful for you. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
550,154 | How can I get Mariadb running again? My WordPress site is returning: Error establishing a database connection It seems that my database is down. (I updated Ubuntu 19.04 to 19.10 and have had multiple problems.) I went ahead and checked for MySQL installed packages, forgetting that I was actually using Mariadb. So I installed MySQL; discovered I should not have done that; removed MySQL; and followed these instructions to install Mariadb 10.4 . sudo apt install mariadb-server is returning: Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: mariadb-server : Depends: mariadb-server-10.4 (>= 1:10.4.8+maria~disco) but it is not going to be installed So I removed Mariadb by doing ( source ) apt-get remove --purge mysql*apt-get remove --purge mysqlapt-get remove --purge mariadbapt-get remove --purge mariadb*apt-get --purge remove mariadb-serverapt-get --purge remove python-software-properties And tried to reinstalling it, again, using the same guide mentioned above: sudo apt-get install software-properties-commonsudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8 this last command apt-key ... 
returns Executing: /tmp/apt-key-gpghome.XsKKHEPfCn/gpg.1.sh --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8 gpg: key F1656F24C74CD1D8: 6 signatures not checked due to missing keys gpg: key F1656F24C74CD1D8: "MariaDB Signing Key " not changed gpg: Total number processed: 1 gpg: unchanged: 1 sudo apt updatesudo apt install mariadb-server# MariaDB 10.4 repository list - created 2019-11-03 16:26 UTC# http://downloads.mariadb.org/mariadb/repositories/deb [arch=amd64] http://mariadb.mirror.pcextreme.nl/repo/10.4/ubuntu disco maindeb-src http://mariadb.mirror.pcextreme.nl/repo/10.4/ubuntu disco main sudo systemctl start mariadb returns Failed to start mariadb.service: Unit mariadb.service not found. dpkg -l | grep -e mysql -e mariadb returns rc auto mysql backup 2.6+debian.4-1 all daily, weekly and monthly backup for your MySQL databaseii dbconfig- mysql 2.0.11ubuntu2 all dbconfig-common MySQL/MariaDB supportii default- mysql -client 1.0.5ubuntu2 all MySQL database client binaries (metapackage)ii libdbd- mysql -perl:amd64 4.050-2build1 amd64 Perl5 database interface to the MariaDB/MySQL databaserc lib mysql client18:amd64 5.6.30-0ubuntu0.15.10.1 amd64 MySQL database client libraryii lib mysql client21:amd64 8.0.17-0ubuntu2 amd64 MySQL database client libraryii mysql -client 8.0.17-0ubuntu2 all MySQL database client (metapackage depending on the latest version)ii mysql -client-8.0 8.0.17-0ubuntu2 amd64 MySQL database client binariesii mysql -client-core-8.0 8.0.17-0ubuntu2 amd64 MySQL database core client binariesii mysql -common 1:10.4.8+maria~disco all MariaDB database common files (e.g. 
/etc/ mysql /my.cnf)ii mysql -server-8.0 8.0.17-0ubuntu2 amd64 MySQL database server binaries and system database setupii mysql -server-core-8.0 8.0.17-0ubuntu2 amd64 MySQL database server binariesrc mysql -utilities 1.6.4-1 all collection of scripts for managing MySQL serversrc php7.0- mysql 7.0.24-1+ubuntu17.04.1+deb.sury.org+1 amd64 MySQL module for PHPii php7.1- mysql 7.1.16-1+ubuntu17.10.1+deb.sury.org+1 amd64 MySQL module for PHPii postfix- mysql 3.4.5-1ubuntu1 amd64 MySQL map support for Postfixii roundcube- mysql 1.3.8+dfsg.1-2 all metapackage providing MySQL dependencies for RoundCube Update 1 It looks like I have two sets of databases. I found one in /var/lib/mysql-10.2 and another in /var/lib/mysql . /var/lib/mysql-upgrade is empty. Update 2 I followed these steps and stopped where it states But it may not help, as your system is seriously broken, ... here are some outputs: sudo apt-get update : Hit:1 http://archive.canonical.com/ubuntu eoan InReleaseGet:2 http://security.ubuntu.com/ubuntu eoan-security InRelease [92.9 kB]Hit:3 http://ppa.launchpad.net/webupd8team/y-ppa-manager/ubuntu eoan InRelease Hit:4 http://archive.ubuntu.com/ubuntu eoan InRelease Get:5 http://archive.ubuntu.com/ubuntu eoan-updates InRelease [88.4 kB] Fetched 181 kB in 1s (135 kB/s) Reading package lists... Done sudo apt-get install -f : Reading package lists... DoneBuilding dependency tree Reading state information... Done0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. sudo apt-get dist-upgrade : Reading package lists... DoneBuilding dependency tree Reading state information... DoneCalculating upgrade... Done0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. apt-cache policy python3 : python3: Installed: 3.7.5-1 Candidate: 3.7.5-1 Version table: *** 3.7.5-1 500 500 http://archive.ubuntu.com/ubuntu eoan/main amd64 Packages 100 /var/lib/dpkg/status sudo apt install mariadb-server : Reading package lists... 
DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: mariadb-server : Depends: mariadb-server-10.3 (>= 1:10.3.17-1) but it is not going to be installedE: Unable to correct problems, you have held broken packages. Update 3 I noticed /var/run has no mysql folder ... I am assuming mysqldb is not running. So I tried this: sudo /etc/init.d/mysql start [....] Starting mysql (via systemctl): mysql.serviceJob for mysql.service failed because the control process exited with error code.See "systemctl status mysql.service" and "journalctl -xe" for details. failed! sudo systemctl status mysql.service ● mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; disabled; vendor preset: enabled) Active: failed (Result: exit-code) since Sun 2019-11-03 18:07:45 PST; 44s ago Process: 6033 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=1/FAILURE) Nov 03 18:07:45 courtens.org systemd[1]: mysql.service: Service RestartSec=100ms expired, scheduling restart.Nov 03 18:07:45 courtens.org systemd[1]: mysql.service: Scheduled restart job, restart counter is at 5.Nov 03 18:07:45 courtens.org systemd[1]: Stopped MySQL Community Server.Nov 03 18:07:45 courtens.org systemd[1]: mysql.service: Start request repeated too quickly. Nov 03 18:07:45 courtens.org systemd[1]: mysql.service: Failed with result 'exit-code'. Nov 03 18:07:45 courtens.org systemd[1]: Failed to start MySQL Community Server. Update 4 I found this and followed the steps, and it finally worked. Or it looked like it would, but I never got the server back. 
From the page: Building MariaDB on Ubuntu $ sudo apt-get install software-properties-common \ devscripts \ equivs Installing Build Dependencies $ sudo apt-key adv --recv-keys \ --keyserver hkp://keyserver.ubuntu.com:80 \ 0xF1656F24C74CD1D8$ sudo add-apt-repository --update --yes --enable-source \ 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.3/ubuntu '$(lsb_release -sc)' main'$ sudo apt-get build-dep mariadb-10.3 Building MariaDB ... | I found this post and used aptitude in place of apt-get or apt , and it finally looks like there is some hope... Instead of using sudo apt install mariadb-server I used sudo aptitude install mariadb-server and now it started to fix itself The following NEW packages will be installed: galera-3{a} libconfig-inifiles-perl{a} mariadb-client-10.3{ab} mariadb-client-core-10.3{ab} mariadb-common{a} mariadb-server mariadb-server-10.3{ab} mariadb-server-core-10.3{ab} socat{a} 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded.Need to get 18.5 MB of archives. 
After unpacking 161 MB will be used.The following packages have unmet dependencies: mysql-client-8.0 : Conflicts: mariadb-client-10.3 but 1:10.3.17-1 is to be installed Conflicts: virtual-mysql-client which is a virtual package, provided by: - mariadb-client-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-client-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mysql-server-8.0 : Conflicts: mariadb-server-10.3 but 1:10.3.17-1 is to be installed Conflicts: virtual-mysql-server which is a virtual package, provided by: - percona-xtradb-cluster-server-5.7 (5.7.20-29.24-0ubuntu3), but it is not going to be installed - mariadb-server-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-server-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mariadb-server-core-10.3 : Conflicts: virtual-mysql-server-core which is a virtual package, provided by: - percona-xtradb-cluster-server-5.7 (5.7.20-29.24-0ubuntu3), but it is not going to be installed - mariadb-server-core-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-server-core-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mariadb-server-10.3 : Conflicts: virtual-mysql-server which is a virtual package, provided by: - percona-xtradb-cluster-server-5.7 (5.7.20-29.24-0ubuntu3), but it is not going to be installed - mariadb-server-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-server-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mysql-client-core-8.0 : Conflicts: mariadb-client-10.3 but 1:10.3.17-1 is to be installed Conflicts: mariadb-client-core-10.3 but 1:10.3.17-1 is to be installed Conflicts: virtual-mysql-client-core which is a virtual package, provided by: - mariadb-client-core-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-client-core-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mariadb-client-10.3 : Conflicts: virtual-mysql-client which is a virtual package, provided by: - mariadb-client-10.3 (1:10.3.17-1), 
but 1:10.3.17-1 is to be installed - mysql-client-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mariadb-client-core-10.3 : Conflicts: virtual-mysql-client-core which is a virtual package, provided by: - mariadb-client-core-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-client-core-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installed mysql-server-core-8.0 : Conflicts: mariadb-server-10.3 but 1:10.3.17-1 is to be installed Conflicts: mariadb-server-core-10.3 but 1:10.3.17-1 is to be installed Conflicts: virtual-mysql-server-core which is a virtual package, provided by: - percona-xtradb-cluster-server-5.7 (5.7.20-29.24-0ubuntu3), but it is not going to be installed - mariadb-server-core-10.3 (1:10.3.17-1), but 1:10.3.17-1 is to be installed - mysql-server-core-8.0 (8.0.17-0ubuntu2), but 8.0.17-0ubuntu2 is installedThe following actions will resolve these dependencies: Remove the following packages: 1) default-mysql-client [1.0.5ubuntu2 (eoan, now)] 2) mysql-client-8.0 [8.0.17-0ubuntu2 (eoan, now)] 3) mysql-client-core-8.0 [8.0.17-0ubuntu2 (eoan, now)]4) mysql-server-8.0 [8.0.17-0ubuntu2 (eoan, now)] 5) mysql-server-core-8.0 [8.0.17-0ubuntu2 (eoan, now)] Install the following packages: 6) mariadb-client [1:10.3.17-1 (eoan)] Accept this solution? [Y/n/q/?] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286706/"
]
} |
550,235 | I have following content in a file foobar barbar foo foobar bar foobar I would like to get get the last character from all the lines, also last 3 characters. I am using following for loop workaround to get the desired output, but I guess cut command can do a lot better than this. Is there a way to do the same via cut? $ cat test.shFILE=$1echo "Last chars:"for i in $(cat $FILE)do echo ${i: -1}# OR echo $i |tail -c 2doneechoecho "Last 3 chars:"for i in $(cat $FILE)do echo ${i: -3}# OR echo $i |tail -c 4done$ sh test.shLast chars:rrroorrrLast 3 chars:barbarbarfoofoobarbarbar | cut by itself doesn't have the concept of "the last N characters" on a line. However if you combine this with the rev program you can reverse each line, select the first N characters, and then reverse the result to get things back to the original order. rev | cut -c 1-3 | rev | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277542/"
]
} |
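The `rev | cut | rev` trick from the 550,235 answer above, tried on the question's sample data (a demonstration sketch; `rev` comes from util-linux and is present on most Linux systems):

```shell
printf '%s\n' foobar barbar foo foobar bar foobar > /tmp/sample.$$

# Last character of every line
rev /tmp/sample.$$ | cut -c 1 | rev

# Last 3 characters of every line (lines shorter than 3 come back whole)
rev /tmp/sample.$$ | cut -c 1-3 | rev

rm -f /tmp/sample.$$
```

Because cut counts from the start of each (reversed) line, short lines such as "foo" are simply returned unchanged, matching the behaviour of the question's `${i: -3}` loop.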
550,327 | I have been wasting more then an hour on this now and I think this should be really simple... I have an azure website that allows me to connect and deploy to it using sftp. I can connect to it fine using FileZilla with the following settings: Host: The host given by azure portal Port: Empty Protocol: FTP - File Transfer Protocol Encryption: Require implicit FTP over TLS Logon Type: Normal User: The username given by Azure portal Password: The password given by Azure portal. I don't want to connect to it using FileZilla though. I want to move files over using the command line. I have been trying to use sftp , ftp and scp all without success. In the end they all fail with the following: $ sftp -v -oPort=990 [email protected]_7.9p1, OpenSSL 1.0.2r 26 Feb 2019debug1: Reading configuration data /home/rg/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 17: Applying options for *debug1: Connecting to xxxxxxx.azurewebsites.windows.net [xxx.xxx.xxx.xxx] port 990.debug1: Connection established.debug1: identity file /home/rg/.ssh/id_rsa type 0debug1: identity file /home/rg/.ssh/id_rsa-cert type -1debug1: identity file /home/rg/.ssh/id_dsa type -1debug1: identity file /home/rg/.ssh/id_dsa-cert type -1debug1: identity file /home/rg/.ssh/id_ecdsa type -1debug1: identity file /home/rg/.ssh/id_ecdsa-cert type -1debug1: identity file /home/rg/.ssh/id_ed25519 type -1debug1: identity file /home/rg/.ssh/id_ed25519-cert type -1debug1: identity file /home/rg/.ssh/id_xmss type -1debug1: identity file /home/rg/.ssh/id_xmss-cert type -1debug1: Local version string SSH-2.0-OpenSSH_7.9ssh_exchange_identification: Connection closed by remote hostConnection closed.Connection closed I have tested that the OpenSSL version in use supports TLS 1.2. Neither is the host in the known hosts with another fingerprint. I hope somebody can help me here. | FTP (over TLS) is not SFTP. 
If you can connect using FTP with FileZilla, you have to use a command-line FTP client, not an SFTP client. Note that not all command-line FTP clients support TLS encryption. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350749/"
]
} |
550,415 | I want to write a Bash script, which checks if all directories, stored in an array, exist. If not, the script should create it. Is this a correct way to do it? array1=(/apache/apache/bin/apache/conf/apache/lib/www/www/html/www/cgi-bin/www/ftp)if [ ! -d “$array1” ]; then mkdir $array1else breakfi | Just use: mkdir -p -- "${array1[@]}" That will also create intermediary directory components if need be so your array can also be shortened to only include the leaf directories: array1=( /apache/bin /apache/conf /apache/lib /www/html /www/cgi-bin /www/ftp) Which you could also write: array1=( /apache/{bin,conf,lib} /www/{html,cgi-bin,ftp}) The [[ -d ... ]] || mkdir ... type of approaches in general introduce TOCTOU race conditions and are better avoided wherever possible (though in this particular case it's unlikely to be a problem). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/550415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380533/"
]
} |
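The accepted approach from the 550,415 answer above can be tried safely by prefixing a temporary directory instead of writing to `/` (the `$root` prefix and the verification loop are my additions, not in the original answer; requires bash for arrays and brace expansion):

```shell
root=$(mktemp -d)

array1=(
    "$root"/apache/{bin,conf,lib}
    "$root"/www/{html,cgi-bin,ftp}
)

# One call creates all leaf directories, including intermediate components
mkdir -p -- "${array1[@]}"

# Verify: all six leaf directories now exist
for d in "${array1[@]}"; do
    [ -d "$d" ] && echo "created: ${d#"$root"}"
done
```

Note that brace expansion happens before parameter expansion, so `"$root"/apache/{bin,conf,lib}` expands to the three prefixed paths as intended.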
550,444 | I have two questions about the Linux kernel. Specifically, does anybody know exactly, what Linux does in the timer interrupt? Is there some documentation about this?And what is affected when changing the CONFIG_HZ setting, when building the kernel? Thanks in advance! | The Linux timer interrupt handler doesn’t do all that much directly . For x86, you’ll find the default PIT/HPET timer interrupt handler in arch/x86/kernel/time.c : static irqreturn_t timer_interrupt(int irq, void *dev_id){ global_clock_event->event_handler(global_clock_event); return IRQ_HANDLED;} This calls the event handler for global clock events, tick_handler_periodic by default, which updates the jiffies counter , calculates the global load, and updates a few other places where time is tracked. As a side-effect of an interrupt occurring, __schedule might end up being called, so a timer interrupt can also lead to a task switch (like any other interrupt). Changing CONFIG_HZ changes the timer interrupt’s periodicity. Increasing HZ means that it fires more often, so there’s more timer-related overhead, but less opportunity for task scheduling to wait for a while (so interactivity is improved); decreasing HZ means that it fires less often, so there’s less timer-related overhead, but a higher risk that tasks will wait to be scheduled (so throughput is improved at the expense of interactive responsiveness). As always, the best compromise depends on your specific workload. Nowadays CONFIG_HZ is less relevant for scheduling aspects anyway; see How to change the length of time-slices used by the Linux CPU scheduler? See also How is an Interrupt handled in Linux? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218728/"
]
} |
550,452 | grep and sed are both described as using "basic regex" ("BRE") by default. BRE is well described here . But consider this output: # echo ' aaaaa ' | grep '\(aaaaa\|bbbbb\)' aaaaa# echo ' aaaaa ' | sed '/\(aaaaa\|bbbbb\)/ s/ /_/g' aaaaa In the first command, the \( ... \| ... \) syntax clearly acted as (X OR Y) , since the output passed grep . In the second command, the \( ... \| ... \) syntax clearly didn't act as (X OR Y) , because the spaces weren't changed to underscores. (By contrast, both commands recognise \+ as "one or more repetitions") What's gone on? Why do there seem to be two flavours of BRE in FreeBSD, one of which recognises syntax that the other doesn't? The deeper question is, many projects look at BRE to provide portability to other unix-like systems. But this suggests that even BREs aren't likely to be the same across platforms, if they can't even be the same within individual platforms. Argh? | The description in the linked article is wrong. The actual POSIX definition states that: The interpretation of an ordinary character preceded by an unescaped <backslash> ( '\' ) is undefined, except for [ (){} , digits and inside a bracket expression] And ordinary characters are defined as any except the BRE special characters .[^$* and the backslash itself. So, unlike that page claims, the \+ is undefined in BRE, and so is \| . Some regex implementations define them as the same as ERE + and | though, particularly the GNU ones. But you shouldn't count on that, stick to the defined features instead. The problem here, of course is that the ERE alternation operator | doesn't exist at all in BRE, and the equivalent to ERE + is hideously ugly (it's \{1,\} ). So you probably want to use ERE instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120614/"
]
} |
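Following the 550,452 answer above, here is a sketch of portable ways to express the same patterns: either switch to ERE with `grep -E` / `sed -E`, or stay inside defined BRE features (note `sed -E` itself is an extension, though a widespread one):

```shell
# ERE: alternation | and + are defined
echo ' aaaaa ' | grep -E '(aaaaa|bbbbb)'
echo ' aaaaa ' | sed -E '/(aaaaa|bbbbb)/ s/ /_/g'

# BRE: there is no alternation operator at all;
# "one or more" must be written as the interval \{1,\}
echo ' aaaaa ' | grep 'a\{1,\}'
```

The first two commands behave consistently because both tools are now interpreting the same ERE syntax, rather than relying on undefined BRE escapes like `\|`.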
550,498 | We want to create a personal bin directory in every user's home location. Is there any way to create a ~/bin folder by default, when we create users for the first time? | If you are going to create the users with adduser , then check /etc/adduser.conf . That file contains a reference to the skeleton directory used for each new user, by default /etc/skel . If you create /etc/skel/bin then that folder is going to be created for each new user you add with adduser . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380618/"
]
} |
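The skeleton mechanism from the 550,498 answer above can be simulated without root, by copying a skeleton directory into a fake home the same way `useradd -m` does. This is an illustration of the mechanism only — the real change is simply `sudo mkdir /etc/skel/bin`; the temp paths and `.profile` content here are mine:

```shell
skel=$(mktemp -d)          # stand-in for /etc/skel
home=$(mktemp -d)/newuser  # stand-in for the new user's home

mkdir -p "$skel/bin"
printf 'export PATH="$HOME/bin:$PATH"\n' > "$skel/.profile"

# useradd -m essentially copies the skeleton into the new home directory
cp -a "$skel" "$home"

ls -A "$home"    # lists .profile and bin
```

Anything placed under `/etc/skel` — directories and dotfiles alike — shows up in every subsequently created home.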
550,688 | Hey how can I open vim inside a script? I need to grade coding assignments and wrote a script to open code and note files, but vim does not open. The script is essentially a fancier version of this: #!/bin/bashecho -e "$student_files" | while read student do vim -o $studentdone#each "$student" looks like: "file1 file2 [...] notes" But vim complains about: Vim: Warning: Input is not from a terminalVim: Error reading input, exiting...Vim: preserving files...Vim: Finished. Can anyone tell me why this is happening and how I could work around it?Thanks in advance :) note: I tried running vim in a script without a loop, which works, but I don't know why. I suppose its because I'm piping into while | Because of the pipeline, standard input is redirected for the while loop; that includes any command that is launched from it, like vim . As you always want to get input from the terminal there, you can just re-connect to the terminal, /dev/tty : < /dev/tty vim -o "$student" Alternatively, stdin could be saved to a file descriptor before going into the pipeline ( exec 6<&0 ) and that passed to Vim ( <&6 vim -o ... ). (That would also handle the case where a "macro recording" of Vim commands is saved in a file and you wanted to automate Vim itself by piping the recorded input back into it.) Or you could try to get rid of the pipeline completely, by first reading the entire file contents into a Bash array ( readarray ), and then using a simple for loop. 
This would consume more memory, but that shouldn't be a problem here as the limiting factor rather is how many Vim sessions you're willing to handle sequentially :-) Actually, you're not reading from a file here, but from variable contents (which as I understand have space-separated students on potentially multiple lines - the default IFS parsing will split those into individual students), so all you need to do is simple word splitting into a Bash array: declare -a students=($student_files)for student in "${students[@]}"do vim -o "$student"done or completely skipping the intermediate array: for student in $student_filesdo vim -o "$student"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315546/"
]
} |
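The fd-saving variant from the 550,688 answer above can be demonstrated with `read` standing in for vim and a file standing in for the terminal (both stand-ins are mine; with a real terminal you would use `exec 6<&0` before the pipeline, or simply `</dev/tty`):

```shell
# A file plays the role of the terminal's input stream
printf 'keys-for-1\nkeys-for-2\n' > /tmp/fake_tty.$$
exec 6< /tmp/fake_tty.$$

printf 'alice\nbob\n' | while read -r student; do
    # like: vim -o "$student" <&6  -- the inner command reads from the
    # saved fd 6, not from the pipe that feeds the while loop
    read -r keys <&6
    echo "grading $student (input: $keys)"
done

exec 6<&-
rm -f /tmp/fake_tty.$$
```

Without the `<&6` redirection, the inner `read` would consume "bob" from the pipe — which is exactly how vim ended up with "Input is not from a terminal".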
550,835 | I want to schedule my script for the last Saturday of every month. I have come up with the below: 45 23 22-28 * * test $(date +\%u) -eq 6 && echo "4th Saturday" && sh /folder/script.sh Is this correct or do I need to change it? I can't test it right now as it will only be invoked on the last Saturday. Please advise. I have the below for the last Sunday of every month but I can't understand much of it. The first part gives 24, which is a Sunday, and the part after -eq gives 7, which I don't know the meaning of: 00 17 * * 0 [ $(cal -s | tail -2 | awk '{print $1}') -eq $(date | awk '{print $3}') ] && /folder/script.sh Can I modify the above to get the last Saturday? With Romeo's help I was able to come up with the below answer: 00 17 * * 6 [ $(cal -s | tail -2 | awk '{print $7}') -eq $(date | awk '{print $3}') ] && /folder/script.sh | Your logic will not work: a month's last Saturday can fall on the 29th, 30th, or 31st. For this reason the best way to do the check is to run the job every Saturday and test inside the script whether today is within the last 7 days of the month: 45 23 * * 6 sh /folder/script.sh and add in your script (here assuming the GNU implementation of date) something like:
if [ "$(date -d "+7 day" +%m)" -eq "$(date +%m)" ]
then
    echo "This is not one of the last 7 days in the month."; exit
fi
<rest of your script>
About your line in cron, you should edit it to start like this: 00 17 * * 6 (6 means Saturday; 0 or 7 mean Sunday) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/550835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341733/"
]
} |
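The month-comparison test from the 550,835 answer above can be packaged as a function and checked against known dates (requires GNU date for `-d`; the function name and the sample dates are mine):

```shell
# True when the given date is within the last 7 days of its month,
# i.e. when adding 7 days rolls over into the next month.
in_last_week_of_month() {    # usage: in_last_week_of_month YYYY-MM-DD
    [ "$(date -d "$1 + 7 days" +%m)" != "$(date -d "$1" +%m)" ]
}

# 2019-11-30 was the last Saturday of November 2019; 2019-11-23 was not
in_last_week_of_month 2019-11-30 && echo "2019-11-30: in the last week"
in_last_week_of_month 2019-11-23 || echo "2019-11-23: not the last Saturday"
```

Combined with a `0 17 * * 6` crontab line, the function only lets the script proceed on the month's final Saturday.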
550,900 | I'm trying to work out the difference in 2 unix times to do a simple calculation (in a shell script) but it doesn't seem to work. I have set 2 variables one called $d and the other $c . Here's the syntax i currently have for setting up the variables: c= echo $a | awk -F : '{print ($1 * 32140800) + ($2 * 2678400) + ($3 * 86400) + ($4 * 3600) + ($5 * 60) }'echo $cd= echo $a | awk -F : '{print ($1 * 32140800) + ($2 * 2678400) + ($3 * 86400) + ($4 * 3600) + ($5 * 60) }'echo $d (variables a and b simply receive the timestamp from another script) I'm trying to subtract the output of $c from $d but every method I have used doesn't seem to work. I've tried the following: 1) duration=$(echo "$d - $c")echo $duration 2) duration=$((d-c))echo $duration | c= echo $a | awk -F : '{print ($1 * 32140800) + ($2 * 2678400) + ($3 * 86400) + ($4 * 3600) + ($5 * 60) }' You're missing the command substitution here. This will just run the echo | awk pipeline, while setting c to the empty value in the environment of echo . You used a command substitution below, in duration=$(echo "$d - $c") , that's what you need here, too. i.e. c=$(echo "$a" | awk ...) (Note that there's no whitespace around the = sign, that's what separates a plain assignment from a command, see Spaces in variable assignments in shell scripts ) Also, looking at the numbers in the awk script, you seem to have 31-day months, and accordingly 372-day years, which may or may not be what you want. If your input has seconds too, the awk script doesn't use them. (Should there be a + $6 in the end?) There's also $a in both commands, instead of $a and $b . duration=$(echo "$d - $c") This would just set duration to a string like 5678 - 1234 (with the actual values of d and c ) since echo just prints what was given, it doesn't do arithmetic. duration=$((d-c)) This should work to do the subtraction. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380933/"
]
} |
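The two fixes from the answer — command substitution for the assignments and `$((…))` for the subtraction — in one runnable sketch (the colon-separated timestamps are made-up sample values):

```shell
# Made-up sample timestamps standing in for $a and $b
a="0:0:1:2:30"
b="0:0:1:3:45"

# $(...) captures awk's output; note: no spaces around the = sign
c=$(echo "$a" | awk -F : '{print ($1 * 32140800) + ($2 * 2678400) + ($3 * 86400) + ($4 * 3600) + ($5 * 60)}')
d=$(echo "$b" | awk -F : '{print ($1 * 32140800) + ($2 * 2678400) + ($3 * 86400) + ($4 * 3600) + ($5 * 60)}')

# Arithmetic expansion does the actual subtraction
duration=$((d - c))
echo "$duration"   # → 4500
```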
550,916 | One thing that always bugged me: touch a; rm a — ok, it works as expected. mkdir a; rm a will require -r to work; but I don't understand what is recursive in this? If the folder is empty, there is nothing recursive about it. I would expect rm a to try to remove a, and fail if it's not empty, and rm -r to recursively remove the contents of a. Why is this like that? Edit: this question has been flagged as a duplicate of a totally irrelevant question. My question can be summed up as: Why do I need a recursive flag to perform a non-recursive operation: mkdir a; rm -r a has nothing recursive about it. And this is not related to the other question, which is about why rm is not recursive by default. | This is because rm was never meant to delete directories at all. Originally rm didn't support -r and users had only 1 option: to delete all files in a dir with rm and after that to remove the (now empty) dir with rmdir . Obviously working like this is very annoying, so -r was implemented to fix this problem. But if rm /some/empty/dir would start working then it would no longer be backwards compatible (so it could break old code that assumed that rm /some/empty/dir fails). With -r this isn't a problem because this behaviour wasn't defined yet (because the -r option didn't exist yet). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233354/"
]
} |
550,951 | I want to test some physical links in a setup. The software tooling that I can use to test this requires a block device to read/write from/to. The block devices I have available can't saturate the physical link, so I can't fully test it. I know I can set up a virtual block device which is backed by a file. So my idea was to somehow set up a virtual block device to /dev/null, but the problem is of course that I can't read from it. Is there a way I could set up a virtual block device that writes to /dev/null but always returns zero when read? Thank you for any help! | https://wiki.gentoo.org/wiki/Device-mapper#Zero See Documentation/device-mapper/zero.txt for usage. This target has no target-specific parameters. The "zero" target functions similarly to /dev/zero: all reads return binary zero, and all writes are discarded. Normally used in tests [...] This creates a 1GB (1953125-sector) zero target: root# dmsetup create 1gb-zero --table '0 1953125 zero' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/550951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350749/"
]
} |
550,964 | I have a directory full of .tsv files and I want to run a grep command on each of them to pull out a certain group of text lines and then save it to an associated text file with a similar file name. So for example, if I was grepping just one of the files, my grep command looks like this: grep -h 8-K 2008-QTR1.tsv > 2008Q1.txt But I have a list of tsv files that look like: 2008-QTR1.tsv 2008-QTR2.tsv 2008-QTR3.tsv 2008-QTR4.tsv 2009-QTR1.tsv 2009-QTR2.tsv 2009-QTR3.tsv ... And after grepping they need to be stored as: 2008Q1.txt 2008Q2.txt 2008Q3.txt 2008Q4.txt 2009Q1.txt 2009Q2.txt 2009Q3.txt Any thoughts? | In ksh93/bash/zsh, with a simple for loop and parameter expansion: for f in *-QTR*.tsv; do grep 8-K < "$f" > "${f:0:4}"Q"${f:8:1}".txt; done This runs the grep on one file at a time (where that list of files is generated from a wildcard pattern that requires "-QTR" to exist in the filename as well as a ".tsv" ending to the filename), redirecting the output to a carefully-constructed filename based on: the first four characters of the filename -- the year; the letter Q; the 9th character of the filename -- the quarter | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/550964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380965/"
]
} |
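To see the loop work end to end, here it is against one sample file with made-up contents (bash/ksh93/zsh substring syntax):

```shell
# One sample quarterly file; the 8-K lines are invented
printf '8-K filing one\n10-Q something else\n8-K filing two\n' > 2008-QTR1.tsv

for f in *-QTR*.tsv
do
    # ${f:0:4} = year (chars 1-4), ${f:8:1} = quarter digit (char 9)
    grep 8-K < "$f" > "${f:0:4}"Q"${f:8:1}".txt
done

cat 2008Q1.txt   # the two 8-K lines
```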
551,017 | I'd like to make happen in RHEL/CentOS 7.6 anytime you do su and become root I want the terminal prompt color in that terminal to become red for the duration of that su session. Type exit to go back to being whoever you were and I want the prompt color to go back to whatever the previous color was (black). Same for an SSH window using putty logged in over the network: initially ssh in as user and have a default white shell prompt; do an su to root and I want the prompt to become red; type exit I want the prompt to go back to white. so far i did this but it's not working 100%, the color stays red after you type exit and leave the su session and go back to being user . /etc/profile.d/red_root_prompt.sh if [ $UID -eq 0 ]; then PS1="\e[31m[\u@\h \W]# "else PS1="[\u@\h \W]# " Is there a way to make things happen the way I want? I only want it for bash shells. | You can either add this to /etc/bash.bashrc or edit /etc/profile/ force_color_prompt=yes if [ "$LOGNAME" = root ] || [ "`id -u`" -eq 0 ] ; then PS1='\[\033[01;31m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[01;34m\]#\033[00m\] ' else PS1='\u@\h:\w\$ ' fi and you will get a prompt that looks like this. If you use a white background, change the last part after # to \033[01;30m\] so you can see the command text. I included that as a second example for reference . also if you add the following to \etc\bash.bashrc or ~/.bashrc: export col_white='\033[00m'export col_black='\033[01;30m'export col_red='\033[01;31m'export col_green='\033[01;32m'export col_yel='\033[01;33m'export col_blue='\033[01;34m' You will be able to do : $ echo -e $col_red red $col_blue blue $col_yel yellow $col_green green red blue yellow green with the output looking like this: EDIT : For some reason using variable expansion for the prompt breaks the carriage return (it locks it to the length of the variable, i.e. 
pushes it forward by n-many spaces corresponding to echo $col_blue , echo $col_white , and I have not found a good solution for this as of this moment, but using proper bracketing without variable substitution as above solves this issue. if [ "$LOGNAME" = root ] || [ "`id -u`" -eq 0 ] ; then PS1="$col_red\u@\h:$col_purple\w$col_green# $col_white"else PS1="\u@\h:$col_blue\w$col_yel\$ $col_white "fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154426/"
]
} |
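The uid branch from the profile snippet can be factored into a function so both cases are easy to exercise without actually switching users — `prompt_for_uid` is a hypothetical helper name, not part of the answer:

```shell
# Hypothetical helper: print the PS1 string for a given uid ($1)
prompt_for_uid() {
    if [ "$1" -eq 0 ]; then
        # red for root; \[ \] mark the escapes as zero-width for bash
        printf '%s' '\[\033[01;31m\]\u@\h:\w\$ \[\033[00m\]'
    else
        printf '%s' '\u@\h:\w\$ '
    fi
}

prompt_for_uid 0      # contains the red escape 01;31m
prompt_for_uid 1000   # plain prompt, no colour codes
```

In a real profile snippet you would assign `PS1="$(prompt_for_uid "$(id -u)")"`; bash expands `\u`, `\h` and `\w` itself when drawing the prompt.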
551,030 | I am partitioning a disk with the intent to have an ext4 filesystem on the partition. I am following a tutorial, which indicates that there are two separate steps where the ext4 filesystem needs to be specified. The first is by parted when creating the partition: sudo parted -a opt /dev/sda mkpart primary ext4 0% 100% The second is by the mkfs.ext4 utility, which creates the filesystem itself: sudo mkfs.ext4 -L datapartition /dev/sda1 My question is: what exactly are each of these tools doing? Why is ext4 required when creating the partition? I would have thought the defining of the partition itself was somewhat independent of the constituent file system. (The tutorial I'm following is here: https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux ) | A partition can have a type . The partition type is a hint as in "this partition is designated to serve a certain function". Many partition types are associated with certain file-systems, though the association is not always strict or unambiguous. You can expect a partition of type 0x07 to have a Microsoft compatible file-system (e.g. FAT, NTFS or exFAT) and 0x83 to have a native Linux file-system (e.g. ext2/3/4). The creation of the file-system is indeed a completely independent and orthogonal step (you can put whatever file-system wherever you want – just do not expect things to work out of the box). parted defines the partition as in "a part of the overall disk". It does not actually need to know the partition type (the parameter is optional). In use however, auto-detection of the file-system and henceforth auto-mounting may not work properly if the partition type does not correctly hint to the file-system. A partition is a strictly linear piece of storage space. The mkfs.ext4 and its variants create file-systems so you can have your actual directory tree where you can conveniently store your named files in. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381017/"
]
} |
551,043 | Let's say I want to search a file for a string that begins with a dash, say "-something" : grep "-something" filename.txt This throws an error, however, because grep and other executables, as well as built-ins, all want to treat this as a command-line switch that they don't recognize. Is there a way to prevent this from happening? | For grep use -e to mark regex patterns: grep -e "-something" filename.txt For general built-ins use -- , in many utilities it marks "end of options" (but not in GNU grep). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59803/"
]
} |
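A minimal check of the `-e` form (the file contents are invented):

```shell
# A file whose first line starts with a dash
printf '%s\n' '-something here' 'another line' > filename.txt

# -e marks the next argument as a pattern, so the leading dash
# is not parsed as an option
grep -e "-something" filename.txt   # → -something here
```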
551,193 | I'm trying to write a bash script with make s curl call and get a json document back. The json document has a key called access_token and I need to extract the value of this field. This is my json document echo $json{"access_token":"kjdshfsd", "key2":"value"} I don't have jq installed. | The short answer: Install jq You shouldn't parse json without a json parser. To do this with jq : echo "$json" | jq -r '.access_token' My preferred json parser is json , using it you could do: echo "$json" | json access_token Note: both of these solutions assume your json object is exactly (or at least pretty much exactly) as you have shown in your example and I know it is not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199342/"
]
} |
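If installing jq really is impossible, a last-resort sed extraction can handle this one flat shape — this is a fragile, assumption-laden fallback, not a JSON parser (it breaks on escaped quotes, nesting, or values containing double quotes):

```shell
json='{"access_token":"kjdshfsd", "key2":"value"}'

# Capture whatever sits between the quotes after "access_token":
token=$(printf '%s' "$json" |
    sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$token"   # → kjdshfsd
```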
551,285 | I want to remove a character "[" from a file. I tried sed -i 's/[//g' 'filename' however I get the following error sed: -e expression #1, char 6: unterminated `s' command | Yes, the [ character is special as it starts a [...] group (a bracketed expression). With sed on OpenBSD, your command gives a more helpful error message: $ sed 's/[//g'sed: 1: "s/[//g": unbalanced brackets ([]) To delete all [ characters using sed , escape it: sed -i 's/\[//g' file Or put it inside a bracketed expression: sed -i 's/[[]//g' file Or, use tr , tr -d '[' <file >file.new Also, don't use in-place editing with sed until you know the expression that you are trying out actually works, or you will possibly have to restore your data from backups. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298739/"
]
} |
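Before reaching for `-i`, both commands can be tried read-only on a sample file:

```shell
printf '%s\n' 'foo[bar[baz' > filename

sed 's/\[//g' filename    # escaped bracket → foobarbaz
tr -d '[' < filename      # tr needs no escaping → foobarbaz
```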
551,312 | What console browsers work under CentOS 8? I tried to install them all, but sudo yum -y install lynx w3m links elinks links2 netrik gives: No match for argument: lynx No match for argument: w3m No match for argument: links No match for argument: elinks No match for argument: links2 No match for argument: netrik Error: Unable to find a match | Try this: sudo yum --enablerepo=powertools install elinks The command is case-sensitive, as specified by @Hasanuzzaman Sattar ( --enablerepo=Powertools won't work) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352503/"
]
} |
551,444 | The sock struct defined in sock.h , has two attributes that seem very similar: sk_wmem_alloc , which is defined as "transmit queue bytes committed" sk_wmem_queued , defined as "persistent queue size" To me, the sk_wmem_alloc is the amount of memory currently allocated for the send queue. But then, what is sk_wmem_queued ? References According to this StackOverflow answer : wmem_queued : the amount of memory used by the socket send buffer queued in the transmit queue and are either not yet sent out or not yet acknowledged. The ss man also gives definitions, which don't really enlighten me (I don't understand what the IP layer has to do with this): wmem_alloc : the memory used for sending packet (which has been sent to layer 3) wmem_queued : the memory allocated for sending packet (which has not been sent to layer 3) Someone already asked a similar question on the LKML , but got no answer The sock_diag(7) man page also has its own definitions for these attributes: SK_MEMINFO_WMEM_ALLOC : The amount of data in send queue. SK_MEMINFO_WMEM_QUEUED : The amount of data queued by TCP, but not yet sent. All these definitions are different, and none of them clearly explain how the _alloc and _queued variants are different. | I emailed Eric Dumazet, who contributes to the Linux network stack, and here is the answer: sk_wmem_alloc tracks the number of bytes for skb queued after transport stack : qdisc layer and NIC TX ring buffers. If you have 1 MB of data sitting in TCP write queue, not yet sent (cwnd limit), sk_wmem_queue will be about 1MB, but sk_wmem_alloc will be about 0 A very good document for understanding what these three types of queues (socket buffer, qdisc queue and device queue) are is this article (rather long) article . In a nutshell, the socket starts by pushing the packets directly onto the qdisc queue, which forwards them to the device queue. When the qdisc queue is full, the socket starts buffering the data in its own write queue. 
the network stack places packets directly into the queueing discipline or else pushes back on the upper layers (eg socket buffer) if the queue is full So basically: sk_wmem_queues is the memory used by the socket buffer ( sock.sk_write_queue ) while sk_wmem_alloc is the memory used by the packets in the qdisc and device queues. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45689/"
]
} |
551,463 | I want to modify a default behaviour in systemd. This default behaviour is CtrlAltDelBurstAction=reboot-force , commented out in /etc/systemd/system.conf . I just have to uncomment this line and modify it to CtrlAltDelBurstAction=none . But is there a "clean way" to do it, not interfering with the original distro file? I tested with systemctl edit system but this raise an error stating system.service is not found. | In modern versions of systemd , it is possible to override the settings of the "default" config file /etc/systemd/system.conf with "specialized" configuration snipplets residing in files under /etc/systemd/system.conf.d/ . You could place your config line into a file /etc/systemd/system.conf.d/10-suppress-ctraltdel.conf , then it will be read when you reload the systemd configuration (or at next boot). To quote from the man systemd-system.conf page : The main configuration file is read before any of the configuration directories, and has the lowest precedence; entries in a file in any configuration directory override entries in the single configuration file. Files in the *.conf.d/ configuration subdirectories are sorted by their filename in lexicographic order, regardless of which of the subdirectories they reside in. Btw, this kind of configuration override mechanism is now the standard for many Linux services, so the approach discussed here should also work in other situations. In /etc/systemd/system.conf.d/10-suppress-ctraltdel.conf , CtrlAltDelBurstAction=none must be in the [Manager] section (as noted in the manpage: "All options are configured in the [Manager] section"): [Manager]CtrlAltDelBurstAction=none If a section isn't specified, the option might be ignored, as it might be in the wrong section. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28179/"
]
} |
551,467 | In the fish shell, when you press Alt+l it displays the output of the ls command nicely. It keeps the command you were writing before you pressed it. I would like to have this in zsh. This was my very wonky attempt: function myls { awk '{len = length($0); for(i=0;i<len;i++) printf "\b"}' <<< "$LBUFFER" zle push-input zle accept-line print $(ls --color=always --indicator-style=slash)}zle -N mylsbindkey -- '^[l' myls I struggled to get the existing command to clear, and ended up using hacky \b 's. It does not work correctly on long and multi lines. Can anyone do any better? | Use zle -R after the push-input to redisplay the line without the buffer. For the most part, I find zsh's completion listing obviates the need for a widget to run ls. It does ls style colors and file-type suffixes. I have the following binding which does file completion in any context and does a long, ls -l style listing: zstyle ":completion:file-complete::::" completer _fileszle -C file-complete complete-word _genericzstyle ':completion:file-complete:*' file-list truebindkey '^X^F' file-complete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370904/"
]
} |
551,512 | I found plenty of questions regarding disabling visual mode in vim but none that tackles my particular problem: I added set mouse-=a to my /etc/vim/vimrc file to disable visual mode for good. Thing is: That seems to do nothing. However when I put the exact same directive into my user's .vimrc file it works. Is this expected behaviour? Did I miss something? Has anyone a solution which doesn't involve managing a .vimrc file for each and every user? Thanks in advance! I am on Debian 10, fully updated by the way. | Debian's /etc/vim/vimrc contains this comment: " Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc." This happens after /etc/vim/vimrc(.local) are loaded, so it will override" any settings in these files." If you don't want that to happen, uncomment the below line to prevent" defaults.vim from being loaded." let g:skip_defaults_vim = 1 As :verbose set mouse? says, that was set by /usr/share/vim/vim81/defaults.vim mentioned above ( $VIMRUNTIME on Debian would be /usr/share/vim/vim<version> ). So, you can either create a ~/.vimrc (or ~/.vim/vimrc ) for your user (even an empty one will do), or uncomment let g:skip_defaults_vim = 1 in /etc/vim/vimrc . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/551512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139745/"
]
} |
551,771 | I was under the impression that tar does not compress files. Imagine my surprise when I tar up a million small files ( du -h ~ 4.2G) and the resulting tar was a quarter of the size ( ls -lh mytar.tar ~ 1.3G)! Clearly these tiny files were taking up space beyond their reported size, and an answer to another question suggests that each non-empty file takes up at least 1KB regardless of its size. But where does this 1KB come from, does it differ across filesystems (this is ext4), and does a 1.01 KB file take up 2KB? In short, how do I measure the true file size, especially many files in a directory? I tried du --apparent-size -h and I'm only getting 437M, so I'm quite confused at the three vastly different numbers. | As Christopher points out, this question is very similar to Why is a text file taking up at least 4kB even when there's just one byte of text in it? I'm not sure if I personally class it as a duplicate or not. But where does this 1KB come from This is more commonly 4KB. File systems are allocated in blocks of bytes (AKA allocation units), not individual bytes. So to store a single byte in a file, that file will need an entire block. This means the rest of the block is left blank, but no other file can use it. The origin of this number is unclear, but there's a number of things it fits with. For example, at a low level, it's not possible to write single bytes to disk, you can only write a block of them. Modern HDs and even SSDs often have a 4KB limit. Meaning that if you want to write one byte, you must first load 4KB, change that 1 byte and write the entire block back. If you attempt to write a whole block, there is no need to read its original contents. So file systems aligned to hardware limits are much more efficient. As Stephen Kitt points out, 4KB is the maximum block size supported for ext3 by many kernels . (Also discussed here ). In general larger block sizes have more efficient access times, meaning "larger blocks are better".
does it differ across filesystems (this is ext4) Once upon a time 512 was a common block size, and this figure still comes up occasionally as a default value. Tar files are very old and have this same 512-byte block size (presumably in an attempt to align with the file systems and hardware, making disk writes very fast). As such, tar files are still very wasteful with very small files (<512 bytes). It's much more common now to have file systems that are 4KB aligned (not 1KB). And yes, file systems can be configured when you format them to use a different block size. Different file systems have different limits but most can be configured. and does a 1.01 KB file take up 2KB? Assuming a 1KB block size, yes, that's correct. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7093/"
]
} |
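The gap between apparent and allocated size is visible with GNU stat and du on a single tiny file — only the apparent size is predictable here; the allocated figure depends on the filesystem and its block size:

```shell
printf 'a' > tiny.file   # a 1-byte file

# %s = apparent bytes, %b = 512-byte blocks allocated, %B = block unit
stat -c 'apparent=%s allocated_blocks=%b block_unit=%B' tiny.file

du --apparent-size -B1 tiny.file   # reports 1 byte of content
du -B1 tiny.file                   # typically 4096 on ext4: one whole block
```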
551,820 | There is a folder, "transfer". In that transfer folder there are user folders "user1", "user2", etc. I want to (periodically) delete the content (i.e. all files and folders in the user folders) but I do not want to delete the "transfer" or user folders. How can I do that using as shell script/command without manually adding a call for each new user folder every time I add a new user? | You can do it using following find command: find /path/to/transfer -mindepth 2 -delete -mindepth 2 parameter tells find to ignore first two level of directories: searched directory itself, and all files and folders that are directly in it. -delete parameter just simply tells find to delete all files. You can always add more parameters (for example -mtime ) according to your needs. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/551820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195501/"
]
} |
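A dry run on a throwaway tree shows what `-mindepth 2` protects (user folder names follow the question's layout):

```shell
mkdir -p transfer/user1/docs transfer/user2
touch transfer/user1/a.txt transfer/user1/docs/b.txt transfer/user2/c.txt

# Preview with -print first, then delete for real
find transfer -mindepth 2 -print
find transfer -mindepth 2 -delete

# transfer/ and the (now empty) user folders survive
find transfer
```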
551,959 | I am trying to create a series of files with brace expansions. I want create the files fileA1 to fileZ100 with all possible combinations (Something like touch file[A..Z][1..100] ). If I run the command touch $(printf "file%d " {1..100}) the output is ok: file1 file15 file21 file28 file34 file40 file47 file53 file6 file66 file72 file79 file85 file91 file98file10 file16 file22 file29 file35 file41 file48 file54 file60 file67 file73 file8 file86 file92 file99file100 file17 file23 file3 file36 file42 file49 file55 file61 file68 file74 file80 file87 file93file11 file18 file24 file30 file37 file43 file5 file56 file62 file69 file75 file81 file88 file94file12 file19 file25 file31 file38 file44 file50 file57 file63 file7 file76 file82 file89 file95file13 file2 file26 file32 file39 file45 file51 file58 file64 file70 file77 file83 file9 file96file14 file20 file27 file33 file4 file46 file52 file59 file65 file71 file78 file84 file90 file97 The same if I run touch $(printf "file%c " {A..Z}) : fileA fileC fileE fileG fileI fileK fileM fileO fileQ fileS fileU fileW fileYfileB fileD fileF fileH fileJ fileL fileN fileP fileR fileT fileV fileX fileZ I'm trying to combine them touch $(printf "file%c%d " {A..Z}{1..100}) , but the output is: [...]-bash: printf: Y2: invalid number-bash: printf: Y4: invalid number-bash: printf: Y6: invalid number-bash: printf: Y8: invalid number-bash: printf: Y10: invalid number-bash: printf: Y12: invalid number-bash: printf: Y14: invalid number-bash: printf: Y16: invalid number-bash: printf: Y18: invalid number-bash: printf: Y20: invalid number-bash: printf: Y22: invalid number-bash: printf: Y24: invalid number-bash: printf: Y26: invalid number-bash: printf: Y28: invalid number-bash: printf: Y30: invalid number-bash: printf: Y32: invalid number-bash: printf: Y34: invalid number-bash: printf: Y36: invalid number-bash: printf: Y38: invalid number-bash: printf: Y40: invalid number-bash: printf: Y42: invalid number-bash: printf: Y44: invalid 
number-bash: printf: Y46: invalid number-bash: printf: Y48: invalid number-bash: printf: Y50: invalid number-bash: printf: Y52: invalid number-bash: printf: Y54: invalid number-bash: printf: Y56: invalid number-bash: printf: Y58: invalid number-bash: printf: Y60: invalid number-bash: printf: Y62: invalid number-bash: printf: Y64: invalid number-bash: printf: Y66: invalid number-bash: printf: Y68: invalid number-bash: printf: Y70: invalid number-bash: printf: Y72: invalid number-bash: printf: Y74: invalid number-bash: printf: Y76: invalid number-bash: printf: Y78: invalid number-bash: printf: Y80: invalid number-bash: printf: Y82: invalid number-bash: printf: Y84: invalid number-bash: printf: Y86: invalid number-bash: printf: Y88: invalid number-bash: printf: Y90: invalid number-bash: printf: Y92: invalid number-bash: printf: Y94: invalid number-bash: printf: Y96: invalid number-bash: printf: Y98: invalid number-bash: printf: Y100: invalid number-bash: printf: Z2: invalid number-bash: printf: Z4: invalid number-bash: printf: Z6: invalid number-bash: printf: Z8: invalid number-bash: printf: Z10: invalid number-bash: printf: Z12: invalid number-bash: printf: Z14: invalid number-bash: printf: Z16: invalid number-bash: printf: Z18: invalid number-bash: printf: Z20: invalid number-bash: printf: Z22: invalid number-bash: printf: Z24: invalid number-bash: printf: Z26: invalid number-bash: printf: Z28: invalid number-bash: printf: Z30: invalid number-bash: printf: Z32: invalid number-bash: printf: Z34: invalid number-bash: printf: Z36: invalid number-bash: printf: Z38: invalid number-bash: printf: Z40: invalid number-bash: printf: Z42: invalid number-bash: printf: Z44: invalid number-bash: printf: Z46: invalid number-bash: printf: Z48: invalid number-bash: printf: Z50: invalid number-bash: printf: Z52: invalid number-bash: printf: Z54: invalid number-bash: printf: Z56: invalid number-bash: printf: Z58: invalid number-bash: printf: Z60: invalid number-bash: printf: Z62: 
invalid number-bash: printf: Z64: invalid number-bash: printf: Z66: invalid number-bash: printf: Z68: invalid number-bash: printf: Z70: invalid number-bash: printf: Z72: invalid number-bash: printf: Z74: invalid number-bash: printf: Z76: invalid number-bash: printf: Z78: invalid number-bash: printf: Z80: invalid number-bash: printf: Z82: invalid number-bash: printf: Z84: invalid number-bash: printf: Z86: invalid number-bash: printf: Z88: invalid number-bash: printf: Z90: invalid number-bash: printf: Z92: invalid number-bash: printf: Z94: invalid number-bash: printf: Z96: invalid number-bash: printf: Z98: invalid number-bash: printf: Z100: invalid number So... what is the correct regular expression? Should I use pipes? | So, here: printf "file%c%d " {A..Z}{1..100} The brace expansion produces strings like A1 , A2 , A3 ... Z99 , Z100 . Then printf tries to match those to the format specifiers %c and %d , using the first for %c , second for %d , third for %c again, etc. But %d expects a number and A2 isn't one, so there's an error. %c%d would expect arguments like A , 1 , A , 2 ..., as distinct arguments, but that would be hard to generate with brace expansion. Since the brace expansion already combines the letter and number sequences, you can just use printf "file%s " {A..Z}{1..100} to use the results of the expansion as-is. Or even just echo file{A..Z}{1..100} . Or the even more direct version pLumo's answer has. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373893/"
]
} |
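Since the brace expansion already glues each letter to each number, a single `%s` consumes one combined string per argument; a scaled-down check (bash):

```shell
# 3 letters x 3 numbers = 9 combined arguments
printf 'file%s\n' {A..C}{1..3} | wc -l   # → 9

# the full-size version from the question would then be:
# touch file{A..Z}{1..100}
```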
551,976 | thanks to all recommend it work for me My files: File1.sh (bash shell) prefix_ud=uname; path_file=/path/filescript; sed -i "$(grep -n 'uname.*' "$path_file" |tail -1|cut -f1 -d':')a "$prefix_ud" yahoo.com" "$path_file" sed -i "$(grep -n 'uname.*' "$path_file" |tail -1|cut -f1 -d':')a "$prefix_ud" twitter.com" "$path_file" File2 Expected output: text1text2uname google.comuname gmail.comuname hotmail.comuname yahoo.comuname twitter.comtext3text4 | So, here: printf "file%c%d " {A..Z}{1..100} The brace expansion produces strings like A1 , A2 , A3 ... Z99 , Z100 . Then printf tries to match those to the format specifiers %c and %d , using the first for %c , second for %d , third for %c again, etc. But %d expects a number and A2 isn't one, so there's an error. %c%d would expect arguments like A , 1 , A , 2 ..., as distinct arguments, but that would be hard to generate with brace expansion. Since the brace expansion already combines the letter and number sequences, you can just use printf "file%s " {A..Z}{1..100} to use the results of the expansion as-is. Or even just echo file{A..Z}{1..100} . Or the even more direct version pLumo's answer has. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381328/"
]
} |
551,988 | I'm trying to print lines that have a 4th field value of 1001 or 1003 from a file called mypasswd . I can only use grep or egrep with regular expressions. Here is the file: daemon:x:2:2:Daemon 1001:/sbin:/bin/bashftp:x:40:49:FTP export account:/srv/ftp:/bin/bashdaemonuser:x:50:59:nouser/bin/false:/home/nouser:/bin/bashgdm:x:106:111:Gnome Display Mgr daemon:/var/lib/gdm:/bin/falsehaldaemon:x:101:102:User for haldaemon:/var/run/hald:/bin/falselp:x:4:7:Printing daemon:/var/spool/lpd:/bin/bashmail:x:8:12:Mailer daemon:/var/spool/clientmqueue:/bin/falseroot:x:0:0:root:/root:/bin/bashsshd:x:71:65:SSH daemon:/var/lib/sshd:/bin/falseolivert:x:1001:1005:Tom Oliver:/home/olivert:/bin/cshsmiths:x:1049:1000:Sue Williams:/export/home/smiths:/bin/cshnorthj:x:1003:1003:Jim jones-North:/home/northj:/bin/cshdenniss:x:1005:1003:Sue Dennis:/home/denniss:/bin/bashsmitha:x:1050:1001:Amy Smith:/export/home/smitha:/bin/bashjonesc:x:1053:1001:Cathy Jones:/export/home/jonesc:/bin/kshsmithd:x:1055:1001:Dan Smith Jr:/export/home/smithd:/bin/csh So the output should be northj:x:1003:1003:Jim jones-North:/home/northj:/bin/cshdenniss:x:1005:1003:Sue Dennis:/home/denniss:/bin/bashsmitha:x:1050:1001:Amy Smith:/export/home/smitha:/bin/bashjonesc:x:1053:1001:Cathy Jones:/export/home/jonesc:/bin/kshsmithd:x:1055:1001:Dan Smith Jr:/export/home/smithd:/bin/csh I can easily just run egrep '1001|1003' mypasswd ,but that also gives me "daemon" (fifth field contains "1001")and "olivert" (third field is "1001"). I'm just needing the 4th field values (values that are after three colons) that match those two numbers using egrep/grep regular expressions. Any answers are greatly appreciated, as they will help me out in the long run with this. | It would be more direct, in my opinion, to use a tool like awk that can: split fields for you test exactly the fields you want for the values you want For example: awk -F: '$4 == 1001 || $4 == 1003' mypasswd ... 
tells awk to: split the incoming lines into fields based on colons, with -F: uses an "or" expression to test whether field 4 has the value 1001 or 1003 if the condition above is true, then print the line (the default action) Awk can take a little bit to learn; one of the major things to understand about it is that it uses paired "pattern" and "action" statements. The "pattern" section(s) determine which "action" statements get executed. You could rewrite the above awk to make it more explicit; by doing so, we can explicitly print whatever we want (such as the 5th field): awk -F: '$4 == 1001 || $4 == 1003 { print $5 }' ... or to have an empty "pattern" section -- meaning, execute the "action" for every line, and then test inside the action pattern for the values: awk -F: '{ if ($4 == 1001 || $4 == 1003) print $5 }' To force grep into action, you could do: grep -E '^([^:]*:){3}(1001|1003):' mypasswd | cut -d: -f5 To tell it to look, from the beginning of the line, for the group "anything-that-isn't-a-colon any number of times, followed a colon" three times, followed by either 1001 or 1003, then followed by a colon; print the whole matching line, but then pass it to cut to print just the 5th field. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/551988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381828/"
]
} |
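A runnable sketch of the awk field test above, using a few of the question's own lines inlined as sample data:

```shell
# a few lines from the question's mypasswd file, inlined as sample data
input='olivert:x:1001:1005:Tom Oliver:/home/olivert:/bin/csh
northj:x:1003:1003:Jim jones-North:/home/northj:/bin/csh
smitha:x:1050:1001:Amy Smith:/export/home/smitha:/bin/bash'

# keep only lines whose 4th colon-separated field is 1001 or 1003
out=$(printf '%s\n' "$input" | awk -F: '$4 == 1001 || $4 == 1003')
printf '%s\n' "$out"
```

Note that olivert is correctly excluded even though its 3rd field is 1001 — only field 4 is tested.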
551,997 | I am grepping date 2019-09-XX and 2019-10-XX but somehow my grep not helping out, i am sure i am missing something here Last Password Change: 2019-10-30 Last Password Change: 2017-02-07 Last Password Change: 2019-10-29 Last Password Change: 2019-11-03 Last Password Change: 2019-10-31 Last Password Change: 2018-09-27 Last Password Change: 2018-09-27 Last Password Change: 2019-06-27 I am doing following and it doesn't work grep "2019\-[09,10]\-" file also tried grep "2019\-{09,10}\-" file | You want the "alternation" regular expression token | to say "either this or that": grep -E '2019-(09|10)-' file See Why does my regular expression work in X but not in Y? for some background on regular expression tokens and regex classes (basic, extended, etc). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/551997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29656/"
]
} |
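A runnable sketch of the alternation answer above; the dates here are sample data for illustration:

```shell
input='Last Password Change: 2019-10-30
Last Password Change: 2017-02-07
Last Password Change: 2019-09-27
Last Password Change: 2019-11-03'

# -E enables extended regexes, where (09|10) means "either 09 or 10"
out=$(printf '%s\n' "$input" | grep -E '2019-(09|10)-')
printf '%s\n' "$out"
```

The trailing dash keeps 2019-11-03 from matching: only months 09 and 10 get through.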
552,000 | I tried looking into old posts but didn't get my answer. Thanks in advance! I have files in which I have lines with the same pattern. I would like to replace one of the following [mau] with "A" after a specific string. For example: n=22, string="abnt7777/knowthis", char to change: "m" abnt7777/knowthisDONTKNOWWICHCAHRACTERSmRESTOFSTRING What I know is the string that comes first, "abnt7777/knowthis"; as well, I know how many characters there are between "abnt7777/knowthis" and the character, which is 22. After the change it would be: abnt7777/knowthisDONTKNOWWICHCAHRACTERSARESTOFSTRING | You want the "alternation" regular expression token | to say "either this or that": grep -E '2019-(09|10)-' file See Why does my regular expression work in X but not in Y? for some background on regular expression tokens and regex classes (basic, extended, etc). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381833/"
]
} |
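For the fixed-offset replacement the question describes, one hedged sed sketch — the offset 22, the known prefix, and the [mau] class are all taken from the question:

```shell
s='abnt7777/knowthisDONTKNOWWICHCAHRACTERSmRESTOFSTRING'

# capture the known prefix plus the next 22 unknown characters,
# then replace the single [mau] character that follows with "A"
out=$(printf '%s\n' "$s" | sed -E 's|(abnt7777/knowthis.{22})[mau]|\1A|')
printf '%s\n' "$out"
```

Using | as the sed delimiter avoids having to escape the / in the prefix.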
552,107 | I can set the MTU of an interface, e.g.: ip link set dev eth0 mtu 9000 However, different interfaces and different machines appear to have different limits, resulting in an error: Error: mtu greater than device maximum. I'm trying to find a way to check whether a NIC supports a specific MTU size without trying to set it first; actually, I want to find the theoretical maximum MTU on all interfaces on all my servers. I've inspected all features of ethtool, looked in /sys/class/net, etc, but all I can find is the current MTU value. Is there a way to see how high the MTU can be on an interface without trying it? | Amazingly, I found that ip reports this information if asked. ip -d link list 21: enxa44cc8aa52bd: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000 link/ether a4:4c:c8:aa:52:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9194 addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 16354 gso_max_segs 65535 minmtu and maxmtu are the answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257317/"
]
} |
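A small sketch that pulls the maxmtu value out of ip -d link output; the sample line below is canned output (as quoted in the answer) so the parsing can be shown without a live interface:

```shell
# one detail line of sample `ip -d link` output
sample='link/ether a4:4c:c8:aa:52:bd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9194 addrgenmode none'

# print the value that follows the "maxmtu" keyword
maxmtu=$(printf '%s\n' "$sample" |
    awk '{ for (i = 1; i < NF; i++) if ($i == "maxmtu") print $(i + 1) }')
echo "$maxmtu"

# on a live system the same awk program reads from ip directly, e.g.
#   ip -d link show dev eth0 | awk '...'
```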
552,125 | On a Linux machine I have a series of commands that offer numerical values of the state of different sensors. The call of these commands is something similar to the following: $ command15647$ command276$ command38754 These values change in real time, and every time I want to check the status of one of them, I have to re-launch the command... This doesn't do me any good since I need both hands to manipulate the hardware. My goal is to make a simple Bash Script that calls these commands and keeps the value updated (in real time asynchronously or refreshing the value every x seconds) like this: $ ./myScript.shcommand1: xcommand2: ycommand3: zcommand4: v Where x , y , z and v are the changing values. Bash allows this simply and efficiently? or should I choose to do it in another language, like Python? UPDATE with more info: My current script is: #!/bin/bashecho "Célula calibrada: " $(npe ?AI1)echo "Anemómetro: " $(npe ?AI2)echo "Célula temperatura: " $(npe ?AI3)echo "Célula temperatura: " $(npe ?AI4) npe being an example command that returns the numeric value. I expect an output like this: This output I get with the command watch -n x ./myScript.sh , where x is refresh value of seconds. If I edit my script like this: #!/bin/bashwhile sleep 1; do clear; # added to keep the information in the same line echo "Célula calibrada: " $(npe ?AI1); echo "Anemómetro: " $(npe ?AI2); echo "Célula temperatura: " $(npe ?AI3); echo "Célula temperatura: " $(npe ?AI4);done I get my output with an annoying flicker: | It might be tricky to implement a real time solution in bash. There are many ways to run script once in X seconds you can use watch .I assume you already have myScript.sh available. Replace X with number of seconds you need. watch -n X ./myScript.sh while sleep X; do ./myScript.sh; done upd. to emulate watch you might want to clear the screen in between iterations. 
Inside the script it will look this way: while sleep X; do clear; command1; command2; done Add one of the options above to the script itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373893/"
]
} |
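One way to tame the flicker mentioned in the question's update is to build the whole frame first and repaint in a single write, instead of running clear between individual echo lines. npe is the question's own sensor command, stubbed here so the frame-building can be run anywhere:

```shell
# stub standing in for the real `npe` sensor command
npe() { echo 42; }

build_frame() {
    printf 'Célula calibrada: %s\n' "$(npe '?AI1')"
    printf 'Anemómetro: %s\n' "$(npe '?AI2')"
}

frame=$(build_frame)
# move the cursor home, clear to end of screen, and draw in one write
printf '\033[H\033[J%s\n' "$frame"
```

Put the last two steps inside the while sleep loop; because the erase and the redraw happen back-to-back, the screen is blank for far less time than with clear followed by several separate echo calls.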
552,143 | I have lost data (film files and docs) on my external hard disk with dd (the disk was ext4-formatted, 2 TB, and held only videos and some files). I "restored" them with PhotoRec , but with bad quality... Because I want to learn from my mistake, I would like to know how to set up an external hard disk safely, so that next time, if a similar mistake happens, the data can be restored without problems and with good quality... For example, what do you recommend for the partition table, MBR or GPT ? Thank you very much for your recommendations! | It might be tricky to implement a real-time solution in bash. There are many ways to run a script once every X seconds; you can use watch . I assume you already have myScript.sh available. Replace X with the number of seconds you need. watch -n X ./myScript.sh while sleep X; do ./myScript.sh; done Update: to emulate watch you might want to clear the screen between iterations. Inside the script it will look this way: while sleep X; do clear; command1; command2; done Add one of the options above to the script itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/374478/"
]
} |
552,169 | I have kind of an annoying problem. I have some old text files once exported from a retired financial system long since shut down. Some lines of the data are corrupt, so that values occur in the wrong column. Example: 123 99999 123 87675 65453 62 123 64534 The values in the first column should never consist of 5 numbers and the second column should always consist of 5 numbers. So far I came up with a way to find the problematic lines: cat tempfile | grep -n '^[0-9][0-9][0-9][0-9][0-9]' I would like to find a way to find the row number of the problematic line, just as above: 65463 62 .... then insert "123" and a space or tab, to make it look like, 123 65463 62 How could this be done the least complicated way, preferably in Bash? Regards, Paul | It might be tricky to implement a real-time solution in bash. There are many ways to run a script once every X seconds; you can use watch . I assume you already have myScript.sh available. Replace X with the number of seconds you need. watch -n X ./myScript.sh while sleep X; do ./myScript.sh; done Update: to emulate watch you might want to clear the screen between iterations. Inside the script it will look this way: while sleep X; do clear; command1; command2; done Add one of the options above to the script itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238095/"
]
} |
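For the fix this question actually asks for — prepend the missing "123 " to any line whose first field is five digits — a hedged sed sketch on the question's sample lines:

```shell
input='123 99999
65453 62
123 64534'

# a line starting with five digits (then whitespace) lost its first column
out=$(printf '%s\n' "$input" | sed -E 's/^([0-9]{5}[[:space:]])/123 \1/')
printf '%s\n' "$out"
```

The [[:space:]] after the five digits keeps longer numbers from matching, and intact lines (which start with three digits) pass through untouched.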
552,188 | I would like to remove empty lines from the beginning and the end of a file, but not remove empty lines between non-empty lines in the middle. I think sed or awk would be the solution. Source: 1:2:3:line14:5:line26:7:8: Output: 1:line12:3:line2 | Try this. To remove blank lines from the beginning of a file: sed -i '/./,$!d' filename To remove blank lines from the end of a file: sed -i -e :a -e '/^\n*$/{$d;N;ba' -e '}' file To remove blank lines from the beginning and end of a file: sed -i -e '/./,$!d' -e :a -e '/^\n*$/{$d;N;ba' -e '}' file From man sed , -e script, --expression=script -> add the script to the commands to be executed b label -> Branch to label; if label is omitted, branch to end of script. a -> Append text after a line (alternative syntax). $ -> Match the last line. N -> Add a newline to the pattern space, then append the next line of input to the pattern space. If there is no more input then sed exits without processing any more commands. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/552188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215365/"
]
} |
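An awk alternative that trims leading and trailing blank lines in a single pass — interior blanks are buffered and only flushed when another non-blank line follows (a sketch, treating "blank" as a line with zero fields):

```shell
input='

line1

line2

'

out=$(printf '%s\n' "$input" | awk '
    NF   { if (seen) printf "%s", buf; buf = ""; seen = 1; print; next }
    seen { buf = buf $0 "\n" }')
printf '%s\n' "$out"
```

Blank lines before the first non-blank line are dropped (seen is still 0), blank lines at the end stay in the buffer and are never printed, and blank lines in the middle survive because a later non-blank line flushes them.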
552,302 | This is more like a conceptual question. I need some clarifications. Today I was learning some socket programming stuff and wrote a simple chat server and chat client based on Beej's Guide to Network Programming . (chat server receives clients message and send messages to all the other clients) I copied the chat server and I wrote my own chat client . The chat client is just a program to send stdin input to server and print socket data from server. Later I noticed that the guide says I can just use telnet to connect to the server. I tried and it worked. I was unfamiliar with telnet and for a long time I don't know what exactly it is. So now my experience confuses me: Isn't telnet just a simple TCP send/echo program? What makes it so special to be a protocol thing? My dumb chat client program doesn't create a [application] protocol. From Wikipedia Communication_protocol : In telecommunication, a communication protocol is a system of rules that allow two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. What rules does Telnet create? telnet host port , open a TCP stream socket for raw input/output? That's not a rule. | Telnet is defined in RFC 854 . What makes it (and anything else) a protocol is a set of rules/constraints. One such rule is that Telnet is done over TCP, and assigned port 23 - this stuff might seem trivial, but it needs to be specified somewhere. You can't just send whatever you want, there are limitations and special meaning to some things. For example, it defines a "Network Virtual Terminal" - this is because when telnet was established, there could be many different terminals: A printer, a black/white monitor, a color monitor that supported ANSI codes, etc. 
Also, there's stuff like this: In summary, WILL XXX is sent, by either party, to indicate that party's desire (offer) to begin performing option XXX, DO XXX and DON'T XXX being its positive and negative acknowledgments; similarly, DO XXX is sent to indicate a desire (request) that the other party (i.e., the recipient of the DO) begin performing option XXX, WILL XXX and WON'T XXX being the positive and negative acknowledgments. Since the NVT is what is left when no options are enabled, the DON'T and WON'T responses are guaranteed to leave the connection in a state which both ends can handle. Thus, all hosts may implement their TELNET processes to be totally unaware of options that are not supported, simply returning a rejection to (i.e., refusing) any option request that cannot be understood. In modern times, most of the stuff isn't really that important anymore (then again, telnet as a protocol isn't being used much anymore, not just because it lacks security) so in practice it boils down to send/echo unless you have to actually interface with terminals. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208590/"
]
} |
552,324 | Is there any way to exclude multiple files and directories concisely with rsync? Below is my code: rsync -avzh . blabla@blabla:~/ --exclude=__pycache__ --exclude=checkpoints --exclude=logs --exclude=.git --exclude=plot I have to explicitly declare each file (dir). I feel it is too long. | The documentation for rsync (see man rsync ) has the --exclude-from parameter that will allow you to specify a list of exclusions in a file. With regard to your set of example exclusions, the directories should be followed with / to show they are directories rather than unspecified files or directories, and those that are only in your home directory itself should be prefixed with / so that they don't match anywhere else. In an exclusions file they could be listed like this: # Directories found anywhere __pycache__/ checkpoints/ logs/ plot/ # Directories found only in HOME /.git/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382022/"
]
} |
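Putting the answer together as a sketch: write the exclusions once, then point rsync at the file. The blabla@blabla target is the question's own placeholder, so the transfer command is shown but not executed here:

```shell
exclude_file=$(mktemp)
cat > "$exclude_file" <<'EOF'
# Directories found anywhere
__pycache__/
checkpoints/
logs/
plot/
# Directories found only in HOME
/.git/
EOF

# sketch of the transfer itself (not run here):
#   rsync -avzh --exclude-from="$exclude_file" . blabla@blabla:~/

entries=$(grep -c '/$' "$exclude_file")   # every real entry names a directory
echo "$entries"
rm -f "$exclude_file"
```

Adding --dry-run to the rsync command first is a cheap way to confirm the exclusions match what you expect before any data moves.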
552,436 | :>filename.txt For example: root@box$ dd if=/dev/zero of=file.txt count=1024 bs=10241024+0 records in1024+0 records out1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536175 s, 196 MB/sroot@box$ lltotal 1024-rw-r--r-- 1 root root 1048576 Nov 15 14:40 file.txtroot@box$ :>file.txtroot@box$ lltotal 0-rw-r--r-- 1 root root 0 Nov 15 14:40 file.txt Is this different from an rm ? Does it operate faster or slower than other similar means of zeroing a file or deleting it? | As you have discovered, this just empties the file contents (it truncates the file); that is different from rm as rm would actually remove the file altogether. Additionally, :>file.txt will actually create the file if it didn't already exist. : is a "do nothing command" that will exit with success and produce no output, so it's simply a short method to empty a file. In most shells, you could simply do >file.txt to get the same result. It also could be marginally faster than other methods such as echo >file.txt as echo could potentially be an external command. Additionally, echo >file.txt would put a blank line in file.txt where :>file.txt would make the file have no contents whatsoever. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/552436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137794/"
]
} |
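A quick demonstration of the difference described above: after : > file the file still exists but holds zero bytes, whereas rm would have removed it entirely (a sketch using a temporary file):

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" count=4 bs=1024 2>/dev/null

before=$(wc -c < "$f")
: > "$f"                      # truncate: the file remains, contents gone
after=$(wc -c < "$f")
exists=no; [ -e "$f" ] && exists=yes

echo "before=$before after=$after exists=$exists"
rm -f "$f"
```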
552,578 | I need to write a script to convert a phone number from 123-123-1234 to (123) 123-1234 and 1231231234. I can change all instances of the hyphen with tr command, but how do I change just one of two? I did the second part with this code echo "Enter phone number (xxx-xxx-xxxx)"read phoneecho $phone | cat > phonetr -d "-" < phone I know there is a better way, but I am a new student and just don't know it yet,Thanks in advance for your help | The title ("How do I change just one character in a string when there more than one of that character?") is your attempted solution but the actual problem is explained in the question body. Compare XY problem . My answer addresses the body. IFS=- read n1 n2 n3 This will populate n1 , n2 and n3 variables with fragments of the user's input; respectively: before the first - , between the first - and the second - , after the second - (including remaining - characters, if any). Then you can use printf to print them in any format you want, e.g. printf '(%s) %s-%s\n' "$n1" "$n2" "$n3"printf '%s%s%s\n' "$n1" "$n2" "$n3" You may want to be liberal in what you accept and not require a strict format. An example as a shell function: get_phone_number() { IFS= read -rp "Enter phone number (10 digits; additional dashes, spaces, whatever allowed): " p || return 1 # remove non-digits p="$(tr -dc '0123456789' <<< "$p")" # check if there is 10 of them if [ "${#p}" -ne 10 ]; then echo "Wrong number of digits. Aborting." >&2; return 2 else n1="${p:0:3}" n2="${p:3:3}" n3="${p:6:4}" fi} Invoke get_phone_number . If the function succeeds then use $n1 , $n2 and $n3 with printf (like above) or in whatever way you want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381742/"
]
} |
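The same split can be shown non-interactively; a here-document feeds read so the example runs without a prompt:

```shell
phone='123-123-1234'

# split on "-" into three variables; no tr needed
IFS=- read -r n1 n2 n3 <<EOF
$phone
EOF

formatted=$(printf '(%s) %s-%s' "$n1" "$n2" "$n3")
digits=$(printf '%s%s%s' "$n1" "$n2" "$n3")
echo "$formatted"
echo "$digits"
```

The IFS=- prefix only affects the read command itself, so the shell's normal word splitting is untouched afterwards.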
552,601 | I got the following error message when I use ssh -v -Y . The server OS is mojave. Does anybody know what is wrong? debug1: No xauth program.Warning: No xauth data; using fake authentication data for X11 forwarding.debug1: Requesting X11 forwarding with authentication spoofing.debug1: Sending environment.debug1: Sending env LANG = en_US.UTF-8debug1: Remote: No xauth program; cannot forward X11.X11 forwarding request failed on channel 0 | Both your client and the server are complaining that they can't find the xauth program. The "debug1: No xauth program" message comes from your client, saying it can't find a copy of xauth locally. The "Remote: No xauth program; cannot forward X11" message is from the server, saying it can't find xauth either. The default location for both client and server is /usr/X11R6/bin/xauth , though your vendor could change it. For the client, you can set the Xauth location in your .ssh/config : XAuthLocation /some/path/to/xauth For the server, you must set the location in the remote server's sshd_config : XAuthLocation /opt/X11/bin/xauth After modifying the config, you should run sshd -t to validate the config, then restart sshd to make it reread the file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323132/"
]
} |
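Before editing either config it can help to confirm where (or whether) xauth actually exists; a small sketch that checks PATH first and then the two default locations mentioned in the answer:

```shell
find_xauth() {
    command -v xauth 2>/dev/null && return 0
    for p in /opt/X11/bin/xauth /usr/X11R6/bin/xauth; do
        [ -x "$p" ] && { echo "$p"; return 0; }
    done
    return 1
}

if xauth_path=$(find_xauth); then
    echo "XAuthLocation $xauth_path"     # the line to drop into the config
else
    echo "xauth not installed"
fi
```

Run it locally for the client-side ~/.ssh/config and on the remote host for sshd_config.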
552,651 | Problem I have to compare a string "problem1.sh" with itself. It works fine in case of solution1.sh (given below) where I have used square brackets for comparison. However, it does not work in case of solution2.sh (given below) where I have used round brackets. It shows the mentioned error (given below) . What I have tried? I have tried to learn the difference between the use of square and round brackets in bash script from here and here . I understand that ((expression)) is used for comparison arithmetic values, not for the string values. So, what does create the problem? If I remove sub-string ".sh" from the string "problem1.sh" and compare using the same statement if (("problem1" == "problem1")) , it works fine. However, when I just add "." in the string, it creates the problem. Even if I remove everything except "." from the string and use the statement if (("." == ".")) , it shows me error. Then, my question If the statement if (("problem1" == "problem1")) can work fine (may be, it will work fine for every letter of English alphabet) , why does "." string create the problem? I mean, Why we can not compare "." using round brackets in if statement of bash script (e.g if (("." == ".")) ) when we can compare letters using the same expression (e.g if (("findError" == "findError")) ) ? solution1.sh if [ "problem1.sh" == "problem1.sh" ]then printf "Okay" fi solution2.sh if (( "problem1.sh" == "problem1.sh" ))then printf "Okay" fi Error Message For solution2.sh ./solution2.sh: line 1: ((: problem1.sh == problem1.sh : syntax error: invalid arithmetic operator (error token is ".sh == problem1.sh ") | There is a problem in the translation of languages. In the arithmetic expression language a dot doesn't exist. You are using a language that can not work inside a "$((…))". 
Inside "$((…))" (an arithmetic expression) there could be only numbers (usually integers like 1234 ), operators ( + , - , * , << and yes == as equality operator, among others), and variables (one or more letters, numbers (not the first character) and underscores). That's all. There is no concept of string . When you write problem1 inside an arithmetic expression it is understood in that language as a variable name (with value 0 if not previously defined): $ echo "==$((problem1))=="==0==$ problem1=34$ echo "==$((problem1))=="==34== It doesn't matter if the text is inside quotes: $ echo "==$(("problem1"))=="==34== What you are using, the ((…)) is also an Arithmetic Expression which just happens to have no output. It just sets the exit status (and, as in C, it is true if the result of the expression is not 0 ). $ (( 1 + 1 )) ; echo "$?"0$ (( 0 )) ; echo "$?"1$ (( 1 - 10 )) ; echo "$?"0 But an arithmetic expression doesn't understand what a dot is, neither as an operator nor as a variable name, so, it generates a syntax error: $ echo "$(( 1.3 ))"bash: 1.3 : syntax error: invalid arithmetic operator (error token is ".3 ") The same also apply to a variable name ( problem1 ) followed by a dot and followed by another variable name ( sh ). $ echo "$((problem1.sh))"bash: problem1.sh: syntax error: invalid arithmetic operator (error token is ".sh") If the operator were a + instead of a dot, the expression would work: $ echo "$((problem1+sh))"34 If problem1 has been set to 34 (as above). So, the only way to compare strings is to use [[…]] : $ [[ problem1.sh == problem1.sh ]] && echo YESYES (not quoting the right side of == in this particular case as there is no variable expansion and the string has no glob characters, but in general, do quote the string on the right side of the == ). Or, similar to what you wrote: if [[ "problem1.sh" == "problem1.sh" ]]then printf "Okay" fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381337/"
]
} |
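A short side-by-side of the two constructs from the answer; the failing case is left as a comment, since running it would abort the snippet with the very syntax error the question hit:

```shell
s='problem1.sh'

# string comparison belongs in [ ... ] (or [[ ... ]] in bash)
cmp=no
if [ "$s" = 'problem1.sh' ]; then
    cmp=match
fi

# inside $(( ... )) bare words are variable names, never strings
problem1=34
sum=$((problem1 + 1))

# $(( "problem1.sh" == "problem1.sh" )) would be a syntax error:
# "." is neither an arithmetic operator nor valid in a variable name
echo "$cmp $sum"
```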
552,707 | I renewed my gpg key pair, but I am still receiving the following error from gpg. gpg: WARNING: Your encryption subkey expires soon.gpg: You may want to change its expiration date too. How can I renew the subkey? | List your keys. $ gpg --list-keys...-------------------------------pub rsa2048 2019-09-07 [SC] [expires: 2020-11-15] AF4RGH94ADC84uid [ultimate] Jill Doe (CX) <[email protected]>sub rsa2048 2019-09-07 [E] [expired: 2019-09-09]pub rsa2048 2019-12-13 [SC] [expires: 2020-11-15] 7DAA371777412uid [ultimate] Jill Doe <[email protected]>-------------------------------... We want to edit key AF4RGH94ADC84.The subkey is the second one in the list that is named ssb $ gpg --edit-key AF4RGH94ADC84gpg> listsec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]> So we want to edit the first subkey (ssb) ssb rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]> When you select key (1), you should see the * next to it such as ssb* . Then you can set the expiration and then save. gpg> key 1sec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb* rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]>gpg> expire...Changing expiration time for a subkey.Please specify how long the key should be valid. 0 = key does not expire <n> = key expires in n days <n>w = key expires in n weeks <n>m = key expires in n months <n>y = key expires in n yearsKey is valid for? (0) 2yKey expires at Wed 9 Sep 16:20:33 2021 GMTIs this correct? 
(y/N) ysec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb* rsa2048/56ABDJFDKFN created: 2019-09-07 expires: 2021-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]>...gpg> save Don't forget to save the changes before quitting! | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/552707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19319/"
]
} |
552,713 | I have a big file counting genotype input file. Here is the first few lines: LocusID f nAlleles x y2L:8347 1 2 44.3166 -12.23732L:8347 1 2 39.2667 -6.83332L:31184 1 2 39.2667 -6.83332L:31184 1 2 39.2667 -6.83332L:42788 1 2 39.2667 -6.83332L:42788 1 2 39.2667 -6.83332L:42887 1 2 39.2667 -6.83332L:42887 1 2 39.2667 -6.8333 The first column is locus ID and for each locus I have two rows with identical locus IDs. I want to keep only those which column x and column y are not qual for each locus. here is my desired output from the above example out2L:8347 1 2 44.3166 -12.23732L:8347 1 2 39.2667 -6.8333 Any idea how I can do it? | List your keys. $ gpg --list-keys...-------------------------------pub rsa2048 2019-09-07 [SC] [expires: 2020-11-15] AF4RGH94ADC84uid [ultimate] Jill Doe (CX) <[email protected]>sub rsa2048 2019-09-07 [E] [expired: 2019-09-09]pub rsa2048 2019-12-13 [SC] [expires: 2020-11-15] 7DAA371777412uid [ultimate] Jill Doe <[email protected]>-------------------------------... We want to edit key AF4RGH94ADC84.The subkey is the second one in the list that is named ssb $ gpg --edit-key AF4RGH94ADC84gpg> listsec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]> So we want to edit the first subkey (ssb) ssb rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]> When you select key (1), you should see the * next to it such as ssb* . Then you can set the expiration and then save. gpg> key 1sec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb* rsa2048/56ABDJFDKFN created: 2019-09-07 expired: 2019-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]>gpg> expire...Changing expiration time for a subkey.Please specify how long the key should be valid. 
0 = key does not expire <n> = key expires in n days <n>w = key expires in n weeks <n>m = key expires in n months <n>y = key expires in n yearsKey is valid for? (0) 2yKey expires at Wed 9 Sep 16:20:33 2021 GMTIs this correct? (y/N) ysec rsa2048/AF4RGH94ADC84 created: 2019-09-07 expires: 2020-11-15 usage: SC trust: ultimate validity: ultimatessb* rsa2048/56ABDJFDKFN created: 2019-09-07 expires: 2021-09-09 usage: E[ultimate] (1). Jill Doe (CX) <[email protected]>...gpg> save Don't forget to save the changes before quitting! | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/552713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216256/"
]
} |
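For the genotype question itself, a hedged awk sketch — it assumes the two rows of each locus are adjacent, as in the sample, and prints a pair only when its x (field 4) or y (field 5) values differ:

```shell
input='LocusID f nAlleles x y
2L:8347 1 2 44.3166 -12.2373
2L:8347 1 2 39.2667 -6.8333
2L:31184 1 2 39.2667 -6.8333
2L:31184 1 2 39.2667 -6.8333'

out=$(printf '%s\n' "$input" | awk '
    NR == 1 { next }                                # skip the header
    $1 == id { if ($4 != x || $5 != y) { print saved; print } }
    { id = $1; x = $4; y = $5; saved = $0 }')
printf '%s\n' "$out"
```

Each line is remembered; when the next line carries the same locus ID, both rows are printed only if x or y changed between them.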
552,723 | I am trying to do these in a script. I have to run some commands on a remote host. Currently, I am doing this: ssh root@host 'bash -s' < command1ssh root@host 'bash -s' < command2ssh root@host 'bash -s' < command3 However, this means that I have to connect to the server repeatedly, which is increasing a lot of time between processing of the commands. I am looking for something like this: varSession=$(ssh root@host 'bash -s')varSeesion < command1varSeesion < command2varSeesion < command3 Again, I need to run these commands via a script. I have taken a look at screen but I am not sure if it can be used in a script. | You can use a ControlMaster and ControlPersist to allow a connection to persist after the command has terminated: When used in conjunction with ControlMaster , specifies that the master connection should remain open in the background (waiting for future client connections) after the initial client connection has been closed. If set to no , then the master connection will not be placed into the background, and will close as soon as the initial client connection is closed. If set to yes or 0 , then the master connection will remain in the background indefinitely (until killed or closed via a mechanism such as the “ ssh -O exit ”). If set to a time in seconds, or a time in any of the formats documented in sshd_config(5) , then the backgrounded master connection will automatically terminate after it has remained idle (with no client connections) for the specified time. So, the first SSH command will setup a control file for the connection, and the other two will reuse that connection via that control file. Your ~/.ssh/config should have something like: Host host User root ControlMaster auto ControlPath /tmp/ssh-control-%C ControlPersist 30 # or some safe timeout And your script won't need any other changes. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/552723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376320/"
]
} |
552,743 | The other day I had a script error which wrote 4 million small text files to my home directory: I've accidentally written 4 million small text files to a folder, how best to get rid of them? I deleted those files, but since then whenever I hit tab to complete a filename or path there's a half second delay before anything happens. Although the files are now deleted, I assume there's some lasting damage to the gpt or similar? Are there any useful tools I can use to clean this up? The filesystem is ext4 (two 3TB drives in RAID 1) and I'm running CentOS 7. % ls -ld "$HOME"drwx------. 8 myname myname 363606016 Nov 18 09:21 /home/myname Thank you | As mentioned in the comments, your home directory itself is huge, and won’t shrink again. Scanning your home directory’s contents will involve reading a lot of data, every single time (from cache or disk). To fix this, you need to re-create your home directory: log out, log in as root, and make sure no running process refers to your home directory: lsof /home/myname copy your home directory: cd /homecp -al myname myname.new rename your home directory out of the way: mv myname myname.old rename your new home directory: mv myname.new myname You can log back in now. Your shiny, new home directory will only occupy the space it really needs, and file operations should be as fast as you expect. cp -al ensures that all files are available under the new directory, but it uses hard links so that no additional space is taken (apart from the directory structure). Because of the hard links, any changes made to files in one of the directories are reflected in the other directory, but you can safely remove myname.old . A similar approach can be used for any directory which used to contain a large number of files, although in most other cases you won’t need to log out first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142729/"
]
} |
552,809 | I'd like to know if apk add is capable of automatically assuming yes to any prompts when installing a new package on Alpine Linux? I'm familiar with running something like apt-get install -y curl on Ubuntu and wondering if there's an equivalent command for my use case. | apk does not need a --yes argument as it is designed to run non-interactively from the get-go and does not prompt the user unless the -i / --interactive argument is given (and then only for "certain operations"). Ref apk --help --verbose . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/552809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382529/"
]
} |
552,845 | The output of tput ed is empty and I can't figure out why. Other capabilities work fine. Also ed is not missing from infocmp output so tput should match, right? $ printf '%q' "$(tput ed)"'' $ printf '%q' "$(tput home)"$'\033'\[H I'm using zsh on Mac OS 10.14.6 and iTerm2. TERM=xterm-256color. | After more googling and scouring the documentation (mainly terminfo), I finally figured out that I need to fall back to the older termcap code since the capname is not supported for all terminfo capabilities. ed=$(tput ed || tput cd) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187509/"
]
} |
552,857 | I have a CentOS 8 guest running on a Fedora 31 host. The guest is attached to a bridge network, virbr0 , and has address 192.168.122.217 . I can log into the guest via ssh at that address. If I start a service on the guest listening on port 80, all connections from the host to the guest fail like this: $ curl 192.168.122.217curl: (7) Failed to connect to 192.168.122.217 port 80: No route to host The service is bound to 0.0.0.0 : guest# ss -tlnState Recv-Q Send-Q Local Address:Port Peer Address:PortLISTEN 0 128 0.0.0.0:22 0.0.0.0:*LISTEN 0 5 0.0.0.0:80 0.0.0.0:*LISTEN 0 128 [::]:22 [::]:* Using tcpdump (either on virbr0 on the host, or on eth0 on the guest), I see that the guest appears to be replying with an ICMP "admin prohibited" message. 19:09:25.698175 IP 192.168.122.1.33472 > 192.168.122.217.http: Flags [S], seq 959177236, win 64240, options [mss 1460,sackOK,TS val 3103862500 ecr 0,nop,wscale 7], length 019:09:25.698586 IP 192.168.122.217 > 192.168.122.1: ICMP host 192.168.122.217 unreachable - admin prohibited filter, length 68 There are no firewall rules on the INPUT chain in the guest: guest# iptables -S INPUT-P INPUT ACCEPT The routing table in the guest looks perfectly normal: guest# ip routedefault via 192.168.122.1 dev eth0 proto dhcp metric 100172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.217 metric 100 SELinux is in permissive mode: guest# getenforcePermissive If I stop sshd and start my service on port 22, it all works as expected. What is causing these connections to fail? 
In case someone asks for it, the complete output of iptables-save on the guest is: *filter:INPUT ACCEPT [327:69520]:FORWARD DROP [0:0]:OUTPUT ACCEPT [285:37235]:DOCKER - [0:0]:DOCKER-ISOLATION-STAGE-1 - [0:0]:DOCKER-ISOLATION-STAGE-2 - [0:0]:DOCKER-USER - [0:0]-A FORWARD -j DOCKER-USER-A FORWARD -j DOCKER-ISOLATION-STAGE-1-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT-A FORWARD -o docker0 -j DOCKER-A FORWARD -i docker0 ! -o docker0 -j ACCEPT-A FORWARD -i docker0 -o docker0 -j ACCEPT-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2-A DOCKER-ISOLATION-STAGE-1 -j RETURN-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP-A DOCKER-ISOLATION-STAGE-2 -j RETURN-A DOCKER-USER -j RETURNCOMMIT*security:INPUT ACCEPT [280:55468]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [285:37235]COMMIT*raw:PREROUTING ACCEPT [348:73125]:OUTPUT ACCEPT [285:37235]COMMIT*mangle:PREROUTING ACCEPT [348:73125]:INPUT ACCEPT [327:69520]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [285:37235]:POSTROUTING ACCEPT [285:37235]COMMIT*nat:PREROUTING ACCEPT [78:18257]:INPUT ACCEPT [10:600]:POSTROUTING ACCEPT [111:8182]:OUTPUT ACCEPT [111:8182]:DOCKER - [0:0]-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER-A DOCKER -i docker0 -j RETURNCOMMIT | Well, I figured it out. And it's a doozy. CentOS 8 uses nftables , which by itself isn't surprising. It ships with the nft version of the iptables commands, which means when you use the iptables command it actually maintains a set of compatibility tables in nftables. However... Firewalld -- which is installed by default -- has native support for nftables, so it doesn't make use of the iptables compatibility layer. 
So while iptables -S INPUT shows you: # iptables -S INPUT-P INPUT ACCEPT What you actually have is: chain filter_INPUT { type filter hook input priority 10; policy accept; ct state established,related accept iifname "lo" accept jump filter_INPUT_ZONES_SOURCE jump filter_INPUT_ZONES ct state invalid drop reject with icmpx type admin-prohibited <-- HEY LOOK AT THAT! } The solution here (and honestly probably good advice in general) is: systemctl disable --now firewalld With firewalld out of the way, the iptables rules visible with iptables -S will behave as expected. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4989/"
]
} |
552,873 | Is there any way to do a sort of in-place upgrade to CentOS Stream? I'm currently running CentOS 8 and for various reasons would like to switch over to the new Stream version. Is it possible to do without having to reinstall the OS? | Edit: Please check out Anthony Geoghegan's answer for the latest recommendations. This should work as CentOS Stream is just additional repositories on top of CentOS 8 as mentioned on (unofficial) centosfaq.org . I did this on my development machine: $ dnf history centos-release-streamID | Command line | Date and time | Action(s) | Altered------------------------------------------------------------------------------- 156 | update --allowerasing | 2020-03-27 14:10 | E, I, U | 127 < 154 | install -y centos-releas | 2020-03-27 14:04 | Install | 1 > Which resulted in the following enabled repositories $ dnf repolist enabled | grep CentOSAppStream CentOS-8 - AppStreamBaseOS CentOS-8 - BasePowerTools CentOS-8 - PowerToolsStream-AppStream CentOS-Stream - AppStreamStream-BaseOS CentOS-Stream - BaseStream-extras CentOS-Stream - Extrascentosplus CentOS-8 - Plusextras CentOS-8 - Extrasfasttrack CentOS-8 - fasttrack I needed to get rid of some manually compiled packages ( --allowerasing ) though. I would not do this on a production server or without a functioning backup. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317557/"
]
} |
552,922 | I have a systemd [email protected] that every user on the server can start with systemd --user start [email protected] . As root I would like to stop that service for all users when we do maintenance. I haven't found a way in systemd man, not even via Conflict. Is there a way to stop user services as root? | As root, systemd supports actions on user services using a combination of the --user and --machine flags ... e.g. systemctl --user --machine=<user>@.host <command> <service> or, it can be shortened to (on localhost) systemctl --user --machine=<user>@ <command> <service> i.e. systemctl --user --machine=jtingiris@ stop a-user.service That is documented in man 1 systemctl ... -M , --machine= Execute operation on a local container. Specify a containername to connect to, optionally prefixed by a user name toconnect as and a separating " @ " character. If the specialstring " .host " is used in place of the container name, aconnection to the local system is made (which is useful toconnect to a specific user's user bus: " --user [email protected] "). If the " @ " syntax is not used, theconnection is made as root user. If the " @ " syntax is usedeither the left hand side or the right hand side may beomitted (but not both) in which case the local user name and" .host " are implied. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/552922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/339444/"
]
} |
552,964 | How can you determine the hostname associated with an IP on the network? (without configuring a reverse DNS) This was something that I thought was impossible. However I've been using Fing on my mobile. It is capable of finding every device on my network (presumably using an arp-scan) and listing them with a hostname. For example, this app is capable of finding freshly installed Debian Linux devices plugged into a home router, with no apparent reverse DNS. As far as I know neither ping , nor Neighbor Discovery , nor arp include a hostname. So how can fing be getting this for a freshly installed Linux PC? What other protocol on a Linux machine would give out the machine's configured hostname? | The zeroconf protocol suite ( Wikipedia ) could provide this information. The best known implementations are AllJoyn (Windows and others), Bonjour (Apple), Avahi (UNIX/Linux). Example showing a list of everything on a LAN (in this case not very much): avahi-browse --all --terminate+ ens18 IPv6 Canon MG6650 _privet._tcp local+ ens18 IPv4 Canon MG6650 _privet._tcp local+ ens18 IPv6 Canon MG6650 Internet Printer local+ ens18 IPv4 Canon MG6650 Internet Printer local+ ens18 IPv6 Canon MG6650 UNIX Printer local+ ens18 IPv4 Canon MG6650 UNIX Printer local+ ens18 IPv6 Canon MG6650 _scanner._tcp local+ ens18 IPv4 Canon MG6650 _scanner._tcp local+ ens18 IPv6 Canon MG6650 _canon-bjnp1._tcp local+ ens18 IPv4 Canon MG6650 _canon-bjnp1._tcp local+ ens18 IPv6 Canon MG6650 Web Site local+ ens18 IPv4 Canon MG6650 Web Site local+ ens18 IPv6 SERVER _device-info._tcp local+ ens18 IPv4 SERVER _device-info._tcp local+ ens18 IPv6 SERVER Microsoft Windows Network local+ ens18 IPv4 SERVER Microsoft Windows Network local More specifically, you can use avahi-resolve-address to resolve an address to a name. Example avahi-resolve-address 192.168.1.254192.168.1.254 router.roaima... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
552,988 | I upgraded to Fedora 31 today. All good. I came across issue with Docker so I found the solution to run grubby --args="systems.unified_cgroup_hierarchy=0" and then grub2-mkconfig Now, systems won't start into Gnome... stuck at GDM Started and that's it. I can get into tty2, but no way to run GDM. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301657/"
]
} |
552,997 | I have a text file containing the following unicode strings with regular text. Cat a.txt {"relationship":{"type:Memberkey","id""824-\u0001\u0019BFGHDICA2166-01-01","source"} Here \u0001 and \u0019 are unicode strings and is causing our program to fail . Is there a generic command to replace any such string? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/552997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382689/"
]
} |
553,022 | This is what I'm using now to get the job done: #!/bin/sh --string='Aa1!z'if ! printf '%s\n' "$string" | LC_ALL=C grep -q '[[:upper:]]' || \ ! printf '%s\n' "$string" | LC_ALL=C grep -q '[[:lower:]]' || \ ! printf '%s\n' "$string" | LC_ALL=C grep -q '[[:digit:]]' || \ ! printf '%s\n' "$string" | LC_ALL=C grep -q '[[:punct:]]'; then printf '%s\n' 'String does not meet your requirements'else printf '%s\n' 'String meets your requirements'fi This is extermely inefficent and verbose. Is there a better way to do this? | With one call to awk and without pipe: #! /bin/sh -string='whatever'has_char_of_each_class() { LC_ALL=C awk -- ' BEGIN { for (i = 2; i < ARGC; i++) if (ARGV[1] !~ "[[:" ARGV[i] ":]]") exit 1 }' "$@"}if has_char_of_each_class "$string" lower upper digit punct; then echo OKelse echo not OKfi That's POSIX but note that mawk doesn't support POSIX character classes yet. The -- is not needed with POSIX compliant awk s but would be in older versions of busybox awk (which would choke on values of $string that start with - ). A variant of that function using a case shell construct: has_char_of_each_class() { input=$1; shift for class do case $input in (*[[:$class:]]*) ;; (*) return 1;; esac done} Note however that changing the locale for the shell in the middle of a script doesn't work with all sh implementations (so you'd need the script to be called in the C locale already if you want the input to be considered as being encoded in the C locale charset and the character classes to match only the ones specified by POSIX). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
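The `case`-based variant of `has_char_of_each_class` is plain POSIX shell, so it is easy to exercise directly; here it is spelled out with the sample string from the question:

```shell
# Succeed only if $1 contains at least one character from every
# POSIX character class named by the remaining arguments
has_char_of_each_class() {
    input=$1; shift
    for class do
        case $input in
            (*[[:$class:]]*) ;;
            (*) return 1 ;;
        esac
    done
}

if has_char_of_each_class 'Aa1!z' lower upper digit punct; then
    result=OK
else
    result='not OK'
fi
echo "$result"   # prints: OK
```

As the answer notes, the classes are interpreted in the current locale unless the script is already running under `LC_ALL=C`.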
553,051 | I am trying to display the top 20 largest files in a specific directory. I want to include all sub directories but not the actual directories themselves. (I only want files.) I have been trying to find a way to do this and all the solutions I have found online do not work with the version on Unix I am using. I have this so far: find /dir -type f -exec ls -al {} \; | sort -nr | head -n 20du -a -g /dir/ | sort -n -r | head -n 20 The first gives me a list as follows: file1.txtfile1.txtfile1.txtfile2.txt And so on. The second command gives me the following: 500 \path\250 \path\to\100 \path\to\directory\ And so on. The result I am looking for is: 500 \path\file1.txt250 \path\to\file2.txt100 \path\to\directory\file3.txt And so on. I have tried the solutions from the following questions: Finding largest file recursively https://stackoverflow.com/questions/12522269/how-to-find-the-largest-file-in-a-directory-and-its-subdirectories I have also tried to follow this tutorial: https://www.cyberciti.biz/faq/linux-find-largest-file-in-directory-recursively-using-find-du/ | find dir/ -type f -exec du -a {} + | sort -nr | head -n 20 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382716/"
]
} |
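The behaviour of the accepted one-liner is easy to check on disposable data — with two files of clearly different sizes, the larger one must sort first:

```shell
# A small tree: one ~1 MiB file and one tiny file in a subdirectory
dir=$(mktemp -d)
mkdir "$dir/sub"
head -c 1048576 /dev/urandom > "$dir/big.bin"
head -c 10 /dev/zero > "$dir/sub/small.bin"

# du -a reports a size for every file; sort -nr puts the biggest first
largest=$(find "$dir" -type f -exec du -a {} + | sort -nr | head -n 1)
echo "$largest"
```

Note that `du` reports disk usage in blocks rather than byte-exact sizes; for apparent sizes, GNU `du` offers `--apparent-size`.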
553,067 | I have a textfile that has the following format and I want to add a vertical line after those lines, followed by increasing numbers: c4-1 d e cc d e ce-2 f g2e4 f g2g8-4\( a-5 g f\) e4 cg'8\( a g f\) e4 cc-1 r c2c4 r c2 I achieve the line and the numbering with the following while-loop : #!/bin/bashwhile read -r line; do if [ -z "$line" ]; then echo continue fi n=$((++n)) \ && grep -vE "^$|^%" <<< "$line" \ | sed 's/$/\ \|\ \%'$(("$n"))'/'done < file and get an output like: c4-1 d e c | %1c d e c | %2e-2 f g2 | %3e4 f g2 | %4g8-4\( a-5 g f\) e4 c | %5g'8\( a g f\) e4 c | %6c-1 r c2 | %7c4 r c2 | %8 now I want the addition to be vertically aligned and get an output like this: c4-1 d e c | %1c d e c | %2e-2 f g2 | %3e4 f g2 | %4g8-4\( a-5 g f\) e4 c | %5g'8\( a g f\) e4 c | %6c-1 r c2 | %7c4 r c2 | %8 this would mean I need to somehow get the line length of the longest line (here: 21 characters) and the line length of each line and add the difference with spaces, how could I achieve this? | You could print the lines without alignment and format the output with column -t and a dummy delimiter character: #!/bin/bashwhile read -r line; do if [ -z "$line" ]; then echo continue fi printf '%s@| %%%s\n' "$line" "$((++n))"done < file | column -e -s'@' -t | sed 's/ |/|/' Here, I added a @ as dummy character before the | indicating the end of the column.The sed command at the end is used to remove one additional space character before the | . Option -e is needed to keep empty lines in the output. Output: c4-1 d e c | %1c d e c | %2e-2 f g2 | %3e4 f g2 | %4g8-4\( a-5 g f\) e4 c | %5g'8\( a g f\) e4 c | %6c-1 r c2 | %7c4 r c2 | %8 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
553,136 | Is there a convenient way to rename a file (or dir) without redundantly repeating the path or directory-changing to it? For example … mv -some_flag db/migrations/abc_201911201243.php abc_20191101090000.php … to rename the file without moving it out of that directory. I looked into the man pages for mv , rename and rsync but didn't find anything, so I'm wondering if there is by chance a non-obvious trick to do this. | Use brace expansion: mv -some_flag db/migrations/abc_201911{201243,01090000}.php | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278372/"
]
} |
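Brace expansion is performed by the shell (bash or zsh) before `mv` runs, so the directory prefix and the common part of the name are typed only once. A quick check on scratch files (run through `bash -c` in case the calling shell is plain `sh`):

```shell
# Recreate the file from the question in a scratch directory
dir=$(mktemp -d)
touch "$dir/abc_201911201243.php"

# abc_201911{201243,01090000}.php expands to the old and new names
bash -c "cd '$dir' && mv abc_201911{201243,01090000}.php"

ls "$dir"   # prints: abc_20191101090000.php
```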
553,143 | I need to securely erase harddisks from time to time and have used a variety of tools to do this: cat /dev/zero > /dev/disk cat /dev/urandom > /dev/disk shred badblocks -w DBAN All of these have in common that they take ages to run. In one case cat /dev/urandom > /dev/disk killed the disk, apparently overheating it. Is there a "good enough" approach to achieve that any data on the disk is made unusable in a timely fashion? Overwriting superblocks and a couple of strategically important blocks or somesuch? The disks (both, spinning and ssd) come from donated computers and will be used to install Linux-Desktops on them afterwards, handed out to people who can't afford to buy a computer, but need one. The disks of the donated computers will usually not have been encrypted. And sometimes donors don't even think of deleting files beforehand. Update : From the answers that have come in so far, it seems there is no cutting corners.My best bet is probably setting up a lab-computer to erase multiple disks at once. One more reason to ask big companies for donations :-) Thanks everyone! | Overwriting the superblock or partition table just makes it inconvenient to reconstruct the data, which is obviously still there if you just do a hex dump. Hard disks have a built-in erasing feature: ATA Secure Erase , which you can activate using hdparm : Pick a password (any password): hdparm --user-master u --security-set-pass hunter1 /dev/sd X Initiate erasure: hdparm --user-master u --security-erase hunter1 /dev/sd X Since this is a built-in feature, it is unlikely that you'll find a faster method that actually offers real erasure. (It's up to you, though, to determine whether it meets your level of paranoia.) Alternatively, use the disk with full-disk encryption, then just throw away the key when you want to dispose of the data. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/553143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364705/"
]
} |
553,146 | I was wondering about a clean elegant way to do the following:Let's say I have written a C++ program, called foo , running inside as part of a shell script, called bar.sh . I'd like for the shell script to run foo as a background process, and then wait until the foo execution reaches a line of my choosing, at which point bar should continue execution. For the sake of clarity, here's a dummy example of bar.sh : #!/bin/bash./foowait echo "WAKING UP" Here is foo : #include <iostream> int main(){ for (int i = 0; i < 1000000; i++){ std::cout << i << std::endl; if (i == 50){ //Wake up bash! } } } I want to modify foo and/or bar so that the wait command in bar will stop when foo is at iteration 50 let's say. So when the for loop in foo reaches i = 50 , bar should then awaken and print WAKING UP. Of course, foo can continue to keep running. How can I modify these programs to achieve this sort of effect? | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/553146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292999/"
]
} |
553,185 | I have to find out the type of compression of the linux kernel of my arch linux system, but I can't find a way to get it more than the theory: now bzip2 ( bz ), formerly gzip ( z ). In my computer I run the command: $ file /boot/vmlinuz-linux/boot/vmlinuz-linux: Linux kernel x86 boot executable bzImage, version 5.3.11-arch1-1 (linux@archlinux) #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000, RO-rootFS, swap_dev 0x5, Normal VGA Looking at the theory, I see that bzImage must be compressed by gzip (z) , but I can't prove it: The bzImage was compressed using gzip until Linux 2.6.30 which introduced more algorithms. Although there is the popular misconception that the bz prefix means that bzip2 compression is used (the bzip2 package is often distributed with tools prefixed with bz , such as bzless , bzcat , etc.), this is not the case. Is there any way to prove it on my own machine? or is the theory itself, in this case, "empirical"? | To conclusively determine what compression was used for a given kernel image, without needing to run it or find its configuration, you can follow the approach used by the kernel’s own extract-vmlinux script: look for the compressor’s signature in the image: gunzip : \037\213\010 xz : \3757zXZ\000 bzip2 : BZh lzma : \135\0\0\0 lzo : \211\114\132 lz4 : \002!L\030 zstd : (\265/\375 try to extract the data from the image, starting at the offset of any signature you’ve found; check that the result (if any) is an ELF image. I’ve adapted the script here so that it only reports the compression type. I’m not including it here because it is licensed under the GPL 2 only. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373893/"
]
} |
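The signature-scanning idea generalizes to any file: search the raw bytes for a compressor's magic sequence. A minimal sketch using gzip data and its `\037\213\010` signature (GNU `grep -abo` prints the byte offset of each match):

```shell
# Create a small gzip stream to probe
sample=$(mktemp)
printf 'hello kernel\n' | gzip > "$sample"

# Look for the gzip magic bytes and report the offset of the first hit
offset=$(LC_ALL=C grep -abo "$(printf '\037\213\010')" "$sample" |
         head -n 1 | cut -d: -f1)
echo "gzip signature at byte offset: $offset"   # prints: ... offset: 0
```

A real kernel image embeds the compressed payload after the boot stub, so the signature turns up at a nonzero offset there — which is exactly why `extract-vmlinux` tries every match until one decompresses to an ELF image.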
553,193 | We have more than 4 years of data in our system. We need to move 2 years old files and directories in a new repository. Our requirement is needed to know how many TB of data from Jan 2017 to as of now 2.Exclude personal folder I tried to find command but couldn't work out. find . -type f -mtime +1010 ! -path "./home/01_Personal Folder*" -printf '%s\n' \ | awk '{a+=$1;} END {printf "%.1f GB\n", a/2**30;}' | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382822/"
]
} |
553,338 | I have a json file with a load of AWS CloudWatch logs (generated from a CLI command). I'm trying to use jq to only return values for entries that don't have a 'retentionInDays' field. I have the following which returns everything as I want, but I can't seem to filter out the results that do have retentionInDays. # Working output (unfiltered)jq ".logGroups[] | { log_name: .logGroupName, log_arn: .arn, retention_scheme: .retentionInDays }" cwlogs.json I've tried a couple of things, but either get an error, or it completes and outputs nothing: # Doesn't return anythingjq '.logGroups[] | { log_name: .logGroupName, log_arn: .arn, retention_scheme: select(.retentionInDays | contains ("null")?) }' cwlogs.json# Errors with "jq: error (at cwlogs.json:760): number (7) and string ("null") cannot have their containment checked"jq '.logGroups[] | { log_name: .logGroupName, log_arn: .arn, retention_scheme: select(.retentionInDays | contains ("null")) }' cwlogs.json# Hangs foreverjq '.logGroups[] | select(.retentionInDays != "null").type' Update:Testable segment of JSON I'm using { "logGroups": [ { "storedBytes": 0, "metricFilterCount": 0, "creationTime": 1234, "logGroupName": "/aws/elasticbeanstalk/docker", "retentionInDays": 7, "arn": "longarnhere" }, { "storedBytes": 0, "metricFilterCount": 0, "creationTime": 1245, "logGroupName": "/aws/elasticbeanstalk/nginx", "arn": "longarnhere" } ]} | I'm assuming you want to get the logGroups entries that don't have a retentionInDays key at all. 
$ jq '.logGroups[] | select( has("retentionInDays") == false )' file.json{ "storedBytes": 0, "metricFilterCount": 0, "creationTime": 1245, "logGroupName": "/aws/elasticbeanstalk/nginx", "arn": "longarnhere"} If you want an array of these (likely, if there may be more than one): $ jq '.logGroups | map(select( has("retentionInDays") == false ))' file.json[ { "storedBytes": 0, "metricFilterCount": 0, "creationTime": 1245, "logGroupName": "/aws/elasticbeanstalk/nginx", "arn": "longarnhere" }] You could also use has("retentionInDays") | not in place of has("retentionInDays") == false . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382949/"
]
} |
553,422 | I have a file with the following: 37 * 60 + 55.5234 * 60 + 51.7536 * 60 + 2.8836 * 60 + 14.9436 * 60 + 18.8236 * 60 + 8.3737 * 60 + 48.7136 * 60 + 34.1737 * 60 + 42.5237 * 60 + 51.5535 * 60 + 34.7634 * 60 + 18.9033 * 60 + 49.6334 * 60 + 37.7336 * 60 + 4.49 I need to write a shell command or Bash script that, for each line in this file, evaluates the equation and prints the result. For example, for line one I expect to see 2275.52 printed. Each result should print once per line. I've tried cat math.txt | xargs -n1 expr , but this doesn't work. It also seems like awk might be able to do this, but I'm unfamiliar with that command's syntax, so I don't know what it would be. | This awk seems to do the trick: while IFS= read i; do awk "BEGIN { print ($i) }"done < math.txt From here Note that we're using ($i) instead of $i to avoid problems with arithmetic expressions like 1 > 2 ( print 1 > 2 would print 1 into a file called 2 , while print (1 > 2) prints 0 , the result of that arithmetic expression). Note that since the expansion of the $i shell variable ends up being interpreted as code by awk , that's essentially a code injection vulnerability . If you can't guarantee the file only contains valid arithmetic expressions, you'd want to put some input validation in place. For instance, if the file had a system("rm -rf ~") line, that could have dramatic consequences. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62058/"
]
} |
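Run against a short expression file, the loop behaves like this (with `read -r` added so a backslash in an expression would survive; the same code-injection caveat from the answer applies):

```shell
# Two arithmetic expressions, one per line
exprfile=$(mktemp)
printf '%s\n' '37 * 60 + 55.52' '2 + 2' > "$exprfile"

# Each line is evaluated inside an awk BEGIN block; the parentheses
# stop operators like > from being parsed as awk output redirection
results=$(while IFS= read -r i; do
    awk "BEGIN { print ($i) }"
done < "$exprfile")
echo "$results"   # prints: 2275.52 then 4, one per line
```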
553,427 | I would like to create directory (Shared) in Linux and allow all users to create files in that Shared directory.Also Only users who creates files should be allowed to delete/modify their own files and rest of all users should limit to read-execute permission(705 or 755) in /Shared directory. Example if user TOM creates file called 'sample' in Shared directory then User TOM should be owner of 'sample' file in /shared.User Jack and user Matt should be limited to read & execute permissions on that file 'sample' means permissions should be set up 755 on 'sample' file in /Shared directory.I would like to prevent rest of users editing and deleting files in shared directory which were created by user TOM.How can I achieve that? Thanks,CG | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383043/"
]
} |
553,587 | I have an issue to rename the following file -data-02-03-04.dat I guess the issue come from the file name which starts with a dash. I get the following error trying to rename it. rename -data-02-03-04.dat data020304.datUnknown option: data-02-03-04.datUsage: rename [ -h|-m|-V ] [ -v ] [ -0 ] [ -n ] [ -f ] [ -d ] [ -e|-E perlexpr]*|perlexpr [ files ] | The problem actually comes from the file name that starts with a dash, so it is taken as an option of the rename command rather than as an argument. To avoid the problem, just go through the path of the file: mv ./-data-02-03-04.dat data020304.dat I used the mv command which is more convenient to one single file. rename command is mostly used for batch renaming and requires a perl regular expression. As pointed out here , another way to deal with the problem is to add a double dash -- to signify the end of command options: mv -- -data-02-03-04.dat data020304.dat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383202/"
]
} |
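Both workarounds — the `./` prefix and the `--` end-of-options marker — can be verified side by side on scratch files (note that creating such files needs the same trick):

```shell
dir=$(mktemp -d)
cd "$dir"
touch -- '-data-02-03-04.dat' '-other.dat'

# Workaround 1: a ./ prefix keeps the leading dash from looking like an option
mv ./-data-02-03-04.dat data020304.dat

# Workaround 2: -- ends option parsing, so the next word is an operand
mv -- -other.dat other.dat

ls   # prints: data020304.dat and other.dat
```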
553,607 | I am having trouble with pattern matching in zsh's hook function precmd . I have the following: precmd(){ local x='test' if [ $x = '*test*' ]; then echo 'hello' fi} which does not print hello ever. I've tested this code with a normal zsh test.zsh that works fine, and I've tried w/o the regex in precmd and got things to print out fine as well. Any clue as to why this isn't working as expected? $ zsh --versionzsh 4.3.11 RHEL | [ $x = '*test*' ] tests whether the string resulting from expanding $x , which is text , is equal to the string resulting from expanding '*test*' , which is *text* . To test whether the value of the variable x matches the pattern *test* , you need to use the = or == operator of zsh conditional expressions , which are written within double brackets [[ … ]] . Furthermore special characters in the pattern must be unquoted, otherwise they stand for themselves. Thus: if [[ $x == *test* ]]; then … The syntax of conditional expressions is similar to the syntax of expressions that you can use within single brackets [ … ] , but not identical. [ is parsed like an ordinary command; in fact, it's a built-in command with a one-character name, which is identical to the test builtin except that [ requires an additional argument at the end which must be ] . [[ … ]] is a distinct grammatical construct, which allows it to have shell special characters inside. [ $x = *test* ] would expand *test* to the list of matching file names (globbing) and the test builtin would end up parsing the result of that. [[ $x = *test* ]] parses *test* as part of conditional expression parsing which does not invoke globbing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/553607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111333/"
]
} |
553,631 | I am being told the following two statements about running scripts as sudo: The user executing the sudo command must also have execute permission to the file Only the file will execute as root and not the commands within I have no reason to doubt what I am being told, however I have the following setup which is working and I would like to know why, if the above two statements are correct. I have the following script: /var/www/bash_scripts/test/set_permissions.sh The script contains the following content: chown -R jason:www-data /var/www/test.site.comfind /var/www/test.site.com -type f -exec chmod 664 {} \;find /var/www/test.site.com -type d -exec chmod 775 {} \; The permissions of this script are: -rwxrwxr-- 1 root root 200 Nov 21 22:49 set_permissions.sh My /etc/sudoers file contains the following line appended to the bottom: www-data ALL=(ALL) NOPASSWD: /var/www/bash_scripts/test/set_permissions.sh When the script is executed via a php process running as user www-data , the script executes as expected and all commands within are carried out. If statement 1. from above is correct, the script should never execute as it is only executable via the root user. If statement 2. from above is correct, the commands within the script would have failed to execute. I am simply looking for some clarification as to whether or not the two statements are correct and if they are, what have I done that would allow the behavior I have described above? EDIT www-data owns /var/www/test.site.com The php code runs the command via: exec('sudo /var/www/bash_scripts/site/test/set_permissions.sh 2>&1') | The user executing the sudo command must also have execute permission to the file Statement 1 is not accurate. The user executing sudo does not need to have execute permissions on the script. Only the file will execute as root and not the commands within Statement 2 is also not accurate. All the commands in the script invoked by the sudo user will run as the sudo user. 
Here's a simple test script I ran to show the id of the user the script/command is running as. /tmp $ iduid=1000(danesh) gid=1000(danesh) /tmp $ ls -l test.sh -rwxr-xr-x 1 danesh danesh 24 Nov 22 10:48 test.sh*/tmp $ ./test.sh uid=1000(danesh) gid=1000(danesh) /tmp $ sudo ./test.sh[sudo] password for danesh: uid=0(root) gid=0(root) groups=0(root)/tmp $ sudo chown root:root test.sh/tmp $ sudo chmod 770 test.sh/tmp $ ls -l test.sh-rwxrwx--- 1 root root 24 Nov 22 10:48 test.sh*/tmp $ ./test.shfish: The file “./test.sh” is not executable by this user/tmp [126] $ sudo ./test.shuid=0(root) gid=0(root) groups=0(root)/tmp $ cat test.sh #!/usr/bin/env bash# show the user executing this scriptid Hope that helps. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146903/"
]
} |
553,645 | I want to learn the NFS locking mechanism I surfed on net. Using nfslockd or rpc.lockd we can do. But in Debian there is no package. Anyone please help me whether shall we do or not in Debian. | The user executing the sudo command must also have execute permission to the file Statement 1 is not accurate. The user executing sudo does not need to have execute permissions on the script. Only the file will execute as root and not the commands within Statement 2 is also not accurate. All the commands in the script invoked by the sudo user will run as the sudo user. Here's a simple test script I ran to show the id of the user the script/command is running as. /tmp $ iduid=1000(danesh) gid=1000(danesh) /tmp $ ls -l test.sh -rwxr-xr-x 1 danesh danesh 24 Nov 22 10:48 test.sh*/tmp $ ./test.sh uid=1000(danesh) gid=1000(danesh) /tmp $ sudo ./test.sh[sudo] password for danesh: uid=0(root) gid=0(root) groups=0(root)/tmp $ sudo chown root:root test.sh/tmp $ sudo chmod 770 test.sh/tmp $ ls -l test.sh-rwxrwx--- 1 root root 24 Nov 22 10:48 test.sh*/tmp $ ./test.shfish: The file “./test.sh” is not executable by this user/tmp [126] $ sudo ./test.shuid=0(root) gid=0(root) groups=0(root)/tmp $ cat test.sh #!/usr/bin/env bash# show the user executing this scriptid Hope that helps. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95090/"
]
} |
553,691 | luks1 has limit of 8. But I just accidentally add 9 slots to luks2 (from 0 to 8). I wonder what is the limit for luks2? Keyslots: 0: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: 32 ff 35 1e a2 b5 64 a7 fe f9 6e 7d 12 75 75 d5 a4 e7 47 39 80 96 1d 76 b1 35 b3 77 0a 85 46 ac AF stripes: 4000 AF hash: sha256 Area offset:32768 [bytes] Area length:258048 [bytes] Digest ID: 0 1: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: 55 7b 9a 4c d8 53 2b bb 90 af 57 44 67 b5 0c 03 85 a1 5d 70 e4 1e b0 5f 97 1a f3 0e f2 8c dc b2 AF stripes: 4000 AF hash: sha256 Area offset:290816 [bytes] Area length:258048 [bytes] Digest ID: 0 2: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: ac 24 09 ca f9 24 52 3d 49 d3 c9 89 63 d0 1d 61 83 4a aa ed 75 a2 39 ec 3f f8 ab 95 5d 0c 49 aa AF stripes: 4000 AF hash: sha256 Area offset:1064960 [bytes] Area length:258048 [bytes] Digest ID: 0 3: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: 3d 37 41 20 93 44 55 62 c6 19 fe e0 7d ae 14 0d 67 86 6a 44 5e c8 8a f0 97 01 1d c7 c6 83 02 22 AF stripes: 4000 AF hash: sha256 Area offset:1323008 [bytes] Area length:258048 [bytes] Digest ID: 0 4: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: bf 6d a0 15 c9 8e 9b 49 12 84 86 6b 13 93 95 7d cf cf 8f 3a e2 b7 42 42 4c 59 a1 5c 23 cd e6 1a AF stripes: 4000 AF hash: sha256 Area offset:1581056 [bytes] Area length:258048 [bytes] Digest ID: 0 5: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: 
bd 76 ae e1 33 d3 7a 83 5b 59 d4 bc 46 17 36 ec e6 94 a5 b1 85 2d 00 9f a4 ff f4 02 cc b6 ca bc AF stripes: 4000 AF hash: sha256 Area offset:1839104 [bytes] Area length:258048 [bytes] Digest ID: 0 6: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: ab 7f dd e5 2c eb 32 51 97 9a 10 5e 70 75 1e 15 91 35 10 63 f5 8b b6 8c 7a 97 16 40 50 e6 89 fb AF stripes: 4000 AF hash: sha256 Area offset:2097152 [bytes] Area length:258048 [bytes] Digest ID: 0 7: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: aa 02 fd a2 fd 4a ee 84 1a 41 93 58 7a 25 c2 d4 0d 65 bc b4 5b 18 1a 05 4b 0a 81 f7 68 8c 9a 26 AF stripes: 4000 AF hash: sha256 Area offset:548864 [bytes] Area length:258048 [bytes] Digest ID: 0 8: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Cipher key: 512 bits PBKDF: argon2i Time cost: 5 Memory: 1048576 Threads: 4 Salt: 2b 04 62 29 e2 dc 42 b4 3a 28 8d 46 28 17 05 26 a1 05 86 62 95 8e 50 98 91 67 18 15 71 1c 8a f9 AF stripes: 4000 AF hash: sha256 Area offset:806912 [bytes] Area length:258048 [bytes] Digest ID: 0 | For LUKS1, it's 8 key-slots, fixed. For LUKS2, currently it's at most 32 key-slots : #define LUKS2_KEYSLOTS_MAX 32 Trying to add more simply results in the error message "All key slots full.". However, the answer might not be so simple after all. The limit of 32 exists in code but is not mentioned at all in the LUKS2 On-Disk Format Specification . The LUKS2 header is actually capable of storing more than just 32 key-slots. Or it might not even be able to store 8 of them. It depends on the data offset, the size of the keyslots area, as well as the size required by each individual key. For a newly formatted header, it defaults to a large data offset so you might easily get 32 key-slots. 
If you converted from LUKS1 (with only 2MiB data offset), it's unchanged at 8 key-slots. If the data offset is smaller, it might be less than 8 key-slots. With a data offset of 1MiB, you only get 3 key-slots ( cryptsetup emits a warning about it): # truncate -s 100M foobar.img# cryptsetup luksFormat --offset=2048 foobar.imgWARNING: keyslots area (1015808 bytes) is very small,available LUKS2 keyslot count is very limited.# cryptsetup luksAddKey foobar.img # cryptsetup luksAddKey foobar.img # cryptsetup luksAddKey foobar.img No space for new keyslot. In this particular example, only 3 key-slots could be used before the header ran out of space to store more: # cryptsetup luksDump foobar.imgLUKS header informationVersion: 2Epoch: 5Metadata area: 16384 [bytes]Keyslots area: 1015808 [bytes][...]Keyslots: 0: luks2 [...] Area offset:32768 [bytes] Area length:258048 [bytes] Digest ID: 0 1: luks2 [...] Area offset:290816 [bytes] Area length:258048 [bytes] Digest ID: 0 2: luks2 [...] Area offset:548864 [bytes] Area length:258048 [bytes] Digest ID: 0 Here the total available keyslots area is only 1015808 bytes. Each key has a size of 258048 bytes. To store an additional key, 1032192 bytes would be required at minimum, so a fourth key simply doesn't fit in this particular case. If you don't care about MiB alignment, it's possible to make the offset even smaller, leaving you with only a single key-slot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77353/"
]
} |
553,719 | I want to get into kernel development, and the first practical step is obviously to run a Linux kernel to develop. I think the best solution for me would be to dual boot windows and Linux, with Linux as a daily driver and kernel dev environment/ test bed. I would keep windows as a backup option seeing as how last time I ran this setup I managed to brick my Linux system when I was trying to install video drivers... I would want to avoid running Linux from windows since I'm not sure I have the processing power to run both, especially if I wanted an IDE within Linux... My questions about this setup are as follows: Is there a fundamental problem I'm missing where Linux could corrupt my windows system if something goes wrong? I would be running my own edited version of the kernel, and I'm worried that a particularly unstable change could result in disk corruption, but I don't know if this is a relevant concern. Are there windows native tools for recovering a Linux system on the same device? If not then what Linux tools are available for recovery if I ran a third operating system on the side for recovery purposes? Edit: For background I have a degree in CS (with a course in OS, including an IPC kernel module), but I'm doing malware analysis/ RE and I want to get into development, and mainline kernel development would give me a competitive edge. I want to get into something low level where correctness/ optimization counts as a business concern. Security/ speed/ power efficiency/ multithread/multiprocessing or some other form of optimization where I can put my low level experience/ passion to use. | For LUKS1, it's 8 key-slots, fixed. For LUKS2, currently it's at most 32 key-slots : #define LUKS2_KEYSLOTS_MAX 32 Trying to add more simply results in the error message "All key slots full.". However, the answer might not be so simple after all. The limit of 32 exists in code but is not mentioned at all in the LUKS2 On-Disk Format Specification . 
The LUKS2 header is actually capable of storing more than just 32 key-slots. Or it might not even be able to store 8 of them. It depends on the data offset, the size of the keyslots area, as well as the size required by each individual key. For a newly formatted header, it defaults to a large data offset so you might easily get 32 key-slots. If you converted from LUKS1 (with only 2MiB data offset), it's unchanged at 8 key-slots. If the data offset is smaller, it might be less than 8 key-slots. With a data offset of 1MiB, you only get 3 key-slots ( cryptsetup emits a warning about it): # truncate -s 100M foobar.img# cryptsetup luksFormat --offset=2048 foobar.imgWARNING: keyslots area (1015808 bytes) is very small,available LUKS2 keyslot count is very limited.# cryptsetup luksAddKey foobar.img # cryptsetup luksAddKey foobar.img # cryptsetup luksAddKey foobar.img No space for new keyslot. In this particular example, only 3 key-slots could be used before the header ran out of space to store more: # cryptsetup luksDump foobar.imgLUKS header informationVersion: 2Epoch: 5Metadata area: 16384 [bytes]Keyslots area: 1015808 [bytes][...]Keyslots: 0: luks2 [...] Area offset:32768 [bytes] Area length:258048 [bytes] Digest ID: 0 1: luks2 [...] Area offset:290816 [bytes] Area length:258048 [bytes] Digest ID: 0 2: luks2 [...] Area offset:548864 [bytes] Area length:258048 [bytes] Digest ID: 0 Here the total available keyslots area is only 1015808 bytes. Each key has a size of 257048 bytes. To store an additional key, 1032192 bytes would be required at minimum so, it just doesn't fit more keys in this particular case. If you don't care about MiB alignment, it's possible to make the offset even smaller, leaving you with only a single key-slot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383299/"
]
} |
553,731 | We can see that the synopsis of rm command is: rm [OPTION]... [FILE]... Doesn't it mean that we can use only rm command without any option or argument? When I run the command rm on its own, the terminal then shows the following error: rm: missing operandTry 'rm --help' for more information. Can anyone tell me why this is the case? | The standard synopsis for the rm utility is specified in the POSIX standard 1&2 as rm [-iRr] file...rm -f [-iRr] [file...] In its first form, it does require at least one file operand, but in its second form it does not. Doing rm -f with no file operands is not an error: $ rm -f$ echo "$?"0 ... but it just doesn't do very much. The standard says that for the -f option, the rm utility should... Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of no file operands, or in the case of operands that do not exist. Any previous occurrences of the -i option shall be ignored. This confirms that it must be possible to run rm -f without any pathname operands and that this is not something that makes rm exit with a diagnostic message nor a non-zero exit status. This fact is very useful in a script that tries to delete a number of files as rm -f -- "$@" where "$@" is a list of pathnames that may or may not be empty, or that may contain pathnames that do not exist. ( rm -f will still generate a diagnostic message and exit with a non-zero exit status if there are permission issues preventing a named file from being removed.) Running the utility with neither option nor pathname operands is an error though: $ rmusage: rm [-dfiPRrv] file ...$ echo "$?"1 The same holds true for GNU rm (the above shows OpenBSD rm ) and other implementations of the same utility, but the exact diagnostic message and the non-zero exit-status may be different (on Solaris the value is 2, and on macOS it's 64, for example). 
In conclusion, the GNU rm manual may just be a bit imprecise as it's true that with one particular option ( -f , which is itself optional), the pathname operand is optional. 1 Since the 2016 edition, after resolution of this bug , see the previous edition for reference. 2 POSIX is the standard that defines what a Unix system is and how it behaves. This standard is published by The Open Group . See also the question " What exactly is POSIX? ". | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/553731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383308/"
]
} |
553,787 | I am trying to write a script(script1.sh) that gives the sum of each digits in the first number, raised to the power of second number. So ./script1.sh 12345 2 should output 55 (because 1+4+9+16+25=55) or ./script1.sh 3706907995955475988644381 25 should output 3706907995955475988644381 . I wrote a script but in some cases I get a negative output and I don't see how that can happen. For example ./script1.sh 3706907995955475988644380 25 outputs -2119144605827694052 My script: #!/bin/bashsum=0value=$1arr=()for ((i = 0; i < ${#value}; i++)); do arr+=(${value:$i:1})donefor x in "${arr[@]}"; do sum=$(($sum+(x**$2)))doneecho $sum | shell arithmetic in bash uses the widest integer type supported by your C compiler. On most modern systems/C compilers, that's 64 bit integers, so "only" covering the range -9223372036854775808 to 9223372036854775807, and wrap for numbers out of that. In order to do this you will need to use another tool, such as bc: #!/bin/bashnum1=$1num2=$2sum=0for (( i=0; i<${#num1}; i++ )); do n=${num1:$i:1} sum=$( bc <<<"$sum + $(bc <<<"${n}^$num2")" )doneecho "$sum" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378813/"
]
} |
553,820 | I am trying to write a script(script1.sh) that finds every possible word when given a jumble of letters. The words must start with the first letter of the jumble and end withthe last letter. The letters of the word need to follow the order of the letters in the jumble. Each letter in the jumble can be used more than once. So this ./script1.sh "qwertyuytresdftyuiokn" should output queen and question but not "quieten" because "e" comes before "u" and "i" in the jumble. I tried assigning the first, last and the remaining letters to variables, then using egrep to find the words but I couldn't find a way to use the order of letters. So this one gives me invalid words as well. #!/bin/bashfirst_letter=$(echo $@ | cut -c1)last_letter=$(echo $@ |rev| cut -c1)remaining_letters=$(echo $@ | cut -c2- | rev | cut -c2-)grep -E "^$first_letter[$remaining_letters]*$last_letter$" /usr/share/dict/words Then I tried turning the jumble into an array but then again I couldn't find a way find words that obey the order in the jumble. | #!/bin/shpttrn="^$(printf '%s' "$1" | sed -e 's/\(.\)/\1*/g' -e 's/\*/\\+/' -e 's/\*$/\\+/')"'$'grep "$pttrn" /usr/share/dict/words A pattern is obtained from the first argument by injecting * after each character. Then the first * is changed to \+ ; so is the last * . Additionally ^ and $ are added. Your example input generates the following pattern: ^q\+w*e*r*t*y*u*y*t*r*e*s*d*f*t*y*u*i*o*k*n\+$ This pattern is the right pattern for grep . q must appear at least one time at the beginning, n must appear at least one time at the end. Each letter in the middle may appear zero or more times, the order is maintained. Note the script is dumb. If you provide input with . , [ , ] or so then you will get a regular expression beyond the specification. Provide sane input or expand the script to validate it. Examples: $ ./script1.sh qwertyuytresdftyuioknqueenquestion$ ./script1.sh tetee$ ./script1.sh superuserseersererspursupersuppersurer$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378813/"
]
} |
553,852 | I have been using wc -l to check for the number of lines exist in my files. It worked fine always but not this time. I have 120 big files that are supposed to have at least two lines in each of them. I have just done some text editing work on those files to remove and add new lines. I was trying to check the final number of line by using wc -l * as usual. The output showed that most of the files had only one line. I opened up one of the file (which showed from the result of the command that it had only one line) with vim and I can see that it had exactly 2 lines. Exit vim and check again using wc -l , the number of line for that file then appeared as 2. Does anyone have any idea with what happened over here? And how can I solve this problem instead of opening all 120 files with vim ? PS: The final line of my files weren't empty. | The common gnu implementation of wc says ‘wc’ counts the number of bytes, characters, whitespace-separated words, and newlines in each given FILE, or standard input if none are given or for a FILE of ‘-’. so if there is no final newline character in the file the "lines" part of the wc output will be one less than expected. For example the following will output 1 printf 'hello\nworld' | wc -l The OP has confirmed in comments that vim is reporting the lack of the final newline.A simple fix if all the files are known to have this problem is for f in * do echo >> "$f" done to append a newline to each file. A way to add a newline conditionally to the end of all the files if they are missing one is to use sed. sed -s -i '$s/$/\n/;P;d' * uses some gnu extensions, -s to treat each file separately, -i to do an in place edit, and allowing \n to represent a newline. The sed program itself says on the last line of each file append a newline, and for each line print up to the first newline and move onto the next line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345181/"
]
} |
553,930 | I'm struggling to understand WHY ntp (the service) won't set the time correctly on my raspberry pi. I have configured the filesystem as read only, to save my SD card, but it used to work, and I cannot seem to figure out why ntp won't work now. In the logs I get many many lines of that message: ntpd[415]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronizedntpd[415]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronizedntpd[415]: error resolving pool 0.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 1.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 2.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 3.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 3.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 2.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 1.debian.pool.ntp.org: Temporary failure in name resolution (-3)ntpd[415]: error resolving pool 0.debian.pool.ntp.org: Temporary failure in name resolution (-3) My /etc/resolv.conf looks like this: # Generated by resolvconfnameserver 8.8.8.8nameserver 192.168.1.22 I have access to internet on that RPi, I can ping the pool addresses, I can ping google, I can apt update (after remounting in rw)... I also can issue an ntpdate command manually and IT WORKS! $ sudo ntpdate -u 0.fr.pool.ntp.org 1.fr.pool.ntp.org24 Nov 23:04:34 ntpdate[578]: step time server 129.250.35.250 offset 2418.621037 sec So yeah, I'm pulling hairs here. I cannot understand why the ntp service won't work. I scourged the internet, nobody seems to have this particular issue (all have a malfunctioning dns, but mine is working) My read-only setup is the following: https://hallard.me/raspberry-pi-read-only/ Do you guys have any idea? 
| I found this question while facing a similar issue. The issue turned out to be that systemd 's PrivateTmp feature does not work in a read-only configuration. Be sure to install ntp and ntpdate sudo apt install -y ntp ntpdate Copy /lib/systemd/system/ntp.service to /etc/systemd/system/ntp.service cp /lib/systemd/system/ntp.service /etc/systemd/system/ntp.service Open /etc/systemd/system/ntp.service and comment out PrivateTmp=true . sudo nano /etc/systemd/system/ntp.service Now, it should work correctly! As an additional step I have also now mounted /var/lib/ntp as tmpfs as recommended here Open /etc/fstab and add tmpfs /var/lib/ntp tmpfs nosuid,nodev 0 0 at the end of file. sudo nano /etc/fstab I didn't find this necessary in my case but there are additional insights into running on a read-only filesystem there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/553930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6361/"
]
} |
553,980 | I can't find any good information on the rt and lowlatency Linux kernels. I am wondering why anybody would not want to use a lowlatency kernel. Also, if anyone can tell what the specific differences are, that would be great too. | The different configurations, “generic”, “lowlatency” (as configured in Ubuntu), and RT (“real-time”), are all about balancing throughput versus latency. Generic kernels favour throughput over latency, the others favour latency over throughput. Thus users who need throughput more than they need low latency wouldn’t choose a low latency kernel. Compared to the generic configuration, the low-latency kernel changes the following settings: IRQs are threaded by default, meaning that more IRQs (still not all IRQs) can be pre-empted, and they can also be prioritised and have their CPU affinity controlled; pre-emption is enabled throughout the kernel ( CONFIG_PREEMPT instead of CONFIG_PREEMPT_VOLUNTARY ); the latency debugging tools are enabled, so that the user can determine what kernel operations are blocking progress; the timer frequency is set to 1000 Hz instead of 250 Hz . RT kernels add a number of patches to the mainline kernel, and a few more configuration tweaks. The purpose of most of those patches is to allow more opportunities for pre-emption, by removing or splitting up locks, and to reduce the amount of time the kernel spends handling uninterruptible tasks (notably, by improving the logging mechanisms and using them less). The goal of all this is to allow the kernel to meet deadlines , i.e. ensure that, when it is required to handle something, it isn’t busy doing something else; this isn’t the same as high throughput or low latency, but fixing latency issues helps. 
The generic kernels, as configured by default in most distributions, are designed to be a “sensible” compromise: they try to ensure that no single task can monopolise the system for too long, and that tasks can switch reasonably frequently, but without compromising throughput — because the more time the kernel spends considering whether to switch tasks (inside or outside the kernel), or handling interrupts, the less time the system as a whole can spend “working”. That compromise isn’t good enough for latency-sensitive workloads such as real-time audio or video processing: for those, low-latency kernels provide lower latencies at the expense of some throughput. And for real-time requirements, the real-time kernels remove as many low-latency-blockers as possible, at the expense of more throughput. Main-stream distributions of Linux are mostly installed on servers, where traditionally latency hasn’t been considered all that important (although if you do percentile performance analysis, and care about top percentile performance, you might disagree), so the default kernels are quite conservative. Desktop users should probably use the low-latency kernels, as suggested by the kernel’s own documentation. In fact, the more low-latency kernels are used, the more feedback there will be on their relevance, which helps get generally-applicable improvements into the default kernel configurations; the same goes for the RT kernels (many of the RT patches are intended, at some point, for the mainstream kernel). This presentation on the topic provides quite a lot of background. Since version 5.12 of the Linux kernel, “dynamic preemption” can be enabled; this allows the default preemption model to be overridden on the kernel command-line, using the preempt= parameter. This currently supports none (server), voluntary (desktop), and full (low-latency desktop). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/553980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73671/"
]
} |
554,007 | All I need is the Zip file name.In the first step I searched for the author: egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log > result1.log whatever worked, the result was: zip: /var/www/dir_de/html/dir1/dir2/7890971.zip author: test_autor zip: /var/www/dir_de/html/dir1/dir2/10567581.zip author: test_autor But, as mentioned above, the Ziip file name.In the second step I tried to filter the result of the first search again: egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log | xargs grep -i -o "\/[[:digit:]]]\.zip" to search only for the filename, unfortunately this does not work. My question.How should the second grep filter "look" so that I only get the zip file name? | The different configurations, “generic”, “lowlatency” (as configured in Ubuntu), and RT (“real-time”), are all about balancing throughput versus latency. Generic kernels favour throughput over latency, the others favour latency over throughput. Thus users who need throughput more than they need low latency wouldn’t choose a low latency kernel. Compared to the generic configuration, the low-latency kernel changes the following settings: IRQs are threaded by default, meaning that more IRQs (still not all IRQs) can be pre-empted, and they can also be prioritised and have their CPU affinity controlled; pre-emption is enabled throughout the kernel ( CONFIG_PREEMPT instead of CONFIG_PREEMPT_VOLUNTARY ); the latency debugging tools are enabled, so that the user can determine what kernel operations are blocking progress; the timer frequency is set to 1000 Hz instead of 250 Hz . RT kernels add a number of patches to the mainline kernel, and a few more configuration tweaks. The purpose of most of those patches is to allow more opportunities for pre-emption, by removing or splitting up locks, and to reduce the amount of time the kernel spends handling uninterruptible tasks (notably, by improving the logging mechanisms and using them less). 
The goal of all this is to allow the kernel to meet deadlines , i.e. ensure that, when it is required to handle something, it isn’t busy doing something else; this isn’t the same as high throughput or low latency, but fixing latency issues helps. The generic kernels, as configured by default in most distributions, are designed to be a “sensible” compromise: they try to ensure that no single task can monopolise the system for too long, and that tasks can switch reasonably frequently, but without compromising throughput — because the more time the kernel spends considering whether to switch tasks (inside or outside the kernel), or handling interrupts, the less time the system as a whole can spend “working”. That compromise isn’t good enough for latency-sensitive workloads such as real-time audio or video processing: for those, low-latency kernels provide lower latencies at the expense of some throughput. And for real-time requirements, the real-time kernels remove as many low-latency-blockers as possible, at the expense of more throughput. Main-stream distributions of Linux are mostly installed on servers, where traditionally latency hasn’t been considered all that important (although if you do percentile performance analysis, and care about top percentile performance, you might disagree), so the default kernels are quite conservative. Desktop users should probably use the low-latency kernels, as suggested by the kernel’s own documentation. In fact, the more low-latency kernels are used, the more feedback there will be on their relevance, which helps get generally-applicable improvements into the default kernel configurations; the same goes for the RT kernels (many of the RT patches are intended, at some point, for the mainstream kernel). This presentation on the topic provides quite a lot of background. 
Since version 5.12 of the Linux kernel, “dynamic preemption” can be enabled; this allows the default preemption model to be overridden on the kernel command-line, using the preempt= parameter. This currently supports none (server), voluntary (desktop), and full (low-latency desktop). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/554007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321126/"
]
} |
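The build-time settings listed in the answer above can be checked for a running kernel by grepping its saved configuration. This is a sketch assuming the common Debian/Ubuntu layout of /boot/config-<version>; other distributions may expose the config elsewhere (e.g. /proc/config.gz).

```shell
#!/bin/sh
# Inspect the preemption model and timer frequency this kernel was built with.
config="/boot/config-$(uname -r)"
if [ -r "$config" ]; then
    # e.g. CONFIG_PREEMPT_VOLUNTARY=y and CONFIG_HZ=250 on a generic kernel,
    # CONFIG_PREEMPT=y and CONFIG_HZ=1000 on a low-latency one
    grep -E '^CONFIG_(PREEMPT|HZ)' "$config"
else
    echo "no readable kernel config at $config" >&2
fi
```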
554,008 | Is there any command or way to remove all entries containing a particular string from the bash shell history? This would be useful for removing commands that contain a password. I know we can remove each history entry by its number, but that deletes only one entry at a time, and I need to look up the number each time to remove the next entry. E.g. the history command shows 5 entries containing the password abcabc , and I want to remove all the entries containing the string abcabc : 975 2019-03-15 11:20:30 ll 976 2019-03-15 11:20:33 ll cd 977 2019-03-15 11:20:36 ll CD 978 2019-03-15 11:20:45 chown test1:test1 CD 979 2019-03-15 11:20:53 chown test1:test1 ./CD 980 2019-03-15 11:20:57 chown test1:test1 .\CD 981 2019-03-15 11:22:04 cd /tmp/logs/ 982 2019-06-07 10:36:33 su test1 983 2019-08-22 08:35:10 su user1 984 2019-08-22 08:35:15 /opt/abc/legacy.exe -password abcabc 985 2019-09-24 07:20:45 cd /opt/test1/v6r2017x 986 2019-09-24 07:20:46 ll 987 2019-09-24 07:21:18 cd /tmp/ 988 2019-09-24 07:21:19 ll 989 2019-09-24 07:21:24 cd linux_a64/ 990 2019-09-24 07:21:25 /opt/abc/legacy.exe -password abcabc 991 2019-09-24 07:24:03 cd build/ 992 2019-09-24 07:24:04 ll 993 2019-09-24 07:24:07 cd .. 994 2019-09-24 07:24:10 /opt/abc/legacy.exe -password abcabc 995 2019-09-24 07:24:15 cd someapp/bin 996 2019-09-24 07:24:21 ll 997 2019-09-24 07:24:33 cd . 998 2019-09-24 07:24:35 cd .. 999 2019-09-24 07:24:36 ll I tried the following command, which gave the error below: servername:~ # sed -i 'g/abcabc/d' /home/user1/.bash_history sed: -e expression #1, char 2: extra characters after command Expectation: no error, and all the entries containing the string abcabc should be removed. | The different configurations, “generic”, “lowlatency” (as configured in Ubuntu), and RT (“real-time”), are all about balancing throughput versus latency. Generic kernels favour throughput over latency, the others favour latency over throughput. 
Thus users who need throughput more than they need low latency wouldn’t choose a low latency kernel. Compared to the generic configuration, the low-latency kernel changes the following settings: IRQs are threaded by default, meaning that more IRQs (still not all IRQs) can be pre-empted, and they can also be prioritised and have their CPU affinity controlled; pre-emption is enabled throughout the kernel ( CONFIG_PREEMPT instead of CONFIG_PREEMPT_VOLUNTARY ); the latency debugging tools are enabled, so that the user can determine what kernel operations are blocking progress; the timer frequency is set to 1000 Hz instead of 250 Hz . RT kernels add a number of patches to the mainline kernel, and a few more configuration tweaks. The purpose of most of those patches is to allow more opportunities for pre-emption, by removing or splitting up locks, and to reduce the amount of time the kernel spends handling uninterruptible tasks (notably, by improving the logging mechanisms and using them less). The goal of all this is to allow the kernel to meet deadlines , i.e. ensure that, when it is required to handle something, it isn’t busy doing something else; this isn’t the same as high throughput or low latency, but fixing latency issues helps. The generic kernels, as configured by default in most distributions, are designed to be a “sensible” compromise: they try to ensure that no single task can monopolise the system for too long, and that tasks can switch reasonably frequently, but without compromising throughput — because the more time the kernel spends considering whether to switch tasks (inside or outside the kernel), or handling interrupts, the less time the system as a whole can spend “working”. That compromise isn’t good enough for latency-sensitive workloads such as real-time audio or video processing: for those, low-latency kernels provide lower latencies at the expense of some throughput. 
And for real-time requirements, the real-time kernels remove as many low-latency-blockers as possible, at the expense of more throughput. Main-stream distributions of Linux are mostly installed on servers, where traditionally latency hasn’t been considered all that important (although if you do percentile performance analysis, and care about top percentile performance, you might disagree), so the default kernels are quite conservative. Desktop users should probably use the low-latency kernels, as suggested by the kernel’s own documentation. In fact, the more low-latency kernels are used, the more feedback there will be on their relevance, which helps get generally-applicable improvements into the default kernel configurations; the same goes for the RT kernels (many of the RT patches are intended, at some point, for the mainstream kernel). This presentation on the topic provides quite a lot of background. Since version 5.12 of the Linux kernel, “dynamic preemption” can be enabled; this allows the default preemption model to be overridden on the kernel command-line, using the preempt= parameter. This currently supports none (server), voluntary (desktop), and full (low-latency desktop). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/554008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266985/"
]
} |
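On the sed error shown in the question above: sed has no g/…/d command (that is ed/vi syntax); deleting by pattern is just /pattern/d. A minimal sketch on a throwaway copy, assuming GNU sed for -i ( abcabc stands in for the real password string):

```shell
# demo file standing in for ~/.bash_history
printf '%s\n' 'll' '/opt/abc/legacy.exe -password abcabc' 'cd /tmp/logs/' > demo_history

# delete every line containing the string, editing the file in place (GNU sed)
sed -i '/abcabc/d' demo_history

cat demo_history   # the password line is gone
```

When running this against the real ~/.bash_history, also run history -c; history -r (or start a fresh shell) afterwards, since bash keeps its history in memory and writes it back out on exit.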
554,012 | In Unix, I am trying to create all possible combinations of these letters at 6 specific positions, as follows: Position 1 - A or B or C Position 2 - A or C Position 3 - only C Position 4 - A or D Position 5 - B or C Position 6 - C or A So, for example, the combinations could be AACABC, BACABC,... Is there a quick way to do this with UNIX tools? | The requirements correspond to the brace expansion {A,B,C}{A,C}C{A,D}{B,C}{C,A} This expands to 48 strings (48 = 3*2*1*2*2*2 ): $ printf '%s\n' {A,B,C}{A,C}C{A,D}{B,C}{C,A} AACABC AACABA AACACC AACACA AACDBC AACDBA AACDCC AACDCA ACCABC ACCABA ACCACC ACCACA ACCDBC ACCDBA ACCDCC ACCDCA BACABC BACABA BACACC BACACA BACDBC BACDBA BACDCC BACDCA BCCABC BCCABA BCCACC BCCACA BCCDBC BCCDBA BCCDCC BCCDCA CACABC CACABA CACACC CACACA CACDBC CACDBA CACDCC CACDCA CCCABC CCCABA CCCACC CCCACA CCCDBC CCCDBA CCCDCC CCCDCA | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383555/"
]
} |
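Brace expansion as used in the answer above is a bash/zsh/ksh feature, not part of POSIX sh. If the script must run under a plain sh, the same 48 strings can be produced with nested loops; a sketch encoding the same position constraints:

```shell
#!/bin/sh
# Position 3 is fixed to C; the other positions loop over their allowed letters.
for p1 in A B C; do
  for p2 in A C; do
    for p4 in A D; do
      for p5 in B C; do
        for p6 in C A; do
          printf '%s\n' "${p1}${p2}C${p4}${p5}${p6}"
        done
      done
    done
  done
done
```

The loop nesting matches brace expansion's left-to-right order, so the output starts with AACABC and ends with CCCDCA, exactly as above.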
554,065 | In Debian you can use zgrep to grep through a gzipped file. The reason for gzipping is easy enough: files such as changelogs are vast and compress well. The issue is that with zgrep you only get the matching line itself, with no lines above or below it to give context on the change. An example to illustrate - usr/share/doc/intel-microcode$ zgrep Fallout changelog.gz * Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 * Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 As can be seen, my chip was affected by the RIDL, Fallout and Zombieload bugs, which seem to have been fixed by a software patch, INTEL-SA-00223, which is mentioned, but the output is pretty incomplete. The workaround is to use zless and then search with / RIDL or any of the other keywords, but I would like to know whether there is any other way or whether that is the only workaround. FWIW, I did come to learn that the bugs were mitigated on 2019-05-14, when Intel released software patches for these and various other issues. I did try using 'head' and 'tail' with pipes, but neither proved effective. | Zutils ( packaged in Debian ) provides a more capable version of zgrep which supports all the usual contextual parameters: $ zgrep -C3 Fallout /usr/share/doc/intel-microcode/changelog.Debian.gz * New upstream microcode datafile 20190618 + SECURITY UPDATE Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091 for Sandybridge server and Core-X processors + Updated Microcodes:-- * New upstream microcode datafile 20190514 + SECURITY UPDATE Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091 + New Microcodes: sig 0x00030678, pf_mask 0x02, 2019-04-22, rev 0x0838, size 52224 You can install it with sudo apt install zutils . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
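If installing zutils is not an option, the same effect can be had by decompressing to a pipe and letting an ordinary grep apply its context options. A self-contained sketch ( demo.gz is a throwaway file built here for illustration):

```shell
# build a small compressed file to search
printf '%s\n' 'changelog header' \
              'Implements MDS mitigation (RIDL, Fallout, Zombieload)' \
              'CVE list follows' | gzip > demo.gz

# -C1: one line of context above and below each match
zcat demo.gz | grep -C1 Fallout
```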
554,070 | Example. I have a word that I am looking for, say "cat" . And I know that the word can be located anywhere under the current working directory. So I use grep -r , and this gives the results in the following form: ./example_root/[dir1|dir2]/[SOME_ARBITRARY_SUBPATH]/cat_file, where cat_file is the file that contains the word "cat" . What if I am only interested in the /example_root/dir1/[whatever_path] prefix for my path? How do I tell that to grep ? | Zutils ( packaged in Debian ) provides a more capable version of zgrep which supports all the usual contextual parameters: $ zgrep -C3 Fallout /usr/share/doc/intel-microcode/changelog.Debian.gz * New upstream microcode datafile 20190618 + SECURITY UPDATE Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091 for Sandybridge server and Core-X processors + Updated Microcodes:-- * New upstream microcode datafile 20190514 + SECURITY UPDATE Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223 CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091 + New Microcodes: sig 0x00030678, pf_mask 0x02, 2019-04-22, rev 0x0838, size 52224 You can install it with sudo apt install zutils . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383596/"
]
} |
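To restrict the search to the /example_root/dir1 prefix asked about in the question above, the simplest approach is to point grep -r at that directory instead of at `.` — a sketch with a hypothetical tree mirroring the question's layout:

```shell
# hypothetical tree matching the question's layout
mkdir -p example_root/dir1/sub example_root/dir2/sub
echo cat > example_root/dir1/sub/cat_file
echo cat > example_root/dir2/sub/other_file

# recurse only under dir1; dir2 is never visited
grep -r cat example_root/dir1/
```

GNU grep also offers --include and --exclude-dir for finer filtering when the files of interest don't share a single prefix, e.g. grep -r --exclude-dir=dir2 cat example_root/ .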
554,127 | I would like my initramfs to have the same hash no matter when or where I build it if the contents of the files are the same (and are owned by root and have same permissions). I don't see any options in GNU cpio to strip or set timestamps of files in the archive. Is there a relatively standard way to massage the input to cpio and other archive programs so you can get reproducible products? Going along with this, is there a conventional "We aren't giving this a date" timestamp? Something most software won't wig out about? For example 0 epoch-seconds? For example, if I did a find pass on an input directory for an initramfs and manually set all the timestamps to 0, could I build that archive, extract it on another system, repeat the process, and build it again and get bit-identical files? | Newer versions of GNU cpio have a --reproducible flag which goes some way towards your requirements. My understanding is that the strip-nondeterminism tool will handle the timestamp requirement after the fact. touch will allow you to set the time before you package of course. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27395/"
]
} |
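Putting the answer's pieces together, one sketch of a deterministic build (GNU find, touch, sort and cpio assumed; rootfs stands in for the initramfs input tree): mtimes are clamped to 0 epoch-seconds, the conventional "no date" value, and the file list is byte-sorted so member order never depends on directory-read order.

```shell
# build a stand-in input tree
mkdir -p rootfs/etc
echo 'demo' > rootfs/etc/motd

# clamp every mtime to the epoch; -h avoids following symlinks (GNU touch)
find rootfs -exec touch -h -d '@0' {} +

# NUL-delimited, byte-sorted file list => deterministic member order
find rootfs -print0 | LC_ALL=C sort -z |
    cpio --null -o -H newc > initramfs.cpio
# GNU cpio >= 2.12 additionally accepts --reproducible, which zeroes
# inode/device metadata so the archive matches across systems too
```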
554,176 | Because that's what some of them are doing. > echo echo Hallo, Baby! | iconv -f utf-8 -t utf-16le > /tmp/hallo> chmod 755 /tmp/hallo> dash /tmp/halloHallo, Baby!> bash /tmp/hallo/tmp/hallo: /tmp/hallo: cannot execute binary file> (echo '#'; echo echo Hallo, Baby! | iconv -f utf-8 -t utf-16le) > /tmp/hallo> bash /tmp/halloHallo, Baby!> mksh /tmp/halloHallo, Baby!> cat -v /tmp/hallo#e^@c^@h^@o^@ ^@H^@a^@l^@l^@o^@,^@ ^@B^@a^@b^@y^@!^@^@ Is this some compatibility nuisance actually required by the standard? Because it looks quite dangerous and unexpected. | As per POSIX , input file shall be a text file, except that line lengths shall be unlimited ¹ NUL characters² in the input make it non-text , so the behaviour is unspecified as far as POSIX is concerned, so sh implementations can do whatever they want (and a POSIX compliant script must not contain NULs). There are some shells that scan the first few bytes for 0s and refuse to run the script on the assumption that you tried to execute a non-script file by mistake. That's useful because the exec*p() functions, env commands, sh , find -exec ... are required to call a shell to interpret a command if the system returns with ENOEXEC upon execve() , so, if you try to execute a command for the wrong architecture, it's better to get a won't execute a binary file error from your shell than the shell trying to make sense of it as a shell script. That is allowed by POSIX: If the executable file is not a text file, the shell may bypass this command execution. Which in the next revision of the standard will be changed to : The shell may apply a heuristic check to determine if the file to be executed could be a script and may bypass this command execution if it determines that the file cannot be a script. In this case, it shall write an error message, and shall return an exit status of 126. 
Note: A common heuristic for rejecting files that cannot be a script is locating a NUL byte prior to a <newline> byte within a fixed-length prefix of the file. Since sh is required to accept input files with unlimited line lengths, the heuristic check cannot be based on line length. That behaviour can get in the way of shell self-extractable archives though which contain a shell header followed by binary data¹. The zsh shell supports NUL in its input, though note that NULs can't be passed in the arguments of execve() , so you can only use it in the argument or names of builtin commands or functions: $ printf '\0() echo zero; \0\necho \0\n' | zsh | hd00000000 7a 65 72 6f 0a 00 0a |zero...|00000007 (here defining and calling a function with NUL as its name and passing a NUL character as argument to the builtin echo command). Some will strip them which is also a sensible thing to do. NUL s are sometimes used as padding. They are ignored by terminals for instance (they were sometimes sent to terminals to give them time to process complex control sequences (like carriage return (literally)). Holes in files appear as being filled with NULs, etc. Note that non-text is not limited to NUL bytes. It's also sequence of bytes that don't form valid characters in the locale. For instance, the 0xc1 byte value cannot occur in UTF-8 encoded text. So in locales using UTF-8 as the character encoding, a file that contains such a byte is not a valid text file and therefore not a valid sh script³. In practice, yash is the only shell I know that will complain about such invalid input. ¹ In the next revision of the standard, it is going to change to The input file may be of any type, but the initial portion of the file intended to be parsed according to the shell grammar (XREF to XSH 2.10.2 Shell Grammar Rules) shall consist of characters and shall not contain the NUL character. The shell shall not enforce any line length limits. 
explicitly requiring shells to support input that starts with a syntactically valid section without NUL bytes, even if the rest contains NULs, to account for self-extracting archives. ² and characters are meant to be decoded as per the locale's character encoding (see the output of locale charmap ), and on POSIX system, the NUL character (whose encoding is always byte 0) is the only character whose encoding contains the byte 0. In other words, UTF-16 is not among the character encodings that can be used in a POSIX locale. ³ There is however the question of the locale changing within the script (like when the LANG / LC_CTYPE / LC_ALL / LOCPATH variables are assigned) and at which point the change takes effect for the shell interpreting the input. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/554176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
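The self-extractable archives mentioned in the answer above look roughly like this: a plain-text sh header that exits before a marker line, with raw compressed tar data appended after it. A sketch (the marker name and file names are arbitrary):

```shell
# payload to ship
mkdir -p payload
echo hello > payload/greeting

# text header: the shell exits before ever reading past the marker,
# so the binary gzip data appended below it is never parsed
cat > selfextract.sh <<'EOF'
#!/bin/sh
start=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$start" "$0" | tar -xzf -
exit 0
__ARCHIVE__
EOF
tar -czf - payload >> selfextract.sh
```

Running sh selfextract.sh recreates payload/ . A shell that scanned the whole file for NUL bytes would refuse to run it, which is why the standard limits the heuristic to a fixed-length prefix of the file.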