Columns: source_id (int64), question (string), response (string), metadata (dict)
620,071
uniq seems to do something different than uniq -u , even though the description for both is "only unique lines". What's the difference here, what do they do?
This ought to be easy to test: $ cat file 1 2 3 3 4 4 $ uniq file 1 2 3 4 $ uniq -u file 1 2 In short, uniq with no options removes all but one instance of consecutively duplicated lines. The GNU uniq manual formulates that as With no options, matching lines are merged to the first occurrence. while POSIX says [...] write one copy of each input line on the output. The second and succeeding copies of repeated adjacent input lines shall not be written. With the -u option, it removes all instances of consecutively duplicated lines, and leaves only the lines that were never duplicated. The GNU uniq manual says only print unique lines and POSIX says Suppress the writing of lines that are repeated in the input.
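One detail worth illustrating, since both forms only look at adjacent lines: if duplicates are not next to each other, neither uniq nor uniq -u treats them as duplicates, which is why the input is usually sorted first. A small sketch:

$ printf '3\n1\n3\n' | uniq            # prints 3, 1, 3 -- nothing is adjacent, so nothing is merged
$ printf '3\n1\n3\n' | sort | uniq     # prints 1, 3
$ printf '3\n1\n3\n' | sort | uniq -u  # prints only 1, the line that is never repeated after sorting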
{ "source": [ "https://unix.stackexchange.com/questions/620071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442348/" ] }
621,124
Hello I have this in my ~/.bash_profile export GOPATH="$HOME/go_projects" export GOBIN="$GOPATH/bin" program(){ $GOBIN/program $1 } so I'm able to do program "-p hello_world -tSu" . Is there any way to run the program and custom flags without using the quotation marks? if I do just program -p hello_world -tSu it'll only use the -p flag and everything after the space will be ignored.
Within your program shell function, use "$@" to refer to the list of all command line arguments given to the function. With the quotes, each command line argument given to program would additionally be individually quoted (you generally want this). program () { "$GOBIN"/program "$@" } You would then call program like so: program -p hello_world -tSu or, if you want to pass hello world instead of hello_world , program -p 'hello world' -tSu Using $1 refers to only the first command line argument (and $2 would refer to the second, etc.), as you have noticed. The value of $1 would additionally be split on white-spaces and each generated string would undergo filename globbing, since the expansion is unquoted. This would make it impossible to correctly pass an argument that contains spaces or filename globbing patterns to the function.
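As a quick illustration of why the quotes around "$@" matter (the demo function below is a hypothetical example, not part of your setup):

demo () { printf 'got: %s\n' "$@"; }
demo -p 'hello world' -tSu   # prints three lines: -p, hello world, -tSu

With an unquoted $@ (or with $1), the argument containing the space would be split into separate words before reaching the program.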
{ "source": [ "https://unix.stackexchange.com/questions/621124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/443247/" ] }
621,180
I have several utility programs that do not have their own directory and are just a single executable. Typically I put the binary in /usr/local/bin. A problem I have is how to manage preference settings. One idea is to use environment variables and require the user to define such variables, for example, in their bash.rc. I am a little reluctant, however, to clutter up the bash.rc with miscellaneous preference settings for a minor program. Is there a Standard (or standard recommendation), that defines some place or method that is appropriate for storing preferences for small utility programs that do not have their own directory?
Small utilities for interactive desktop use would be expected to follow the XDG Base Directory Specification and keep their config files under $XDG_CONFIG_HOME or (if that is empty or unset) default to $HOME/.config The picture is a little less clear for non-GUI tools, since they might run on systems which are headless or which don't otherwise adhere to XDG/freedesktop standards. However, there's no obvious drawback to using $XDG_CONFIG_HOME if set or $HOME/.config if not, and it should be relatively unsurprising everywhere.
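A minimal sketch of how a small tool might resolve its configuration directory following that convention (the utility name myutil is just a placeholder):

config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myutil"
mkdir -p "$config_dir"
config_file="$config_dir/config"

The ${VAR:-default} expansion covers both the unset and the empty cases, which is what the specification asks for.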
{ "source": [ "https://unix.stackexchange.com/questions/621180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
621,523
In How does Linux “kill” a process? it is explained that Linux kills a process by returning its memory to the pool. On a single-core machine, how does it actually do this? It must require CPU time to kill a process, and if that process is doing some extremely long running computation without yielding, how does Linux gain control of the processor for long enough to kill off that process?
The kernel gains control quite frequently in normal operations: whenever a process calls a system call, and whenever an interrupt occurs. Interrupts happen when hardware wants the CPU’s attention, or when the CPU wants the kernel’s attention, and one particular piece of hardware can be programmed to request attention periodically (the timer). Thus the kernel can ensure that, as long as the system doesn’t lock up so hard that interrupts are no longer generated, it will be invoked periodically. As a result, “if that process is doing some extremely long running computation without yielding” isn’t a concern: Linux is a preemptive multitasking operating system, i.e. it multitasks without requiring running programs’ cooperation. When it comes to killing processes, the kernel is involved anyway. If a process wants to kill another process, it has to call the kernel to do so, so the kernel is in control. If the kernel decides to kill a process ( e.g. the OOM killer, or because the process tried to do something it’s not allowed to do, such as accessing unmapped memory), it’s also in control. Note that the kernel can be configured to not control a subset of a system’s CPUs itself (using the deprecated isolcpus kernel parameter), or to not schedule tasks on certain CPUs itself (using cpusets without load balancing, which are fully integrated in cgroup v1 and cgroup v2 ); but at least one CPU in the system must always be fully managed by the kernel. It can also be configured to reduce the number of timer interrupts which are generated, depending on what a given CPU is being used for. There’s also not much distinction between single-CPU (single-core, etc.) systems and multi-CPU systems; the same concerns apply to both as far as kernel control is concerned: each CPU needs to call into the kernel periodically if it is to be used for multitasking under kernel control.
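A small experiment you can run to see preemption at work (assuming a POSIX shell; the loop never yields voluntarily, yet it can still be signalled):

sh -c 'while :; do :; done' &    # a pure CPU-bound busy loop
pid=$!
sleep 2                          # the shell still gets scheduled alongside the loop
kill "$pid"                      # the kernel delivers SIGTERM at the next opportunity

Even on a single-core machine the kill succeeds, because the timer interrupt keeps handing control back to the kernel.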
{ "source": [ "https://unix.stackexchange.com/questions/621523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134585/" ] }
621,542
I am running Linux Mint 20. Cinnamon edition. I installed osdclock. When I run osd_clock it displays it in the left bottom corner. If I run osd_clock -t it runs on the top left corner. I can run it at all 4 corner. I can also offset it using -o, but it only moves it at the vertical line. I can not seem to move it along the horizontal line... But, is there any way to run it in the center of the screen? Here is the man page https://manpages.debian.org/testing/osdclock/osd_clock.1.en.html Anyone familiar with that program? Cheers.
{ "source": [ "https://unix.stackexchange.com/questions/621542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/443288/" ] }
622,283
According to https://www.geeksforgeeks.org/rev-command-in-linux-with-examples/ rev command in Linux is used to reverse the lines characterwise. e.g. wolf@linux:~$ rev Hello World! !dlroW olleH What is the example of application of rev in real life? Why do we need reversed string?
The non-standard rev utility is useful in situations where it's easier to express or do an operation from one direction of a string, but it's the reverse of what you have. For example, to get the last tab-delimited field from lines of text using cut (assuming the text arrives on standard input): rev | cut -f 1 | rev Since there is no way to express "the last field" to cut , it's easier to reverse the lines and get the first field instead. One could obviously argue that using awk -F '\t' '{ print $NF }' would be a better solution, but we don't always think about the best solutions first. The (currently) accepted answer to How to cut (select) a field from text line counting from the end? uses this approach with rev , while the runner-up answer shows alternative approaches. Another example is to insert commas into large integers so that 12345678 becomes 12,345,678 (original digits in groups of three, from the right): echo '12345678' | rev | sed -e 's/.../&,/g' -e 's/,$//' | rev See also What do you use string reversal for? over on the SoftwareEngineering SE site for more examples.
{ "source": [ "https://unix.stackexchange.com/questions/622283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409008/" ] }
622,768
The bash man page says the following about the read builtin: The exit status is zero, unless end-of-file is encountered This recently bit me because I had the -e option set and was using the following code: read -rd '' json <<EOF { "foo":"bar" } EOF I just don't understand why it would be desirable to exit non successfully in this scenario. In what situation would this be useful?
read reads a record (line by default, but ksh93/bash/zsh allow other delimiters with -d , even NUL with zsh/bash) and returns success as long as a full record has been read. read returns non-zero when it finds EOF while the record delimiter has still not been encountered. That allows you to do things like while IFS= read -r line; do ... done < text-file Or with zsh/bash while IFS= read -rd '' nul_delimited_record; do ... done < null-delimited-list and have those loops exit after the last record has been read. You can still check if there was more data after the last full record with [ -n "$nul_delimited_record" ] . In your case, read 's input doesn't contain any record as it doesn't contain any NUL character. In bash , it's not possible to embed a NUL inside a here document. So read fails because it hasn't managed to read a full record. It still stores what it has read until EOF (after IFS processing) in the json variable. In any case, using read without setting $IFS rarely makes sense. For more details, see Understanding "IFS= read -r line" .
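In the scenario from the question (bash with -e set), one hedged workaround is to absorb the expected non-zero status explicitly, for example:

set -e
IFS= read -rd '' json <<EOF || true
{
  "foo":"bar"
}
EOF
printf '%s\n' "$json"

The || true keeps errexit from aborting the script while still letting read fill the json variable with everything read up to EOF.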
{ "source": [ "https://unix.stackexchange.com/questions/622768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
622,902
We have a shell script that -- for various reasons -- wraps a vendor's application. We have system administrators and application owners who have mixed levels of familiarity with systemd. As a result, in situations where the application has failed (systemctl indicates as much), some end users (including “root” system administrators) might start an application “directly” with the wrapper script instead of using systemctl restart . This can cause issues during reboots, because systemd does not call the proper shutdown script -- because as far as it's concerned, the application was already stopped. To help guide the transition to systemd, I want to update the wrapper script to determine whether it is being called by systemd or by an end-user; if it's being called outside systemd, I want to print a message to the caller, telling them to use systemctl. How can I determine, within a shell script, whether it is being called by systemd or not? You may assume: a bash shell for the wrapper script the wrapper script successfully starts and stops the application the systemd service works as expected An example of the systemd service could be: [Unit] Description=Vendor's Application After=network-online.target [Service] ExecStart=/path/to/wrapper start ExecStop=/path/to/wrapper stop Type=forking [Install] WantedBy=multi-user.target I am not interested in Detecting the init system , since I already know it's systemd.
From Lucas Werkmeister 's informative answer on Server Fault : With systemd versions 231 and later, there's a JOURNAL_STREAM variable that is set for services whose stdout or stderr is connected to the journal. With systemd versions 232 and later, there's an INVOCATION_ID variable that is set. If you don't want to rely on those variables, or for systemd versions before 231, you can check if the parent PID is equal to 1: if [[ $PPID -ne 1 ]] then echo "Don't call me directly; instead, call 'systemctl start/stop service-name'" exit 1 fi >&2
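A sketch combining those hints (assumes systemd 231/232 or later for the variables; treat it as a heuristic rather than a guarantee, since a caller could export the same names by hand):

if [ -z "${INVOCATION_ID-}" ] && [ -z "${JOURNAL_STREAM-}" ] && [ "$PPID" -ne 1 ]; then
    echo "Don't call me directly; use 'systemctl start/stop service-name' instead." >&2
    exit 1
fi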
{ "source": [ "https://unix.stackexchange.com/questions/622902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117549/" ] }
622,924
I saved a backup from ssd A to .img file with dd command now I want to know to clone ssd A to ssd B can I do it directly from the .img file something like: dd if=/dev/backup.img of=/dev/sdc bs=1 status=progress Will this be same as doing: dd if/dev/sdb of=/dev/sdc bs=1 bs=1 status=progress Where Disk A = sdb and Disk B=sdc I already did dd if=/dev/sdc of=/dev/image.img I woud prefer to clone it from the .img file as I dont mess something up or do the opposite, so I want to know are those two methods same 100% results?
{ "source": [ "https://unix.stackexchange.com/questions/622924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/444941/" ] }
626,248
Is there a way to see what the result of a find . -exec somecommand {} \; would be with substitutions, without actually running the commands? Like a dry run (or test run or print)? For example, suppose I have the following file structure: /a/1.txt /a/2.txt /a/b/3.txt Is there a way to test find . type f -exec rm {} \; from within the a directory such that the output would printed to stdout but not executed such as: rm 1.txt rm 2.txt rm b/3.txt Update Note: rm is just an example command, I'm interested in the general case
You can run echo rm instead of rm to print each command without executing it: find . -type f -exec echo rm {} \; Also, find has a -delete option to delete the files it finds.
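For the general case, a common pattern is to keep two copies of the same find expression, one with -print as the dry run and one with the real action, so only the action part changes (somecommand here stands for whatever you intend to run):

find . -type f -print                     # dry run: list the files that would be affected
find . -type f -exec somecommand {} \;    # real run, same tests, real action

Note that the echo trick prints the arguments without shell quoting, so file names containing spaces can look ambiguous in the preview even though find passes them correctly.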
{ "source": [ "https://unix.stackexchange.com/questions/626248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6250/" ] }
626,637
I'm new here and new to bash/linux. My teacher gave me an assignment to allow a script to be run only when you're "really" root and not when you're using sudo. After two hours of searching and trying I'm beginning to think he's trolling me. Allowing only root is easy, but how do I exclude users that run it with sudo? This is what I have: if [[ $EUID -ne 0 ]]; then echo "You must be root to run this script." exit fi
The only way I could think of is to check one of the SUDO_* environment variables set by sudo: #!/usr/bin/env sh if [ "$(id -u)" -eq 0 ] then if [ -n "$SUDO_USER" ] then printf "This script has to run as root (not sudo)\n" >&2 exit 1 fi printf "OK, script run as root (not sudo)\n" else printf "This script has to run as root\n" >&2 exit 1 fi Notice that, of course, this solution is not foolproof, as you cannot stop anyone from setting such a variable before running the script: $ su Password: # SUDO_USER=whatever ./root.sh This script has to run as root (not sudo) # ./root.sh OK, script run as root (not sudo)
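If you want to be a little stricter, you could check the whole family of variables that sudo sets rather than SUDO_USER alone — still only a heuristic, for the same reason as above:

if [ "$(id -u)" -eq 0 ] && [ -n "${SUDO_USER-}${SUDO_UID-}${SUDO_COMMAND-}" ]
then
    printf "This script has to run as root (not sudo)\n" >&2
    exit 1
fi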
{ "source": [ "https://unix.stackexchange.com/questions/626637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/448526/" ] }
629,616
I was reading this message from the zsh mailing list about key bindings and I'd like to know which key I need to press: ^X^I (I think Ctrl-X Ctrl-I , the capital X and I ) ^[^@ (I think Ctrl-Esc-@ ??) ^X^[q (I think Ctrl-X Esc-q ??) ^XQ (I think Ctrl-X and Q ??) From the Archlinux wiki page on zsh ^[[1;3A ^[[1;3D From bindkey ^[[1;5C ^[[A I know that ^[ means Esc, but I'm not sure how to find others. Is there any official reference or website that lists these?
^ c is a common notation for Ctrl + c where c is a (uppercase) letter or one of @[\]^_ . It designates the corresponding control character . The correspondence is that the numeric code of the control character is the numeric code of the printable character (letter or punctuation symbol) minus 64, which corresponds to setting a bit to 0 in base 2. In addition, ^? often means character 127. Some keys send a control character: Escape = Ctrl + [ Tab = Ctrl + I Return (or Enter or ⏎ ) = Ctrl + M Backspace = Ctrl + ? or Ctrl + H (depending on the terminal configuration) Alt (often called Meta because that was the name of the key at that position on historical Unix machines) plus a printable character sends ^[ (escape) followed by that character. Most function and cursor keys send an escape sequence, i.e. the character ^[ followed by some printable characters. The details depend on the terminal and its configuration. For xterm, the defaults are documented in the manual . The manual is not beginner-friendly. Here are some tips to help: CSI means ^[[ , i.e. escape followed by open-bracket. SS3 means ^[O , i.e. escape followed by uppercase-O. "application mode" is something that full-screen programs usually turn on. Some keys send a different escape sequence in this mode, for historical reasons. (There are actually multiple modes but I won't go into a detailed discussion because in practice, if it matters, you can just bind the escape sequences of both modes, since there are no conflicts.) Modifiers ( Shift , Ctrl , Alt / Meta ) are indicated by a numerical code. Insert a semicolon and that number just before the last character of the escape sequence. Taking the example in the documentation: F5 sends ^[[15~ , and Shift + F5 sends ^[[15;2~ . For cursor keys that send ^[[ and one letter X , to indicate a modifier M , the escape sequence is ^[[1; M X . Xterm follows an ANSI standard which itself is based on historical usage dating back from physical terminals. Most modern terminal emulators follow that ANSI standard and implement some but not all of xterm's extensions. Do expect minor variations between terminals though. Thus: ^X^I = Ctrl + X Ctrl + I = Ctrl + X Tab ^[^@ = Ctrl + Alt + @ = Escape Ctrl + @ . On most terminals, Ctrl + Space also sends ^@ so ^[^@ = Ctrl + Alt + Space = Escape Ctrl + Space . ^X^[q = Ctrl + X Alt + q = Ctrl + X Escape q ^XQ = Ctrl + X Shift + q ^[[A = Up ^[[1;3A = Alt + Up ( Up , with 1;M to indicate the modifier M ). Note that many terminals don't actually send these escape sequences for Alt + cursor key . ^[[1;3D = Alt + Left ^[[1;5C = Ctrl + Right There's no general, convenient way to look up the key corresponding to an escape sequence. The other way round, pressing Ctrl + V followed by a key chord at a shell prompt (or in many terminal-based editors) inserts the escape sequence literally. See also How do keyboard input and text output work? and key bindings table?
{ "source": [ "https://unix.stackexchange.com/questions/629616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120293/" ] }
629,627
in my raspberry pi running debian os i have 2 interfaces, wlan0 & eth0. Both of interfaces got dhcp from both gateway server. How can i ping both LAN ? For example : eth0 -> gateway 10.1.22.1 -> LAN 10.0.0.0/8 wlan0 -> gateway 192.168.10.1 -> LAN 192.168.10.0/24 -> also can browse internet the route table that i get is : Destination Gateway Genmask Flags Metric Ref Use Iface default 10.1.22.1 0.0.0.0 UG 203 0 0 eth0 default 192.168.10.1 0.0.0.0 UG 304 0 0 wlan0 10.1.22.0 0.0.0.0 255.255.255.0 U 203 0 0 eth0 192.168.10.0 0.0.0.0 255.255.255.0 U 304 0 0 wlan0 I can ping LAN 10.0.0.0/8 but cannot browsing internet. How can i browse internet and also ping LAN 10.0.0.0/8 ? Sorry, it's basic linux network configuration. I'm not familiar with linux os. May someone can help me to figure it out.
{ "source": [ "https://unix.stackexchange.com/questions/629627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/452023/" ] }
629,642
I am trying to start nodepool-launcher on centOs 7 so that i can run Zuul API gateway management. Initially, I got this error: Failed at step EXEC spawning /usr/bin/nodepool-launcher.service: No such file or directory. I created a file named nodepool-launcher.service in /usr/bin directory. The file contains: [Service] ExecStart= /bin/bash /usr/bin/nodepool-launcher.service Now, I have this error: [root@mypc ~]# systemd-analyze verify nodepool-launcher.service nodepool-launcher.service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing. Error: org.freedesktop.DBus.Error.InvalidArgs: Unit is not loaded properly: Invalid argument. Failed to create nodepool-launcher.service/start: Invalid argument nodepool-launcher.service: command /usr/bin/nodepool-launcher is not executable: No such file or directory I have followed this documentation for installing and configuring nodepool. Any suggestions to overcome this problem are most welcome.
{ "source": [ "https://unix.stackexchange.com/questions/629642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/450711/" ] }
631,065
I have a directory ~/Documents/machine_learning_coursera/ . The command find . -type d -name '^machine' does not find anything I also tried find . -type d -regextype posix-extended -regex '^machine' so as to match the the beginning of the string and nothing. I tried also with -name : find . -type d -regextype posix-extended -regex -name '^machine' and got the error: find: paths must precede expression: `^machine' What am I doing wrong here?
find 's -name takes a shell/glob/ fnmatch() wildcard pattern, not a regular expression. GNU find 's -regex non-standard extension does take a regexp (old style emacs type by default), but that's applied on the full path (like the standard -path which also takes wildcards), not just the file name (and are anchored implicitly) So to find files of type directory whose name starts with machine , you'd need: find . -name 'machine*' -type d Or: find . -regextype posix-extended -regex '.*/machine[^/]*' -type d (for that particular regexp, you don't need -regextype posix-extended as the default regexp engine will work as well) Note that for the first one to match, the name of the file also needs to be valid text in the locale, while for the second one, it's the full path that needs to be valid text.
{ "source": [ "https://unix.stackexchange.com/questions/631065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440181/" ] }
631,093
I have a Dell Inspiron 15R N5110 Laptop (Core i5 2nd Gen/4 GB/500 GB/Windows 7). I previously installed Windows 10 on my system, but my computer was very slow, so I decided to install Linux on it. It is currently running Windows 7. My problem is: the only drivers I have for my laptop are Windows 7's and I can't find Linux drivers for it. How can I download my laptop's drivers for Linux? And does my laptop support Linux?
It is very unlikely that you will need any device drivers beyond those that already come with most popular Linux distributions, especially on a laptop that isn't brand new. The main exception is GPUs used for gaming, such as NVidia and AMD Radeon cards: some manufacturers provide their own drivers, but even then most hardware is also supported by the Linux community. In any case, a possible lack of Linux support from the manufacturer does not prevent you from installing Linux: you can install the system and later add any manufacturer-provided driver (if available and necessary), though again, that is rarely needed on laptops. If you are not very familiar with Linux, I suggest choosing a distribution with a friendly, intuitive interface, such as Linux Mint Cinnamon Edition - that would definitely be my pick if you asked for a recommendation. You can also try Ubuntu, Pop!_OS, Elementary OS, Deepin or Fedora. Hope this helps.
{ "source": [ "https://unix.stackexchange.com/questions/631093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/452982/" ] }
631,501
Does anybody know why this is happening and how to fix it? me@box:~$ echo "eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQ" | base64 -di {"foo":"bar","baz":"bat"}base64: invalid input
If you do the reverse, you'll note that the string isn't complete: $ echo '{"foo":"bar","baz":"bat"}' | base64 eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQo= $ echo "eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQo=" | base64 -di {"foo":"bar","baz":"bat"} Extracts of Why does base64 encoding require padding if the input length is not divisible by 3? What are Padding Characters? Padding characters help satisfy length requirements and carry no meaning. However, padding is useful in situations where base64 encoded strings are concatenated in such a way that the lengths of the individual sequences are lost, as might happen, for example, in a very simple network protocol. If unpadded strings are concatenated, it's impossible to recover the original data because information about the number of odd bytes at the end of each individual sequence is lost. However, if padded sequences are used, there's no ambiguity, and the sequence as a whole can be decoded correctly.
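If you cannot change whatever produced the string, one workaround is to restore the padding yourself before decoding — a rough sketch, assuming the input is otherwise valid Base64 whose length simply isn't a multiple of four:

s='eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQ'
while [ $(( ${#s} % 4 )) -ne 0 ]; do s="$s="; done   # append '=' until the length is a multiple of 4
printf '%s\n' "$s" | base64 -d

Once the padding is back, base64 -d accepts the input without complaint.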
{ "source": [ "https://unix.stackexchange.com/questions/631501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38649/" ] }
632,418
The below fails: sudo -u chris ls /root ls: cannot open directory '/root': Permission denied While the below succeeds: sudo ls /root ... I do not understand why. I assume -u just changes the $USER /running user to the parameter provided in addition to having root privliges. What is the cause behind this behavior?
sudo -u chris runs the given command as user chris , not as root with USER set to chris . So if chris can’t access /root , sudo -u chris won’t change that. See man sudo : -u user , --user = user Run the command as a user other than the default target user (usually root ). sudo isn’t specifically a “run as root” tool; it’s a “run as some other user or group” tool.
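A quick way to see this for yourself (assuming a user named chris exists):

sudo -u chris id -un    # prints "chris" -- the command really runs as chris
sudo id -un             # prints "root"  -- the default target user is root

So sudo -u chris ls /root fails for the same reason a plain login as chris would.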
{ "source": [ "https://unix.stackexchange.com/questions/632418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
632,422
I have the following tsv-file (extract): File 1: NC_002163.1 RefSeq source 1 1641481 . + . organism=Campylobacter jejuni subsp. jejuni NCTC 11168;mol_type=genomic DNA;strain=NCTC 11168;sub_species=jejuni;db_xref=taxon:192222 NC_002163.1 RefSeq misc_feature 19386 19445 . - . inference=protein motif:TMHMM:2.0;note=3 probable transmembrane helices predicted for Cj0012c Further possible text NC_002163.1 RefSeq misc_feature 19482 19550 . - . inference=protein motif:TMHMM:2.0;note=3 probable transmembrane helices predicted for Cj0014c Sometimes there is more text NC_002163.1 RefSeq misc_feature 22853 22921 . - . inference=protein motif:TMHMM:2.0;note=5 probable transmembrane helices predicted for Cj0017c ... As you can see, the last column contains some identifiers ( Cj0014c, Cj0017c, etc ). Some of these IDs are saved in another file File 2: Cj0012c Cj0027 CjNC9 Cjp01 SRP_RNA_Cjs03 CjNC11 CjNC1 Cj0113 Cjp03 Cj0197c Cj0251c How can I use awk (or any bash-script-tool) to eliminate those lines from File 1, that contain as substring in the last column, any ID that is found in File 2? For example, the second line of File 1 would be deleted, since Cj0012c is found in File 2 and is part of the string in the last column of the line in File 1. I've been struggling already many hours, so thanks for any help (and, if possible, an explanation of the code!)
{ "source": [ "https://unix.stackexchange.com/questions/632422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/454312/" ] }
634,046
Often I find my self making single character aliases, because after all, they exist to save input time. I'm curious if this should be avoided. I do not know of any conflicts.
Things to avoid: standard or common commands with single character names: w (show logged in users' activity), X (X Window System server), R (R programming language interpreter), [ (similar to test ) builtins of your shell or of common shells: [ , . , : , - , r shell keywords: { , } , ! ? and * wildcard characters special characters in the shell syntax: `"$&();'#~|\<> , (also ^ , % in some shells), SPC, TAB, NL (and other blanks with some shells) better avoid non-ASCII characters (as those have different encoding depending on the locale) better avoid control characters (beside TAB and NL already mentioned above) as they're not that easy to enter, and depending on context, not always visible, or with different representations. Only zsh will let you define and use an alias for the NUL character. bash lets you define an alias for ^A (the control character with byte value 1) but not use it apparently. To find commands with single character names: bash : compgen -c | grep -x . | sort -u (also includes keywords, assumes command names don't contain newline characters) zsh : type -m '?' (or type -pm '?' if you don't want functions/aliases/builtins/keywords). Debian or derivatives: to find any command in any package with single character name: $ apt-file find -x '/s?bin/.$' coreutils: /usr/bin/[ e-wrapper: /usr/bin/e python3-q-text-as-data: /usr/bin/q r-base-core: /usr/bin/R r-base-core: /usr/lib/R/bin/R r-cran-littler: /usr/bin/r r-cran-littler: /usr/lib/R/site-library/littler/bin/r wims: /var/lib/wims/public_html/bin/c xserver-xorg-core: /usr/bin/X
{ "source": [ "https://unix.stackexchange.com/questions/634046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420883/" ] }
634,048
I am new to shell world, writing a simple script to pull files from more that 300 servers. Wanted to know if I am writing like below then it will login to all 300 servers in one go and pull files or it will go one by one. Also I have passwordless login for one user that user I can mention in $username or I need to create other script for that. #!/bin/bash cd /backup for server in $(cat server.txt) do scp -r $username@$server:/tmp/backup/*.txt* . done
{ "source": [ "https://unix.stackexchange.com/questions/634048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/356506/" ] }
634,112
I've been having trouble with some network configuration lately which has been tricky to resolve. It seems this would be much easier to diagnose if I knew which direction the traffic was failing to get through. Since all ping requests receive no responses back I'd like to know if the ping-request packets are getting through and the responses failing, or if it's the requests themselves that are failing. To be clear, standard utilities like ping and traceroute rely on sending a packet out from one machine and receiving a packet in response back to that same machine. When no response comes back, it's always impossible to tell if the initial request failed to get through, or the response to it was blocked or even if the response to it was simply never sent. It's this specific detail, "which direction is the failure", that I'd like to analyse. Are there any utilities commonly available for Linux which will let me monitor for incoming ICMP ping requests?
tcpdump can do this, and is available pretty much everywhere: tcpdump -n -i enp0s25 icmp will dump all incoming and outgoing ICMP packets on enp0s25 . To see only ICMP echo requests: tcpdump -n -i enp0s25 "icmp[0] == 8" ( -n avoids DNS lookups, which can delay packet reporting and introduce unwanted traffic of their own.) This allows you to find if it is receiving the packets from the other machine (from which you would e.g. ping it), so the problem is with the return path, or if they directly don't arrive.
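If you also want to see whether replies are being sent back out, you can match both ICMP echo types in one filter (these are standard pcap-filter keywords, so this should work with any libpcap-based tcpdump):

tcpdump -n -i enp0s25 'icmp[icmptype] == icmp-echo or icmp[icmptype] == icmp-echoreply'

Seeing requests arrive but no replies leave points at the local host; seeing replies leave but never hearing back points at the return path.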
{ "source": [ "https://unix.stackexchange.com/questions/634112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20140/" ] }
634,315
I've been struggling with this for a couple days so I'm hoping someone on SE can help me. I've downloaded a large file from Dropbox using wget (following command) wget -O folder.zip https://www.dropbox.com/sh/.../.../dropboxfolder?dl=1 I'm sure it's a zip because 1), file dropboxfolder.zip yields dropboxfolder.zip: Zip archive data, at least v2.0 to extract , and 2) the download and extraction works find on my Windows machine. When I try to unzip to the current directory using unzip dropboxfolder.zip , on Linux, I get the following output: warning: stripped absolute path spec from / mapname: conversion of failed creating: subdir1/ creatingL subdir2/ extracting: subdir1/file1.tif error: invalid zip file with overlapped components (possible zip bomb) I'm unsure what the issue is, since as I said it works fine on Windows. Since the zip is rather large (~19GB) I would like to avoid transferring it bit by bit, so I would be very thankful for any help. I've run unzip -t but it gives the same error. When listing all the elements in the archive it shows everything as it should be. Could it be an issue with the file being a tif file?
I had the exact same issue with dropbox , wget and zip . I used an alternative archiving tool and extracted the file with: 7z e file.zip
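One detail worth knowing if you try this: 7z e extracts everything into the current directory, flattening the archive, while 7z x preserves the directory structure — for an archive full of subdirectories the latter is usually what you want:

7z x file.zip    # 'x' keeps subdir1/, subdir2/ etc. intact; 'e' would dump all files into the current directory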
{ "source": [ "https://unix.stackexchange.com/questions/634315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/455986/" ] }
634,710
What exactly is a "stable" Linux distribution and what are the (practical) consequences of using an "unstable" distribution? Does it really matter for casual users (i.e. not sysadmins) ? I've read this and this but I haven't got a clear answer yet. "Stable" in Context: I've seen words and phrases like "Debian Stable" and "Debian Unstable" and things like "Debian is more stable than Ubuntu".
In the context of Debian specifically, and more generally when many distributions describe themselves, stability isn’t about day-to-day lack of crashes; it’s about the stability of the interfaces provided by the distribution , both programming interfaces and user interfaces. It’s better to think of stable v. development distributions than stable v. “unstable” distributions. A stable distribution is one where, after the initial release, the kernel and library interfaces won’t change. As a result, third parties can build programs on top of the distribution, and expect them to continue working as-is throughout the life of the distribution. A stable distribution provides a stable foundation for building more complex systems. In RHEL, whose base distribution moves even more slowly than Debian, this is described explicitly as API and ABI stability . This works forwards as well as backwards: thus, a binary built on Debian 10.5 should work as-is on 10.9 but also on the initial release of Debian 10. (This is one of the reasons why stable distributions never upgrade the C library in a given release.) This is a major reason why bug fixes (including security fixes) are rarely done by upgrading to the latest version of a given piece of software, but instead by patching the version of the software present in the distribution to fix the specific bug only. Keeping a release consistent also allows it to be considered as a known whole, with a better-defined overall behaviour than in a constantly-changing system; minimising the extent of changes made to fix bugs helps keep the release consistent. Stability as defined for distributions also affects users, but not so much through program crashes etc.; rather, users of rolling distributions or development releases of distributions (which is what Debian unstable and testing are) have to regularly adjust their uses of their computers because the software they use undergoes major upgrades (for example, bumping LibreOffice). This doesn’t happen inside a given release stream of a stable distribution. This could explain why some users might perceive Debian as more stable than Ubuntu: if they track non-LTS releases of Ubuntu, they’ll get major changes every six months, rather than every two years in Debian. Programs in a stable distribution do end up being better tested than in a development distribution, but the goal isn’t for the development distribution to contain more bugs than the stable distribution: after all, packages in the development distribution are always supposed to be good enough for the next release. Bugs are found and fixed during the stabilisation process leading to a release though , and they can also be found and fixed throughout the life of a release. But minor bugs are more likely to be fixed in the development distribution than in a stable distribution. In Debian, packages which are thought to cause issues go to “experimental”, not “unstable”.
{ "source": [ "https://unix.stackexchange.com/questions/634710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/449342/" ] }
635,132
So, to make a long story short, I wrote a (python) program which opened a lot of files, writes data in it, and then deleted the files, but didn't properly close the file handles. After some time, this program halted due to lack of disk space. Auto-complete in bash failed with cannot create temp file for here-document: No space left on device" , and lsof -nP +L1 showed a ton of no-longer existing files. After killing my program, all the filehandles were closed, disk space was "free" again and everything was fine. Why did this happen? The disk space wasn't physically filled up. Or is the number of file handles limited?
Deleting a file in Unix simply removes a named reference to its data (hence the syscall name unlink / unlinkat , rather than delete ). In order for the data itself to be freed, there must be no other references to it. References can be taken in a few ways: There must be no further references to this data on the filesystem ( st_nlink must be 0) -- this can happen when hard linking. Otherwise, we'd drop the data while there's still a way to access it from the filesystem. There must be no further references to this data from open file handles (on Linux, the relevant struct file 's f_count in the kernel must be 0). Otherwise, the data could still be accessed or mutated by reading or writing to the file handle (or /proc/pid/fd on Linux), and we need somewhere to continue to store it. Once both of these conditions are fulfilled, the data is eligible to be freed. As your case violates condition #2 -- you still have open file handles -- the data continued to be stored on disk (since it has nowhere else to go) until the file handle was closed. Some programs even use this in order to simplify cleaning up their data. For example, imagine a program which needs to have some large data stored on disk for intermediate work, but doesn't need to share it with others. If it opens and then immediately deletes that file, it can use it without having to worry about making sure it cleans up on exit -- the open file descriptor reference count will naturally drop to 0 on close(fd) or exit, and the relevant space will be freed whether the program exits normally or not. Detection Deleted files which are still being held open by a file descriptor can be found with lsof , using something like the following: % lsof -nP +L1 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME pulseaudi 1799 cdown 6u REG 0,1 67108864 0 1025 /memfd:pulseaudio (deleted) chrome 46460 cdown 45r REG 0,27 131072 0 105357 /dev/shm/.com.google.Chrome.gL8tTh (deleted) This lists all open files with an st_nlink value of less than one. Mitigation In your case you were able to close the file handles by terminating the process, which is a good solution if possible. In cases where that isn't possible, on Linux you can access the data backed by the file descriptor via /proc/pid/fd and truncate it to size 0, even if the file has already been deleted: : > "/proc/pid/fd/$num" Note that, depending on what your application then does with this file descriptor, the application may be varying degrees of displeased about having the data changed out from under it like this. If you are certain that the file descriptor has simply leaked and will not be accessed again, then you can also use gdb to close it. First, use lsof -nP +L1 or ls -l /proc/pid/fd to find the relevant file descriptor number, and then: % gdb -p pid --batch -ex 'call close(num)' To answer your other question, although it's not the cause of your problem: Is the number of file [descriptors] limited? The number of file descriptors is limited, but that's not the limit you're hitting here. "No space left on device" is ENOSPC , which is what we generate when your filesystem is out of space. If you were hitting a file descriptor limit, you'd receive EMFILE (process-level shortage, rendered by strerror as "Too many open files") or ENFILE (system-level shortage, rendered by strerror as "Too many open files in system") instead. The process level soft limit can be inspected with ulimit -Sn , and the system-level limit can be viewed at /proc/sys/fs/file-max .
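The effect is easy to reproduce in a shell if you want to watch it happen (a throwaway file under /tmp, size chosen arbitrarily, GNU head assumed for the -c suffix):

exec 3>/tmp/demo.dat                 # open fd 3 for writing
head -c 100M /dev/urandom >&3        # write some data through it
rm /tmp/demo.dat                     # unlink the name; st_nlink is now 0
df -h /tmp                           # the space is still accounted for
exec 3>&-                            # close the last reference
df -h /tmp                           # now the space is actually freed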
{ "source": [ "https://unix.stackexchange.com/questions/635132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90771/" ] }
636,324
I'm learning AWK these days for text processing. But I'm very much confused about AWK syntax. I read on Wikipedia that the syntax follows this format: (conditions) {actions} I assumed I can follow the same syntax in a BEGIN and END block. But I'm getting a syntax error when I run the following script. awk 'BEGIN{} (1 == 1) {print "hello";} END{ (1==1) {print "ended"}}' $1 However, if I make a little bit of a change inside the END block and add 'if' before the condition, it runs just fine. awk 'BEGIN{} (1 == 1) {print "hello";} END{ if (1==1) {print "ended"}}' $1 Why is it mandatory to write 'if' in END block while it's not needed in normal blocks?
AWK programs are a series of rules, and possibly functions. Rules are defined as a pattern ( (conditions) in your format) followed by an action ; either are optional. BEGIN and END are special patterns . Thus in BEGIN {} (1 == 1) { print "hello"; } END { if (1 == 1) { print "ended" } } the patterns are BEGIN , (1 == 1) (and the parentheses aren’t necessary), and END . Blocks inside braces after a pattern (or without a pattern, to match everything) are actions . You can’t write patterns as such inside a block, each block is governed by the pattern which introduces it. Conditions inside an action must be specified as part of an if statement (or other conditional statement, while etc.). The actions above are {} (the empty action), { print "hello"; } , and { if (1 == 1) { print "ended" } } . A block consisting of { (1 == 1) { print "ended" } } results in a syntax error because (1 == 1) is a statement here, and must be separated from the following statement in some way; { 1 == 1; { print "ended" } } would be valid, but wouldn’t have the effect you want — 1 == 1 will be evaluated, then separately, { print "ended" } .
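Two small one-liners that show the distinction (any input file will do):

awk 'NR > 1 { print "not the first line" }' file           # NR > 1 is a pattern, allowed at the top level
awk 'END { if (NR > 1) print "more than one line" }' file  # inside an action, the test needs if

Writing END { NR > 1 { ... } } fails for the same reason the question's version does: a bare block cannot follow an expression inside an action.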
{ "source": [ "https://unix.stackexchange.com/questions/636324", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247138/" ] }
636,346
I've had a look at this (and the forum thread here ) and this . I've tried running in Python and also at the command line. I've double-checked: some files have definitely been deleted from the source, but are present in the link-dest destination. I've tried messing around with numerous options. I've tried adding forward slash to the end of the paths to see if that might make a difference. The paths in all cases are simple directories, never ending in glob patterns. I've also looked at the man pages. Incidentally, this shouldn't matter, but you never know: I'm running this under WSL (W10 OS). Nothing seems to work. By the way, the files deleted in source do get deleted (or rather not copied) in the target location (if not a dry run). What I'm trying to do is to find out what changes have occurred between the link-dest location and the source, with a view to cancelling the operation if nothing has changed. But to do that I have to be able to get a list of new or modified files and also files which have been deleted. This is the Python code I've been trying: link_dest_setting = '' if most_recent_snapshot_of_any_type == None \ else f'--link-dest={most_recent_snapshot_of_any_type[0]}' rsync_command_args = [ 'rsync', '-v', # '--progress', # '--update', '--recursive', '--times', '--delete', # '--info=DEL', '-n', link_dest_setting, source_dir, new_snapshot_path, ] print( f'running this: {rsync_command_args}') result = subprocess.run( rsync_command_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) rsync_result_stdout = result.stdout.decode( 'utf-8' ) print( f'rsync_result stdout |{rsync_result_stdout}|') rsync_result_stderr = result.stderr.decode( 'utf-8' ) print( f'rsync_result stderr |{rsync_result_stderr}|') Typical stdout (with dry run): rsync_result stdout |sending incremental file list ./ MyModifiedFile.odt sent 1,872 bytes received 25 bytes 3,794.00 bytes/sec total size is 6,311,822 speedup is 3,327.27 (DRY RUN) | (no errors are reported in stderr ) Just found another option, -i . Using this things get quite mysterious: rsync_result stdout |sending incremental file list .d..t...... ./ >f.st...... MyModifiedFile.odt sent 53,311 bytes received 133 bytes 35,629.33 bytes/sec total size is 6,311,822 speedup is 118.10 | Edit Typical BASH command: rsync -virtn --delete --link-dest=/mnt/f/link_dest_dir /mnt/d/source_dir /mnt/f/destination_dir Dry run which, in principle, should show files/dirs present under link_dest_dir but NOT present (deleted) under source_dir. I can't get this to be shown. In any event I think the Python answer is likely to be a preferable solution, because the scanning STOPS at the first detection of a difference. Edit 2 (in answer to roaima's question "what are you saving?") My "My Documents" dir has about 6 GB, and thousands of files. It takes my Python script 15 s or so to scan it, if no differences are found (shorter if one is). rsync typically takes about 2 minutes to do a copy (using hard links for the vast majority of the files). If that were found to be unnecessary, because there had been no change between the source and the link-dest location, I would then have to delete all those files and hard links. The deletion operation on its own is very expensive in terms of time. Incidentally, this is an external HD, spinning plates type. Not the slowest storage location ever, but it has the limitations it has. 
Just as importantly, because rsync does not appear to be capable, at least according to what I have found, of reporting on files which have been deleted in the source, how would I even know that this new snapshot was identical to the link-dest snapshot? In these snapshot locations I only want to keep a limited number (e.g. 5) snapshots, but I only want to add a new snapshot when it is different to its predecessor. So although the script may run every 10 minutes, the gap between adjacent snapshots may be 40 minutes, or much longer. I see you (roaima) have a high rep, and seem to specialise quite a bit in rsync . The simple question I want answering is: is it possible for rsync , on a dry run or not, to report on files/dirs deleted in the source relative to the link-dest ? If not, is this a bug/deficiency? Because the man pages certainly seem to claim (e.g. with --info=DEL ) that this should happen.
AWK programs are a series of rules, and possibly functions. Rules are defined as a pattern ( (conditions) in your format) followed by an action ; either are optional. BEGIN and END are special patterns . Thus in BEGIN {} (1 == 1) { print "hello"; } END { if (1 == 1) { print "ended" } } the patterns are BEGIN , (1 == 1) (and the parentheses aren’t necessary), and END . Blocks inside braces after a pattern (or without a pattern, to match everything) are actions . You can’t write patterns as such inside a block, each block is governed by the pattern which introduces it. Conditions inside an action must be specified as part of an if statement (or other conditional statement, while etc.). The actions above are {} (the empty action), { print "hello"; } , and { if (1 == 1) { print "ended" } } . A block consisting of { (1 == 1) { print "ended" } } results in a syntax error because (1 == 1) is a statement here, and must be separated from the following statement in some way; { 1 == 1; { print "ended" } } would be valid, but wouldn’t have the effect you want — 1 == 1 will be evaluated, then separately, { print "ended" } .
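To see the rule structure in action, here is a small self-contained illustration (not taken from the question, just a sketch of the corrected form):
$ printf 'a\nb\n' | awk 'BEGIN { print "started" } /a/ { print "matched:", $0 } END { if (NR > 1) print "ended after", NR, "lines" }'
started
matched: a
ended after 2 lines
BEGIN and END are patterns whose actions run before and after the input, /a/ is an ordinary pattern selecting lines, and the condition inside END is written as an if statement inside the action, which is the fix the question's program needs.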
{ "source": [ "https://unix.stackexchange.com/questions/636346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220752/" ] }
637,002
I want to launch a test, then wait a little and then start a program, using a shell script: #!/bin/bash sleep 3 & # test sleep 1 # wait some sleep 4 & # run program under test fg 1 # <-- I want to put test back in "foreground", but yields "line 5: fg: no job control" I presume I misunderstood what "foreground" means, but is there some other way to do what I want? (I tried prefixing with jobs -x and nohup , but I suspect my misunderstanding runs deeper.)
You just need to enable job control in the shell with set -m : #!/bin/bash set -m sleep 3 & # test sleep 1 # wait some sleep 4 & # run program under test jobs fg %1 Quoting from the bash manual: -m Monitor mode. Job control is enabled. This option is on by default for interactive shells on systems that support it (see JOB CONTROL above). Background processes run in a separate process group and a line containing their exit status is printed upon their completion.
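If the goal is only to block until the background test finishes, rather than to reattach it to the terminal, a plain wait also works in scripts and needs no job control at all. A minimal sketch of that variant, using the same placeholder sleep commands as above:
#!/bin/bash
sleep 3 &          # test
test_pid=$!
sleep 1            # wait some
sleep 4 &          # run program under test
wait "$test_pid"   # returns when the test process exits
Here $! captures the PID of the most recently started background job, so wait targets the test specifically instead of every background job.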
{ "source": [ "https://unix.stackexchange.com/questions/637002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73741/" ] }
638,221
I'm not sure if it is OK to share the website whose source I tried to get, but I think it is necessary for a better explanation, and I apologize in advance if it's not. The command: curl -k -L -s https://www.mi.com The output was binary data for some reason, and I got the following warning instead of the page: Warning: Binary output can mess up your terminal. Use "--output -" to tell Warning: curl to output it to your terminal anyway, or consider "--output Warning: <FILE>" to save to a file. How can I read the page's HTML source? Thanks!
The returned data is compressed, so you can instruct curl to handle the decompression directly by adding the --compressed option: curl -k -L -s --compressed https://www.mi.com
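If you want to confirm what the "binary" data actually is before deciding how to handle it, you can inspect it first; these are just diagnostic sketches using the same URL as above, and the result depends on what the server chooses to send:
$ curl -ksL https://www.mi.com --output - | file -          # typically reports gzip compressed data here
$ curl -ksL https://www.mi.com --output - | gunzip | head   # manual decompression, if it is indeed gzip
--compressed remains the cleaner fix, since it also sends an Accept-Encoding header and transparently handles whichever encoding the server picks.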
{ "source": [ "https://unix.stackexchange.com/questions/638221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417850/" ] }
640,062
How do I correctly run a few commands with an altered value of the IFS variable (to change the way field splitting works and how "$*" is handled), and then restore the original value of IFS ? I know I can do ( IFS='my value here' my-commands here ) to localize the change of IFS to the sub-shell, but I don't really want to start a sub-shell, especially not if I need to change or set the values of variables that needs to be visible outside of the sub-shell. I know I can use saved_IFS=$IFS; IFS='my value here' my-commands here IFS=$saved_IFS but that seems to not restore IFS correctly in the case that the original IFS was actually unset . Looking for answers that are shell agnostic (but POSIX). Clarification: That last line above means that I'm not interested in a bash -exclusive solution. In fact, the system I'm using most, OpenBSD, does not even come with bash installed at all by default, and bash is not a shell I use for anything much other than to answer questions on this site. It's much more interesting to see solutions that I may use in bash or other POSIX-like shells without making an effort to write non-portable code.
Yes, in the case when IFS is unset, restoring the value from $saved_IFS would actually set the value of IFS (to an empty value). This would affect the way field splitting of unquoted expansions is done, it would affect field splitting for the read built-in utility, and it would affect the way the positional parameters are combined into a string when using "$*" . With an unset IFS these things would happen as if IFS had the value of a space, a tab character, and a newline character, but with an empty value, there would be no field splitting and the positional parameters would be concatenated into a string with no delimiter when using "$*" . So, there's a difference. To correctly restore IFS , consider setting saved_IFS only if IFS is actually set to something. unset saved_IFS [ -n "${IFS+set}" ] && saved_IFS=$IFS The parameter substitution ${IFS+set} expands to the string set only if IFS is set, even if it is set to an empty string. If IFS is unset, it expands to an empty string, which means that the -n test would be false and saved_IFS would remain unset. Now, saved_IFS is unset if IFS was initially unset, or it has the value that IFS had, and you can set whatever value you want for IFS and run your code. When restoring IFS , you do a similar thing: unset IFS [ -n "${saved_IFS+set}" ] && { IFS=$saved_IFS; unset saved_IFS; } The final unset saved_IFS isn't really necessary, but it may be good to clean up old variables from the environment. An alternative way of doing this, suggested by LL3 in comments (now deleted), relies on prefixing the unset command by : , a built-in utility that does nothing, effectively commenting out the unset , when it's not needed: saved_IFS=$IFS ${IFS+':'} unset saved_IFS This sets saved_IFS to the value of $IFS , but then unsets it if IFS was unset. Then set IFS to your value and run you commands. Then restore with IFS=$saved_IFS ${saved_IFS+':'} unset IFS (possibly followed by unset saved_IFS if you want to clean up that variable too). Note that : must be quoted, as above, or escaped as \: , so that it isn't modified by $IFS containing : (the unquoted parameter substitution invokes field splitting, after all).
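Putting the first variant together, a complete POSIX-sh sketch that splits $PATH on ':' and then restores IFS exactly as it was (including the unset case) might look like this; the variable names are just illustrative:
unset saved_IFS
[ -n "${IFS+set}" ] && saved_IFS=$IFS

IFS=:
set -f            # also suppress globbing while we rely on word splitting
set -- $PATH      # deliberately unquoted so it splits on ':'
set +f
printf '%s\n' "$@"

unset IFS
[ -n "${saved_IFS+set}" ] && { IFS=$saved_IFS; unset saved_IFS; }
Afterwards IFS is set, empty, or unset exactly as it was before the block ran.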
{ "source": [ "https://unix.stackexchange.com/questions/640062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116858/" ] }
640,272
I am using Ubuntu 20.04 for Windows 10 (WSL2) on a Haswell laptop and I am getting about 0.6 bytes per second. As in 6 bytes total after 10 seconds of waiting. This is unacceptable. What is the problem? EDIT: This only appears to be an issue when operating in WSL2 mode. WSL1 = 40MiB/s WSL2 = 0.6 byte/s
Both /dev/random and /dev/urandom in Linux are cryptographically secure pseudorandom number generators. In older versions of the Linux kernel, /dev/random would block once initialized until additional sufficient entropy was accumulated, whereas /dev/urandom would not. Since WSL2 is a virtual machine with a real Linux kernel, it has a limited set of entropy sources from which it can draw entropy and must rely on the host system for most of its entropy. However, as long as it has received enough entropy when it boots, it's secure to use the CSPRNGs. It sounds like in your environment, the CSPRNG has been seeded at boot from Windows, but isn't reseeded at a high rate. That's fine, but it will cause /dev/random to block more frequently than you want. Ultimately, this is a problem with the configuration of WSL2. WSL1 probably doesn't have this problem because in such a case, /dev/random probably doesn't block and just uses the system CSPRNG, like /dev/urandom . In more recent versions of Linux , the only time that /dev/random blocks is if enough entropy hasn't been accumulated at boot to seed the CSPRNG once; otherwise, it is completely equivalent to /dev/urandom . This decision was made because there is no reasonable security difference in the two interfaces provided the pool has been appropriately initialized. Since there's no measurable difference in these cases, if /dev/random is blocking and is too slow for you, the proper thing to do is use /dev/urandom , since they are the output of the same CSPRNG (which is based on ChaCha20). The upstream Linux behavior will likely be the default in a future version of WSL2 anyway, since Microsoft will eventually incorporate a newer version of Linux.
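If you want to confirm that the CSPRNG itself is fast on your WSL2 instance and that only the blocking behaviour of /dev/random is the problem, a quick test is to read from /dev/urandom, which never blocks once seeded:
$ dd if=/dev/urandom of=/dev/null bs=1M count=100 status=progress
On typical hardware this should report hundreds of MB/s rather than bytes per second, which tells you the fix is to switch the consumer to /dev/urandom (or move to a kernel where /dev/random no longer blocks after initialization).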
{ "source": [ "https://unix.stackexchange.com/questions/640272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/430596/" ] }
640,343
I accidentally executed sudo rm /* instead of sudo rm ./* inside a directory whose contents I wanted to delete, and I have basically messed up my system. None of the basic commands like ls , grep etc. are working, and none of my applications are opening, like chromium, slack, image viewer etc. I tried to look up my problem on the internet and found this question, but none of the solutions there work for me. I am on an Arch Linux desktop, and I haven't logged out of my system since this happened, because I'm afraid I won't be able to log back in, as suggested here . Also, I don't have a live USB of an Arch Linux image file, if that helps. Any help on how should I proceed further to make my system go back to normal, would be appreciated. Thanks! EDIT : I'm attaching the outputs of some commands: $ echo /* /boot /dev /etc /home /lost+found /mnt /opt /proc /root /run /srv /sys /tmp /usr /var $ echo /usr/* /usr/bin /usr/include /usr/lib /usr/lib32 /usr/lib64 /usr/local /usr/sbin /usr/share /usr/src Also, echo /usr/bin/* gives me a long list of directories in the format /usr/bin/{command} where {command} is any command that I could have run from the terminal had I not messed my system up. Please let me know if any other information is needed!
Arch Linux has four symbolic links in / : bin -> usr/bin lib -> usr/lib lib64 -> usr/lib sbin -> usr/bin You should be able to recreate them (using a Live-USB or an emergency shell) or by calling the linker (with root privileges and in / as working directory) directly: /usr/lib/ld-linux-x86-64.so.2 /usr/bin/ln -s usr/lib lib64 This should restore basic functionality in your running system. Then restoring the other symbolic links should be easy. If you don't have root privileges you can reboot into a recovery shell and fix the problems there. Why does /usr/bin/ls and other commands fail? Without the /lib64 symbolic link dynamically linked programs will not find the dynamic linker/loader because the path is hardcoded to /lib64/ld-linux-x86-64.so.2 (c.f. ldd /usr/bin/ln ).
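Once /lib64 is back, dynamically linked binaries work again, so the remaining three links can be recreated with plain ln. A sketch, again run as root with / as the working directory and using the paths listed above for Arch:
cd /
ln -s usr/bin bin
ln -s usr/lib lib
ln -s usr/bin sbin
If any of them still exist, ln will refuse to overwrite them, which is a harmless error in this situation.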
{ "source": [ "https://unix.stackexchange.com/questions/640343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/454328/" ] }
640,346
Recently i got an update which broke telegram and now i can't install it using apt on my KDE Neon installation. It used to work perfectly fine before that. I got the repo at this article https://www.omgubuntu.co.uk/2019/08/how-to-install-telegram-on-ubuntu and I added it by using the command sudo add-apt-repository ppa:atareao/telegram Using the below command used to install it perfectly fine $ sudo apt install telegram-desktop but after some update i have been getting this error message and I don't understand why. Reading package lists... Done Building dependency tree Reading state information... Done Starting pkgProblemResolver with broken count: 1 Starting 2 pkgProblemResolver with broken count: 1 Investigating (0) telegram-desktop:amd64 < none -> 2.1.7+ds-2~ubuntu20.04.1 @un puN Ib > Broken telegram-desktop:amd64 Depends on libopenal1:amd64 < none | 1:1.19.1-1 @un uH > (>= 1.14) Considering libopenal1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libopenal-data:amd64 Re-Instated libopenal1:amd64 Broken telegram-desktop:amd64 Depends on libqrcodegencpp1:amd64 < none | 1.5.0-2build1 @un uH > (>= 1.2.1) Considering libqrcodegencpp1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libqrcodegencpp1:amd64 Broken telegram-desktop:amd64 Depends on librlottie0-1:amd64 < none | 0~git20200305.a717479+dfsg-1 @un uH > (>= 0~git20200305.a717479+dfsg) Considering librlottie0-1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated librlottie0-1:amd64 Broken telegram-desktop:amd64 Depends on libxxhash0:amd64 < none | 0.7.3-1 @un uH > (>= 0.6.5) Considering libxxhash0:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libxxhash0:amd64 Broken telegram-desktop:amd64 Depends on qtbase-abi-5-12-8:amd64 < none @un H > Considering libqt5core5a:amd64 3417 as a solution to telegram-desktop:amd64 9999 Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: telegram-desktop : Depends: qtbase-abi-5-12-8 E: Unable to correct problems, you have held broken packages.
Arch Linux has four symbolic links in / : bin -> usr/bin lib -> usr/lib lib64 -> usr/lib sbin -> usr/bin You should be able to recreate them (using a Live-USB or an emergency shell) or by calling the linker (with root privileges and in / as working directory) directly: /usr/lib/ld-linux-x86-64.so.2 /usr/bin/ln -s usr/lib lib64 This should restore basic functionality in your running system. Then restoring the other symbolic links should be easy. If you don't have root privileges you can reboot into a recovery shell and fix the problems there. Why does /usr/bin/ls and other commands fail? Without the /lib64 symbolic link dynamically linked programs will not find the dynamic linker/loader because the path is hardcoded to /lib64/ld-linux-x86-64.so.2 (c.f. ldd /usr/bin/ln ).
{ "source": [ "https://unix.stackexchange.com/questions/640346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408689/" ] }
640,457
Yesterday, before going to sleep, I started a long process which I thought would be finished before I got up, so I used ./command && sudo poweroff My system is configured not to ask for a password for sudo poweroff, so it should shut down when that command finishes. However, it is still running and I want to use that system for other tasks now. Having that command running in the background is not an issue, but having my system possibly shutting down any second is. Is there a way to prevent zsh from executing the poweroff command while making sure that the first one runs until it is done? Would editing the /etc/sudoers file so that the system asks for my password still help in this case?
As you clarified in comments it's still running in foreground on an interactive shell, you should just be able to press Ctrl+Z . That will suspend the ./command job. Unless ./command actually intercepts the SIGTSTP signal and chooses to exit(0) in that case (unlikely), the exit status will be non-0 (128+SIGTSTP, generally 148), so sudo poweroff will not be run. Then, you can resume ./command in foreground or background with fg or bg . You can test with: sleep 10 && echo poweroff And see that poweroff is not output when you press Ctrl+Z and resume later with fg / bg . Or with sleep 10 || echo "failed: $?" And see failed: 148 as soon as you press Ctrl+Z . Note that this is valid for zsh and assuming you started it with ./command && sudo poweroff . It may not be valid for other shells, and would not be if you started it some other way such as (./command && sudo poweroff) in a subshell or { ./command && sudo poweroff; } as part of a compound command (which zsh , contrary to most other shells transforms to a subshell so it can be resumed as a whole when suspended).
{ "source": [ "https://unix.stackexchange.com/questions/640457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200668/" ] }
642,966
Why does file xxx.src lead to cannot open `xxx.src' (No such file or directory) but has an exit status of 0 (success)? $ file xxx.src ; echo $? xxx.src: cannot open `xxx.src' (No such file or directory) 0 Note: to compare with ls : $ ls xxx.src ; echo $? ls: cannot access 'xxx.src': No such file or directory 2
This behavior is documented on Linux, and required by the POSIX standard. From the file manual on an Ubuntu system: EXIT STATUS file will exit with 0 if the operation was successful or >0 if an error was encoun‐ tered. The following errors cause diagnostic messages, but don't affect the pro‐ gram exit code (as POSIX requires), unless -E is specified: • A file cannot be found • There is no permission to read a file • The file type cannot be determined With -E (as noted above): $ file -E saonteuh; echo $? saonteuh: ERROR: cannot stat `saonteuh' (No such file or directory) 1 The non-standard -E option on Linux is documented as On filesystem errors (file not found etc), instead of handling the error as regular output as POSIX mandates and keep going, issue an error message and exit. The POSIX specification for the file utility says (my emphasis): If the file named by the file operand does not exist, cannot be read, or the type of the file named by the file operand cannot be determined, this shall not be considered an error that affects the exit status .
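If the goal is to use this from a script, the difference matters for error handling. A hedged sketch, assuming a GNU/Linux file(1) recent enough to support -E:
f=xxx.src
if file -E "$f" >/dev/null 2>&1; then
    echo "file(1) could examine $f"
else
    echo "could not stat/read $f" >&2
fi
For portable scripts it is safer to test for existence separately, e.g. [ -e "$f" ], and treat file's output purely as informational.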
{ "source": [ "https://unix.stackexchange.com/questions/642966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334715/" ] }
643,520
Along the lines of /dev/null (path to an empty source/sink file), is there a path that will never point to a valid file on at least Linux? This is mostly for testing purposes of some scripts I'm writing, and I don't want to just delete or move a file that doesn't belong to the script if it exists.
As an alternative, I would suggest that your script create a temporary directory, and then look for a file name in there. That way, you are 100% certain that the file doesn't exist, and you have full control and can easily clean up after yourself. Something like: dir=$(mktemp -d) if [ -e "$dir"/somefile ]; then echo "Something is seriously wrong here, '$dir/somefile' exists!" fi rmdir "$dir" You can write the equivalent code in any language, the vast majority (all?) higher level languages will have some dedicated tool to handle creating and deleting temporary directories. This seems like a far safer and cleaner approach than trying to guess a file name that should not exist.
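The same idea with automatic cleanup, so the temporary directory disappears even if the script exits early; this is a sketch in plain POSIX sh with an illustrative file name:
dir=$(mktemp -d) || exit 1
trap 'rmdir "$dir"' EXIT

probe=$dir/somefile   # guaranteed not to exist unless you create it yourself
if [ -e "$probe" ]; then
    echo "Something is seriously wrong here, '$probe' exists!"
fi
Since nothing is ever created inside $dir, rmdir is enough for the cleanup.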
{ "source": [ "https://unix.stackexchange.com/questions/643520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/304684/" ] }
643,777
I am trying to send messages from kafka-console-producer.sh , which is #!/bin/bash if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then export KAFKA_HEAP_OPTS="-Xmx512M" fi exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@" I am then pasting messages into it via a PuTTY terminal. On the receiving side I see messages truncated to approximately 4096 bytes. I don't see this limit set anywhere in Kafka. Could this limit come from bash, the terminal, or PuTTY?
4095 is the limit of the tty line discipline internal editor length on Linux. From the termios(3) man page: The maximum line length is 4096 chars (including the terminating newline character); lines longer than 4096 chars are truncated. After 4095 characters, input processing (e.g., ISIG and ECHO* processing) continues, but any input data after 4095 characters up to (but not including) any terminating newline is discarded. This ensures that the terminal can always receive more input until at least one line can be read. See also the corresponding code in the Linux kernel . For instance, if you enter: $ wc -c Enter Enter in the shell's own line editor (readline in the case of bash) submits the line to the shell. As the command line is complete, the shell is ready to execute it, so it leaves its own line editor, puts the terminal device back in canonical (aka cooked ) mode, which enables that crude line editor (actually implemented in tty driver in the kernel). Then, if you paste a 5000 byte line, press Ctrl + D to submit that line, and once again to tell wc you're done, you'll see 4095 as output. (Note that that limit does not apply to bash 's own line editor, you'll see you can paste a lot more data at the prompt of the bash shell). So if your receiving application reads lines of input from its stdin and its stdin is a terminal device and that application doesn't implement its own line editor (like bash does) and doesn't change the input mode, you won't be able to enter lines longer than 4096 bytes (including the terminating newline character). You could however disable the line editor of the terminal device (with stty -icanon ) before you start that receiving application so it reads input directly as you enter it. But then you won't be able to use Backspace / Ctrl + W for instance to edit input nor Ctrl + D to end the input. If you enter: $ saved=$(stty -g); stty -icanon icrnl; head -n1 | wc -c; stty "$saved" Enter paste your 5000 byte long line and press Enter , you'll see 5001.
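For the original Kafka use case, the simplest workaround is usually to avoid the terminal's line editor altogether by feeding the producer from a file or pipe instead of pasting into PuTTY. The exact producer options depend on your Kafka version (older ones use --broker-list, newer ones --bootstrap-server), so treat these as placeholders:
$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt
Because stdin is then a file rather than a tty, the 4096-byte canonical-mode limit never applies and lines of any length go through intact.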
{ "source": [ "https://unix.stackexchange.com/questions/643777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
644,442
On 3 machines I get: $ speedtest-cli Retrieving speedtest.net configuration... Traceback (most recent call last): File "/usr/bin/speedtest-cli", line 11, in <module> load_entry_point('speedtest-cli==2.1.2', 'console_scripts', 'speedtest-cli')() File "/usr/lib/python3/dist-packages/speedtest.py", line 1986, in main shell() File "/usr/lib/python3/dist-packages/speedtest.py", line 1872, in shell speedtest = Speedtest( File "/usr/lib/python3/dist-packages/speedtest.py", line 1091, in __init__ self.get_config() File "/usr/lib/python3/dist-packages/speedtest.py", line 1173, in get_config ignore_servers = list( ValueError: invalid literal for int() with base 10: '' I have tested one of these machines on two different internet connections with the same result. Why is it not working?
From this speedtest-cli Pull Request , I gather the speedtest site have changed something in the response their API gives out. Looking at the first commit in the PR, you just need to modify a single line in speedtest.py. If you're in Ubuntu or similar, and you have the file in the location shown in your output, you can fix it with: ## Backup original code sudo gzip -k9 /usr/lib/python3/dist-packages/speedtest.py ## Make the line substitution sed -i "s/^ map(int, server_config\['ignoreids'\].split(','))$/ map(int, (server_config['ignoreids'].split(',') if len(server_config['ignoreids']) else []) )/" /usr/lib/python3/dist-packages/speedtest.py EDIT: the final patch is at https://github.com/sivel/speedtest-cli/commit/cadc68 , and published in v2.1.3 . It's too complex for a simple one-line sed command, but you could still apply it yourself manually. Or you could try downloading that version of the speedtest.py file yourself: sudo gzip -k9 /usr/lib/python3/dist-packages/speedtest.py sudo wget https://raw.githubusercontent.com/sivel/speedtest-cli/v2.1.3/speedtest.py \ -O /usr/lib/python3/dist-packages/speedtest.py (Again, you should double-check the location of the speedtest.py file. The above location seems to be common for Ubuntu, but not across all versions of Unix/Linux.)
{ "source": [ "https://unix.stackexchange.com/questions/644442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
645,008
I ran this command: grep -i 'bro*' shows.csv and got this as output 1845307,2 Broke Girls,2011,138,6.7,89093 1702042,An Idiot Abroad,2010,21,8.3,29759 903747,Breaking Bad,2008,62,9.5,1402577 2249364,Broadchurch,2013,24,8.4,89378 1733785,Bron/Broen,2011,38,8.6,56357 2467372,Brooklyn Nine-Nine,2013,145,8.4,209571 7569592,Chilling Adventures of Sabrina,2018,36,7.6,69041 7221388,Cobra Kai,2018,31,8.7,72993 1355642,Fullmetal Alchemist: Brotherhood,2009,69,9.1,111111 118360,Johnny Bravo,1997,67,7.2,32185 455275,Prison Break,2005,91,8.3,465246 115341,Sabrina the Teenage Witch,1996,163,6.6,33458 1312171,The Umbrella Academy,2019,20,8,140800 3339966,Unbreakable Kimmy Schmidt,2015,51,7.6,61891 Where is bro in breaking bad? In fact, o doesn't even appear in "Breaking bad". I tried it once more, and got the same result. It is not accounting for the last character. Is there something wrong in the way I am writing it? You can download the file shows.csv from https://cdn.cs50.net/2021/x/seminars/linux/shows.csv
In your code o* means "zero or more occurrences of o ". It seems you confused regular expressions with glob syntax (where o* means "one o and zero or more whatever characters"). In Breaking Bad there is exactly zero o characters after Br , so it matches bro* (case-insensitively). grep -i bro shows.csv will do what (I think) you want.
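A quick way to see the difference on a few of the matching titles (sample lines taken from the output in the question):
$ printf 'Breaking Bad\nBroadchurch\nBrooklyn Nine-Nine\n' | grep -ic 'bro*'
3
$ printf 'Breaking Bad\nBroadchurch\nBrooklyn Nine-Nine\n' | grep -ic 'bro'
2
With 'bro*' the o is optional, so "Breaking Bad" matches on its "Br"; with plain 'bro' it no longer does.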
{ "source": [ "https://unix.stackexchange.com/questions/645008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/466667/" ] }
645,027
I was trying to install npm: └─$ sudo apt-get install npm I got an error message: https://paste.ubuntu.com/p/ZvGd7Kt96f/ It's very long, which is why I didn't paste it here. Then I tried sudo apt --fix-broken install and got these messages. I am using a Debian-based Linux distro. I also tried the steps from this website: https://www.how2shout.com/linux/how-to-install-npm-and-nodejs-14-x-on-kali-linux/ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash - sudo apt-get update sudo apt-get install nodejs I got the following error: Unpacking nodejs (14.16.1-deb-1nodesource1) over (12.21.0~dfsg-1) ... dpkg: error processing archive /var/cache/apt/archives/nodejs_14.16.1-deb-1nodesource1_amd64.deb (--unpack): trying to overwrite '/usr/share/doc/nodejs/api/cli.json.gz', which is also in package nodejs-doc 12.21.0~dfsg-1 dpkg-deb: error: paste subprocess was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/nodejs_14.16.1-deb-1nodesource1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
In your code o* means "zero or more occurrences of o ". It seems you confused regular expressions with glob syntax (where o* means "one o and zero or more whatever characters"). In Breaking Bad there is exactly zero o characters after Br , so it matches bro* (case-insensitively). grep -i bro shows.csv will do what (I think) you want.
{ "source": [ "https://unix.stackexchange.com/questions/645027", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/464778/" ] }
646,590
I'm running Arch Linux, and use ext4 filesystems. When I run ls in a directory that is actually small now, but used to be huge - it hangs for a while. But the next time I run it, it's almost instantaneous. I tried doing: strace ls but I honestly don't know how to debug the output. I can post it if necessary, though it's more than a 100 lines long. And, no, I'm not using any aliases. $ type ls ls is hashed (/usr/bin/ls) $ df . Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda9 209460908 60427980 138323220 31% /home
A directory that used to be huge may still have a lot of blocks allocated for directory entries (= names and inode numbers of files and sub-directories in that directory), although almost all of them are now marked as deleted. When a new directory is created, only a minimum number of spaces are allocated for directory entries. As more and more files are added, new blocks are allocated to hold directory entries as needed. But when files are deleted, the ext4 filesystem does not consolidate the directory entries and release the now-unnecessary directory metadata blocks, as the assumption is that they might be needed again soon enough. You might have to unmount the filesystem and run a e2fsck -C0 -f -D /dev/sda9 on it to optimize the directories, to get the extra directory metadata blocks deallocated and the existing directory entries consolidated to a smaller space. Since it's your /home filesystem, you might be able to do it by making sure all regular user accounts are logged out, then logging in locally as root (typically on the text console). If umount /home in that situation reports that the filesystem is busy, you can use fuser -m /dev/sda9 to identify the processes blocking you from unmounting /home . If they are remnants of old user sessions, you can probably just kill them; but if they belong to services, you might want to stop those services in a controlled manner. The other classic way to do this sort of major maintenance to /home would be to boot the system into single-user/emergency mode. On distributions using systemd , the boot option systemd.unit=emergency.target should do it. And as others have mentioned, there is an even simpler solution, if preserving the timestamps of the directory is not important , and the problem directory is not the root directory of the filesystem it's in: create a new directory alongside the "bloated" one, move all files to the new directory, remove the old directory, and rename the new directory to have the same name as the old one did. For example, if /directory/A is the one with the problem: mkdir /directory/B mv /directory/A/* /directory/B/ # regular files and sub-directories mv /directory/A/.??* /directory/B/ # hidden files/dirs too rmdir /directory/A mv /directory/B /directory/A Of course, if the directory is being used by any services, it would be a good idea to stop those services first.
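A quick way to check whether a directory is carrying this kind of leftover metadata is to look at the size reported for the directory inode itself; a small ext4 directory normally shows a single 4096-byte block, while a formerly huge one keeps its old, much larger size even after the files are gone:
$ ls -ld /path/to/slow/directory
$ stat -c '%s bytes' /path/to/slow/directory
(The path here is a placeholder.) If the reported size is in the megabytes for a directory that now holds only a handful of entries, the e2fsck -D or move-and-recreate approaches above are worth doing.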
{ "source": [ "https://unix.stackexchange.com/questions/646590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414347/" ] }
647,551
I understand that "everything is a file" is not entirely true, but as far as I know, every process gets a directory in /proc with lots of files. Read/write operations are often major performance bottlenecks, and having to read/write files all the time can significantly slow down processing. Does having to keep a bunch of files in /proc slow things down? If not, how is having to do a lot of I/O operations not a huge design flaw in Linux?
Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code.
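One way to convince yourself of this is that procfs files do not even have a stored size; the content is generated at the moment a process reads it:
$ stat -c '%s' /proc/self/status   # reported size is 0
0
$ wc -c < /proc/self/status        # yet a read returns freshly generated data
So the cost is paid only by whoever actually opens and reads the file, not by the system keeping it around.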
{ "source": [ "https://unix.stackexchange.com/questions/647551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370354/" ] }
647,554
This should be simple but I am missing something, need some help. My requirement is to read the log file via tail to get latest logs, grep Download Config & Copying all files of and write it in MyOwnLogFile.log but I want this to stop as soon as .myworkisdone file appears in /usr/local/FOLDER/ One thing is sure that .myworkisdone will be generated at the last when all logs are done… but the script just continues to read the log file and never comes out of it, even if the file is created. while [[ ! -e /usr/local/FOLDER/.myworkisdone ]]; do sudo tail -f -n0 /var/log/server22.log | while read line; do echo "$line" | grep -e 'Downloading Config’ -e ‘Copying all files of' ; done >> /var/tmp/MyOwnLogFile.log done I also tried until instead of while to check the file but still the script cant break the spell of reading the log file. Thank you in advance.
Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code
{ "source": [ "https://unix.stackexchange.com/questions/647554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205940/" ] }
647,567
I've got a folder on a non reflink -capable file system (ext4) which I know contains many files with identical blocks in them. I'd like to move/copy that directory to an XFS file system whilst simultaneously deduplicating them. (I.e. if a block of a copied file is already present in a different file, I'd like to not actually copy it, but to make a second block ref point to that in the new file.) One option would of course be first copying over all files to the XFS filesystem, running duperemove on them there, and thus removing the duplicates after the fact. Small problem: this might get time-intense, as the target filesystem isn't as quick on random accesses. Therefore, I'd prefer if the process that copies over the files already takes care of telling the kernel that, hey, that block is a duplicate of that other block that's already there. Is such a thing possible?
Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code
{ "source": [ "https://unix.stackexchange.com/questions/647567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106650/" ] }
647,907
Looks like I cannot run any normal linux binaries if their name ends with .exe , any idea why? $ cp /bin/pwd pwd $ ./pwd /home/premek This is ok. But... $ cp /bin/pwd pwd.exe $ ./pwd.exe bash: ./pwd.exe: No such file or directory $ ls -la pwd.exe -rwxr-xr-x 1 premek premek 39616 May 3 20:27 pwd.exe $ file pwd.exe pwd.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=2447335f77d6d8c4245636475439df52a09d8f05, stripped $ ls -la /lib64/ld-linux-x86-64.so.2 lrwxrwxrwx 1 root root 32 May 1 2019 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.so $ ls -la /lib/x86_64-linux-gnu/ld-2.28.so -rwxr-xr-x 1 root root 165632 May 1 2019 /lib/x86_64-linux-gnu/ld-2.28.so $ file /lib/x86_64-linux-gnu/ld-2.28.so /lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped
I spent one day on this and of course 1 second after posting this question I remembered something like this existed to register .exe files for wine: $ sudo cat /proc/sys/fs/binfmt_misc/wine enabled interpreter /usr/bin/wine flags: extension .exe and /usr/bin/wine did not exist. I got rid of it using: $ sudo update-binfmts --remove wine /usr/bin/wine update-binfmts: warning: no executable /usr/bin/wine found, but continuing anyway as you request and it works now
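For anyone hitting the same thing, the binfmt_misc registrations can also be inspected and managed directly through procfs, without update-binfmts; a sketch (entry names will differ per system):
$ ls /proc/sys/fs/binfmt_misc/                       # list registered formats
$ cat /proc/sys/fs/binfmt_misc/wine                  # show one entry, as above
$ echo 0 | sudo tee /proc/sys/fs/binfmt_misc/wine    # temporarily disable the entry
$ echo -1 | sudo tee /proc/sys/fs/binfmt_misc/wine   # remove the entry entirely
The update-binfmts route used above is still the cleaner fix on Debian-style systems, since it also records the change for the package system.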
{ "source": [ "https://unix.stackexchange.com/questions/647907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20860/" ] }
648,308
I feel like this should be straightforward, but I've never seen anyone ask this, as far as I can tell. The situation is pretty straightforward. Whenever I become a user, i.e. su user , it always starts in the /root directory instead of its home directory. Let me show you. [root@st-test2 ~]# grep "postgres" /etc/passwd postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql/:/bin/bash [root@st-test2 ~]# su postgres bash-4.2$ pwd /root [root@st-test2 ~]# ls -lhart /var/lib |grep postgres drwx------. 4 postgres postgres 86 May 5 16:07 pgsql So, you can see that the postgres user's home directory exists and that it's set in /etc/passwd... but for some reason, the user starts in the root directory. This happens with every user that I have created and I have no idea why. I can't say that I've ever seen this happen before either.
If you only give a username as argument, su changes user without changing much else : For backward compatibility, su defaults to not change the current directory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). So su postgres stays in the same directory. However since HOME is set to the new user’s home directory, cd will take you to the right place. To log in and start from the user’s default directory, you need to ask su to start a login shell set up appropriately: su -l postgres or its common synonym, su - postgres
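A quick way to see the difference, run from root's shell as in the question (the commands only print the working directory, so they are safe to try):
[root@st-test2 ~]# su postgres -c pwd        (keeps the caller's directory, prints /root here)
[root@st-test2 ~]# su -l postgres -c pwd     (login shell, prints /var/lib/pgsql)
The -c command is handed to the target user's shell in both cases; only the login-shell setup (starting directory, environment, profile scripts) differs.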
{ "source": [ "https://unix.stackexchange.com/questions/648308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/470260/" ] }
649,013
I have been experimenting with hex numbers in AWK ( gawk ), but sometimes when I print them using e.g. printf , they are printed with some LSBs masked out, like in the following example: awk 'BEGIN { x=0xffffffffbb6002e0; printf("%x\n", x); }' ffffffffbb600000 Why do I experience this behaviour and how can I correct it? I'm using gawk on Debian Buster 10.
Numbers in AWK are floating-point numbers by default, and your value exceeds the precision available. 0xffffffffbb6002e0 ends up represented as 0 10000111110 1111111111111111111111111111111101110110110000000000 in IEEE-754 binary64 ( double-precision ) format, which represents the integer value 0xffffffffbb600000 . Note the change in the low 12 bits, rounded to zero. The smallest positive integer to get any rounding error when converted to double is 2 53 + 1. The larger the number, the larger the gap between values a double can represent. (Steps of 2, then 4, then 8, etc; that's why the low hex digits of your number round to zero.) With GAWK, if it’s built with MPFR and MP (which is the case in Debian), you can force arbitrary precision instead with the -M option: $ awk -M 'BEGIN { x=0xffffffffbb6002e0; printf("%x\n", x); }' ffffffffbb6002e0 For calculations, this will default to the same 53 bits of precision as available with IEEE-754 doubles, but the PREC variable can be used to control that. See the manual linked above for extensive details. There is a difference in handling for large integers and floating-point values requiring more than the default precision, which can result in surprising behaviour; large integers are parsed correctly with -M and its default settings (only subsequent calculations are affected by PREC ), whereas floating-point values are stored with the precision defined at the time they are parsed, which means PREC needs to be set appropriately beforehand: # Default settings, integer value too large to be exactly represented by a binary64 $ awk 'BEGIN { v=1234567890123456789; printf "%.20f\n", v }' 1234567890123456768.00000000000000000000 # Forced arbitrary precision, same integer value stored exactly without rounding $ awk -M 'BEGIN { v=1234567890123456789; printf "%.20f\n", v }' 1234567890123456789.00000000000000000000 # Default settings, floating-point value requiring too much precision $ awk 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }' 123456789.01234567165374755859 # Forced arbitrary precision, floating-point parsing doesn’t change $ awk -M 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }' 123456789.01234567165374755859 # Forced arbitrary precision, PREC set in the BEGIN block, no difference $ awk -M 'BEGIN { PREC=94; v=123456789.0123456789; printf "%.20f\n", v }' 123456789.01234567165374755859 # Forced arbitrary precision, PREC set initially $ awk -M -vPREC=94 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }' 123456789.01234567890000000000 When reading input values, AWK only recognises decimal values as numbers; to handle non-decimal values (octal or hexadecimal), fields should be processed using GAWK’s strtonum function .
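As a concrete illustration of that last point about strtonum, reading the hex string as an input field shows the same precision issue, and -M should again avoid it (assuming gawk was built with MPFR/GMP):
$ echo 0xffffffffbb6002e0 | gawk '{ printf "%x\n", strtonum($1) }'
ffffffffbb600000
$ echo 0xffffffffbb6002e0 | gawk -M '{ printf "%x\n", strtonum($1) }'
ffffffffbb6002e0
Without strtonum, the field would simply be treated as a string and evaluate to 0 in a numeric context (unless gawk's --non-decimal-data option is in effect).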
{ "source": [ "https://unix.stackexchange.com/questions/649013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128739/" ] }
649,408
Never thought this would happen to me, but there you go. ¯\_(ツ)_/¯ I ran a build script from a repository inside the wrong directory without looking at the source first. Here's the script Scripts/BuildLocalWheelLinux.sh : cd ../Dependencies/cpython mkdir debug cd debug ../configure --with-pydebug --enable-shared make cd ../../.. cd .. mkdir -p cmake-build-local cd cmake-build-local rm -rf * cmake .. -DMVDIST_ONLY=True -DMVPY_VERSION=0 -DMVDPG_VERSION=local_build make -j cd .. cd Distribution python3 BuildPythonWheel.py ../cmake-build-local/[redacted]/core.so 0 python3 -m ensurepip python3 -m pip install --upgrade pip [more pip install stuff] python3 -m setup bdist_wheel --plat-name manylinux1_x86_64 --dist-dir ../dist cd .. cd Scripts The dangerous part seems to be mkdir -p cmake-build-local cd cmake-build-local rm -rf * But thinking about it, it actually seems like it couldn't possibly go wrong. The way you're supposed to run this script is cd Scripts; ./BuildLocalWheelLinux.sh . When I ran it the first time, it showed an error on the very last line (as I learned afterwards). I was in a hurry, so I though "maybe the docs are outdated, I'll try running from the project root instead. So I ran ./Scripts/BuildLocalWheelLinux.sh . Suddenly, vscodes theme and zoom level changed, my zsh terminal config was reset, terminal fonts were set to default, and I Ctrl+C'd once I realized what was happening. There are some files remaining, but there's no obvious pattern to them: $ ls -la total 216 drwx------ 27 felix felix 4096 May 12 18:08 . drwxr-xr-x 3 root root 4096 Apr 15 16:39 .. -rw------- 1 felix felix 12752 Apr 19 11:07 .bash_history -rw-r--r-- 1 felix felix 3980 Apr 15 13:40 .bashrc drwxrwxrwx 7 felix felix 4096 May 12 18:25 .cache drwx------ 8 felix felix 4096 May 12 18:26 .config drwx------ 3 root root 4096 Apr 13 21:40 .dbus drwx------ 2 felix felix 4096 Apr 30 12:18 .docker drwxr-xr-x 8 felix felix 4096 Apr 15 13:40 .dotfiles -rw------- 1 felix felix 8980 Apr 13 18:10 examples.desktop -rw-r--r-- 1 felix felix 196 Apr 19 15:19 .gitconfig -rw-r--r-- 1 felix felix 55 Apr 16 13:56 .gitconfig.old -rw-r--r-- 1 felix felix 1040 Apr 15 13:40 .gitmodules drwx------ 3 felix felix 4096 May 6 10:10 .gnupg -rw-r--r-- 1 felix felix 1848 May 5 14:24 heartbeat.tcl -rw------- 1 felix felix 1610 Apr 13 20:36 .ICEauthority drwxr-xr-x 5 felix felix 4096 Apr 21 16:39 .ipython drwxr-xr-x 2 felix felix 4096 May 4 09:35 .jupyter -rw------- 1 felix felix 161 Apr 27 14:23 .lesshst drwx------ 3 felix felix 4096 May 12 18:08 .local -rw-r--r-- 1 felix felix 140 Apr 29 17:54 minicom.log drwx------ 5 felix felix 4096 Apr 13 18:25 .mozilla drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Music drwxr-xr-x 6 felix felix 4096 May 12 17:16 Nextcloud -rw-r--r-- 1 felix felix 52 Apr 16 11:43 .nix-channels -rw------- 1 felix felix 1681 Apr 20 10:33 nohup.out drwx------ 3 felix felix 4096 Apr 15 11:16 .pki -rw------- 1 felix felix 946 Apr 16 11:43 .profile drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Public drwxr-xr-x 2 felix felix 4096 May 12 18:08 .pylint.d -rw------- 1 felix felix 1984 May 12 18:06 .pythonhist -rw-r--r-- 1 felix felix 2443 Apr 19 13:40 README.md drwxr-xr-x 13 felix felix 4096 May 12 18:08 repos drwxr-xr-x 6 felix felix 4096 Apr 19 11:08 snap drwx------ 3 felix felix 4096 May 5 15:33 .ssh drwxr-xr-x 5 felix felix 4096 Apr 26 17:39 .stm32cubeide drwxr-xr-x 5 felix felix 4096 May 5 15:52 .stm32cubemx drwxr-xr-x 2 felix felix 4096 Apr 23 11:44 .stmcube drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Templates drwxr-xr-x 3 felix 
felix 4096 Apr 19 11:57 test drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Videos -rw------- 1 felix felix 14313 May 12 10:45 .viminfo -rw-r--r-- 1 felix felix 816 Apr 15 13:40 .vimrc drwxr-xr-x 3 felix felix 4096 Apr 16 12:08 .vscode -rw-r--r-- 1 felix felix 2321 Apr 19 18:47 weird_bug.txt -rw-r--r-- 1 felix felix 162 Apr 15 13:40 .xprofile .config is gone, as well as some standard XDG dirs like Pictures and Desktop, but .bashrc is still there. .nix-channels is still there, but .nix-defexpr was nuked. So, this leads me to two questions: What went wrong? I'd like to fix this build script and make a PR to prevent this from happening in the future. What order were the files deleted in? Obviously not in alphabetical order, but * expands in alphabetical order, so something else is going on here, it seems.
Ouch. You aren't the first victim . What went wrong? Starting in your home directory, e.g. /home/felix , or even in /home/felix/src or /home/felix/Downloads/src . cd ../Dependencies/cpython Failed because there is no ../Dependencies . mkdir debug cd debug You're now in the subdirectory debug of the directory you started from. ../configure --with-pydebug --enable-shared make Does nothing because there's no ../configure or make . cd ../../.. cd .. If you started out no more than three directory levels deep, with cd debug reaching a fourth level, the current directory is now the root directory. If you started out four directory levels deep the current directory is now /home . mkdir -p cmake-build-local This fails since you don't have permission to write in / or /home . cd cmake-build-local This fails since there is no directory cmake-build-local . We now get to… What order were the files deleted in? rm -rf * This tries to recursively delete every file in the current directory, which is / or /home . The home directories are enumerated in alphabetical order, but the files underneath are enumerated in the arbitrary order of directory traversal. It's the same order as ls --sort=none (unless rm decides to use a different order for some reason). Note that this order is generally not preserved in backups, and can change when a file is created or removed in the directory. How to fix the script First, almost any shell script should have set -e near the top. set -e causes the script to abort if a command fails. (A command fails if its exit status is nonzero.) set -e is not a panacea, because there are circumstances where it doesn't go into effect. But it's the bare minimum you can expect and it would have done the right thing here. (Also the script should start with a shebang line to indicate which shell to use, e.g. #!/bin/sh or #!/bin/bash . But that wouldn't help with this problem.) rm -rf * , or variants like rm -rf $foo.* (what if $foo turns out to be empty?), are fragile. Here, instead of mkdir -p cmake-build-local cd cmake-build-local rm -rf * it would be more robust to just remove and re-create the directory. (This would not preserve the permissions on the directory, but here this is not a concern.) rm -rf cmake-build-local mkdir cmake-build-local cd cmake-build-local Another way is more robust against deleting the wrong files, but more fragile against missing files to delete: delete only files that are known to have been built, by running make clean which has rm commands for known build targets and for known extensions (e.g. rm *.o is ok).
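To make the failure mode above impossible regardless of where the script is started from, the top of the build script could be hardened along these lines; this is only a sketch reusing the directory names from the question, not a drop-in patch:
#!/bin/bash
set -euo pipefail        # abort on the first failing command
cd "$(dirname "$0")"     # always operate relative to the script's own location

builddir=$PWD/../cmake-build-local
rm -rf -- "$builddir"    # delete one explicit path, never a bare *
mkdir -p -- "$builddir"
cd "$builddir"
With set -e, the early cd ../Dependencies/cpython failure would have stopped the script long before any rm ran.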
{ "source": [ "https://unix.stackexchange.com/questions/649408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67771/" ] }
649,759
While there is usually no need for more than the 64k available ports, I am interested in the PoC that having a port number on 64 bits would mitigate the regular attacks on the access ports (ssh, vpn...). Having a 64b port makes it almost impossible to randomly attack a service, targeting either DoS or a login. Like ssh -p 141592653589793238 my.site.com Is it possible to configure Linux to use 64 bit ports? (of course both client and server should be configured) and practically Would that disturb the Internet equipment? ('transport' is OSI layer 4, above IP, thus the routing itself should not be impacted, but some devices go up to the upper layers for analysis / malware detection... ; a 64 bit ports Linux box would act as home router)
Is it possible to configure Linux to use 64 bit ports? You cannot change a parameter to use 64bit ports in TCP/UDP. You could create similar protocols, but you would only be able to communicate with your modified hosts and it would not be TCP / UDP, but a new set of protocols, let's say TCP64 / UDP64. Here are just some of the things you'd have to add for these protocols to work, just to start, before even considering memory impact and a ton of other issues: a definition of the the TCP64 ( a modification of the current TCP segment ) a new family AF_INET capable of holding the extended ports, along with the kernel code to handle it (if you're thinking about copy/paste, note that you have to change, at the very least, a list the structure definitions, type definitions and calls to htons() or ntohs() for example code to all userspace programs meant to use the new stacks, including those at the edges of the network, such as firewalls if you plan to filter the traffic. Since it will be a different set of protocols, with their own IP numbers , they would not disturb the routing nodes, though they could be dropped by them along the route, because the IP protocol number would not be known. As for mitigation: software like fail2ban and custom service ports (in the 16-bit range) are usual techniques, though not the only ones .
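As a sketch of the rate-limiting idea (one of the usual techniques alongside fail2ban), a classic iptables recipe drops a source address that opens too many new SSH connections in a short window; the port, counts and timings here are only illustrative:
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --set
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP
This gives much of the practical benefit the question is after (making blind scanning and brute forcing expensive) without touching the 16-bit port field at all.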
{ "source": [ "https://unix.stackexchange.com/questions/649759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3527/" ] }
649,776
I am a new learner in this field. I want to subtract few seconds from date_time. I used this code to extract data and then subtract the seconds. BUT I can not save this output into a variable. Could you please help me to save this? for stnm in H33 do cd $stnm for file in $input_dir/$stnm/2018/350.hyd echo $file do dat=`saclst kzdate f $file | awk '{print substr($2,1,10)}'` time=`saclst kztime f $file| awk '{print substr($2,1,11)}'` echo $dat $time "############################" # new_time= date -d "$(date -Iseconds -d "$dat $time" ) - 2 minutes - 0.05 seconds" new_time= date -d " $dat $time Z - 2 minutes - 0.05 seconds" +%Y/%m/%d_%H:%M:%S | awk '{print substr($1,1,24)}' echo $dat $time $new_time "####################" done done Output /NAS2/Abbas/TS14_OBS/H33/2018/350.hyd 2018/12/16 00:00:00.00 ############################ 2018/12/15_23:57:59 2018/12/16 00:00:00.00 ####################
Is it possible to configure Linux to use 64 bit ports? You cannot change a parameter to use 64bit ports in TCP/UDP. You could create similar protocols, but you would only be able to communicate with your modified hosts and it would not be TCP / UDP, but a new set of protocols, let's say TCP64 / UDP64. Here are just some of the things you'd have to add for these protocols to work, just to start, before even considering memory impact and a ton of other issues: a definition of the the TCP64 ( a modification of the current TCP segment ) a new family AF_INET capable of holding the extended ports, along with the kernel code to handle it (if you're thinking about copy/paste, note that you have to change, at the very least, a list the structure definitions, type definitions and calls to htons() or ntohs() for example code to all userspace programs meant to use the new stacks, including those at the edges of the network, such as firewalls if you plan to filter the traffic. Since it will be a different set of protocols, with their own IP numbers , they would not disturb the routing nodes, though they could be dropped by them along the route, because the IP protocol number would not be known. As for mitigation: software like fail2ban and custom service ports (in the 16-bit range) are usual techniques, though not the only ones .
{ "source": [ "https://unix.stackexchange.com/questions/649776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/471808/" ] }
649,996
I want to back up my SSD using the Linux dd command, but I'm not sure how reliable that method will be. I think I read somewhere that dd does not check for or report errors, so obviously if true then it will be a deal breaker. This will be the command: sudo dd status=progress bs=512K if=/dev/nvme0n1 of=/media/d/ssd.img So please explain how reliable the dd command can be for said use case. And, are there any more reliable and/or easier alternative?
TLDR: Use ddrescue It supports resume/continue capabilities, has automatic logs, and tons of other options. More at the ddrescue home page . Example syntax: ddrescue /dev/sde yourimagename.image sde.log IF you want to (given your comment mentioning restoring) restore the image from the command above onto another drive of the same exact size: ddrescue -f yourimagehere.image /dev/sde restore.logfile Furthermore, it is faster than dd is -- at least it does look like it is when comparing speed of ddrescue and dd + pv .
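Since the question is about imaging /dev/nvme0n1, a sketch using those paths: the log (map) file is what makes the run resumable, so keep it next to the image; if the command is interrupted, re-running the exact same line continues where it left off, and -r adds extra retry passes over bad areas:
ddrescue -r3 /dev/nvme0n1 /media/d/ssd.img /media/d/ssd.map
For a healthy SSD the retries should never trigger, but unlike plain dd you get per-sector error accounting either way.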
{ "source": [ "https://unix.stackexchange.com/questions/649996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/472052/" ] }
652,076
For example, I am looking for files and directories in some directory: ubuntu@example:/etc/letsencrypt$ sudo find . -name example.com* ./archive/example.com ./renewal/example.com.conf ./live/example.com ubuntu@example:/etc/letsencrypt$ How can I mark that ./archive/example.com and ./live/example.com are directories in the output above?
Print the file type along with the name with -printf "%y %p\n" : $ sudo find . -name 'example.com*' -printf "%y %p\n" d ./archive/example.com f ./renewal/example.com.conf d ./live/example.com The use of -printf assumes GNU find (the most common find implementation on Linux systems).
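If you are not on GNU find (so no -printf), a portable alternative is to hand the results to ls, which shows the type in the mode column:
$ sudo find . -name 'example.com*' -exec ls -ld {} +
Lines starting with d are the directories; -exec ... {} + batches the names so ls runs only a few times.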
{ "source": [ "https://unix.stackexchange.com/questions/652076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169259/" ] }
652,316
The UNIX and Linux System Administration Handbook says: man maintains a cache of formatted pages in /var/cache/man or /usr/share/man if the appropriate directories are writable; however, this is a security risk. Most systems preformat the man pages once at installation time (see catman) or not at all. What is the "security risk(s)" here? There is the obvious security risk that someone can alter the man pages to trick a (novice) user into running something undesirable, as pointed out by Ulrich Schwartz in their answer , but I am looking for other ways this could be exploited. Thanks!
It's not safe to let users manipulate the content of man pages (or any data really) that will also be used by other users, because there is a danger of cache poisoning . As the old BOFH joke goes: To learn everything about your system, from the root up, use the "read manual" command with the "read faster" switch like this: rm -rf / (To be clear, do not run this command.) But if I control the man page cache, you might type man rm to see a cached fake man page that tells you rm is indeed "rm - read manual" and not "rm - remove files or directories". Or even output terminal escape sequences that inject code into your shell .
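To see how a given system handles this, check who owns the cache and whether ordinary users can write to it; many distributions give it to a dedicated unprivileged man user or simply keep it root-owned:
$ ls -ld /var/cache/man
$ getent passwd man
If that directory were world-writable, any local user could plant a poisoned preformatted page there for others to read, which is exactly the risk the handbook is alluding to.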
{ "source": [ "https://unix.stackexchange.com/questions/652316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/459222/" ] }
652,331
I'm trying to find the first non-zero byte (starting from an optional offset) on a block device using dd and print its offset, but I am stuck. I didn't mention dd in the title as I figured there might be a more appropriate tool than dd to do this, but I figured dd should be a good start. If you know of a more appropriate tool and/or more efficient way to reach my goal, that's fine too. In the meantime I'll show you how far I've come with dd in bash, so far. #!/bin/bash # infile is just a temporary test file for now, which will be replaced with /dev/sdb, for instance infile=test.txt offset=0 while true; do byte=`dd status='none' bs=1 count=1 if="$infile" skip=$offset` ret=$? # the following doesn't appear to work # ret is always 0, even when the end of file/device is reached # how do I correctly determine if dd has reached the end of file/device? if [ $ret -gt 0 ]; then echo 'error, or end of file reached' break fi # I don't know how to correctly determine if the byte is non-zero # how do I determine if the read byte is non-zero? if [ $byte ???? ]; then echo "non-zero byte found at $offset" break fi ((++offset)) done As you can see, I'm stuck with two issues that I don't know how to solve: a. How do I make the while loop break when dd has reached the end of the file/device? dd gives an exit code of 0 , where I expected a non-zero exit code instead. b. How do I evaluate whether the byte that dd read and returns on stdout is non-zero? I think I've read somewhere that special care should be taken in bash with \0 bytes as well, but I'm not even sure this pertains to this situation. Can you give me some hints on how to proceed, or perhaps suggest and alternative way to achieve my goal?
You can do this using cmp , comparing to /dev/zero : cmp /path/to/block-device /dev/zero cmp will give you the offset of the first non-zero byte. If you want to skip bytes, you can use GNU cmp ’s -i option, or if you’re not using GNU cmp , feed it the appropriate data using dd : cmp -i 100 /path/to/block-device /dev/zero dd if=/path/to/block-device bs=1 skip=100 | cmp - /dev/zero This will work with any file, not just block devices.
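If the goal is just a yes/no answer for scripting (for example, "is this whole device still blank?"), GNU cmp's -n option can bound the comparison to the device's size so that hitting end-of-device is not reported as a difference; a sketch, with the device path as a placeholder:
size=$(blockdev --getsize64 /dev/sdX)
if cmp -s -n "$size" /dev/sdX /dev/zero; then
    echo "all $size bytes are zero"
else
    echo "found a non-zero byte"
fi
Both -n and blockdev --getsize64 are GNU/Linux specifics; for the offset itself, the plain cmp invocation above already prints it.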
{ "source": [ "https://unix.stackexchange.com/questions/652331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/474454/" ] }
652,375
I would like to convert the ZFS output of "10.9T" to actual bytes, using something in a line or two, rather than run generic math functions, and if conditions for T , G , M , etc.. Is there an efficient way to do this? For now, I have something like this: MINFREE="50G" POOLSIZE=`zpool list $POOLNAME -o size` #Size 10.9T POOLSIZE=$(echo "$POOLSIZE" | grep -e [[:digit:))] #10.9T POOLFREE=500M #as an example let p=POOLSIZE x=POOLFREE y=MINFREE z=POOLSIZE; CALC=$(expr "echo $((x / y))") if [ "${CALC}" < 1 ]; then # we are less than our min free space echo alert fi This produces an error: can't run the expression on 10.9T , or 50G because they arent numbers. Is there a known bash function for this? I also like the convenience of specifying it like i did there in the MINFREE var at the top. So an easy way to convert would be nice. This is what I was hoping to avoid (making case for each letter), the script looks clean though. Edit : Thanks for all the comments! Here is the code I have now. , relevant parts atleast; POOLNAME=san INFORMAT=auto #tip; specify in Gi, Ti, etc.. (optional) MINFREE=500Gi OUTFORMAT=iec NOW=`date`; LOGPATH=/var/log/zfs/zcheck.log BOLD=$(tput bold) BRED=${txtbld}$(tput setaf 1) BGREEN=${txtbld}$(tput setaf 2) BYELLOW=${txtbld}$(tput setaf 3) TXTRESET=$(tput sgr0); # ZFS Freespace check #poolsize, how large is it POOLSIZE=$(zpool list $POOLNAME -o size -p) POOLSIZE=$(echo "$POOLSIZE" | grep -e [[:digit:]]) POOLSIZE=$(numfmt --from=iec $POOLSIZE) #echo poolsize $POOLSIZE #poolfree, how much free space left POOLFREE=`zpool list $POOLNAME -o free` #POOLFREE=$(echo "$POOLFREE" | grep -e [[:digit:]]*.[[:digit:]].) POOLFREE=$(echo "$POOLFREE" | grep -e [[:digit:]]) POOLFREE=$(numfmt --from=$INFORMAT $POOLFREE) #echo poolfree $POOLFREE #grep -e "vault..[[:digit:]]*.[[:digit:]].") #minfree, how low can we go, before alerting MINFREE=$(numfmt --from=iec-i $MINFREE) #echo minfree $MINFREE #FORMATTED DATA USED FOR DISPLAYING THINGS #echo formattiing sizes: F_POOLSIZE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $POOLSIZE) F_POOLFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $POOLFREE) F_MINFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $MINFREE) F_MINFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $MINFREE) #echo printf "${BGREEN}$F_POOLSIZE - current pool size" printf "\n$F_MINFREE - mininium freespace allowed/as specified" # OPERATE/CALCULATE SPACE TEST #echo ... calculating specs, please wait.. #let f=$POOLFREE m=$MINFREE x=m/f; declare -i x=$POOLFREE/$MINFREE; # will be 0 if has reached low threshold, if poolfree/minfree #echo $x #IF_CALC=$(numfmt --to=iec-i $CALC) if ! [ "${x}" == 1 ]; then #printf "\n${BRED}ALERT! POOL FREESPACE is low! ($F_POOLFREE)" printf "\n${BRED}$F_POOLFREE ${BYELLOW}- current freespace! ${BRED}(ALERT!}${BYELLOW} Is below your preset threshold!"; echo else printf "\nPOOLFREE - ${BGREEN}$F_POOLFREE${TXTRESET}- current freespace"; #sleep 3 fi
You can use numfmt (in Debian and derivatives it is part of coreutils so it should be there already): numfmt - Convert numbers from/to human-readable strings $ numfmt --from=iec-i 50.1Gi 53794465383 it can also read the value from stdin $ echo "50.1Gi" | numfmt --from=iec-i 53794465383 Be careful, it takes into account the locale for the decimal separator.
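As an aside (a small sketch, not part of the original answer, reusing the pool name san from the question): zpool list can emit exact byte counts itself with its scripted/parseable flags, avoiding the round-trip through human-readable suffixes entirely — POOLFREE=$(zpool list -H -p -o free san) # -H: no header, -p: exact bytes — and numfmt --to=iec "$POOLFREE" turns the byte count back into a short form for display.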
{ "source": [ "https://unix.stackexchange.com/questions/652375", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111873/" ] }
654,484
I've been tracking down an issue I've been facing in a buildkite script, and here's what I've got: Firstly, I enter the shell of a docker image: docker run --rm -it --entrypoint bash node:12.21.0 This docker image doesn't have any text editors, so I create my shell scripts by concating to a file: touch a.sh chmod +x a.sh printf '#!/bin/sh\necho ${1:0:1}' >> a.sh touch b.sh chmod +x b.sh printf '#!/bin/bash\necho ${1:0:1}' >> b.sh I now run my scripts: ./a.sh hello >./a.sh: 2: ./a.sh: Bad substitution ./b.sh hello >h Can someone tell me in simple terms what the issue is here? This AskUbuntu question says that bash and sh are different shells, and that in many systems sh will symlink to bash. What's going on specifically on this docker image? How would I know?
/bin/sh is only expected to be a POSIX shell, and the POSIX shell doesn’t know about substrings in parameter expansions . POSIX “defines a standard operating system interface and environment, including a command interpreter (or “shell”)”, and is the standard largely followed in traditional Unix-style environments. See What exactly is POSIX? for a more extensive description. In POSIX-inspired environments, /bin/sh is supposed to provide a POSIX-style shell, and in a script using /bin/sh as its shebang, you can only rely on POSIX features (even though most actual implementations of /bin/sh provide more). It’s perfectly OK to rely on a more advanced shell, but the shebang needs to be adjusted accordingly. Since your script relies on a bash feature, the correct shebang is #!/bin/bash (or perhaps #!/usr/bin/env bash ), regardless of the environment it ends up running in. It may happen to work in some cases with #!/bin/sh , but that’s just a happy accident.
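If the script has to keep #!/bin/sh, the same "first character of $1" can be written with POSIX-only constructs — a minimal sketch: first=${1%"${1#?}"}; echo "$first" — here ${1#?} strips the first character and the outer ${1%...} removes that remainder, leaving just the first character; printf '%s\n' "$1" | cut -c1 is an equivalent external-tool spelling. Both run under dash or any other POSIX sh.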
{ "source": [ "https://unix.stackexchange.com/questions/654484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209769/" ] }
654,689
While setting up docker on Ubuntu 20.04 I did sudo usermod -G docker $USER . As noted in related questions here, I missed the -a flag and replaced all secondary groups. However, I didn't realize this until after I rebooted my machine. This is a single-user work station. I could fix this with root , but I don't have the password. How do I restore the proper groups without root access? The only one that causes a problem now is sudo , but I'm sure others will crop up. Can I do anything without reinstalling Ubuntu from scratch?
You still have one group left: docker . That means you still have control over the docker daemon. This daemon can run a container with the host's root filesystem mounted and then the container can edit files ( vi is available in busybox ) or simpler: can chroot to the host's filesystem. Download a minimal busybox image: myuser@myhost:~$ docker pull busybox Using default tag: latest latest: Pulling from library/busybox b71f96345d44: Pull complete Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580d Status: Downloaded newer image for busybox:latest docker.io/library/busybox:latest Run a container with this image interactively and in privileged mode (in case AppArmor would block the chroot command later without it): $ docker run -it --mount type=bind,source=/,target=/host --privileged busybox Continue with interactive commands from the container. You can simply chroot to the mount point to "enter" the root filesystem and get all Ubuntu commands: / # chroot /host Use adduser which is a simpler wrapper around useradd : root@74fc1b7903e5:/# adduser myuser sudo Adding user `myuser' to group `sudo' ... Adding user myuser to group sudo Done. root@74fc1b7903e5:/# exit exit / # exit Either logout and relog, or change group manually: myuser@myhost$ sg sudo And root access is restored: myuser@myhost$ sudo -i [sudo] password for myuser: root@myhost# Conclusion: be very prudent when allowing remote access to Docker (through port 2375/TCP). It means root access by default.
{ "source": [ "https://unix.stackexchange.com/questions/654689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21888/" ] }
654,700
I am trying to rename the column headers of a large file, and I want to know the most efficient way to do so. Files are in the range of 10M to 50M lines, with ~100 characters per line in 10 columns. A similar question was asked to remove the first line, and the best answer involved "tail". Efficient in-place header removing for large files using sed? My guess is: bash-4.2$ seq -w 100000000 1 125000000 > bigfile.txt bash-4.2$ tail -n +2 bigfile.txt > bigfile.tail && sed '1 s/^/This is my first line\n/' bigfile.tail > bigfile.new && mv -f bigfile.new bigfile.txt; Is there a faster way?
{ "source": [ "https://unix.stackexchange.com/questions/654700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/249967/" ] }
656,005
I just read the following sentence: Case Sensitivity is a function of the Linux filesystem NOT the Linux operating system. What I deduced from this sentence is if I'm on a Linux machine but I am working with a device formatted using the Windows File System, then case sensitivity will NOT be a thing. I tried the following to verify this: $ ~/Documents: mkdir Test temp $ ~/Documents: touch Test/a.txt temp/b.txt $ ~/Documents: ls te* b.txt And it listed only the files within the temp directory, which was expected because I am inside a Linux Filesystem. When I navigated to a Windows File System (NOTE: I am using WSL2), I still get the same results, but I was expecting it to list files inside both directories ignoring case sensitivity. $ /mnt/d: mkdir Test temp $ /mnt/d: touch Test/a.txt temp/b.txt $ /mnt/d: ls te* b.txt I tried it with both bash and zsh. I feel that it's somehow related to bash (or zsh), because I also read that bash enforces case sensitivity even when working with case insensitive filesystems. This test works on Powershell, so it means that the filesystem is indeed case insensitive.
Here, you're running: ls te* Using a feature of your shell called globbing or filename generation (pathname expansion in POSIX), not of the Linux system nor of any filesystem used on Linux. te* is expanded by the shell to the list of files that match that pattern. To do that, the shell requests the list of entries in the current directory from the system (typically using the readdir() function of the C library, which underneath will use a system-specific system call ( getdents() on Linux)), and then match each name against the pattern. And unless you've configured your shell to do that matching case insensitively (see nocaseglob options in zsh or bash) or use glob operators to toggle case insensitivity (like the (#i) extended glob operator in zsh ), te* will only expand to the list of files whose name as reported by readdir() starts with te , even if pathname resolution on the system or file system underneath is case insensitive or can be made to be like NTFS.
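To make the option the answer mentions concrete (just an illustration): in bash, shopt -s nocaseglob; ls te* makes the shell match the pattern case-insensitively, so in the example above it expands to Test temp and ls lists a.txt and b.txt from both directories; shopt -u nocaseglob restores the default. The filesystem's own case handling never enters into it — the shell does the matching.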
{ "source": [ "https://unix.stackexchange.com/questions/656005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/479172/" ] }
657,060
Given a nested directory, I would like to list all tif files with extension .tif , .TIF , .tiff , .TIFF . Currently I'm using find . -type f -iname *.TIF -print ; find . -type f -iname *.TIFF -print; Using -iname allows me to be case-insensitive but it goes through the directory twice to get files with .tif and .tiff . Is there a better way to do this? Perhaps with brace expansion? Why not *.tif* ? In some cases, my directories might have auxiliary files with extension .tif.aux.xml alongside the tiffs. I'd like to ignore those.
find supports an “or” disjunction, -o : find . -type f \( -iname \*.tif -o -iname \*.tiff \) This will list all files whose name matches *.tif or *.tiff , ignoring case. -print is the default action so it doesn’t need to be specified here. * , ( , and ) are escaped so that they lose their significance for the shell.
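Two side notes (illustrative, not from the original answer): because -iname tests the complete file name, these patterns already skip the foo.tif.aux.xml sidecars mentioned in the question — those names end in .xml, not .tif. And with GNU find the pair of tests can be collapsed into one case-insensitive regular expression: find . -type f -iregex '.*\.\(tif\|tiff\)' — -iregex matches against the whole path, so no explicit anchors are needed.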
{ "source": [ "https://unix.stackexchange.com/questions/657060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/480275/" ] }
657,072
Hi, I have an md file containing the below string and I want to write a regular expression for this. Conditions The id will be anything. The type will be youtube, vimeo, etc. ID and type are mandatory fields {% include video.html id="T3q6QcCQZQg" type="youtube" %} So I want to check that the string is in a proper format in the bash script, otherwise it will throw an error. The current code looks like this. The below code is working for me without an ID, but I need to add a regex for id as well: IFS=$'\n' read -r -d '' -a VIDEOS < <( grep "video.html" "$ROOT_DIR$file" && printf '\0' ) #output => {% include video.html id="T3q6QcCQZQg" type="youtube" %} for str in "${VIDEOS[@]}" do if [[ "$str" =~ ({%)[[:space:]](include)[[:space:]](video.html)[[:space:]](type="youtube"|type="vimeo")[[:space:]](%})$ ]]; then flag="dummy" echo "Invalid format:: $second" fi done Please help
{ "source": [ "https://unix.stackexchange.com/questions/657072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/480295/" ] }
659,585
I have a firewall ( csf ) that lets you to separately allow incoming and outgoing TCP ports. My question is, why would anyone want to have any outgoing ports closed? I understand that by default you might want to have all ports closed for incoming connections . From there, if you are running an HTTP server you might want to open port 80. If you want to run an FTP server (in active mode) you might want to open port 21. But if it's set up for passive FTP mode, a bunch of ports will be necessary to receive data connections from FTP clients... and so on for additional services. But that's all. The rest of ports not concerned with a particular service that the server provides, and especially if you are mostly a client computer, must be closed. But what about outgoing connections ? Is there any security gain in having destination ports closed for outbound connections? I ask this because at first I thought that a very similar policy of closing all ports as for incoming connections could apply. But then I realised that when acting as a client in passive FTP mode, for instance, random high ports try to connect to the FTP server. Therefore by blocking these high ports in the client side you are effectively disabling passive FTP in that client, which is annoying. I'm tempted to just allow everything outgoing, but I'm concerned that this might be a security threat. Is this the case? Is it a bad idea, or has it noticeable drawbacks just opening all (or many) ports only for outgoing connections to facilitate services such as passive FTP?
There can be many reasons why someone might want to have outgoing ports closed. Here are some that I have applied to various servers at various times The machine is in a corporate environment where only outbound web traffic is permitted, and that via a proxy. All other ports are closed because they are not needed. The machine is running a webserver with executable code (think PHP, Ruby, Python, Perl, etc.) As part of a mitigation against possible code flaws, only expected outbound services are allowed. A service or application running on the machine attempts to connect to a remote resource but the server administrator does not want it to do so. Good security practice: what is not explicitly permitted should be denied.
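For the last point, a default-deny egress policy looks roughly like this with plain iptables (a generic sketch only — csf has its own configuration syntax, so treat the rules below as an illustration, not csf commands): iptables -A OUTPUT -o lo -j ACCEPT; iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT; iptables -A OUTPUT -p udp --dport 53 -j ACCEPT; iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT; iptables -P OUTPUT DROP — everything not explicitly permitted (DNS, HTTP, HTTPS, replies to established connections) is then refused.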
{ "source": [ "https://unix.stackexchange.com/questions/659585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27990/" ] }
659,744
Due to its high CPU usage I want to limit the Chromium web browser with cpulimit, so I use the terminal to run: cpulimit -l 30 -- chromium --incognito but it does not limit CPU usage as expected (i.e. to a maximum of 30%). It still uses 100%. Why? What am I doing wrong?
Yeah, chromium doesn't care much when you stop one of its threads. cpulimit is, in 2021, really not the kind of tool that you want to use, especially not with interactive software: it "throttles" processes (or unsuccessfully tries to, in your case) by stopping and resuming them via signals. Um. That's a terrible hack, and it leads to unreliability you really don't want in a modern browser that might well be processing audio and video, or trying to scroll smoothly. Good news is that you really don't need it. Linux has cgroups , and these can be used to limit the resource consumption of any process, or group of processes (if you, for example, don't want chromium, skype and zoom together to consume more than 50% of your overall CPU capacity). They can also be used to limit other things, like storage access speed and network transfer. In the case of your browser, that'd boil down to (top of head, not tested): # you might need to create the right mountpoints first sudo mkdir /sys/fs/cgroup/cpu sudo mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu # Create a group that controls `cpu` allotment, called `/browser` sudo cgcreate -g cpu:/browser # Create a group that controls `cpu` allotment, called `/important` sudo cgcreate -g cpu:/important # allocate few shares to your `browser` group, and many shares of the CPU time to the `important` group. sudo cgset -r cpu.shares=128 browser sudo cgset -r cpu.shares=1024 important cgexec -g cpu:browser chromium --incognito cgexec -g cpu:important make -j10 #or whatever The trick is usually giving your interactive session (e.g. gnome-session ) a high share, and other things a lower one. Note that this guarantees shares; it doesn't take away , unless necessary. I.e. if your CPU can't do anything else in that time (because nothing else is running, or because everything with more shares is blocked, for example by waiting for hard drives), it will still be allocated to the browser process. But that's usually what you want: It has no downsides (it doesn't make the rest of the system run any slower, the browser is just quicker "done" with what it has to do, which on the upside probably even saves energy on the average: when multiple CPU cores are just done, then things can be clocked down/suspended automatically).
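On current systemd-based distributions there is also a shorter route that avoids creating cgroups by hand — a sketch, assuming cgroup v2 with the usual user-slice delegation: systemd-run --user --scope -p CPUQuota=30% chromium --incognito — this puts the browser in its own transient scope capped at 30% of one CPU's worth of time, and unlike cpulimit it throttles via the scheduler instead of stop/continue signals.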
{ "source": [ "https://unix.stackexchange.com/questions/659744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
663,936
I guess this may be a naive question but I can't get my head around so I felt like asking... I was searching for some solution to a problem, when I found this very interesting post about why is using [while|for] loops in bash considered bad practice. There is a very good explanation in the post (see the chosen answer) but I can't find anything that solves the issues that are discussed. I searched extensively: I googled (or duckduckgo-ed) how to read a file in bash and all the results I am getting point towards a solution that, according to the above-mentioned post, is absolutely non-bash style and something that should be avoided. In particular, we have this: while read line; do echo $line | cut -c3 done and this: for line in `cat file`; do foo=`echo $line | awk '{print $2}'` echo whatever $foo done that are indicated as very bad examples of shell scripting. At this point I am wondering, and this is the actual question: if the posted while loops should be avoided because they are bad practice and whatever...what am I supposed to do, instead? EDIT: I see that I am already having comments/questions addressing the exact issue with the while loop, so I feel like to widen the question a bit. Basically, what I am understanding is that I need to dig deeper into bash commands, and that is the real thing that I should do. But, when one searches around, it looks like people are, in the general case, using and teaching bash in an improper way (as per my google-ing).
The point of the post you linked to is to explain that using bash to parse text files is a bad idea in general . It isn't specifically about using loops and there is nothing intrinsically wrong with shell loops in other contexts. Nobody is saying that a shell script with while is somehow bad. That other post is saying that you shouldn't try to parse text files using the shell and you should instead use other tools. To clarify, when I say "using the shell" I mean using the shell's internal tools to open the file, extract the data and parse it. For example something like this: while read number; do if [ $number -gt 10 ]; then echo "The number '$number' is greater than 10" else echo "The number '$number' is less than or equal to 10" done < numbers.txt Please read the answers at Why is using a shell loop to process text considered bad practice? for details on why this sort of thing is a bad idea. Here, I will only clarify that that post isn't arguing against shell loops in general, but against using shell loops (or the shell) for parsing files. The reason you don't find suggestions for better ways of doing it with bash is that there are no good ways of doing this with bash or any other shell. No matter what you do, parsing text using a shell will be slow, cumbersome, and error prone. Shells are primarily designed as a way of entering commands to be run by the computer. They can be used as scripting languages but, again, they are at their best when given commands to run and not when used instead of commands designed to handle text parsing. Shells are tools and just like any other tool, they should be used for the purpose they were designed for. The problem is that many people have learned a little bit of shell scripting, so they have a tool, a "hammer". Because all they know is a hammer, every problem they encounter looks like a nail to them and they try and use their hammer on this nail. Sadly, parsing text is not something that the shell was designed to handle, it isn't a "nail", so using a "hammer" is just not a good idea. So, the answer to "how should I read a file in bash" is very simply "you should not use bash and instead use a tool that is appropriate for the job".
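To make the contrast concrete with the two loops quoted in the question: the first one is simply cut -c3 file (print the third character of every line), and what the second one is apparently trying to do per line is awk '{print "whatever", $2}' file — one process reading the file once, instead of forking cut or awk for every single line.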
{ "source": [ "https://unix.stackexchange.com/questions/663936", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352134/" ] }
664,625
I am using this usb wifi device on Debian running on my DE10-Nano board . Looking at the product details, it seems like this uses the RT5370 chipset which is included in the RT2800USB driver. I have enabled this in the kernel as shown in the screenshot below: However, the wifi device doesn't work unless I install the firmware also with the following command: sudo apt install firmware-ralink My question is - what does the firmware have to do with the driver? Shouldn't the wifi device already have the necessary firmware? What exactly is going on here? I'm new to kernel drivers and devices so trying to understand the magic going on here. My understanding is that to use a device, I just need to make sure the relevant driver is either compiled into the kernel or available as a module that you can load in later. Here is the dmesg output when I run ifup wlan0 . The firmware file rt2870.bin is provided by the package firmware-ralink . [ 78.302351] ieee80211 phy0: rt2x00lib_request_firmware: Info - Loading firmware file 'rt2870.bin' [ 78.311413] ieee80211 phy0: rt2x00lib_request_firmware: Info - Firmware detected - version: 0.36 [ 80.175252] wlan0: authenticate with 30:23:03:41:73:67 [ 80.206023] wlan0: send auth to 30:23:03:41:73:67 (try 1/3) [ 80.220665] wlan0: authenticated [ 80.232966] wlan0: associate with 30:23:03:41:73:67 (try 1/3) [ 80.257518] wlan0: RX AssocResp from 30:23:03:41:73:67 (capab=0x411 status=0 aid=5) [ 80.270065] wlan0: associated [ 80.503705] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Many hardware device manufacturers do not embed firmware into their devices, they require firmware to be loaded into the device by the operating system's driver. Some other manufacturers embed an old version of the firmware but allow an updated version to be loaded by the driver - quite often the embedded version is ancient and/or buggy (and rarely, if ever, updated in the device itself because that might require changes to the manufacturing or testing process - this is generally a deliberate design decision. The rationale is that the embedded firmware version doesn't have to be good , it just has to resemble something that's minimally functional - updates can and should be loaded by the driver) The firmware files almost always have a license which is incompatible with the GPL (or even no explicit or discernible license, just an implied "right to use" by being distributed with the device itself and the Windows driver it comes with) and thus can not be distributed with the kernel itself, and has to be distributed as a separate package. To get the device working, you need both the driver and the firmware.
{ "source": [ "https://unix.stackexchange.com/questions/664625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/324101/" ] }
665,326
I have a problem with a remote server having a keyboard layout in the console different from my physical keyboard. I need to copy a @ letter to be able to paste in a browser forum. The server is in a VPN without external access, so a simple googling for 'at symbol' doesn't work. Is there some trick to have a @ printed in the console so I can copy and paste it? Is there a well-known file to simply do a cat and show a @ inside it? A README or similar.
With the bash shell: echo $'\x40' With a POSIX shell: printf '\100'
{ "source": [ "https://unix.stackexchange.com/questions/665326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167205/" ] }
666,766
I have heard many times that issuing rm -rf is dangerous since users can accidentally remove the entire system. But sometimes I want to remove a directory recursively without being asked every time. I am thinking about using yes | rm -r instead. What I am wondering is: is yes | rm -r safer than rm -rf ? Or are they essentially the same?
First, as others have already said, yes | rm -r is very similar but not identical to rm -rf . The difference is that the -f option tells rm to continue past various errors. This means that yes | rm -r will exit on the first error unlike rm -rf which continues on and keeps deleting everything it can. This means that yes | rm -r is slightly less dangerous than rm -rf , but not substantially so. So, what do you do to mitigate the risks of rm ? Here are a few habits I've developed that have made it much less likely to run into trouble with rm . This answer assumes you're not aliasing rm to rm -i , which is a bad practice in my opinion. Do not use an interactive root shell. This immediately makes it much more difficult to do the worst-case rm -rf / . Instead, always use sudo , which should be a visual clue to look very carefully at the command you're typing. If it's absolutely necessary to start a root shell, do what you need there and exit. If something is forcing you to be root most of the time, fix whatever it is. Be wary of absolute paths. If you find yourself typing a path starting with / , stop. It's safer to avoid absolute paths, and instead cd to the directory you intend to delete from, use ls and pwd to look around to make sure you're in the right place, and then go ahead. Pause before hitting return on the rm command. I've trained myself to always always always lift my fingers from the keyboard after typing any rm command (and a few other potentially dangerous commands), inspect what I've typed very carefully, and only then put my fingers back to the keyboard and hit return. Use echo rm ... to see what you're asking rm to do. I often do crucial rm commands as a two-step process. First, I type $ echo rm -rf ... this expands all shell globs (i.e., * patterns, etc.) and shows me the rm command that would have been executed. If this looks good, again after careful inspection, I type ^P (control-P) to get the previous input line back, delete echo , and inspect the command line again, then hit return without changing anything else. Maintain backups. If you're doing most of the above, the odds of having to restore the entire system are very low, but you can still accidentally delete your own files, and it's handy to be able to get them back from somewhere.
{ "source": [ "https://unix.stackexchange.com/questions/666766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
666,770
Background I'm attempting to configure automatic LUKS unlock on CentOS 8 Stream. I would like to place a keyfile on the unencrypted boot partitionand and use it to unlock the LUKS protected LVM PV (which contains the root filesystem). I understand that this is a strange thing to want to do and undermines much of the value of disk encryption - but please humor me. Here's an overview of the current layout: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 259:0 0 931.5G 0 disk ├─nvme0n1p1 259:1 0 256M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 930.3G 0 part └─luks-3d33d226-9640-4343-ba5a-b9812dda1465 253:0 0 930.3G 0 crypt └─cs-root 253:1 0 20G 0 lvm / $ sudo e2label /dev/nvme0n1p2 boot Today the /etc/crypttab contains the following for booting with a manually entered passphrase (UUIDs redacted for readability) which works just fine: luks-blah UUID=blah none discard In order to achieve automatic unlocking I have generated a keyfile /boot/keys/keyfile and added it as a key on the LUKS partition using luksAddKey . Attempt 1 In my first attempt I changed the crypttab line to this: luks-blah UUID=blah /keys/keyfile:LABEL=boot discard,keyfile-timeout=10s This does result in automatic unlocking and mounting of the root filesystem, but the boot process fails and dumps me into rescue mode as the system cannot mount /boot . The reason is that the boot partition has already been mounted (to a randomish location in order to obtain the keyfile: /run/systemd/cryptsetup/keydev-luks-blah ). Attempt 2 I tried changing crypttab to this: luks-blah UUID=blah /boot/keys/keyfile discard,keyfile-timeout=10s I thought maybe the boot scripts are smart enough to figure out how to access /boot/keys/keyfile without /boot being mounted yet. This didn't work however, and I just get the prompt to manually enter the passphrase. Question Is there a way to unlock the root filesystem using a keyfile stored on a partition that needs to be available for normal mounting?
{ "source": [ "https://unix.stackexchange.com/questions/666770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119672/" ] }
666,779
I have a weird issue: sometimes when my monitor is turned off, the fans are running loud, even when there shouldn't be much usage of the CPU on the system as far as I know. But as soon as I move my mouse and start top to try to diagnose this, the activity, whatever it is, stops; with the fans winding down. So I want a script/program/method that I could start at some point in time, leave the computer unattended while this program is recording CPU activity of processes, then when I resume operating the computer I should be able to read the program's report from which I would quickly know what processes are making the fans work hard. EDIT: one chromium process is the one making the fans run loud while the screen is off. No idea why, though.
{ "source": [ "https://unix.stackexchange.com/questions/666779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116512/" ] }
667,101
I am currently enrolled in a course that is teaching me UNIX fundamentals, such as common commands and such. After doing some digging on UNIX, I came across the rabbit hole of legal battles over who owns UNIX, and the UNIX wars. I have done some research, but the sources are sort of dated (circa 2003 - 2004) and have conflicting information as far as who owns it. Here are a couple of the sources I have found: https://www.zdnet.com/article/who-really-owns-unix/ - states that the Open Group owns it https://www.informit.com/articles/article.aspx?p=175171&seqNum=2 - states that the SCO owns it After reading these sources, it sounds like the Open Group is claiming to own the UNIX trademark, while the SCO claims to own the UNIX source code. Am I understanding that correctly?
TLDR As of today, and talking about the USA, the UNIX trademark is owned by The Open Group (you can see it on the USPTO website ) for "COMPUTER PROGRAMS, * NAMELY, TEST SUITES USED TO DETERMINE COMPLIANCE WITH CERTAIN SPECIFICATIONS AND STANDARDS *" (First use: Dec. 14, 1972, Use in Commerce: Dec. 14, 1972) A bit more Novell transferred the trademarks of Unix to The Open Group in 1993. See message from Chuck Karish on comp.std.unix news group . I quote a piece: Q4. Will Novell continue to control UNIX? A4. No. From today, the APIs which define UNIX will be controlled by X/Open and managed through the company's proven open industry consensus processes. Novell will continue to own one product (a single implementation of UNIX) which currently conforms to the specification. Novell is clearly free to evolve that product in any way that it chooses, but may only continue to call it UNIX if it maintains conformance to the X/Open specifications. SCO tried to buy UNIX from Novell (again). You may read docketing statement of The SCO Group, Inc. v. Novell, Inc case . I quote a piece: It is therefore ORDERED that SCO’s Renewed Motion for Judgment as a Matter of Law or, in the Alternative, for a New Trial (Docket No. 871) is DENIED. DATED June 10, 2010. BY THE COURT: TED STEWART United States District Judge Then SCO appealed: Party or Parties filing Notice of Appeal/Petition: The SCO Group, Inc. ______________________________________________________________________ I. TIMELINESS OF APPEAL OR PETITION FOR REVIEW A. APPEAL FROM DISTRICT COURT Date notice of appeal filed: July 7, 2010 On August 30, 2011, the Appeals Court affirmed the trial decision. You can read this . A quote: VII. IMPLIED COVENANT OF GOOD FAITH AND FAIR DEALING SCO argues the district court erred in entering judgment in Novell’s favor on its good faith and fair dealing claim (...). The district court’s conclusion on this point is consistent with the jury verdict on copyright ownership and is supported by evidence in the record. AFFIRMED. Entered by the Court: Terrence L. O’Brien. United States Circuit Judge So Unix is not owned by SCO. In fact, SCO holds some UNIX® certifications issued by The Open Group: UNIX 95 and UNIX 93 . Any system that wants to be called a UNIX® must be certified by The Open Group. A list of certified Unixes can be found on The Open Group official register of UNIX Certified Products page . Some related systems not holding a certification are usually referred to as *nixes or Unix-like systems. You can find out more on Wikipedia article about UNIX, section Branding and article about SCO Group, Inc. v. Novell, Inc. lawsuit .
{ "source": [ "https://unix.stackexchange.com/questions/667101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/489364/" ] }
668,262
When I run a Centos 7 Docker Image like this docker run -it centos:7 bash Running something which is uses Process Substitution is fine (as expected, as Bash supports Process Substitution since the beginning of time - Bash 1.4.x actually). For example: while IFS= read -r test; do echo $test; done < <(cat anaconda-post.log) But when I switch to /bin/sh the same code doesn't work anymore /bin/sh while IFS= read -r test; do echo $test; done < <(cat anaconda-post.log) sh: syntax error near unexpected token `<' Although /bin/sh seems to be Bash /bin/sh --version GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu) Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> But then why doesn't process substitution work anymore? Other non-POSIX features seems to work, though echo ${PATH//:/ } /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin
Yes, bash , when called as sh , runs in POSIX mode, disabling all its bash only features. From the manual - Invoked with name sh If Bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well $ ls -lrth /bin/sh lrwxrwxrwx. 1 root root 4 Aug 26 2018 /bin/sh -> bash $ /bin/bash -c 'set -o | grep posix' posix off $ /bin/sh -c 'set -o | grep posix' posix on with posix mode enabled, non-standard features like process substitution won't be enabled. See Bash POSIX Mode to see its complete behavior running in the mode. From release 5.1 of the shell, process substitutions are available in POSIX mode.
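In this particular script the process substitution is not needed at all, so the most portable fix is a plain redirection that works in every POSIX sh, bash included: while IFS= read -r test; do echo "$test"; done < anaconda-post.log — the < <(cat file) construct only buys you something when the input really is the output of another command.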
{ "source": [ "https://unix.stackexchange.com/questions/668262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2726/" ] }
669,449
I am executing the below command for 1000 files: ebook-convert <name-of-first-file>.epub <name-of-first-file>.mobi ebook-convert <name-of-second-file>.epub <name-of-second-file>.mobi Apparently, instead of manually doing this for 1000 files, one could write a bash script for the job. I was wondering if there is an easier way to do something like this in Linux though, a small command that would look something like ebook-convert *.epub *.mobi Can you use wildcards in a similar way, that works for a scenario like the above?
You can’t do it directly with wildcards, but a for loop can get you there: for epub in ./*.epub; do ebook-convert "${epub}" "${epub%.epub}.mobi"; done Zsh supports a more elegant form of this loop . Instead of a shell script, if your file names don’t contain whitespace characters, and more generally can be safely handled by Make and the shell, you can use GNU Make; put this in a Makefile : all: $(patsubst %.epub,%.mobi,$(wildcard *.epub)) %.mobi : %.epub ebook-convert ./$< ./$@ and then run make , which will ensure that all .epub files are converted to a .mobi file. You can run this repeatedly to update files as necessary — it will only build files which are missing or older than their source file. (Make sure that the ebook-convert line starts with a tab, not spaces.)
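If GNU parallel happens to be installed, the same batch can also be expressed in one line and run several conversions at once — a sketch, not a requirement: parallel ebook-convert {} {.}.mobi ::: *.epub — {} is the input file and {.} is the same name with its extension removed, so each foo.epub becomes foo.mobi.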
{ "source": [ "https://unix.stackexchange.com/questions/669449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/433400/" ] }
669,669
I need to run a script every 64 hours. I couldn't find the answer with cron . Is it possible with it, or should I use a loop in a shell script?
I suggest perhaps using a crontab "front end" like crontab.guru for figuring out crontab if you're a beginner. However, as in your case, the hour setting only allows for values of 0 to 23, so you can't use crontab here. Instead, I'd suggest using at . In your case, I'd probably use something like: at now + 64 hours and then enter your command or echo "<your command>" | at now + 64 hours at the beginning of your script, etc. Basically, you'll be scheduling running the command right when the command has been invoked the last time. Also, if you don't want a time delta, rather the exact time, I suggest doing a bit of time arithmetic, and then use an exact time with at to have the command run. I highly suggest reading the man page of at , as it is fairly comprehensive.
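On systemd machines a monotonic timer can also provide the 64-hour cadence without the script having to reschedule itself — a rough sketch with hypothetical unit names: create myjob.service with ExecStart=/path/to/your/script and a matching myjob.timer containing [Timer] OnBootSec=15min and OnUnitActiveSec=64h plus [Install] WantedBy=timers.target, then systemctl enable --now myjob.timer; the service is re-run 64 hours after each previous activation and shows up in systemctl list-timers.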
{ "source": [ "https://unix.stackexchange.com/questions/669669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239796/" ] }
669,947
A bash script is using a variable Q for some purpose (outside the scope of this question): Q=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ As this script is used in an environment where each byte counts, this is waste. But some workaround like Q=0$(seq -s "" 9)$(echo {A..Z}|tr -d " ") (for C locale) is even worse. Am I too blind to see the obvious trick to compactly generate such a simple sequence?
For any shell capable of brace expansion: Using printf : $ printf %s {0..9} {A..Z} 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ --> Q=$(printf %s {0..9} {A..Z}) Backticks instead of $() saves one byte. For Bash specifically, printf -v var to printf into a variable is nice but no shorter than backticks. printf -vQ %s {0..9} {A..Z} Q=`printf %s {0..9} {A..Z}`
{ "source": [ "https://unix.stackexchange.com/questions/669947", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216004/" ] }
669,956
What are some possible causes, that a command could not be found in Linux? Other than it is not in the PATH ? Some background info: When trying to execute pdflatex from vscode, I got some troubles, that vscode was not able to find pdflatex. Probably because the PATH is not set correctly. Since I was not able to fix the problem right away, I tried to work around this problem by executing a shell script, which then calls pdflatex: #!/bin/bash export PATH=/usr/bin pdflatex $@ or #!/bin/bash /usr/bin/pdflatex $@ In both cases, the script works as expected when executed over the normal terminal. But when executed in the vscode intern terminal it says pdflatex: command not found As far as I know, the only way that a command can not be found, is if it is not in a directory included by the PATH . Or when the absolute path is wrong. But this seems not to be the case here. So what other factors are used to determine, how a command is searched for? Additional Infos (as requestet) OS: POP OS 21.04 from vscode terminal: $ echo $PATH /app/bin:/usr/bin:/home/flo/.var/app/com.visualstudio.code from a native terminal: $ echo $PATH /opt/anaconda3/bin:/opt/anaconda3/condabin:/home/flo/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin Other Commands as ls , which are also in /usr/bin directory do work from the vscode internal terminal (as ls aswell /usr/bin/ls ). properties of pdflatex: $ ls -l /usr/bin/pdflatex lrwxrwxrwx 1 root root 6 Feb 17 2021 /usr/bin/pdflatex -> pdftex or $file /usr/bin/pdflatex /usr/bin/pdflatex: symbolic link to pdftex and pdftex (same behavior as pdflatex): $ ls -l /usr/bin/pdftex -rwxr-xr-x 1 root root 2115048 Mar 13 2021 /usr/bin/pdftex or $ file /usr/bin/pdftex /usr/bin/pdftex: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=88c89d7d883163b4544f9461668b73383e1ca04e, for GNU/Linux 3.2.0, stripped the following script gives also the same output: #!/bin/bash pdflatex $@ The original (copied, without any edits) script is as follow: #!/bin/bash #export PATH=/usr/bin #printenv PATH pdflatex $@ #/usr/bin/pdflatex $@ To test the other scripts, I changed the comments and deleted the irrelevant lines in the post here. /app/bin does not exist. ( /app does not exist) I tried to change the PATH in vscode (inside the LaTeX Workshop extensions) since this is most likely the cause for my problem in the first place. However, I could neither fix the problem nor confirm in any way, that my configs (for the LaTeX Workshop extension) had any effect at all. when adding the following lines to the script ( makeTex.sh is my wrapper script): declare -p LD_LIBRARY_PATH declare -p LD_PRELOAD The outputs are as follows: native Terminal: ./makeTex.sh: line 4: declare: LD_LIBRARY_PATH: not found ./makeTex.sh: line 5: declare: LD_PRELOAD: not found vscode Terminal: declare -x LD_LIBRARY_PATH="/app/lib" ./makeTex.sh: line 5: declare: LD_PRELOAD: not found The problem occured by using vscode 1.57.1 (installed via flatpak). Other versions of vscode (at least vscodium 1.60.1) do not show the same behavior.
{ "source": [ "https://unix.stackexchange.com/questions/669956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263650/" ] }
670,636
I have a USB Zigbee dongle, but I'm unable to connect to it. It briefly shows up in /dev/ttyUSB0 , but then quickly disappears. I see the following output in the console: $ dmesg --follow ... [ 738.365561] usb 1-10: new full-speed USB device number 8 using xhci_hcd [ 738.607730] usb 1-10: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.64 [ 738.607737] usb 1-10: New USB device strings: Mfr=0, Product=2, SerialNumber=0 [ 738.607739] usb 1-10: Product: USB Serial [ 738.619446] ch341 1-10:1.0: ch341-uart converter detected [ 738.633501] usb 1-10: ch341-uart converter now attached to ttyUSB0 [ 738.732348] audit: type=1130 audit(1632606446.974:2212): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty-device@sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ 738.768081] audit: type=1130 audit(1632606447.007:2213): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty@-sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ 738.776433] usb 1-10: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1 [ 738.783508] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0 [ 738.783521] ch341 1-10:1.0: device disconnected [ 739.955783] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input35 ...
The problem here is BRLTTY, a program that "provides access to the Linux/Unix console (when in text mode) for a blind person using a refreshable braille display". If you are not blind, you can disable BRLTTY in two different ways: Remove udev rules BRLTTY uses udev rules to get permissions to mess with the TTYs without being root. You can disable these rules by overriding the rules shipped by your distro with /dev/null : for f in /usr/lib/udev/rules.d/*brltty*.rules; do sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")" done sudo udevadm control --reload-rules Disable service The BRLTTY service is launched by the brltty.path service. This service can be completely prevented from ever starting by running by doing the following: $ sudo systemctl mask brltty.path Created symlink /etc/systemd/system/brltty.path → /dev/null.
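If no braille display is used on the machine at all, a blunter fix is simply to remove the package that ships those udev rules and services — e.g. sudo apt purge brltty on Debian/Ubuntu-based systems (package names may differ on other distributions) — then unplug and re-plug the dongle so /dev/ttyUSB0 is created again and stays.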
{ "source": [ "https://unix.stackexchange.com/questions/670636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70735/" ] }
671,747
I'd like to ban this range of Chinese IP addresses in nginx: '223.64.0.0 - 223.117.255.255' I know how to ban each of /16 range like: deny 223.64.0.0/16; But it will take many lines to include the whole 223.64 - 223.117 range. Is there a shorthand notation to do so in one line?
ipcalc ( ipcalc package on Debian) can help you deaggregate a range into a number of matching CIDR s: $ ipcalc -r 223.64.0.0 - 223.117.255.255 deaggregate 223.64.0.0 - 223.117.255.255 223.64.0.0/11 223.96.0.0/12 223.112.0.0/14 223.116.0.0/15 Same with that other ipcalc ( ipcalc-ng package and command name on Debian): $ ipcalc-ng -d '223.64.0.0 - 223.117.255.255' [Deaggregated networks] Network: 223.64.0.0/11 Network: 223.96.0.0/12 Network: 223.112.0.0/14 Network: 223.116.0.0/15 That one has more options to vary the output format: $ ipcalc-ng --no-decorate -d '223.64.0.0 - 223.117.255.255' 223.64.0.0/11 223.96.0.0/12 223.112.0.0/14 223.116.0.0/15 Including json which gives endless possibilities of reformatting if combined with tools like jq : $ ipcalc-ng -j -d '223.64.0.0 - 223.117.255.255' | jq -r '.DEAGGREGATEDNETWORK[]|"deny " + . + ";"' deny 223.64.0.0/11; deny 223.96.0.0/12; deny 223.112.0.0/14; deny 223.116.0.0/15; $ ipcalc-ng -j -d '223.64.0.0 - 223.117.255.255' | jq -r '"deny " + (.DEAGGREGATEDNETWORK|join(" ")) + ";"' deny 223.64.0.0/11 223.96.0.0/12 223.112.0.0/14 223.116.0.0/15;
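And to go straight from the first ipcalc variant to nginx configuration lines (a small sketch reusing the output format shown above): ipcalc -r 223.64.0.0 - 223.117.255.255 | awk 'NR>1{print "deny " $0 ";"}' prints one deny <CIDR>; line per deaggregated network, ready to paste into the server block.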
{ "source": [ "https://unix.stackexchange.com/questions/671747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/495497/" ] }
672,274
I see that this has the behavior: [root@divinity test]# echo 0 > file.txt [root@divinity test]# cat file.txt 0 [root@divinity test]# echo 0> file.txt [root@divinity test]# cat file.txt I also noticed that if I include "" then it works as expected: [root@divinity test]# echo 0""> file.txt [root@divinity test]# cat file.txt 0 I imagine this is all just part of IO redirection, but I do not quite understand what echo 0> is doing.
In echo 0 > file.txt , with the spaces, > file.txt causes the shell to redirect standard output so that it is written to file.txt (after truncating the file if it already exists). The rest of the command, echo 0 , is run with the redirections in place. When a redirection operator is prefixed with an unquoted number, with no separation, the redirection applies to the corresponding file descriptor instead of the default. 0 is the file descriptor for standard input , so 0> file.txt redirects standard input to file.txt . (1 is standard output, 2 is standard error.) The number is treated as part of the redirection operator, and no longer as part of the command’s arguments; all that’s left is echo . You can see this happening more obviously if you include more content in the echo arguments: $ echo number 0 > file.txt $ echo number 0> file.txt number number shows up in the second case because standard output isn’t redirected. This variant of the redirection operator is more commonly done with standard error, 2> file.txt . Redirecting standard input in this way will break anything which actually tries to read from its standard input; if you really want to redirect standard input, while allowing writes, you can do so with the <> operator: cat 0<> file.txt
{ "source": [ "https://unix.stackexchange.com/questions/672274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137794/" ] }
672,871
Suppose I have a file direction with the lines east north south west south-west and using a loop and echo in a shell script I want to generate this output: Direction: east Direction: north Direction: south Direction: west Last direction: south-west So in other words I want to do something different with the last line in the script.
bash can't detect the end of a file (without trying to read the next line and failing), but perl can with its eof function: $ perl -n -e 'print "Last " if eof; print "Direction: $_"' direction Direction: east Direction: north Direction: south Direction: west Last Direction: south-west note: unlike echo in bash, the print statement in perl doesn't print a newline unless you either 1. explicitly tell it to by including \n in the string you're printing, or 2. are using perl's -l command-line option, or 3. if the string already contains a newline....as is the case with $_ - which is why you often need to chomp() it to get rid of the newline. BTW, in perl, $_ is the current input line. Or the default iterator (often called "the current thingy" probably because "dollarunderscore" is a bit of a mouthful) in any loop that doesn't specify an actual variable name. Many perl functions and operators use $_ as their default/implicit argument if one isn't provided. See man perlvar and search for $_ . sed can too - the $ address matches the last line of a file: $ sed -e 's/^/Direction: /; $s/^/Last /' direction Direction: east Direction: north Direction: south Direction: west Last Direction: south-west The order of the sed rules is important. My first attempt did it the wrong way around (and printed "Direction: Last south-west"). This sed script always adds "Direction: " to the beginning of each line. On the last line ( $ ) it adds "Last " to the beginning of the line already modified by the previous statement.
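awk can do the same thing by holding one line of lookbehind — a compact alternative sketch: awk 'NR>1{print "Direction: " prev} {prev=$0} END{print "Last direction: " prev}' direction — every line is printed only once the next one has been read, so the final line is still in prev at END and gets the "Last direction:" prefix, matching the requested output exactly.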
{ "source": [ "https://unix.stackexchange.com/questions/672871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
673,641
Maybe I haven't had enough coffee yet today, but I can't remember or think of any reason why /proc/PID/cmdline should be world-readable - after all, /proc/PID/environ isn't. Making it readable only by the user (and maybe the group. and root, of course) would prevent casual exposure of passwords entered as command-line arguments. Sure, it would affect other users running ps and htop and the like - but that's a good thing, right? That would be the point of not making it world-readable.
I suspect the main, and perhaps only, reason is historical — /proc/.../cmdline was initially world-readable, so it remains that way for backwards compatibility. cmdline was added in 0.98.6, released on December 2, 1992, with mode 444; the changelog says - /proc filesystem extensions. Based on ideas (and some code) by Darren Senn, but mostly written by yours truly. More about that later. I don’t know when “later” was; as far as I can tell, Darren Senn’s ideas are lost in the mists of time. environ is an interesting counter-example to the backwards compatibility argument: it started out word-readable, but was made readable only by its owner in 1.1.85. I haven’t found the changelog for that so I don’t know what the reasoning was. The overall accessibility and visibility of /proc/${pid} (including /proc/${pid}/cmdline ) can be controlled using proc ’s hidepid mount option , which was added in version 3.3 of the kernel . The gid mount option can be used to give full access to a specific group, e.g. so that monitoring processes can still see everything without running as root.
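For reference, hidepid can be applied to an already-mounted /proc without a reboot (treat this as a sketch — the accepted spellings, and the gid= helper-group option, vary a little between kernel versions):
$ sudo mount -o remount,hidepid=2 /proc
or persistently with an /etc/fstab line such as proc /proc proc defaults,hidepid=2 0 0 . With hidepid=2 other users' /proc/PID directories (and therefore their cmdline files) become invisible to them; hidepid=1 leaves the directories visible but denies access to their contents.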
{ "source": [ "https://unix.stackexchange.com/questions/673641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7696/" ] }
676,608
I'm perplexed but still guess I misunderstood Bash somehow. /$ if [ -e /bin/grep ]; then echo yea; else echo nay ; fi yea /$ if [ ! -e /bin/grep ]; then echo yea; else echo nay ; fi nay /$ if [ -a /bin/grep ]; then echo yea; else echo nay ; fi yea /$ if [ ! -a /bin/grep ]; then echo yea; else echo nay ; fi yea Why negation ! reverses effect of -e test but not -a test? Man bash says: test : 3 arguments The following conditions are applied in the order listed. If the second argument is one of the binary conditional operators listed above under CONDITIONAL EXPRESSIONS, the result of the expression is the result of the binary test using the first and third arguments as operands. The -a and -o operators are considered binary operators when there are three arguments. If the first argument is ! , the value is the negation of the two-argument test using the second and third arguments. Bash Conditional Expressions Conditional expressions are used by the [[ compound command and the test and [ builtin commands -a file True if file exists. -b file True if file exists and is a block special file. -c file True if file exists and is a character special file. -d file True if file exists and is a directory. -e file True if file exists.
-a is both a unary (for a ccessible, added for compatibility with the Korn shell, but otherwise non-standard and now redundant with -e ) and binary (for a nd, in POSIX (with XSI) but deprecated there) operator. Here [ ! -a /bin/grep ] invokes the binary operator as required by POSIX. It's [ "$a" -a "$b" ] to test whether $a is non-empty and $b is non empty, here with $a == ! and $b == /bin/grep . As both strings are non-empty, it returns true . See also the "The -a and -o operators are considered binary operators when there are three arguments" in the text you quoted. -a is deprecated in both the unary and binary form, the unary one because it's superseded by -e , the binary one because it makes for unreliable and ambiguous test expressions. To test for file existence (though in effect, it's more a test whether the file is accessible , whether stat() would succeed on the path¹), use [ -e filepath ] . To and two conditions, use && between two invocations of [ . To test whether a string is non empty, I personally prefer the [ -n "$string" ] form over the [ "$string" ] one. So: test for file existence: [ -e "$file" ] # not [ -a "$file" ] [ ! -e "$file" ] # not [ ! -a "$file" ] test for two strings being non-empty: [ -n "$a" ] && [ -n "$b" ] # not [ "$a" -a "$b" ] [ "$a" ] && [ "$b" ] From the rationale in the POSIX specification for the test (aka [ ) utility : The XSI extensions specifying the -a and -o binary primaries and the '(' and ')' operators have been marked obsolescent. (Many expressions using them are ambiguously defined by the grammar depending on the specific expressions being evaluated.) Scripts using these expressions should be converted to the forms given below. Even though many implementations will continue to support these obsolescent forms, scripts should be extremely careful when dealing with user-supplied input that could be confused with these and other primaries and operators. and: An early proposal used the KornShell -a primary (with the same meaning), but this was changed to -e because there were concerns about the high probability of humans confusing the -a primary with the -a binary operator. The manuals of yash , bosh , GNU coreutils do guard against using binary -a / -o in their respective [ / test implementations and zsh 's manual never documented them², but many others including bash (the GNU shell) unfortunately still don't discourage their usage nor deprecate them. ¹ more on that in this answer of mine to a related stackoverflow Q&A ² a test / [ builtin was only added to zsh in version 2.0.3 in 1991. The [[ ... ]] special construct, from the Korn shell was always preferred there, and has its own syntax where && and || are used for and and or operators.
{ "source": [ "https://unix.stackexchange.com/questions/676608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446998/" ] }
676,617
How do we get our own Linux binary to recognize and use its library dependencies from a local library directory, i.e. /usr/local/lib, given that we install the binary itself in /usr/local/bin?
On most Linux systems the dynamic linker already searches /usr/local/lib, either because that path is listed in /etc/ld.so.conf or in a drop-in file under /etc/ld.so.conf.d/. The search paths are compiled into a cache, so after installing a new shared library into /usr/local/lib you need to refresh that cache by running ldconfig as root; until you do, the program may fail to start because the loader cannot find the library even though the file is in place. You can verify how the binary's dependencies resolve with ldd /usr/local/bin/yourprogram . If your distribution does not include /usr/local/lib in the linker configuration, add a file such as /etc/ld.so.conf.d/local.conf containing that single path and run ldconfig again. Two other common approaches: export LD_LIBRARY_PATH=/usr/local/lib in the environment (quick, but per-session and easy to forget), or bake the path into the binary at link time with an rpath/runpath (for example the -Wl,-rpath,/usr/local/lib linker option), so the binary finds its libraries without any system-wide configuration. See ld.so(8) and ldconfig(8) for the exact search order the loader uses.
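A minimal sketch of the usual steps, assuming a glibc-based system and placeholder library (libfoo.so.1) and program (myprogram) names:
$ ls /usr/local/lib/libfoo.so.1                                      # the locally installed dependency
$ echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/usr-local.conf    # only needed if the path isn't configured already
$ sudo ldconfig                                                      # rebuild the linker cache
$ ldd /usr/local/bin/myprogram                                       # confirm the dependency now resolves to /usr/local/lib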
{ "source": [ "https://unix.stackexchange.com/questions/676617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
676,733
I'm trying to get the exit code of the function that I'm repeatedly calling in the "condition" part of a Bash while loop: while <function>; do <stuff> done When this loop terminates due to an error, I need the exit code of <function> . Any thoughts on how I can get that?
You can capture the exit value from the condition and propagate that forward: while rmdir FOO; ss=$?; [[ $ss -eq 0 ]] do echo in loop done echo "out of loop with ?=$? but ss=$ss" Output rmdir: failed to remove 'FOO': No such file or directory out of loop with ?=0 but ss=1 In this instance the exit status from rmdir FOO has been captured in the variable ss and is 1 . (Try replacing rmdir FOO with ( exit 4 ) . You'll find that ss=4 .) How does this work? Remember that the syntax is actually while list-1; do list-2; done , and not the much more usual expectation of while command; do list; done . The list-1 can be a sequence of semicolon-separated commands, and the documentation states that the " while command continuously executes the list list-2 as long as the last command in the list list-1 returns an exit status of zero. " As an alternative presentation of the messy-looking while condition, it is possible to assign a variable while inside an expression (( ... )) , and then to use the result. This gives the harder-to-read but more compact assign-and-test structure: while rmdir FOO; ((! (ss=$?))) do echo in loop done echo "out of loop with ?=$? but ss=$ss" Alternatively you can use while rmdir FOO; ! (( ss=$? )) . These work because ((1)) evaluates arithmetically to 1, which is generally associated with true, and so the exit code of that evaluation is 0 (success). On the other hand, ((0)) evaluates arithmetically to 0, which is generally associated with false, and so the exit code of that evaluation is 1 (failure). This may seem confusing, as after all both evaluations ((.)) are "successful", but this is a hack to bring the value of arithmetic expressions representing true/false in line with bash's exit codes of success/failure, and make conditional expressions like if ...; then ...; fi , while ...; do ...; done , etc, work correctly, whether based on exit codes or arithmetic values.
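Another common pattern, if you prefer to keep the test out of the while header altogether, is an explicit loop with break (a sketch, with a placeholder function name):
while :; do
  myfunction; rc=$?
  [ "$rc" -eq 0 ] || break
  # <stuff>
done
echo "myfunction exited with status $rc"
Here rc always holds the exit status of the most recent call, including the failing one that terminated the loop.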
{ "source": [ "https://unix.stackexchange.com/questions/676733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58343/" ] }
677,591
I'm getting the following error from sudo : $ sudo ls sudo: /etc/sudoers is owned by uid 1000, should be 0 sudo: no valid sudoers sources found, quitting sudo: unable to initialize policy plugin Of course I can't chown it back to root without using sudo . We don't have a password on the root account either. I honestly don't know how the system got into this mess, but now it's up to me to resolve it. Normally I would boot into recovery mode, but the system is remote and only accessible over a VPN while booted normally. For the same reason, booting from a live CD or USB stick is also impractical. The system is Ubuntu 16.04 (beyond EOL, don't ask), but the question and answers are probably more general.
The procedure described here (which may itself be an imperfect copy of this Ask Ubuntu answer ) performed the miracle. I'm copying it here, and adding some more explanations. Procedure Open two SSH sessions to the target server. In the first session, get the PID of bash by running: echo $$ In the second session, start the authentication agent with: pkttyagent --process 29824 Use the pid obtained from step 1. Back in the first session, run: pkexec chown root:root /etc/sudoers /etc/sudoers.d -R Enter the password in the second session password promt. Explanation Similar to sudo , pkexec allows an authorized user to execute a program as another user, typically root . It uses polkit for authentication; in particular, the org.freedesktop.policykit.exec action is used. This action is defined in /usr/share/polkit-1/actions/org.freedesktop.policykit.policy : <action id="org.freedesktop.policykit.exec"> <description>Run programs as another user</description> <message>Authentication is required to run a program as another user</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>auth_admin</allow_active> </defaults> </action> auth_admin means that an administrative user is allowed to perform this action. Who qualifies as an administrative user? On this particular system (Ubuntu 16.04), that is configured in /etc/polkit-1/localauthority.conf.d/51-ubuntu-admin.conf : [Configuration] AdminIdentities=unix-group:sudo;unix-group:admin So any user in the group sudo or admin can use pkexec . On a newer system (Arch Linux), it's in /usr/share/polkit-1/rules.d/50-default.rules : polkit.addAdminRule(function(action, subject) { return ["unix-group:wheel"]; }); So here, everyone in the wheel group is an administrative user. In the pkexec manual page, it states that if no authentication agent is found for the current session, pkexec uses its own textual authentication agent, which appears to be pkttyagent . Indeed, if you run pkexec without first starting the pkttyagent process, you are prompted for a password in the same shell but it fails after entering the password: polkit-agent-helper-1: error response to PolicyKit daemon: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: No session for cookie This appears to be an old bug in polkit that doesn't seem to be getting any traction. More discussion . The trick of using two shells is merely a workaround for this issue.
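Once ownership is back to root, it may be worth confirming the rest of the file's state before relying on sudo again — a quick, optional check: if the mode was disturbed as well, restore it with sudo chmod 0440 /etc/sudoers (0440 is the expected mode), and sudo visudo -c will verify that /etc/sudoers and everything under /etc/sudoers.d still parse cleanly.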
{ "source": [ "https://unix.stackexchange.com/questions/677591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177652/" ] }
677,608
I have a series of headings in a file that have names like this: grep ">scaffold_3" DM_v6.1_unanchoredScaffolds.fasta >scaffold_3 >scaffold_303 >scaffold_31 >scaffold_34 >scaffold_36 >scaffold_37 >scaffold_39 >scaffold_33 >scaffold_300 I would like to select only the first, so I tried: $ grep ">scaffold_3 " file.fasta $ $ grep ">scaffold_3[[:blank:]]" file.fasta $ $ grep ">scaffold_3\t" file.fasta $ $ grep ">scaffold_3\ " file.fasta $ $ grep ">scaffold_3 " file.fasta $ $ grep ">scaffold_3[[:space:]]" file.fasta $ $ grep ">scaffold_3$" file.fasta >scaffold_3 How can I get the exact name but not the synonyms, given that the character after the name might be a space, tab, newline (perhaps from Windows too) and that [[:space:]] did not work? Thank you
The reason the [[:space:]] , [[:blank:]] and literal-space attempts find nothing is that there is nothing after scaffold_3 on that line: grep matches against the line with its trailing newline removed, so the line simply ends right after the 3 , and only $ (end of line) can match at that position. The longer names such as >scaffold_303 don't match >scaffold_3$ because more digits follow before the end of the line. To match the exact name whether it is followed by whitespace, a carriage return from Windows line endings, or nothing at all, either rely on a word boundary, grep -w '>scaffold_3' DM_v6.1_unanchoredScaffolds.fasta (the match is accepted only if the character after the 3 is a non-word character or the end of the line, so >scaffold_31 and >scaffold_303 are rejected), or spell the alternatives out with an extended regular expression, grep -E '>scaffold_3([[:space:]]|$)' DM_v6.1_unanchoredScaffolds.fasta . A carriage return counts as [[:space:]] , so the second form also copes with DOS line endings; if those are present you may prefer to strip them first with dos2unix or tr -d '\r' .
{ "source": [ "https://unix.stackexchange.com/questions/677608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277882/" ] }
678,930
I would like to output this on completion of my bash script. /\_/\ ( o.o ) > ^ < I have tried the following but all return errors. echo /\\_/\\\n\( o.o \)\n > ^ < echo \/\\_\/\\\r\n( o.o )\r\n > ^ < echo /\\_/\\\n\( o.o \)\n \> ^ < How do I escape these characters so that bash renders them as a string?
In this case, I'd use cat with a (quoted) here-document: cat <<'END_CAT' /\_/\ ( o.o ) > ^ < END_CAT This is the best way of ensuring the ASCII art is outputted the way it is intended without the shell "getting in the way" (expanding variables etc., interpreting backslash escape sequences, or doing redirections, piping etc.) You could also use a multi-line string with printf : printf '%s\n' ' /\_/\ ( o.o ) > ^ <' Note the use of single quotes around the static string that we want to output. We use single quotes to ensure that the ASCII art is not interpreted in any way by the shell. Also note that the string that we output is the second argument to printf . The first argument to printf is always a single quoted formatting string, where backslashes are far from inactive. Or multiple strings with printf (one per line): printf '%s\n' ' /\_/\' '( o.o )' ' > ^ <' printf '%s\n' \ ' /\_/\' \ '( o.o )' \ ' > ^ <' Or, with echo (but see Why is printf better than echo? ; basically, depending on the shell and its current settings, there are possible issues with certain escape sequences that may not play nice with ASCII drawings), echo ' /\_/\ ( o.o ) > ^ <' But again, just outputting it from a here-document with cat would be most convenient and straight-forward I think.
{ "source": [ "https://unix.stackexchange.com/questions/678930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/465869/" ] }
679,925
I am trying to clone a 500 GB SSD to a 1TB SSD. For some reason, it keeps failing when the data being copied reaches 8GB. This is the third 1TB SSD I've tried this on and they all get stuck at the same place. I've ran the following command: dd if=/dev/sda of=/dev/sdb bs=1024k status=progress I've also tried to clone the drive using Clonezilla which fails at the same spot. I used GParted to reformat the drive and set it to a EXT4 file system but it still gets stuck at the same spot. Sda is internal and sdb is plugged in externally. The error I'm getting says: 7977443328 bytes (8.0 GB, 7.4 GB) copied, 208s, 38.4 MB/s dd: error reading '/dev/sda': Input/output error 7607+1 records in 7607+1 records out Thanks to @roaima for the answer below. I was able to run ddrescue and it copied most of the data over. I took the internal SSD out and connected both the new and old SSDs to a CentOS box via USB3. I ran the following: ddrescue -v /dev/sdb /dev/sdc tmp --force It ran for over 15 hours. It stopped overnight. But the good thing is it picked back up where it left off when I ran the command again. I used screen so that I wouldn't be locked into a single session the second time around :) . I used Ctrl+c to exit the ddrescue command after 99.99% of the data was rescued since it was hanging there for hours. I was able to boot from the new drive and it booted right up. Here is the state where I exited the ddrescue: Initial status (read from mapfile) rescued: 243778 MB, tried: 147456 B, bad-sector: 0 B, bad areas: 0 Current status ipos: 474344 MB, non-trimmed: 1363 kB, current rate: 0 B/s ipos: 474341 MB, non-trimmed: 0 B, current rate: 0 B/s opos: 474341 MB, non-scraped: 522752 B, average rate: 8871 kB/s non-tried: 0 B, bad-sector: 143360 B, error rate: 0 B/s rescued: 500107 MB, bad areas: 123, run time: 8h 1m 31s pct rescued: 99.99%, read errors: 354, remaining time: 14h 31m time since last successful read: 6m 7s Scraping failed blocks... (forwards)^C Interrupted by user Hopefully this helps others. I think my old drive was starting to fail. Hopefully no data was lost. Now on to resizing the LUKS partition :)
The error is, dd: error reading '/dev/sda': Input/output error , which tells you that the problem is reading the source disk and not writing to the destination. You can replace the destination disk as many times as you like and it won't resolve the issue of reading the source. Instead of using dd , consider rescuing the data off the disk before it dies completely. Either copy the files using something like rsync or cp , or take an image copy with ddrescue . ddrescue -v /dev/sda /dev/sdb /some/path/not/on/sda_or_sdb The last parameter points to a relatively small temporary file (the map file) that is on neither /dev/sda nor /dev/sdb . It could be on an external USB memory stick if you have nothing else. The ddrescue command understands that a source disk may be faulty. It reads relatively large blocks at a time until it hits an error, and at that point it marks the section for closer inspection and smaller copy attempts. The map file is used to allow for restarts and continuations in the event that your source disk locks up and the system has to be restarted. It'll do its best to copy everything it can. Once you've copied the disk, your /dev/sdb will appear to have partitions corresponding only to the original disk's size. You can use fdisk or gparted / parted to fix that up afterwards. If you had an error copying data you should first use one of the fsck family to check and fix the partitions. For example, e2fsck -f /dev/sdb1 .
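If the source disk really is failing, it can help to confirm that (and gauge how urgent the rescue is) with smartmontools, assuming it is installed and the drive exposes SMART data:
$ sudo smartctl -H /dev/sda     # overall health verdict
$ sudo smartctl -a /dev/sda     # full attribute list — watch the reallocated and pending sector counts
Rising reallocated/pending sector counts after an exercise like this are a strong hint the drive should be replaced rather than reused.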
{ "source": [ "https://unix.stackexchange.com/questions/679925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289356/" ] }
680,635
I stumbled upon the bsdutils package in Debian. The description says: This package contains the bare minimum of BSD utilities needed for a Debian system: logger, renice, script, scriptlive, scriptreplay and wall. The remaining standard BSD utilities are provided by bsdextrautils. Similarly, the description of bsdmainutils also mention BSD: This package contains lots of small programs many people expect to find when they use a BSD-style Unix system. I was surprised to see that these packages relates to BSD, in the context of a Linux system. Do these packages use some code from BSD? What is a BSD-style Unix system ?
In the beginning, there was Unix , which was a product developed by Bell Labs (a subsidiary of AT&T ). A lot of groups customized their copy and added their own programs, and shared their improvements with others (for pay or for free). One such group was the University of California, Berkley (UCB). They shared the Berkeley Software Distribution (BSD) under a very liberal license (known today as the original BSD license ). Originally, this was a set of additions to the basic Unix. Eventually, they rewrote the complete operating system, so that it could be used without getting a license from AT&T. Apart from BSD, the main suppliers of Unix operating systems were computer vendors who sold the operating system with the computer. Some kept basing their operating system on the AT&T version. These systems are known as the System V family, because it was based on this version of AT&T Unix. Other vendors used the BSD version. Some made their own, with the goal of being broadly compatible with the two main players (System V and BSD) but each with their own specifics. A “System V operating system” is a system that is more compatible with AT&T Unix. A “BSD operating system” is a system that is more compatible with BSD. GNU was another project to make an operating system that could play the same role as BSD: freely available, and with the same kinds of features as Unix. GNU was much more ambitious than BSD, but as a result they didn't manage to do everything they wanted, and in particular they were missing a critical bit: a kernel. In the 1990s, Linux became the de facto standard kernel for GNU, and an operating system based mostly on GNU core programs on a Linux kernel is known as “Linux”, or sometimes “GNU/Linux”. GNU/Linux has its own history that's independent from System V and BSD, so it doesn't have all the features that all actual System V systems share, or all the features that all actual BSD systems share. Debian's bsdutils and bsdmainutils are collections of small programs that are typically present on BSD systems, but not part of the core that is present on all Unix systems. The bsdutils collection is from util-linux . They're programs with similar interfaces to the BSD utilities with the same name, but most if not all were written completely independently, and they're distributed under a GNU license. bsdmainutils is a collection of programs copied from a BSD collection, still distributed under a BSD license. They're now maintained by Debian, but they pick up some improvements made by BSD distributions.
{ "source": [ "https://unix.stackexchange.com/questions/680635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50687/" ] }
683,063
rmdir deletes only an empty directory. To delete recursively, rm -rf is used. Why doesn't rmdir have a recursive option? Logically, when I am deleting a directory , I want to use rmdir . Given that rm is used for deleting a directory in all but the simplest case, why does rmdir even exist? The functionality is subsumed in rm . Is this just a historical accident?
Unlinking directories was originally a privileged operation : It is also illegal to unlink a directory (except for the super-user). So rmdir was implemented as a small binary which only removed directories , which at the time involved removing .. and . inside the directory, and then the directory itself. rmdir was designed to be setuid root; it performs separate permission tests using access to determine whether the real user is allowed to remove a directory. Like any setuid root binary, it’s better to keep it simple and tightly-focused. rm -r actually used this separate binary to delete directories as necessary. It seems the lasting difference between rm -r and rmdir is the result of this initial difference. Presumably since rm acquired the ability to delete recursively early on, and rmdir was supposed to have a very small remit, it was never deemed useful to give rmdir the ability to delete recursively itself.
{ "source": [ "https://unix.stackexchange.com/questions/683063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146345/" ] }
684,833
I am writing a bash script which contains a simple if section with two conditions: if [[ -n $VAR_A ]] && [[ -n $VAR_B ]]; then echo >&2 "error: cannot use MODE B in MODE A" && exit 1 fi A senior engineer reviewed my code and commented: please avoid using && when you could simply execute the two commands in subsequent lines instead. He didn't further explain. But out of curiosity, I wonder if this is true, and what is the reason for avoiding using && .
The review comment probably refers to the second usage of the && operator. You don't want to not exit if the echo fails, I guess, so writing the commands on separate lines makes more sense: if [[ -n $VAR_A ]] && [[ -n $VAR_B ]]; then echo >&2 "error: cannot use MODE B in MODE A" exit 1 fi BTW, in bash you can include && inside the [[ ... ]] conditions: if [[ -n $VAR_A && -n $VAR_B ]]; then
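If this check-and-bail pattern recurs through the script, a small helper keeps each site to one clear line without chaining commands (a sketch; the function name is arbitrary):
die() {
  printf '%s\n' "$1" >&2
  exit "${2:-1}"
}
if [[ -n $VAR_A && -n $VAR_B ]]; then
  die "error: cannot use MODE B in MODE A"
fi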
{ "source": [ "https://unix.stackexchange.com/questions/684833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/491732/" ] }
684,839
So I'm building a custom Linux-based OS, and I chose to run it as a RAM disk (initramfs). Unfortunately, I keep getting a Kernel Panic during boot. RAMDISK: gzip image found at block 0 using deprecated initrd support, will be removed in 2021. exFAT-fs (ram0): invalid boot record signature exFAT-fs (ram0): failed to read boot sector exFAT-fs (ram0): failed to recognize exfat type exFAT-fs (ram0): invalid boot record signature exFAT-fs (ram0): failed to read boot sector exFAT-fs (ram0): failed to recognize exfat type List of all partitions: 0100 4096 ram0 (driver?) 0101 4096 ram1 (driver?) 0102 4096 ram2 (driver?) 0103 4096 ram3 (driver?) 0104 4096 ram4 (driver?) 0105 4096 ram5 (driver?) 0106 4096 ram6 (driver?) 0107 4096 ram7 (driver?) 0108 4096 ram8 (driver?) 0109 4096 ram9 (driver?) 010a 4096 ram10 (driver?) 010b 4096 ram11 (driver?) 010c 4096 ram12 (driver?) 010d 4096 ram13 (driver?) 010e 4096 ram14 (driver?) 010f 4096 ram15 (driver?) No filesystem could mount root, tried: vfat msdos exfat ntfs ntfs3 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) Any chance this is something missing in my kernel build? Here's how I've designed the OS: Component My Choice Init Daemon initrd Commands busybox 1.35.0 Kernel Linux 5.15.12 filesystem msdos, fat, exfat, ext2, ext3, or ext4 Bootloader syslinux or extlinux NOTES: I tried each file system one at a time, and all provide the same response, which leads me to believe that it is not an issue with the filesystem itself. I also tried both syslinux and extlinux for testing purposes. Here's how I've structured my disk: /media/vfloppy └── [ 512 Jan 3 08:06] boot ├── [ 36896 Jan 3 08:06] initramfs.cpio.gz ├── [ 512 Jan 3 08:06] syslinux │   ├── [ 283 Jan 3 08:06] boot.msg │   ├── [ 120912 Jan 3 08:06] ldlinux.c32 │   ├── [ 60928 Jan 3 08:06] ldlinux.sys │   └── [ 173 Jan 3 08:06] syslinux.cfg └── [ 939968 Jan 3 08:06] vmlinux Here is my syslinux.cfg : DISPLAY boot.msg DEFAULT linux label linux KERNEL /boot/vmlinux INITRD /boot/initramfs.cpio.gz APPEND root=/dev/ram0 init=/init loglevel=3 PROMPT 1 TIMEOUT 10 F1 boot.msg I've also enabled the following filesystem options in my kernel's .config file: CONFIG_BLK_DEV_INITRD=y CONFIG_INITRAMFS_SOURCE="" CONFIG_FS_IOMAP=y CONFIG_EXT2_FS=y CONFIG_EXT2_FS_XATTR=y CONFIG_FS_MBCACHE=y CONFIG_EXPORTFS_BLOCK_OPS=y CONFIG_FAT_FS=y CONFIG_MSDOS_FS=y CONFIG_PROC_FS=y CONFIG_BLK_DEV_RAM=y CONFIG_BLK_DEV_RAM_COUNT=16 CONFIG_BLK_DEV_RAM_SIZE=4096 CONFIG_HAVE_KERNEL_GZIP=y CONFIG_RD_GZIP=y CONFIG_DECOMPRESS_GZIP=y
The boot log itself points at the likely problem. The line RAMDISK: gzip image found at block 0 ... using deprecated initrd support indicates that the kernel did not unpack initramfs.cpio.gz as a cpio archive into the initial rootfs; it fell back to the legacy initrd path instead, decompressed the image into /dev/ram0 and then — because of root=/dev/ram0 — tried to mount that RAM disk with the block filesystems it has registered (vfat, msdos, exfat, ntfs, ntfs3). A cpio archive is not a filesystem image, so every mount attempt fails and you end up at the panic. Things to check, roughly in order of likelihood: make sure the archive really is a newc-format cpio archive (created from inside the root of your tree with cpio -o -H newc and then gzipped) — the kernel's initramfs unpacker only accepts the newc/crc formats; make sure /init sits at the top level of the archive and is executable, because the kernel runs it directly out of the unpacked rootfs; and drop root=/dev/ram0 from the APPEND line — with a proper initramfs no block device is mounted at all, the kernel just runs /init (you can name it explicitly with rdinit=/init ). If you genuinely want the legacy ramdisk-image approach instead, the image must contain an actual filesystem (for example ext2) rather than a cpio archive, and CONFIG_BLK_DEV_RAM plus that filesystem driver must be built in — but for a busybox-based system the initramfs route is much simpler.
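For reference, a sketch of how such an archive is usually built (paths are illustrative and should be adapted to your tree):
$ cd /path/to/initramfs-root          # the directory containing init, bin/, etc.
$ find . -print0 | cpio --null -o -H newc | gzip -9 > /media/vfloppy/boot/initramfs.cpio.gz
and a matching syslinux stanza would omit root=/dev/ram0 , e.g. APPEND rdinit=/init loglevel=3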
{ "source": [ "https://unix.stackexchange.com/questions/684839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508479/" ] }
685,233
I was wondering if it would be possible to write a disk image file directly to a partition without saving it as a file first. Something like dd if="http://diskimages.com/i_am_a_disk_image.img" of=/dev/sdb1 bs=2M I would also accept an answer in C or Python because I know how to compile them.
This is actually trivial. You can write to the device just like it's a file, and there are commands for directly downloading content and either writing it to a file or writing it to "stdout". As the user root you can simply: curl https://www.example.com/some/file.img > /dev/sdb Where /dev/sdb is your hard drive. This is not generally recommended but will work just fine and is useful in very small devices without much disk space. Incidently it would be more normal to write a disk image to a disk /dev/sdb not a partition /dev/sdb1 .
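If you want progress output and control over the block size while doing this, piping into dd also works (the URL is a placeholder, and as always triple-check the target device name first):
curl -L https://www.example.com/some/file.img | sudo dd of=/dev/sdb bs=4M status=progress conv=fsync
Here -L follows HTTP redirects and conv=fsync makes dd flush the written data to the device before it exits.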
{ "source": [ "https://unix.stackexchange.com/questions/685233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499916/" ] }
685,305
In short: mkfifo fifo; (echo a > fifo) &; (echo b > fifo) &; cat fifo What I expected: a b since the first echo … > fifo should be the first to have opened the file, so I expect that process to be the first to write to it (with its open unblocking first). What I get: b a To my surprise, this behaviour also happened when opening two separate terminals to do the writing in definitely independent processes. Am I misunderstanding something about the first-in, first-out semantics of a named pipe? Stephen suggested adding a delay: #!/usr/bin/zsh delay=$1 N=$(( $2 - 1 )) out=$(for n in {00..$N}; do mkfifo /tmp/fifo$n (echo $n > /tmp/fifo$n) & sleep $delay (echo $(( $n + 1000 )) > /tmp/fifo$n )& # intentionally using `cat` here to not step into any smartness cat /tmp/fifo$n | sort -C || echo +1 rm /tmp/fifo$n done) echo "$(( $res )) inverted out of $(( $N + 1 ))" Now, this works 100% correct ( delay = 0.1, N = 100 ). Still, running mkfifo fifo; (echo a > fifo) &; sleep 0.1 ; (echo b > fifo) &; cat fifo manually almost always yields the inverted order. In fact, even copying and pasting the for loop itself fails about half of the time. I'm very confused about what's happening here.
This has nothing to do with FIFO semantics of pipes, and doesn’t prove anything about them either way. It has to do with the fact that FIFOs block on opening until they are opened for both writing and reading; so nothing happens until cat opens fifo for reading. since the first echo should be first. Starting processes in the background means that you don’t know when they will actually be scheduled, so there’s no guarantee that the first background process will do its work before the second one. The same applies to unblocking blocked processes . You can improve the odds, while still using background processes, by artificially delaying the second one: rm fifo; mkfifo fifo; echo a > fifo & (sleep 0.1; echo b > fifo) & cat fifo The longer the delay, the better the odds: echo a > fifo blocks waiting to finish opening fifo , cat starts and opens fifo which unblocks echo a , and then echo b runs. However the major factor here is when cat opens the FIFO: until then, the shells block trying to set up the redirections. The output order seen ultimately depends on the order in which the writing processes are unblocked. You’ll get different results if you run cat first: rm fifo; mkfifo fifo; cat fifo & echo a > fifo & echo b > fifo That way, opening fifo for writing will tend not to block (still, without guarantees), so you’ll see a first with a higher frequency than in the first setup. You’ll also see cat finishing before echo b runs, i.e. only a being output.
{ "source": [ "https://unix.stackexchange.com/questions/685305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106650/" ] }
685,766
I've checked these two questions ( question one , question two ), but they were not helpful for me to understand. I have a file file.txt with 40 lines of Hello World! string. ls -l shows that its size is 520 bytes. Now I archive this file with tar -cvf file.tar file.txt and when I do ls -l again I see that file.tar is 10240 bytes. Why? I've read some manuals and have understood that archiving and compressing are different things. But can someone please explain how it is working?
tar archives have a minimum size of 10240 bytes by default; see the GNU tar manual for details (but this is not GNU-specific). With GNU tar , you can reduce this by specifying either a different block size, or different block factor, or both: tar -cv -b 1 -f file.tar file.txt The result will still be bigger than file.txt , because file.tar stores metadata about file.txt in addition to file.txt itself. In most cases you’ll see one block for the file’s metadata (name, size, timestamps, ownership, permissions), then the file content, then two blocks for the end-of-archive entry, so the smallest archive containing a non-zero-length file is four blocks in size (2,048 bytes with a 512-byte block).
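You can watch the padding at work by varying the blocking factor (exact figures depend on the tar implementation; this assumes GNU tar):
$ tar -cf default.tar file.txt && ls -l default.tar   # 10240 bytes: one default-sized record
$ tar -b 1 -cf small.tar file.txt && ls -l small.tar  # only a handful of 512-byte blocks
Compression is the other practical remedy — the zero padding squeezes down to almost nothing, so tar -czf file.tar.gz file.txt comes out far smaller than either uncompressed archive.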
{ "source": [ "https://unix.stackexchange.com/questions/685766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504939/" ] }
686,502
I am trying to run top with multiple PIDs using -p option and xargs . However, top fails to run with error top: failed tty get : $ pgrep gvfs | paste -s -d ',' | xargs -t top -p top -p 1598,1605,1623,1629,1635,1639,1645,1932,2744 top: failed tty get I used the -t option for xargs to see the full command which is about to be executed. It seems fine and I can run it successfully by hand: top -p 1598,1605,1623,1629,1635,1639,1645,1932,2744 However, it does not run with xargs . Why is that?
Turns out that there is a special option --open-tty in xargs for interactive applications like top . From man xargs : -o, --open-tty Reopen stdin as /dev/tty in the child process before executing the command. This is useful if you want xargs to run an interactive application. The command to run top should be: pgrep gvfs | paste -s -d ',' | xargs --open-tty top -p
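As an aside, this particular pipeline doesn't strictly need xargs at all: pgrep can emit a delimiter-separated list itself via its -d option, which keeps top attached to the terminal in the ordinary way:
top -p "$(pgrep -d, gvfs)"
(This assumes at least one matching process exists; with no matches the -p argument would be empty and top will complain.)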
{ "source": [ "https://unix.stackexchange.com/questions/686502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87918/" ] }
686,513
I want to understand how the APT package is managed in general, considering the following situation I got into today: I was trying to add MongoDB to my Debian machine. apt search mongodb showed good-looking results, and before attempting to install I read the MondoDB documentation which stated: Follow these steps to run MongoDB Community Edition on your system. These instructions assume that you are using the official mongodb-org package -- not the unofficial mongodb package provided by Debian -- and are using the default settings. From this, I understood and was surprised that what I get from Debian's apt install is unofficial by the developers of the app. This sounds worse than "not recommended". I do understand Debian APT package repository tends to show old versions and is never meant to catch up with latest leading edge updates. There are so many ways to deal with this, but now I'm concerned by the words unofficial . Does this mean, packages related to MongoDB (or any other app) on the APT repository isn't officially approved by the app developers? Or was it officially shipped by the developers but "avoid because it's not the latest version"? Or did someone (some entity?) copy from the official installation package and paste it to APT? I'm not trying to understand just this specific case with MongoDB. Instead I want to understand the overall "politics" on applications and APT. How does it work, how was it supposed to work? If this is a noob question then I'm sorry, but I couldn't find a good explanation online. Any links or reference would be appreciated.
Packages in all distributions (not only Debian) are usually not packaged by the developers of the application, but by the members of the community of the distribution, usually called packagers or package maintainers . Sometimes the application developer can be also the packager in some distributions but it isn't a rule and developers definitely cannot maintain their application in all distributions (for example I maintain my software in Fedora, but it is packaged by someone else in Debian). When it comes to "approval" and being "official" or "unoffical". We are talking about free software here, the licenses allow distributing the software so you don't need anyone's approval to package software for a distribution. The developers may disagree with the way their software is being packaged and shipped but that's all they can do. I'm not sure what makes the package (un)official. I guess all packages are in theory unofficial because they are made by a third party. It probably depends on your definition of being (un)official. One thing that can cause tension between packagers and developers is the release cycle. Distribution (especially "stable" distributions like Debian Stable or RHEL/CentOS) have their own release cycle and their own promises about software and API stability which is usually different from the upstream release cycle. This is the reason why you see older versions in your distributions, usually with some bug fixes backports. And sometimes upstream developers don't like this, because they get bug reports for things that are already fixed but not backported etc. And sometimes packagers make their own decisions about compile time options and other things that change (default) functionality of the software, which can be also annoying. So developers tell you something like "Use our 'official' packages instead of your distribution packages" and it's up to the user to decide what is best for them.
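If you want to check this for a concrete package, apt will tell you who maintains it and which repository the candidate version comes from — for example, using the package name from the question (output details vary by release):
$ apt show mongodb
$ apt-cache policy mongodb
The Maintainer: field names the distribution packager rather than the upstream vendor, and the policy output shows whether the version on offer comes from the Debian archive or from a third-party repository you have added.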
{ "source": [ "https://unix.stackexchange.com/questions/686513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/496102/" ] }
686,516
What I am trying to do (in bash) is: for i in <host1> <host2> ... <hostN>; do ssh leroy@$i "sudo -i; grep Jan\ 15 /var/log/auth.log" > $i;done to get just today's entries from these hosts auth.logs and aggregate them on my local filesystem. sudo is required because auth.log only allows root access. Using the root user isn't an option because that account is disabled. Using key-based authentication isn't an option because the systems implement 2FA (key and password). When I do the above (after initial authentication) I get sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper I have tried various parameters to the -S option and included the -M option, nothing works. Searching the web doesn't surface anything with this exact situation.
Two separate things are going wrong here. First, sudo -i; grep ... doesn't run grep as root at all: sudo -i starts an interactive root shell, and the grep would only run after that shell exits — as your ordinary user again. The remote command you want is simply sudo grep 'Jan 15' /var/log/auth.log . Second, the "a terminal is required to read the password" message appears because ssh does not allocate a pseudo-terminal when you pass it a command to run, and sudo on those hosts refuses to prompt without one. The usual fix is to ask ssh for a terminal with -t (or -tt to force allocation even when ssh decides not to), so sudo can prompt you on each host; note that with the output redirected to a local file, sudo's prompt text will likely be captured into that file along with the log lines, though the prompt still works. Alternatives: have sudo read the password from standard input with sudo -S (adding -p '' to suppress the prompt), which avoids the terminal requirement but tends to expose the password in scripts, history or process listings; or avoid the password entirely for this one command with a NOPASSWD sudoers rule added via visudo , or by putting the account in the adm group, which on Debian/Ubuntu systems is normally allowed to read /var/log/auth.log without sudo at all.
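A sketch of the adjusted loop (host names and the date string are placeholders; expect an ssh and a sudo prompt per host, given the 2FA setup described):
for i in host1 host2 hostN; do
  ssh -t leroy@"$i" "sudo grep 'Jan 15' /var/log/auth.log" > "$i"
done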
{ "source": [ "https://unix.stackexchange.com/questions/686516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/436589/" ] }
687,436
I can ping google.com for several seconds and when I press Ctrl + C , a brief summary is displayed at the bottom: $ ping google.com PING google.com (74.125.131.113) 56(84) bytes of data. 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=46.7 ms 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=56 time=45.0 ms 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=4 ttl=56 time=54.5 ms ^C --- google.com ping statistics --- 4 packets transmitted, 3 received, 25% packet loss, time 3009ms rtt min/avg/max/mdev = 44.965/48.719/54.524/4.163 ms However, when I do the same redirecting output to log file with tee , the summary is not displayed: $ ping google.com | tee log PING google.com (74.125.131.113) 56(84) bytes of data. 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=1 ttl=56 time=34.1 ms 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=57.0 ms 64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=57 time=50.9 ms ^C Can I get the summary as well when redirecting output with tee ?
ping shows the summary when it is killed with SIGINT , e.g. as a result of Ctrl C , or when it has transmitted the requested number of packets (the -c option). Ctrl C causes SIGINT to be sent to all processes in the foreground process group, i.e. in this scenario all the processes in the pipeline ( ping and tee ). tee doesn’t catch SIGINT (on Linux, look at SigCgt in /proc/$(pgrep tee)/status ), so when it receives the signal, it dies, closing its end of the pipe. What happens next is a race: if ping was still outputting, it will die with SIGPIPE before it gets the SIGINT ; if it gets the SIGINT before outputting anything, it will try to output its summary and die with SIGPIPE . In any case, there’s no longer anywhere for the output to go. To get the summary, arrange to kill only ping with SIGINT : killall -INT ping or run it with a pre-determined number of packets: ping -c 20 google.com | tee log or (keeping the best for last), have tee ignore SIGINT , as you discovered.
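For reference, with tee that option is -i (spelled --ignore-interrupts in GNU tee), so the pipeline becomes:
ping google.com | tee -i log
Ctrl+C still delivers SIGINT to both processes, but tee ignores it, ping prints its summary and exits, and tee copies the summary into the log before exiting on end of input.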
{ "source": [ "https://unix.stackexchange.com/questions/687436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87918/" ] }
687,845
To my surprise the CentOS 7 installer allowed me to create a RAID0 device consisting of roughly a 17 GB disk and a 26 GB disk. I would've expected that even if it allows that, that the logical size would be 2 * min(17 GB, 26 GB) ~= 34 GB . Yet I can really see a usable size of 44 GB on the filesystem level: $ cat /sys/block/md127/md/dev*/size 16955392 26195968 $ df -h |grep md /dev/md127 44G 1.9G 40G 5% / How will the md subsystem behave performance wise, compared to a situation where the disks are equal? As it's impossible to do a straightforward balanced stripe across 2 disks.
raid.wiki.kernel.org says: RAID0/Stripe Mode: The devices should (but don't HAVE to) be the same size. [...] If one device is much larger than the other devices, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone, during writes in the high end of your RAID device. This of course hurts performance. That's a bit awkward phrasing, but the Wikipedia page for mdadm puts it like this: RAID 0 – Block-level striping. MD can handle devices of different lengths, the extra space on the larger device is then not striped. So, what you get probably looks like this, for a simplified case of two disks of 4 and 2 "blocks" in size: disk0 disk1 00 01 02 03 04 05 Reading "blocks" 04-05 would have to be done just from disk0, so no striping advantage there. md devices should be partitionable, so you could probably test with partitions at the start and at the end of the device to see if the speed difference becomes evident.
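One rough way to check this empirically on the assembled array (read-only, but still double-check device names; md127 and the offsets are taken from the sizes in the question):
$ sudo dd if=/dev/md127 of=/dev/null bs=1M count=512 iflag=direct skip=0       # inside the striped region
$ sudo dd if=/dev/md127 of=/dev/null bs=1M count=512 iflag=direct skip=35000   # past ~34 GB, the single-disk tail
If the layout is as described, the second read should run at roughly the speed of the larger disk alone, while the first benefits from both devices.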
{ "source": [ "https://unix.stackexchange.com/questions/687845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125367/" ] }
688,195
I have a filesystem with many small files that I erase regularly (the files are a cache that can easily be regenerated). It's much faster to simply create a new filesystem rather than run rm -rf or rsync to delete all the files (i.e. Efficiently delete large directory containing thousands of files ). The only issue with creating a new filesystem to wipe the filesystem is that its UUID changes, leading to changes in e.g. /etc/fstab . Is there a way to simply "unlink" a directory from e.g. an ext4 filesystem, or completely clear its list of inodes?
Since you're using ext4 you could format the filesystem and then set the UUID to a known value afterwards. man tune2fs writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this c1b9d5a2-f162-11cf-9ece-0020afc76f16 . And similarly, man mkfs.ext4 writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…] Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this # <file system> <mount point> <type> <options> <dump> <pass> LABEL=root / ext4 errors=remount-ro 0 1 LABEL=backup /backup ext4 defaults 0 2 Such labels can be added with the -L flag for tune2fs and mkfs.ext4 . They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But highly unlikely to be unique across multiple systems, so beware if swapping disks around.)
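Concretely, for the cache-wipe use case either approach is a one-liner at reformat time (device name and label are placeholders; -F suppresses the safety prompt, so triple-check the device first):
$ sudo mkfs.ext4 -F -L cache /dev/sdXN       # then mount via LABEL=cache in /etc/fstab
$ sudo mkfs.ext4 -F -U c1b9d5a2-f162-11cf-9ece-0020afc76f16 /dev/sdXN   # or reuse the exact UUID your fstab already names
In the second form, substitute the UUID string that currently appears in your /etc/fstab entry for that filesystem.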
{ "source": [ "https://unix.stackexchange.com/questions/688195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233125/" ] }