420,513
Will the executable of a small, extremely simple program, such as the one shown below, that is compiled on one flavor of Linux run on a different flavor? Or would it need to be recompiled? Does machine architecture matter in a case such as this? int main() { return (99); }
It depends. Something compiled for IA-32 (Intel 32-bit) may run on amd64, as Linux on Intel retains backwards compatibility with 32-bit applications (with suitable software installed). Here's your code compiled on a RedHat 7.3 32-bit system (circa 2002, gcc version 2.96) and then the binary copied over to and run on a Centos 7.4 64-bit system (circa 2017):

    -bash-4.2$ file code
    code: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
    -bash-4.2$ ./code
    -bash: ./code: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
    -bash-4.2$ sudo yum -y install glibc.i686
    ...
    -bash-4.2$ ./code ; echo $?
    99

Ancient RedHat 7.3 to Centos 7.4 (essentially RedHat Enterprise Linux 7.4) stays in the same "distribution" family, so it will likely have better portability than going from some random "Linux from scratch" install from 2002 to some other random Linux distribution in 2018.

Something compiled for amd64 would not run on 32-bit-only releases of Linux (old hardware does not know about new hardware). This is also true for new software compiled on modern systems and intended to be run on ancient old things, as libraries and even system calls may not be backwards portable, so it may require compilation tricks, or obtaining an old compiler and so forth, or possibly instead compiling on the old system. (This is a good reason to keep virtual machines of ancient old things around.)

Architecture does matter; amd64 (or IA-32) is vastly different from ARM or MIPS, so the binary from one of those would not be expected to run on another. At the assembly level, the main section of your code on IA-32 compiles via gcc -S code.c to

    main:
        pushl %ebp
        movl  %esp,%ebp
        movl  $99,%eax
        popl  %ebp
        ret

which an amd64 system can deal with (on a Linux system -- OpenBSD by contrast on amd64 does not support 32-bit binaries; backwards compatibility with old archs does give attackers wiggle room, e.g. CVE-2014-8866 and friends). Meanwhile, on a big-endian MIPS system, main instead compiles to:

    main:
        .frame  $fp,8,$31
        .mask   0x40000000,-4
        .fmask  0x00000000,0
        .set    noreorder
        .set    nomacro
        addiu   $sp,$sp,-8
        sw      $fp,4($sp)
        move    $fp,$sp
        li      $2,99
        move    $sp,$fp
        lw      $fp,4($sp)
        addiu   $sp,$sp,8
        j       $31
        nop

which an Intel processor will have no idea what to do with, and likewise for the Intel assembly on MIPS. You could possibly use QEMU or some other emulator to run foreign code (perhaps very, very slowly).

However! Your code is very simple code, so it will have fewer portability issues than anything else; programs typically make use of libraries that have changed over time (glibc, openssl, ...). For those, one may also need to install older versions of various libraries (RedHat, for example, typically puts "compat" somewhere in the package name for such)

    compat-glibc.x86_64   1:2.12-4.el7.centos

or possibly worry about ABI (Application Binary Interface) changes for way-old things that use glibc, or more recently changes due to C++11 or other C++ releases. One could also compile statically (greatly increasing the binary size on disk) to try to avoid library issues, though whether some old binary did this depends on whether the old Linux distribution was compiling most everything dynamic (RedHat: yes) or not. On the other hand, things like patchelf can rejigger dynamic (ELF, but probably not a.out format) binaries to use other libraries.

However! Being able to run a program is one thing, and actually doing something useful with it is another.
Old 32-bit Intel binaries may have security issues if they depend on a version of OpenSSL that has some horrible and not-backported security problem in it, or the program may not be able to negotiate at all with modern web servers (as the modern servers reject the old protocols and ciphers of the old program), or SSH protocol version 1 is no longer supported, or ...
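To illustrate the static-compilation point mentioned above, here is a small sketch (the file names and the -m32 flag are assumptions, not from the question; the 32-bit static glibc would need to be installed). A statically linked binary avoids the /lib/ld-linux.so.2 and shared-glibc dependency shown in the session above, at the cost of a much larger file:

    gcc -m32 -static -o code code.c   # build a statically linked 32-bit binary
    file code                         # should report "statically linked"
    ldd code                          # should report "not a dynamic executable"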
{ "source": [ "https://unix.stackexchange.com/questions/420513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272846/" ] }
420,519
I have two variables (a text string and a line number), and I want to insert my text at line x:

    card=$(shuf -n1 shuffle.txt)
    i=$(shuf -i1-52 -n1)

'card' is my text: a card randomly selected from a shuffled 'deck', and I want to insert it at a random line (i).
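A hedged sketch of one way to do such an insertion, assuming GNU sed and a target file here called deck.txt (that file name is not given in the question and is only an illustration):

    card=$(shuf -n1 shuffle.txt)
    i=$(shuf -i1-52 -n1)
    sed -i "${i}i ${card}" deck.txt    # insert $card before line $i of deck.txt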
{ "source": [ "https://unix.stackexchange.com/questions/420519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273172/" ] }
420,655
Accidentially, I found out that wc counts differently depending on how it gets the input from bash: $ s='hello' $ wc -m <<<"$s" 6 $ wc -c <<<"$s" 6 $ printf '%s' "$s" | wc -m 5 $ printf '%s' "$s" | wc -c 5 Is this - IMHO confusing - behaviour documented somewhere? What does wc count here - is this an assumed newline?
The difference is caused by a newline added to the here string. See the Bash manual : The result is supplied as a single string, with a newline appended, to the command on its standard input (or file descriptor n if n is specified). wc is counting in the same way, but its input is different.
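To see that extra byte directly, the two inputs can be dumped with od using the same variable as above; the here-string version ends with \n and is one byte longer:

    $ s='hello'
    $ od -c <<<"$s"
    0000000   h   e   l   l   o  \n
    0000006
    $ printf '%s' "$s" | od -c
    0000000   h   e   l   l   o
    0000005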
{ "source": [ "https://unix.stackexchange.com/questions/420655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123669/" ] }
420,891
I have some long log files. I can view the last lines with tail -n 50 file.txt , but sometimes I need to edit those last lines. How do I jump straight to the end of a file when viewing it with nano ?
Open the file with nano file.txt . Now type Ctrl + _ and then Ctrl + V
{ "source": [ "https://unix.stackexchange.com/questions/420891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273498/" ] }
420,894
How to create a script that will create a new user with a blank password in Solaris 10?
{ "source": [ "https://unix.stackexchange.com/questions/420894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273500/" ] }
421,066
I'm using Kubuntu 17.10. Always after login, the notification below pops up. When I click it, it asks for my password and wants to install or remove packages – but without telling me what packages . I already searched the internet but couldn't find a way to identify what packages are needed. The standard apt upgrade is not affecting this popup. What program is causing this popup, and how can I see which packages it wants to install? Thanks!
The missing packages can be seen if installing the full language support in Terminal: sudo apt install $(check-language-support)
{ "source": [ "https://unix.stackexchange.com/questions/421066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273159/" ] }
421,354
I have a epoch time in cell H2 which has a value of 1517335200000 . I am trying to convert it to a human readable format which should return 30/01/2018 6:00:00 PM in GMT. I have tried to convert it with the formula H2/86400+25569 which I got from the OpenOffice forum. The formula returns the value 17587319 . When I change the number format in LibreOffice Calc to Date , it returns the value of 06/09/-15484 . That's not the value I want. So, how can I get the value in dd/mm/yyyy hh:mm:ss format form?
If H2 contains the number to transform (1517335200000), make H3 contain the formula:

    = H2/1000/(60*60*24) + 25569

which will return the number 43130.75. Change the format of cell H3 to date. Either:

- press Shift-Ctrl-3,
- select Format --> Number Format --> Date, or
- select Format --> Cells (a window opens) --> Numbers - Date - Format.

Change the format of the H3 cell to the required date format:

- Select Format --> Cells (a panel opens) --> Numbers - Date - Format (select one).
- Expand the width of the cell if it is not wide enough to show the desired format (hint: three # appear).

Why: Epoch time is in seconds since 1/1/1970. Calc internal time is in days since 12/30/1899. So, to get a correct result in H3, get the correct number (last formula):

    H3 = H2/(60*60*24) + ( difference from 12/30/1899 to 1/1/1970, in days )
    H3 = H2/86400 + ( DATE(1970,1,1) - DATE(1899,12,30) )
    H3 = H2/86400 + 25569

But the epoch value you are giving is too big: it has three more zeros than it should. It should be 1517335200 instead of 1517335200000; it seems to be given in milliseconds. So, divide by 1000. With that change, the formula gives:

    H3 = H2/1000/86400 + 25569 = 43130.75

Change the format of H3 to date and time (Format --> Cells --> Numbers --> Date --> Date and time) and you will see 01/30/2018 18:00:00 in H3.

Of course, since Unix epoch time is always based on UTC (+0 meridian), the result above needs to be shifted as many hours as the local time zone is distant from UTC. So, to get the local time, if the time zone is Pacific Standard Time, GMT-8, we need to add (-8) hours. The formula for H3, with the local time zone (-8) in H4, would be:

    H3 = H2/1000/86400 + 25569 + H4/24 = 43130.416666

which is presented as 01/30/2018 10:00:00 if the format of H3 is set to such a time format.
{ "source": [ "https://unix.stackexchange.com/questions/421354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199183/" ] }
421,460
Using https://regex101.com/ I built a regular expression to return the first occurrence of an IP address in a string.

RegExp: (?:\d{1,3}\.)+(?:\d{1,3})

RegExp including delimiters: /(?:\d{1,3}\.)+(?:\d{1,3})/

With the following test string:

    eu-west 140.243.64.99

It returns a full match of: 140.243.64.99. No matter what I try with anchors etc., the following bash script will not work with the regular expression generated:

    temp="eu-west 140.243.64.99 "
    regexp="(?:\d{1,3}\.)+(?:\d{1,3})"
    if [[ $temp =~ $regexp ]]; then
        echo "found a match"
    else
        echo "No IP address returned"
    fi
\d is a nonstandard way for saying "any digit". I think it comes from Perl, and a lot of other languages and utilities support Perl-compatible REs (PCRE), too. (And e.g. GNU grep 2.27 in Debian stretch supports the similar \w for word characters even in normal mode.) Bash doesn't support \d, though, so you need to explicitly use [0-9] or [[:digit:]]. Same for the non-capturing group (?:..), use just (..) instead.

This should print match:

    temp="eu-west 140.243.64.99 "
    regexp="([0-9]{1,3}\.)+([0-9]{1,3})"
    [[ $temp =~ $regexp ]] && echo match
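If the matched IP address itself is needed (not just a yes/no test), the text matched by =~ is available in the BASH_REMATCH array, so the original script from the question could be adapted like this:

    temp="eu-west 140.243.64.99 "
    regexp="([0-9]{1,3}\.)+([0-9]{1,3})"
    if [[ $temp =~ $regexp ]]; then
        echo "found a match: ${BASH_REMATCH[0]}"   # prints 140.243.64.99
    else
        echo "No IP address returned"
    fi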
{ "source": [ "https://unix.stackexchange.com/questions/421460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273907/" ] }
421,491
As I understand it, the hosts file is one of several system facilities that assists in addressing network nodes in a computer network. But what should be inside it? When I install Ubuntu by default 127.0.0.1 localhost will be there. Why? How does /etc/hosts work in case of JVM systems like Cassandra? When is DNS alternative, I guess not on a single computer?
The file /etc/hosts started in the old days of DARPA as the resolution file for all the hosts connected to the internet (before DNS existed). It has the maximum priority, meaning this file is preferred ahead of any other name system.¹ However, as a single file, it doesn't scale well: the size of the file becomes too big very soon. That is why the DNS system was developed, a hierarchical distributed name system. It allows any host to find the numerical address of some other host efficiently.

The very old concept of the /etc/hosts file is very simple, just an address and a host name:

    127.0.0.1    localhost

for each line. That is a simple list of pairs of address-host.²

Its primary present-day use is to bypass DNS resolution. A match found in the /etc/hosts file will be used before any DNS entry. In fact, if the name searched (like localhost) is found in the file, no DNS resolution will be performed at all.

¹ Well, the order of name resolution is actually defined in /etc/nsswitch.conf, which usually has this entry:

    hosts: files dns

which means "try files (/etc/hosts); and if it fails, try DNS." But that order could be changed or expanded.

² (in present days) The hosts file contains lines of text consisting of an IP address in the first text field followed by one or more host names. Each field is separated by white space; tabs are often preferred for historical reasons, but spaces are also used. Comment lines may be included; they are indicated by an octothorpe (#) in the first position of such lines. Entirely blank lines in the file are ignored. For example, a typical hosts file may contain the following:

    127.0.0.1    localhost loopback
    ::1          localhost localhost6 ipv6-localhost ipv6-loopback mycomputer.local
    192.168.0.8  mycomputer.lan
    10.0.0.27    mycomputer.lan

This example contains entries for the loopback addresses of the system and their host names; the first line is a typical default content of the hosts file. The second line has several additional (probably only valid in local systems) names. The example illustrates that an IP address may have multiple host names (localhost and loopback), and that a host name may be mapped to both IPv4 and IPv6 IP addresses, as shown on the first and second lines respectively. One name (mycomputer.lan) may resolve to several addresses (192.168.0.8, 10.0.0.27). However, in that case, which one is used depends on the routes (and their priorities) set for the computer. Some older OSes had no way to report a list of addresses for a given name.
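To see which source actually answers a lookup under the hosts: files dns order, getent follows the same nsswitch.conf configuration; a quick illustration (example.com is just an arbitrary public name, and the exact output depends on the local /etc/hosts):

    getent hosts localhost     # answered from /etc/hosts (the "files" source)
    getent hosts example.com   # no /etc/hosts entry, so this falls through to DNS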
{ "source": [ "https://unix.stackexchange.com/questions/421491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143955/" ] }
421,750
I see a lot of people online referencing arch/x86/entry/syscalls/syscall_64.tbl for the syscall table, that works fine. But a lot of others reference /include/uapi/asm-generic/unistd.h which is commonly found in the headers package. How come syscall_64.tbl shows, 0 common read sys_read The right answer, and unistd.h shows, #define __NR_io_setup 0 __SC_COMP(__NR_io_setup, sys_io_setup, compat_sys_io_setup) And then it shows __NR_read as #define __NR_read 63 __SYSCALL(__NR_read, sys_read) Why is that 63, and not 1? How do I make sense of out of /include/uapi/asm-generic/unistd.h ? Still in /usr/include/asm/ there is /usr/include/asm/unistd_x32.h #define __NR_read (__X32_SYSCALL_BIT + 0) #define __NR_write (__X32_SYSCALL_BIT + 1) #define __NR_open (__X32_SYSCALL_BIT + 2) #define __NR_close (__X32_SYSCALL_BIT + 3) #define __NR_stat (__X32_SYSCALL_BIT + 4) /usr/include/asm/unistd_64.h #define __NR_read 0 #define __NR_write 1 #define __NR_open 2 #define __NR_close 3 #define __NR_stat 4 /usr/include/asm/unistd_32.h #define __NR_restart_syscall 0 #define __NR_exit 1 #define __NR_fork 2 #define __NR_read 3 #define __NR_write 4 Could someone tell me the difference between these unistd files. Explain how unistd.h works? And what the best method for finding the syscall table?
When I’m investigating this kind of thing, I find it useful to ask the compiler directly (see Printing out standard C/GCC predefined macros in terminal for details):

    printf SYS_read | gcc -include sys/syscall.h -E -

This shows that the headers involved (on Debian) are /usr/include/x86_64-linux-gnu/sys/syscall.h, /usr/include/x86_64-linux-gnu/asm/unistd.h, /usr/include/x86_64-linux-gnu/asm/unistd_64.h, and /usr/include/x86_64-linux-gnu/bits/syscall.h, and prints the system call number for read, which is 0 on x86-64. You can find the system call numbers for other architectures if you have the appropriate system headers installed (in a cross-compiler environment). For 32-bit x86 it’s quite easy:

    printf SYS_read | gcc -include sys/syscall.h -m32 -E -

which involves /usr/include/asm/unistd_32.h among other header files, and prints the number 3.

So from the userspace perspective, 32-bit x86 system calls are defined in asm/unistd_32.h, 64-bit x86 system calls in asm/unistd_64.h. asm/unistd_x32.h is used for the x32 ABI. uapi/asm-generic/unistd.h lists the default system calls, which are used on architectures which don’t have an architecture-specific system call table.

In the kernel the references are slightly different, and are architecture-specific (again, for architectures which don’t use the generic system call table). This is where files such as arch/x86/entry/syscalls/syscall_64.tbl come in (and they ultimately end up producing the header files which are used in user space, unistd_64.h etc.).

You’ll find a lot more detail about system calls in the pair of LWN articles on the topic, Anatomy of a system call part 1 and Anatomy of a system call part 2.
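The same preprocessor trick can resolve several call numbers at once; a small sketch (the numbers shown are what the x86-64 table assigns to these calls):

    $ printf 'SYS_read SYS_write SYS_openat' | gcc -include sys/syscall.h -E - | tail -n 1
    0 1 257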
{ "source": [ "https://unix.stackexchange.com/questions/421750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
421,778
The page http://lynx.isc.org/ is not loading. Is https://lynx.invisible-island.net/ now the official homepage for "lynx", the text-based web browser?
The homepage for Lynx has moved more than once, as discussed in this development page : However, things change. Paul Vixie left ISC in mid-2013 to form a new company. At the time, that did not affect Lynx—from ISC's standpoint Lynx was just a box in a rack of servers. For the last four years of Lynx's stay at ISC, I did all of the software maintenance for the project. Still, a box in a rack costs money for electricity. Late in 2015, ISC shifted away from this style of project support, to reduce costs. I expanded my website to incorporate Lynx (roughly doubling the size of the site). Old site: http://lynx.isc.org/ ftp://lynx.isc.org/ New site: https://lynx.invisible-island.net/ ftp://ftp.invisible-island.net/lynx/ This new site is still the current homepage as of February 2018.
{ "source": [ "https://unix.stackexchange.com/questions/421778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261470/" ] }
421,821
I cannot update my Kali Linux, when trying to execute apt-get update I get this error message: # apt-get update Get:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease [30.5 kB] Err:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> Reading package lists... Done W: GPG error: http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> E: The repository 'http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease' is not signed. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. If you need my kernel version: # uname -a 4.13.0-kali1-amd64 #1 SMP Debian 4.13.10-1kali2 (2017-11-08) x86_64 GNU/Linux How can I fix this?
Add the gpg key:

    gpg --keyserver keyserver.ubuntu.com --recv-key 7D8D0BF6

Check the fingerprint:

    gpg --fingerprint 7D8D0BF6

Sample output:

    pub   rsa4096 2012-03-05 [SC] [expires: 2021-02-03]
          44C6 513A 8E4F B3D3 0875  F758 ED44 4FF0 7D8D 0BF6
    uid           [ unknown] Kali Linux Repository <[email protected]>
    sub   rsa4096 2012-03-05 [E] [expires: 2021-02-03]

Then:

    gpg -a --export 7D8D0BF6 | apt-key add -
    apt update

Debian: SecureApt

Update (8 Feb 2018), answer from the official documentation:

Note that if you haven't updated your Kali installation in some time (tsk2), you will likely receive a GPG error about the repository key being expired (ED444FF07D8D0BF6). Fortunately, this issue is quickly resolved by running the following as root:

    wget -q -O - https://archive.kali.org/archive-key.asc | apt-key add

Kali docs: how to deal with APT complaining about Kali's expired key

The easiest solution is to retrieve the latest key and store it in a place where apt will find it:

    sudo wget https://archive.kali.org/archive-key.asc -O /etc/apt/trusted.gpg.d/kali-archive-keyring.asc
{ "source": [ "https://unix.stackexchange.com/questions/421821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274166/" ] }
421,985
I'm getting an invalid signature error when I try to apt-get update : Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease Hit:2 http://dl.google.com/linux/chrome/deb stable Release Hit:4 https://download.sublimetext.com apt/dev/ InRelease Hit:5 http://deb.i2p2.no unstable InRelease Get:6 http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease [30.5 kB] Err:6 http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> Reading package lists... Done W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> W: Some index files failed to download. They have been ignored, or old ones used instead. Why is this happening? How can I fix it?
Per https://twitter.com/kalilinux/status/959515084157538304, your archive-keyring package is outdated. You need to do this (as root):

    wget -q -O - https://archive.kali.org/archive-key.asc | apt-key add
{ "source": [ "https://unix.stackexchange.com/questions/421985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274283/" ] }
422,005
I have this cat-then-sed operation:

    cat ${location_x}/file_1 > ${location_y}/file2
    sed -i "s/0/1/g" /${location_y}/file2

Can this be done in a single line? I might have missed such a way here, but that explanation seems to me to deal with the opposite of that, and I fail to unite the above cat and sed into one operation without a double ampersand (&&) or a semicolon (;), so I'm getting closer to assuming it's not possible, but it's important for me to ask here because I might be wrong. Not all readers are English speakers familiar with the terms "ampersand" or "semicolon", so I elaborated on these.
Yes:

    sed 's/0/1/g' "$location_x/file_1" >"$location_y/file2"

Your code first makes a copy of the first file and then changes the copy using inline editing with sed -i. The code above reads from the original file, does the changes, and writes the result to the new file. There is no need for cat here.

If you're using GNU sed and the $location_x could contain a leading -, you will need to make sure that the path is not interpreted as a command line flag:

    sed -e 's/0/1/g' -- "$location_x/file_1" >"$location_y/file2"

The double dash marks the end of command line options. The alternative is to use < to redirect the contents of the file into sed. Other sed implementations (BSD sed) stop parsing the command line for options at the first non-option argument, whereas GNU sed (like some other GNU software) rearranges the command line in its parsing of it.

For this particular editing operation (changing all zeros to ones), the following will also work:

    tr '0' '1' <"$location_x/file_1" >"$location_y/file2"

Note that tr always reads from standard input, hence the < to redirect the input from the original file.

I notice that on your sed command line, you try to access /${location_y}/file2, which is different from the path on the line above it (the / at the start of the path). This may be a typo.
{ "source": [ "https://unix.stackexchange.com/questions/422005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
422,111
In what file are the keyboard shortcuts saved in Linux Mint Cinnamon 18 ? I want to backup the shortcuts so if I know the file where the shortcuts are saved, I can simply create a symlink to the shortcut file after reinstalling the OS.
You can utilize the following to export your keyboard shortcuts to a file: $ dconf dump /org/cinnamon/desktop/keybindings/ > dconf-settings.conf This requires the dconf-cli package to be installed. Then, to import the file after making any desired keybinding changes: $ dconf load /org/cinnamon/desktop/keybindings/ < dconf-settings.conf
{ "source": [ "https://unix.stackexchange.com/questions/422111", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274388/" ] }
422,183
When looking at Unix, I always find the number of terminal commands to be a little overwhelming. TinyCoreLinux, by example my favorite distribution, has over 300 commands. I can't tell how necessary a lot of those commands are. How many commands did the original Unix box have? I'm essentially hoping that, by going to the original box, we can dwindle down the number of commands to newcomers. Yes, I understand you don't have to learn all the commands, but I know I definitely feel a sense of completion when I have learned all the commands for a distribution (which hasn't exactly happened yet).
The first edition of Unix had 60-odd commands, as documented in the manual (also available as a web site):

    ar       ed      rkl
    as       find    rm
    /usr/b/rc (the B compiler)   for     rmdir
    bas      form    roff
    bcd      hup     sdate
    boot     lbppt   sh
    cat      ld      stat
    chdir    ln      strip
    check    ls      su
    chmod    mail    sum
    chown    mesg    tap
    cmp      mkdir   tm
    cp       mkfs    tty
    date     mount   type
    db       mv      umount
    dbppt    nm      un
    dc       od      wc
    df       pr      who
    dsw      rew     write
    dtf      rkd
    du       rkf

There were a few more commands, such as /etc/glob, which were documented in another command's manual page (sh in /etc/glob's case); but the list above gives a good idea. Many of these have survived and are still relevant; others have gone the way of the dodo (thankfully, in dsw's case!).

It's easy enough to read all the Unix V1 manual; I'm not sure it's worth doing anything like that for a modern distribution. The POSIX specification itself is now over 3,000 pages, and that "only" documents a common core, with 160 commands (many of which are optional) and a few shell built-ins; modern distributions contain thousands of commands, which no single person can learn exhaustively. The last full system manual I read cover to cover was the Coherent manual...

If you want to experience V1 Unix, check out Jim Huang's V1 repository: you'll find source code, documentation and instructions to build and run a V1-2 hybrid using SIMH's PDP-11 simulation. (Thanks to Guy for the suggestion.) Warren Toomey's PDP-7 Unix repository is also interesting. (Thanks as always to Stéphane for his multiple suggestions.)
{ "source": [ "https://unix.stackexchange.com/questions/422183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4930/" ] }
422,198
Some file copying programs like rsync and curl have the ability to resume failed transfers/copies. Noting that there can be many causes of these failures, in some cases the program can do "cleanup" some cases the program can't. When these programs resume, they seem to just calculate the size of the file/data that was transferred successfully and just start reading the next byte from the source and appending on to the file fragment. e.g the size of the file fragment that "made it" to the destination is 1378 bytes, so they just start reading from byte 1379 on the original and adding to the fragment. My question is, knowing that bytes are made up of bits and not all files have their data segmented in clean byte sized chunks, how do these programs know they the point they have chosen to start adding data to is correct? When writing the destination file is some kind of buffering or "transactions" similar to SQL databases occurring, either at the program, kernel or filesystem level to ensure that only clean, well formed bytes make it to the underlying block device? Or do the programs assume the latest byte would be potentially incomplete, so they delete it on the assumption its bad, recopy the byte and start the appending from there? knowing that not all data is represented as bytes, these guesses seem incorrect. When these programs "resume" how do they know they are starting at the right place?
For clarity's sake - the real mechanics is more complicated to give even better security - you can imagine the write-to-disk operation like this:

- the application writes bytes (1)
- the kernel (and/or the file system IOSS) buffers them
- once the buffer is full, it gets flushed to the file system:
  - the block is allocated (2)
  - the block is written (3)
  - the file and block information is updated (4)

If the process gets interrupted at (1), you don't get anything on the disk, the file is intact and truncated at the previous block. You sent 5000 bytes, only 4096 are on the disk, you restart transfer at offset 4096.

If at (2), nothing happens except in memory. Same as (1).

If at (3), the data is written but nobody remembers about it. You sent 9000 bytes, 4096 got written, 4096 got written and lost, the rest just got lost. Transfer resumes at offset 4096.

If at (4), the data should now have been committed on disk. The next bytes in the stream may be lost. You sent 9000 bytes, 8192 get written, the rest is lost, transfer resumes at offset 8192.

This is a simplified take. For example, each "logical" write in stages 3-4 is not "atomic", but gives rise to another sequence (let's number it #5) whereby the block, subdivided into sub-blocks suitable for the destination device (e.g. hard disk), is sent to the device's host controller, which also has a caching mechanism, and finally stored on the magnetic platter. This sub-sequence is not always completely under the system's control, so having sent data to the hard disk is not a guarantee that it has been actually written and will be readable back.

Several file systems implement journaling, to make sure that the most vulnerable point, (4), is not actually vulnerable, by writing meta-data in, you guessed it, transactions that will work consistently whatever happens in stage (5). If the system gets reset in the middle of a transaction, it can resume its way to the nearest intact checkpoint. Data written is still lost, same as case (1), but resumption will take care of that. No information actually gets lost.
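As for how a resuming copier picks its restart point: in the simplest case it just takes the current size of the partial destination file as the byte offset and continues from there. A rough sketch of that idea in shell (the /src and /dst paths are placeholders; real tools such as rsync or curl -C - do this far more robustly and efficiently):

    size=$(stat -c %s /dst/partial)               # bytes that already made it to disk
    dd if=/src/original of=/dst/partial bs=1 \
       skip="$size" seek="$size" conv=notrunc     # continue copying from that byte offset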
{ "source": [ "https://unix.stackexchange.com/questions/422198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
422,207
Background: I need to clone ext4 partitions from an eMMC using uboot (and if neccessary custom bare metal code). I copied the whole thing using mmc read and found that most of the partition is empty, but there are some blocks of data like inode tables spread across the partition. This would mean I need to copy the whole partition (which is too slow, I need to do this a lot) or identify what parts of the partition are relevant. Most similar Q&A to this problem suggest to use dd creating a sparse image or piping to gzip , but I have no operating system running, so I need to understand the file system layout. Can I use those bitmap blocks to identify what is used and what is free? Documentation of ext4 seems to refer the linux kernel code as soon as it comes to details. Preferably I'd do it with uboot code, but I could as well write some bare metal code I can execute from uboot. One more border condition: The targets to where the partition gets clone are not empty, so if there are blocks of only zeros on the origin, which are required to be zero, I need to overwrite those blocks with zeros on the target.
{ "source": [ "https://unix.stackexchange.com/questions/422207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216004/" ] }
422,213
I'm looking for a way, to simply print the last X lines from a systemctl service in Debian. I would like to install this code into a script, which uses the printed and latest log entries. I've found this post but I wasn't able to modify it for my purposes. Currently I'm using this code, which is just giving me a small snippet of the log files: journalctl --unit=my.service --since "1 hour ago" -p err To give an example of what the result should look like, simply type in the command above for any service and scroll until the end of the log. Then copy the last 300 lines starting from the bottom. My idea is to use egrep ex. egrep -m 700 . but I had no luck since now.
journalctl --unit=my.service -n 100 --no-pager
{ "source": [ "https://unix.stackexchange.com/questions/422213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157946/" ] }
422,392
I need to delete all folders inside a folder using a daily script. The folder for that day needs to be left. Folder 'myfolder' has 3 sub folder: 'test1', 'test2' and 'test3' I need to delete all except 'test2'. I am trying to match exact name here: find /home/myfolder -type d ! -name 'test2' | xargs rm -rf OR find /home/myfolder -type d ! -name 'test2' -delete This command always tries to delete the main folder 'myfolder' also ! Is there a way to avoid this ?
This will delete all folders inside ./myfolder except that ./myfolder/test2 and all its contents will be preserved:

    find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?' -delete

How it works

- find starts a find command.
- ./myfolder tells find to start with the directory ./myfolder and its contents.
- -mindepth 1 tells it not to match ./myfolder itself, just the files and directories under it.
- ! -regex '^./myfolder/test2\(/.*\)?' tells find to exclude (!) any file or directory matching the regular expression ^./myfolder/test2\(/.*\)?. ^ matches the start of the path name. The expression \(/.*\)? matches either (a) a slash followed by anything or (b) nothing at all.
- -delete tells find to delete the matching (that is, non-excluded) files.

Example

Consider a directory structure that looks like:

    $ find ./myfolder
    ./myfolder
    ./myfolder/test1
    ./myfolder/test1/dir1
    ./myfolder/test1/dir1/test2
    ./myfolder/test1/dir1/test2/file4
    ./myfolder/test1/file1
    ./myfolder/test3
    ./myfolder/test3/file3
    ./myfolder/test2
    ./myfolder/test2/file2
    ./myfolder/test2/dir2

We can run the find command (without -delete) to see what it matches:

    $ find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?'
    ./myfolder/test1
    ./myfolder/test1/dir1
    ./myfolder/test1/dir1/test2
    ./myfolder/test1/dir1/test2/file4
    ./myfolder/test1/file1
    ./myfolder/test3
    ./myfolder/test3/file3

We can verify that this worked by looking at the files which remain:

    $ find ./myfolder
    ./myfolder
    ./myfolder/test2
    ./myfolder/test2/file2
    ./myfolder/test2/dir2
{ "source": [ "https://unix.stackexchange.com/questions/422392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271206/" ] }
422,405
I want an alias on my local machine that will ssh to the target system, execute sudo using the password stored on my local machine, but leave the shell open on the remote system. The reason is I'm lazy and every time I log into a server I don't want to type my very long password. I'm aware that this is not the safest of practices. Right now it works if I do the following: ssh -q -t $1 "echo $mypword|base64 -d|sudo -S ls; bash -l" $1 being the host name of the remote system. mypword is my encoded password stored on my local system. This works and leaves my shell open. I can then do anything with sudo because it is now cached for that shell. The problem I have is if you do a ps and grep for my account you will see the encoded string containing the password in the process list. Can't have that. Is there a way to accomplish this without having the password showing in the process list? I have tried: echo $mypword|ssh -q -t $1 "base64 -d|sudo -S ls -l /root;bash -l" The ls goes off but the shell does not remain open.
{ "source": [ "https://unix.stackexchange.com/questions/422405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179761/" ] }
423,012
I roughly know about the files located under /dev. I know there are two types (character/block), accessing these files communicates with a driver in the kernel. I want to know what happens if I delete one -- specifically for both types of file. If I delete a block device file, say /dev/sda , what effect -- if any -- does this have? Have I just unmounted the disk? Similarly, what if I delete /dev/mouse/mouse0 -- what happens? Does the mouse stop working? Does it automatically replace itself? Can I even delete these files? If I had a VM set up, I'd try it.
Those are simply (special) files. They only serve as "pointers" to the actual device. (i.e. the driver module inside the kernel.) If some command/service already opened that file, it already has a handle to the device and will continue working. If some command/service tries to open a new connection, it will try to access that file and fail because of "file not found". Usually those files are populated by udev , which automatically creates them at system startup and on special events like plugging in a USB device, but you could also manually create those using mknod .
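If a node was deleted by accident and udev does not recreate it, it can be recreated by hand (as root) once the device's major/minor numbers are known. A small sketch using the conventional numbers for /dev/null and the first SCSI/SATA disk:

    mknod /dev/null c 1 3 && chmod 666 /dev/null   # character device, major 1, minor 3
    mknod /dev/sda  b 8 0                          # block device, major 8, minor 0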
{ "source": [ "https://unix.stackexchange.com/questions/423012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274927/" ] }
423,121
I know that macOS is a UNIX operating system , but I don't know whether macOS could be called a UNIX distribution in the same way Gentoo or Debian are GNU/Linux distributions . Is macOS a UNIX distribution? If it isn't, how could one correctly refer to macOS' membership in the UNIX operating system family and compliance to Single UNIX Specification (i.e., is it a Unix variant , a Unix version , a Unix flavor , etc.)? Also, this question applies to Solaris, HP-UX and other unices (are they all UNIX distributions?). Furthermore, is the word "distribution" restricted to GNU(/Linux, /Hurd, /kFreeBSD, /etc) operating systems, or may it be used in other cases? EDIT: I've realized that the UNIX' official website uses "UNIX implementations" and "UNIX operating systems" for referring to the family of Unix operating systems, i.e., the ones which implement the Single Unix Standard.
What is UNIX at all ? Short answer: UNIX is a specification/standard nowadays. At the time of writing, to quote the official sources , "UNIX® is a registered trademark of The Open Group", the company which among many things provides UNIX certification : "UNIX®, an open standard owned and managed by The Open Group, is an enabler of key technologies and delivers reduced total cost of ownership, increased IT agility, stability, and interoperability in hetero¬geneous environments enabling business and market innovation across the globe." The same page specifically states which specification defines UNIX: The latest version of the certification standard is UNIX V7, aligned with the Single UNIX Specification Version 4, 2013 Edition Details of those specs can be found here . Curiously enough the latest standard listed on their website is UNIX 03, and to quote another source , "UNIX® 03 - the mark for systems conforming to version 3 of the Single UNIX Specification". To quote the About Us page with my own emphasis in bold: The success of the UNIX approach led to a large number of “look-alike” operating systems, often divergent in compatibility and interoperability. To address this, vendors and users joined together in the 1980s to create the POSIX® standard and later the Single UNIX Specification . So what this suggests (or at least so is my interpretation), is that when an OS conforms to the POSIX standard and Single UNIX Specifications, it is compatible in behavior with Unix as an OS that once existed at one point in time in history. Please note that this does not mention the presence of any traces of the original Unix source code, nor does it mention the kernel in any way (this will become important later). As for the AT&T and System V Unix developed by Ritchie and Thompson, nowadays we can say it has ceased to exist. Based on the above sources, it seems UNIX nowadays is not that specific OS, but rather a standard derived out of the best possible generalization for how operating systems in Unix family behave. Where does macOS X stand in the *nix world ? In a very specific definition, macOS version 10.13 High Sierra on Intel-based hardware is compliant to the UNIX 03 standard and to quote the pdf certificate , "Apple Inc. has entered into a Trademark License Agreement with X/Open Company Limited." Side note: I hesitate to question what it would means for macOS 10.13 on non-Intel hardware to be treated as, but considering that hardware is mentioned for other OS, the hardware is significant. Example: "Hewlett Packard Enterprise: HP-UX 11i V3 Release B.11.31 or later on HP 9000 Servers with Precision Architecture" (from the register page ). Let’s return to previous section of my answer. Since this particular version of OS conforms to interoperability and compatibility standard, it means the OS is as close in behavior and system implementation as possible to original Unix as an Operating System. At the very least, it will be close in behavior and in environment. The closer it gets to system level and kernel level, the more specific and shadier the area will get, but at least fundamental mechanics and behavior that were present in Unix should be present in an OS that aims to be compatible. macOS X should be very close to that aim. What is a distribution ? To quote Wikipedia : A Linux distribution (often abbreviated as distro) is an operating system made from a software collection, which is based upon the Linux kernel and, often, a package management system. 
Let's remember for a second that Linux as in the Linux Kernel is supposed to be distributable software, with modifications, or at least in accordance with GPL v2 . If we consider a package manager and kernel, Ubuntu and Red Hat being distributions makes sense. macOS X has a different kernel than the original AT&T Unix - therefore calling macOS X a Unix distribution doesn't make sense. People suggest that macOS X kernel is based on FreeBSD, but to quote FreeBSD Wiki : The XNU kernel used on OS X includes a few subsystems from (older versions of) FreeBSD, but is mostly an independent implementation Some people mistakenly call the OS X kernel Darwin. To quote Apple's Kernel Programming Guide : The kernel, along with other core parts of OS X are collectively referred to as Darwin. Darwin is a complete operating system based on many of the same technologies that underlie OS X. And to quote the same page: Darwin technology is based on BSD, Mach 3.0, and Apple technologies. Based on everything above we can confidently say, OS X is not a distribution , in the sense of Linux distribution. Similarly, other mentioned OSs are POSIX compliant and are certified Unix systems, but again they differ in kernels and variations on underlying system calls (which is why there exist books on Solaris system programming and it's a worthy subject in its own right). Therefore, they aren't distributions in the sense Linux distributions are - a common core with variations on utilities. In case of Linux, you see books on Linux system programming or Linux kernel programming, not system programming specific to distribution, because there's nothing system-specific about a particular distribution. Confirmation of what we see here can be found in official documentation. For instance, article on developerWorks by IBM which addressed difference between UNIX OS types and Linux distributions states (emphasis added): Most modern UNIX variants known today are licensed versions of one of the original UNIX editions . Sun's Solaris, Hewlett-Packard's HP-UX, and IBM's AIX® are all flavors of UNIX that have their own unique elements and foundations . In other words, they are based on the same foundation, but they don't share exactly same one in the sense Linux distros share the kernel. Considerations Note that the word distribution appears to be mostly used when referencing operating systems which have the Linux kernel at its core. Take for instance the BSD type of Operating Systems: there's GhostBSD , which is based on the kernel and uses some of the utilities of FreeBSD , but I've never seen it to be referred to as a BSD distribution; every BSD OS only mentions what it is based on and usually an operating system is mentioned as an OS in its own right. Sure, BSD stands for Berkeley Software Distribution, but...that's it. To quote this answer on our site in response to the question whether different BSD versions use same kernels: No, although there are similarities due to the historic forks. Each project evolved separately. They are not distributions in the sense of Linux distributions. Consider the copyright notice from this document : Portions of this product may be derived from the UNIX® and Berkeley 4.3 BSD systems Notes the before mentioned POSIX standard is also referenced as IEEE standard (where IEEE is Institute of Electrical and Electronics Engineers, which handles among other things IT types of things). 
to quote Wikipedia : "In 2016, with the release of macOS 10.12 Sierra, the name was changed from OS X to macOS to streamline it with the branding of Apple's other primary operating systems: iOS, watchOS, and tvOS.[56]" Mac OS X history answer conceptual difference between Linux and BSD kernel In conclusion: macOS X can be referred to as either Unix-like OS, Unix-like system, Unix implementation, POSIX compliant-OS when you want to relate it to the original AT&T Unix; "Unix version" wouldn't be the appropriate term because macOS X is vastly different from the original AT&T Unix, and as mentioned before there's no more Unix in the sense of software, and it is now a more of an industry standard; Probably the word "distribution" fits only within the Linux world. The true problem is that you (the reader) and I have way too much time to argue about the topic which lawyers should be arguing about. Maybe we should be like Linux Torvalds and use terminology and OSs that just allows us to move on with the life and do the things we honestly care about and are supposed to care about.
{ "source": [ "https://unix.stackexchange.com/questions/423121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233964/" ] }
423,282
I use Bash 4.3.48(1) and I have a .sh file containing about 20 variables right under the shebang. The file contains only variables. This is the pattern: x="1" y="2" ... I need to export all these variables in a DRY way: For example, one export to all vars, instead say 20 export for 20 vars. What's the most elegant way (shortest, most efficient by all means) to do that inside that file? A for loop? An array ? maybe something simpler than these (some kind of collection sugar syntax)?
Use set -a (or the equivalent set -o allexport ) at the top of your file to enable the allexport shell option. Then use set +a (or set +o allexport ) at the end of the file (wherever appropriate) to disable the allexport shell option. Using enabling the allexport shell option will have the following effect (from the bash manual): Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands. This shell option, set with either set -a or set -o allexport , is defined in POSIX (and should therefore be available in all sh -like shells) as When this option is on, the export attribute shall be set for each variable to which an assignment is performed; [...] If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset. The variables set while this option is enabled will be exported, i.e. made into environment variables. These environment variables will be available to the current shell environment and to any subsequently created child process environments, as per usual. This means that you will have to either source this file (using . , or source in shells that have this command), or start the processes that should have access to the variables from this file.
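A minimal sketch of how the variables file and a consumer could look (the file names here are just examples, not from the question):

    # vars.sh
    set -a
    x="1"
    y="2"
    set +a

    # consumer.sh
    . ./vars.sh
    printenv x y     # both names are now environment variables, so this prints 1 and 2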
{ "source": [ "https://unix.stackexchange.com/questions/423282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
423,301
I've been using the default configuration of vim for a while and want to make a few changes. However, if I edit ~/.vimrc it seems to overwrite all other configuration settings of /etc/vimrc and such, e.g. now there is no syntax highlighting. Here is what vim loads: :scriptnames /etc/vimrc /usr/share/vim/vimfiles/archlinux.vim ~/.vimrc /usr/share/vim/vim80/plugin/... <there are a few> In other words I want to keep whatever there is configured in vim, but simply make minor adjustments for my shell user. What do I need to do to somehow weave ~/.vimrc into the existing configuration or what do I need to put into ~/.vimrc so it loads the default configuration? EDIT: My intended content of ~/.vimrc : set expandtab set shiftwidth=2 set softtabstop=2
You can source the global Vim configuration file into your local ~/.vimrc : unlet! skip_defaults_vim source $VIMRUNTIME/defaults.vim set mouse-=a See :help defaults.vim and :help defaults.vim-explained for details.
{ "source": [ "https://unix.stackexchange.com/questions/423301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153505/" ] }
423,854
I use Ubuntu 16.04 with the native Bash on it. I'm not sure if executing #!/bin/bash myFunc() { export myVar="myVal" } myFunc equals in any sense, to just executing export myVar="myVal" . Of course, a global variable should usually be declared outside of a function (a matter of convention I assume, even if technically possible) but I do wonder about the more exotic cases where one writes some very general function and wants a variable inside it to still be available to everything, anywhere. Would export of a variable inside a function, be identical to exporting it globally, directly in the CLI, making it available to everything in the shell (all subshells, and functions inside them)?
Your script creates an environment variable, myVar , in the environment of the script. The script, as it is currently presented, is functionally exactly equivalent to #!/bin/bash export myVar="myVal" The fact that the export happens in the function body is not relevant to the scope of the environment variable (in this case). It will start to exist as soon as the function is called. The variable will be available in the script's environment and in the environment of any other process started from the script after the function call. The variable will not exist in the parent process' environment (the interactive shell that you run the script from), unless the script is sourced (with . or source ) in which case the whole script will be executing in the interactive shell's environment (which is the purpose of "sourcing" a shell file). Without the function call itself: myFunc() { export myVar="myVal" } Sourcing this file would place myFunc in the environment of the calling shell. Calling the function would then create the environment variable. See also the question What scopes can shell variables have?
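A small self-contained demonstration of that behaviour (the script name demo.sh is made up for the example):
#!/bin/bash
myFunc() { export myVar="myVal"; }
echo "before the call: '${myVar-unset}'"
myFunc
echo "after the call: '$myVar'"
bash -c 'echo "a child process sees: $myVar"'
Running bash demo.sh should show the variable unset before the function call, set after it, and visible in the child shell; but echo "$myVar" in your interactive shell afterwards will still print nothing, unless you sourced the script instead of executing it.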
{ "source": [ "https://unix.stackexchange.com/questions/423854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
423,894
What *nix command would cause the hard drive arm to rapidly switch between the centre and the edge of the platter? In theory it should soon cause a mechanical failure. It is for an experiment with old hard drives.
hdparm --read-sector N will issue a low-level read of sector N bypassing the block layer abstraction. Use -I to get the device's number of sectors.
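A rough sketch of the kind of loop you could build around it, assuming /dev/sdX is a disk you are willing to destroy and that your hdparm supports --read-sector (run as root; the device name and sector count below are placeholders you must fill in yourself):
#!/bin/bash
dev=/dev/sdX       # hypothetical device name, change it
last=976773167     # set to (number of sectors - 1) as reported by: hdparm -I "$dev"
while true; do
    hdparm --read-sector 0 "$dev" >/dev/null
    hdparm --read-sector "$last" "$dev" >/dev/null
done
Alternating reads between the first and last LBA forces the arm to seek across the whole platter on every iteration.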
{ "source": [ "https://unix.stackexchange.com/questions/423894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
424,130
This is my Git repository: https://github.com/benqzq/ulcwe It has a dir named local and I want to change its name to another name (say, from local to xyz ). Changing it through GitHub GUI manually is a nightmare as I have to change the directory name for each file separately (GitHub has yet to include a "Directory rename" functionality, believe it or not). After installing Git, I tried this command: git remote https://github.com/benqzq/ulcwe && git mv local xyz && exit While I didn't get any prompt for my GitHub password, I did get this error: fatal: Not a git repository (or any parent up to mount point /mnt/c) Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). I know the whole point in Git is to download a project, change, test, and then push to the hosting provider (GitHub in this case), but to just change a directory, I desire a direct operation. Is it even possible with Git? Should I use another program maybe?
The fatal error message indicates you’re working from somewhere that’s not a clone of your git repository. So let’s start by cloning the git repository first: git clone https://github.com/benqzq/ulcwe.git Then enter it: cd ulcwe and rename the directory: git mv local xyz For the change to be shareable, you need to commit it: git commit -m "Rename local to xyz" Now you can push it to your remote git repository: git push and you’ll see the change in the GitHub interface.
{ "source": [ "https://unix.stackexchange.com/questions/424130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
424,142
Some days ago I downloaded a .deb file that does not have a descriptive name and I want to know which version it is before executing dpkg -i . I do not know if the same package also comes in a repository, so I am looking to extract this information from the actual file, rather than querying the repository's database.
To get lots of information about the package use -I or --info : dpkg-deb -I package.deb dpkg-deb --info package.deb To only get the version use, -f or --field : dpkg-deb -f package.deb Version dpkg-deb --field package.deb Version
{ "source": [ "https://unix.stackexchange.com/questions/424142", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275973/" ] }
424,183
I want to set up an alias in my config file that has the same result as this command: ssh -N devdb -L 1234:127.0.0.1:1234 My .ssh/config entry for devdb : Host devdb User someuser HostName the_hostname LocalForward 1234 127.0.0.1:1234 What do I put in the above config to not start a shell?
So in ssh.c for OpenSSH 7.6p1 we find case 'N': no_shell_flag = 1; options.request_tty = REQUEST_TTY_NO; so -N does two things: the no_shell_flag only appears in ssh.c and is only enabled for the -W or -N options, otherwise it appears in some logic blocks related to ControlPersist and sanity checking involving background forks. I do not see a way an option could directly set it. according to readconf.c the request_tty corresponds to the RequestTTY option detailed in ssh_config(5) . This leaves (apart from monkey patching OpenSSH and recompiling, or asking for a ssh_config option to toggle no_shell_flag with...) something like: Host devdb User someuser HostName the_hostname LocalForward 1234 127.0.0.1:1234 RequestTTY no RemoteCommand cat Which technically does start a shell but that shell should immediately replace itself with the cat program which should then block allowing the port forward to be used meanwhile. cat is portable, but will consume input (if there is any) or could fail (if standard input is closed). Another option would be to run something that just blocks .
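If the remote host's sleep understands an infinite duration (GNU coreutils' sleep accepts the literal argument infinity; this is an assumption about the remote side), you can sidestep the caveat about cat consuming standard input by swapping the last line of the config for:
RemoteCommand sleep infinity
A plain ssh devdb then holds the port forward open without presenting a usable shell.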
{ "source": [ "https://unix.stackexchange.com/questions/424183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68394/" ] }
424,492
I am defining a shell script which a user should source rather than execute. Is there a conventional or intelligent way to hint to the user that this is the case, for instance via a file extension? Is there shell code I can write in the file itself, which will cause it to echo a message and quit if it is executed instead of sourced, so that I can help the user avoid this obvious mistake?
Assuming that you are running bash, put the following code near the start of the script that you want to be sourced but not executed: if [ "${BASH_SOURCE[0]}" -ef "$0" ] then echo "Hey, you should source this script, not execute it!" exit 1 fi Under bash, ${BASH_SOURCE[0]} will contain the name of the current file that the shell is reading regardless of whether it is being sourced or executed. By contrast, $0 is the name of the current file being executed. -ef tests if these two files are the same file. If they are, we alert the user and exit. Neither -ef nor BASH_SOURCE are POSIX. While -ef is supported by ksh, yash, zsh and Dash, BASH_SOURCE requires bash. In zsh , however, ${BASH_SOURCE[0]} could be replaced by ${(%):-%N} .
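To see the guard in action, assuming the snippet sits at the top of a file called mylib.sh (the name is made up for this example):
$ bash ./mylib.sh
Hey, you should source this script, not execute it!
$ . ./mylib.sh     # sourced: no message, and the rest of the file is read by the current shell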
{ "source": [ "https://unix.stackexchange.com/questions/424492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20716/" ] }
424,628
Why is md5sum prepending "\" in front of the checksum when finding the checksum of a file with "\" in the name? $ md5sum /tmp/test\\test \d41d8cd98f00b204e9800998ecf8427e /tmp/test\\test The same is noted for every other utility.
This is documented , for Coreutils’ md5sum : If file contains a backslash or newline, the line is started with a backslash, and each problematic character in the file name is escaped with a backslash, making the output unambiguous even in the presence of arbitrary file names. ( file is the filename, not the file’s contents). b2sum , sha1sum , and the various SHA-2 tools behave in the same way as md5sum . sum and cksum don’t; sum is only provided for backwards-compatibility (and its ancestors don’t produce quoted output), and cksum is specified by POSIX and doesn’t allow this type of output. This behaviour was introduced in November 2015 and released in version 8.25 (January 2016), with the following NEWS entry: md5sum now ensures a single line per file for status on standard output, by using a '\' at the start of the line, and replacing any newlines with '\n'. This also affects sha1sum , sha224sum , sha256sum , sha384sum and sha512sum . The backslash at the start of the line serves as a flag: escapes in filenames are only processed if the line starts with a backslash. (Unescaping can’t be the default behaviour: it would break sums generated with older versions of Coreutils containing \\ or \n in the stored filenames.)
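You can reproduce the behaviour with an empty file whose name contains a backslash, assuming a sufficiently recent coreutils (8.25 or later):
$ touch 'test\test'
$ md5sum test*
\d41d8cd98f00b204e9800998ecf8427e  test\\test
The leading backslash flags the line as containing escaped characters, and the backslash in the file name is doubled.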
{ "source": [ "https://unix.stackexchange.com/questions/424628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104666/" ] }
424,967
My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd . Editing has now become a little cumbersome due to the lack of structure. May I sort these files (e.g. by numerical id or alphabetically by name) without a negative effect on the system and/or package managers? I would guess that it does not matter but just to be sure I would like to get a 2nd opinion. Maybe root needs to be the 1st line or within the first 1k lines or something? The same goes for /etc/*shadow .
You should be OK doing this : in fact, according to the article and reading the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s , respectively.
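A cautious way to do it (as root) is to back the files up first and then let the shadow-utils tools sort them in place; pwck -s sorts /etc/passwd and /etc/shadow by UID, and grpck -s sorts /etc/group and /etc/gshadow by GID:
cp -a /etc/passwd /etc/passwd.bak
cp -a /etc/shadow /etc/shadow.bak
cp -a /etc/group /etc/group.bak
cp -a /etc/gshadow /etc/gshadow.bak   # if the file exists on your system
pwck -s
grpck -s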
{ "source": [ "https://unix.stackexchange.com/questions/424967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }
424,998
I was reading Ritchie and Thompson's paper about the Unix file system. They write, 'It is worth noting that the system is totally self-supporting'. Were the systems before Unix not self-supporting? In what ways?
The question in your title is addressed immediately after your quote in the paper : All Unix software is maintained on the system; likewise, this paper and all other documents in this issue were generated and formatted by the Unix editor and text formatting programs. So “self-supporting” means that once a Unix system is setup, it is self-sufficient, and its users can use it to make changes to the system itself. “This issue” in the quote above refers to Bell System Technical Journal, Volume 57, Number 6, Part 2, July-August 1978 (also available on the Internet Archive ), which was all about the Unix system (and makes fascinating reading for anyone interested in Unix and its history). The fact that Unix is self-supporting doesn’t mean all other systems before it weren’t; but some operating systems did require the use of other systems to build them (this became more common later, in fact, with the advent of micro-computers, whose systems were often developed on minis). Unix was novel in that it also included typesetting tools, which meant that it could not only build itself, but also produce its documentation, both online and in print (I imagine Unix might not be the first such system, but this would have been at least unusual).
{ "source": [ "https://unix.stackexchange.com/questions/424998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276636/" ] }
425,205
How can I find all files I can not write to? Would be good if it takes standard permissions and acls into account. Is there an "easy" way or do I have to parse the permissions myself?
Try find . ! -writable the command find returns a list of files, -writable filters only the ones you have write permission to, and the ! inverts the filter. You can add -type f if you want to ignore the directories and other 'special files'.
{ "source": [ "https://unix.stackexchange.com/questions/425205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23987/" ] }
425,211
I am developing an application to keep users from forgetting their pendrive. This app must lock shutdown if a pendrive is connected to the machine. This way, if the user wants to shut down the system while a pendrive is connected, the system shows a notification alerting that the pendrive must be disconnected to unlock shutdown. To detect the shutdown event, I set a polkit rule that calls a script to check whether any pendrive is connected to the system. If any pendrive is connected, the polkit rule calls notify-send through the script send_notify.sh , which executes this command: notify-send "Pendrive-Reminder" "Extract Pendrive to enable shutdown" -t 5000 The polkit rule is this: polkit.addRule(function(action, subject) { if (action.id == "org.freedesktop.consolekit.system.stop" || action.id == "org.freedesktop.login1.power-off" || action.id == "org.freedesktop.login1.power-off-multiple-sessions" || action.id == "org.xfce.session.xfsm-shutdown-helper") { try{ polkit.spawn(["/usr/bin/pendrive-reminder/check_pendrive.sh", subject.user]); return polkit.Result.YES; }catch(error){ polkit.spawn(["/usr/bin/pendrive-reminder/send_notify.sh", subject.user]); return polkit.Result.NO; } } } But after putting this polkit rule in place and pressing the shutdown button, my user doesn't receive any notification. I debugged the rule and checked that the second script is executed, but notify-send doesn't show the notification to my user. How can I solve this? UPDATE: I tried to modify the script like this: #!/bin/bash user=$1 XAUTHORITY="/home/$user/.Xauthority" DISPLAY=$( who | grep -m1 $user.*\( | awk '{print $5}' | sed 's/[(|)]//g') notify-send "Extract Pendrive to enable shutdown" -t 5000 exit 0 The user is passed as a parameter by polkit, but the problem continues. UPDATE: I've just seen this bug https://bugs.launchpad.net/ubuntu/+source/libnotify/+bug/160598 which doesn't allow sending notifications as root. Later I'll test modifying the workaround to change the user. UPDATE2: After changing the code to this, the problem continues: #!/bin/bash export XAUTHORITY="/home/$user/.Xauthority" export DISPLAY=$(cat "/tmp/display.$user") user=$1 su $user -c 'notify-send "Pendrive Reminder" "Shutdown lock enabled. Disconnect pendrive to enable shutdown" -u critical'
{ "source": [ "https://unix.stackexchange.com/questions/425211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255730/" ] }
425,487
I've read that you need to put the swap partition on HDD rather than SSD. My questions are the following: When and how is the "checking" done by the distribution (or something else) to find its Swap partition? Does it happen during boot? It just checks all available disks and searches for a partition with 'swap' flag? What happens if there are several partitions like that? Also, how many swap partitions do I need to have if I run, for example, two different distributions on the same disk, let's say Fedora and Ubuntu?
Statically configured swap space (the type that pretty much every distribution uses) is configured in /etc/fstab just like filesystems are. A typical entry looks something like: UUID=21618415-7989-46aa-8e49-881efa488132 none swap sw 0 0 You may also see either discard or nofail specified in the flags field (the fourth field). Every such line corresponds to one swap area (it doesn't have to be a partition, you can have swap files, or even entire swap disks). In some really specific cases you might instead have dynamically configured swap space, although this is rather rare because it can cause problematic behavior relating to memory management. In this case, the configuration is handled entirely by a userspace component that creates and enables swap files as needed at run time. As far as how many you need, that's a complicated question to answer, but the number of different Linux distributions you plan to run has zero impact on this unless you want to be able to run one distribution while you have another in hibernation (and you probably don't want to do this, as it's a really easy way to screw up your system). When you go to run the installer for almost any major distribution (including Fedora, OpenSUSE, Linux Mint, Debian, and Ubuntu), it will detect any existing swap partitions on the system, and add those to the configuration for the distribution you're installing (except possibly if you select manual partitioning), and in most cases this will result in the system being configured in a sensible manner. Even aside from that, I would personally suggest avoiding having multiple swap partitions unless you're talking about a server system with lots of disks, and even then you really need to know what you're doing to get set up so that it performs well.
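To see which swap areas are actually in use at any moment, regardless of how they were configured, you can ask the kernel directly:
swapon --show             # util-linux: active swap devices/files, sizes and priorities
cat /proc/swaps           # the same information straight from the kernel
grep -w swap /etc/fstab   # the statically configured entries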
{ "source": [ "https://unix.stackexchange.com/questions/425487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186014/" ] }
426,683
I would like to do something like this: > grep pattern file.txt | size -h 16.4 MB or something equivalent to: > grep pattern file.txt > grepped.txt > ls -h grepped.txt 16.4 MB > rm grepped.txt (that would be a bit inconvenient, though) Is that possible?
You can use wc for this: grep pattern file.txt | wc -c will count the number of bytes in the output. You can post-process that to convert large values to “human-readable” format . You can also use pv to get this information inside a pipe: grep pattern file.txt | pv -b > output.txt (this displays the number of bytes processed, in human-readable format).
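If you want the byte count converted to a human-readable unit automatically and GNU coreutils' numfmt is available, you can chain it on:
grep pattern file.txt | wc -c | numfmt --to=iec
This prints something like 17M instead of the raw byte count; numfmt --to=iec-i gives the "Mi"-style suffixes instead.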
{ "source": [ "https://unix.stackexchange.com/questions/426683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60673/" ] }
426,693
On Alpine Linux, I'd like to know how to extract just the IP address from a DNS / dig query. The query I'm running looks like this: lab-1:/var/# dig +answer smtp.mydomain.net +short smtp.ggs.mydomain.net 10.11.11.11 I'd like to be able to get just the IP address returned. I'm currently playing around with the bash pipe and the awk command. But so far, nothing I've tried is working. Thanks.
I believe dig +short outputs two lines for you because the domain you query, smtp.mydomain.net is a CNAME for smtp.ggs.mydomain.net , and dig prints the intermediate resolution step. You can probably rely on the last line from dig's output being the IP you want, though, and therefore the following should do: dig +short smtp.mydomain.net | tail -n1
{ "source": [ "https://unix.stackexchange.com/questions/426693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52684/" ] }
426,748
I have about 15,000 files that are named file_1.pdb , file_2.pdb , etc. I can cat about a few thousand of these in order by doing: cat file_{1..2000}.pdb >> file_all.pdb However, if I do this for 15,000 files, I get the error -bash: /bin/cat: Argument list too long I have seen this problem being solved by doing find . -name xx -exec xx but this wouldn't preserve the order with which the files are joined. How can I achieve this?
Using find , sort and xargs : find . -maxdepth 1 -type f -name 'file_*.pdb' -print0 | sort -zV | xargs -0 cat >all.pdb The find command finds all relevant files, then prints their pathnames out to sort that does a "version sort" to get them in the right order (if the numbers in the filenames had been zero-filled to a fixed width we would not have needed -V ). xargs takes this list of sorted pathnames and runs cat on these in as large batches as possible. This should work even if the filenames contains strange characters such as newlines and spaces. We use -print0 with find to give sort nul-terminated names to sort, and sort handles these using -z . xargs too reads nul-terminated names with its -0 flag. Note that I'm writing the result to a file whose name does not match the pattern file_*.pdb . The above solution uses some non-standard flags for some utilities. These are supported by the GNU implementation of these utilities and at least by the OpenBSD and the macOS implementation. The non-standard flags used are -maxdepth 1 , to make find only enter the top-most directory but no subdirectories. POSIXly, use find . ! -name . -prune ... -print0 , to make find output nul-terminated pathnames (this was considered by POSIX but rejected). One could use -exec printf '%s\0' {} + instead. -z , to make sort take nul-terminated records. There is no POSIX equivalence. -V , to make sort sort e.g. 200 after 3 . There is no POSIX equivalence, but could be replaced by a numeric sort on specific parts of the filename if the filenames have a fixed prefix. -0 , to make xargs read nul-terminated records. There is no POSIX equivalence. POSIXly, one would need to quote the file names in a format recognised by xargs . If the pathnames are well behaved, and if the directory structure is flat (no subdirectories), then one could make do without these flags, except for -V with sort .
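If the file names really are as regular as in the question (no whitespace, strictly file_<number>.pdb, highest number known), a plain shell loop is a simpler alternative that also preserves numeric order and never hits the argument-list limit, at the cost of invoking cat once per file:
for i in $(seq 1 15000); do
    cat "file_$i.pdb"
done > file_all.pdb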
{ "source": [ "https://unix.stackexchange.com/questions/426748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
426,837
I tried to use sha256sum in High Sierra; I attempted to install it with MacPorts , as: sudo port install sha256sum It did not work. What to do?
After investigating a little, I found a ticket in an unrelated software in GitHub sha256sum command is missing in MacOSX , with several solutions: installing coreutils sudo port install coreutils It installs sha256sum at /opt/local/libexec/gnubin/sha256sum As another possible solution, using openssl : function sha256sum() { openssl sha256 "$@" | awk '{print $2}'; } As yet another one, using the shasum command native to MacOS: function sha256sum() { shasum -a 256 "$@" ; } && export -f sha256sum
{ "source": [ "https://unix.stackexchange.com/questions/426837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
428,217
I am trying to create a Raspberry Pi spy cam bug. I am trying to make it so the new files created by the various processes are named with NOW=`date '+%F_%H:%M:%S'`; which works fine, but it requires an echo to update the time. $NOW is also in the /home/pi/.bashrc file; same issue, it does not update without . ~/.bashrc I found the following on this forum and it works: #! /bin/bash NOW=`date '+%F_%H:%M:%S'`; filename="/home/pi/gets/$NOW.jpg" raspistill -n -v -t 500 -o $NOW.jpg; echo $filename; I don't get how it works because it's before the -o output of raspistill and in quotes. Thank you all in advance!!!
When you do NOW=`date '+%F_%H:%M:%S'` or, using more modern syntax, NOW=$( date '+%F_%H:%M:%S' ) the variable NOW will be set to the output of the date command at the time when that line is executed. If you do this in ~/.bashrc , then $NOW will be a timestamp that tells you when you started the current interactive shell session. You could also set the variable's value with printf -v NOW '%(%F_%H:%M:%S)T' -1 if you're using bash release 4.2 or later. This prints the timestamp directly into the variable without calling date . In the script that you are showing, the variable NOW is being set when the script is run (this is what you want). When the assignment filename="/home/pi/gets/$NOW.jpg" is carried out, the shell will expand the variable in the string. It does this even though it is in double quotes. Single quotes stops the shell from expanding embedded variables (this is not what you want in this case). Note that you don't seem to actually use the filename variable in the call to raspistill though, so I'm not certain why you set its value, unless you just want it outputted by echo at the end. In the rest of the code, you should double quote the $NOW variable expansion (and $filename ). If you don't, and later change how you define NOW so that it includes spaces or wildcards (filename globbing patterns), the commands that use $NOW may fail to parse their command line properly. Compare, e.g., string="hello * there" printf 'the string is "%s"\n' $string with string="hello * there" printf 'the string is "%s"\n' "$string" Related things: About backticks in command substitutions: Have backticks (i.e. `cmd`) in *sh shells been deprecated? About quoting variable expansions: Security implications of forgetting to quote a variable in bash/POSIX shells and Why does my shell script choke on whitespace or other special characters?
{ "source": [ "https://unix.stackexchange.com/questions/428217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232342/" ] }
428,233
Is there an existing tool, which can be used to download big files over a bad connection? I have to regularly download a relatively small file: 300 MB, but the slow (80-120 KBytes/sec) TCP connection randomly breaks after 10-120 seconds. (It's a big company's network. We contacted their admins (working from India) multiple times, but they can't or don't want to do anything.) The problem might be with their reverse proxies / load balancers. Up until now I used a modified version of pcurl: https://github.com/brunoborges/pcurl I changed this line: curl -s --range ${START_SEG}-${END_SEG} -o ${FILENAME}.part${i} ${URL} & to this: curl -s --retry 9999 --retry-delay 3 --speed-limit 2048 --speed-time 10 \ --retry-max-time 0 -C - --range ${START_SEG}-${END_SEG} -o ${FILENAME}.part${i} ${URL} & I had to add --speed-limit 2048 --speed-time 10 because the connection mostly just hangs for minutes when it fails. But recently even this script can't complete. One problem is that it seems to ignore the -C - part, so it doesn't "continue" the segment after a retry. It seems to truncate the relating temp file, and start from the beginning after each fail. (I think the --range and the -C options cannot be used together.) The other problem is that this script downloads all segments at the same time. It cannot have 300 segments, of which only are 10 being downloaded at a time. I was thinking of writing a download tool in C# for this specific purpose, but if there's an existing tool, or if the curl command could work properly with different parameters, then I could spare some time. UPDATE 1: Additional info: The parallel download functionality should not be removed, because they have a bandwidth limit (80-120 Kbytes / sec, mostly 80) per connection, so 10 connections can cause a 10 times speedup. I have to finish the file download in 1 hour, because the file is generated hourly.
lftp ( Wikipedia ) is good for that. It supports a number of protocols, can download files using several concurrent parallel connections (useful where there's a lot of packet loss not caused by congestion), and can automatically resume downloads. It's also scriptable. Here including the fine-tuning you came up with (credits to you): lftp -c 'set net:idle 10 set net:max-retries 0 set net:reconnect-interval-base 3 set net:reconnect-interval-max 3 pget -n 10 -c "https://host/file.tar.gz"'
{ "source": [ "https://unix.stackexchange.com/questions/428233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195913/" ] }
428,419
Say that I have two files: a.txt and b.txt . The content of a.txt : hello world The content of b.txt : hello world something else Of course I can use vimdiff to check their difference. I can make sure that a.txt is a subset of b.txt , which means that b.txt must contain all of the lines existing in a.txt (just like the example above). My question is how to record the lines which exist in b.txt but don't exist in a.txt into a file?
comm -1 -3 a.txt b.txt > c.txt The -1 excludes lines that are only in a.txt , and the -3 excludes lines that are in both. Thus only the lines exclusively in b.txt are output (see man comm or comm --help for details). The output is redirected to c.txt If you want the difference between the two files, use diff rather than comm . e.g. diff -u a.txt b.txt > c.txt
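One caveat: comm expects both inputs to be sorted in the same collating order, and the example files are not guaranteed to be. If yours aren't sorted, you can sort them on the fly with process substitution (bash/ksh/zsh syntax):
comm -13 <(sort a.txt) <(sort b.txt) > c.txt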
{ "source": [ "https://unix.stackexchange.com/questions/428419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
428,721
In a VM on a cloud provider, I'm seeing a process with weird random name. It consumes significant network and CPU resources. Here's how the process looks like from pstree view: systemd(1)───eyshcjdmzg(37775)─┬─{eyshcjdmzg}(37782) ├─{eyshcjdmzg}(37783) └─{eyshcjdmzg}(37784) I attached to the process using strace -p PID . Here's the output I've got: https://gist.github.com/gmile/eb34d262012afeea82af1c21713b1be9 . Killing the process does not work. It is somehow (via systemd?) resurrected. Here's how it looks from systemd point of view ( note the weird IP address at the bottom): $ systemctl status 37775 ● session-60.scope - Session 60 of user root Loaded: loaded Transient: yes Drop-In: /run/systemd/system/session-60.scope.d └─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf, 50-TasksMax.conf Active: active (abandoned) since Tue 2018-03-06 10:42:51 EET; 1 day 1h ago Tasks: 14 Memory: 155.4M CPU: 18h 56min 4.266s CGroup: /user.slice/user-0.slice/session-60.scope ├─37775 cat resolv.conf ├─48798 cd /etc ├─48799 sh ├─48804 who ├─48806 ifconfig eth0 ├─48807 netstat -an ├─48825 cd /etc ├─48828 id ├─48831 ps -ef ├─48833 grep "A" └─48834 whoami Mar 06 10:42:51 k8s-master systemd[1]: Started Session 60 of user root. Mar 06 10:43:27 k8s-master sshd[37594]: Received disconnect from 23.27.74.92 port 59964:11: Mar 06 10:43:27 k8s-master sshd[37594]: Disconnected from 23.27.74.92 port 59964 Mar 06 10:43:27 k8s-master sshd[37594]: pam_unix(sshd:session): session closed for user root What is going on?!
eyshcjdmzg is a Linux DDoS trojan (easily found through a Google search). You've likely been hacked. Take that server off-line now. It's not yours any longer. Please read the following ServerFault Q/A carefully: How to deal with a compromised server . Note that depending on who you are and where you are, you may additionally be legally obliged to report this incident to authorities. This is the case if you are working at a government agency in Sweden (e.g. a university), for example. Related: How can I kill minerd malware on an AWS EC2 instance? (compromised server) Need help understanding suspicious SSH commands
{ "source": [ "https://unix.stackexchange.com/questions/428721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30894/" ] }
428,724
Is there a practical and easy way to capture data going through a named pipe? I've tried wireshark, but it only accepts a specific data format. I've also tried cat, but I get mixed results. Thank you
{ "source": [ "https://unix.stackexchange.com/questions/428724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279398/" ] }
428,727
I have the following json as my input for jq processing [ { "Category": "Disk Partition Details", "Filesystem": "udev", "Size": "3.9G", "Used": 0, "Avail": "3.9G", "Use%": "0%", "Mounted": "/dev" }, { "Category": "Disk Partition Details", "Filesystem": "tmpfs", "Size": "799M", "Used": "34M", "Avail": "766M", "Use%": "5%", "Mounted": "/run" } ] using ./csvtojson.sh bb.csv | jq 'map( {(.Category): del(.Category)})' as suggested by @peak here , I've reached till the json below [ { "Disk Partition Details": { "Filesystem": "udev", "Size": "3.9G", "Used": 0, "Avail": "3.9G", "Use%": "0%", "Mounted": "/dev" } }, { "Disk Partition Details": { "Filesystem": "tmpfs", "Size": "799M", "Used": "34M", "Avail": "766M", "Use%": "5%", "Mounted": "/run" } } ] All I want is to put the category on the top for once only and to break this json to another level as i did in the previous step like this. [ { "Disk Partition Details": { "udev" :{ "Size": "3.9G", "Used": 0, "Avail": "3.9G", "Use%": "0%", "Mounted": "/dev" }, "tmpfs" : { "Size": "799M", "Used": "34M", "Avail": "766M", "Use%": "5%", "Mounted": "/run" } } } ]
{ "source": [ "https://unix.stackexchange.com/questions/428727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279400/" ] }
428,825
I'd like to ask: Why is echo {1,2,3} expanded to 1 2 3 which is an expected behavior, while echo [[:digit:]] returns [[:digit:]] while I expected it to print all digits from 0 to 9 ?
Because they are two different things. The {1,2,3} is an example of brace expansion . The {1,2,3} construct is expanded by the shell , before echo even sees it. You can see what happens if you use set -x : $ set -x $ echo {1,2,3} + echo 1 2 3 1 2 3 As you can see, the command echo {1,2,3} is expanded to: echo 1 2 3 However, [[:digit:]] is a POSIX character class . When you give it to echo , the shell also processes it first, but this time it is being processed as a shell glob . it works the same way as if you run echo * which will print all files in the current directory. But [[:digit:]] is a shell glob that will match any digit. Now, in bash, if a shell glob doesn't match anything, it will be expanded to itself: $ echo /this*matches*no*files + echo '/this*matches*no*files' /this*matches*no*files If the glob does match something, that will be printed: $ echo /e*c + echo /etc /etc In both cases, echo just prints whatever the shell tells it to print, but in the second case, since the glob matches something ( /etc ) it is told to print that something. So, since you don't have any files or directories whose name consists of exactly one digit (which is what [[:digit:]] would match), the glob is expanded to itself and you get: $ echo [[:digit:]] [[:digit:]] Now, try creating a file called 5 and running the same command: $ echo [[:digit:]] 5 And if there are more than one matching files: $ touch 1 5 $ echo [[:digit:]] 1 5 This is (sort of) documented in man bash in the explanation of the nullglob options which turns this behavior off: nullglob If set, bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves. If you set this option: $ rm 1 5 $ shopt -s nullglob $ echo [[:digit:]] ## prints nothing $
{ "source": [ "https://unix.stackexchange.com/questions/428825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268014/" ] }
429,421
When I run chmod +w filename it doesn't give write permission to other , it just gives write permission to user and group . After executing this command chmod +w testfile.txt running ls -l testfile.txt prints -rw-rw-r-- 1 ravi ravi 20 Mar 10 18:09 testfile.txt but in case of +r and +x it works properly. I don't want to use chmod ugo+w filename .
Your specific situation In your specific situation, we can guess that your current umask is 002 (this is a common default value) and this explains your surprise. In that specific situation where umask value is 002 (all numbers octal). +r means ugo+r because 002 & 444 is 000 , which lets all bits to be set +x means ugo+x because 002 & 111 is 000 , which lets all bits to be set but +w means ug+w because 002 & 222 is 002 , which prevents the "o" bit to be set. Other examples With umask 022 +w would mean u+w . With umask 007 +rwx would mean ug+rwx . With umask 077 +rwx would mean u+rwx . What would have matched your expectations When you change umask to 000 , by executing umask 000 in your terminal, then chmod +w file will set permissions to ugo+w. Side note As suggested by ilkkachu, note that umask 000 doesn't mean that everybody can read and write all your files. But umask 000 means everyone that has some kind of access to any user account on your machine (which may include programs running server services ofc) can read and write all the files you make with that mask active and don't change (if the containing chain of directories up to the root also allows them).
{ "source": [ "https://unix.stackexchange.com/questions/429421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276630/" ] }
430,207
After executing the following to disable ping replies: # sysctl net.ipv4.icmp_echo_ignore_all=1 # sysctl -p I obtain different results from pinging localhost vs. 127.0.0.1 # ping -c 3 localhost PING localhost(localhost (::1)) 56 data bytes 64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.029 ms 64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.035 ms 64 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.101 ms --- localhost ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2042ms rtt min/avg/max/mdev = 0.047/0.072/0.101/0.022 ms Pinging 127.0.0.1 fails: ping -c 3 127.0.0.1 PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data. --- 127.0.0.1 ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2032ms Why are these results different?
The ping command shows the address it resolved the name to. In this case it resolved to the IPv6 localhost address, ::1 . On the other hand, 127.0.0.1 is an IPv4 address, so it explicitly makes ping use IPv4. The sysctl you used only affects IPv4 pings, so you get replies for ::1 , but not for 127.0.0.1 . The address you get from resolving localhost depends on how your DNS is resolver is set up. localhost is probably set in /etc/hosts , but in theory you could get it from an actual name server. As for how to drop IPv6 pings, you may need to look into ip6tables , as there doesn't seem to be a similar sysctl for IPv6. Or just disable IPv6 entirely, if you're not using it in your network. (Though of course that's not a very forward-looking idea, but doable if you're not currently using it anyway.)
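To make the address family explicit while testing, modern iputils ping accepts -4 and -6; and if you really do want ICMPv6 echo requests dropped as well, an ip6tables rule along these lines should do it (a sketch only, adjust to your own firewall setup):
ping -4 -c 3 localhost    # forces 127.0.0.1 and is affected by icmp_echo_ignore_all
ping -6 -c 3 localhost    # forces ::1
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -j DROP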
{ "source": [ "https://unix.stackexchange.com/questions/430207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252134/" ] }
430,318
Say if I wrote a program with the following line: int main(int argc, char** argv) Now it knows what command line arguments are passed to it by checking the content of argv . Can the program detect how many spaces between arguments? Like when I type these in bash: ibug@linux:~ $ ./myprog aaa bbb ibug@linux:~ $ ./myprog aaa bbb Environment is a modern Linux (like Ubuntu 16.04), but I suppose the answer should apply to any POSIX-compliant systems.
In general, no. Command line parsing is done by the shell which does not make the unparsed line available to the called program. In fact, your program might be executed from another program which created the argv not by parsing a string but by constructing an array of arguments programmatically.
{ "source": [ "https://unix.stackexchange.com/questions/430318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211239/" ] }
430,616
I'm trying to understand permissions better, so I'm doing some "exercises". Here's a sequence of commands that I'm using with their respective output: $ umask 0022 $ touch file1 $ ls -l file1 -rw-r--r-- 1 user group 0 Mar 16 12:55 file1 $ mkdir dir1 $ ls -ld dir1 drwxr-xr-x 2 user group 4096 Mar 16 12:55 dir1 That makes sense because we know that the default file permissions are 666 ( rw-rw-rw- ) and directories' default permissions are 777 ( rwxrwxrwx ). If I subtract the umask value from these default permissions I have 666-022=644 , rw-r--r-- , for file1 , so it's coherent with the previous output; 777-022=755 , rwxr-xr-x , for dir1 , also coherent. But if I change the umask from 022 to 021 it isn't any more. Here is the example for the file: $ umask 0021 $ touch file2 $ ls -l file2 -rw-r--rw- user group 0 Mar 16 13:33 file2 -rw-r--rw- is 646 but it should be 666-021=645 . So it doesn't work according to the previous computation. Here is the example for the directory: $ mkdir dir2 $ ls -ld dir2 drwxr-xrw- 2 user group 4096 Mar 16 13:35 dir2 drwxr-xrw- is 756 , 777-021=756 . So in this case the result is coherent with the previous computation. I've read the man page but I haven't found anything about this behaviour. Can somebody explain why? EXPLANATION As pointed out in the answers: umask 's value is not mathematically subtracted from the default directory and file permissions. The operation effectively involved is a combination of the AND (&) and NOT (!) boolean operators. Given: R = resulting permissions, D = default permissions, U = current umask, then R = D & !U. For example: 666 & !0053 = 110 110 110 & !(000 101 011) = 110 110 110 & 111 010 100 = 110 010 100 = 624 = rw--w-r-- 777 & !0022 = 111 111 111 & !(000 010 010) = 111 111 111 & 111 101 101 = 111 101 101 = 755 = rwxr-xr-x TIP An easy way to quickly work out the resulting permissions (at least it helped me) is to think that we can use just 3 decimal values: r = 100 = 4 w = 010 = 2 x = 001 = 1 Permissions will be a combination of these 3 values. " " is used to indicate that the relative permission is not given. 666 = 4+2+" " 4+2+" " 4+2+" " = rw rw rw So if my current umask is 0053 I know I'm removing read and execute (4+1) permission from group and write and execute (2+1) from other, resulting in 4+2+" " " "+2+" " 4+" "+" " = 624 = rw--w-r-- (group and other didn't have execute permission to begin with)
umask is a mask , it’s not a subtracted value. Thus: mode 666, mask 022: the result is 666 & ~022, i.e. 666 & 755, which is 644; mode 666, mask 021: the result is 666 & ~021, i.e. 666 & 756, which is 646. Think of the bits involved. 6 in a mode means bits 1 and 2 are set, read and write. 2 in a mask masks bit 1, the write bit. 1 in a mask masks bit 0, the execute bit. Another way to represent this is to look at the permissions in text form. 666 is rw-rw-rw- ; 022 is ----w--w- ; 021 is ----w---x . The mask drops its set bits from the mode, so rw-rw-rw- masked by ----w--w- becomes rw-r--r-- , masked by ----w---x becomes rw-r--rw- .
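You can check the arithmetic directly in the shell, since bash treats integer constants with a leading 0 as octal:
$ printf '%03o\n' "$(( 0666 & ~0022 ))"
644
$ printf '%03o\n' "$(( 0666 & ~0021 ))"
646
$ printf '%03o\n' "$(( 0777 & ~0021 ))"
756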
{ "source": [ "https://unix.stackexchange.com/questions/430616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218981/" ] }
432,002
I accidentally renamed the directory /usr to /usr_bak . I want to change it back, so I appended the path /usr_bak/bin to $PATH to allow the system to find the command sudo . But now sudo mv /usr_bak /usr gives me the error: sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: No such file or directory Is there a way to rename /usr_bak back to /usr besides reinstalling the system?
Since you have set a password for root, use su and busybox , installed by default in Ubuntu. All of su 's required libraries are in /lib . Busybox is a collection of utilities that's statically linked, so missing libraries shouldn't be a problem. Do: su -c '/bin/busybox mv /usr_bak /usr' (While Busybox itself also has a su applet, the /bin/busybox binary is not setuid and so doesn't work unless ran as root.) If you don't have a root password, you could probably use Gilles' solution here using LD_LIBRARY_PATH , or (Gilles says this won't work with setuid binaries like sudo) reboot and edit the GRUB menu to boot with init=/bin/busybox as a kernel parameter and move the folder back.
{ "source": [ "https://unix.stackexchange.com/questions/432002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
432,774
Can I take a Linux kernel and use it with, say, FreeBSD and vice versa (FreeBSD kernel in, say, a Debian)? Is there a universal answer? What are the limitations? What are the obstructions?
No, kernels from different implementations of Unix-style operating systems are not interchangeable, notably because they all present different interfaces to the rest of the system (user space) — their system calls (including ioctl specifics), the various virtual file systems they use... What is interchangeable to some extent, at the source level, is the combination of the kernel and the C library, or rather, the user-level APIs that the kernel and libraries expose (essentially, the view at the layer described by POSIX, without considering whether it is actually POSIX). Examples of this include Debian GNU/kFreeBSD , which builds a Debian system on top of a FreeBSD kernel, and Debian GNU/Hurd , which builds a Debian system on top of the Hurd. This isn’t quite at the level of kernel interchangeability, but there have been attempts to standardise a common application binary interface, to allow binaries to be used on various systems without needing recompilation. One example is the Intel Binary Compatibility Standard , which allows binaries conforming to it to run on any Unix system implementing it, including older versions of Linux with the iBCS 2 layer. I used this in the late 90s to run WordPerfect on Linux. See also How to build a FreeBSD chroot inside of Linux .
{ "source": [ "https://unix.stackexchange.com/questions/432774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
433,630
When I install a program like GIMP or LibreOffice on Linux I'm never asked about permissions. By installing a program on Ubuntu, am I explicitly giving that program full permission to read/write anywhere on my drive and full access to the internet? Theoretically, could GIMP read or delete any directory on my drive, not requiring a sudo-type password? I'm only curious if it's technically possible, not if it's likely or not. Of course, I know it's not likely.
There are two things here: when you install a program by standard means (system installer such as apt/apt-get on Ubuntu) it is usually installed in some directory where it is available to all users (/usr/bin...). This directory requires privileges to be written to so you need special privileges during installation. when you use the program, it runs with your user id and can only read or write where programs executed with your id are allowed to read or write. In the case of Gimp, you will discover for instance that you cannot edit standard resources such as brushes because they are in the shared /usr/share/gimp/ and that you have to copy them first. This also shows in Edit>Preferences>Folders where most folders come in pairs, a system one which is read-only and a user one that can be written to.
{ "source": [ "https://unix.stackexchange.com/questions/433630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282776/" ] }
433,782
The output of the above command when passed through echo is: # echo systemctl\ {restart,status}\ sshd\; systemctl restart sshd; systemctl status sshd; Even if I paste the output to the terminal, the command works. But when I try to directly run the command, I get: # systemctl\ {restart,status}\ sshd\; bash: systemctl restart sshd;: command not found... I have two questions.. What exactly is this method of substitution and expansion called? (So that I can research it and learn more about it and how to use it properly). What did I do wrong here? Why doesn't it work?
It is a form of Brace expansion done in the shell. The brace-expansion idea is right, but the way it was used is incorrect here. When you run: systemctl\ {restart,status}\ sshd\; the shell interprets systemctl restart sshd; as one long command name, tries to run it, and cannot locate a binary with that name. That is because the shell splits the command line into words before the brace expansion results are produced, and escaped spaces do not cause further splitting, so the whole first result is taken as a single command name. For such known expansion values, you could use eval and still be safe, but be sure of what you are trying to expand with it: eval systemctl\ {restart,status}\ sshd\; But I would rather use a for loop instead of trying to write a one-liner or use eval : for action in restart status; do systemctl "$action" sshd done
{ "source": [ "https://unix.stackexchange.com/questions/433782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210853/" ] }
433,991
I know that i can use nmap to see which ports are open on specific machine. But what i need is a way to get it from the host side itself. Currently, if i use nmap on one of my machines to check the other one, i get for an example: smb:~# nmap 192.168.1.4 PORT STATE SERVICE 25/tcp open smtp 80/tcp open http 113/tcp closed ident 143/tcp open imap 443/tcp open https 465/tcp open smtps 587/tcp open submission 993/tcp open imaps Is there a way to do this on the host itself? Not from a remote machine to a specific host. I know that i can do nmap localhost But that is not what i want to do as i will be putting the command into a script that goes through all the machines. EDIT: This way, nmap showed 22 5000 5001 5432 6002 7103 7106 7201 9200 but lsof command showed me 22 5000 5001 5432 5601 6002 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7201 7210 11211 27017
On Linux, you can use: ss -ltu or netstat -ltu To list the l istening T CP and U DP ports. Add the -n option (for either ss or netstat ) if you want to disable the translation from port number and IP address to service and host name. Add the -p option to see the processes (if any, some ports may be bound by the kernel like for NFS) which are listening (if you don't have superuser privileges, that will only give that information for processes running in your name). That would list the ports where an application is listening on (for UDP, that has a socket bound to it). Note that some may only listen on a given address only (IPv4 and/or IPv6), which will show in the output of ss / netstat ( 0.0.0.0 means listen on any IPv4 address, [::] on any IPv6 address). Even then that doesn't mean that a given other host on the network may contact the system on that port and that address as any firewall, including the host firewall may block or mask/redirect the incoming connections on that port based on more or less complex rules (like only allow connections from this or that host, this or that source port, at this or that time and only up to this or that times per minutes, etc). For the host firewall configuration, you can look at the output of iptables-save . Also note that if a process or processes is/are listening on a TCP socket but not accepting connections there, once the number of pending incoming connection gets bigger than the maximum backlog, connections will no longer be accepted, and from a remote host, it will show as if the port was blocked. Watch the Recv-Q column in the output of ss / netstat to spot those situations (where incoming connections are not being accepted and fill up a queue).
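For day-to-day use, a common all-in-one invocation (assuming iproute2's ss and sudo for the process names) is:
sudo ss -tulnp
where -t and -u select TCP and UDP, -l restricts output to listening sockets, -n disables name resolution and -p shows the owning processes; the net-tools equivalent is sudo netstat -tulnp.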
{ "source": [ "https://unix.stackexchange.com/questions/433991", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255177/" ] }
433,998
I am reading the book "Linux Command Line"; there's a -u update option for the mv and cp commands: -u, --update When moving files from one directory to another, only move files that either don't exist, or are newer than the existing corresponding files in the destination directory. The option is not included in the BSD mv command. What is the alternative option for --update ?
{ "source": [ "https://unix.stackexchange.com/questions/433998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260114/" ] }
434,092
Linux doesn't actually distinguish between processes and threads, and implements both as a data structure task_struct . So what does Linux provide to some programs for them to tell threads of a process from its child processes? For example, Is there a way to see details of all the threads that a process has in Linux? Thanks.
From a task_struct perspective, a process’s threads have the same thread group leader ( group_leader in task_struct ), whereas child processes have a different thread group leader (each individual child process). This information is exposed to user space via the /proc file system. You can trace parents and children by looking at the ppid field in /proc/${pid}/stat or .../status (this gives the parent pid); you can trace threads by looking at the tgid field in .../status (this gives the thread group id, which is also the group leader’s pid). A process’s threads are made visible in the /proc/${pid}/task directory: each thread gets its own subdirectory. (Every process has at least one thread.) In practice, programs wishing to keep track of their own threads would rely on APIs provided by the threading library they’re using, instead of using OS-specific information. Typically on Unix-like systems that means using pthreads.
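Concretely, for a process whose PID you know (1234 below is just a placeholder), you can inspect this from the shell:
pid=1234
grep -E '^(Name|Tgid|Pid|PPid):' /proc/$pid/status    # Tgid = thread group leader, PPid = parent process
ls /proc/$pid/task                                     # one subdirectory per thread
ps -L -o pid,ppid,tid,comm -p "$pid"                   # the same view through ps; -L lists threads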
{ "source": [ "https://unix.stackexchange.com/questions/434092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
434,094
I have a server running CentOS 7 which needs to be rebooted to upgrade some software. Some of the physical NICs have around 5-10 VLAN interfaces each. They're subject to change on a weekly/monthly basis so storing the details in /etc/sysconfig/network-scripts to persist across reboots isn't practical. Is there a simple way to take a snapshot of the current networking stack and restore it after the reboot, similar to the way you can save/restore iptables rules? I've found several references to system-config-network-cmd but I'm wary of using this tool in the event it overwrites the static configs for the physical interfaces we do have in /etc/sysconfig/network-scripts. Thanks!
{ "source": [ "https://unix.stackexchange.com/questions/434094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143817/" ] }
434,278
When I wanted to create a hard link in my /home directory in root mode, Linux showed the following error message: ln: failed to create hard link ‘my_sdb’ => ‘/dev/sda1’: Invalid cross-device link The above error message is shown below: # cd /home/user/ # ln /dev/sda1 my_sdb But I could only create a hard link in the /dev directory, and it was not possible in other directories. Now, I want to know how to create a hard link from an existing device file (like sdb1 ) in /home directory (or other directories) ?
But I could only create a hard link in the /dev directory and it was not possible in other directories. As shown by the error message, it is not possible to create a hard link across different filesystems; you can create only soft (symbolic) links. For instance, if your /home is in a different partition than your root partition, you won't be able to hard link /tmp/foo to /home/user/ . Now, as @RichardNeumann pointed out, /dev is usually mounted as a devtmpfs filesystem. See this example: [dr01@centos7 ~]$ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/centos_centos7-root 46110724 3792836 42317888 9% / devtmpfs 4063180 0 4063180 0% /dev tmpfs 4078924 0 4078924 0% /dev/shm tmpfs 4078924 9148 4069776 1% /run tmpfs 4078924 0 4078924 0% /sys/fs/cgroup /dev/sda1 1038336 202684 835652 20% /boot tmpfs 815788 28 815760 1% /run/user/1000 Therefore you can only create hard links to files in /dev within /dev .
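If what you actually want is just another path that refers to the same device node, a symbolic link works across filesystems, for example:
ln -s /dev/sda1 /home/user/my_sdb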
{ "source": [ "https://unix.stackexchange.com/questions/434278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273848/" ] }
434,611
I am trying to append and prepend text to every line in a .txt file. I want to prepend: I am a I want to append: 128... [} to every line. Inside of a.txt: fruit, like bike, like dino, like Upon performing the following command: $ cat a.txt|sed 'I am a ',' 128... [}' it does not work how I want it to. I would really want it to say the following: I am a fruit, like 128... [} I am a bike, like 128... [} I am a dino, like 128... [}
Simple sed approach: sed 's/^/I am a /; s/$/ 128... [}/' file.txt ^ - stands for start of the string/line $ - stands for end of the string/line The output: I am a fruit, like 128... [} I am a bike, like 128... [} I am a dino, like 128... [} Alternatively, with Awk you could do: awk '{ print "I am a", $0, "128... [}" }' file.txt
{ "source": [ "https://unix.stackexchange.com/questions/434611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237911/" ] }
435,413
The issue of jq needing an explicit filter when the output is redirected is discussed all over the web. But I'm unable to redirect output if jq is part of a pipe chain, even when an explicit filter is in use. Consider: touch in.txt tail -f in.txt | jq '.f1' # in a different terminal: echo '{"f1":1,"f2":2}' >> in.txt echo '{"f1":3,"f2":2}' >> in.txt As expected, the output in the original terminal from the jq command is: 1 3 But if I add any sort of redirection or piping to the end of the jq command, the output goes silent: rm in.txt touch in.txt tail -f in.txt | jq '.f1' | tee out.txt # in a different terminal: echo '{"f1":1,"f2":2}' >> in.txt echo '{"f1":3,"f2":2}' >> in.txt No output appears in the first terminal and out.txt is empty. I've tried hundreds of variations but it's an elusive issue. The only workaround I've found , as discovered through mosquitto_sub and The Things Network (which was where I also discovered the issue), is to wrap the tail and jq functions in a shell script: #!/bin/bash tail -f $1 | while IFS='' read line; do echo $line | jq '.f1' done Then: ./tail_and_jq.sh | tee out.txt # in a different terminal: echo '{"f1":1,"f2":2}' >> in.txt echo '{"f1":3,"f2":2}' >> in.txt And sure enough, the output appears: 1 3 This is with the latest jq installed via Homebrew: $ echo $SHELL /bin/bash $ jq --version jq-1.5 $ brew install jq Warning: jq 1.5_3 is already installed and up-to-date Is this a (largely undocumented) bug in jq or with my understanding of pipe chains?
The output from jq is buffered when its standard output is piped. To request that jq flushes its output buffer after every object, use its --unbuffered option, e.g. tail -f in.txt | jq --unbuffered '.f1' | tee out.txt From the jq manual: --unbuffered Flush the output after each JSON object is printed (useful if you're piping a slow data source into jq and piping jq 's output elsewhere).
{ "source": [ "https://unix.stackexchange.com/questions/435413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216323/" ] }
435,621
I'm trying to copy a file to a different name into the same directory using brace expansion. I'm using bash 4.4.18. Here's what I did: cp ~/some/dir/{my-file-to-rename.bin, new-name-of-file.bin} but I get this error: cp: cannot stat '/home/xyz/some/dir/{my-file-to-rename.bin,': No such file or directory Even a simple brace expansion like this gives me the same error: cp {my-file-to-rename.bin, new-name-of-file.bin} What am I doing wrong?
The brace expansion syntax accepts commas, but it does not accept a space after the comma. In many programming languages, spaces after commas are commonplace, but not here. In Bash, the presence of an unquoted space prevents brace expansion from being performed. Remove the space, and it will work: cp ~/some/dir/{my-file-to-rename.bin,new-name-of-file.bin} While not at all required, note that you can move the trailing .bin outside the braces: cp ~/some/dir/{my-file-to-rename,new-name-of-file}.bin If you want to test the effect of brace expansion, you can use echo or printf '%s ' , or printf with whatever format string you prefer, to do that. (Personally I just use echo for this, when I am in Bash, because Bash's echo builtin doesn't expand escape sequences by default, and is thus reasonably well suited to checking what command will actually run.) For example: ek@Io:~$ echo cp ~/some/dir/{my-file-to-rename,new-name-of-file}.bin cp /home/ek/some/dir/my-file-to-rename.bin /home/ek/some/dir/new-name-of-file.bin
{ "source": [ "https://unix.stackexchange.com/questions/435621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266221/" ] }
435,778
As for the "Spectre" security vulnerability, "Retpoline" was introduced to be a solution to mitigate the risk. However, I've read a post that mentioned: If you build the kernel without CONFIG_RETPOLINE , you can't build modules with retpoline and then expect them to load — because the thunk symbols aren't exported. If you build the kernel with the retpoline though, you can successfully load modules which aren't built with retpoline. ( Source ) Is there an easy and common/generic/unified way to check if kernel is "Retpoline" enabled or not? I want to do this so that my installer can use the proper build of kernel module to be installed.
If you’re using mainline kernels, or most major distributions’ kernels, the best way to check for full retpoline support ( i.e. the kernel was configured with CONFIG_RETPOLINE , and was built with a retpoline-capable compiler) is to look for “Full generic retpoline” in /sys/devices/system/cpu/vulnerabilities/spectre_v2 . On my system: $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2 Mitigation: Full generic retpoline, IBPB, IBRS_FW If you want more comprehensive tests, to detect retpolines on kernels without the spectre_v2 systree file, check out how spectre-meltdown-checker goes about things.
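For an installer, a minimal sketch of such a check could look like this (the messages are only illustrative, and the file only exists on kernels recent enough to report Spectre mitigations):
if grep -q 'Full generic retpoline' /sys/devices/system/cpu/vulnerabilities/spectre_v2 2>/dev/null; then
    echo "kernel reports full retpoline mitigation"
else
    echo "no full retpoline reported; use the non-retpoline module build"
fi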
{ "source": [ "https://unix.stackexchange.com/questions/435778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284565/" ] }
435,786
New to Linux - I'm using Debian 9. I want to run a script to install pwndbg , following the tutorial here . I'm using my root account to do so and want to install it to my root account's home directory. The output is as follows: root@My-Debian-PC:~/pwndbg# ./setup.sh + PYTHON= + INSTALLFLAGS= + osx + uname + grep -i Darwin + '[' '' == --user ']' + PYTHON='sudo ' + linux + uname + grep -i Linux + sudo apt-get update ./setup.sh: line 24: sudo: command not found + true + sudo apt-get -y install gdb python-dev python3-dev python-pip python3-pip libglib2.0-dev libc6-dbg ./setup.sh: line 25: sudo: command not found root@My-Debian-PC:~/pwndbg# Evidentally the script is presumed to be ran as an account with sudo priveliges, hence giving the error because the root account can't use the sudo command. So is there a way of removing the errors? Should I simply edit the script and remove the word sudo from lines 24 and 25, or is it bad practice to do so? Or is it possible to add my root account to the sudo user group in case I come across the error with another script in the future? Or should I just run the script as-is, then afterwards run apt-get update and then apt-get -y install gdb python-dev python3-dev python-pip python3-pip libglib2.0-dev libc6-dbg ? Thanks!
{ "source": [ "https://unix.stackexchange.com/questions/435786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277651/" ] }
435,868
Unix file systems usually have an inode table, and the number of entries in this table is usually fixed at the time the file system is created. This sometimes leads to people with plenty of disk space getting confusing error messages about no free space, and even after they figure out what the problem is, there is no easy solution for what to do about it. But it seems (to me) that it would be very desirable to avoid this whole mess by allocating inodes on demand, completely transparently to users and system administrators. If you're into cute hacks, you could even make the inode table itself be a file, and thus reuse the code you already have that finds free space on the disk. If you're lucky, you might even end up with the inodes near the files themselves, without explicitly trying to achieve this result. But nobody (that I know of) actually does this, so there's probably a catch that I'm missing. Any idea what it might be?
Say you did make the inode table a file; then the next question is... where do you store information about that file? You'd thus need "real" inodes and "extended" inodes, like an MS-DOS partition table. Granted, you'd only need one (or maybe a few — e.g., to also have your journal be a file). But you'd actually have special cases, different code. Any corruption to that file would be disastrous, too. And consider that, before journaling, it was common for files that were being written when the power went out (for example) to be heavily damaged. Your file operations would have to be a lot more robust vs. power failure/crash/etc. than they were on, e.g., ext2. Traditional Unix filesystems found a simpler (and more robust) solution: put an inode block (or group of blocks) every X blocks. Then you find them by simple arithmetic. Of course, then it's not possible to add more (without restructuring the entire filesystem). And even if you lose/corrupt the inode block you were writing to when the power failed, that's only losing a few inodes — far better than a substantial portion of the filesystem. More modern designs use things like B-tree variants. Modern filesystems like btrfs, XFS, and ZFS do not suffer from inode limits.
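As a side note on the original symptom: on a filesystem with a fixed inode table you can tell whether you ran out of space or out of inodes by comparing the two, e.g.:
df -h /home     # block (space) usage
df -i /home     # inode usage; look at the IUse% column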
{ "source": [ "https://unix.stackexchange.com/questions/435868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284635/" ] }
435,970
I'm using Debian and today I typed: exec bash in my terminal and somehow the user@xxx changed to bash-4.4 . How do I get back the user@xxx ? I think it's better for me because for example it shows the path to my current folder etc...
exec bash -l This will replace the current shell session with a bash shell started as a login shell. A login shell will read your .bash_profile (or .bash_login or .profile , whichever it finds first) and other files where your prompt may be defined. With exec bash , you replaced the current shell session with an interactive shell. This will read .bashrc from your home directory. If you don't set your prompt there, then you will get the default bash prompt. Without the exec , you would have been able to just exit to get back to your old shell session. With the exec , the old session is now gone. You may also simply exit the shell and start a new one.
{ "source": [ "https://unix.stackexchange.com/questions/435970", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284734/" ] }
436,314
I can use ls -l to get the logical size of a file, but is there a way to get the physical size of a file?
ls -l will give you the apparent size of the file, which is the number of bytes a program would read if it read the file from start to finish. du would give you the size of the file "on disk". By default, du gives you the size of the file in number of disk blocks, but you may use -h to get a human readable unit instead. See also the manual for du on your system. Note that with GNU coreutil's du (which is probably what you have on Linux), using -b to get bytes implies the --apparent-size option. This is not what you want to use to get number of bytes actually used on disk. Instead, use --block-size=1 or -B 1 . With GNU ls , you may also do ls -s --block-size=1 on the file. This will give the same number as du -B 1 for the file. Example: $ ls -l file -rw-r--r-- 1 myself wheel 536870912 Apr 8 11:44 file $ ls -lh file -rw-r--r-- 1 myself wheel 512M Apr 8 11:44 file $ du -h file 24K file $ du -B 1 file 24576 file $ ls -s --block-size=1 file 24576 file This means that this is a 512 MB file that takes about 24 KB on disk. It is a sparse file (mostly zeros that are not actually written to disk but represented as logical "holes" in the file). Sparse files are common when working with pre-allocated large files, e.g. disk images for virtual machines or swap files etc. Creating a sparse file is quick, while filling it with zeros is slow (and unnecessary). See also the manual for fallocate on your Linux system.
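If you want to reproduce this yourself, a sparse file is quick to create; a small sketch (the file name and size are arbitrary):
truncate -s 512M sparsefile    # set the apparent size to 512M without allocating blocks
ls -lh sparsefile              # apparent size: 512M
du -h sparsefile               # size on disk: close to zero on most filesystems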
{ "source": [ "https://unix.stackexchange.com/questions/436314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285003/" ] }
436,520
I have a text file that contains some IPs. I want to copy the contents of this text file into /etc/ansible/hosts without showing the output on the terminal (as shown in example 2). Note: root user is disabled. If I use the following: sudo cat myfile.txt >> /etc/ansible/host It will not work, since sudo cat didn't affect redirections (expected). cat myfile.txt | sudo tee --append /etc/ansible/hosts It will show the output in the terminal then copy them to /etc/ansible/hosts A.A.A.A B.B.B.B C.C.C.C Adding /dev/null will interrupt the result (nothing will be copied to /etc/ansible/hosts ).
sudo tee -a /etc/ansible/hosts <myfile.txt >/dev/null Or, if you want to use cat : cat myfile.txt | sudo tee -a /etc/ansible/hosts >/dev/null Either of these should work. It is unclear how you "added" /dev/null when you tried, but this redirects the standard output of tee to /dev/null .
{ "source": [ "https://unix.stackexchange.com/questions/436520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111780/" ] }
436,521
I'm searching for a specific REGEX, 3 days I'm trying and trying but not founding the right answer. I need to delete specific parts of an xml feed, I tried with sed, awk and it's not working right. What I have : ...Something before <description><![CDATA[Des  chercheurs de l&#x27;université de Columbia à New York ont mis au point un nouveau moyen de cacher un message dans un texte sans en altérer le sens et sans dépendre d&#x27;un format de fichier particulier. Nommée FontCode, cette idée est <a href="https://korben.info/cacher-des-informations-dans-un-texte-grace-a-des-modifications-sur-les-caracteres.html">Passage a la news suivante</a>]]></description> ... Other news What I need : ...Something before <description><![CDATA[Des chercheurs de l&#x27;université de Columbia à New York ont mis au point un nouveau moyen de cacher un message dans un texte sans en altérer le sens et sans dépendre d&#x27;un format de fichier particulier.<a href="https://korben.info/cacher-des-informations-dans-un-texte-grace-a-des-modifications-sur-les-caracteres.html">Passage a la news suivante</a>]]></description> ... Other news Select the multiples instances between "<\description></description> Remove the last sentence which is not complete (before a href, "Nommée FontCode, cette idée est ") Thank you for helping ! ;)
{ "source": [ "https://unix.stackexchange.com/questions/436521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268987/" ] }
436,864
How does a FIFO (named pipe) differ from a regular pipe (|)? As I understand from Wikipedia , unlike a regular pipe, a FIFO pipe "keeps living" after the process has ended and can be deleted sometime afterwards. But if the process is based on a shell command containing a pipe ( cat x | grep y ), we could "keep it alive after the process" if we store it in a variable or a file, isn't it then a FIFO? Also, a regular pipe also has the first stdout it gets, as stdin for another command , so isn't it also a kind of first in first out pipe as well?
"Named pipe" is actually a very accurate name for what it is — it is just like a regular pipe, except that it has a name (on a filesystem). A pipe — the regular, un-named ("anonymous") one used in some-command | grep pattern is a special kind of file. And I do mean file, you read and write to it just like you do every other file. Grep doesn't really care¹ that it's reading from a pipe instead of a terminal³ or an ordinary file. Technically, what goes on behind the scenes is that stdin, stdout, and stderr are three open files (file descriptors) passed to every command run. File descriptors (which are used in every syscall to read/write/etc. files) are just numbers; stdin, stdout, and stderr are file descriptors 0, 1, and 2. So when your shell sets up some-command | grep what it does is something this: Asks the kernel for an anonymous pipe. There is no name, so this can't be done with open like for a normal file — instead it's done with pipe or pipe2 , which returns two file descriptors.⁴ Forks off a child process ( fork() creates a copy of the parent process; both sides of the pipe are open here), copies the write-side of the pipe to fd 1 (stdout). The kernel has a syscall to copy around file descriptor numbers; it's dup2() or dup3() . It then closes the read side and other copy of the write side. Finally, it uses execve to execute some-command . Since the pipe is fd 1, stdout of some-command is the pipe. Forks of another child process. This time, it duplicates the read side of the pipe to fd 0 (stdin), and executes grep . So grep will read from the pipe as stdin. Then it waits for both of those children to exit. At this point, the kernel notices that the pipe isn't open any more, and garbage collects it. That's what actually destroys the pipe. A named pipe just gives that anonymous pipe a name by putting it in the filesystem. So now any process, at any point in the future, can obtain a file descriptor for the pipe by using an ordinary open syscall. Conceptually, the pipe won't be destroyed until both all readers/writers have closed it and it's unlink ed from the filesystem.² This, by the way, is how files in general work on Unix. unlink (the syscall behind rm ) just removes one of the names for the file; only when all names are removed and nothing has the file open will it actually be deleted. A couple of answers here explore this: Why do hard links seem to take the same space as the originals? How can a log program continue to log to a deleted file? What is Linux doing differently that allows me to remove/replace files where Windows would complain the file is currently in use? FOOTNOTES Technically this probably isn't true — it's probably possible to do some optimizations by knowing, and actual grep implementations have often been heavily optimized. But conceptually it doesn't care (and indeed a straightforward implementation of grep wouldn't). Of course the kernel doesn't actually keep all the data structures around in memory forever, but rather it recreates them, transparently, whenever the first program opens the named pipe (and then keeps them as long as its open). So it's as if they existed as long as the name does. Terminal isn't a common place for grep to read from, but it's the default stdin when you don't specify another. So if you type just grep pattern in to your shell, grep will be reading from the terminal. The only use for this that comes to mind is if you're about to paste something to the terminal. 
On Linux, anonymous pipes actually are created on a special filesystem, pipefs. See How pipes work in Linux for details. Note that this is an internal implementation detail of Linux.
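To make the "named" part concrete, here is a minimal sketch using the same commands as the pipeline above (the path is arbitrary):
mkfifo /tmp/mypipe                 # create the named pipe (shows up as a file of type 'p')
grep pattern < /tmp/mypipe &       # reader: opening the FIFO blocks until a writer appears
some-command > /tmp/mypipe         # writer: its output flows through the pipe to grep
rm /tmp/mypipe                     # removes the name; the pipe object is gone once both ends are closed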
{ "source": [ "https://unix.stackexchange.com/questions/436864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
437,468
I connected a pair of AirPods to everything I could. Android, OSX, Linux Mint, Arch LInux. It sounds great on all of them, but when connected under Arch, I can get get less than half the volume even if I max all volumes I can find. It's strange that Mint gets the volume right. I switched to Linux Mint for a while for this exact reason. But I prefer Arch. It's smoother and faster. Pacman is another easy to use tool. However, I searched for all and any solutions to bluetooth volume, but none worked. Volume on wired headphones and laptop's speakers is loud and clear. Problem only exists in bluetooth device that relies on source to set volume. If the device has own volume buttons, then I can pump up the volume all the way. From Gnome Sound Settings I tried going over 100%, but the sound is distorted. I tried alsamixer and pavucontrol. All volumes are maxed, but I only get Intel card and PulseAudio. should I also have a bluetooth volume? I also found PulseAudio/Troubleshooting - Volume adjustment does not work properly which mentioned the volume cap of 65536. Since sound is clear, I believe this volume limit is the source of my problem. But even if I try to increase the volume as mentioned there, I cannot get past the upper limit of 65536. $ amixer set Master 12345+ Simple mixer control 'Master',0 Capabilities: pvolume pswitch pswitch-joined Playback channels: Front Left - Front Right Limits: Playback 0 - 65536 Mono: Front Left: Playback 65536 [100%] [on] Front Right: Playback 65536 [100%] [on] Debugging Bad dB Information of ALSA Drivers describes the same problem, but I could not get any information using this tool. I believe there should be a way to set a config per bluetooth device and set the lower and upper limits. Alternative, maybe setting the volume to dB instead of absolute value might help, but disabling flat-volumes in /etc/pulse/daemon.conf did nothing. The only comparison I was able to make against LinuxMint is that Mint sets dB instead of absolute value. (I have a live USB so I can boot any time in Mint) Any suggestion is welcome.
VMG's answer is subtly wrong; it will technically work, but it will disable all other plugins than a2dp, meaning bluetooth keyboards/mice/gamepads/etc will stop working, when the only plugin causing issues seems to be one called avrcp. Edit /lib/systemd/system/bluetooth.service and change ExecStart=/usr/lib/bluetooth/bluetoothd to ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp and run sudo systemctl daemon-reload sudo systemctl restart bluetooth
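If you would rather not edit the packaged unit file under /lib/systemd (a package upgrade may overwrite it), the same change can be made with a drop-in override; a sketch, assuming the same bluetoothd path as above:
sudo systemctl edit bluetooth
# in the editor that opens, add:
[Service]
ExecStart=
ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp
# then:
sudo systemctl daemon-reload
sudo systemctl restart bluetooth
The empty ExecStart= line clears the packaged command before the replacement is set.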
{ "source": [ "https://unix.stackexchange.com/questions/437468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285941/" ] }
437,469
The sent mails are waiting in the queue with the error below: (Host or domain name not found. Name service error for name=srvr1.com.my type=MX: Host not found, try again) However, I have defined the host entry for that domain in /etc/hosts .
VMG's answer is subtly wrong; it will technically work, but it will disable all other plugins than a2dp, meaning bluetooth keyboards/mice/gamepads/etc will stop working, when the only plugin causing issues seems to be one called avrcp. Edit /lib/systemd/system/bluetooth.service and change ExecStart=/usr/lib/bluetooth/bluetoothd to ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp and run sudo systemctl daemon-reload sudo systemctl restart bluetooth
{ "source": [ "https://unix.stackexchange.com/questions/437469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176232/" ] }
437,812
Bash uses exclamation marks for history expansions, as explained in the answers to this question (e.g. sudo !! runs the previous command-line with sudo ). However, I can't find anywhere that explains what running the following command (i.e. a single exclamation mark) does: ! It appears to print nothing and exit with 1, but I'm not sure why it does that. I've looked online and in the Bash man page, but can't find anything, apart from the fact that it's a "reserved word" – but so is } , and running this: } prints an error: bash: syntax error near unexpected token `}'
The lone ! at the start of a command negates the exit status of the command or pipeline : if the command exits 0 , it will flip into 1 (failure), and if it exits non-zero it will turn it into a 0 (successful) exit. This use is documented in the Bash manual: If the reserved word ‘!’ precedes the pipeline, the exit status is the logical negation of the exit status as described above. A ! with no following command negates the empty command, which does nothing and returns true (equivalent to the : command ). It thus inverts the true to a false and exits with status 1, but produces no error. There are also other uses of ! within the test and [[ commands, where they negate a conditional test. These are unrelated to what you're seeing. In both your question and those cases it's not related to history expansion and the ! is separated from any other terms.
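You can see all three behaviours at the prompt (the numbers printed are the exit statuses):
$ ! true; echo "$?"
1
$ ! false; echo "$?"
0
$ !
$ echo "$?"
1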
{ "source": [ "https://unix.stackexchange.com/questions/437812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226269/" ] }
437,965
In order to remind myself when I try to use shopt in Zsh instead of setopt , I created the following alias, testing it first at a shell prompt: $ alias shopt='echo "You\'re looking for setopt. This is Z shell, man, not Bash."' Despite the outer single quotes matching and the inner double quotes matching, and the apostrophe being escaped, I was prompted to finish closing the quotes with: dquote > _ What's going on? It appeared that the escaping was being ignored, or that it needed to be double-escaped because of multiple levels of interpretation... So, just to test this theory, I tried double-escaping it (and triple-escaping it, and so on) all the way up until: alias shopt='echo "You\\\\\\\\\\\\\\\\\\\\\\'re looking for setopt. This is Z shell, man, not Bash." ' and never saw any different behavior. This makes no sense to me. What kind of weird voodoo is preventing the shell from behaving as I expect? The practical solution is to not use quotes for echo , since it doesn't really need any, and to use double quotes for alias , and to escape the apostrophe so it is ignored when the text is echo ed. Then all of the practical problems go away. Can you help me? I need resolution to this perplexing problem.
This is zsh , man, not fish . In zsh , like in every Bourne-like shell (and also csh ), single quotes are strong quotes, there is no escaping within them (except by using the rcquotes option as hinted by @JdeBP where zsh emulates rc quotes¹). You cannot have a single quote inside a single-quoted string, you need to first close the single quoted string and enter the literal single quote using another quoting mechanism (like \ or " ): alias shopt='echo "You'\''re looking for setopt. This is Z shell, man, not Bash."' Or: alias shopt='echo "You'"'"'re looking for setopt. This is Z shell, man, not Bash."' Though you could also do: alias shopt="echo \"You're looking for setopt. This is Z shell, man, not Bash.\"" ( "..." are weaker quotes inside which several characters, including \ (here used to escape the embedded " ) are still special). Or: alias shopt=$'echo "You\'re looking for setopt. This is Z shell, man, not Bash."' ( $'...' is yet another kind of quotes from ksh93, where the ' can be escaped with \' ). (and BTW, you can also use the standard set -o in place of setopt in zsh . bash , for historical reasons, has two sets of options, one that you set with set -o , one with shopt ; zsh like most other shells has only one set of options). ¹ In rc , the shell of Plan9, with a version for unix-likes also available, single quotes are the only quoting mechanism (backslash and double quotes are ordinary characters there); the only way to enter a literal single-quote there is with '' inside single quotes, so with zsh -o rcquotes , you could do: alias shopt='echo "You''re looking for setopt. This is Z shell, man, not Bash."'
{ "source": [ "https://unix.stackexchange.com/questions/437965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
438,064
I'm trying to set up watchman as a user service. I've followed their documentation as closely as possible. This is what I have: The socket file: [Unit] Description=Watchman socket for user %i [Socket] ListenStream=/usr/local/var/run/watchman/%i-state/sock Accept=false SocketMode=0664 SocketUser=%i SocketGroup=%i [Install] WantedBy=sockets.target The service file: [Unit] Description=Watchman for user %i After=remote-fs.target Conflicts=shutdown.target [Service] ExecStart=/usr/local/bin/watchman --foreground --inetd --log-level=2 ExecStop=/usr/bin/pkill -u %i -x watchman Restart=on-failure User=%i Group=%i StandardInput=socket StandardOutput=syslog SyslogIdentifier=watchman-%i [Install] WantedBy=multi-user.target Systemd attempts to run watchman but is stuck in a restart loop. These are the errors I get: Apr 16 05:41:00 debian systemd[20894]: [email protected]: Failed to determine supplementary groups: Operation not permitted Apr 16 05:41:00 debian systemd[20894]: [email protected]: Failed at step GROUP spawning /usr/local/bin/watchman: Operation not permitted I'm 100% sure the group and user I'm enabling this service & socket exists. What am I doing wrong?
I was running into the same issue. Googling I found this thread: https://bbs.archlinux.org/viewtopic.php?id=233035 The problem is with how the service is being started. If you specify the user/group in the unit file then you should start the service as a system service. If you want to start the service as a user service then the User/Group is not needed and can be removed from the unit config. You simply start the service when logged in as the current user passing the --user flag to systemctl.
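For example, with the unit files from the question kept as they are (with User=%i and Group=%i), you would enable them as system units; a sketch assuming they are installed as [email protected] and [email protected] and that your login name is myuser (a placeholder):
sudo systemctl enable --now [email protected]
sudo systemctl enable --now [email protected]
Alternatively, drop the User= and Group= lines, install non-templated copies under ~/.config/systemd/user/ , and manage them with systemctl --user enable --now ... instead.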
{ "source": [ "https://unix.stackexchange.com/questions/438064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2781/" ] }
438,086
I often use a colorizer "ccze" which is pretty cool, - it colors text on my shell. I just pipe any output through it. cat /etc/nginx.nginx.conf | ccze -A How can I do this with all commands by default?
{ "source": [ "https://unix.stackexchange.com/questions/438086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269003/" ] }
438,130
I am trying to understanding the concept of special files on Linux. However, having a special file in /dev seems plain silly when its function could be implemented by a handful of lines in C to my knowledge. Moreover you could use it in pretty much the same manner, i.e. piping into null instead of redirecting into /dev/null . Is there a specific reason for having it as a file? Doesn't making it a file cause many other problems like too many programs accessing the same file?
In addition to the performance benefits of using a character-special device, the primary benefit is modularity . /dev/null may be used in almost any context where a file is expected, not just in shell pipelines. Consider programs that accept files as command-line parameters. # We don't care about log output. $ frobify --log-file=/dev/null # We are not interested in the compiled binary, just seeing if there are errors. $ gcc foo.c -o /dev/null || echo "foo.c does not compile!". # Easy way to force an empty list of exceptions. $ start_firewall --exception_list=/dev/null These are all cases where using a program as a source or sink would be extremely cumbersome. Even in the shell pipeline case, stdout and stderr may be redirected to files independently, something that is difficult to do with executables as sinks: # Suppress errors, but print output. $ grep foo * 2>/dev/null
{ "source": [ "https://unix.stackexchange.com/questions/438130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278472/" ] }
439,349
I am beginning to work with micro controllers and programming them using C language. All my programming experience is with Python language. I know that if I want to test a script I have written in python, I can simply launch terminal and type in “python” with the path of the file I want to run. I tried a web search, but most didn’t seem to understand what I was asking. How do I run c from terminal?
C is not an interpreted language like Python or Perl. You cannot simply type C code and then tell the shell to execute the file. You need to compile the C file with a C compiler like gcc and then execute the binary file it outputs. For example, running gcc file.c will output a binary file with the name a.out . You can then tell the shell to execute the binary file by specifying its path, e.g. ./a.out . Edit: As some comments and other answers have stated, some C interpreters do exist. However, I would argue that C compilers are more popular.
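A minimal end-to-end example, assuming the source file is called hello.c (any name works):
/* hello.c */
#include <stdio.h>
int main(void)
{
    printf("Hello from C\n");
    return 0;
}
Compile it and run the resulting binary from the terminal (here -o names the output file instead of the default a.out):
gcc -o hello hello.c
./hello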
{ "source": [ "https://unix.stackexchange.com/questions/439349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270306/" ] }
439,497
Suppose that I were using sha1pass to generate a hash of some sensitive password on the command line. I can use sha1pass mysecret to generate a hash of mysecret but this has the disadvantage that mysecret is now in the bash history. Is there a way to accomplish the end goal of this command while avoiding revealing mysecret in plain text, perhaps by using a passwd -style prompt? I'm also interested in a generalized way to do this for passing sensitive data to any command. The method would change when the sensitive data is passed as an argument (such as in sha1pass ) or on STDIN to some command. Is there a way to accomplish this? Edit : This question attracted a lot of attention and there have been several good answers offered below. A summary is: As per @Kusalananda's answer , ideally one would never have to give a password or secret as a command-line argument to a utility. This is vulnerable in several ways as described by him, and one should use a better-designed utility that is capable of taking the secret input on STDIN @vfbsilva's answer describes how to prevent things from being stored in bash history @Jonathan's answer describes a perfectly good method for accomplishing this as long as the program can take its secret data on STDIN. As such, I've decided to accept this answer. sha1pass in my OP was just an example, but the discussion has established that better tools exist that do take data on STDIN. as @R.. notes in his answer , use of command expansion on a variable is not safe. So, in summary, I've accepted @Jonathan's answer since it's the best solution given that you have a well-designed and well-behaved program to work with. Though passing a password or secret as a command-line argument is fundamentally unsafe, the other answers provide ways of mitigating the simple security concerns.
If using the zsh or bash shell, use the -s option to the read shell builtin to read a line from the terminal device without it echoing it. IFS= read -rs VARIABLE < /dev/tty Then you can use some fancy redirection to use the variable as stdin. sha1pass <<<"$VARIABLE" If anyone runs ps , all they'll see is "sha1pass". That assumes that sha1pass reads the password from stdin (on one line, ignoring the line delimiter) when not given any argument.
{ "source": [ "https://unix.stackexchange.com/questions/439497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7868/" ] }
439,514
I have a file with one column with names that repeat a number of times each. I want to condense each repeat into one, while keeping any other repeats of the same name that are not adjacent to other repeats of the same name. E.g. I want to turn the left side to the right side: Golgb1 Golgb1 Golgb1 Akna Golgb1 Spata20 Golgb1 Golgb1 Golgb1 Akna Akna Akna Akna Spata20 Spata20 Spata20 Golgb1 Golgb1 Golgb1 Akna Akna Akna This is what I've been using: perl -ne 'print if ++$k{$_}==1' file.txt > file2.txt However, this method only keeps one representative from the left (i.e. Golb1 and Akna are not repeated). Is there a way to keep unique names for each block, while keeping names that repeat in multiple, non-adjacent blocks?
uniq will do this for you: $ uniq inputfile Golgb1 Akna Spata20 Golgb1 Akna
{ "source": [ "https://unix.stackexchange.com/questions/439514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213707/" ] }
439,521
I'm trying to do streaming from a source (tv box transmitting in multicast in this source rtp://@X.X.X.X:Y) to Internet (to my mobile phone as example or another device inside of my LAN) but I can not achieve it. The command I'm using is something like this ffmpeg -i rtp://@X.X.X.X:Y -vcodec copy -f mpegts udp://127.0.0.1:1234 But it does not work as I expected, I mean, I'm able to open vlc and play the streaming in the same machine I'm running ffmpeg but not in another machine in the same LAN. Somebody can help me? Thank you! EDIT: Finally I solved installing a software called "udpxy" that forwards multicast content to the clients. I installed in a raspberry and it works perfect for this purpose. Thank you for all your explanations. It helped me to understand what I want to do and the limitations I have using a transcoder. I guess I can do the same with udpxy with ffmpeg but I can publish directly the TV Box IPs.
{ "source": [ "https://unix.stackexchange.com/questions/439521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287509/" ] }
439,674
In light of memtest86+ not working with UEFI , is there an open source alternative or something I can use from grub to test memory?
Yes, there is, and it is now Memtest86+ v6 itself. This is a new version of Memtest86+, based on PCMemTest , which is a rewrite of Memtest86+ which can be booted from UEFI. Its authors still label it as not ready for production, but it does work in many configurations. Binaries of Memtest86+ v6 are available on memtest.org . Alternatively, the Linux kernel itself contains a memory test tool: the memtest option will run a memory check with up to 17 patterns (currently). If you add memtest to your kernel boot parameters, it will run all tests at boot, and reserve any failing addresses so that they’re not used. If you want fewer tests, you can specify the number of patterns ( memtest=8 for example). This isn’t as extensive as Memtest86+’s tests, but it still gives pretty good results. Some distribution kernels don’t include this feature; you can check whether it’s available by looking for CONFIG_MEMTEST in your kernel configuration (try /boot/config-$(uname -r) ). The kernel won’t complain if you specify memtest but it doesn’t support it; when it does run, you should see output like [ 0.000000] early_memtest: # of tests: 17 [ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern 4c494e5558726c7a [ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern 4c494e5558726c7a [ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern 4c494e5558726c7a [ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern eeeeeeeeeeeeeeee [ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern eeeeeeeeeeeeeeee [ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern eeeeeeeeeeeeeeee [ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern dddddddddddddddd [ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern dddddddddddddddd [ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern dddddddddddddddd [ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern bbbbbbbbbbbbbbbb [ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern bbbbbbbbbbbbbbbb ... while the kernel boots (or in its boot logs, later). You can use QEMU to get a feel for this: qemu-system-x86_64 -kernel /boot/vmlinuz-$(uname -r) -append "memtest console=ttyS0" -nographic (or whichever qemu-system-... is appropriate for your architecture), and look for “early_memtest”. To exit QEMU after the kernel panics, press Ctrl a , c , q , Enter .
{ "source": [ "https://unix.stackexchange.com/questions/439674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
439,680
I'm trying to use shebang with /usr/bin/env form to execute my script under custom interpret. This is how my file looks: $ cat test.rb #!/usr/bin/env winruby print "Input someting: " puts "Got: #{gets}" sleep(100) but it fails when executed: $ ./test.rb /usr/bin/env: ‘winruby’: No such file or directory and I do not understand why tv185035@WCZTV185035-001 ~ $ winruby --version ruby 2.5.1p57 (2018-03-29 revision 63029) [x64-mingw32] tv185035@WCZTV185035-001 ~ $ env winruby --version env: ‘winruby’: No such file or directory tv185035@WCZTV185035-001 ~ $ which winruby /home/tv185035/bin/winruby The winruby exists, is in path and is executable. But env fails to find it. I took a look at man env but it didn't tell me anything useful. EDIT: $ cat ~/bin/winruby #!/usr/bin/bash winpty /cygdrive/g/WS/progs/Ruby25-x64/bin/ruby.exe "$@"
{ "source": [ "https://unix.stackexchange.com/questions/439680", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112194/" ] }
439,689
man test only explains what -n means, wit a lowercase n. How does the capital -N work in this script? #!/bin/bash # Check for an altered certificate (means there was a renew) if [[ -N '/etc/letsencrypt/live/mx1.example.com/fullchain.pem' ]]; then # Reload postfix /bin/systemctl reload postfix # Restart dovecot /bin/systemctl restart dovecot fi
{ "source": [ "https://unix.stackexchange.com/questions/439689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
439,801
I have a Linux based process controller that occasionally locks up to the point where you can't ping it (i.e. I can ping it, then it becomes no longer pingable without any modifications to network settings). I'm curious, what process/system is responsible for actually responding to pings? It appears that this process is crashing.
The kernel network stack handles ICMP messages, which are what the ping command sends. If you do not get replies, then, setting aside network problems or filtering and host-based filtering/rate-limiting/black-holing/etc., it means the machine is probably overloaded by something (which can be transient), or the kernel crashed, which is rare but can happen (faulty hardware, etc.), not necessarily because of the ICMP traffic (though trying to overload it with such traffic can be a good test at the beginning of a server's life to see how it sustains things). In the latter case of a kernel crash you should have ample information in the log files or on the console. Also note that ping is almost always the wrong tool to check whether a service is online or not, for various reasons, but mostly because it does not mimic real application traffic, by definition. For example, if you need to check that a webserver is still live, you should instead do an HTTP query to it (TCP port 80 or 443); if you need to check a mailserver, you do an SMTP query (TCP port 25); if a DNS server, a UDP and a TCP query to port 53; etc.
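For example, to check the actual service rather than relying on ICMP (the host name and ports here are placeholders):
curl -sS -o /dev/null -w '%{http_code}\n' http://controller.example/   # expect an HTTP status code back from a webserver
nc -vz controller.example 22                                           # does anything accept TCP connections on port 22?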
{ "source": [ "https://unix.stackexchange.com/questions/439801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287718/" ] }
440,088
I'm using Ubuntu 16.04 with Bash and I tried to read in Wikipedia , in here and in here , but I failed to understand the meaning of "command substitution" in shell-scripting in general, and in Bash in particular, as in: $(command) or `command` What is the meaning of this term? Edit: When I first published this question I already knew the pure concept of substitution and also the Linux concept of variable substitution (replacing a variable with its value by execution), yet I still missed the purpose of this shell feature from the documentation for whatever reason or group of reasons. My answer after question locked Command substitution is an operation with dedicated syntax to both execute a command and to have this command's output hold (stored) by a variable for later use. An example with date : thedate="$(date)" We can then print the result using the command printf : printf 'The date is %s\n' "$thedate" The command substitution syntax is $() . The command itself is date . Combining both we get $(date) , its value is the result of the substitution (that we could get after execution ). We save that value in a variable, $thedate , for later use. We display the output value held by the variable with printf , per the command above. Note: \n in printf is a line-break.
"Command substitution" is the name of the feature of the shell language that allows you to execute a command and have the output of that command replace (substitute) the text of the command. There is no other feature of the shell language that allows you to do that. A command substitution, i.e. the whole $(...) expression, is replaced by its output, which is the primary use of command substitutions. The command that the command substitution executes, is executed in a subshell, which means it has its own environment that will not affect the parent shell's environment. Not all subshell executions are command substitutions though (see further examples at end). Example showing that a command substitution is executed in a subshell: $ s=123 $ echo "hello $( s=world; echo "$s" )" hello world $ echo "$s" 123 Here, the variable s is set to the string 123 . On the next line, echo is invoked on a string containing the result of a command substitution. The command substitution sets s to the string world and echoes this string. The string world is the output of the command in the command substitution and thus, if this was run under set -x , we would see that the second line above would have been expanded to echo 'hello world' , which produces hello world on the terminal: $ set -x $ echo "hello $( s=world; echo "$s" )" ++ s=world ++ echo world + echo 'hello world' hello world ( bash adds an extra level of + prompts to every level of a command substitution subshell in the trace output, other shells may not do this) Lastly, we show that the command inside the command substitution was run in its own subshell, because it did not affect the value of s in the calling shell (the value of s is still 123 , not world ). There are other situations where commands are executed in subshells, such as in echo 'hello' | read message In bash , unless you set the lastpipe option (only in non-interactive instances), the read is executed in a subshell, which means that $message will not be changed in the parent shell, i.e. doing echo "$message" after the above command will echo an empty string (or whatever value $message was before). A process substitution in bash also executes in a subshell: cat < <( echo 'hello world' ) This too is distinct from a command substitution.
{ "source": [ "https://unix.stackexchange.com/questions/440088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
440,426
When a script runs, commands in it may output some text to stdout/stderr. Bash itself may also output some text. But if a few scripts are running at the same time, it is hard to identify where does an error come from. So is it possible to insert a prefix to all output of the script? Something like: #!/bin/bash prefix 'PREFIX' &2 echo "wrong!" >&2 Then: $ ./script.sh PREFIXwrong!
You can redirect stderr/stdout to a process substitution that adds the prefix of choice. For example, this script: #! /bin/bash exec > >(trap "" INT TERM; sed 's/^/foo: /') exec 2> >(trap "" INT TERM; sed 's/^/foo: (stderr) /' >&2) echo foo echo bar >&2 date Produces this output: foo: foo foo: (stderr) bar foo: Fri Apr 27 20:04:34 IST 2018 The first two lines redirect stdout and stderr respectively to sed commands that add foo: and foo: (stderr) to the input. The calls to the shell built-in command trap make sure that the subshell does not exit when terminating the script with Ctrl+C or by sending the SIGTERM signal using kill $pid . This ensures that your shell won't forcefully terminate your script because the stdout file descriptor disappears when sed exits because it received the termination signal as well. Effectively you can still use exit traps in your main script and sed will still be running to process any output generated while running your exit traps. The subshell should still exit after your main script ends so sed process won't be left running forever.
{ "source": [ "https://unix.stackexchange.com/questions/440426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50636/" ] }
440,558
How can I describe or explain "buffers" in the output of free ? $ free -h total used free shared buff/cache available Mem: 501M 146M 19M 9.7M 335M 331M Swap: 1.0G 85M 938M $ free -w -h total used free shared buffers cache available Mem: 501M 146M 19M 9.7M 155M 180M 331M Swap: 1.0G 85M 938M I don't have any (known) problem with this system. I am just surprised and curious to see that "buffers" is almost as high as "cache" (155M v.s. 180M). I thought "cache" represented the page cache of file contents, and tended to be the most significant part of "cache/buffers". I'm not sure what "buffers" are though. For example, I compared this to my laptop which has more RAM. On my laptop, the "buffers" figure is an order of magnitude smaller than "cache" (200M v.s. 4G). If I understood what "buffers" were then I could start to look at why the buffers grew to such a larger proportion on the smaller system. From man proc (I ignore the hilariously outdated definition of "large"): Buffers %lu Relatively temporary storage for raw disk blocks that shouldn't get tremendously large (20MB or so). Cached %lu In-memory cache for files read from the disk (the page cache). Doesn't include SwapCached. $ free -V free from procps-ng 3.3.12 $ uname -r # the Linux kernel version 4.9.0-6-marvell $ systemd-detect-virt # this is not inside a virtual machine none $ cat /proc/meminfo MemTotal: 513976 kB MemFree: 20100 kB MemAvailable: 339304 kB Buffers: 159220 kB Cached: 155536 kB SwapCached: 2420 kB Active: 215044 kB Inactive: 216760 kB Active(anon): 56556 kB Inactive(anon): 73280 kB Active(file): 158488 kB Inactive(file): 143480 kB Unevictable: 10760 kB Mlocked: 10760 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 513976 kB LowFree: 20100 kB SwapTotal: 1048572 kB SwapFree: 960532 kB Dirty: 240 kB Writeback: 0 kB AnonPages: 126912 kB Mapped: 40312 kB Shmem: 9916 kB Slab: 37580 kB SReclaimable: 29036 kB SUnreclaim: 8544 kB KernelStack: 1472 kB PageTables: 3108 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1305560 kB Committed_AS: 1155244 kB VmallocTotal: 507904 kB VmallocUsed: 0 kB VmallocChunk: 0 kB $ sudo slabtop --once Active / Total Objects (% used) : 186139 / 212611 (87.5%) Active / Total Slabs (% used) : 9115 / 9115 (100.0%) Active / Total Caches (% used) : 66 / 92 (71.7%) Active / Total Size (% used) : 31838.34K / 35031.49K (90.9%) Minimum / Average / Maximum Object : 0.02K / 0.16K / 4096.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 59968 57222 0% 0.06K 937 64 3748K buffer_head 29010 21923 0% 0.13K 967 30 3868K dentry 24306 23842 0% 0.58K 4051 6 16204K ext4_inode_cache 22072 20576 0% 0.03K 178 124 712K kmalloc-32 10290 9756 0% 0.09K 245 42 980K kmalloc-96 9152 4582 0% 0.06K 143 64 572K kmalloc-node 9027 8914 0% 0.08K 177 51 708K kernfs_node_cache 7007 3830 0% 0.30K 539 13 2156K radix_tree_node 5952 4466 0% 0.03K 48 124 192K jbd2_revoke_record_s 5889 5870 0% 0.30K 453 13 1812K inode_cache 5705 4479 0% 0.02K 35 163 140K file_lock_ctx 3844 3464 0% 0.03K 31 124 124K anon_vma 3280 3032 0% 0.25K 205 16 820K kmalloc-256 2730 2720 0% 0.10K 70 39 280K btrfs_trans_handle 2025 1749 0% 0.16K 81 25 324K filp 1952 1844 0% 0.12K 61 32 244K kmalloc-128 1826 532 0% 0.05K 22 83 88K trace_event_file 1392 1384 0% 0.33K 116 12 464K proc_inode_cache 1067 1050 0% 0.34K 97 11 388K shmem_inode_cache 987 768 0% 0.19K 47 21 188K kmalloc-192 848 757 0% 0.50K 106 8 424K kmalloc-512 450 448 0% 0.38K 45 10 180K ubifs_inode_slab 297 200 0% 0.04K 3 99 12K eventpoll_pwq 288 288 100% 1.00K 72 4 288K kmalloc-1024 288 288 
100% 0.22K 16 18 64K mnt_cache 287 283 0% 1.05K 41 7 328K idr_layer_cache 240 8 0% 0.02K 1 240 4K fscrypt_info
What is the difference between "buffers" and the other type of cache? Why is this distinction so prominent? Why do some people say "buffer cache" when they talk about cached file content? What are Buffers used for? Why might we expect Buffers in particular to be larger or smaller? 1. What is the difference between "buffers" and the other type of cache? Buffers shows the amount of page cache used for block devices. "Block devices" are the most common type of data storage device. The kernel has to deliberately subtract this amount from the rest of the page cache when it reports Cached . See meminfo_proc_show() : cached = global_node_page_state(NR_FILE_PAGES) - total_swapcache_pages() - i.bufferram; ... show_val_kb(m, "MemTotal: ", i.totalram); show_val_kb(m, "MemFree: ", i.freeram); show_val_kb(m, "MemAvailable: ", available); show_val_kb(m, "Buffers: ", i.bufferram); show_val_kb(m, "Cached: ", cached); 2. Why is this distinction made so prominent? Why do some people say "buffer cache" when they talk about cached file content? The page cache works in units of the MMU page size, typically a minimum of 4096 bytes. This is essential for mmap() , i.e. memory-mapped file access.[1][2] It is designed to share pages of loaded program / library code between separate processes, and allow loading individual pages on demand. (Also for unloading pages when something else needs the space, and they haven't been used recently). [1] Memory-mapped I/O - The GNU C Library manual. [2] mmap - Wikipedia. Early UNIX had a "buffer cache" of disk blocks, and did not have mmap(). Apparently when mmap() was first added, they added the page cache as a new layer on top. This is as messy as it sounds. Eventually, UNIX-based OS's got rid of the separate buffer cache. So now all file cache is in units of pages. Pages are looked up by (file, offset), not by location on disk. This was called "unified buffer cache", perhaps because people were more familiar with "buffer cache".[3] [3] UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD ("One interesting twist that Linux adds is that the device block numbers where a page is stored on disk are cached with the page in the form of a list of buffer_head structures. When a modified page is to be written back to disk, the I/O requests can be sent to the device driver right away, without needing to read any indirect blocks to determine where the page's data should be written."[3]) In Linux 2.2 there was a separate "buffer cache" used for writes, but not for reads. "The page cache used the buffer cache to write back its data, needing an extra copy of the data, and doubling memory requirements for some write loads".[4] Let's not worry too much about the details, but this history would be one reason why Linux reports Buffers usage separately. [4] Page replacement in Linux 2.4 memory management , Rik van Riel. By contrast, in Linux 2.4 and above, the extra copy does not exist. "The system does disk IO directly to and from the page cache page."[4] Linux 2.4 was released in 2001. 3. What are Buffers used for? Block devices are treated as files, and so have page cache. This is used "for filesystem metadata and the caching of raw block devices".[4] But in current versions of Linux, filesystems do not copy file contents through it, so there is no "double caching". I think of the Buffers part of the page cache as being the Linux buffer cache. Some sources might disagree with this terminology. 
How much buffer cache the filesystem uses, if any, depends on the type of filesystem. The system in the question uses ext4. ext3/ext4 use the Linux buffer cache for the journal, for directory contents, and some other metadata. Certain file systems, including ext3, ext4, and ocfs2, use the jbd or jbd2 layer to handle their physical block journalling, and this layer fundamentally uses the buffer cache. -- Email article by Ted Tso , 2013 Prior to Linux kernel version 2.4, Linux had separate page and buffer caches. Since 2.4, the page and buffer cache are unified and Buffers is raw disk blocks not represented in the page cache—i.e., not file data. ... The buffer cache remains, however, as the kernel still needs to perform block I/O in terms of blocks, not pages. As most blocks represent file data, most of the buffer cache is represented by the page cache. But a small amount of block data isn't file backed—metadata and raw block I/O for example—and thus is solely represented by the buffer cache. -- A pair of Quora answers by Robert Love , last updated 2013. Both writers are Linux developers who have worked with Linux kernel memory management. The first source is more specific about technical details. The second source is a more general summary, which might be contradicted and outdated in some specifics. It is true that filesystems may perform partial-page metadata writes, even though the cache is indexed in pages. Even user processes can perform partial-page writes when they use write() (as opposed to mmap() ), at least directly to a block device. This only applies to writes, not reads. When you read through the page cache, the page cache always reads full pages. Linus liked to rant that the buffer cache is not required in order to do block-sized writes, and that filesystems can do partial-page metadata writes even with page cache attached to their own files instead of the block device. I am sure he is right to say that ext2 does this. ext3/ext4 with its journalling system does not. It is less clear what the issues were that led to this design. The people he was ranting at got tired of explaining. ext4_readdir() has not been changed to satisfy Linus' rant. I don't see his desired approach used in readdir() of other filesystems either. I think XFS uses the buffer cache for directories as well. bcachefs does not use the page cache for readdir() at all; it uses its own cache for btrees. I'm not sure about btrfs. 4. Why might we expect Buffers in particular to be larger or smaller? In this case it turns out the ext4 journal size for my filesystem is 128M. So this explains why 1) my buffer cache can stabilize at slightly over 128M; 2) buffer cache does not scale proportionally with the larger amount of RAM on my laptop. For some other possible causes, see What is the buffers column in the output from free? Note that "buffers" reported by free is actually a combination of Buffers and reclaimable kernel slab memory. To verify that journal writes use the buffer cache, I simulated a filesystem in nice fast RAM (tmpfs), and compared the maximum buffer usage for different journal sizes. # dd if=/dev/zero of=/tmp/t bs=1M count=1000 ... # mkfs.ext4 /tmp/t -J size=256 ... 
# LANG=C dumpe2fs /tmp/t | grep '^Journal size' dumpe2fs 1.43.5 (04-Aug-2017) Journal size: 256M # mount /tmp/t /mnt # cd /mnt # free -w -m total used free shared buffers cache available Mem: 7855 2521 4321 285 66 947 5105 Swap: 7995 0 7995 # for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done # free -w -m total used free shared buffers cache available Mem: 7855 2523 3872 551 237 1223 4835 Swap: 7995 0 7995 # dd if=/dev/zero of=/tmp/t bs=1M count=1000 ... # mkfs.ext4 /tmp/t -J size=16 ... # LANG=C dumpe2fs /tmp/t | grep '^Journal size' dumpe2fs 1.43.5 (04-Aug-2017) Journal size: 16M # mount /tmp/t /mnt # cd /mnt # free -w -m total used free shared buffers cache available Mem: 7855 2507 4337 285 66 943 5118 Swap: 7995 0 7995 # for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done # free -w -m total used free shared buffers cache available Mem: 7855 2509 4290 315 77 977 5086 Swap: 7995 0 7995 History of this answer: How I came to look at the journal I had found Ted Tso's email first, and was intrigued that it emphasized write caching. I would find it surprising if "dirty", unwritten data was able to reach 30% of RAM on my system. sudo atop shows that over a 10 second interval, the system in question consistently writes only 1MB. The filesystem concerned would be able to keep up with over 100 times this rate. (It's on a USB2 hard disk drive, max throughput ~20MB/s). Using blktrace ( btrace -w 10 /dev/sda ) confirms that the IOs which are being cached must be writes, because there is almost no data being read. Also that mysqld is the only userspace process doing IO. I stopped the service responsible for the writes (icinga2 writing to mysql) and re-checked. I saw "buffers" drop to under 20M - I have no explanation for that - and stay there. Restarting the writer again shows "buffers" rising by ~0.1M for each 10 second interval. I observed it maintain this rate consistently, climbing back to 70M and above. Running echo 3 | sudo tee /proc/sys/vm/drop_caches was sufficient to lower "buffers" again, to 4.5M. This proves that my accumulation of buffers is a "clean" cache, which Linux can drop immediately when required. This system is not accumulating unwritten data. ( drop_caches does not perform any writeback and hence cannot drop dirty pages. If you wanted to run a test which cleaned the cache first, you would use the sync command). The entire mysql directory is only 150M. The accumulating buffers must represent metadata blocks from mysql writes, but it surprised me to think there would be so many metadata blocks for this data.
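As a quick cross-check of the accounting described in section 1, the identity Buffers + Cached + SwapCached = nr_file_pages (in kB) can be read straight out of /proc. This is a sketch that simply restates the meminfo_proc_show() arithmetic quoted above; the two figures will only roughly agree, since the counters move between the two reads: $ awk '/^(Buffers|Cached|SwapCached):/ { sum += $2 } END { print "Buffers + Cached + SwapCached =", sum, "kB" }' /proc/meminfo $ echo "nr_file_pages = $(( $(awk '/^nr_file_pages /{ print $2 }' /proc/vmstat) * $(getconf PAGESIZE) / 1024 )) kB"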
{ "source": [ "https://unix.stackexchange.com/questions/440558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
440,582
I want to get the line with the highest length (i.e. the longest line) among all the lines in a file, using the awk command.
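One common awk approach (a sketch — if several lines share the maximum length, this keeps the first one seen): awk 'length > max { max = length; longest = $0 } END { print longest }' file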
{ "source": [ "https://unix.stackexchange.com/questions/440582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288367/" ] }
440,803
I'm trying to use Remmina on Ubuntu to remote into one of the servers at my work. However, after entering the connection information in I get the following error: "You requested an H264 GFX mode for ser [email protected], but your libfreedp does not support H264. Please check colour depth settings." I am quite new to Ubuntu in general so I am not really sure what to do about the above error. Could anybody help me out? Cheers
Quoting from the following GitLab issue link ( remmina issue explained ): in the profile's Basic settings, change the colour depth until you find the one that is supported by your server. If you have trouble finding the profile's Basic settings, check the Remmina user's guide .
{ "source": [ "https://unix.stackexchange.com/questions/440803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288534/" ] }
440,840
I'm trying to install the most up-to-date NVIDIA driver in Debian Stretch. I've downloaded NVIDIA-Linux-x86_64-390.48.run from here , but when I try to do sudo sh ./NVIDIA-Linux-x86_64-390.48.run as suggested, an error message appears. ERROR: An NVIDIA kernel module 'nvidia-drm' appears to already be loaded in your kernel. This may be because it is in use (for example, by an X server, a CUDA program, or the NVIDIA Persistence Daemon), but this may also happen if your kernel was configured without support for module unloading. Please be sure to exit any programs that may be using the GPU(s) before attempting to upgrade your driver. If no GPU-based programs are running, you know that your kernel supports module unloading, and you still receive this message, then an error may have occured that has corrupted an NVIDIA kernel module's usage count, for which the simplest remedy is to reboot your computer. When I try to find out who is using nvidia-drm (or nvidia_drm ), I see nothing. ~$ sudo lsof | grep nvidia-drm lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. ~$ sudo lsof -e /run/user/1000/gvfs | grep nvidia-drm ~$ And when I try to remove it, it says it's being used. ~$ sudo modprobe -r nvidia-drm modprobe: FATAL: Module nvidia_drm is in use. ~$ I have rebooted and started in text-only mode (by pressing Ctrl+Alt+F2 before giving username/password), but I got the same error. Besides it, how do I "know that my kernel supports module unloading"? I'm getting a few warnings on boot up related to nvidia, no idea if they're related, though: Apr 30 00:46:15 debian-9 kernel: nvidia: loading out-of-tree module taints kernel. Apr 30 00:46:15 debian-9 kernel: nvidia: module license 'NVIDIA' taints kernel. Apr 30 00:46:15 debian-9 kernel: Disabling lock debugging due to kernel taint Apr 30 00:46:15 debian-9 kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 375.82 Wed Jul 19 21:16:49 PDT 2017 (using threaded interrupts)
I imagine you want to stop the display manager, which is most likely what is using the Nvidia drivers. After changing to a text console (pressing Ctrl + Alt + F2 ) and logging in as root, use the following command to disable the graphical target, which is what keeps the display manager running: # systemctl isolate multi-user.target At this point, you should be able to unload the Nvidia drivers using modprobe -r (or rmmod directly): # modprobe -r nvidia-drm Once you've managed to replace/upgrade the driver and you're ready to start the graphical environment again, you can use this command: # systemctl start graphical.target
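If modprobe -r still reports the module as busy once the display manager is stopped, it may help to look at the use counts and unload the related Nvidia modules in dependency order — a sketch, and the exact set of modules depends on the driver version: $ lsmod | grep nvidia (the third column is the use count) # modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia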
{ "source": [ "https://unix.stackexchange.com/questions/440840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134202/" ] }
440,844
I'm trying to send a bug report for the app file, /usr/bin/file But consulting the man and sending an email BUGS Please report bugs and send patches to the bug tracker at http://bugs.gw.com/ or the mailing list at ⟨[email protected]⟩ (visit http://mx.gw.com/mailman/listinfo/file first to subscribe). Made me find out the mail address does not exist. Is there another way of communicating to the community? Hopefully this here question is already part of it :) So here's my email: possible feature failure: the --extension option doesn't seem to output anything $ file --extension "ab.gif" ab.gif: ??? It would be useful to easily be able to use the output of this to rename a file to its correct extension. something like file --likely_extension would only output the likely detected extension or an error if the detection was too low like thus: $ file --likely_extension "ab.gif" gif better though would be a --correct_extension option: $ file --correct_extension "ab.jpg" $ ls ab.gif Tyvm for this app :)
You are following the correct procedure to file an issue or enhancement request: if a program’s documentation mentions how to do so, follow those instructions. Unfortunately it often happens that projects die, or that the instructions in the version you have are no longer accurate. In these cases, things become a little harder. One possible general approach is to file a bug with your distribution; success there can be rather hit-or-miss though... (I should mention that it’s usually better to report a bug to the distribution you got your package from, if you’re using a package; this is especially true if the packaged version is older than the current “upstream” version, and if you haven’t checked whether the issue is still present there.) For file specifically, the official documentation has been updated to mention that the bug tracker and mailing list are down, and it also provides a direct email address for the current maintainer, which you could use to contact him.
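For the "file a bug with your distribution" route, most distributions ship a helper that collects the relevant package information for you — for example (a sketch; which tool applies depends on your distribution): $ reportbug file (Debian: file a bug against the "file" package) or $ ubuntu-bug file (Ubuntu: collect data and open a Launchpad report).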
{ "source": [ "https://unix.stackexchange.com/questions/440844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150158/" ] }
441,151
$ ls -l /usr/bin/sudo -rwsr-xr-x 1 root root 136808 Jul 4 2017 /usr/bin/sudo so sudo is runnable by any user, and any user who runs sudo will have root as the effective user ID of the process, because the set-user-id bit of /usr/bin/sudo is set. From https://unix.stackexchange.com/a/11287/674 the most visible difference between sudo and su is that sudo requires the user's password and su requires root's password. Which user's password does sudo ask for? Is it the user represented by the real user ID of the process? If yes, can't any user gain superuser privileges by running sudo and then providing their own password? Can Linux restrict that for some users? Is it correct that sudo asks for the password after execve() starts to execute main() of /usr/bin/sudo ? Since the euid of the process has already been changed to root (because the set-user-id bit of /usr/bin/sudo is set), what is the point of sudo asking for a password later? Thanks. I have read https://unix.stackexchange.com/a/80350/674 , but it doesn't answer the questions above.
In its most common configuration, sudo asks for the password of the user running sudo (as you say, the user corresponding to the process’ real user id). The point of sudo is to grant extra privileges to specific users (as determined by the configuration in sudoers ), without those users having to provide any other authentication than their own. However, sudo does check that the user running sudo really is who they claim to be, and it does that by asking for their password (or whatever authentication mechanism is set up for sudo , usually using PAM — so this could involve a fingerprint, or two-factor authentication etc.). sudo doesn’t necessarily grant the right to become root, it can grant a variety of privileges. Any user allowed to become root by sudoers can do so using only their own authentication; but a user not allowed to, can’t (at least, not by using sudo ). This isn’t enforced by Linux itself, but by sudo (and its authentication setup). sudo does indeed ask for the password after it’s started running; it can’t do otherwise ( i.e. it can’t do anything before it starts running). The point of sudo asking for the password, even though it’s root, is to verify the running user’s identity (in its typical configuration).
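For illustration, here is a hypothetical sudoers fragment (always edited with visudo; the user names are made up) showing how the granted privilege can range from "anything, as anyone" down to a single command: alice ALL=(ALL:ALL) ALL — alice may run any command as any user, after giving her own password; backup ALL=(root) NOPASSWD: /usr/bin/rsync — backup may only run rsync as root, with no password prompt. A user with no matching sudoers entry gets nothing, no matter what password they type.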
{ "source": [ "https://unix.stackexchange.com/questions/441151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
441,434
If I create a file as an unprivileged user, and change the permissions mode to 400 , it's seen by that user as read-only, correctly: $ touch somefile $ chmod 400 somefile $ [ -w somefile ] && echo rw || echo ro ro All is well. But then root comes along: # [ -w somefile ] && echo rw || echo ro rw What the heck? Sure, root can write to read-only files, but it shouldn't make a habit of it: Best Practice would tend to dictate that I should be able to test for the write permission bit, and if it's not, then it was set that way for a reason. I guess I want to understand both why this is happening, and how can I get a false return code when testing a file that doesn't have the write bit set?
test -w aka [ -w doesn't check the file mode. It checks if it's writable. For root, it is. $ help test | grep '\-w' -w FILE True if the file is writable by you. The way I would test would be to do a bitwise comparison against the output of stat(1) (" %a Access rights in octal"). (( 0$(stat -c %a somefile) & 0200 )) && echo rw || echo ro Note the subshell $(...) needs a 0 prefixed so that the output of stat is interpreted as octal by (( ... )) .
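If the intent is to treat the file as read-only whenever no write bit at all is set (owner, group or other), mask with 0222 instead of 0200. A small sketch wrapping the same idea (GNU stat assumed): is_mode_writable() { (( 0$(stat -c %a "$1") & 0222 )); } — the function succeeds if any write bit is set in the file's permission bits, so is_mode_writable somefile && echo rw || echo ro behaves the same way for root as for an ordinary user.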
{ "source": [ "https://unix.stackexchange.com/questions/441434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67464/" ] }
441,438
I have been trying to install GUI, i.e. gnome and lxde, into Debian 9 stretch in google cloud computing instance. I have even increased the cpu, ram, harddisk size. However, the installation is always stuck at "Setting up dbus (1.10.26-0+deb9u1)" My last attempt is letting it sit for 6 hours now. It's still stuck there. What can I do? Thanks and Regards Edit1: I found this line. Does this have to do with this error? Setting up rtkit (0.11-4+b1) ... Created symlink /etc/systemd/system/graphical.target.wants/rtkit-daemon.service → /lib/systemd/system/rtkit-daemon.service. Job for rtkit-daemon.service failed because a timeout was exceeded. See "systemctl status rtkit-daemon.service" and "journalctl -xe" for details. rtkit-daemon.service couldn't start. Edit2: I shutdown the instance and get thefollowing. Not sure if this may mean anything or not related at all - again because I forced shutdown the system. Setting up dbus (1.10.26-0+deb9u1) ... Job for dbus.service canceled. invoke-rc.d: initscript dbus, action "start" failed. ● dbus.service - D-Bus System Message Bus Loaded: loaded (/lib/systemd/system/dbus.service; static; vendor preset: enabled) Active: failed (Result: exit-code) since Wed 2018-05-02 21:38:17 UTC; 31min ago Docs: man:dbus-daemon(1) Main PID: 15748 (code=exited, status=1/FAILURE) May 02 21:37:52 instance-2 systemd[1]: Started D-Bus System Message Bus. May 02 21:37:52 instance-2 dbus-daemon[15748]: Failed to start message bus: Could not get UID and GID for username "messagebus" May 02 21:38:17 instance-2 systemd[1]: dbus.service: Main process exited, code=exited, status=1/FAILURE May 02 21:38:17 instance-2 systemd[1]: dbus.service: Unit entered failed state. May 02 21:38:17 instance-2 systemd[1]: dbus.service: Failed with result 'exit-code'. dpkg: error processing package dbus (--configure): subprocess installed post-installation script returned error exit status 1
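The clue is in the output added in Edit2: "Failed to start message bus: Could not get UID and GID for username "messagebus"". dbus cannot start because the messagebus system account is missing, and the package's post-installation script then hangs waiting for the service to come up. A possible way out — this is a sketch, assuming the account really is absent, and not necessarily the exact command the dbus maintainer scripts use — is to recreate the account and let dpkg finish configuring: $ getent passwd messagebus (if this prints nothing, the account is missing) $ sudo adduser --system --group --home /nonexistent --no-create-home --disabled-password messagebus $ sudo dpkg --configure -a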
{ "source": [ "https://unix.stackexchange.com/questions/441438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289033/" ] }
441,740
I want to get the BIOS version from Linux without going directly to the BIOS. I mean, is there a way to get the BIOS version from inside Linux?
Without superuser privileges It is as simple as reading the following file: $ cat /sys/class/dmi/id/bios_version 1.1.3 With superuser privileges Use dmidecode : $ sudo dmidecode -s bios-version 1.1.3 Also, you might have to install this package, which is available in: Linux i386, x86-64, ia64 FreeBSD i386, amd64 NetBSD i386, amd64 OpenBSD i386, amd64 BeOS i386 Solaris x86 Haiku i586
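The neighbouring sysfs files and dmidecode sections carry related information, if the vendor or release date is also needed — a short sketch: $ cat /sys/class/dmi/id/bios_vendor /sys/class/dmi/id/bios_date and $ sudo dmidecode -t bios (the full BIOS section, including characteristics).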
{ "source": [ "https://unix.stackexchange.com/questions/441740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244823/" ] }
442,323
Where can I get the original Unix (from the year 1969)? I would like to look at the source code of the original Unix.
The closest you can get to the feel of a contemporary system, freely available on the Internet and pretty much tested and ready to run, is a version 7 disk image running under the PDP-11 SimH emulator — and even a System III disk image, with the actual C sources, also under PDP-11 emulation in SimH. See my post with step-by-step instructions on how to download Unix version 7 and get it running after installing SimH . The original site has some inconsistencies: the original instructions are for an older SimH version, and are missing some procedures that need to be done after booting: Link to my answer in Retro Computing explaining how to boot the PDP-11 version 7 disk image SimH runs on several platforms, including MacOS, DOS (I think) and Linux. For installing SimH in Debian, the corresponding package is: simh See https://packages.debian.org/jessie/otherosfs/simh Package: simh (3.8.1-5) Emulators for 33 different computers This is the SIMH set of emulators for 33 different computers: DEC PDP-1, PDP-4, PDP-7, PDP-8, PDP-9, DEC PDP-10, PDP-11 ... To install it in Debian: sudo apt-get install simh After installation, you will have a binary called pdp11 for emulating the PDP-11. After this you can follow my answer, in the first link of this answer, on our sister site Retro Computing, as it is oriented to the same SimH version. As per the @user996142 comment, you can nowadays find the version 7 Unix source code tree at https://github.com/dspinellis/unix-history-repo As an alternative, there is a port of V7 for x86/Intel. A VM for VMware and VirtualBox can be downloaded here: http://www.nordier.com/v7x86/releases/v7x86-0.8a-vm.zip ; you boot the VM, log in as "guest", run su and enter the password "password". I think its main use is for teaching purposes. More interesting yet is a System III disk image that was made from recovered tape(s), which can also be run under the PDP-11 emulator in SimH. System III has many more lines of kernel source code written in C, and more utilities. The system more closely resembles Unix as we know it today. The tape/disk image also comes with the source code tree, in /usr/local/src (I would have to check the exact directory), which can be read, changed and compiled inside the emulator, sparing you much of the effort of rebuilding and modifying legacy code if you want to test out some modifications. Obviously, the utilities are much smaller than today's, and such a system is much easier to understand, rebuild and hack for pedagogic purposes. The HOW-TO for using and building the System III image emulation for SimH is here: http://mailman.trailing-edge.com/pipermail/simh/2009-May/002382.html ; however, the download links there no longer work; nonetheless I managed to find a working download link for the System III version here: https://unixarchive.tliquest.net/PDP-11/Distributions/usdl/SysIII/ PS. I built my working System III SimH PDP-11 emulation disk image from those files.
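If you only want to read the early source trees, without booting an emulator at all, the unix-history-repo mentioned above keeps each research edition on its own branch/tag; a quick way to explore it (a sketch — rather than guessing a branch name, list them and pick the Seventh Edition one): $ git clone https://github.com/dspinellis/unix-history-repo $ cd unix-history-repo $ git branch -a | grep -i v7 (show the Seventh Edition branches) $ git checkout <the-branch-printed-above>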
{ "source": [ "https://unix.stackexchange.com/questions/442323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/283800/" ] }
442,510
I am installing a huge program, which has its resources as an rpm file. It gets stuck at this part of a script: #!/bin/sh SCITEGICPERLBIN=`dirname $0` SCITEGICPERLHOME=`dirname $SCITEGICPERLBIN` if [ $SCITEGICPERLHOME == "." ] Apparently, sh stands in for bash on Red Hat Linux and accepts this syntax, but it gives an "unexpected operator" error on Ubuntu. I cannot change the script to use bash, as the script comes from the rpm package. I can extract and repack the rpm package, but there might be many such scripts. Is there a way to change the default shell so that #!/bin/sh is treated as bash, or as anything else that can handle this [ usage?
To switch sh to bash (instead of dash , the default), reconfigure dash (yes, it’s somewhat counter-intuitive): sudo dpkg-reconfigure dash This will ask whether you want dash to be the default system shell; answer “No” ( Tab then Enter ) and bash will become the default instead ( i.e. /bin/sh will point to /bin/bash ).
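If this needs to happen non-interactively (in a provisioning script, for instance), the same switch can be made through debconf — a sketch, assuming the debconf question is still named dash/sh as in current Debian/Ubuntu releases: $ echo "dash dash/sh boolean false" | sudo debconf-set-selections $ sudo DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash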
{ "source": [ "https://unix.stackexchange.com/questions/442510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }