Dataset columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
366,581
Is there a way to print an entire array ([key]=value) without looping over all elements? Assume I have created an array with some elements: declare -A array array=([a1]=1 [a2]=2 ... [b1]=bbb ... [f500]=abcdef) I can print back the entire array with for i in "${!array[@]}" do echo "${i}=${array[$i]}" done However, it seems bash already knows how to get all array elements in one "go" - both keys ${!array[@]} and values ${array[@]} . Is there a way to make bash print this info without the loop? Edit: typeset -p array does that! However I can't remove both prefix and suffix in a single substitution: a="$(typeset -p array)" b="${a##*(}" c="${b%% )*}" Is there a cleaner way to get/print only the key=value portion of the output?
I think you're asking two different things there. Is there a way to make bash print this info without the loop? Yes, but they are not as good as just using the loop. Is there a cleaner way to get/print only the key=value portion of the output? Yes, the for loop. It has the advantages that it doesn't require external programs, is straightforward, and makes it rather easy to control the exact output format without surprises. Any solution that tries to handle the output of declare -p ( typeset -p ) has to deal with a) the possibility of the variables themselves containing parentheses or brackets, b) the quoting that declare -p has to add to make its output valid input for the shell. For example, your expansion b="${a##*(}" eats some of the values if any key/value contains an opening parenthesis. This is because you used ## , which removes the longest prefix. Same for c="${b%% )*}" . Though you could of course match the boilerplate printed by declare more exactly, you'd still have a hard time if you didn't want all the quoting it does. This doesn't look very nice unless you need it. $ declare -A array=([abc]="'foobar'" [def]='"foo bar"') $ declare -p array declare -A array='([def]="\"foo bar\"" [abc]="'\''foobar'\''" )' With the for loop, it's easier to choose the output format as you like: # without quoting $ for x in "${!array[@]}"; do printf "[%s]=%s\n" "$x" "${array[$x]}" ; done [def]="foo bar" [abc]='foobar' # with quoting $ for x in "${!array[@]}"; do printf "[%q]=%q\n" "$x" "${array[$x]}" ; done [def]=\"foo\ bar\" [abc]=\'foobar\' From there, it's also simple to change the output format otherwise (remove the brackets around the key, put all key/value pairs on a single line...). If you need quoting for something other than the shell itself, you'll still need to do it by yourself, but at least you have the raw data to work on. (If you have newlines in the keys or values, you are probably going to need some quoting.) With a current Bash (4.4, I think), you could also use printf "[%s]=%s" "${x@Q}" "${array[$x]@Q}" instead of printf "%q=%q" . It produces a somewhat nicer quoted format, but is of course a bit more work to remember to write. (And it quotes the corner case of @ as array key, which %q doesn't quote.) If the for loop seems too tiresome to write, save it as a function somewhere (without quoting here): printarr() { declare -n __p="$1"; for k in "${!__p[@]}"; do printf "%s=%s\n" "$k" "${__p[$k]}" ; done ; } And then just use that: $ declare -A a=([a]=123 [b]="foo bar" [c]="(blah)") $ printarr a a=123 b=foo bar c=(blah) Works with indexed arrays, too: $ b=(abba acdc) $ printarr b 0=abba 1=acdc
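A newer-bash addendum (an assumption beyond the answer above: bash 5.1 or later): the @K parameter transformation prints an associative array directly as quoted key-value pairs, with no loop and no declare -p boilerplate:

    $ declare -A array=([a1]=1 [b1]=bbb)
    $ echo "${array[@]@K}"     # prints: a1 "1" b1 "bbb" (exact quoting varies with content)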
{ "source": [ "https://unix.stackexchange.com/questions/366581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77319/" ] }
366,949
I am using OpenStack Cloud and using LVM on RHEL 7 to manage volumes. As per my use case, I should be able to detach and attach these volumes to different instances. While updating fstab, I have used defaults,nofail for now, but I am not sure what exactly I should be using. I am aware of these options: rw, nofail, noatime, discard, defaults But I don't know how to use them. What should be the ideal configuration for my use case?
As said by @ilkkachu, if you take a look at the mount(8) manpage, all your doubts should go away. Quoting the manpages: -w, --rw, --read-write Mount the filesystem read/write. This is the default. A synonym is -o rw. Means : Not needed at all, since rw is the default, and it is part of the defaults option. nofail Do not report errors for this device if it does not exist. Means : If the device is not available when you boot and fstab mounts it, no errors will be reported. You will need to decide whether a disk can be ignored if it is not mounted. Pretty useful for USB drives, but I see no point in using this on a server... noatime Do not update inode access times on this filesystem (e.g., for faster access on the news spool to speed up news servers). Means : No read operation is a "pure" read operation on filesystems. Even if you only cat file , for example, a little write operation will update the last time the inode of this file was accessed. It's pretty useful in some situations (like caching servers), but it can be dangerous if used with sync technologies like Dropbox. I'm no one to judge here what is best for you, whether noatime is set or ignored... discard/nodiscard Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed. This is useful for SSD devices and sparse/thinly-provisioned LUNs, but it is off by default until sufficient testing has been done. Means : the TRIM feature of SSDs. Take your time to read up on this one, and check whether your SSD supports the feature (pretty much all modern SSDs support it). hdparm -I /dev/sdx | grep "TRIM supported" will tell you if TRIM is supported on your SSD. As of today, you can achieve better performance and data health with periodic trimming instead of continuous trimming in your fstab . There is even an in-kernel device blacklist for continuous trimming, since it can cause data corruption due to non-queued operations. defaults Use default options: rw, suid, dev, exec, auto, nouser, and async. tl;dr: for your question, rw can be removed ( defaults already implies rw), nofail is up to you, noatime is up to you, and in the same way discard just depends on your hardware's features.
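Putting that tl;dr into practice, a minimal fstab entry for a detachable volume could look like this sketch (the UUID, mount point and filesystem type are placeholders; adjust them to your volume):

    # /etc/fstab - illustrative entry for a detachable OpenStack volume
    UUID=0a1b2c3d-placeholder  /mnt/data  ext4  defaults,nofail,noatime  0  2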
{ "source": [ "https://unix.stackexchange.com/questions/366949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135219/" ] }
367,008
On Unix systems path names usually have virtually no length limitation (well, 4096 characters on Linux)... except for socket file paths, which are limited to around 100 characters (107 characters on Linux). First question: why such a low limitation? I've checked that it seems possible to work around this limitation by changing the current working directory and creating, in various directories, several socket files all using the same path ./myfile.sock : the client applications seem to correctly connect to the expected server processes even though lsof shows all of them listening on the same socket file path. Is this workaround reliable or was I just lucky? Is this behavior specific to Linux or may this workaround be applicable to other Unixes as well?
Compatibility with other platforms, or compatibility with older stuff, to avoid overruns while using snprintf() and strncpy() . Michael Kerrisk explains in his book on page 1165 - Chapter 57, Sockets: Unix domain : SUSv3 doesn't specify the size of the sun_path field. Early BSD implementations used 108 and 104 bytes, and one contemporary implementation (HP-UX 11) uses 92 bytes. Portable applications should code to this lower value, and use snprintf() or strncpy() to avoid buffer overruns when writing into this field. Docker guys even made fun of it, because some sockets were 110 characters long: lol 108 chars ETOOMANY This is why Linux uses a 108-character socket path. Could this be changed? Of course. And this is the reason why this limitation was created in the first place on older operating systems: Why is the maximal path length allowed for unix-sockets on linux 108? Quoting the answer: It was to match the space available in a handy kernel data structure. Quoting "The Design and Implementation of the 4.4BSD Operating System" by McKusick et. al. (page 369): The memory management facilities revolve around a data structure called an mbuf. Mbufs, or memory buffers, are 128 bytes long, with 100 or 108 bytes of this space reserved for data storage. Other OSes (Unix domain sockets): OpenBSD : 104 characters FreeBSD : 104 characters Mac OS X 10.9 : 104 characters
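About the workaround in the question: what must fit in sun_path is the path string handed to bind()/connect(), not the resolved absolute path, and socket files are looked up through normal path resolution. That is why binding a short relative path from inside a deep directory works, and it is standard behavior rather than luck. A quick illustration (hedged; assumes a netcat build with -U support for Unix sockets):

    mkdir -p /some/very/deeply/nested/directory/tree   # absolute path may exceed 108 bytes
    cd /some/very/deeply/nested/directory/tree
    nc -lU ./myfile.sock &    # binds fine: "./myfile.sock" is well under the limit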
{ "source": [ "https://unix.stackexchange.com/questions/367008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53965/" ] }
367,055
I have a file that has trash (a binary header and footer) at the beginning and end. I would like to know how to nuke these bytes. For an example, let's assume 25 bytes from the beginning and 2 bytes from the end. I know I can use truncate and dd, but truncate doesn't work with a stream and it seems kind of kludgy to run two commands on the hard file. It would be nicer if truncate , knowing how big the file was, could cat the file to dd . Or, is there a nicer way to do this?
You can combine GNU tail and head : tail -c +26 file | head -c -2 will output the contents of file starting at byte 26, and stopping two bytes (minus two -2 ) before the end. ( -c operates on bytes, not characters.)
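A quick way to convince yourself on a synthetic file (illustrative; the first 25 bytes and the last 2 stand in for the junk):

    printf 'HEADER-25-bytes-of-junk..PAYLOADxx' > file
    tail -c +26 file | head -c -2    # prints only PAYLOAD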
{ "source": [ "https://unix.stackexchange.com/questions/367055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
367,070
I want to scp files from machineA into my machineB and this is how I am doing it. I am copying a bunch of files one by one from the primary folder of machineA to the primary folder of machineB, and from the secondary folder of machineA to the secondary folder of machineB. trinity@machineB:~$ scp trinity@machineA:/data01/primary/* /data01/primary/ trinity@machineB:~$ scp trinity@machineA:/data02/secondary/* /data02/secondary/ Is there any way by which I can copy multiple files in parallel? Like five files at a time from a folder? So instead of copying one file at a time, I want to copy five files from the primary or secondary folders respectively. Basically I want to copy whatever is there in the primary and secondary folders of machineA into machineB in parallel. I also have GNU Parallel installed on my box if I can use that. I tried the command below but it doesn't work. I was expecting that it should copy 5 files in parallel at a time until everything gets copied from that folder. parallel -j 5 scp trinity@machineA:/data01/primary/* /data01/primary/ Anything wrong with my parallel syntax? What is the best way by which I can copy five files in parallel from a remote folder until everything gets copied from it?
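The quoted parallel invocation cannot work as written: the remote glob is expanded locally (or passed through literally), so parallel receives no input to distribute. A common pattern instead (a sketch; it assumes GNU parallel, key-based ssh to machineA, and file names without whitespace) is to list the remote files first and let each job copy one of them:

    ssh trinity@machineA 'ls /data01/primary/' |
      parallel -j 5 scp 'trinity@machineA:/data01/primary/{}' /data01/primary/
    ssh trinity@machineA 'ls /data02/secondary/' |
      parallel -j 5 scp 'trinity@machineA:/data02/secondary/{}' /data02/secondary/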
{ "source": [ "https://unix.stackexchange.com/questions/367070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64455/" ] }
367,108
I know what a while loop is. However, I've only seen it work with: while [condition] while ![condition] while TRUE (infinite loop) where the statement after while has to be either TRUE or FALSE . There is a shell builtin command named : . It is described as a dummy command doing nothing, but I do not know if it is the same here, or whether it can be TRUE or FALSE . Maybe it is something different, but what?
The syntax is: while first list of commands do second list of commands done which runs the second list of commands in a loop as long as the first list of commands (so the last run in that list) is successful. In that first list of commands , you can use the [ command to do various kinds of tests, or you can use the : null command that does nothing and returns success, or any other command. while :; do cmd; done Runs cmd over and over forever as : always returns success. That's the forever loop. You could use the true command instead to make it more legible: while true; do cmd; done People used to prefer : as : was always builtin while true was not (a long time ago; most shells have true builtin nowadays)¹. Other variants you might see: while [ 1 ]; do cmd; done Above, we're calling the [ command to test whether the "1" string is non-empty (so always true as well) while ((1)); do cmd; done Using the Korn/bash/zsh ((...)) syntax to mimic the while(1) { ...; } of C. Or more convoluted ones like until false; do cmd; done , until ! true ... Those are sometimes aliased like: alias forever='while :; do' So you can do something like: forever cmd; done Few people realise that the condition is a list of commands. For instance, you see people writing: while :; do cmd1 cmd2 || break cmd3 done When they could have written: while cmd1 cmd2 do cmd3 done It does make sense for it to be a list as you often want to do things like while cmd1 && cmd2; do...; done which are command lists as well. In any case, note that [ is a command like any other (though it's built-in in modern Bourne-like shells), it doesn't have to be used solely in the if / while / until condition lists, and those condition lists don't have to use that command more than any other command. ¹ : is also shorter and accepts arguments (which it ignores). While the behaviour of true or false is unspecified if you pass it any argument. So one may do for instance: while : you wait; do something done But, the behaviour of: until false is true; do something done is unspecified (though it would work in most shell/ false implementations).
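As a practical illustration of the condition being an arbitrary command list, a retry loop needs no [ at all (example.com is a placeholder URL; assumes curl is available):

    while ! curl -fsS https://example.com/health >/dev/null
    do
        sleep 5    # keep retrying until the check succeeds
    done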
{ "source": [ "https://unix.stackexchange.com/questions/367108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229576/" ] }
367,138
According to a rapid7 article there are some vulnerable Samba versions allowing remote code execution on Linux systems: While the WannaCry ransomworm impacted Windows systems and was easily identifiable, with clear remediation steps, the Samba vulnerability will impact Linux and Unix systems and could present significant technical obstacles to obtaining or deploying appropriate remediations. CVE-2017-7494 All versions of Samba from 3.5.0 onwards are vulnerable to a remote code execution vulnerability, allowing a malicious client to upload a shared library to a writable share, and then cause the server to load and execute it. Possible attack scenario: Starting from two factors: The Samba vulnerability isn't fixed yet on some Linux distributions. There is an unpatched local privilege escalation vulnerability on some Linux kernel versions (for example, CVE-2017-7308 on the 4.8.0-41-generic Ubuntu kernel). An attacker can access a Linux machine and elevate privileges using the local exploit to gain root access and install possible future ransomware, similar to this mock-up WannaCry ransomware for Linux . Update A more recent article, "Warning! Hackers Started Using "SambaCry Flaw" to Hack Linux Systems" , demonstrates how to use the SambaCry flaw to infect a Linux machine. The prediction turned out to be quite accurate, as honeypots set up by the team of researchers from Kaspersky Lab have captured a malware campaign that is exploiting the SambaCry vulnerability to infect Linux computers with cryptocurrency mining software. Another security researcher, Omri Ben Bassat, independently discovered the same campaign and named it "EternalMiner" . According to the researchers, an unknown group of hackers has started hijacking Linux PCs just a week after the Samba flaw was disclosed publicly, installing an upgraded version of "CPUminer," a cryptocurrency mining software that mines the "Monero" digital currency. After compromising the vulnerable machines using the SambaCry vulnerability, attackers execute two payloads on the targeted systems: INAebsGB.so - A reverse shell that provides remote access to the attackers. cblRWuoCc.so - A backdoor that includes cryptocurrency mining utilities - CPUminer. TrendLabs report posted on July 18, 2017: Linux Users Urged to Update as a New Threat Exploits SambaCry How do I secure a Linux system to prevent it being attacked?
This new Samba vulnerability is already being called "Sambacry", while the exploit itself mentions "Eternal Red Samba", announced on Twitter (sensationally) as: Samba bug, the metasploit one-liner to trigger is just: simple.create_pipe("/path/to/target.so") Potentially affected Samba versions are from Samba 3.5.0 to 4.5.4/4.5.10/4.4.14. If your Samba installation meets the configuration described below, the fix/upgrade should be done ASAP, as there are already exploits , another exploit in python , and metasploit modules out there. Interestingly enough, there are already add-ons to dionaea , a known honeypot from the honeynet project, as both WannaCry and SambaCry plug-ins . SambaCry already seems to be (ab)used to install more crypto-miners ("EternalMiner") or to double as a malware dropper in the future . honeypots set up by the team of researchers from Kaspersky Lab have captured a malware campaign that is exploiting SambaCry vulnerability to infect Linux computers with cryptocurrency mining software. Another security researcher, Omri Ben Bassat, independently discovered the same campaign and named it "EternalMiner." The advised workaround for systems with Samba installed (which is also present in the CVE notice), before updating it, is adding to smb.conf : nt pipe support = no (and restarting the Samba service) This is supposed to disable a setting that turns on/off the ability to make anonymous connections to the Windows IPC named-pipes service. From man samba : This global option is used by developers to allow or disallow Windows NT/2000/XP clients the ability to make connections to NT-specific SMB IPC$ pipes. As a user, you should never need to override the default. However, from our internal experience, it seems the fix is not compatible with older? Windows versions ( at least some? Windows 7 clients seem to not work with nt pipe support = no ), and as such the remediation route can in extreme cases go as far as installing or even compiling Samba. More specifically, this fix disables share listing from Windows clients, and if it is applied, they have to manually specify the full path of the share to be able to use it. Another known workaround is to make sure Samba shares are mounted with the noexec option. This will prevent the execution of binaries residing on the mounted filesystem. The official security source code patch is here from the samba.org security page . Debian already pushed an update out the door yesterday (24/5), with the corresponding security notice DSA-3860-1 samba . To verify whether the vulnerability is corrected in CentOS/RHEL/Fedora and derivatives, do: # rpm -q --changelog samba | grep -i CVE - resolves: #1450782 - Fix CVE-2017-7494 - resolves: #1405356 - CVE-2016-2125 CVE-2016-2126 - related: #1322687 - Update CVE patchset There is now an nmap detection script, samba-vuln-cve-2017-7494.nse , for detecting Samba versions, and a much better nmap script that checks whether the service is vulnerable, at http://seclists.org/nmap-dev/2017/q2/att-110/samba-vuln-cve-2017-7494.nse ; copy it to /usr/share/nmap/scripts and then update the nmap database, or run it as follows: nmap --script /path/to/samba-vuln-cve-2017-7494.nse -p 445 <target> About long-term measures to protect the Samba service: the SMB protocol should never be offered directly to the Internet at large. It also goes without saying that SMB has always been a convoluted protocol, and that these kinds of services ought to be firewalled and restricted to the internal networks to which they are being served.
When remote access is needed, whether to home or especially to corporate networks, that access is better provided using VPN technology. As usual, in these situations the Unix principle of installing and activating only the minimum services required pays off. Taken from the exploit itself: Eternal Red Samba Exploit -- CVE-2017-7494. Causes vulnerable Samba server to load a shared library in root context. Credentials are not required if the server has a guest account. For remote exploit you must have write permissions to at least one share. Eternal Red will scan the Samba server for shares it can write to. It will also determine the fullpath of the remote share. For local exploit provide the full path to your shared library to load. Your shared library should look something like this extern bool change_to_root_user(void); int samba_init_module(void) { change_to_root_user(); /* Do what thou wilt */ } It is also known that systems with SELinux enabled are not vulnerable to the exploit. See 7-Year-Old Samba Flaw Lets Hackers Access Thousands of Linux PCs Remotely According to the Shodan computer search engine, more than 485,000 Samba-enabled computers exposed port 445 on the Internet, and according to researchers at Rapid7, more than 104,000 internet-exposed endpoints appeared to be running vulnerable versions of Samba, out of which 92,000 are running unsupported versions of Samba. Since Samba is the SMB protocol implemented on Linux and UNIX systems, some experts are saying it is "the Linux version of EternalBlue," used by the WannaCry ransomware. ...or should I say SambaCry? Keeping in mind the number of vulnerable systems and the ease of exploiting this vulnerability, the Samba flaw could be exploited at large scale with wormable capabilities. Home networks with network-attached storage (NAS) devices [that also run Linux] could also be vulnerable to this flaw. See also A wormable code-execution bug has lurked in Samba for 7 years. Patch now! The seven-year-old flaw, indexed as CVE-2017-7494, can be reliably exploited with just one line of code to execute malicious code, as long as a few conditions are met. Those requirements include vulnerable computers that: (a) make file- and printer-sharing port 445 reachable on the Internet, (b) configure shared files to have write privileges, and (c) use known or guessable server paths for those files. When those conditions are satisfied, remote attackers can upload any code of their choosing and cause the server to execute it, possibly with unfettered root privileges, depending on the vulnerable platform. Given the ease and reliability of exploits, this hole is worth plugging as soon as possible. It's likely only a matter of time until attackers begin actively targeting it. Also Rapid 7 - Patching CVE-2017-7494 in Samba: It's the Circle of Life And more SambaCry: The Linux Sequel to WannaCry . Need-to-Know Facts CVE-2017-7494 has a CVSS Score of 7.5 (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H). Threat Scope A shodan.io query of "port:445 !os:windows" shows approximately one million non-Windows hosts that have tcp/445 open to the Internet, more than half of which exist in the United Arab Emirates (36%) and the U.S. (16%). While many of these may be running patched versions, have SELinux protections, or otherwise don't match the necessary criteria for running the exploit, the possible attack surface for this vulnerability is large. P.S. The commit fix in the Samba GitHub project appears to be commit 02a76d86db0cbe79fcaf1a500630e24d961fa149
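For reference, the temporary mitigation quoted above amounts to this one-line configuration change (restart the Samba service afterwards):

    # /etc/samba/smb.conf
    [global]
        nt pipe support = no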
{ "source": [ "https://unix.stackexchange.com/questions/367138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
367,220
I have a PKCS12 file containing the full certificate chain and private key. I need to break it up into 3 files for an application. The 3 files I need are as follows (in PEM format): an unencrypted key file a client certificate file a CA certificate file (root and all intermediates) This is a common task I have to perform, so I'm looking for a way to do this without any manual editing of the output. I tried the following: openssl pkcs12 -in <filename.pfx> -nocerts -nodes -out <clientcert.key> openssl pkcs12 -in <filename.pfx> -clcerts -nokeys -out <clientcert.cer> openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain -out <cacerts.cer> This works fine; however, the output contains bag attributes, which the application doesn't know how to handle. After some searching I found a suggested solution of passing the results through x509 to strip the bag attributes. openssl x509 -in <clientcert.cer> -out <clientcert.cer> This works, but I run into an issue on the cacert file. The output file only contains one of the 3 certs in the chain. Is there a way to avoid including the bag attributes in the output of the pkcs12 command, or a way to have the x509 command output include all the certificates? Additionally, if running it through x509 is the simplest solution, is there a way to pipe the output from pkcs12 into x509 instead of writing out the file twice?
The solution I finally came to was to pipe it through sed. openssl pkcs12 -in <filename.pfx> -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > <clientcert.key> openssl pkcs12 -in <filename.pfx> -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <clientcert.cer> openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <cacerts.cer>
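A few optional sanity checks on the three extracted files (a sketch; the first command assumes an RSA private key):

    openssl rsa  -in clientcert.key -check -noout        # key parses and is consistent
    openssl x509 -in clientcert.cer -noout -subject      # client cert parses
    # print every certificate in the CA bundle:
    openssl crl2pkcs7 -nocrl -certfile cacerts.cer | openssl pkcs7 -print_certs -noout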
{ "source": [ "https://unix.stackexchange.com/questions/367220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160045/" ] }
367,223
I have files with column-wise dates and time in "YYYY MM DD HHMM" format plus a variable (temperature) and want to convert them into YYYY DDD format (and keep hour and temperature as is). They look like this but same date appears several times in file: 1980 01 01 0100 3.3 1982 04 11 0400 2.2 1985 12 04 0700 1.7 1995 12 31 1000 2.2 I have created an index file (1980-2017) with the number of days to be added to each date of the first file to get the cumulative day of year DDD (last column). First year looks like this (1980 was a leap year): 1980 01 31 000 1980 02 29 031 1980 03 31 060 1980 04 30 090 1980 05 31 121 1980 06 30 152 1980 07 31 182 1980 08 31 213 1980 09 30 244 1980 10 31 274 1980 11 30 305 1980 12 31 335 I am trying to compare the two files based on first two columns and if they match to add the fourth column of file2 to third column of file 1 and end up with something like this: 1980 001 0100 3.3 1982 101 0400 2.2 1985 346 0700 1.7 1995 365 1000 2.2 I managed to compare the two columns of the files and add the two columns with awk below: awk -F' ' 'NR==FNR{c[$1$2]++;next};c[$1$2] > 0' junktemp matrix_sample | awk '{print $1, $3+$4}' but this way I lose $4 and $5 (hour and temperature). Is there a way to combine the two awk functions and get $4 and $5 of file1 in the result as well? Any help much appreciated.
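One way to do the whole thing in a single awk pass while keeping the hour and temperature columns (a sketch; it assumes, per the question, that the index file's fourth column is the cumulative day offset for that year and month):

    awk 'NR==FNR { off[$1" "$2] = $4; next }   # first file: remember offset per year+month
         { printf "%s %03d %s %s\n", $1, $3 + off[$1" "$2], $4, $5 }' indexfile datafile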
{ "source": [ "https://unix.stackexchange.com/questions/367223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232924/" ] }
367,224
I have 2 lists, one containing all 32-bit IP addresses, and the other a list of IP ranges and other IP addresses. I need to find out whether each IP within list A exists in any IP range or address in list B. The end result would be to display the addresses from list A that do not exist in list B. This would be easy using diff if the IP ranges were not involved. The list itself contains nearly 10,000 lines, so going through this manually would take forever.
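If installing a small utility is an option, grepcidr is built for exactly this kind of filtering (a sketch; it assumes list B holds one address or CIDR/dashed range per line):

    # print every address in listA not covered by any entry in listB
    grepcidr -v -f listB.txt listA.txt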
{ "source": [ "https://unix.stackexchange.com/questions/367224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207764/" ] }
367,597
I am connected with SSH to a machine on which I don't have root access. To install something I uploaded libraries from my machine and put them in the ~/lib directory of the remote host. Now, for almost any command I run, I get the error below (the example is for ls ) or a Segmentation fault (core dumped) message. ls: relocation error: /lib/libpthread.so.0: symbol __getrlimit, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference The only commands I have successfully run so far are cd and pwd . I can pretty much find files in a directory by using TAB to autocomplete ls , so I can move through directories. uname -r also returns the Segmentation fault (core dumped) message, so I'm not sure what kernel version I'm using.
Since you can log in, nothing major is broken; presumably your shell’s startup scripts add ~/lib to LD_LIBRARY_PATH , and that, along with the bad libraries in ~/lib , is what causes the issues you’re seeing. To fix this, run unset LD_LIBRARY_PATH This will allow you to run rm , vim etc. to remove the troublesome libraries and edit your startup scripts if appropriate.
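To find where the variable gets set, so that the fix survives your next login, grep the usual startup files once the unset above has restored a working shell (the file list here is the typical bash set; adjust for your shell):

    grep -n LD_LIBRARY_PATH ~/.bashrc ~/.profile ~/.bash_profile 2>/dev/null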
{ "source": [ "https://unix.stackexchange.com/questions/367597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164427/" ] }
367,982
Commands , for instance sed , are programs, and programs are codified logic inside a file, and these files are somewhere on the hard disk. However when commands are being run, a copy of their files from the hard disk is put into the RAM , where they come to life and can do stuff and are called processes . Processes can make use of other files, read or write into them, and if they do those files are called open files. There is a command to list all open files by all running processes: lsof . OK, so what I wonder about is if the double life of a command, one on the hard disk, the other in the RAM, is also true for other kinds of files, for instance those that have no logic programmed, but are simply containers for data. My assumption is that files opened by processes are also loaded into the RAM. I do not know if it is true, it is just an intuition. Please, could someone make sense of it?
However when commands are being run, a copy of their files from the hard disk is put into the RAM, This is wrong (in general). When a program is executed (thru execve(2) ...) the process (running that program) is changing its virtual address space and the kernel is reconfiguring the MMU for that purpose. Read also about virtual memory . Notice that application programs can change their virtual address space using mmap(2) & munmap & mprotect(2) , also used by the dynamic linker (see ld-linux(8) ). See also madvise(2) & posix_fadvise(2) & mlock(2) . Future page faults will be processed by the kernel to load (lazily) pages from the executable file. Read also about thrashing . The kernel maintains a large page cache . Read also about copy-on-write . See also readahead(2) . OK, so what I wonder about is if the double life of a command, one on the hard disk, the other in the RAM is also true for other kind of files, for instance those who have no logic programmed, but are simply containers for data. For system calls like read(2) & write(2) the page cache is also used. If the data to be read is sitting in it, no disk IO will be done. If disk IO is needed, the read data would be very likely put in the page cache. So, in practice, if you run the same command twice, it could happen that no physical I/O is done to the disk on the second time (if you have an old rotating hard disk - not an SSD - you might hear that; or observe carefully your hard disk LED). I recommend reading a book like Operating Systems : Three Easy Pieces (freely downloadable, one PDF file per chapter) which explains all this. See also Linux Ate My RAM and run commands like xosview , top , htop or cat /proc/self/maps or cat /proc/$$/maps (see proc(5) ). PS. I am focusing on Linux, but other OSes also have virtual memory and page cache.
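On Linux you can also observe the page cache at work from the shell (illustrative; bigfile is any large file, and timings and the need for sudo vary):

    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush the page cache
    time cat bigfile > /dev/null                       # cold read: real disk I/O
    time cat bigfile > /dev/null                       # warm read: served from RAM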
{ "source": [ "https://unix.stackexchange.com/questions/367982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229576/" ] }
368,100
Where should I put a systemd service file, e.g. nginx.service for Nginx or something like that, on Ubuntu 16.04?
The recommended place is /etc/systemd/system/nginx.service Then issue the command : systemctl enable nginx And finally systemctl start nginx
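If you need to write the unit yourself, a minimal sketch follows (illustrative only: paths and the forking setup vary by build, so adjust ExecStart and friends to your installation, and run systemctl daemon-reload after saving so systemd picks the file up before enable/start):

    # /etc/systemd/system/nginx.service
    [Unit]
    Description=nginx web server
    After=network.target

    [Service]
    Type=forking
    PIDFile=/run/nginx.pid
    ExecStart=/usr/sbin/nginx
    ExecReload=/usr/sbin/nginx -s reload
    ExecStop=/usr/sbin/nginx -s stop

    [Install]
    WantedBy=multi-user.target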
{ "source": [ "https://unix.stackexchange.com/questions/368100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154659/" ] }
368,122
I'm trying to let my Raspberry Pi (running Debian Jessie) push a notification to Pushbullet when for some reason my HDD is disconnected, using udev rules. Now I've managed to make this work sort of. The problem however is that the script runs 14 times instead of just once and it runs on disconnect AND connect actions, which is not my intention.. I've tried a lot of different configurations of the rules file, such as: ACTION==”remove”, KERNEL==”sda1”, SUBSYSTEM==”block”, KERNELS==”1-1.2”, SUBSYSTEMS==”usb”, ATTRS{idProduct}==”10a2”, ATTRS{idVendor}==”1058”, ATTRS{manufacturer}=="Western Digital", RUN+="/home/pi/HDD_removed.sh" and ACTION=="remove", ENV{ID_MODEL}=="Elements_10A2", RUN+="/home/pi/HDD_removed.sh" and others, but nothing works properly.. As a help I printed the outputs of udevadm info and udevadm monitor below (sorry for the large sizes..): $ udevadm info -a -p $(udevadm info -q path -n /dev/sda1) looking at device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1': KERNEL=="sda1" SUBSYSTEM=="block" DRIVER=="" ATTR{start}=="2048" ATTR{inflight}==" 0 0" ATTR{ro}=="0" ATTR{partition}=="1" ATTR{stat}==" 545 0 61544 4110 0 0 0 0 0 1320 4110" ATTR{size}=="1953456128" ATTR{alignment_offset}=="0" ATTR{discard_alignment}=="0" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda': KERNELS=="sda" SUBSYSTEMS=="block" DRIVERS=="" ATTRS{badblocks}=="" ATTRS{range}=="16" ATTRS{capability}=="50" ATTRS{inflight}==" 0 0" ATTRS{ext_range}=="256" ATTRS{ro}=="0" ATTRS{stat}==" 590 0 62336 4140 0 0 0 0 0 1350 4140" ATTRS{events_poll_msecs}=="-1" ATTRS{events_async}=="" ATTRS{removable}=="0" ATTRS{size}=="1953458176" ATTRS{events}=="" ATTRS{alignment_offset}=="0" ATTRS{discard_alignment}=="0" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0': KERNELS=="0:0:0:0" SUBSYSTEMS=="scsi" DRIVERS=="sd" ATTRS{evt_soft_threshold_reached}=="0" ATTRS{evt_mode_parameter_change_reported}=="0" ATTRS{inquiry}=="" ATTRS{evt_capacity_change_reported}=="0" ATTRS{vendor}=="WD " ATTRS{timeout}=="30" ATTRS{evt_lun_change_reported}=="0" ATTRS{evt_media_change}=="0" ATTRS{queue_type}=="none" ATTRS{device_busy}=="0" ATTRS{eh_timeout}=="10" ATTRS{model}=="Elements 10A2 " ATTRS{iocounterbits}=="32" ATTRS{queue_depth}=="1" ATTRS{type}=="0" ATTRS{evt_inquiry_change_reported}=="0" ATTRS{max_sectors}=="240" ATTRS{iodone_cnt}=="0x27d" ATTRS{state}=="running" ATTRS{iorequest_cnt}=="0x27d" ATTRS{rev}=="1033" ATTRS{ioerr_cnt}=="0x3" ATTRS{scsi_level}=="7" ATTRS{device_blocked}=="0" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0': KERNELS=="target0:0:0" SUBSYSTEMS=="scsi" DRIVERS=="" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0': KERNELS=="host0" SUBSYSTEMS=="scsi" DRIVERS=="" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0': KERNELS=="1-1.2:1.0" SUBSYSTEMS=="usb" DRIVERS=="usb-storage" ATTRS{bInterfaceProtocol}=="50" ATTRS{bInterfaceNumber}=="00" ATTRS{bInterfaceSubClass}=="06" ATTRS{bInterfaceClass}=="08" ATTRS{bAlternateSetting}==" 0" ATTRS{authorized}=="1" ATTRS{bNumEndpoints}=="02" ATTRS{supports_autosuspend}=="1" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2': KERNELS=="1-1.2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceClass}=="00" ATTRS{manufacturer}=="Western Digital" 
ATTRS{bmAttributes}=="80" ATTRS{bConfigurationValue}=="1" ATTRS{version}==" 2.10" ATTRS{devnum}=="9" ATTRS{bMaxPower}=="100mA" ATTRS{idProduct}=="10a2" ATTRS{avoid_reset_quirk}=="0" ATTRS{urbnum}=="7168" ATTRS{bDeviceSubClass}=="00" ATTRS{maxchild}=="0" ATTRS{bcdDevice}=="1033" ATTRS{bMaxPacketSize0}=="64" ATTRS{idVendor}=="1058" ATTRS{product}=="Elements 10A2" ATTRS{speed}=="480" ATTRS{removable}=="removable" ATTRS{ltm_capable}=="no" ATTRS{serial}=="575831314541323038393032" ATTRS{bNumConfigurations}=="1" ATTRS{busnum}=="1" ATTRS{authorized}=="1" ATTRS{quirks}=="0x0" ATTRS{configuration}=="" ATTRS{devpath}=="1.2" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumInterfaces}==" 1" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1': KERNELS=="1-1" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceClass}=="09" ATTRS{bmAttributes}=="e0" ATTRS{bConfigurationValue}=="1" ATTRS{version}==" 2.00" ATTRS{devnum}=="2" ATTRS{bMaxPower}=="2mA" ATTRS{idProduct}=="9514" ATTRS{avoid_reset_quirk}=="0" ATTRS{urbnum}=="144" ATTRS{bDeviceSubClass}=="00" ATTRS{maxchild}=="5" ATTRS{bcdDevice}=="0200" ATTRS{bMaxPacketSize0}=="64" ATTRS{idVendor}=="0424" ATTRS{speed}=="480" ATTRS{removable}=="unknown" ATTRS{ltm_capable}=="no" ATTRS{bNumConfigurations}=="1" ATTRS{busnum}=="1" ATTRS{authorized}=="1" ATTRS{quirks}=="0x0" ATTRS{configuration}=="" ATTRS{devpath}=="1" ATTRS{bDeviceProtocol}=="02" ATTRS{bNumInterfaces}==" 1" looking at parent device '/devices/platform/soc/3f980000.usb/usb1': KERNELS=="usb1" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{bDeviceClass}=="09" ATTRS{manufacturer}=="Linux 4.9.24-v7+ dwc_otg_hcd" ATTRS{bmAttributes}=="e0" ATTRS{bConfigurationValue}=="1" ATTRS{version}==" 2.00" ATTRS{devnum}=="1" ATTRS{bMaxPower}=="0mA" ATTRS{idProduct}=="0002" ATTRS{avoid_reset_quirk}=="0" ATTRS{urbnum}=="25" ATTRS{bDeviceSubClass}=="00" ATTRS{maxchild}=="1" ATTRS{bcdDevice}=="0409" ATTRS{bMaxPacketSize0}=="64" ATTRS{idVendor}=="1d6b" ATTRS{product}=="DWC OTG Controller" ATTRS{speed}=="480" ATTRS{authorized_default}=="1" ATTRS{interface_authorized_default}=="1" ATTRS{removable}=="unknown" ATTRS{ltm_capable}=="no" ATTRS{serial}=="3f980000.usb" ATTRS{bNumConfigurations}=="1" ATTRS{busnum}=="1" ATTRS{authorized}=="1" ATTRS{quirks}=="0x0" ATTRS{configuration}=="" ATTRS{devpath}=="0" ATTRS{bDeviceProtocol}=="01" ATTRS{bNumInterfaces}==" 1" looking at parent device '/devices/platform/soc/3f980000.usb': KERNELS=="3f980000.usb" SUBSYSTEMS=="platform" DRIVERS=="dwc_otg" ATTRS{wr_reg_test}=="Time to write GNPTXFSIZ reg 10000000 times: 540 msecs (54 jiffies)" ATTRS{grxfsiz}=="GRXFSIZ = 0x00000306" ATTRS{srpcapable}=="SRPCapable = 0x1" ATTRS{buspower}=="Bus Power = 0x1" ATTRS{bussuspend}=="Bus Suspend = 0x0" ATTRS{hptxfsiz}=="HPTXFSIZ = 0x02000406" ATTRS{hnp}=="HstNegScs = 0x0" ATTRS{mode}=="Mode = 0x1" ATTRS{mode_ch_tim_en}=="Mode Change Ready Timer Enable = 0x0" ATTRS{hsic_connect}=="HSIC Connect = 0x1" ATTRS{gsnpsid}=="GSNPSID = 0x4f54280a" ATTRS{driver_override}=="(null)" ATTRS{hcd_frrem}=="HCD Dump Frame Remaining" ATTRS{gotgctl}=="GOTGCTL = 0x001c0001" ATTRS{gpvndctl}=="GPVNDCTL = 0x00000000" ATTRS{hnpcapable}=="HNPCapable = 0x1" ATTRS{spramdump}=="SPRAM Dump" ATTRS{regoffset}=="0xffffffff" ATTRS{gnptxfsiz}=="GNPTXFSIZ = 0x01000306" ATTRS{guid}=="GUID = 0x2708a000" ATTRS{regdump}=="Register Dump" ATTRS{hprt0}=="HPRT0 = 0x00001405" ATTRS{hcddump}=="HCD Dump" ATTRS{rem_wakeup_pwrdn}=="" ATTRS{regvalue}=="invalid offset" ATTRS{gusbcfg}=="GUSBCFG = 0x20001700" ATTRS{fr_interval}=="Frame Interval = 0x1d4b" 
ATTRS{busconnected}=="Bus Connected = 0x1" ATTRS{remote_wakeup}=="Remote Wakeup Sig = 0 Enabled = 0 LPM Remote Wakeup = 0" ATTRS{devspeed}=="Device Speed = 0x0" ATTRS{rd_reg_test}=="Time to read GNPTXFSIZ reg 10000000 times: 1500 msecs (150 jiffies)" ATTRS{enumspeed}=="Device Enumeration Speed = 0x1" ATTRS{inv_sel_hsic}=="Invert Select HSIC = 0x0" ATTRS{ggpio}=="GGPIO = 0x00000000" ATTRS{srp}=="SesReqScs = 0x1" looking at parent device '/devices/platform/soc': KERNELS=="soc" SUBSYSTEMS=="platform" DRIVERS=="" ATTRS{driver_override}=="(null)" looking at parent device '/devices/platform': KERNELS=="platform" SUBSYSTEMS=="" DRIVERS=="" $ udevadm monitor --property KERNEL[50982.358011] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg) ACTION=remove DEVNAME=/dev/bsg/0:0:0:0 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 MAJOR=251 MINOR=0 SEQNUM=1095 SUBSYSTEM=bsg KERNEL[50982.359131] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic) ACTION=remove DEVNAME=/dev/sg0 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 MAJOR=21 MINOR=0 SEQNUM=1096 SUBSYSTEM=scsi_generic KERNEL[50982.359731] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 SEQNUM=1097 SUBSYSTEM=scsi_device KERNEL[50982.361349] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 SEQNUM=1098 SUBSYSTEM=scsi_disk KERNEL[50982.367606] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block) ACTION=remove DEVNAME=/dev/sda1 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 DEVTYPE=partition MAJOR=8 MINOR=1 PARTN=1 SEQNUM=1099 SUBSYSTEM=block KERNEL[50982.369279] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda (block) ACTION=remove DEVNAME=/dev/sda DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda DEVTYPE=disk MAJOR=8 MINOR=0 SEQNUM=1100 SUBSYSTEM=block KERNEL[50982.370139] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 DEVTYPE=scsi_device MODALIAS=scsi:t-0x00 SEQNUM=1101 SUBSYSTEM=scsi KERNEL[50982.410910] remove /devices/virtual/bdi/8:0 (bdi) ACTION=remove DEVPATH=/devices/virtual/bdi/8:0 SEQNUM=1102 SUBSYSTEM=bdi KERNEL[50982.411476] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 DEVTYPE=scsi_target SEQNUM=1103 SUBSYSTEM=scsi KERNEL[50982.412387] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 (scsi_host) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 SEQNUM=1104 
SUBSYSTEM=scsi_host KERNEL[50982.414188] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 DEVTYPE=scsi_host SEQNUM=1105 SUBSYSTEM=scsi KERNEL[50982.415487] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 DEVTYPE=usb_interface INTERFACE=8/6/80 MODALIAS=usb:v1058p10A2d1033dc00dsc00dp00ic08isc06ip50in00 PRODUCT=1058/10a2/1033 SEQNUM=1106 SUBSYSTEM=usb TYPE=0/0/0 KERNEL[50982.419788] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb) ACTION=remove BUSNUM=001 DEVNAME=/dev/bus/usb/001/007 DEVNUM=007 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 DEVTYPE=usb_device MAJOR=189 MINOR=6 PRODUCT=1058/10a2/1033 SEQNUM=1107 SUBSYSTEM=usb TYPE=0/0/0 UDEV [50982.973557] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg) ACTION=remove DEVNAME=/dev/bsg/0:0:0:0 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 MAJOR=251 MINOR=0 SEQNUM=1095 SUBSYSTEM=bsg USEC_INITIALIZED=982359004 UDEV [50982.999940] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 SEQNUM=1097 SUBSYSTEM=scsi_device USEC_INITIALIZED=982362751 UDEV [50983.095057] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block) ACTION=remove DEVLINKS=/dev/disk/by-id/usb-WD_Elements_10A2_575831314541323038393032-0:0-part1 /dev/disk/by-label/Steven /dev/disk/by-path/platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0-part1 /dev/disk/by-uuid/21741F4F6C4915E1 DEVNAME=/dev/sda1 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 DEVTYPE=partition ID_BUS=usb ID_FS_LABEL=Steven ID_FS_LABEL_ENC=Steven ID_FS_TYPE=ntfs ID_FS_USAGE=filesystem ID_FS_UUID=21741F4F6C4915E1 ID_FS_UUID_ENC=21741F4F6C4915E1 ID_INSTANCE=0:0 ID_MODEL=Elements_10A2 ID_MODEL_ENC=Elements\x2010A2\x20\x20\x20 ID_MODEL_ID=10a2 ID_PART_ENTRY_DISK=8:0 ID_PART_ENTRY_NUMBER=1 ID_PART_ENTRY_OFFSET=2048 ID_PART_ENTRY_SCHEME=dos ID_PART_ENTRY_SIZE=1953456128 ID_PART_ENTRY_TYPE=0x7 ID_PART_ENTRY_UUID=00023f15-01 ID_PART_TABLE_TYPE=dos ID_PART_TABLE_UUID=00023f15 ID_PATH=platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0 ID_PATH_TAG=platform-3f980000_usb-usb-0_1_2_1_0-scsi-0_0_0_0 ID_REVISION=1033 ID_SERIAL=WD_Elements_10A2_575831314541323038393032-0:0 ID_SERIAL_SHORT=575831314541323038393032 ID_TYPE=disk ID_USB_DRIVER=usb-storage ID_USB_INTERFACES=:080650: ID_USB_INTERFACE_NUM=00 ID_VENDOR=WD ID_VENDOR_ENC=WD\x20\x20\x20\x20\x20\x20 ID_VENDOR_ID=1058 MAJOR=8 MINOR=1 PARTN=1 SEQNUM=1099 SUBSYSTEM=block TAGS=:systemd: USEC_INITIALIZED=8036767 UDEV [50983.126799] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 SEQNUM=1098 SUBSYSTEM=scsi_disk USEC_INITIALIZED=982364537 UDEV [50983.136895] remove /devices/virtual/bdi/8:0 (bdi) ACTION=remove DEVPATH=/devices/virtual/bdi/8:0 SEQNUM=1102 SUBSYSTEM=bdi USEC_INITIALIZED=982411342 UDEV 
[50983.138940] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic) ACTION=remove DEVNAME=/dev/sg0 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 MAJOR=21 MINOR=0 SEQNUM=1096 SUBSYSTEM=scsi_generic USEC_INITIALIZED=982360886 KERNEL[50983.194516] remove /devices/virtual/bdi/8:1-fuseblk (bdi) ACTION=remove DEVPATH=/devices/virtual/bdi/8:1-fuseblk SEQNUM=1108 SUBSYSTEM=bdi UDEV [50983.204265] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 (scsi_host) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 SEQNUM=1104 SUBSYSTEM=scsi_host USEC_INITIALIZED=982413320 UDEV [50983.643690] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda (block) ACTION=remove DEVLINKS=/dev/disk/by-id/usb-WD_Elements_10A2_575831314541323038393032-0:0 /dev/disk/by-path/platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0 DEVNAME=/dev/sda DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda DEVTYPE=disk ID_BUS=usb ID_INSTANCE=0:0 ID_MODEL=Elements_10A2 ID_MODEL_ENC=Elements\x2010A2\x20\x20\x20 ID_MODEL_ID=10a2 ID_PART_TABLE_TYPE=dos ID_PART_TABLE_UUID=00023f15 ID_PATH=platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0 ID_PATH_TAG=platform-3f980000_usb-usb-0_1_2_1_0-scsi-0_0_0_0 ID_REVISION=1033 ID_SERIAL=WD_Elements_10A2_575831314541323038393032-0:0 ID_SERIAL_SHORT=575831314541323038393032 ID_TYPE=disk ID_USB_DRIVER=usb-storage ID_USB_INTERFACES=:080650: ID_USB_INTERFACE_NUM=00 ID_VENDOR=WD ID_VENDOR_ENC=WD\x20\x20\x20\x20\x20\x20 ID_VENDOR_ID=1058 MAJOR=8 MINOR=0 SEQNUM=1100 SUBSYSTEM=block TAGS=:systemd: USEC_INITIALIZED=748036370 UDEV [50983.733473] remove /devices/virtual/bdi/8:1-fuseblk (bdi) ACTION=remove DEVPATH=/devices/virtual/bdi/8:1-fuseblk SEQNUM=1108 SUBSYSTEM=bdi USEC_INITIALIZED=3192262 UDEV [50984.141379] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 DEVTYPE=scsi_device MODALIAS=scsi:t-0x00 SEQNUM=1101 SUBSYSTEM=scsi USEC_INITIALIZED=2371212 UDEV [50984.629455] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 DEVTYPE=scsi_target SEQNUM=1103 SUBSYSTEM=scsi USEC_INITIALIZED=2413053 UDEV [50985.087418] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 (scsi) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 DEVTYPE=scsi_host SEQNUM=1105 SUBSYSTEM=scsi USEC_INITIALIZED=2415484 UDEV [50985.618300] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb) ACTION=remove DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 DEVTYPE=usb_interface ID_MODEL_FROM_DATABASE=Elements SE Portable (WDBPCK) ID_VENDOR_FROM_DATABASE=Western Digital Technologies, Inc. 
INTERFACE=8/6/80 MODALIAS=usb:v1058p10A2d1033dc00dsc00dp00ic08isc06ip50in00 PRODUCT=1058/10a2/1033 SEQNUM=1106 SUBSYSTEM=usb TYPE=0/0/0 USEC_INITIALIZED=5647475 UDEV [50986.078354] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb) ACTION=remove BUSNUM=001 DEVNAME=/dev/bus/usb/001/007 DEVNUM=007 DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 DEVTYPE=usb_device ID_BUS=usb ID_MODEL=Elements_10A2 ID_MODEL_ENC=Elements\x2010A2 ID_MODEL_FROM_DATABASE=Elements SE Portable (WDBPCK) ID_MODEL_ID=10a2 ID_REVISION=1033 ID_SERIAL=Western_Digital_Elements_10A2_575831314541323038393032 ID_SERIAL_SHORT=575831314541323038393032 ID_USB_INTERFACES=:080650: ID_VENDOR=Western_Digital ID_VENDOR_ENC=Western\x20Digital ID_VENDOR_FROM_DATABASE=Western Digital Technologies, Inc. ID_VENDOR_ID=1058 MAJOR=189 MINOR=6 PRODUCT=1058/10a2/1033 SEQNUM=1107 SUBSYSTEM=usb TYPE=0/0/0 USEC_INITIALIZED=745644833
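Two things stand out from the rules and the monitor output above: on a "remove" event the device's sysfs attributes are already gone, so ATTRS{} matches cannot fire reliably, and matching broadly means the script runs once for every event in the chain (partition, disk, scsi nodes, usb nodes...), which explains the 14 runs. A sketch that matches exactly one event, keying on ENV{} properties still present in the remove uevent for sda1 (untested; note also that the typographic quotes ” in the first quoted rule would prevent udev from parsing it at all):

    # /etc/udev/rules.d/99-hdd-notify.rules
    ACTION=="remove", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
      ENV{ID_SERIAL_SHORT}=="575831314541323038393032", \
      RUN+="/home/pi/HDD_removed.sh"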
{ "source": [ "https://unix.stackexchange.com/questions/368122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233431/" ] }
368,123
I have an end-entity/server certificate which has an intermediate and a root certificate. When I cat the end-entity certificate, I see only a single BEGIN and END tag. It is only the end-entity certificate. Is there any way I can view the intermediate and root certificate content? I need only the content of the BEGIN and END tags. In Windows I can see the full cert chain from the "Certification Path". Below is the example for Stack Exchange's certificate. From there I can perform a View Certificate and export them. I can do that for both root and intermediate in Windows. I am looking for this same method in Linux.
From a web site, you can do: openssl s_client -showcerts -verify 5 -connect stackexchange.com:443 < /dev/null That will show the certificate chain and all the certificates the server presented. Now, if I save those two certificates to files, I can use openssl verify : $ openssl verify -show_chain -untrusted dc-sha2.crt se.crt se.crt: OK Chain: depth=0: C = US, ST = NY, L = New York, O = "Stack Exchange, Inc.", CN = *.stackexchange.com (untrusted) depth=1: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA (untrusted) depth=2: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA The -untrusted option is used to give the intermediate certificate(s); se.crt is the certificate to verify. The depth=2 result came from the system trusted CA store. If you don't have the intermediate certificate(s), you can't perform the verify. That's just how X.509 works. Depending on the certificate, it may contain a URI to get the intermediate from. As an example, openssl x509 -in se.crt -noout -text contains: Authority Information Access: OCSP - URI:http://ocsp.digicert.com CA Issuers - URI:http://cacerts.digicert.com/DigiCertSHA2HighAssuranceServerCA.crt That "CA Issuers" URI points to the intermediate cert (in DER format, so you need to use openssl x509 -inform der -in DigiCertSHA2HighAssuranceServerCA.crt -out DigiCertSHA2HighAssuranceServerCA.pem to convert it for further use by OpenSSL). If you run openssl x509 -in /tmp/DigiCertSHA2HighAssuranceServerCA.pem -noout -issuer_hash you get 244b5494 , which you can look for in the system root CA store at /etc/ssl/certs/244b5494.0 (just append .0 to the name). I don't think there is a nice, easy OpenSSL command to do all that for you.
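Putting that last point into practice, fetching and converting the intermediate named in the "CA Issuers" field looks like this (the URL is the one printed from the certificate above; curl is assumed):

    curl -sO http://cacerts.digicert.com/DigiCertSHA2HighAssuranceServerCA.crt
    openssl x509 -inform der -in DigiCertSHA2HighAssuranceServerCA.crt \
        -out DigiCertSHA2HighAssuranceServerCA.pem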
{ "source": [ "https://unix.stackexchange.com/questions/368123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121877/" ] }
368,155
I'm not sure when to use nc , netcat or ncat . Is one the deprecated version of another? Is one only available on one distribution? Are they the same command under different names? In fact I'm a bit confused. My question comes from wanting to do a network speed test between two CentOS 7 servers. I came across several examples using nc and dd but not many using netcat or ncat . Could someone clarify this for me please?
nc and netcat are two names for the same program (typically, one will be a symlink to the other). Though—for plenty of confusion—there are two different implementations of Netcat ("traditional" and "OpenBSD"), and they take different options and have different features. Ncat is the same idea, but from the Nmap project. There is also socat , which is a similar idea. There is also /dev/tcp , an (optional) Bash feature. However, if you're looking to do network speed tests then all of the above are the wrong answer. You're looking for iperf3 ( site 1 or site 2 or code ).
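For the speed test itself, a typical iperf3 run between the two CentOS 7 machines looks like this (it assumes iperf3 is installed on both and TCP port 5201 is reachable between them):

    iperf3 -s                    # on the first server: listen (default port 5201)
    iperf3 -c server1 -t 30      # on the second: run a 30-second test against server1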
{ "source": [ "https://unix.stackexchange.com/questions/368155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103808/" ] }
368,210
I want to rsync multiple sources and I wonder the best way to achieve that. e.g. /etc/fstab /home/user/download I thought about 3 solutions : Solution 1 multiple calls to rsync rsync -a /etc/fstab bkp rsync -a /home/user/download bkp con : harder to get aggregated stats Solution 2 create a tobackup folder that contains symlinks, and use the -L option rsync -aL /home/user/tobackup bkp con : content to backup must not contain symlinks Solution 3 move files into tobackup and create symlinks in the original location rsync -a /home/user/tobackup bkp con : some manual config Which one do you recommend ? Is there a better way ?
You can pass multiple source arguments. rsync -a /etc/fstab /home/user/download bkp This creates bkp/fstab and bkp/download , like the separate commands you gave. It may be desirable to preserve the source structure instead. To do this, use / as the source and use include-exclude rules to specify which files to copy. There are two ways to do this: Explicitly include each file as well as each directory component leading to it, with /*** at the end of directories when you want to copy the whole directory tree: rsync -a \ --include=/etc --include=/etc/fstab \ --include=/home --include=/home/user --include='/home/user/download/***' \ --exclude='*' / bkp Include all top-level directories with /*/ (so that rsync will traverse /etc and /home when looking for files to copy) and second-level directories with /*/*/ (for /home/user ), but strip away directories in which no file gets copied. This is more convenient because you don't have to list parents explicitly. You could even use --prune-empty-dirs --include='*/' instead of counting the number of levels, but this is impractical here as rsync would traverse the whole filesystem to explore directories even though none of the include rules can match anything outside /etc and /home/user/download . rsync -a --prune-empty-dirs \ --include='/*/' --include='/*/*/' \ --include=/etc/fstab \ --include='/home/user/download/***' \ --exclude='*' / bkp
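Another option worth knowing about: rsync's --relative ( -R ) flag also preserves the source structure, with no include/exclude rules at all:

    rsync -aR /etc/fstab /home/user/download bkp
    # creates bkp/etc/fstab and bkp/home/user/download/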
{ "source": [ "https://unix.stackexchange.com/questions/368210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32194/" ] }
368,318
I have a directory filled with files with names like logXX where XX is a two-character, zero-padded, uppercase hex number such as: log00 log01 log02 ... log0A log0B log0C ... log4E log4F log50 ... Generally there will be fewer than say 20 or 30 files total. The date and time on my particular system is not something that can be relied up on (an embedded system with no reliable NTP or GPS time sources). However the filenames will reliably increment as shown above. I wish to grep through all the files for the single most recent log entry of a certain type, I was hoping to cat the files together such as... cat /tmp/logs/log* | grep 'WARNING 07 -' | tail -n1 However it occurred to me that different versions of bash or sh or zsh etc. might have different ideas about how the * is expanded. The man bash page doesn't say whether or not the expansion of * would be a definitely ascending alphabetical list of matching filenames. It does seem to be ascending every time I've tried it on all the systems I have available to me -- but is it DEFINED behaviour or just implementation specific? In other words can I absolutely rely on cat /tmp/logs/log* to concatenate all my log files together in alphabetical order?
In all shells, globs are sorted by default. They were already by the /etc/glob helper called by Ken Thompson's shell to expand globs in the first version of Unix in the early 70s (and which gave globs their name). For sh , POSIX does require them to be sorted by way of strcoll() , that is using the sorting order in the user's locale, like for ls though some still do it via strcmp() , that is based on byte values only. $ dash -c 'echo *' Log01B log-0D log00 log01 log02 log0A log0B log0C log4E log4F log50 log① log② lóg01 $ bash -c 'echo *' log① log② log00 log01 lóg01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50 $ zsh -c 'echo *' log① log② log00 log01 lóg01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50 $ ls log② log① log00 log01 lóg01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50 $ ls | sort log② log① log00 log01 lóg01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50 You may notice above that for those shells that do sorting based on locale, here on a GNU system with a en_GB.UTF-8 locale, the - in the file names is ignored for sorting (most punctuation characters would). The ó is sorted in a more expected way (at least to British people), and case is ignored (except when it comes to decide ties). However, you'll notice some inconsistencies for log① log②. That's because the sorting order of ① and ② is not defined in GNU locales (currently; hopefully it will be fixed some day). They sort the same, so you get random results. Changing the locale will affect the sorting order. You can set the locale to C to get a strcmp() -like sort: $ bash -c 'echo *' log① log② log00 log01 lóg01 Log01B log02 log0.2 log0A log0B log0C log-0D log4E log4F log50 $ bash -c 'LC_ALL=C; echo *' Log01B log-0D log0.2 log00 log01 log02 log0A log0B log0C log4E log4F log50 log① log② lóg01 Note that some locales can cause some confusions even for all-ASCII all-alnum strings. Like Czech ones (on GNU systems at least) where ch is a collating element that sorts after h : $ LC_ALL=cs_CZ.UTF-8 bash -c 'echo *' log0Ah log0Bh log0Dh log0Ch Or, as pointed out by @ninjalj, even weirder ones in Hungarian locales: $ LC_ALL=hu_HU.UTF-8 bash -c 'echo *' logX LOGx LOGX logZ LOGz LOGZ logY LOGY LOGy In zsh , you can choose the sorting with glob qualifiers . For instance: echo *(om) # to sort by modification time echo *(oL) # to sort by size echo *(On) # for a *reverse* sort by name echo *(o+myfunction) # sort using a user-defined function echo *(N) # to NOT sort echo *(n) # sort by name, but numerically, and so on. The numeric sort of echo *(n) can also be enabled globally with the numericglobsort option: $ zsh -c 'echo *' log① log② log00 log01 lóg01 Log01B log02 log0.2 log0A log0B log0C log-0D log4E log4F log50 $ zsh -o numericglobsort -c 'echo *' log① log② log00 lóg01 Log01B log0.2 log0A log0B log0C log01 log02 log-0D log4E log4F log50 If you (as I was) are confused by that order in that particular instance (here using my British locale), see here for details.
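Back to the log files in the question: since the names are fixed-width, zero-padded, uppercase hex, forcing the C locale sidesteps every locale surprise above. Note that the sh -c wrapper matters; a plain LC_ALL=C prefix on cat alone would only affect cat , not the glob expansion already done by your interactive shell:
LC_ALL=C sh -c 'cat /tmp/logs/log*' | grep 'WARNING 07 -' | tail -n1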
{ "source": [ "https://unix.stackexchange.com/questions/368318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
368,867
While doing an update on my CentOS 7 box, I noticed that there was a handful of DRPMs being installed. After doing some searches on Google, I found no straightforward answer for this question, so I thought it would fit here to ask. What is a DRPM? How does it differ from an RPM package?
A drpm stands for delta rpm : a supplement to an existing rpm that contains only the differences between two versions of the package, not the complete files. Source : Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM on an old RPM results in the complete new RPM. It is not necessary to have a copy of the old RPM, because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs. The README file referred to in the documentation can be found in the GitHub repository . You will see that deltarpm is based on bsdiff .
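For what it's worth, the deltarpm package also ships a tool to apply these by hand; roughly like this, if I remember the syntax right (the file names are made up; check man applydeltarpm ):
applydeltarpm pkg_1.1-2.drpm pkg-1.1-2.rpm                    # reconstruct using the installed old rpm
applydeltarpm -r pkg-1.1-1.rpm pkg_1.1-2.drpm pkg-1.1-2.rpm   # reconstruct from an old rpm file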
{ "source": [ "https://unix.stackexchange.com/questions/368867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60089/" ] }
368,880
I have a directory full of recently installed RPM files (obtained by running yum install --downloadonly prior to the install). I want to remove all these RPMs to get close to a 'fresh' install for testing reasons. Is there an easy way to uninstall all the RPMs listed in the directory at once? I tried this: find . *.rpm | sed "s/.rpm$//g" | xargs sudo yum remove but I get the message "no match for arguments ./" for each rpm in the list, so something is wrong with the command.
{ "source": [ "https://unix.stackexchange.com/questions/368880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33822/" ] }
369,097
On Xubuntu, for a long time I've had an issue where my left mouse button stops working for some reason. It happens pretty much every day. Everything else seems to work. The only way I can get my mouse to work again is to log out and log in, which requires me to shut down all my programs. Obviously this is very annoying; I've had this issue for almost a year and assumed that an update would fix it, but it still happens. Is anyone else aware of this issue and possible fixes? I'm using Xubuntu as my Desktop Environment. I'm currently on Ubuntu 16.04 LTS. Edit: It happened again and I used xev and evtest to see what events are recognised. xev did not respond to left button clicks but evtest did respond to left button clicks. Edit (2018/01/22) : Just an update. I still have the problem, but I have a short-term fix. When the left mouse button stops working, I use Ctrl+Alt+T to bring up the terminal. I enter xinput in the terminal, which brings up a list of devices. I look for which device is probably the mouse (it has a name like generic mouse ) and I find the associated ID number. I then enter the command: xinput disable ID where ID is the ID number of the mouse. This fixes the problem until I shut down the computer. Also, for more information about the problem, the same mouse works in my Windows 10 installation, so I think the mouse is fine. The same problem also occurs in Kali Linux, except that Kali Linux doesn't have xinput installed, so I can't use my quick fix there.
I have a Dell Inspiron 15 7559. The left click stopped working once in a while when I was using Ubuntu 16.04. After I installed Ubuntu 18.04, the left click stops working almost every time after I resume from suspend. The best solution I found is switching to another virtual console (TTY) with Alt + Ctrl + F1 . The mouse works normally after switching back with Alt + Ctrl + F7 .
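If you rely instead on the xinput workaround from the question's edit, it can be wrapped in a small script so you don't have to look up the ID by hand each time. This is only a sketch; the grep pattern depends on how the device is actually named in your xinput list output:
#!/bin/sh
# disable-mouse.sh: find the first device whose name mentions "mouse" and disable it
id=$(xinput list | grep -i 'mouse' | grep -o 'id=[0-9]*' | head -n1 | cut -d= -f2)
xinput disable "$id"
# xinput enable "$id" turns it back on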
{ "source": [ "https://unix.stackexchange.com/questions/369097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232247/" ] }
369,181
I am trying to print every Nth line out of a file with more than 300,000 records into a new file. This has to happen every Nth record until it reaches the end of the file.
awk 'NR % 5 == 0' input > output This prints every fifth line. To use an environment variable (quoting the expansion, to be safe): NUM=5 awk -v NUM="$NUM" 'NR % NUM == 0' input > output
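If you happen to have GNU sed, its first~step address extension (non-POSIX) gives the same result:
sed -n '0~5p' input > output    # prints lines 5, 10, 15, ...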
{ "source": [ "https://unix.stackexchange.com/questions/369181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234535/" ] }
370,306
I have a JSON array returned from curl that looks like this: [ { "title": "Some Title", "tags":"tagA tag-B tagC" }, { "title": "Some Title 2", "tags":"tagA tagC" }, ... ] I'd like to convert it to... [ { "title": "Some Title", "tags":["tagA", "tag-B", "tagC"] }, { "title": "Some Title 2", "tags":["tagA", "tagC"] }, ... ] So far I have: (map(select(.tags!=null)) | map(.tags | split(" "))) as $tags | $tags and that appears to give me something like: [ [ "tagA", "tag-B", "tagC" ], [ "tagA", "tagC" ] ] But I don't seem to be able to weave that back into an output that would give me .tags as an array in the original objects with the original values...
You're making it a lot more complicated than it is. Just use map() and |= : jq 'map(.tags |= split(" "))' file.json Edit: If you want to handle entries without tags : jq 'map(try(.tags |= split(" ")))' file.json Alternatively, if you want to keep unchanged all entries without tags : jq 'map(try(.tags |= split(" ")) // .)' file.json Result: [ { "tags": [ "tagA", "tag-B", "tagC" ], "title": "Some Title" }, { "tags": [ "tagA", "tagC" ], "title": "Some Title 2" } ]
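To see what the last two variants do with an entry that has no tags at all, here is a small hypothetical sample (this relies on try() swallowing the error that split raises on null, which is my reading of jq's semantics):
echo '[{"title":"x"},{"title":"y","tags":"a b"}]' |
  jq 'map(try(.tags |= split(" ")) // .)'
# -> [{"title":"x"},{"title":"y","tags":["a","b"]}]
# without the "// ." fallback, {"title":"x"} would be dropped from the output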
{ "source": [ "https://unix.stackexchange.com/questions/370306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
370,307
I live in a remote location and internet is scarce. Whenever I do sudo poweroff I want it to try git push as well; if the push fails, abort the shutdown, otherwise continue with it. How can I achieve this without modifying the binary?
{ "source": [ "https://unix.stackexchange.com/questions/370307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108755/" ] }
370,889
I have the code file="JetConst_reco_allconst_4j2t.png" if [[ $file == *_gen_* ]]; then echo "True" else echo "False" fi I test whether the file name contains "gen". The output is "False". Nice! The problem is when I substitute "gen" with a variable testseq : file="JetConst_reco_allconst_4j2t.png" testseq="gen" if [[ $file == *_$testseq_* ]]; then echo "True" else echo "False" fi Now the output is "True". Why does this happen, and how can I fix it?
You need to delimit the $testseq variable in one of the following ways: $file == *_"$testseq"_* (here $testseq is treated as a fixed string) $file == *_${testseq}_* (here $testseq is treated as a pattern). Otherwise, the _ immediately after the variable's name is taken as part of the variable's name, since _ is a valid character in variable names: the shell looks up the unset variable $testseq_ , which expands to nothing, so the pattern collapses to *_* and matches nearly any file name; hence the surprising "True".
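Putting it together, either corrected form now behaves as expected:
file="JetConst_reco_allconst_4j2t.png"
testseq="gen"
[[ $file == *_"$testseq"_* ]] && echo "True" || echo "False"   # prints False
[[ $file == *_${testseq}_* ]] && echo "True" || echo "False"   # prints False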
{ "source": [ "https://unix.stackexchange.com/questions/370889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226971/" ] }
370,932
I am trying to test a server that works normally in a web browser with openssl s_client ; connecting to it directly using openssl returns 400 Bad Request: openssl s_client -servername example.com -connect example.com:443 -tls1 (some information about the certificate) GET / HTTP/1.1 (and the error occurs **immediately** - no time to include more headers like Host:) Important: I already tried to add the Host: header; the thing is that when I type GET, the error occurs immediately, leaving me no chance to include more headers. (Replace example.com with my host...)
According to https://bz.apache.org/bugzilla/show_bug.cgi?id=60695 my command was: openssl s_client -crlf -connect www.pgxperts.com:443 where -crlf means, according to the openssl command's help, -crlf - convert LF from terminal into CRLF Then I could enter multi-line requests and no longer got a "bad request" response right after the first command line.
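Alternatively, you can take interactive typing out of the picture entirely by piping a complete request in, with explicit CRLF line endings; -quiet also suppresses the certificate chatter (and, if I recall correctly, implies -ign_eof so the connection isn't dropped before the response arrives):
printf 'GET / HTTP/1.1\r\nHost: www.pgxperts.com\r\nConnection: close\r\n\r\n' |
  openssl s_client -quiet -connect www.pgxperts.com:443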
{ "source": [ "https://unix.stackexchange.com/questions/370932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170792/" ] }
370,941
I need to edit the page number of multiple URL links in a text file, e.g.: http://gk4success.com/questions.php?page= 1 &parent=0&lang=2&c-id=27&q_type= ... http://gk4success.com/questions.php?page= 162 &parent=0&lang=2&c-id=27&q_type= There are 162 links on the web site and I cannot edit the links 162 times. Even if I copy the line 162 times, how do I edit the page numbers easily? Is there an easy way to do it with a text editor?
{ "source": [ "https://unix.stackexchange.com/questions/370941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185795/" ] }
370,954
I am attempting to write a bash script that will insert a string after matching on a string in /usr/lib/systemd/system/neutron-server.service I have been able to do this on other files easily as I was just insert variables into neccessary config files, but this one seems to be giving me trouble. I believe the error is that sed is not ignoring the special characters. In my attempt I have tried using sed of single quotes and double quotes (which I understand are for variables, but thought it might change something. Is there a better way of going about this or some special sed flags or syntax I am missing? sed ‘/--config-file /etc/neutron/plugin.ini/a\--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini‘ /usr/lib/systemd/system/neutron-server TL;DR - Insert --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini After --config-file /etc/neutron/plugin.ini Orginial File [Unit] Description=OpenStack Neutron Server After=syslog.target network.target [Service] Type=notify User=neutron ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron- dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server - -log-file /var/log/neutron/server.log PrivateTmp=true NotifyAccess=all KillMode=process TimeoutStartSec="infinity" [Install] WantedBy=multi-user.target File after desired change command. [Unit] Description=OpenStack Neutron Server After=syslog.target network.target [Service] Type=notify User=neutron ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron- dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config- file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server - -log-file /var/log/neutron/server.log PrivateTmp=true NotifyAccess=all KillMode=process TimeoutStartSec="infinity" [Install] WantedBy=multi-user.target
{ "source": [ "https://unix.stackexchange.com/questions/370954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235932/" ] }
371,014
I have a laptop with Debian on it, and I am going to sell it. Would erasing the Debian installation before selling suffice to completely remove my personal data from the laptop? And if so, how can I uninstall Debian (so that there isn't any operating system left on the laptop)?
This nixCraft post explains how to erase a hard disk. The secure removal of data is not as easy as you may think. When you delete a file using the default commands of the operating system (for example “rm” in Linux/BSD/MacOS/UNIX or “del” in DOS, or emptying the recycle bin in WINDOWS), the operating system does NOT delete the file; the contents of the file remain on your hard disk. The only way to make recovering your sensitive data nearly impossible is to overwrite (“wipe” or “shred”) the data with several defined patterns. To erase a hard disk permanently, you can use the standard dd command. However, I recommend using the shred , wipe or scrub commands. Warning : Check that the correct drive or partition has been targeted. Targeting the wrong drive or partition will result in data loss . Under no circumstances can we be held responsible for total or partial data loss, so please be careful with disk names. YOU HAVE BEEN WARNED! Erase disk permanently using a live Linux CD First, download a Knoppix Live Linux CD or SystemRescueCd live CD. Next, burn the live CD and boot your laptop or desktop from it. You can now wipe any disk, including Windows, Linux, Mac OS X or Unix-like systems. 1. How do I use the shred command? shred was originally designed to delete a file securely, first overwriting it to hide its contents. However, the same command can be used to erase a hard disk. For example, if your hard drive is named /dev/sda, then type the following command: # shred -n 5 -vz /dev/sda Where, -n 5: Overwrite 5 times instead of the default (3 passes in current GNU coreutils; old versions defaulted to 25). -v : Show progress. -z : Add a final overwrite with zeros to hide shredding. The command is the same for an IDE hard disk hda (PC/Windows first hard disk connected to IDE): # shred -n 5 -vz /dev/hda Note (comment from @Gilles): Replace shred -n 5 by shred -n 1 or by cat /dev/zero. Multiple passes are not useful unless your hard disk uses 1980s technology. This example uses shred and /dev/urandom as the source of random data: # shred -v --random-source=/dev/urandom -n1 /dev/DISK/TO/DELETE # shred -v --random-source=/dev/urandom -n1 /dev/sda 2. How to use the wipe command You can use the wipe command to delete any file, including disks: # wipe -D /path/to/file.doc 3. How to use the scrub command You can use a disk scrubbing program such as scrub. It overwrites hard disks, files, and other devices with repeating patterns intended to make recovering data from these devices more difficult. Although physical destruction is unarguably the most reliable method of destroying sensitive data, it is inconvenient and costly. For certain classes of data, organizations may be willing to do the next best thing, which is to scribble on all the bytes until retrieval would require heroic efforts in a lab. scrub implements several different algorithms. The syntax is: # scrub -p nnsa|dod|bsi|old|fastold|gutmann|random|random2 fileNameHere To erase /dev/sda, enter: # scrub -p dod /dev/sda 4. Use the dd command to securely wipe a disk Wiping a disk is done by writing new data over every single bit. The dd command can be used as follows: # dd if=/dev/urandom of=/dev/DISK/TO/WIPE bs=4096 To wipe the /dev/sda disk, enter: # dd if=/dev/urandom of=/dev/sda bs=4096 5. How do I securely wipe a drive/partition using a randomly-seeded AES cipher from OpenSSL? You can use the openssl and pv commands to securely erase the disk too.
First, get the total /dev/sda disk size in bytes: # blockdev --getsize64 /dev/sda 399717171200 Next, type the following command to wipe a /dev/sda disk: # openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv -bartpes 399717171200 | dd bs=64K of=/dev/sda 6. How to use badblocks command to securely wipe disk The syntax is: # badblocks -c BLOCK_SIZE_HERE -wsvf /dev/DISK/TO/WIPE # badblocks -wsvf /dev/DISK/TO/WIPE # badblocks -wsvf /dev/sda
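After a zero-wipe (the dd if=/dev/zero variant, or shred 's final -z pass), one slow but simple sanity check is to compare the device against /dev/zero; cmp stops at the first differing byte, so reaching end-of-device means every byte read back as zero. Note this tells you nothing after a random overwrite:
cmp /dev/zero /dev/sda
# "cmp: EOF on /dev/sda" means the whole device is zeros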
{ "source": [ "https://unix.stackexchange.com/questions/371014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
371,062
I'm running into weird behavior when trying to grep a man page on macOS. For example, the Bash man page clearly has an occurrence of the string NAME : $ man bash | head -5 | tail -1 NAME And if I grep for name I do get results, but if I grep for NAME I don't: $ man bash | grep 'NAME' $ man bash | grep NAME I've tried other uppercase words that I know are in there, and searching for SHELL yields nothing whereas searching for BASH yields results. What's going on here? Update : Thanks for all the answers! I thought it worth adding the context in which I ran into this. I wanted to write a bash function to wrap man and in cases where I've tried to look up the man page for a shell builtin, jump to the relevant section of the Bash man page. There might be a better way, but here's what I've got currently: man () { case "$(type -t "$1")" in builtin) local pattern="^ *$1" if bashdoc_match "$pattern \+[-[]"; then command man bash | less --pattern="$pattern +[-[]" elif bashdoc_match "$pattern\b"; then command man bash | less --pattern="$pattern[[:>:]]" else command man bash fi ;; keyword) command man bash | less --hilite-search --pattern='^SHELL GRAMMAR$' ;; *) command man "$@" ;; esac } bashdoc_match() { command man bash | col -b | grep -l "$1" > /dev/null }
If you add a | sed -n l to that tail command, to show non-printable characters, you'll probably see something like: N\bNA\bAM\bME\bE That is, each character is written as X Backspace X . On modern terminals, the character ends up being written over itself (as Backspace aka BS aka \b aka ^H is the character that moves the cursor one column to the left) with no difference. But in ancient tele-typewriters, that would cause the character to appear in bold as it gets twice as much ink. Still, pagers like more / less do understand that format to mean bold, so that's still what roff does to output bold text. Some man implementations would call roff in a way that those sequences are not used (or internally call col -b -p -x to strip them like in the case of the man-db implementation (unless the MAN_KEEP_FORMATTING environment variable is set)), and don't invoke a pager when they detect the output is not going to a terminal (so man bash | grep NAME would work there), but not yours. You can use col -b to remove those sequences (there are other types ( _ BS X ) as well for underline). For systems using GNU roff (like GNU or FreeBSD), you can avoid those sequences being used in the first place by making sure the -c -b -u options are passed to grotty , for instance by making sure the -P-cbu options is passed to groff . For instance by creating a wrapper script called groff containing: #! /bin/sh - exec /usr/bin/groff -P-cbu "$@" That you put ahead of /usr/bin/groff in $PATH . With macOS' man (also using GNU roff ), you can create a man-no-overstrike.conf with: NROFF /usr/bin/groff -mandoc -Tutf8 -P-cbu And call man as: man -C man-no-overstrike.conf bash | grep NAME Still with GNU roff , if you set the GROFF_SGR environment variable (or don't set the GROFF_NO_SGR variable depending on how the defaults have been set at compile time), then grotty (as long as it's not passed the -c option) will use ANSI SGR terminal escape sequences instead of those BS tricks for character attributes. less understand them when called with the -R option. FreeBSD's man calls grotty with the -c option unless you're asking for colours by setting the MANCOLOR variable (in which case -c is not passed to grotty and grotty reverts to the default of using ANSI SGR escape sequences there). MANCOLOR=1 man bash | grep NAME will work there. On Debian, GROFF_SGR is not the default. If you do: GROFF_SGR=1 man bash | grep NAME however, because man 's stdout is not a terminal, it takes it upon itself to also pass a GROFF_NO_SGR variable to grotty (I suppose so it can use col -bpx to strip the BS sequences as col doesn't know how to strip the SGR sequences, even though it still does it with MAN_KEEP_FORMATTING ) which overrides our GROFF_SGR . You can do instead: GROFF_SGR=1 MANPAGER='grep NAME' man bash (in a terminal) to have the SGR escape sequences. That time, you'll notice that some of those NAME s do appear in bold on the terminal (and in a less -R pager). If you feed the output to sed -n l ( MANPAGER='sed -n /NAME/l' ), you'll see something like: \033[1mNAME\033[0m$ Where \e[1m is the sequence to enable bold in ANSI compatible terminals, and \e[0m the sequence to revert all SGR attributes to the default. On that text grep NAME works as that text does contain NAME , but you could still have problems if looking for text where only parts of it is in bold/underline...
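So, for the immediate symptom in the question, the portable quick fix is simply to strip the overstriking before grepping, as the question's own bashdoc_match helper already does:
man bash | col -b | grep NAME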
{ "source": [ "https://unix.stackexchange.com/questions/371062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
371,150
I used Linux a bit in college, and am familiar with the terms. I develop in .NET languages regularly, so I'm not computer illiterate. That said, I can't really say I understand the "compile it yourself" [CIY] mentality that exists in *nix circles. I know it's going away, but still hear it from time to time. As a developer, I know that setting up compilers and necessary dependencies is a pain in the butt, so I feel like CIY work flows have helped to make *nix a lot less accessible. What social or technical factors led to the rise of the CIY mentality?
Very simply, for much of the history of *nix, there was no other choice. Programs were distributed as source tarballs and the only way you had of using them was to compile from source. So it isn't so much a mentality as a necessary evil. That said, there are very good reasons to compile stuff yourself since they will then be compiled specifically for your hardware, you can choose what options to enable or not and you can therefore end up with a fine tuned executable, just the way you like it. That, however, is obviously only something that makes sense for expert users and not for people who just want a working machine to read their emails on. Now, in the Linux world, the main distributions have all moved away from this many years ago. You very, very rarely need to compile anything yourself these days unless you are using a distribution that is specifically designed for people who like to do this like Gentoo. For the vast majority of distributions, however, your average user will never need to compile anything since pretty much everything they'll ever need is present and compiled in their distribution's repositories. So this CIY mentality as you call it has essentially disappeared. It may well still be alive and kicking in the UNIX world, I have no experience there, but in Linux, if you're using a popular distribution with a decent repository, you will almost never need to compile anything yourself.
{ "source": [ "https://unix.stackexchange.com/questions/371150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48548/" ] }
371,901
I'm using openssh7.5p1 and gnupg 2.1.21 on arch linux (these are the default versions that come with arch). I would like to use gpg-agent as an ssh agent. I put the following in my ~/.gnupg/gpg-agent.conf : pinentry-program /usr/bin/pinentry-qt enable-ssh-support Arch automatically starts a gpg-agent from systemd, so I set export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/gnupg/S.gpg-agent.ssh" When I run ssh-add -l , it reports no identities and ps reports a gpg-agent --supervised process as I would expect. Unfortunately, when I run ssh-add , no matter what the key type, it doesn't work. Here is an example of how I tried dsa: $ ssh-keygen -f testkey -t dsa -N '' Generating public/private dsa key pair. Your identification has been saved in testkey. Your public key has been saved in testkey.pub. $ ssh-add testkey Could not add identity "testkey": agent refused operation All other gpg functions work properly (encrypting/decrypting/signing). Also, the keys I generate work fine if I use them directly with ssh, and they work properly if I run the ssh-agent that came with openssh. The documentation says that ssh-add should add keys to ~/.gnupg/sshcontrol , but obviously nothing is happening. My question: What's the easiest way to load a key generated by openssh's ssh-keygen into gpg-agent , and can someone please cut and paste a terminal session showing how this works?
The answer was apparently to run: echo UPDATESTARTUPTTY | gpg-connect-agent I have no idea why the pinentry program worked fine for other uses such as decrypting files, but didn't work for ssh-add . While this now works, it also makes a copy of the ssh private key that doesn't show up under gpg -Kv , and furthermore doesn't seem to allow you to change the passphrase on your private key (since you can't edit it with --edit-key ). Basically I'm pretty unhappy with the way gpg-agent provides low visibility into where your secrets are being copied. If you hit this question because you hoped gpg-agent might be a better alternative to ssh-agent , then I'd encourage you to stick to ssh-agent instead of trying out my answer. The main reason to prefer gpg-agent is if you need it for smart-card use.
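For anyone who does want to keep this setup: the recipe usually suggested for GnuPG 2.1 is to put something like the following in your shell startup file, so the socket path isn't hard-coded and the tty is refreshed on every new login (a sketch; gpgconf --list-dirs agent-ssh-socket should print the same path used in the question):
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpg-connect-agent updatestartuptty /bye >/dev/null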
{ "source": [ "https://unix.stackexchange.com/questions/371901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121912/" ] }
372,388
The Linux ps command shows different memory usages like RSS ( resident set size ), size in kB by default. Is there a way to show in MB or GB, like ls -s --human-readable does?
AFAIK you cannot achieve it with the pure ps command and its options alone. However, you can use a text processor like awk and make it do what you want: ps afu | awk 'NR>1 {$5=int($5/1024)"M";}{ print;}' This takes the output of ps and then, for every line except the first one, replaces the 5th column, which is normally in KB, with a value in MB, adding an M suffix. You can make it an alias and store it in your .bashrc file so you can call it by something like myps . Many people ask how to preserve the format or use other units and precision. For a simple version you can use the column -t output filter: ps afu | awk 'NR>1 {$5=int($5/1024)"M";}{ print;}' | column -t This, however, does not handle spaces in the last column correctly. Unfortunately we've got to deal with the text formatting ourselves and prepare our own printf -like format string. ps afu | awk 'NR==1 {o=$0; a=match($0,$11);}; NR>1 {o=$0;$5=int(10*$5/1024)/10"M";}{ printf "%-8s %6s %-5s %-5s %9s %9s %-8s %-4s %-6s %-5s %s\n", $1, $2, $3, $4, $5, $6, $7, $8, $9, $10, substr(o, a);}' Explanation: The NR==1 condition is for the first line only (the header). We use the original ps output to determine where COMMAND starts: o=$0 stores the entire unmodified line so we can use it later a=match($0,$11) finds the location of the 11th field (which should be where the COMMAND column starts in the original output) NR>1 is for the following lines (data). We change the 5th field: $5=int(10*$5/1024)/10"M" converts the value into megabytes with one decimal place and adds an "M" suffix. printf displays all fields in a column-like fashion: %-10s means s for string, 10 for 10 characters wide, - for left alignment %8s means s for string, 8 for 8 characters wide, and because there is no - the output of this field is right-aligned. substr(o, a) takes the substring of the original line (hence o stored before) starting from position a calculated in the previous condition, so the command output is displayed with spaces preserved.
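As for storing it in .bashrc : the nested quoting makes an alias awkward, so a function is easier, for example:
# in ~/.bashrc
myps() { ps afu | awk 'NR>1 {$5=int($5/1024)"M";}{ print;}'; }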
{ "source": [ "https://unix.stackexchange.com/questions/372388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2832/" ] }
372,590
Today if I do $ yum remove packageA I am greeted with: Removing: packageA noarch 3.5.1.b37-15 @yumFS 293 k Removing for dependencies: packageB noarch 3.5.1.b125-7 @yumFS 87 M .. Is this ok? I would like to remove packageA without removing packageB (etc) is this possible?
It appears possible by using rpm: $ rpm -e --nodeps packageA Obviously, be very careful: if you remove a dependency package and don't put it back, that can lead to unexpected results for the still-installed packages that depend on it and expect it to be present...
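Before pulling that trigger, it's worth asking rpm what still depends on the package. Something like the following usually works, with the caveat that it matches the package name as a capability and so can miss file-based or virtual-provides dependencies:
rpm -q --whatrequires packageA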
{ "source": [ "https://unix.stackexchange.com/questions/372590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8337/" ] }
372,627
As the title says, I want to be able to change environment variables in a parent process (specifically, a shell) from a child process (typically a script). From a pseudo-terminal /dev/pts/id I am trying to export key=value from the child script, so the exported variables would have to be passed somehow to the parent, if that is possible. echo ing cmd > /proc/$$/fd/0 doesn't execute cmd ; it only displays the command in the terminal emulator, and of course using $(cmd) instead of cmd executes in a subshell, where export doesn't add variables to the parent process. I would prefer that all the work be done on the child side. I was asked in the comments what I am trying to achieve. That is a general question, and I'm trying to use a positive answer to pass variables from a script executed (spawned) by a (parent) shell, so that the user can benefit from the added variables without any further work. For example, I would like a script to install an application and have the application directory added to the parent shell's PATH.
No , it's not possible Not without some kind of workaround . Environment variables can only be passed from parent to child (as part of environment export/inheritance), not the other way around.
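The usual workarounds all amount to letting the parent perform the assignment itself. Two common patterns, sketched for the install-script/PATH example from the question (script name and path are made up):
# 1. source the script instead of executing it, so it runs in the parent shell:
. ./install-app.sh

# 2. have the child print assignments, and let the parent eval its output:
#    in install-app.sh:
echo 'export PATH="$PATH:/opt/myapp/bin"'
#    in the parent shell:
eval "$(./install-app.sh)"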
{ "source": [ "https://unix.stackexchange.com/questions/372627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189097/" ] }
372,633
HFS+ formatted drive connected to an Ubuntu box via a SATA/USB cradle. No issues are reported for the partition by fsck.hfsplus. Attempting to run "ls" (or anything else) on the affected files results in "no such file or directory". Running "ls -lh" on the container folder throws the same complaint but still shows the file in the list, but with the following format: -rw-r--r-- 1 501 dialout 53M Mar 4 15:26 normal_file -????????? ? ? ? ? ? uncooperative_file I'm not concerned about the 501:dialout ownership of the other files (the drive is from a different machine). There are a few files that are being affected by this. They only seem to be files with Unicode and/or Emoji in the name. I've tried: "ls" with the "-b" and "-q" options, but neither has revealed anything "ls -lh > ~/tmp.txt" and editing in "vi" in an attempt to detect extraneous bytes in the name "chown root:root filename" "chmod 644 filename" The file shows up in the output of "ls" and tab-completion fills it in as well. But any sort of actual interaction fails. Anyone able to offer some guidance? Ultimately, I want to be able to rsync/scp these files to another box (which unfortunately doesn't play nicely with the drive cradle) and I figured being able to ls/mv would be a good starting point. EDIT: Using bash. Tab-completion fills in the full filename, though with some '???' in the place of certain characters (unsure of the original chars at this point). Locale on the source box: LANG=en_CA.UTF-8 LANGUAGE=en_CA:en LC_CTYPE="en_CA.UTF-8" LC_NUMERIC="en_CA.UTF-8" LC_TIME="en_CA.UTF-8" LC_COLLATE="en_CA.UTF-8" LC_MONETARY="en_CA.UTF-8" LC_MESSAGES="en_CA.UTF-8" LC_PAPER="en_CA.UTF-8" LC_NAME="en_CA.UTF-8" LC_ADDRESS="en_CA.UTF-8" LC_TELEPHONE="en_CA.UTF-8" LC_MEASUREMENT="en_CA.UTF-8" LC_IDENTIFICATION="en_CA.UTF-8" LC_ALL=
No , it's not possible Not without some kind of workaround . Environment variables can only be passed from parent to child (as part of environment export/inheritance), not the other way around.
{ "source": [ "https://unix.stackexchange.com/questions/372633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26480/" ] }
372,850
Is it possible to run a command with parameters first of which starts with - (dash) e.g. /usr/bin/echo -n foo as different user and group, for example apache:apache using command su when login shell is set to /sbin/nologin ? I tried: su -s "/usr/bin/echo" -g apache apache -n foo fails with su: invalid option -- 'n' . It looks like first argument may not start with dash. su -c "/usr/bin/echo -n foo" -g apache apache fails with nologin: invalid option -- 'c' . It looks like -c can't be used if login shell is /sbin/nologin
su -s /bin/bash -c "/usr/bin/echo -n foo" -g apache apache -s /bin/bash overrides nologin , so the value of the -c option can be interpreted -c "/usr/bin/echo -n foo" avoids passing a first argument that starts with a dash
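If sudo is available and its policy allows it, the equivalent is arguably simpler, since sudo 's -u / -g options don't care about the target account's login shell at all:
sudo -u apache -g apache /usr/bin/echo -n foo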
{ "source": [ "https://unix.stackexchange.com/questions/372850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46334/" ] }
373,063
Long story short, I need to perform this all automatically on boot (embedded system). Our engineers will flash images to production devices. These images will contain a small partition table. On boot, I need to automatically expand the last partition (#3) to use all the available space on the disk. Here is what I get when I look at the free space on my disk. > parted /dev/sda print free Model: Lexar JumpDrive (scsi) Disk /dev/sda: 32.0GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 17.4kB 1049kB 1031kB Free Space 1 1049kB 25.3MB 24.2MB fat16 primary legacy_boot 25.3MB 26.2MB 922kB Free Space 2 26.2MB 475MB 449MB ext4 primary 3 475MB 1549MB 1074MB ext4 primary 1549MB 32.0GB 30.5GB Free Space I need to expand partition 3 by N (30.5GB) number of bytes How do I perform this step automatically, with no prompt? This needs to work with a dynamic size of space available after the 3rd partition.
In current versions of parted , resizepart should work for the partition ( parted understands 100% or things like -1s ; the latter also needs -- to stop parsing options on the command line). To determine the exact value you can use unit s , print free . resize2fs comes afterwards for the filesystem. Old versions of parted had a resize command that would resize both partition and filesystem in one go; it even worked for vfat . In a Kobo ereader modification I used this to resize the 3rd partition of internal memory to the maximum (it blindly assumes an msdos partition table, no 4th partition, and so on): start=$(cat /sys/block/mmcblk0/mmcblk0p3/start) end=$(($start+$(cat /sys/block/mmcblk0/mmcblk0p3/size))) newend=$(($(cat /sys/block/mmcblk0/size)-8)) if [ "$newend" -gt "$end" ] then parted -s /dev/mmcblk0 unit s resize 3 $start $newend fi So you can also obtain the values from /sys/block/.../ if the kernel supports it. But parted removed the resize command, so you have to do two steps now: resizepart to grow the partition, and whatever tool your filesystem provides to grow that, like resize2fs for ext* .
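So for the layout in the question, the modern two-step version would be roughly this (an untested sketch; the -- stops option parsing as noted above, and resize2fs then grows the ext4 filesystem into the enlarged partition):
parted -s /dev/sda -- resizepart 3 100%
resize2fs /dev/sda3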
{ "source": [ "https://unix.stackexchange.com/questions/373063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87529/" ] }
373,095
A month ago I wrote a Python script to map MAC and IP addresses from stdin. And two days ago I remembered it and used to filter output of tcpdump but it went wrong because of a typo. I typed tcpdump -ne > ./mac_ip.py and the output is nothing. But the output should be "Unknown" if it can't parse the input, so I did cat ./mac_ip.py and found all the tcpdump data instead of the program. Then I realized that I should use tcpdump -ne | ./mac_ip.py Is there any way to get my program back? Anyways I can write my program again, but if it happens again with more important program I should be able to do something. OR is there any way to tell output redirection to check for the file and warn if it is an executable?
Sadly I suspect you'll need to rewrite it. (If you have backups, this is the time to get them out. If not, I would strongly recommend you set up a backup regime for the future. Lots of options available, but off topic for this answer.) I find that putting executables in a separate directory, and adding that directory to the PATH is helpful. This way I don't need to reference the executables by explicit path. My preferred programs directory for personal (private) scripts is "$HOME"/bin and it can be added to the program search path with PATH="$HOME/bin:$PATH" . Typically this would be added to the shell startup scripts .bash_profile and/or .bashrc . Finally, there's nothing stopping you removing write permission for yourself on all executable programs: touch some_executable.py chmod a+x,a-w some_executable.py # chmod 555, if you prefer ls -l some_executable.py -r-xr-xr-x+ 1 roaima roaima 0 Jun 25 18:33 some_executable.py echo "The hunting of the Snark" > ./some_executable.py -bash: ./some_executable.py: Permission denied
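One more belt-and-braces measure for the future: the shell's noclobber option makes > refuse to truncate an existing file, which would have turned this exact mishap into a harmless error:
set -o noclobber     # e.g. in ~/.bashrc; use >| when you really do want to overwrite
tcpdump -ne > ./mac_ip.py
# bash: ./mac_ip.py: cannot overwrite existing file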
{ "source": [ "https://unix.stackexchange.com/questions/373095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237535/" ] }
373,186
I have a script which runs in the background (without terminal windows or TTY) and occasionally does something. I now want it to do another thing when it does something and it is to open a Gnome Terminal window and execute 2 commands. Actually, I only need to execute 1 command but I want to make the terminal window stay open so I can see the output of the command. The command prints both on stdout and stderr and its output changes over time, so just writing it to a file and sending some kind of notification wouldn't do the job very well. I can get Gnome Terminal to open a window and execute 1 command: gnome-terminal -e "sleep 10" I chose sleep as the long-running command for simplicity. However, when adding another command, no terminal window opens: gnome-terminal -e "echo test; sleep 10" What's the solution to this?
gnome-terminal treats everything in quotes as one command, so in order to run several of them consecutively you need to start an interpreter (usually a shell) and do the work inside it, for instance: gnome-terminal -e 'sh -c "echo test; sleep 10"' BTW, you may want the window to stay open even after the commands finish their job; in that case just start a new shell, or replace the current one with a new one: gnome-terminal -e 'sh -c "echo test; sleep 10; exec bash"'
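Note that newer gnome-terminal releases deprecate -e ; there the same idea is written with -- followed by the command and its arguments (exact version cut-off from memory, somewhere around 3.24):
gnome-terminal -- sh -c "echo test; sleep 10; exec bash"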
{ "source": [ "https://unix.stackexchange.com/questions/373186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
373,223
Suppose the default shell for my account is zsh but I opened the terminal and fired up bash and executed a script named prac002.sh , which shell interpreter would be used to execute the script, zsh or bash? Consider the following example: papagolf@Sierra ~/My Files/My Programs/Learning/Shell % sudo cat /etc/passwd | grep papagolf [sudo] password for papagolf: papagolf:x:1000:1001:Rex,,,:/home/papagolf:/usr/bin/zsh # papagolf's default shell is zsh papagolf@Sierra ~/My Files/My Programs/Learning/Shell % bash # I fired up bash. (See that '%' prompt in zsh changes to '$' prompt, indicating bash.) papagolf@Sierra:~/My Files/My Programs/Learning/Shell$ ./prac002.sh Enter username : Rex Rex # Which interpreter did it just use? **EDIT : ** Here's the content of the script papagolf@Sierra ~/My Files/My Programs/Learning/Shell % cat ./prac002.sh read -p "Enter username : " uname echo $uname
Because the script does not begin with a #! shebang line indicating which interpreter to use, POSIX says that : If the execl() function fails due to an error equivalent to the [ENOEXEC] error defined in the System Interfaces volume of POSIX.1-2008, the shell shall execute a command equivalent to having a shell invoked with the pathname resulting from the search as its first operand , with any remaining arguments passed to the new shell, except that the value of "$0" in the new shell may be set to the command name. If the executable file is not a text file, the shell may bypass this command execution. In this case, it shall write an error message, and shall return an exit status of 126. That phrasing is a little ambiguous, and different shells have different interpretations. In this case, Bash will run the script using itself . On the other hand, if you ran it from zsh instead, zsh would use sh (whatever that is on your system) instead. You can verify that behaviour for this case by adding these lines to the script: echo $BASH_VERSION echo $ZSH_VERSION You'll note that, from Bash, the first line outputs your version, while the second never says anything, no matter which shell you use. If your /bin/sh is, say, dash , then neither line will output anything when the script is executed from zsh or dash. If your /bin/sh is a link to Bash, you'll see the first line output in all cases. If /bin/sh is a different version of Bash than you were using directly, you'll see different output when you run the script from bash directly and from zsh. The ps -p $$ command from rools's answer will also show useful information about the command the shell used to execute the script.
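The practical takeaway is to give the script a shebang so the question never arises, e.g.:
#!/bin/bash
read -p "Enter username : " uname
echo "$uname"
# read -p is a bash/ksh extension rather than POSIX, which is exactly why
# pinning the interpreter matters for this particular script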
{ "source": [ "https://unix.stackexchange.com/questions/373223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152971/" ] }
373,461
Is there a way to open less and have it scroll to the end of the file? I'm always doing less app.log and then pressing G to go to the bottom. I'm hoping there's something like less --end or less -exec 'G' .
less +G app.log + will run an initial command when the file is opened G jumps to the end When multiple files are in play, ++ applies commands to every file being viewed. Not just the first one. For example, less ++G app1.log app2.log .
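A related trick if the log is still being written to:
less +F app.log    # follow mode, like tail -f; Ctrl-C drops back to normal less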
{ "source": [ "https://unix.stackexchange.com/questions/373461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123650/" ] }
373,704
Problem : Find how many shells deep I am. Details : I open the shell from vim a lot. Build and run and exit. Sometimes I forget and open another vim inside and then yet another shell. :( I want to know how many shells deep I am, perhaps even have it on my shell screen at all times. (I can manage that part). My solution : Parse the process tree and look for vim and bash/zsh and figure out the current process's depth within it. Does something like that already exist? I could not find anything.
When I read your question, my first thought was $SHLVL .  Then I saw that you wanted to count vim levels in addition to shell levels.  A simple way to do this is to define a shell function: vim() { ( ((SHLVL++)); command vim "$@");} This will automatically and silently increment SHLVL each time you type a vim command.  You will need to do this for each variant of vi / vim that you ever use; e.g., vi() { ( ((SHLVL++)); command vi "$@");} view() { ( ((SHLVL++)); command view "$@");} The outer set of parentheses creates a subshell, so the manual change in the value of SHLVL doesn’t contaminate the current (parent) shell environment.  Of course the command keyword is there to prevent the functions from calling themselves (which would result in an infinite recursion loop).  And of course you should put these definitions into your .bashrc or other shell initialization file. There’s a slight inefficiency in the above.  In some shells (bash being one), if you say ( cmd 1 ; cmd 2 ; … ; cmd n ) where cmd n is an external, executable program (i.e., not a built-in command), the shell keeps an extra process lying around, just to wait for cmd n to terminate.  This is (arguably) not necessary; the advantages and disadvantages are debatable.  If you don’t mind tying up a bit of memory and a process slot (and to seeing one more shell process than you need when you do a ps ), then do the above and skip to the next section.  Ditto if you’re using a shell that doesn’t keep the extra process lying around.  But, if you want to avoid the extra process, a first thing to try is vim() { ( ((SHLVL++)); exec vim "$@");} The exec command is there to prevent the extra shell process from lingering. But, there’s a gotcha.  The shell’s handling of SHLVL is somewhat intuitive: When the shell starts, it checks whether SHLVL is set.  If it’s not set (or set to something other than a number), the shell sets it to 1.  If it is set (to a number), the shell adds 1 to it. But, by this logic, if you say exec sh , your SHLVL should go up.  But that’s undesirable, because your real shell level hasn’t increased.  The shell handles this by subtracting one from SHLVL when you do an exec : $ echo "$SHLVL" 1 $ set | grep SHLVL SHLVL=1 $ env | grep SHLVL SHLVL=1 $ (env | grep SHLVL) SHLVL=1 $ (env) | grep SHLVL SHLVL=1 $ (exec env) | grep SHLVL SHLVL=0 So vim() { ( ((SHLVL++)); exec vim "$@");} is a wash; it increments SHLVL only to decrement it again. You might as well just say vim , without benefit of a function. Note: According to Stéphane Chazelas (who knows everything) , some shells are smart enough not to do this if the exec is in a subshell. To fix this, you would do vim() { ( ((SHLVL+=2)); exec vim "$@");} Then I saw that you wanted to count vim levels independently of shell levels.  Well, the exact same trick works (well, with a minor modification): vim() { ( ((SHLVL++, VILVL++)); export VILVL; exec vim "$@");} (and so on for vi , view , etc.)  The export is necessary because VILVL isn’t defined as an environment variable by default.  But it doesn’t need to be part of the function; you can just say export VILVL as a separate command (in your .bashrc ).  And, as discussed above, if the extra shell process isn’t an issue for you, you can do command vim instead of exec vim , and leave SHLVL alone: vim() { ( ((VILVL++)); command vim "$@");} Personal Preference: You may want to rename VILVL to something like VIM_LEVEL .  
When I look at “ VILVL ”, my eyes hurt; they can’t tell whether it’s a misspelling of “vinyl” or a malformed Roman numeral. If you are using a shell that doesn’t support SHLVL (e.g., dash), you can implement it yourself as long as the shell implements a startup file.  Just do something like if [ "$SHELL_LEVEL" = "" ] then SHELL_LEVEL=1 else SHELL_LEVEL=$(expr "$SHELL_LEVEL" + 1) fi export SHELL_LEVEL in your .profile or applicable file.  (You should probably not use the name SHLVL , as that will cause chaos if you ever start using a shell that supports SHLVL .) Other answers have addressed the issue of embedding environment variable value(s) into your shell prompt, so I won’t repeat that, especially since you say you already know how to do it.
{ "source": [ "https://unix.stackexchange.com/questions/373704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235096/" ] }
373,718
So I just installed and ran rkhunter which shows me green OKs / Not founds for everything except for: /usr/bin/lwp-request , like so: /usr/bin/lwp-request [ Warning ] In the log it says: Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: Perl script text executable I already ran rkhunter --propupd and sudo apt-get update && sudo apt-get upgrade which didn't help. I installed Debian 9.0 just a few days ago and am a newcomer to Linux. Any suggestions on what to do? Edit : Furthermore chkrootkit gives me this: The following suspicious files and directories were found: /usr/lib/mono/xbuild-frameworks/.NETPortable /usr/lib/mono/xbuild-frameworks/.NETPortable/v5.0/SupportedFrameworks/.NET Framework 4.6.xml /usr/lib/mono/xbuild-frameworks/.NETFramework /usr/lib/python2.7/dist-packages/PyQt5/uic/widget-plugins/.noinit /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/mono/xbuild-frameworks/.NETPortable /usr/lib/mono/xbuild-frameworks/.NETFramework I guess that's a separate question? Or is this no issue at all? I don't know how to check if these files/directories are ok and needed. Edit : Note I once also got warnings for "Checking for passwd file changes" and "Checking for group file changes" even though I didn't change any such afaik. An earlier and later scan showed no warnings - these just showed once. Any ideas?
rkhunter needs to know what package manager you are using. Create or edit /etc/rkhunter.conf.local and add the following line: PKGMGR=DPKG If you are not on Debian or Ubuntu, then change DPKG for your actual package manager. This way, rkhunter will know to expect those executables to be scripts, and not flag the false positive. It will ensure that if the files are tampered with, then a new positive result will show.
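After changing the configuration, refresh rkhunter's file-properties database and re-run the check so the warning is re-evaluated:
sudo rkhunter --propupd
sudo rkhunter --check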
{ "source": [ "https://unix.stackexchange.com/questions/373718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
373,880
I am trying to mount HDFS on my local machine (Ubuntu) using NFS by following this link: https://www.cloudera.com/documentation/enterprise/5-2-x/topics/cdh_ig_nfsv3_gateway_configure.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--7ef4 So, on my machine I installed nfs-common using: sudo apt-get install nfs-common Then, before mounting, I ran these commands: rpcinfo -p 192.168.170.52 program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 48435 status 100024 1 tcp 54261 status 100005 1 udp 4242 mountd 100005 2 udp 4242 mountd 100005 3 udp 4242 mountd 100005 1 tcp 4242 mountd 100005 2 tcp 4242 mountd 100005 3 tcp 4242 mountd 100003 3 tcp 2049 nfs showmount -e 192.168.170.52 Export list for 192.168.170.52: / * After that I tried mounting the HDFS using: sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/ But I was getting this error: mount.nfs: mount system call failed Then I googled the problem and installed nfs-kernel-server and portmap using sudo apt-get install nfs-kernel-server portmap After executing the above command, the output of rpcinfo -p 192.168.170.52 is: 192.168.170.52: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) and of showmount -e 192.168.170.52 is: clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) Also, the output of sudo service nfs start comes out to be: Failed to start nfs.service: Unit nfs.service not found. Please help me with this.
I was testing this issue on CentOS 7. When you encounter such a problem you have to dig deeply. The problem: clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) is related to firewall. The command showmount -e IP_server shows all mounts that are available on server. This command works fine, but you have to be careful which port to open. It does not get through the firewall if only port 2049 has been opened. If the firewall on the NFS server has been configured to let NFS traffic get in, it will still block the showmount command. To test if you disable firewall on server you should get rid of this issue. So those ports should be open on server: firewall-cmd --permanent --add-service=rpc-bind firewall-cmd --permanent --add-service=mountd firewall-cmd --permanent --add-port=2049/tcp firewall-cmd --permanent --add-port=2049/udp firewall-cmd --reload Additional test 2049/NFS port for availability. semanage port -l | grep 2049 - returns SELinux context and the service name netstat -tulpen | grep 2049
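From the client side you can confirm each port is actually reachable before retrying the mount, for example (one port per invocation to stay portable; the -z flag isn't available in every nc variant):
nc -zv 192.168.170.52 111
nc -zv 192.168.170.52 2049
nc -zv 192.168.170.52 4242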
{ "source": [ "https://unix.stackexchange.com/questions/373880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179796/" ] }
373,882
I'm trying to write a command that prints those of its parameters which correspond to regular files containing the text "main()" in any of their first 10 lines. How should I go about scanning all the files and looking at only the first 10 lines of each?
A simple loop over the positional parameters does it: for each argument, check that it is a regular file, then let grep look at only the first ten lines. for f in "$@"; do [ -f "$f" ] && head -n 10 -- "$f" | grep -qF 'main()' && printf '%s\n' "$f"; done Here head -n 10 limits the test to the first 10 lines, grep -q only sets the exit status without printing anything, and -F makes the parentheses match literally. Put the loop in a script and pass it the file names as arguments.
{ "source": [ "https://unix.stackexchange.com/questions/373882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238110/" ] }
374,066
I have a file which contains already ordered data and I'd like to re-order the file according to the values in one key, without destroying the order of the data in the other keys. How do I prevent GNU sort from performing row sorting based on the values of keys I have not specified, or how do I specify to GNU sort to ignore a range of keys when sorting? File data.txt: 1 Don't 2 C 1 Sort 2 B 1 Me 2 A Expected output: 1 Don't 1 Sort 1 Me 2 C 2 B 2 A Command: sort -k 1,1 <data.txt Result: unwanted sorting I didn't ask for: 1 Don't 1 Me 1 Sort 2 A 2 B 2 C
You need a stable sort . From man sort : -s, --stable stabilize sort by disabling last-resort comparison viz.: $ sort -sk 1,1 <data.txt 1 Don't 1 Sort 1 Me 2 C 2 B 2 A Note that you probably also want a -n or --numeric-sort if your key is numeric (for example, you may get unexpected results when comparing 10 to 2 with the default - lexical - sort order). In which case it's just a matter of doing: sort -sn <data.txt No need to extract the first field as the numeric interpretation of the whole line will be the same as the one of the first field.
{ "source": [ "https://unix.stackexchange.com/questions/374066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238243/" ] }
374,673
I've got a folder with many files (xyz1, xyz2, all the way up to xyz5025) and I need to run a script on every one of them, getting xyz1.faa, xyz2.faa, and so on as outputs. The command for a single file is: ./transeq xyz1 xyz1.faa -table 11 Is there a way to do that automatically? Maybe a for-do combo?
for file in xyz* do ./transeq "$file" "${file}.faa" -table 11 done This is a simple for loop that will iterate over every file that starts with xyz in the current directory and call the ./transeq program with the filename as the first argument, the filename followed by ".faa" as the second argument, followed by "-table 11".
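One caveat worth noting: on a second run, the xyz* glob would also match the generated xyz1.faa outputs, producing files like xyz1.faa.faa. A defensive variant skips them: for file in xyz*; do case $file in *.faa) continue ;; esac; ./transeq "$file" "${file}.faa" -table 11; done Alternatively, simply write the outputs to a separate directory.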
{ "source": [ "https://unix.stackexchange.com/questions/374673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238538/" ] }
374,748
I'm unable to update my system having two installations of Ubuntu: One is version 16.04 and the other version is 17.04. In both, I'm getting the same error. For ex., in Ubuntu 16.04, I run software updater and get the result as shown below. I did wait for some time but the updater didn't proceed ahead. Then I pressed the Stop button and it took me to the below pop-up. Then I pressed the button Install now and it took me to the next pop-up as shown below. I waited here for some time but it got stuck there. I'm unable to update in either installation. What is the solution as I can't do any update? (Also would like the viewer to see if unauthorized tampering, remotely or otherwise, can result in this error. If so, how to solve the issue?) If I fail to update, I may be compelled to take the trouble of reinstalling both the installations from scratch which I would like to avoid. Referring to the 3rd picture above that mentioned "installing updates": It did proceed ahead and updated completely. But after rebooting and running again the software updater , I came across a new issue. Now on running the software updater , it messages check your Internet connection . I've posted the question here .
I would first try a softer way. Stop the automatic updater: sudo dpkg-reconfigure -plow unattended-upgrades At the first prompt, choose not to download and install updates. Make a reboot. Make sure any packages in an unclean state are installed correctly: sudo dpkg --configure -a Get your system up-to-date: sudo apt update && sudo apt -f install && sudo apt full-upgrade Turn the automatic updater back on, now that the blockage is cleared: sudo dpkg-reconfigure -plow unattended-upgrades Select the package unattended-upgrades again.
{ "source": [ "https://unix.stackexchange.com/questions/374748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46723/" ] }
374,823
I am connected with ssh and I want to copy a directory from my local to our remote server; how can I do that? I have read several post using scp but that doesn't worked for me. Some posts suggested using rsync but in my case I just want to copy one directory.
If you want to copy a directory from machine a to b while logged into a: scp -r /path/to/directory user@machine_b_ipaddress:/path/to/destination If you want to copy a directory from machine a to b while logged into b: scp -r user@machine_a_ipaddress:/path/to/directory /path/to/destination
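If the directory is large or the transfer might be interrupted, rsync is a reasonable alternative, since it can resume and skips files that are already up to date. A minimal form, assuming rsync is installed on both ends: rsync -av /path/to/directory user@machine_b_ipaddress:/path/to/destination/ Mind the trailing-slash semantics: without a trailing slash on the source, rsync copies the directory itself into the destination; with one, it copies only the directory's contents.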
{ "source": [ "https://unix.stackexchange.com/questions/374823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238780/" ] }
375,387
I want to trace the networking activity of a command. I tried tcpdump and strace without success. For example, if I am installing a package or using any command that tries to reach some site, I want to view that networking activity (the site it tries to reach). I guess we can do this by using tcpdump. I tried, but it tracks all the networking activity of my system. Say I run multiple networking-related commands and I want to track the networking activity of only one particular command; then it is difficult to pick out the right traffic. Is there a way to do that? UPDATE: I don't want to track everything that goes on my network interface. I just want to track the networking activity of a single command (for example, # yum install -y vim), such as the site it tries to reach.
netstat for simplicity Using netstat and grepping on the PID or process name: # netstat -np --inet | grep "thunderbird" tcp 0 0 192.168.134.142:45348 192.168.138.30:143 ESTABLISHED 16875/thunderbird tcp 0 0 192.168.134.142:58470 192.168.138.30:443 ESTABLISHED 16875/thunderbird And you could use watch for dynamic updates: watch 'netstat -np --inet | grep "thunderbird"' With: -n : Show numerical addresses instead of trying to determine symbolic host, port or user names -p : Show the PID and name of the program to which each socket belongs. --inet : Only show raw, udp and tcp protocol sockets. strace for verbosity You said you tried the strace tool, but did you try the option trace=network ? Note that the output can be quite verbose, so you might need some grepping. You could start by grepping on "sin_addr". strace -f -e trace=network <your command> 2>&1 | grep sin_addr Or, for an already running process, use the PID: strace -f -e trace=network -p <PID> 2>&1 | grep sin_addr
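On newer systems netstat may be absent; ss from iproute2 gives the same view. A rough equivalent of the commands above: ss -tunp | grep "thunderbird" where -t/-u select TCP/UDP sockets, -n skips name resolution, and -p shows the owning process (which needs the same privileges as with netstat).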
{ "source": [ "https://unix.stackexchange.com/questions/375387", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234635/" ] }
375,392
Hi I'm trying out below command to match month and day (of 6 days ago, which is Jun 29) to search a directory using AWK, but the result is always '0' instead it is supposed to be around 1800. ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +"\"%b\"")", -v day="$(date --date="6 days ago" +%d)" '$6 ==month && $7==day {print $9}'|wc -l tried this also ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +%b)", -v day="$(date --date="6 days ago" +%d)" '$6 ==month && $7==day {print $9}'|wc -l but it is working if I hardcode Month ls -ltr /test/output|awk -v month="$(date --date="6 days ago" +"\"%b\"")", -v day="$(date --date="6 days ago" +%d)" '$6 =="Jun" && $7==day {print $9}'|wc -l Please suggest what I'm missing in the code?
The problem is the stray comma after the first -v assignment. In -v month="$(date --date="6 days ago" +%b)", the trailing comma is not a separator; it becomes part of the value, so month ends up as "Jun," (and in your first version as "Jun", with literal double quotes on top, because the +"\"%b\"" format makes date print the quotes). $6 == month can therefore never be true, which is also why hardcoding "Jun" works. Drop the comma and the extra quotes: ls -l /test/output | awk -v month="$(date --date='6 days ago' +%b)" -v day="$(date --date='6 days ago' +%d)" '$6 == month && $7 == day {print $9}' | wc -l Two caveats: date +%d zero-pads single-digit days ("09") while ls prints them unpadded, so for days below 10 use +%-d with GNU date or compare numerically ($7 == day+0); and parsing ls output is fragile in general. With GNU find, something like find /test/output -maxdepth 1 -newermt '7 days ago' ! -newermt '6 days ago' | wc -l counts the files modified in that 24-hour window without parsing ls at all.
{ "source": [ "https://unix.stackexchange.com/questions/375392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234425/" ] }
375,600
I just installed Linux kernel version 4.12 on Ubuntu 17.04 using ukuu (Ubuntu Kernel Update Utility https://doc.ubuntu-fr.org/ubuntu_kernel_upgrade_utility ). The thing is, when I check the available I/O schedulers, I can't seem to find the BFQ nor the Kyber I/O scheduler : cat /sys/class/block/sda/queue/scheduler > noop deadline [cfq] So how to use one of the new schedulers in this Linux version ?
I'm not in Ubuntu, but what I did in Fedora may help you. BFQ is a blk-mq (Multi-Queue Block IO Queueing Mechanism) scheduler, so you need to enable blk-mq at boot time, edit your /etc/default/grub file and add scsi_mod.use_blk_mq=1 to your GRUB_CMDLINE_LINUX , this is my grub file, as an example: GRUB_TIMEOUT=3 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=false GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="quiet vt.global_cursor_default=0 scsi_mod.use_blk_mq=1" GRUB_DISABLE_RECOVERY="true" After that, you must update your grub. On Fedora we have to use sudo grub2-mkconfig -o /path/to/grub.cfg , which varies depending on the boot method . On Ubuntu, you can simply run: sudo update-grub Reboot, and if you get this: cat /sys/block/sda/queue/scheduler [mq-deadline] none Probably your kernel was compiled with BFQ as a module , and this can be the case also for Kyber. sudo modprobe bfq sudo cat /sys/block/sda/queue/scheduler [mq-deadline] bfq none You can add it at boot time by adding a /etc/modules-load.d/bfq.conf file containing bfq . It is important to note that enabling blk_mq turn it impossible to use non blk_mq schedulers, so you will lose noop cfq and the non mq deadline Apparently blk_mq scheduling system is not supporting elevator flags in grub, udev rules can be used instead, with a bonus of offering a more grained control. Create /etc/udev/rules.d/60-scheduler.rules if it did not exist and add: ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="bfq" As pointed here if needed you can distinguish between rotational (HDDs) and non-rotational (SSDs) devices in udev rules using the attribute ATTR{queue/rotational} . Be aware that Paolo Valente, BFQ developer, pointed in LinuxCon Europe that BFQ can be a better choice than the noop or deadline schedulers in terms of low latency guaranties, what makes a good advice to use it for SSDs too. Paolo's comparison: https://www.youtube.com/watch?v=1cjZeaCXIyM&feature=youtu.be Save it, and reload and trigger udev rules : sudo udevadm control --reload sudo udevadm trigger
{ "source": [ "https://unix.stackexchange.com/questions/375600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207157/" ] }
375,608
I'm setting up a crontab server to run several jobs to copy files from prod servers to lower environment servers. I need the cron server job to copy files from one server to another. Here is what I have. the ip's have been modified ssh -v -R localhost:50000:1.0.0.2:22 -i host1key.pem [email protected] 'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000" -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt' I'm using two different pem keys and users. I would think this command would work but I get this error in the debug log. Here is more to it and only show the portion that is erroring. It connects to [email protected] successfully. But errors on the 1.0.0.2 : debug1: connect_next: host 1.0.0.2 ([1.0.0.2]:22) in progress, fd=7 debug1: channel 1: new [127.0.0.1] debug1: confirm forwarded-tcpip debug1: channel 1: connected to 1.0.0.2 port 22 Host key verification failed. debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6] debug1: channel 0: free: client-session, nchannels 2 debug1: channel 1: free: 127.0.0.1, nchannels 1 Transferred: sent 5296, received 4736 bytes, in 0.9 seconds Bytes per second: sent 5901.2, received 5277.2 debug1: Exit status 12
The failing part is the inner ssh that rsync spawns on 1.0.0.1: "Host key verification failed" means the known_hosts file of the user running rsync there has no entry for [localhost]:50000, and since the command runs without a TTY, ssh cannot prompt you to accept the new key, so it aborts; rsync then reports "connection unexpectedly closed". Either log in to 1.0.0.1 once and run the rsync command interactively so you can accept the host key, or tell the inner ssh how to handle unknown keys, e.g.: ssh -v -R localhost:50000:1.0.0.2:22 -i host1key.pem [email protected] 'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000 -o StrictHostKeyChecking=accept-new" -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt' Note that accept-new needs OpenSSH 7.6 or later; on older versions you can use -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null instead, at the cost of skipping host verification entirely.
{ "source": [ "https://unix.stackexchange.com/questions/375608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239368/" ] }
375,889
Let's say I'm running a script (e.g. in Python). In order to find out how long the program took, one would run time python script1.py Is there a command which keeps track of how much RAM was used as the script was running? In order to find how much RAM is available, one could use free , but this command doesn't fit the task above.
The time(1) command (you may need to install it -perhaps as the time package-, it should be in /usr/bin/time ) accepts many arguments, including a format string (with -f or --format ) which understands (among others) %M Maximum resident set size of the process during its lifetime, in Kbytes. %K Average total (data+stack+text) memory use of the process, in Kbytes. Don't confuse the /usr/bin/time command with the time bash builtin . You may need to type the full file path /usr/bin/time (to ask your shell to run the command not the builtin) or type command time or \time (thanks to Toby Speight & to Arrow for their comments). So you might try (RSS being the resident set size ) /usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S user=%U" python script1.py You could also try /usr/bin/time --verbose python script1.py You are asking: how much RAM was used as the script was running? and this shows a misconception from your part. Application programs running on Linux (or any modern multi-process operating system) are using virtual memory , and each process (including the python process running your script) has its own virtual address space . A process don't run directly in physical RAM, but has its own virtual address space (and runs in it), and the kernel implements virtual memory by sophisticated demand-paging using lazy copy-on-write techniques and configures the MMU . The RAM is a physical device and resource used -and managed internally by the kernel- to implement virtual memory (read also about the page cache and about thrashing ). You may want to spend several days understanding more about operating systems . I recommend reading Operating Systems : Three Easy Pieces which is a freely downloadable book. The RAM is used by the entire operating system (not -directly- by individual processes) and the actual pages in RAM for a given process can vary during time (and could be somehow shared with other processes). Hence the RAM consumption of a given process is not well defined since it is constantly changing (you may want its average, or its peak value, etc...), and likewise for the size of its virtual address space. You could also use (especially if your script runs for several seconds) the top(1) utility (perhaps in some other terminal), or ps(1) or pmap(1) -maybe using watch(1) to repeat that ps or pmap command. You could even use directly /proc/ (see proc(5) ...) perhaps as watch cat /proc/$(pidof python)/status or /proc/$(pidof python)/stat or /proc/$(pidof python)/maps etc... But RAM usage (by the kernel for some process) is widely varying with time for a given process (and even its virtual address space is changing, e.g. by calls to mmap(2) and munmap used by ld-linux(8) , dlopen(3) , malloc(3) & free and many other functions needed to your Python interpreter...). You could also use strace(1) to understand the system calls done by Python for your script (so you would understand how it uses mmap & munmap and other syscalls(2) ). You might restrict strace with -e trace=%memory or -e trace=memory to get only memory (i.e. virtual address space) related system calls. BTW, the tracemalloc Python feature could be also useful. I guess that you only care about virtual memory , that is about virtual address space (but not about RAM), used by the Python interpreter to run your Python script. And that is changing during execution of the process. The RSS (or the maximal peak size of the virtual address space) could actually be more useful to know. See also LinuxAteMyRAM .
{ "source": [ "https://unix.stackexchange.com/questions/375889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115891/" ] }
376,075
I have a simple script that I understand most of, it's the find command that's unclear. I've got a lot of documentation but it's not serving to make it much clearer. My thought is that it is working like a for-loop, the currently found file is swapped in for {} and copied to $HOME/$dir_name, but how does the search with -path and -prune -o work? It's annoying to have such specific and relevant documentation and still not know what's going on. #!/bin/bash # The files will be search on from the user's home # directory and can only be backed up to a directory # within $HOME read -p "Which file types do you want to backup " file_suffix read -p "Which directory do you want to backup to " dir_name # The next lines creates the directory if it does not exist test -d $HOME/$dir_name || mkdir -m 700 $HOME/$dir_name # The find command will copy files that match the # search criteria ie .sh . The -path, -prune and -o # options are to exclude the backdirectory from the # backup. find $HOME -path $HOME/$dir_name -prune -o \ -name "*$file_suffix" -exec cp {} $HOME/$dir_name/ \; exit 0 This is just the documentation that I know I should be able to figure this out from. -path pattern File name matches shell pattern pattern. The metacharacters do not treat / or . specially; so, for example, find . -path "./sr*sc" will print an entry for a directory called ./src/misc (if one exists). To ignore a whole directory tree, use -prune rather than checking every file in the tree. For example, to skip the directory src/emacs and all files and directories under it, and print the names of the other files found, do something like this: find . -path ./src/emacs -prune -o -print From Findutils manual -- Action: -exec command ; This insecure variant of the -execdir action is specified by POSIX. The main difference is that the command is executed in the directory from which find was invoked, meaning that {} is expanded to a relative path starting with the name of one of the starting directories, rather than just the basename of the matched file. While some implementations of find replace the {} only where it appears on its own in an argument, GNU find replaces {} wherever it appears. And For example, to compare each C header file in or below the current directory with the file /tmp/master: find . -name '*.h' -execdir diff -u '{}' /tmp/master ';'
-path works exactly like -name , but applies the pattern to the entire pathname of the file being examined, instead of to the last component. -prune forbids descending below the found file, in case it was a directory. Putting it all together, the command find $HOME -path $HOME/$dir_name -prune -o -name "*$file_suffix" -exec cp {} $HOME/$dir_name/ \; Starts looking for files in $HOME . If it finds a file matching $HOME/$dir_name it won't go below it ("prunes" the subdirectory). Otherwise ( -o ) if it finds a file matching *$file_suffix copies it into $HOME/$dir_name/ . The idea seems to be make a backup of some of the contents of $HOME in a subdirectory of $HOME . The parts with -prune is obviously necessary in order to avoid making backups of backups...
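As an aside, the same exclusion can be written without -prune, which some find easier to read; a sketch with GNU find: find "$HOME" -name "*$file_suffix" -not -path "$HOME/$dir_name/*" -exec cp {} "$HOME/$dir_name/" \; The practical difference is that -prune stops find from even descending into the backup directory, whereas -not -path still walks it and merely filters the results, so -prune is faster on large trees.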
{ "source": [ "https://unix.stackexchange.com/questions/376075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235424/" ] }
377,920
With the first version of Linux, is the correct version number 0.01 (as seen in Tanenbaum's OS book) or should the first version be written 0.0.1 including the dot?
The correct version is “0.01”, as used in the tarball at the time ( available here ) and in the release notes .
{ "source": [ "https://unix.stackexchange.com/questions/377920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
377,926
I tried different solutions but none of them work. I generated a key with ssh-keygen on a machine. I add this key on a linux server in /root/.ssh/authorized_keys , and it works perfectly. On a second linux server, I did the same. But it doesn't work. I tried different things : Put my key in /root/.ssh/authorized_keys2 Edited /etc/ssh/ssh_config and /etc/ssh/sshd_config (by commenting and uncommenting lines) Gave the right rights (700 to .ssh directory and 600 to authorized_keys files) Nothing seems to work... Still asking me a password. What could be the problem ? Thanks a lot. EDIT : `sshd_config Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key #HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 1024 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 #PermitRootLogin without-password PermitRootLogin no StrictModes yes AllowUsers user RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. 
UsePAM yes Ciphers [email protected],[email protected],aes256-ctr,aes128-ctr MACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 My ssh -vv result OpenSSH_6.7p1 Debian-5+deb8u3, OpenSSL 1.0.1t 3 May 2016 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to IP [IP] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u3 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u3 debug1: match: OpenSSH_6.7p1 Debian-5+deb8u3 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],ssh-ed25519,[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: 
kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-ed25519 debug2: kex_parse_kexinit: [email protected],[email protected],aes256-ctr,aes128-ctr debug2: kex_parse_kexinit: [email protected],[email protected],aes256-ctr,aes128-ctr debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup [email protected] debug1: kex: server->client aes128-ctr [email protected] none debug2: mac_setup: setup [email protected] debug1: kex: client->server aes128-ctr [email protected] none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1580/3072 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: ED25519 67:fd:fd:6e:a5:c1:32:96:9d:33:32:1a:cf:83:94:ea debug1: Host '[IP]:22' is known and matches the ED25519 host key. debug1: Found key in /root/.ssh/known_hosts:1 debug2: bits set: 1546/3072 debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/id_rsa (0x7f84b8ce74a0), debug2: key: /root/.ssh/id_dsa ((nil)), debug2: key: /root/.ssh/id_ecdsa ((nil)), debug2: key: /root/.ssh/id_ed25519 ((nil)), debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /root/.ssh/id_dsa debug1: Trying private key: /root/.ssh/id_ecdsa debug1: Trying private key: /root/.ssh/id_ed25519 debug2: we did not send a packet, disable method user@IP's password: /var/log/auth.log (on the distant server) Jul 12 12:24:43 ns3111463 sshd[12971]: input_userauth_request: invalid user root [preauth] Jul 12 12:24:44 ns3111463 sshd[12971]: Connection closed by IP [preauth] Jul 12 12:24:46 ns3111463 sshd[12973]: Connection closed by IP [preauth] Jul 12 12:24:48 ns3111463 sshd[11787]: Received signal 15; terminating. Jul 12 12:24:48 ns3111463 sshd[12977]: Server listening on 0.0.0.0 port 22. Jul 12 12:24:48 ns3111463 sshd[12977]: Server listening on :: port 22. Jul 12 12:24:50 ns3111463 sshd[12978]: Connection closed by IP [preauth] Jul 12 12:24:51 ns3111463 sshd[12980]: User root from IP not allowed because not listed in AllowUsers Jul 12 12:24:51 ns3111463 sshd[12980]: input_userauth_request: invalid user root [preauth] Jul 12 12:24:51 ns3111463 sshd[12980]: Connection closed by IP [preauth] Jul 12 12:25:49 ns3111463 sshd[12977]: Received signal 15; terminating. Jul 12 12:25:49 ns3111463 sshd[13029]: Server listening on 0.0.0.0 port 22. 
Jul 12 12:25:49 ns3111463 sshd[13029]: Server listening on :: port 22. Jul 12 12:25:50 ns3111463 sshd[13030]: User root from IP not allowed because not listed in AllowUsers Jul 12 12:25:50 ns3111463 sshd[13030]: input_userauth_request: invalid user root [preauth] Jul 12 12:25:51 ns3111463 sshd[13030]: Connection closed by IP [preauth] Jul 12 12:25:53 ns3111463 sshd[13032]: Connection closed by IP [preauth] Jul 12 12:37:41 ns3111463 sshd[13552]: Connection closed by IP [preauth] Jul 12 12:38:09 ns3111463 sshd[13598]: Connection closed by IP [preauth]
Your own logs point at two problems. First, this sshd_config has PermitRootLogin no and AllowUsers user, and /var/log/auth.log confirms it ("User root from IP not allowed because not listed in AllowUsers"), so a key placed in /root/.ssh/authorized_keys on this server can never be used. Put the public key in the authorized_keys of the account you actually log in as, i.e. /home/user/.ssh/authorized_keys. Second, because StrictModes yes is set, sshd silently ignores that file unless the permissions are strict for that user: chown -R user:user /home/user/.ssh chmod 700 /home/user/.ssh chmod 600 /home/user/.ssh/authorized_keys and make sure /home/user itself is not group- or world-writable. After both fixes, ssh user@server should use the key; if it still falls back to a password, watch /var/log/auth.log on the server for the exact reason the key is rejected.
{ "source": [ "https://unix.stackexchange.com/questions/377926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240314/" ] }
377,931
I'd appreciate any help with the query below; a bash script will be required. I am new to shell scripting. I have the following file at some location, say with the filename MemberFile.txt: # [ID ] #1 [ADDRE1 ] Address Line #1 [ADDRE2 ] Mumbai City [ADDRE3 ] India # [ID ] #2 [ADDRE1 ] House No 2 [ADDRE3 ] Green Society [ADDRE4 ] Kolkatta # [ID ] #3 [ADDRE1 ] Plot Num 77 [ADDRE2 ] House No # [567] [ADDRE3 ] greener Apt # The file can have millions of such records. I want to iterate quickly through each record and get and store the value of [ADDRE3 ] . I also need to check whether the record contains either of the words 'society' or 'Num' (case insensitive). If yes, then get the value of the [ID ] tag in that record. The expected output is #2 and #3. Please note that the following represents one record: [ID ] #1 [ADDRE1 ] Address Line #1 [ADDRE2 ] Mumbai City [ADDRE3 ] India
One way is to let awk accumulate each block (the lines between the # separator lines) and test the whole block when it ends; this assumes records are separated by lines containing only #, as in your sample. With GNU awk, which provides IGNORECASE and the \< \> word boundaries: gawk -v IGNORECASE=1 '/^#[[:blank:]]*$/ { if (rec ~ /\<(society|num)\>/) print id; rec = ""; next } { rec = rec "\n" $0; if ($1 == "[ID") id = $NF } END { if (rec ~ /\<(society|num)\>/) print id }' MemberFile.txt For your sample this prints #2 and #3. The [ADDRE3 ] value of the current record can be captured the same way inside the second block, e.g. if ($1 == "[ADDRE3") { a3 = $0; sub(/^\[ADDRE3[[:blank:]]*\][[:blank:]]*/, "", a3) }. Since it reads the file in a single pass, it scales to millions of records far better than a bash read loop would.
{ "source": [ "https://unix.stackexchange.com/questions/377931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240104/" ] }
377,979
Let's take a simple for loop #!/bin/bash for i in `seq 1 10`; do echo $i done AFAIK the semicolon in bash scripts makes the shell execute the current command synchronously and then go to the next one. Pressing Enter does literally the same, except it doesn't allow you to enter the following command, flushing the buffer immediately. So why can't the shell interpret the following line? for i in `seq 1 10`; do; echo $i; done How does this for loop actually work?
The syntax of a for loop from the bash manual page is for name [ [ in [ word ... ] ] ; ] do list ; done The semicolons may be replaced with carriage returns, as noted elsewhere in the bash manual page: "A sequence of one or more newlines may appear in a list instead of a semicolon to delimit commands." However, the reverse is not true; you cannot arbitrarily replace newlines with semicolons. Your multiline script can be converted to a single line as long as you observe the above syntax rules and do not insert an extra semicolon after the do : for i in `seq 1 10`; do echo $i; done
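For completeness: the command substitution isn't required either, since brace expansion covers this case: for i in {1..10}; do echo "$i"; done This is a bash/ksh/zsh feature rather than POSIX sh, and unlike seq the endpoints must be literal; brace expansion happens before variable expansion, so {1..$n} does not work.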
{ "source": [ "https://unix.stackexchange.com/questions/377979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176344/" ] }
377,986
We are currently testing a new application on Linux that previously worked on Windows. It appears there were some hard coded paths in the program and a bunch of test files with long filenames with backslashes were created. For example: directory\subdirectory\subdirectory\subdirectory\filename.ext I need to take these files, create the directory they were supposed to be in and move them there. So, for example, the file above should actually be: directory/subdirectory/subdirectory/subdirectory/filename.ext How can I do this?
Those backslashes are ordinary characters in the file names, so you need to turn each name into a real path, create the directory, and move the file. A bash sketch, assuming the files sit in the current directory (test it on a copy first): for f in *\\*; do new=${f//\\//}; mkdir -p -- "${new%/*}"; mv -- "$f" "$new"; done The glob *\\* matches only names containing a backslash, ${f//\\//} replaces every backslash with a slash, ${new%/*} strips the file name to leave the directory part for mkdir -p, and the -- guards protect against names beginning with a dash.
{ "source": [ "https://unix.stackexchange.com/questions/377986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240372/" ] }
378,016
When looking for matches with grep , I often notice that the subsequent search takes significantly less time than the first-- e.g. 25s vs. 2s. Obviously, it's not by reusing the data structures from its last run-- those should've been deallocated. Running a time command on grep , I noticed an interesting phenomenon: real 24m36.561s user 1m20.080s sys 0m7.230s Where does the rest of the time go? Is there anything I can do to make it run fast every time? (e.g. having another process read the files, before grep searches them.)
It is quite often related to the page cache . The first time, the data has to be read (physically) from the disk. The second time (for not too big files) it is likely to be sitting in the page cache. So you could issue first a command like cat(1) to bring the (not too big) file into the page cache (i.e. in RAM), then a second grep(1) (or any program reading the file) would generally run faster. (however, the data still needs to be read from the disk at some time) See also (sometimes useful in your application programs, but practically rarely) readahead(2) & posix_fadvise(2) and perhaps madvise(2) & sync(2) & fsync(2) etc.... Read also LinuxAteMyRAM . BTW, this is why it is recommended, when benchmarking a program, to run it several times. Also, this is why it could be useful to buy more RAM (even if you don't run programs using all of it for their data). If you want to understand more, read some book like e.g. Operating Systems : Three Easy Pieces
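You can observe this directly; a rough demonstration, assuming files big enough to notice: run time grep -r pattern somedir/ twice and compare, or pre-warm the cache with cat bigfile > /dev/null before the first grep and the difference disappears. For benchmarking cold-cache behaviour on Linux you can drop the page cache with sync && echo 3 | sudo tee /proc/sys/vm/drop_caches (root only, and expect everything to be briefly slower afterwards).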
{ "source": [ "https://unix.stackexchange.com/questions/378016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145784/" ] }
378,148
It is well-known that empty text files have zero bytes: However, each of them contains metadata , which according to my research, is stored in inodes , and do use space . Given this, it seems logical to me that it is possible to fill a disk by purely creating empty text files. Is this correct? If so, how many empty text files would I need to fill in a disk of, say, 1GB? To do some checks, I run df -i but this apparently shows the % of inodes being used(?) rather than how much they weigh. Filesystem Inodes IUsed IFree IUse% Mounted on udev 947470 556 946914 1% /dev tmpfs 952593 805 951788 1% /run /dev/sda2 28786688 667980 28118708 3% / tmpfs 952593 25 952568 1% /dev/shm tmpfs 952593 5 952588 1% /run/lock tmpfs 952593 16 952577 1% /sys/fs/cgroup /dev/sda1 0 0 0 - /boot/efi tmpfs 952593 25 952568 1% /run/user/1000 /home/lucho/.Private 28786688 667980 28118708 3% /home/lucho
This output suggests 28786688 inodes overall, after which the next attempt to create a file in the root filesystem (device /dev/sda2 ) will return ENOSPC ("No space left on device"). Explanation: on the original *nix filesystem design, the maximum number of inodes is set at filesystem creation time. Dedicated space is allocated for them. You can run out of inodes before you run out of space for data, or vice versa. The most common default Linux filesystem ext4 still has this limitation. For information about inode sizes on ext4, look at the manpage for mkfs.ext4. Linux supports other filesystems without this limitation. On btrfs , space is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes ). Of course you can still run out of disk space by creating too much metadata / directory entries. Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown. "XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule. "BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h " – Peter Cordes in a comment to this answer
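If you want to watch this happen without touching a real filesystem, a small loopback ext4 image works; a sketch (paths arbitrary, mount and touch run as root): dd if=/dev/zero of=/tmp/img bs=1M count=64; mkfs.ext4 /tmp/img; sudo mount -o loop /tmp/img /mnt; i=0; while sudo touch /mnt/f$i; do i=$((i+1)); done The loop eventually fails with "No space left on device" while df -h /mnt still shows free blocks and df -i /mnt shows IUse% at 100%.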
{ "source": [ "https://unix.stackexchange.com/questions/378148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192321/" ] }
378,301
I want to launch the wine executable (Version 2.12), but I get the following error ( $ =shell prompt): $ wine bash: /usr/bin/wine: No such file or directory $ /usr/bin/wine bash: /usr/bin/wine: No such file or directory $ cd /usr/bin $ ./wine bash: ./wine: No such file or directory However, the file is there: $ which wine /usr/bin/wine The executable definitely is there and no dead symlink: $ stat /usr/bin/wine File: /usr/bin/wine Size: 9712 Blocks: 24 IO Block: 4096 regular file Device: 802h/2050d Inode: 415789 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2017-07-13 13:53:00.000000000 +0200 Modify: 2017-07-08 03:42:45.000000000 +0200 Change: 2017-07-13 13:53:00.817346043 +0200 Birth: - It is a 32-bit ELF: $ file /usr/bin/wine /usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped I can get the dynamic section of the executable: $ readelf -d /usr/bin/wine Dynamic section at offset 0x1efc contains 27 entries: Tag Type Name/Value 0x00000001 (NEEDED) Shared library: [libwine.so.1] 0x00000001 (NEEDED) Shared library: [libpthread.so.0] 0x00000001 (NEEDED) Shared library: [libc.so.6] 0x0000001d (RUNPATH) Library runpath: [$ORIGIN/../lib32] 0x0000000c (INIT) 0x7c000854 0x0000000d (FINI) 0x7c000e54 [more addresses without file names] However, I cannot list the shared object dependencies using ldd : $ ldd /usr/bin/wine /usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory strace shows: execve("/usr/bin/wine", ["wine"], 0x7fff20dc8730 /* 66 vars */) = -1 ENOENT (No such file or directory) fstat(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0 write(2, "strace: exec: No such file or di"..., 40strace: exec: No such file or directory ) = 40 getpid() = 23783 exit_group(1) = ? +++ exited with 1 +++ Edited to add suggestion by @jww : The problem appears to happen before dynamically linked libraries are requested, because no ld debug messages are generated: $ LD_DEBUG=all wine bash: /usr/bin/wine: No such file or directory Even when only printing the possible values of LD_DEBUG , the error occurs instead $ LD_DEBUG=help wine bash: /usr/bin/wine: No such file or directory Edited to add suggestion of @Raman Sailopal: The problem seems to lie within the executable, as copying the contents of /usr/bin/wine to another already created file produces the same error root:bin # cp cat testcmd root:bin # testcmd --help Usage: testcmd [OPTION]... [FILE]... Concatenate FILE(s) to standard output. [rest of cat help page] root:bin # dd if=wine of=testcmd 18+1 records in 18+1 records out 9712 bytes (9.7 kB, 9.5 KiB) copied, 0.000404061 s, 24.0 MB/s root:bin # testcmd bash: /usr/bin/testcmd: No such file or directory What is the problem or what can I do to find out which file or directory is missing? uname -a : Linux laptop 4.11.3-1-ARCH #1 SMP PREEMPT Sun May 28 10:40:17 CEST 2017 x86_64 GNU/Linux
This: $ file /usr/bin/wine /usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped Combined with this: $ ldd /usr/bin/wine /usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory Strongly suggests that the system does not have the /lib/ld-linux.so.2 ELF interpreter. That is, this 64-bit system does not have any 32-bit compatibility libraries installed. Thus, @user1334609's answer is essentially correct.
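You can confirm this directly by checking for the interpreter: ls -l /lib/ld-linux.so.2 If it is missing, install the 32-bit C library. On the asker's Arch system, that means enabling the [multilib] repository in /etc/pacman.conf and reinstalling wine from there (which pulls in lib32-glibc and the other 32-bit dependencies); on Debian/Ubuntu the equivalent is dpkg --add-architecture i386, apt update, and installing the needed :i386 packages.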
{ "source": [ "https://unix.stackexchange.com/questions/378301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54138/" ] }
378,307
I know about pwdx , but it would be really nice if top showed the PWD: in other words, I want to see PWD and CPU/mem usage side by side. Does anyone have a script or one-liner to combine the output from top / ps with pwdx with periodic refresh?
top can't show it, but a process's working directory is just the /proc/PID/cwd symlink (which is what pwdx reads), so you can join it onto ps output and wrap the whole thing in watch for periodic refresh; a rough sketch: watch 'ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 15 | while read -r pid rest; do printf "%-8s %-30s %s\n" "$pid" "$rest" "$(readlink /proc/$pid/cwd 2>/dev/null)"; done' This lists the top CPU consumers with their current directory in the last column. Note that you can only read /proc/PID/cwd for your own processes unless you run it as root, and the ps header line simply prints with an empty directory column.
{ "source": [ "https://unix.stackexchange.com/questions/378307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81987/" ] }
378,310
I made some Live USBs using both Tuxboot and USB Image Writer but I can't boot any of them. I've tried most of them on other computers and they all work. When I press ESC while booting I'm taken to the grub menu, but only my main system appears there (Linux Mint 18.1) and an option to go to the BIOS config. If I go to the BIOS and change the boot order so that EFI Flash goes first the system reboots but boots normally on my main OS. The next time I access the BIOS the order is returned to the original settings. A few notes: I have a samsung NP900 laptop with SSD I'm using the linux kernel v4.10 on a Mint 18.1 I have secure boot set as custom, and I haven't tried without it because I'm worried by system won't boot The output of efibootmgr is BootCurrent: 0006 Timeout: 0 seconds BootOrder: 0006,0005,0004,0002,0000 Boot0000* Windows Boot Manager Boot0002* Windows Boot Manager Boot0004* UEFI: Generic Flash Disk 5.00 Boot0005* UEFI: Generic Flash Disk 5.00 Boot0006* ubuntu Those two Windows entries are there, but I don't have any Windows installed. I changed the boot order with sudo efibootmgr -o 0004,0005,0006,0000,0002 and rebooted but again, system booted into main OS. And after I checked boot order again it was set as 0006,0005,0004,0002,0000 , which is not what it was when I restarted it.
Given the symptoms (the USB entries exist in efibootmgr but get skipped, and both the firmware boot order and the efibootmgr -o change silently revert), the firmware is refusing the USB's boot loader, falling back to the installed system, and then restoring its own idea of the boot order. The most likely suspect in your description is Secure Boot in "custom" mode: a custom key setup can reject the shim/grub binaries on live USBs even though the same sticks boot on other machines. Disabling Secure Boot is safe for your installed Mint (its shim and grub boot with or without it), so try turning it off, then select the flash disk from the firmware's one-time boot menu rather than editing the permanent order, since this firmware evidently rewrites that. If it still refuses, re-create the stick with a plain dd of the ISO (the Mint and Ubuntu images are hybrid) to rule out a malformed EFI system partition on the stick.
{ "source": [ "https://unix.stackexchange.com/questions/378310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121808/" ] }
378,373
I want to create a dummy, virtual output on my Xorg server on current Intel iGPU (on Ubuntu 16.04.2 HWE, with Xorg server version 1.18.4). It is the similiar to Linux Mint 18.2, which one of the xrandr output shows the following: Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767 ... eDP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 0mm x 0mm ... VIRTUAL1 disconnected (normal left inverted right x axis y axis) ... In the Linux Mint 18.2, I can turn off the built-in display ( eDP1 ) and turn on the VIRTUAL1 display with any arbitrary mode supported by the X server, attach x11vnc to my main display and I'll get a GPU accelerated remote desktop. But in Ubuntu 16.04.2, that's not the case. The VIRTUAL* display doesn't exist at all from xrandr . Also, FYI, xrandr's output names is a little bit different on Ubuntu 16.04.2, where every number is prefixed with a - . E.g. eDP1 in Linux Mint becomes eDP-1 in Ubuntu, HDMI1 becomes HDMI-1 , and so on. So, how to add the virtual output in Xorg/xrandr? And how come Linux Mint 18.2 and Ubuntu 16.04.2 (which I believe uses the exact same Xorg server, since LM 18.2 is based on Ubuntu, right?) can have a very different xrandr configurations? Using xserver-xorg-video-dummy is not an option, because the virtual output won't be accelerated by GPU.
Create a 20-intel.conf file: sudo vi /usr/share/X11/xorg.conf.d/20-intel.conf Add the following configuration information into the file: Section "Device" Identifier "intelgpu0" Driver "intel" Option "VirtualHeads" "2" EndSection This tells the Intel GPU to create 2 virtual displays. You can change the number of VirtualHeads to your needs. Then logout and login. You should see VIRTUAL1 and VIRTUAL2 when you run xrandr . Note if you were using the modesetting driver previously (which is the modern default) switching to the intel driver will cause the names of displays to change from, eg, HDMI-1 or DP-1 to HDMI1 or DP1 .
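Once the virtual head exists, you still need to give it a mode and enable it; a sketch for a 1920x1080 head (the names here are examples, use what xrandr reports on your machine): cvt 1920 1080 prints a modeline, then xrandr --newmode "1920x1080_60.00" <the numbers from cvt> xrandr --addmode VIRTUAL1 1920x1080_60.00 xrandr --output VIRTUAL1 --mode 1920x1080_60.00 --right-of eDP1 After that, x11vnc can be attached to the region the virtual output occupies (its -clip option takes a WxH+X+Y geometry).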
{ "source": [ "https://unix.stackexchange.com/questions/378373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240651/" ] }
378,990
I want to replace /var/www with /home/lokesh/www in a file using the sed command sed -i 's///var//www///home//lokesh//www/g' lks.php but this gives the error sed: couldn't open file ww///home//lokesh//www/g: No such file or directory
Not sure if you know, but sed has a great feature where you do not need to use a / as the separator. So, your example could be written as: sed -i 's#/var/www#/home/lokesh/www#g' lks.php It does not need to be a # either, it could be any single character. For example, using a 3 as the separator: echo "foo" | sed 's3foo3bar3g' bar
{ "source": [ "https://unix.stackexchange.com/questions/378990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77823/" ] }
379,181
This question is not about how to write a properly escaped string literal. I couldn't find any related question that isn't about how to escape variables for direct consumption within a script or by other programs. My goal is to enable a script to generate other scripts. This is because the tasks in the generated scripts will run anywhere from 0 to n times on another machine, and the data from which they are generated may change before they're run (again), so doing the operations directly, over a network will not work. Given a known variable that may contain special characters such as single quotes, I need to write that out as a fully escaped string literal, e.g. a variable foo containing bar'baz should appear in the generated script as: qux='bar'\''baz' which would be written by appending "qux=$foo_esc" to the other lines of script. I did it using Perl like this: foo_esc="'`perl -pe 's/('\'')/\\1\\\\\\1\\1/g' <<<"$foo"`'" but this seems like overkill. I have had no success in doing it with bash alone. I have tried many variations of these: foo_esc="'${file//\'/\'\\\'\'}'" foo_esc="'${file//\'/'\\''}'" but either extra slashes appear in the output (when I do echo "$foo" ), or they cause a syntax error (expecting further input if done from the shell).
Bash has a parameter expansion option for exactly this case : ${parameter@Q} The expansion is a string that is the value of parameter quoted in a format that can be reused as input. So in this case: foo_esc="${foo@Q}" This is supported in Bash 4.4 and up. There are several options for other forms of expansion as well, and for specifically generating complete assignment statements ( @A ).
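For example: foo="bar'baz"; qux_assignment="qux=${foo@Q}" produces the string qux='bar'\''baz' (the exact quoting style may vary, since bash switches to the $'…' form for strings containing control characters, but the result is always safe to reuse as shell input). On bash older than 4.4, printf -v foo_esc '%q' "$foo" achieves much the same thing.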
{ "source": [ "https://unix.stackexchange.com/questions/379181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161288/" ] }
379,272
When I do git push I get a prompt like Username for 'https://github.com': so I enter my username manually, like Username for 'https://github.com': myusername and then I hit Enter and get prompted for my password: Password for 'https://[email protected]': I want the username to be filled in automatically instead of having to type it every time. I tried doing it with xdotool but it didn't work out. I have already done git config --global user.name myusername git config --global user.email [email protected] but it still always asks me to type it manually.
In Terminal, enter the following to enable credential memory: $ git config --global credential.helper cache You may update the default password cache timeout (in seconds): # This cache timeout is in seconds $ git config --global credential.helper 'cache --timeout=3600' You may also use (but please use the single quotes, else double quotes may break for some characters): $ git config --global user.name 'your user name' $ git config --global user.password 'your password'
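If you'd rather not re-enter the password whenever the cache expires, the store helper saves it permanently: git config --global credential.helper store Be aware that it writes the credentials in plain text to ~/.git-credentials, so on shared machines the cache helper (or a platform keychain helper such as libsecret) is the safer choice.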
{ "source": [ "https://unix.stackexchange.com/questions/379272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224025/" ] }
379,464
I have this shell script saved in a file: it does some basic string substitution. #!/bin/sh html_file=$1 echo "html_file = $html_file" substr=.pdf pdf_file="${html_file/.html/.pdf}" echo "pdf_file = $pdf_file" If I paste it into the command line, it works fine: $ html_file="/home/max/for_pauld/test_no_base64.html" echo "html_file = $html_file" substr=.pdf pdf_file="${html_file/.html/.pdf}" echo "pdf_file = $pdf_file" gives html_file = /home/max/for_pauld/test_no_base64.html pdf_file = /home/max/for_pauld/test_no_base64.pdf That's the output from echo above - it's working as intended. But, when I call the script, with $ saucer "/home/max/for_pauld/test_no_base64.html" I get this output: html_file = /home/max/for_pauld/test_no_base64.html /home/max/bin/saucer: 5: /home/max/bin/saucer: Bad substitution Is my script using a different version of bash or something? Do I need to change my shebang line?
What is sh sh (or the Shell Command Language) is a programming language described by the POSIX standard . It has many implementations ( ksh88 , dash , ...). bash can also be considered an implementation of sh (see below). Because sh is a specification, not an implementation, /bin/sh is a symlink (or a hard link) to an actual implementation on most POSIX systems. What is bash bash started as an sh -compatible implementation (although it predates the POSIX standard by a few years), but as time passed it has acquired many extensions. Many of these extensions may change the behavior of valid POSIX shell scripts, so by itself bash is not a valid POSIX shell. Rather, it is a dialect of the POSIX shell language. bash supports a --posix switch, which makes it more POSIX-compliant. It also tries to mimic POSIX if invoked as sh . sh = bash? For a long time, /bin/sh used to point to /bin/bash on most GNU/Linux systems. As a result, it had almost become safe to ignore the difference between the two. But that started to change recently. Some popular examples of systems where /bin/sh does not point to /bin/bash (and on some of which /bin/bash may not even exist) are: Modern Debian and Ubuntu systems, which symlink sh to dash by default; Busybox , which is usually run during the Linux system boot time as part of initramfs . It uses the ash shell implementation. BSDs, and in general any non-Linux systems. OpenBSD uses pdksh , a descendant of the Korn shell. FreeBSD's sh is a descendant of the original UNIX Bourne shell. Solaris has its own sh which for a long time was not POSIX-compliant; a free implementation is available from the Heirloom project . How can you find out what /bin/sh points to on your system? The complication is that /bin/sh could be a symbolic link or a hard link. If it's a symbolic link, a portable way to resolve it is: % file -h /bin/sh /bin/sh: symbolic link to bash If it's a hard link, try % find -L /bin -samefile /bin/sh /bin/sh /bin/bash In fact, the -L flag covers both symlinks and hardlinks, but the disadvantage of this method is that it is not portable — POSIX does not require find to support the -samefile option, although both GNU find and FreeBSD find support it. Shebang line Ultimately, it's up to you to decide which one to use, by writing the «shebang» line. E.g. #!/bin/sh will use sh (and whatever that happens to point to), #!/bin/bash will use /bin/bash if it's available (and fail with an error message if it's not). Of course, you can also specify another implementation, e.g. #!/bin/dash Which one to use For my own scripts, I prefer sh for the following reasons: it is standardized it is much simpler and easier to learn it is portable across POSIX systems — even if they happen not to have bash , they are required to have sh There are advantages to using bash as well. Its features make programming more convenient and similar to programming in other modern programming languages. These include things like scoped local variables and arrays. Plain sh is a very minimalistic programming language.
{ "source": [ "https://unix.stackexchange.com/questions/379464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27368/" ] }
379,572
I have the following string: /tmp/test/folder1/test.txt I wish to use sed to substitute / for \/ - for example: \/tmp\/test\/folder1\/test.txt So I issue: echo "/tmp/test/folder1/test.txt" | sed "s/\//\\\\//g" Although it returns: sed: -e expression #1, char 9: unknown option to `s' I am escaping the forward slash and backslash - so not sure where I have gone wrong here
You need to escape (with backslash \ ) all substituted slashes / and all backslashes \ separately, so: $ echo "/tmp/test/folder1/test.txt" | sed 's/\//\\\//g' \/tmp\/test\/folder1\/test.txt but that's rather unreadable. However, sed allows you to use almost any character as a separator instead of / ; this is especially useful when you want to substitute the slash / itself, as in your case. Using, for example, a semicolon ; as the separator makes the command much simpler: echo "/tmp/test/folder1/test.txt" | sed 's;/;\\/;g' Other cases: If you want to stick with slash as the separator and use double quotes, then all escaped backslashes have to be escaped one more time to preserve their literal values: echo "/tmp/test/folder1/test.txt" | sed "s/\//\\\\\//g" If you don't want quotes at all, then yet another backslash is needed: echo "/tmp/test/folder1/test.txt" | sed s/\\//\\\\\\//g
{ "source": [ "https://unix.stackexchange.com/questions/379572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241485/" ] }
379,725
The locate program of findutils scans one or more databases of filenames and displays any matches. This can be used as a very fast find command if the file was present during the last file name database update. There are many kinds of databases nowadays: relational databases (with a query language, e.g. SQL), and NoSQL databases such as document-oriented databases (e.g. MongoDB), key-value databases (e.g. Redis), column-oriented databases (e.g. Cassandra) and graph databases. So what kind of database does updatedb update and locate use? Thanks.
Implementations of locate / updatedb typically use specific databases tailored to their requirements, rather than a generic database engine. You’ll find those specific databases documented by each implementation; for example: GNU findutils ’ is documented in locatedb(5) , and is pretty much just a list of files (with a specific compression algorithm); mlocate ’s is documented in mlocate.db(5) , and can also be considered a list of directories and files (with metadata).
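As an illustration of how implementation-specific these databases are, with mlocate you can build and query a private database of your own (the file name home.db here is just an example):
updatedb -l 0 -o ~/home.db -U ~    # index your home directory into ~/home.db
locate -d ~/home.db notes.txt      # query that database instead of the system one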
{ "source": [ "https://unix.stackexchange.com/questions/379725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
379,774
I had a good running installation of Debian Jessie, but then I ran apt-get update && apt-get upgrade && apt-get dist-upgrade . And then after rebooting, it came directly to the BIOS. I realized that Grub was missing, so I ran a live cd and entered Rescue mode , mounted my root partition, + the boot partition and ran these commands: Grub finds the linux image: root@debian:~# update-grub Generating grub configuration file ... Found background image: /usr/share/images/desktop-base/desktop-grub.png Found linux image: /boot/vmlinuz-4.9.0-3-amd64 Found initrd image: /boot/initrd.img-4.9.0-3-amd64 Found linux image: /boot/vmlinuz-4.9.0-0.bpo.3-amd64 Found initrd image: /boot/initrd.img-4.9.0-0.bpo.3-amd64 Found linux image: /boot/vmlinuz-3.16.0-4-amd64 Found initrd image: /boot/initrd.img-3.16.0-4-amd64 Found Ubuntu 16.10 (16.10) on /dev/sdb2 Adding boot menu entry for EFI firmware configuration done And then grub-install : root@debian:~# grub-install /dev/sda Installing for x86_64-efi platform. Could not prepare Boot variable: No such file or directory grub-install: error: efibootmgr failed to register the boot entry: Input/output error. lsblk : root@debian:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 223.6G 0 disk ├─sda1 8:1 0 92.6G 0 part / ├─sda2 8:2 0 130.4G 0 part └─sda3 8:3 0 573M 0 part /boot/efi Did I do something wrong? Is there too little space on my /boot/efi partition? root@debian:~# ls -l /boot/efi/EFI/debian/ total 120 -rwx------ 1 root root 121856 Jul 20 20:29 grubx64.efi efibootmgr doesn't show a Debian installation: root@debian:~# efibootmgr --verbose | grep debian Edit : I keep getting this error every time I try and create a boot loader using efibootmgr : grub-install: info: executing efibootmgr -c -d /dev/sda -p 3 -w -L grub -l \EFI\grub\grubx64.efi. Could not prepare Boot variable: No such file or directory grub-install: error: efibootmgr failed to register the boot entry: Input/output error.
Fixed the efibootmgr errors by mounting the EFI variables filesystem, which efibootmgr needs in order to read and write the boot variables: # mount -t efivarfs efivarfs /sys/firmware/efi/efivars And then efibootmgr gave me errors about space: Could not prepare Boot variable: No space left on device Fixed that by deleting dump files: # rm /sys/firmware/efi/efivars/dump-* And then ran the usual update-grub grub-install -v --target=x86_64-efi --recheck /dev/sda and it ran successfully!
{ "source": [ "https://unix.stackexchange.com/questions/379774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237568/" ] }
379,982
I'm using tail -f a.txt to watch updates on a file called a.txt . If I update the file using something like ls -a >> a.txt in a second virtual console, the changes will display in real-time in the first one. If I update the file using Vim in a second virtual console, the changes will not display in the first one. I don't necessarily expect it to trigger an update in that window - but why exactly doesn't this update the terminal running the tail -f command?
If you edit a file with vim , typically it reads the file into memory, then writes a new file. So tail is now operating on an out-of-date copy of the file (which remains in the file system until tail, and any other program using it, stops using it). You can make tail follow the filename (rather than the file) by using: tail -F yourfile Note the upper case F .
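One way to convince yourself of this is to watch the inode number change when Vim writes the file (whether Vim really replaces the file depends on its 'backupcopy' option, so this is a typical rather than guaranteed result; the inode numbers are illustrative):
$ ls -i a.txt
393220 a.txt
$ vim a.txt          # change something, then :wq
$ ls -i a.txt
393221 a.txt         # new inode; tail -f is still reading the old, unlinked one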
{ "source": [ "https://unix.stackexchange.com/questions/379982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237109/" ] }
381,113
If I am in a deep directory, let's say: ~/Desktop/Dropbox/School/2017/C/A3/ then when I open up terminal, it says bob@bob-ubuntu:~/Desktop/Dropbox/School/2017/C/A3/$ and then I write my command. That is very long, and every line I write in the terminal goes to the next line. I want to know if there's a way so that it only displays my current directory. I want it to display: bob@bob-ubuntu: A3/$ This way it's much clear, and always I can do pwd to see my entire directory. I just don't want the entire directory visible in terminal because it takes too much space.
You need to modify PS1 in your shell startup file (probably .bashrc ). If it's there already, its setting will contain \w , which is what gives your working directory. Change that to \W (upper case). The line in the .bashrc file looks like this: if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[00m\]\$ ' Log out and in again, or do: . .bashrc or (add the ~/ prefix if you are in another directory): source ~/.bashrc (or whatever your file is). If it isn't there, add something like: PS1='\u@\h: \W:\$' to .bashrc or whatever. Look up PS1 in the bash manual page to get more ideas. Be careful; bash can use more than one initialisation file, e.g. .bashrc and .bash_profile ; it may be that PS1 is set in a system-wide one. But you can override that in one of your own files.
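If you would rather keep a hint of the leading path instead of only the last component, bash 4.0 and later also offer PROMPT_DIRTRIM, which truncates \w to the last N components:
PS1='\u@\h:\w\$ '
PROMPT_DIRTRIM=1
# the prompt then shows: bob@bob-ubuntu:~/.../A3$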
{ "source": [ "https://unix.stackexchange.com/questions/381113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230247/" ] }
381,230
I have a file in UTF-8 encoding with BOM and want to remove the BOM. Are there any linux command-line tools to remove the BOM from the file? $ file test.xml test.xml: XML 1.0 document, UTF-8 Unicode (with BOM) text, with very long lines
If you're not sure if the file contains a UTF-8 BOM, then this (assuming the GNU implementation of sed ) will remove the BOM if it exists, or make no changes if it doesn't. sed '1s/^\xEF\xBB\xBF//' < orig.txt > new.txt You can also overwrite the existing file with the -i option: sed -i '1s/^\xEF\xBB\xBF//' orig.txt If you are using the BSD version of sed (eg macOS) then you need to have bash do the escaping: sed $'1s/\xef\xbb\xbf//' < orig.txt > new.txt
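To check whether the BOM is actually there before stripping it, you can dump the first three bytes; and once confirmed, a plain byte-offset copy is an alternative that needs no sed at all:
head -c 3 test.xml | od -An -tx1    # prints " ef bb bf" when a UTF-8 BOM is present
tail -c +4 test.xml > new.xml       # copy everything from byte 4 onwards (only safe once the BOM is confirmed)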
{ "source": [ "https://unix.stackexchange.com/questions/381230", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80062/" ] }
381,408
So I'm trying to install Debian 9.0 to my laptop (UEFI) via DVD and everything worked fine except that the mousepad doesn't work (yet?) and that I'm getting this error: Debootstrap error Failed to determine the codename for the release. at the step "install the base system" after the partitioning. Any suggestions for what I should try to get it working? This thread somewhat suggests that some change to my partitioning could solve this issue. I selected "Guided - Use entire disk and set up encrypted LVM". Changing the 2 ext4 partitions to ext3 didn't help. Any help is welcome! Edit : I skipped to step "Check integrity of the CD" and it says The CD of yours is not a valid Debian-CD-ROM. Please change the CD. Note that during installation I connected to the Internet and that I used the same CD for the installation on another PC. Please help. Edit : Related question of mine . I rebooted and did the integrity-check offline where it says: "Integrity test failed The ./pool/main/k/kde-l10n-de_16.04.3-1_all.deb file failed the MD5 checksum verification. Your CR-ROM or this file may have been corrupted."
This question is old but I just came across a working fix for this. As it turns out, the issue was caused by the USB drive being unmounted during the LVM setup process. It might've been a bad USB connector or USB drive. There is a very easy fix, for which you don't even have to reboot or redo any of the setup: 1. Press esc to enter the menu of the installer. 2. Select Enter a shell (or command prompt). 3. Run fdisk -l to find out the name and partition of your USB install drive. 4. Run mount /dev/sdc1 /cdrom (replace sdc1 with your USB drive). 5. Run exit , then go back to Install the base system from the menu. It will continue to install as normal. All credits and thanks go to this guy On Debian 10 busybox there is no fdisk command, so you can list disks and their partitions using ls /dev/sd* . Once you find your USB partition, go to step 4.
{ "source": [ "https://unix.stackexchange.com/questions/381408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
381,616
I have a Makefile. Somewhere in the makefile there is a variable defined: FOO=hello Later on I need to append some text to FOO 's content. I tried it like this: FOO=$(FOO)_world I expected that echo $(FOO) would output hello_world . Instead I get an error: *** Recursive variable 'FOO' references itself (eventually). Stop. Using the += operator is no option, because this would add a space in between.
You need the := (simple, immediately expanded assignment) in place of the recursive = : FOO := hello FOO := $(FOO)_world $(info FOO=$(FOO)) This prints: FOO=hello_world
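A minimal Makefile to try this out (the recipe line must start with a tab); running make prints FOO=hello_world. Note that the := trick also works when FOO was originally defined with a plain = , because := expands the right-hand side immediately and so breaks the self-reference:
# a recursive definition is fine up to here
FOO = hello
# := expands the right-hand side immediately, breaking the self-reference
FOO := $(FOO)_world
all:
	@echo FOO=$(FOO)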
{ "source": [ "https://unix.stackexchange.com/questions/381616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243323/" ] }
381,776
I want a quick and easy way to tell how many times a service managed by systemd has restarted (i.e. like a counter). Does that exist?
The counter has since been added and can be accessed with the following command: systemctl show foo.service -p NRestarts It will return a value if the service is in a restart loop; otherwise it will return nothing. https://github.com/systemd/systemd/pull/6495
{ "source": [ "https://unix.stackexchange.com/questions/381776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145572/" ] }
382,060
When I run sudo and enter my password, a subsequent invocation of sudo within a few minutes will not need the password to be re-entered. How can I change the default timeout to require the password again?
man sudoers says: Once a user has been authenticated, [...] the user may then use sudo without a password for a short period of time (5 minutes unless overridden by the timestamp_timeout option). To change the timeout, run, sudo visudo and add the line: Defaults timestamp_timeout=30 where 30 is the new timeout in minutes. To always require a password, set to 0 . To set an infinite timeout, set the value to be negative. To totally disable the prompt for a password for user ravi : Defaults:ravi !authenticate
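Two related sudo flags are handy alongside the timeout (see sudo(8)):
sudo -v    # refresh the cached credentials without running a command
sudo -k    # invalidate the cache immediately, so the next sudo prompts again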
{ "source": [ "https://unix.stackexchange.com/questions/382060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
382,535
I am a cat-owner and a cat-lover. But I don't like it when my cat sits on my keyboard and pushes randoms keys and messes everything up. I have an idea to have a function key that turns off the keyboard (except for one special key combination). I know there is already Ctl - S , but this freezes the keyboard and keeps track of the input until the keyboard is unlocked. Is there any way have the keyboard disregard all input except one hard-to-press-accidentally key combination? Bonus points: Is there any way to do the same thing in Windows?
Open a tiny terminal window somewhere on the screen and run cat in it. Whenever you want to protect the system from your cat, change focus to that window. Not many people know this but this feature was an important design goal for the cat program :). Unfortunately, really clever cats (like my evil beast) know what Ctrl-C is. If your cat is clever enough to figure out Ctrl-C , Ctrl-D , Ctrl-\ or Ctrl-Z , run cat using this sh script wrapper ( /usr/local/bin/toodamnsmartcat.sh ): #!/bin/sh trap "" TSTP INT QUIT; stty raw -echo; while true; do cat -v; done
{ "source": [ "https://unix.stackexchange.com/questions/382535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244013/" ] }
382,808
I have the following bash script: for i in {0800..9999}; do for j in {001..032}; do wget http://example.com/"$i-$j".jpg; done; done All the photos exist, and each iteration is independent of the others. How can I parallelize this while controlling the number of parallel jobs?
Confiq's answer is a good one for small i and j . However, given the size of i and j in your question, you will likely want to limit the overall number of processes spawned. You can do this with the parallel command or some versions of xargs . For example, using an xargs that supports the -P flag you could parallelize your inner loop as follows: for i in {0800..9999}; do echo {001..032} | xargs -n 1 -P 8 -I{} wget http://example.com/"$i-{}".jpg; done GNU parallel has a large number of features for when you need more sophisticated behavior and makes it easy to parallelize over both parameters: parallel -a <(seq 0800 9999) -a <(seq 001 032) -P 8 wget http://example.com/{1}-{2}.jpg
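If you prefer to stay in plain bash, here is a sketch using job control; wait -n requires bash 4.3 or newer, and the limit of 8 concurrent jobs is arbitrary:
for i in {0800..9999}; do
  for j in {001..032}; do
    while [ "$(jobs -r -p | wc -l)" -ge 8 ]; do wait -n; done
    wget -q "http://example.com/$i-$j.jpg" &
  done
done
wait    # let the last batch finish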
{ "source": [ "https://unix.stackexchange.com/questions/382808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221889/" ] }
382,946
I'm using Amazon Linux. I'm trying to append some text onto a file. The file is owned by root. I thought by using "sudo", I could append the needed text, but I'm getting "permission denied", see below [myuser@mymachine ~]$ ls -al /etc/yum.repos.d/google-chrome.repo -rw-r--r-- 1 root root 186 Jul 31 15:50 /etc/yum.repos.d/google-chrome.repo [myuser@mymachine ~]$ sudo echo -e "[google-chrome]\nname=google-chrome\nbaseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch\nenabled=1\ngpgcheck=1\ngpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub" >> /etc/yum.repos.d/google-chrome.repo -bash: /etc/yum.repos.d/google-chrome.repo: Permission denied How can I adjust my statement so that I can append the necessary text onto the file?
You have to use the tee utility to redirect or append streams to a file which needs extra permissions, like: echo something | sudo tee /etc/file or, to append: echo something | sudo tee -a /etc/file The reason is that the redirection > or >> is performed by your shell, which by default runs with your own user's permissions; only echo runs under sudo, so you end up running echo as root but doing the redirection without root permission. As an alternative you can also get a root shell and then use a normal redirect: sudo -i echo something >> /etc/path/to/file exit or sudo -s for a non-login shell. You can also run a non-interactive shell using root access: sudo bash -c 'echo something >> /etc/somewhere/file'
{ "source": [ "https://unix.stackexchange.com/questions/382946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
383,217
I want to be able to capture the exact output of a command substitution, including the trailing new line characters . I realise that they are stripped by default, so some manipulation may be required to keep them, and I want to keep the original exit code . For example, given a command with a variable number of trailing newlines and exit code: f(){ for i in $(seq "$((RANDOM % 3))"); do echo; done; return $((RANDOM % 256));} export -f f I want to run something like: exact_output f And have the output be: Output: $'\n\n' Exit: 5 I'm interested in both bash and POSIX sh .
POSIX shells The usual ( 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ) trick to get the complete stdout of a command is to do: output=$(cmd; ret=$?; echo .; exit "$ret") ret=$? output=${output%.} The idea is to add an extra .\n . Command substitution will only strip that \n . And you strip the . with ${output%.} . Note that in shells other than zsh , that will still not work if the output has NUL bytes. With yash , that won't work if the output is not text. Also note that in some locales, it matters what character you use to insert at the end. . should generally be fine (see below), but some others might not. For instance x (as used in some other answers) or @ would not work in a locale using the BIG5, GB18030 or BIG5HKSCS charsets. In those charsets, the encoding of a number of characters ends in the same byte as the encoding of x or @ (0x78, 0x40). For instance, ū in BIG5HKSCS is 0x88 0x78 (and x is 0x78 like in ASCII; all charsets on a system must have the same encoding for all the characters of the portable character set, which includes English letters, @ and . ). So if cmd was printf '\x88' (which by itself is not a valid character in that encoding, but just a byte-sequence) and we inserted x after it, ${output%x} would fail to strip that x as $output would actually contain ū (the two bytes making up a byte sequence that is a valid character in that encoding). Using . or / should be generally fine , as POSIX requires: “The encoded values associated with <period> , <slash> , <newline> , and <carriage-return> shall be invariant across all locales supported by the implementation.”, which means that these will have the same binary representation in any locale/encoding. “Likewise, the byte values used to encode <period> , <slash> , <newline> , and <carriage-return> shall not occur as part of any other character in any locale.”, which means that the above cannot happen, as no partial byte sequence could be completed by these bytes/characters to a valid character in any locale/encoding. (see 6.1 Portable Character Set ) The above does not apply to other characters of the Portable Character Set. Another approach, as discussed by @Isaac , would be to change the locale to C (which would also guarantee that any single byte can be correctly stripped), only for the stripping of the last character ( ${output%.} ). It would typically be necessary to use LC_ALL for that (in principle LC_CTYPE would be enough, but that could be accidentally overridden by an already set LC_ALL ). Also it would be necessary to restore the original value (or, e.g., have the non-POSIX-compliant locale be used only in a function). But beware that some shells don't support changing the locale while running (though this is required by POSIX). By using . or / , all that can be avoided. bash/zsh alternatives With bash and zsh , assuming the output has no NULs, you can also do: IFS= read -rd '' output < <(cmd) To get the exit status of cmd , you can do wait "$!"; ret=$? in bash but not in zsh . rc/es/akanga For completeness, note that rc / es / akanga have an operator for that. In them, command substitution, expressed as `cmd (or `{cmd} for more complex commands) returns a list (by splitting on $ifs , space-tab-newline by default). In those shells (as opposed to Bourne-like shells), the stripping of newline is only done as part of that $ifs splitting. So you can either empty $ifs or use the ``(seps){cmd} form where you specify the separators: ifs = ''; output = `cmd or: output = ``()cmd In any case, the exit status of the command is lost.
You'd need to embed it in the output and extract it afterwards which would become ugly. fish In fish, command substitution is with (cmd) and doesn't involve a subshell. set var (cmd) Creates a $var array with all the lines in the output of cmd if $IFS is non-empty, or with the output of cmd stripped of up to one (as opposed to all in most other shells) newline character if $IFS is empty. So there's still an issue in that (printf 'a\nb') and (printf 'a\nb\n') expand to the same thing even with an empty $IFS . To work around that, the best I could come up with was: function exact_output set -l IFS . # non-empty IFS set -l ret set -l lines ( cmd set ret $status echo ) set -g output '' set -l line test (count $lines) -le 1; or for line in $lines[1..-2] set output $output$line\n end set output $output$lines[-1] return $ret end Since version 3.4.0 (released in March 2022), you can do instead: set output (cmd | string collect --allow-empty --no-trim-newlines) With older versions, you could do: read -z output < (begin; cmd; set ret $status; end | psub) With the caveat that $output is an empty list instead of a list with one empty element if there's no output. Version 3.4.0 also added support for $(...) which behaves like (...) except that it can also be used inside double quotes in which case it behaves like in the POSIX shell: the output is not split on lines but all trailing newline characters are removed. Bourne shell The Bourne shell did not support the $(...) form nor the ${var%pattern} operator, so it can be quite hard to achieve there. One approach is to use eval and quoting: eval " output='` exec 4>&1 ret=\` exec 3>&1 >&4 4>&- (cmd 3>&-; echo \"\$?\" >&3; printf \"'\") | awk 3>&- -v RS=\\\\' -v ORS= -v b='\\\\\\\\' ' NR > 1 {print RS b RS RS}; {print}; END {print RS}' \` echo \";ret=\$ret\" `" Here, we're generating a output='output of cmd with the single quotes escaped as '\'' ';ret=X to be passed to eval . As for the POSIX approach, if ' was one of those characters whose encoding can be found at the end of other characters, we'd have a problem (a much worse one as it would become a command injection vulnerability), but thankfully, like . , it's not one of those, and that quoting technique is generally the one that is used by anything that quotes shell code (note that \ has the issue, so shouldn't be used (also excludes "..." inside which you need to use backslashes for some characters). Here, we're only using it after a ' which is OK). tcsh See tcsh preserve newlines in command substitution `...` (not taking care of the exit status, which you could address by saving it in a temporary file ( echo $status > $tempfile:q after the command))
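For reference, here is the POSIX dot trick from the top of this answer wrapped into the exact_output function the question asked for (a minimal sketch; it prints the captured output raw rather than in $'...' notation):
exact_output() {
  output=$("$@"; ret=$?; echo .; exit "$ret")
  ret=$?
  output=${output%.}
  printf 'Output: %s\nExit: %s\n' "$output" "$ret"
}
exact_output f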
{ "source": [ "https://unix.stackexchange.com/questions/383217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
383,365
When I try to login as root, this warning comes up. luvpreet@DHARI-Inspiron-3542:/$ sudo su Password: zsh compinit: insecure directories, run compaudit for list. Ignore insecure directories and continue [y] or abort compinit [n]? If I say yes, it simply logs in, and my shell changes from bash to zsh. If I say no, it says compinit: initialization aborted and logs in. After login, my shell changes to zsh. All I ever did related to zsh was download oh-my-zsh from github. What is happening and why? Using Ubuntu 16.04 on Dell.
You can list those insecure folders by: compaudit The root cause of "insecure" is these folders are group writable. There's a one line solution to fix that: compaudit | xargs chmod g-w Please see zsh, Cygwin and Insecure Directories and zsh compinit: insecure directories for reference.
{ "source": [ "https://unix.stackexchange.com/questions/383365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217864/" ] }
384,325
I want to create a new file or directory with ranger. I suppose I could use mkdir or touch , but I'm not sure if these would go in the current directory as viewed in ranger.
To create a directory in ranger , just type :mkdir exampledir or, :touch examplefile
{ "source": [ "https://unix.stackexchange.com/questions/384325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
384,345
In an image browser called gpicview ( http://lxde.sourceforge.net/gpicview/ ) I can enable fullscreen mode by pressing a button with a square and four arrows. But how can I disable the fullscreen mode?
{ "source": [ "https://unix.stackexchange.com/questions/384345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245275/" ] }
384,457
I add this line via visudo , in order to give full permissions to the yael user: yael ALL=(ALL) NOPASSWD: ALL But when I want to update the /etc/hosts file, I get permission denied: su - yael echo "10.10.10.10 yael_host">>/etc/hosts -bash: /etc/hosts: Permission denied sudo echo "10.10.10.10 yael_host">>/etc/hosts -bash: /etc/hosts: Permission denied ls -ltr /etc/hosts -rw-r--r--. 1 root root 185 Aug 7 09:29 /etc/hosts How can I give user yael root-like abilities?
The source of the problem is that the output redirection is done by the shell (running as user yael) and not by sudo echo . To ensure that the write to /etc/hosts is done by root instead of user yael, you can use one of the following: echo "10.10.10.10 yael_host" | sudo tee --append /etc/hosts or sudo sh -c "echo '10.10.10.10 yael_host'>>/etc/hosts"
{ "source": [ "https://unix.stackexchange.com/questions/384457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
384,461
I am trying to set up a privileged container on Ubuntu where apparmor denies mounting /run , /run/lock and /sys/fs/cgroup while running lxc-start . Violations [ 1621.278919] audit: type=1400 audit(1499177276.634:12): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/run/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, strictatime" [ 1621.302331] audit: type=1400 audit(1499177276.658:13): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/run/lock/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, noexec" [ 1621.325944] audit: type=1400 audit(1499177276.682:14): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/sys/fs/cgroup/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, noexec, strictatime" lxc-start --version : 2.0.6 Kernel version: 4.9 Any hints?
{ "source": [ "https://unix.stackexchange.com/questions/384461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70728/" ] }
385,339
I noticed some time ago that usernames and passwords given to curl as command line arguments don't appear in ps output (although of course they may appear in your bash history). They likewise don't appear in /proc/PID/cmdline . (The length of the combined username/password argument can be derived, though.) Demonstration below: [root@localhost ~]# nc -l 80 & [1] 3342 [root@localhost ~]# curl -u iamsam:samiam localhost & [2] 3343 [root@localhost ~]# GET / HTTP/1.1 Authorization: Basic aWFtc2FtOnNhbWlhbQ== User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 Host: localhost Accept: */* [1]+ Stopped nc -l 80 [root@localhost ~]# jobs [1]+ Stopped nc -l 80 [2]- Running curl -u iamsam:samiam localhost & [root@localhost ~]# ps -ef | grep curl root 3343 3258 0 22:37 pts/1 00:00:00 curl -u localhost root 3347 3258 0 22:38 pts/1 00:00:00 grep curl [root@localhost ~]# od -xa /proc/3343/cmdline 0000000 7563 6c72 2d00 0075 2020 2020 2020 2020 c u r l nul - u nul sp sp sp sp sp sp sp sp 0000020 2020 2020 0020 6f6c 6163 686c 736f 0074 sp sp sp sp sp nul l o c a l h o s t nul 0000040 [root@localhost ~]# How is this effect achieved? Is it somewhere in the source code of curl ? (I assume it is a curl feature, not a ps feature? Or is it a kernel feature of some sort?) Also: can this be achieved from outside the source code of a binary executable? E.g. by using shell commands, probably combined with root permissions? In other words could I somehow mask an argument from appearing in /proc or in ps output (same thing, I think) that I passed to some arbitrary shell command? (I would guess the answer to this is "no" but it seems worth including this extra half-a-question.)
When the kernel executes a process, it copies the command line arguments to read-write memory belonging to the process (on the stack, at least on Linux). The process can write to that memory like any other memory. When ps displays the argument, it reads back whatever is stored at that particular address in the process's memory. Most programs keep the original arguments, but it's possible to change them. The POSIX description of ps states that It is unspecified whether the string represented is a version of the argument list as it was passed to the command when it started, or is a version of the arguments as they may have been modified by the application. Applications cannot depend on being able to modify their argument list and having that modification be reflected in the output of ps. The reason this is mentioned is that most unix variants do reflect the change, but POSIX implementations on other types of operating systems may not. This feature is of limited use because the process can't make arbitrary changes. At the very least, the total length of the arguments cannot be increased, because the program can't change the location where ps will fetch the arguments and can't extend the area beyond its original size. The length can effectively be decreased by putting null bytes at the end, because arguments are C-style null-terminated strings (this is indistinguishable from having a bunch of empty arguments at the end). If you really want to dig, you can look at the source of an open-source implementation. On Linux, the source of ps isn't interesting, all you'll see there is that it reads the command line arguments from the proc filesystem , in /proc/ PID /cmdline . The code that generates the content of this file is in the kernel, in proc_pid_cmdline_read in fs/proc/base.c . The part of the process's memory (accessed with access_remote_vm ) goes from the address mm->arg_start to mm->arg_end ; these addresses are recorded in the kernel when the process starts and can't be changed afterwards. Some daemons use this ability to reflect their status, e.g. they change their argv[1] to a string like starting or available or exiting . Many unix variants have a setproctitle function to do this. Some programs use this ability to hide confidential data. Note that this is of limited use since the command line arguments are visible while the process starts. Most high-level languages copy the arguments to string objects and don't give a way to modify the original storage. Here's a C program that demonstrates this ability by changing argv elements directly. #include <stdlib.h> #include <stdio.h> #include <string.h> int main(int argc, char *argv[]) { int i; system("ps -p $PPID -o args="); for (i = 0; i < argc; i++) { memset(argv[i], '0' + (i % 10), strlen(argv[i])); } system("ps -p $PPID -o args="); return 0; } Sample output: ./a.out hello world 0000000 11111 22222 You can see argv modification in the curl source code. Curl defines a function cleanarg in src/tool_paramhlp.c which is used to change an argument to all spaces using memset . In src/tool_getparam.c this function is used a few times, e.g. by redacting the user password . Since the function is called from the parameter parsing, it happens early in a curl invocation, but dumping the command line before this happens will still show any passwords. Since the arguments are stored in the process's own memory, they cannot be changed from the outside except by using a debugger.
{ "source": [ "https://unix.stackexchange.com/questions/385339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
385,357
I have files named file.88_0.pdb , file.88_1.pdb , ... , file.88_100.pdb . I want to cat them so that file.88_1.pdb gets pasted after file.88_0.pdb , file.88_2.pdb after file.88_1.pdb , and so on. If I do cat file.88_*.pdb > all.pdb , the files are put together in the following order: 0 1 10 11 12 13 14 15 16 17 18 19 2 20... , etc. How do I put them together so that the order is 0 1 2 3 4 5 6... ?
Use brace expansion: cat file.88_{0..100}.pdb >>bigfile.pdb To silence the error messages for non-existent files, use: cat file.88_{0..100}.pdb >>bigfile.pdb 2>/dev/null In the zsh shell you also have the (n) globbing qualifier to request numerical sorting (as opposed to the default alphabetical sorting) for globs: cat file.88_*.pdb(n) >>bigfile.pdb 2>/dev/null
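Outside zsh, a portable way to get numeric order is to sort on the number after the underscore; this sketch assumes the file names contain no whitespace or newlines:
printf '%s\n' file.88_*.pdb | sort -t_ -k2n | xargs cat > bigfile.pdb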
{ "source": [ "https://unix.stackexchange.com/questions/385357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
385,405
I have the following folder structure folder | |--foo.txt | |--sub_folder | |--bar.txt I want to zip the content (files and sub folders) of folder without including the root folder in the zip. I have tried command zip -r package.zip folder But this includes the root folder. Also tried the following form zip -j -r package.zip folder But this will flatten all the directory structures and just include the files. How do I preserve the internal directory structure but ignore the parent folder?
zip stores paths relative to the current directory (when it is invoked), so you need to change that: (cd folder && zip -r "$OLDPWD/package.zip" .) The && ensures that zip only runs if the directory was correctly changed, and the parentheses run everything in a subshell, so the current directory is restored at the end. Using OLDPWD avoids having to calculate the relative path to package.zip .
{ "source": [ "https://unix.stackexchange.com/questions/385405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188434/" ] }
386,395
I am using convert to create a PDF file from about 2,000 images: convert 0001.miff 0002.miff ... 2000.miff -compress jpeg -quality 80 out.pdf The process terminates reproducibly when the output file has reached 2^31-1 bytes (2 GB −1) with the message convert: unknown `out.pdf'. The PDF file specification allows for ≈10 GB . I tried to pull more information from -debug all , but I didn't see anything helpful in the logging output. The file system is ext3, which allows for files at least up to 16 GiB (maybe more). As to ulimit , file size is unlimited . /etc/security/limits.conf only contains commented-out lines. What else can cause this and how can I increase the limit? ImageMagick version: 6.4.3 2016-08-05 Q16 OpenMP Distribution: SLES 11.4 (i586)
Your limitation does not stem from the filesystem, nor (I think) from package versions. Your 2 GB limit comes from using a 32-bit version of your OS. The way to raise the limit would be to install a 64-bit version, if the hardware supports it. See Large file support Traditionally, many operating systems and their underlying file system implementations used 32-bit integers to represent file sizes and positions. Consequently, no file could be larger than 2^32 − 1 bytes (4 GB − 1). In many implementations, the problem was exacerbated by treating the sizes as signed numbers, which further lowered the limit to 2^31 − 1 bytes (2 GB − 1).
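You can confirm the 32-bit diagnosis like this (the second line assumes convert is on your PATH; on the i586 SLES system from the question you would expect output along these lines):
uname -m                       # i586/i686 means a 32-bit kernel
file "$(command -v convert)"   # "ELF 32-bit LSB executable ..." means a 32-bit binary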
{ "source": [ "https://unix.stackexchange.com/questions/386395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28856/" ] }
386,476
When I want to search a whole tree for some content, I use find . -type f -print0 | xargs -0 grep <search_string> Is there a better way to do this in terms of performance or brevity?
Check if your grep supports -r option (for recurse ): grep -r <search_string> .
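With GNU grep you can also keep some of the filtering that find was providing, e.g. restrict the recursion to certain file names and print line numbers (the *.txt pattern is just an example):
grep -rn --include='*.txt' 'search_string' .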
{ "source": [ "https://unix.stackexchange.com/questions/386476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43524/" ] }
386,499
This will notify us if the file is empty: [[ ! -s $file ]] && echo "hello there I am empty file !!!" But how do we check whether a file contains only blanks (spaces or tabs)? A seemingly empty file can still contain spaces and TABs.
Just grep for a character other than space: grep -q '[^[:space:]]' < "$file" && printf '%s\n' "$file contains something else than whitespace characters"
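Combining this with the -s test from the question gives a complete check, as a small sketch:
if [ ! -s "$file" ]; then
  echo "file is empty"
elif ! grep -q '[^[:space:]]' "$file"; then
  echo "file contains only whitespace"
fi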
{ "source": [ "https://unix.stackexchange.com/questions/386499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
386,510
I want to monitor 2 different log files (as events appear in them). tail -f /var/log/file1 -f /var/log/file2 For each file, I want to grep some patterns: tail -f /var/log/file1 | grep '\(pattern1\|pattern2\)' tail -f /var/log/file2 | grep '\(pattern3\|pattern4\|pattern5\)' I do not know how to make this all work together. Furthermore, I would like to print file1's output in red and file2's output in blue. Again, I can do it for one file (snippet I grabbed in this forum): RED='\033[0;31m' BLUE='\033[0;34m' tail -fn0 /var/log/file1 | while read line; do if echo $line | grep -q '\(pattern1\|pattern2\)'; then echo -e "${RED}$line"; fi; done But I absolutely do not know how to do this for multiple files. Any ideas?
{ "source": [ "https://unix.stackexchange.com/questions/386510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244315/" ] }
386,513
I know it can be done (LUKS) using an extra hard-drive, which involves significant manual work. In Windows it's just a single click to encrypt/decrypt your hard-drives. Ubuntu offers a way to encrypt during installation, but can anyone explain why there are no comparably easy encryption tools in Linux, as there are on Windows/Mac, for after the OS is installed? Is there an inherent bottleneck in the OS architecture, or is it just that no one has developed one yet?
{ "source": [ "https://unix.stackexchange.com/questions/386513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246664/" ] }
386,522
I just read here : up to 128TiB virtual address space per process (instead of 2GiB) 64TiB physical memory support instead of 4GiB (or 64GiB with the PAE extension) Why is that? I mean, is the physical memory support limited by the kernel or by the current hardware? Why would you need twice as much virtual address space as the physical memory you can actually address?
Those limits don't come from Debian or from Linux, they come from the hardware. Different architectures (processor and memory bus) have different limitations. On current x86-64 PC processors, the MMU allows 48 bits of virtual address space . That means that the address space is limited to 256TB. With one bit to distinguish kernel addresses from userland addresses, that leaves 128TB for a process's address space. On current x86-64 processors, physical addresses can use up to 48 bits , which means you can have up to 256TB. The limit has progressively risen since the amd64 architecture was introduced (from 40 bits if I recall correctly). Each bit of address space costs some wiring and decoding logic (which makes the processor more expensive, slower and hotter), so hardware manufacturers have an incentive to keep the size down. Linux only allows physical addresses to go up to 2^46 (so you can only have up to 64TB) because it allows the physical memory to be entirely mapped in kernel space. Remember that there are 48 bits of address space; one bit for kernel/user leaves 47 bits for the kernel address space. Half of that at most addresses physical memory directly, and the other half allows the kernel to map whatever it needs. (Linux can cope with physical memory that can't be mapped in full at the same time, but that introduces additional complexity, so it's only done on platforms where it's required, such as x86-32 with PAE and armv7 with LPAE.) It's useful for virtual memory to be larger than physical memory for several reasons: It lets the kernel map the whole physical memory, and have space left for additional virtual mappings. In addition to mappings of physical memory, there are mappings of swap, of files and of device drivers. It's useful to have unmapped memory in places: guard pages to catch buffer overflows , large unmapped zones due to ASLR , etc.
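On x86 you can see the limits of your own CPU directly; the exact numbers vary by model (39/48 is common on desktop parts):
$ grep -m1 'address sizes' /proc/cpuinfo
address sizes   : 39 bits physical, 48 bits virtual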
{ "source": [ "https://unix.stackexchange.com/questions/386522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
386,541
Hey, I am using the conduit curl method to create tasks from a POST. It works fine when I run it from the terminal with hardcoded values, but when I try to execute it with variables it throws an error: Script: #!/bin/bash echo "$1" echo "$2" echo "$3" echo "$4" echo "$5" echo '{ "transactions": [ { "type": "title", "value": "$1" }, { "type": "description", "value": "$2" }, { "type": "status", "value": "$3" }, { "type": "priority", "value": "$4" }, { "type": "owner", "value": "$5" } ] }' | arc call-conduit --conduit-uri https://mydomain.phacility.com/ --conduit-token mytoken maniphest.edit execution: ./test.sh "test003 ticket from api post" "for testing" "open" "high" "ahsan" Output: test003 ticket from api post for testing open high ahsan {"error":"ERR-CONDUIT-CORE","errorMessage":"ERR-CONDUIT-CORE: Validation errors:\n - User \"$5\" is not a valid user.\n - Task priority \"$4\" is not a valid task priority. Use a priority keyword to choose a task priority: unbreak, very, high, kinda, triage, normal, low, wish.","response":null} As you can see in the error, it is reading $4 and $5 as literal values, not as variables, and I am failing to understand how to use $variables as input in these arguments.
{ "source": [ "https://unix.stackexchange.com/questions/386541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246842/" ] }