I need to hide some sensitive arguments to a program I am running, but I don't have access to the source code. I am also running this on a shared server so I can't use something like hidepid because I don't have sudo privileges. Here are some things I have tried: export SECRET=[my arguments] , followed by a call to ./program $SECRET , but this doesn't seem to help. ./program `cat secret.txt` where secret.txt contains my arguments, but the almighty ps is able to sniff out my secrets. Is there any other way to hide my arguments that doesn't involve admin intervention?
As explained here , Linux puts a program's arguments in the program's data space, and keeps a pointer to the start of this area. This is what is used by ps and so on to find and show the program arguments. Since the data is in the program's space, the program can manipulate it. Doing this without changing the program itself involves loading a shim with a main() function that will be called before the real main of the program. This shim can copy the real arguments to a new space, then overwrite the original arguments so that ps will just see nuls. The following C code does this.

/* https://unix.stackexchange.com/a/403918/119298
 * capture calls to a routine and replace with your code
 * gcc -Wall -O2 -fpic -shared -ldl -o shim_main.so shim_main.c
 * LD_PRELOAD=/.../shim_main.so theprogram theargs...
 */
#define _GNU_SOURCE /* needed to get RTLD_NEXT defined in dlfcn.h */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <dlfcn.h>

typedef int (*pfi)(int, char **, char **);
static pfi real_main;

/* copy argv to new location */
char **copyargs(int argc, char **argv){
    char **newargv = malloc((argc+1)*sizeof(*argv));
    char *from, *to;
    int i, len;

    for(i = 0; i < argc; i++){
        from = argv[i];
        len = strlen(from)+1;
        to = malloc(len);
        memcpy(to, from, len);
        memset(from, '\0', len); /* zap old argv space */
        newargv[i] = to;
        argv[i] = 0;
    }
    newargv[argc] = 0;
    return newargv;
}

static int mymain(int argc, char **argv, char **env) {
    fprintf(stderr, "main argc %d\n", argc);
    return real_main(argc, copyargs(argc, argv), env);
}

int __libc_start_main(pfi main, int argc, char **ubp_av,
        void (*init)(void), void (*fini)(void),
        void (*rtld_fini)(void), void (*stack_end)){
    static int (*real___libc_start_main)() = NULL;

    if (!real___libc_start_main) {
        char *error;
        real___libc_start_main = dlsym(RTLD_NEXT, "__libc_start_main");
        if ((error = dlerror()) != NULL) {
            fprintf(stderr, "%s\n", error);
            exit(1);
        }
    }
    real_main = main;
    return real___libc_start_main(mymain, argc, ubp_av, init, fini,
                                  rtld_fini, stack_end);
}

It is not possible to intervene on main() itself, but you can intervene on the standard C library function __libc_start_main , which goes on to call main. Compile this file shim_main.c as noted in the comment at the start, and run it as shown. I've left a printf in the code so you can check that it is actually being called. For example, run LD_PRELOAD=/tmp/shim_main.so /bin/sleep 100 , then do a ps and you will see a blank command and args being shown. There is still a small window of time during which the command args may be visible. To avoid this, you could, for example, change the shim to read your secret from a file and add it to the args passed to the program.
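To verify that the shim actually hides the arguments, a quick check along these lines should do (paths are illustrative; adjust them to wherever you built the shim):

gcc -Wall -O2 -fpic -shared -o /tmp/shim_main.so shim_main.c -ldl
LD_PRELOAD=/tmp/shim_main.so sleep 100 &
ps -p $! -o pid,args    # the args column should now be blank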
{ "source": [ "https://unix.stackexchange.com/questions/403870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260020/" ] }
404,036
On my Debian machine, the current version of apache2 is 2.4.10:

root@9dd0fd95a309:/# apachectl -V
Server version: Apache/2.4.10 (Debian)

I would like to upgrade Apache to a later version (at least 2.4.26). I tried:

root@9dd0fd95a309:/# apt-get install apache2
Reading package lists... Done
Building dependency tree
Reading state information... Done
apache2 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 48 not upgraded.

But it doesn't find any update. What can I do to upgrade to the latest version?
Do not manually upgrade Apache. Manual upgrading for security is unnecessary and probably harmful.

How Debian releases software

To see why this is, you must understand how Debian deals with packaging, versions, and security issues. Because Debian values stability over changes, the policy is to freeze the software versions in the packages of a stable release. This means that for a stable release very little changes, and once things work they should continue working for a long time.

But what if a serious bug or security issue is discovered after the release of a Debian stable version? These are fixed, in the software version provided with Debian stable . So if Debian stable ships with Apache 2.4.10 , and a security issue is found and fixed in 2.4.26 , Debian will take this security fix, apply it to 2.4.10 , and distribute the fixed 2.4.10 to its users. This minimizes disruptions from version upgrades, but it makes version sniffing such as Tenable's meaningless. Serious bugs are collected and fixed in point releases (the .9 in Debian 8.9 ) every few months. Security fixes are applied immediately and provided through an update channel. In general, as long as you run a supported Debian version, stick to stock Debian packages, and stay up to date on their security updates, you should be good.

Your Tenable report

To check whether Debian stable is vulnerable to your issues, Tenable's "2.4.x < 2.4.27 multiple issues" is useless. We need to know exactly which security issues they are talking about. Luckily, every significant vulnerability is assigned a Common Vulnerabilities and Exposures (CVE) identifier, so we can talk easily about specific vulnerabilities. For example, on this page for Tenable issue 101788 we can see that that issue is about vulnerabilities CVE-2017-9788 and CVE-2017-9789. We can search for these vulnerabilities on the Debian security tracker . If we do that, we can see that CVE-2017-9788 has the status "fixed" in or before version 2.4.10-10+deb8u11 . Likewise, CVE-2017-9789 is fixed . Tenable issue 10095 is about CVE-2017-3167 , CVE-2017-3169 , CVE-2017-7659 , CVE-2017-7668 , and CVE-2017-7679 , all fixed. So if you're on version 2.4.10-10+deb8u11 , you should be safe from all these vulnerabilities! You can check this with dpkg -l apache2 (ensure your terminal is wide enough to show the full version number).

Staying up to date

So, how do you ensure you're up to date with these security updates? First, you need to have the security repository in your /etc/apt/sources.list or /etc/apt/sources.list.d/* , something like this:

deb http://security.debian.org/ jessie/updates main

This is a normal part of any installation; you should not have to do anything special. Next, you must ensure that you install updated packages. This is your responsibility; it is not done automatically. A simple but tedious way is to log in regularly and run

# apt-get update
# apt-get upgrade

Judging from the fact that you report your Debian version as 8.8 (we're at 8.9) and the ... and 48 not upgraded. from your post, you might want to do this soon. To be notified of security updates, I highly recommend subscribing to the Debian security announcements mailing list . Another option is ensuring your server can send you emails, and installing a package like apticron , which emails you when packages on your system need updating. Basically, it regularly runs the apt-get update part, and pesters you to do the apt-get upgrade part.
Finally, you could install something like unattended-upgrades , which not only checks for updates, but automatically installs the updates without human intervention (a minimal configuration sketch follows at the end of this answer). Upgrading the packages automatically without human supervision carries some risk, so you need to decide for yourself if that is a good solution for you. I use it and I'm happy with it, but caveat updator.

Why upgrading yourself is harmful

In my second sentence, I said upgrading to the latest Apache version is probably harmful . The reason for this is simple: if you follow Debian's version of Apache, and make a habit of installing the security updates, then you are in a good position, security-wise. Debian's security team identifies and fixes security issues, and you can enjoy that work with minimal effort. If, however, you install Apache 2.4.27+, say by downloading it from the Apache website and compiling it yourself, then the work of keeping up with security issues is fully yours. You need to track security issues, and go through the work of downloading/compiling/etc every time a problem is found. It turns out this is a fair amount of work, and most people slack off. So they end up running a self-compiled version of Apache that becomes more and more vulnerable as issues are found. And so they end up a lot worse off than if they had simply followed Debian's security updates. So yes, probably harmful. That's not to say there's no place for compiling software yourself (or selectively taking packages from Debian testing or unstable), but in general, I recommend against it.

Duration of security updates

Debian doesn't maintain its releases forever. As a general rule, a Debian release receives full security support for one year after it has been obsoleted by a newer release. The release you're running, Debian 8 / jessie , is an obsoleted stable release ( oldstable in Debian terms). It will receive full security support until May 2018 , and long-term support until April 2020. I'm not entirely sure what the extent of this LTS support is. The current Debian stable release is Debian 9 / stretch . Consider upgrading to Debian 9 , which comes with newer versions of all software, and full security support for several years (likely until mid-2020). I recommend upgrading at a time that is convenient for you, but well before May 2018.

Closing remarks

Earlier, I wrote that Debian backports security fixes. This ended up being untenable for some software due to the high pace of development and high rate of security issues. These packages are the exception, and are actually updated to a recent upstream version. Packages I know this applies to are chromium (the browser), firefox , and nodejs . Finally, this entire way of dealing with security updates is not unique to Debian; many distributions work like this, especially the ones that favour stability over new software.
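As promised above, a minimal sketch of enabling unattended-upgrades (the file name follows the usual Debian convention; treat this as an illustration, not the only way to configure it):

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

With these two settings the daily APT job refreshes the package lists and installs pending updates (by default, security updates) once a day, automating exactly the apt-get update / apt-get upgrade routine described above.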
{ "source": [ "https://unix.stackexchange.com/questions/404036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236919/" ] }
404,043
I want to know if there is a way to remove application launchers in GNOME's activities menu. I also want to know if I can make folders (or groups) like the existing utilities folder. After I install applications, they always install other dependencies which I don't want to browse through every time I am searching for an application. In Openbox this was exceptionally well done using ~/.config/openbox/menu.xml , where I specified the exact file/folder structure, which benefited my productivity.
App launchers shown in GNOME Activities are located either in /usr/share/applications/ or ~/.local/share/applications/ as .desktop files. You can hide an individual app launcher from Activities by adding an extra NoDisplay=true line to the corresponding .desktop file. It is generally not advisable to edit the .desktop file located in /usr/share/applications/ . Instead, copy the file to ~/.local/share/applications/ first and make the change to the copied file. If you can't find the right .desktop file in either of the two locations mentioned above, try /usr/local/share/applications too.
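For example, to hide a hypothetical launcher foo.desktop (the file name is illustrative; substitute the launcher you want to hide):

cp /usr/share/applications/foo.desktop ~/.local/share/applications/
echo "NoDisplay=true" >> ~/.local/share/applications/foo.desktop

The copy in ~/.local/share/applications/ overrides the system-wide file, survives package upgrades, and can be undone simply by deleting it.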
{ "source": [ "https://unix.stackexchange.com/questions/404043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9135/" ] }
404,199
On my Archlinux system, the /usr/lib/systemd/system/mdmonitor.service file contains these lines:

[Service]
Environment=MDADM_MONITOR_ARGS=--scan
EnvironmentFile=-/run/sysconfig/mdadm
ExecStartPre=-/usr/lib/systemd/scripts/mdadm_env.sh
ExecStart=/sbin/mdadm --monitor $MDADM_MONITOR_ARGS

I suspect (confirmed by some googling) that the =- means that the service should not fail if the specified files are absent. However, I failed to find that behaviour in the manpage of systemd unit files. Where is the official documentation for the =- assignment?
This is documented in systemd.exec : EnvironmentFile= [...] The argument passed should be an absolute filename or wildcard expression, optionally prefixed with " - ", which indicates that if the file does not exist, it will not be read and no error or warning message is logged. And in systemd.service : ExecStart= … For each of the specified commands, the first argument must be an absolute path to an executable. Optionally, this filename may be prefixed with a number of special characters: Table 1. Special executable prefixes … ExecStartPre= , ExecStartPost= … If any of those commands (not prefixed with - ) fail, the rest are not executed and the unit is considered failed. (To find the most complete documentation for a systemd directive, look it up in systemd.directives .)
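As a concrete sketch of how the two prefixes behave together (unit, file, and binary names are made up for illustration), the following service starts successfully even if the environment file is missing and the pre-start script fails:

[Service]
EnvironmentFile=-/etc/default/myapp
ExecStartPre=-/usr/local/bin/myapp-prepare
ExecStart=/usr/bin/myapp $MYAPP_ARGS

Without the leading " - " on the first two lines, a missing /etc/default/myapp or a failing prepare script would cause the unit itself to fail.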
{ "source": [ "https://unix.stackexchange.com/questions/404199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81661/" ] }
404,207
I'm using Unix shell commands to extract data from my files. I used these commands to extract it:

value_1=$(cat tmp.csv | head -1001 | cut -f 3-6 -d',' > tmp1.csv)
value_2=$(cat tmp.csv | head -2002 | tail -1001 | cut -f 4-6 -d',' > tmp2.csv)
paste -d ',' tmp1.csv tmp2.csv > final.csv

My tmp.csv file is:

0 0 0 17.92204 -3.017933 35.14229
1 0 1 18.27151 -3.179997 35.20044
2 0 2 18.22776 -3.566021 34.87167
.
.
0 1 0 20.89817 -2.37854 66.51003
1 1 1 21.48396 -2.461451 66.48988
2 1 2 21.78348 -2.575202 66.51389

But the result is like this:

0 17.92204 -3.017933 35.14229 20.89817 -2.37854 66.51003
1 18.27151 -3.179997 35.20044 21.48396 -2.461451 66.48988
2 18.22776 -3.566021 34.87167 21.78348 -2.575202 66.51389

I want to make the result like this:

0 17.92204 -3.017933 35.14229 20.89817 -2.37854 66.51003
1 18.27151 -3.179997 35.20044 21.48396 -2.461451 66.48988
2 18.22776 -3.566021 34.87167 21.78348 -2.575202 66.51389

I was wondering if it would be possible to achieve that without manual handling?
{ "source": [ "https://unix.stackexchange.com/questions/404207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260260/" ] }
404,219
This should be a common enough task. I have just installed Archlinux. Next, I installed openbox , surprisingly finding that X is not among its dependencies. So I installed xorg-server as per the opening lines in the wiki . However:

# startx
bash: startx: command not found

On my Debian box:

$ dpkg -S /usr/bin/startx
xinit: /usr/bin/startx

Yet on the Arch box:

pacman -S xinit
error: target not found: xinit

Later the wiki again refers to /usr/bin/startx , which doesn't exist for me. What am I missing?
{ "source": [ "https://unix.stackexchange.com/questions/404219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20506/" ] }
404,258
My understanding is that Ubuntu is based on Debian. For example, on the Wikipedia page for Ubuntu it states " It is a Linux distribution based on the Debian architecture. " How can I find out what version of Debian a particular version of Ubuntu is based on (if any)? For example, the current stable release of Ubuntu is " Artful Aardvark " (17.10) which announces that it is based on the Linux 4.13 kernel, but does not seem to say anything about the Debian version. The current stable release of Debian is code named " Stretch " (9.2) which advertises a 4.9 kernel (on the afore-linked Stretch page). How can I find out the details of the relationship between them? Is there a particular command that will reveal this information?
Ubuntu releases aren’t based on Debian releases. During the development of an Ubuntu release, packages are imported from Debian unstable, until the Debian import freeze (in the past, LTS releases imported from testing, and this is what the linked wiki page still suggests; however looking at my packages shows that 18.04 is importing packages from unstable). This means that a given Ubuntu release will have non-Ubuntu-maintained packages in whatever version was in Debian at the time of the import freeze (barring explicit sync requests ); but that doesn’t match what the next release of Debian will contain. So trying to tie a release of Ubuntu to a release of Debian would just end up being misleading. You can look at the contents of /etc/debian_version to see the Debian codename of the version (under construction) from which packages were pulled; you can also match Debian import freeze dates from the release schedules (for example, Artful’s , Bionic’s , Cosmic’s , or Disco’s ). You’ll see from this that the same Debian release feeds multiple Ubuntu releases ( e.g. Stretch, which ended up being Debian 9, fed Xenial, Yakkety, Zesty and Artful; Buster, which will end up being Debian 10, fed Bionic and Cosmic, and is feeding Disco), with quite different package versions each time.
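In practice, the quickest check on a given Ubuntu system is therefore (output shown is illustrative):

$ cat /etc/debian_version
buster/sid

which here would mean packages were imported from the Debian branch that was on its way to becoming Debian 10 (buster).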
{ "source": [ "https://unix.stackexchange.com/questions/404258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
404,414
How can I test that my terminal / tmux is correctly set up to display truecolor / 24-bit color / 16.8 million colours?
The following script will produce a colour-gradient test pattern. You can optionally call it as:

width=1000 truecolor-test

and it will print a pattern of width columns.

#!/bin/bash
# Based on: https://gist.github.com/XVilka/8346728
awk -v term_cols="${width:-$(tput cols || echo 80)}" 'BEGIN{
    s="/\\";
    for (colnum = 0; colnum<term_cols; colnum++) {
        r = 255-(colnum*255/term_cols);
        g = (colnum*510/term_cols);
        b = (colnum*255/term_cols);
        if (g>255) g = 510-g;
        printf "\033[48;2;%d;%d;%dm", r,g,b;
        printf "\033[38;2;%d;%d;%dm", 255-r,255-g,255-b;
        printf "%s\033[0m", substr(s,colnum%2+1,1);
    }
    printf "\n";
}'
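If you only want a one-line sanity check rather than the full gradient, printing a single truecolor escape sequence is enough; on a 24-bit-capable terminal the text below appears orange, on anything else it shows up as a rough approximation or a strange colour:

printf '\x1b[38;2;255;100;0mTRUECOLOR\x1b[0m\n'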
{ "source": [ "https://unix.stackexchange.com/questions/404414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
404,667
I have come across a .service file that contains the following:

[Install]
WantedBy=multi-user.target

What does WantedBy=multi-user.target mean here? The original .service file can be found HERE . I am on Ubuntu 16.04 LTS.
This is systemd's dependency-handling mechanism. multi-user.target is the equivalent of runlevel 3 in the SysV world. That is to say, reaching multi-user.target includes starting the "Confluent ZooKeeper" service, which is probably exactly what you need.
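Concretely, the [Install] section only takes effect when the unit is enabled: enabling creates a symlink in the target's .wants directory, and that symlink is what makes multi-user.target pull the service in at boot (the unit name and output below are illustrative):

# systemctl enable foo.service
Created symlink /etc/systemd/system/multi-user.target.wants/foo.service → /etc/systemd/system/foo.service.

Disabling the unit removes the symlink again, after which the service is no longer started as part of multi-user.target.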
{ "source": [ "https://unix.stackexchange.com/questions/404667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260600/" ] }
404,792
Sometimes I restart a device and need to ssh back in when it's ready. I want to run the ssh command every 5 seconds until the command succeeds. My first attempt:

watch -n5 ssh user@host && exit 1

How can I do this?
Another option would be to use until :

until ssh user@host; do
    sleep 5
done

If you do this repeatedly for a number of hosts, put it in a function in your ~/.bashrc :

repeat() {
    read -p "Enter the hostname or IP of your server: " servername
    until ssh "$servername"; do
        sleep 5
    done
}
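One refinement worth considering (an addition, not part of the original answer): while the host is still coming up, a single ssh attempt can block for a long time before failing, so capping each attempt with ConnectTimeout keeps the retry interval predictable:

until ssh -o ConnectTimeout=5 user@host; do
    sleep 5
done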
{ "source": [ "https://unix.stackexchange.com/questions/404792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
404,822
I need to create a shell script that checks for the presence of a file and, if it doesn't exist, creates it and moves on to the next command, or just moves on to the next command. What I have doesn't do that.

#!/bin/bash
# Check for the file that gets created when the script successfully finishes.
if [! -f /Scripts/file.txt]
then
    : # Do nothing. Go to the next step?
else
    mkdir /Scripts # file.txt will come at the end of the script
fi
# Next command (macOS preference setting)
defaults write ...

Return is:

line 5: [!: command not found
mkdir: /Scripts: File exists

No idea what to do. Every place a Google search brings me indicates something different.
Possibly simpler solution, no need to do explicit tests, just use:

mkdir -p /Scripts
touch /Scripts/file.txt

If you don't want the "modification" time of an existing file.txt to be changed by touch , you can use touch -a /Scripts/file.txt to make touch only change the "access" and "change" times.
{ "source": [ "https://unix.stackexchange.com/questions/404822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/256085/" ] }
405,783
We've noticed that some of our automatic tests fail when they run at 00:30 but work fine the rest of the day. They fail with the message gimme gimme gimme in stderr , which wasn't expected. Why are we getting this output?
Dear @colmmacuait , I think that if you type "man" at 0001 hours it should print "gimme gimme gimme". #abba @marnanel - 3 November 2011 er, that was my fault, I suggested it. Sorry. Pretty much the whole story is in the commit. The maintainer of man is a good friend of mine, and one day six years ago I jokingly said to him that if you invoke man after midnight it should print " gimme gimme gimme ", because of the Abba song called " Gimme gimme gimme a man after midnight ": Well, he did actually put it in . A few people were amused to discover it, and we mostly forgot about it until today. I can't speak for Col , obviously, but I didn't expect this to ever cause any problems: what sort of test would break on parsing the output of man with no page specified? I suppose I shouldn't be surprised that one turned up eventually, but it did take six years. (The commit message calls me Thomas, which is my legal first name though I don't use it online much.) This issue has been fixed with commit 84bde8 : Running man with man -w will no longer trigger this easter egg.
{ "source": [ "https://unix.stackexchange.com/questions/405783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173916/" ] }
405,916
I have Windows 10 HOME installed on my system. After I installed Windows 10 HOME, I installed Ubuntu 17.10 on a separate partition so that I could dual boot. I removed Ubuntu 17.10 by deleting the partition it was installed on. Now I am unable to start my system. At boot, my system stops at the Grub command line. I want to boot to my Windows 10 installation which I haven't removed from my system. This is displayed at startup: GNU GRUB version 2.02 ~beta3-4ubuntu7 minimal BASH-like editing is supported.for the first word, TAB lists possible commands completions.anywhere else TAB lists the possible device or file completion. grub> How can I boot my Windows partition from this grub command? Laptop :- Toshiba satellite C55 - C5241
GRUB uses the contents of /boot/grub/ located on your Linux partition to boot your system normally. Because of this, bare GRUB has very minimal functionality. If you are on a Legacy BIOS system you're out of luck and you'll need a Windows disk for boot repair (this is because GRUB can't load its NTFS driver, since you deleted it). If you have a UEFI system, which is most likely, then you can still load Windows pretty easily. First type:

chainloader +1

If this says unknown command , you're out of luck because GRUB didn't embed this command, so you must have deleted it. If it reboots back to the grub prompt, then you have a Legacy BIOS and you're out of luck. If it says invalid efi path , then you should be able to proceed. Type:

ls (hd0,gpt1)/

This should return "/efi" . Now do:

chainloader (hd0,gpt1)/EFI/Microsoft/Boot/bootmgfw.efi
boot
{ "source": [ "https://unix.stackexchange.com/questions/405916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261540/" ] }
406,245
How do we allow only a certain set of private IPs to log in via SSH (with RSA key pairs) to a Linux server?
You can limit which hosts can connect by configuring TCP wrappers or filtering network traffic (firewalling) using iptables . If you want to use different authentication methods depending on the client IP address, configure the SSH daemon instead (option 3).

Option 1: Filtering with IPTABLES

iptables rules are evaluated in order, until the first match. For example, to allow traffic from the 192.168.0.0/24 network and otherwise drop the traffic (to port 22):

iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

The DROP rule is not required if your iptables default policy is configured to DROP . You can add more rules before the drop rule to match more networks/hosts. If you have a lot of networks or host addresses, you should use the ipset module. There is also the iprange module, which allows using any arbitrary range of IP addresses. iptables rules are not persistent across reboots, so you need to configure some mechanism to restore them on boot. iptables applies only to IPv4 traffic; on systems where ssh also listens on an IPv6 address, the corresponding configuration can be done with ip6tables .

Option 2: Using TCP wrappers

Note: this might not be an option on modern distributions, as support for tcpwrappers was removed from OpenSSH 6.7. You can also configure which hosts can connect using TCP wrappers. With TCP wrappers, in addition to IP addresses you can also use hostnames in rules. By default, deny all hosts in /etc/hosts.deny :

sshd : ALL

Then list allowed hosts in /etc/hosts.allow , for example to allow the network 192.168.0.0/24 and localhost :

sshd : 192.168.0.0/24
sshd : 127.0.0.1
sshd : [::1]

Option 3: SSH daemon configuration

You can configure the ssh daemon in sshd_config to use different authentication methods depending on the client address/hostname. If you only want to block other hosts from connecting, you should use iptables or TCP wrappers instead. First remove the default authentication methods:

PasswordAuthentication no
PubkeyAuthentication no

Then add the desired authentication methods after a Match Address at the end of the file. Placing Match at the end of the file is important, since all the configuration lines after it are placed inside the conditional block, until the next Match line. For example:

Match Address 127.0.0.*
    PubkeyAuthentication yes

Other clients are still able to connect, but their logins will fail because no authentication methods are available. Match arguments and allowed conditional configuration options are documented in the sshd_config man page . Match patterns are documented in the ssh_config man page .
{ "source": [ "https://unix.stackexchange.com/questions/406245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261580/" ] }
406,247
I have this situation where there's a lot of files with similar names (but they all follow a pattern) in different subfolders file1 file1 (Copy) /folder1/file2.txt /folder1/file2 (Copy).txt /folder1/file3.png /folder1/file3 (Copy).png Each file is in the same folder of its copy and has the same extension, the difference is that it has (Copy) at the end of the name I want to get all these files and delete the oldest one, then eventually rename the file from, for example, file1 (Copy) to file1 (that is, remove the (Copy) suffix) if it needs to be renamed. I was thinking of using find and mv but I'm not sure how to tell it to move the most recent one.
{ "source": [ "https://unix.stackexchange.com/questions/406247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262038/" ] }
406,256
In exploring a hung umount , I bumped into /run/mount/utab in some strace output. What is the purpose of /run/mount/utab ? Where can I read more about /run/mount/utab : purpose format what interacts with it (and how)
{ "source": [ "https://unix.stackexchange.com/questions/406256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
406,410
I found this Q/A with the solution to print all the keys in an object: jq -r 'keys[] as $k | "\($k), \(.[$k] | .ip)"' In my case I want to perform the above but on a sub-object: jq -r '.connections keys[] as $k | "\($k), \(.[$k] | .ip)"' What is the proper syntax to do this?
Simply pipe to the keys function. Sample input.json :

{
  "connections": {
    "host1": { "ip": "10.1.2.3" },
    "host2": { "ip": "10.1.2.2" },
    "host3": { "ip": "10.1.18.1" }
  }
}

jq -r '.connections | keys[] as $k | "\($k), \(.[$k] | .ip)"' input.json

The output:

host1, 10.1.2.3
host2, 10.1.2.2
host3, 10.1.18.1
{ "source": [ "https://unix.stackexchange.com/questions/406410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
406,454
If I export an image with, let's say, 300 DPI and read out its meta-info with any application that can do it (like file , exiftool , identify , mediainfo etc.), I always get a value showing Image-Width and Image-Height, in this case: 2254 x 288. How do I get the 300 DPI value, or the corresponding value, from any other image file? Since in my case the proportional value of Image-Width and Image-Height does not matter, I want to be able to check the resolution of any image so that I can compile new images with the same quality independent of their proportions, since these vary for every file. For my workflow I'm especially interested in any command-line solution, though any others are of course highly appreciated too.
You could use identify from imagemagick : identify -format '%x,%y\n' image.png Note however that in this case (a PNG image) identify will return the resolution in PPCM (pixels per centimeter) so to get PPI (pixels per inch) you need to add -units PixelsPerInch to your command (e.g. you could also use the fx operator to round value to integer): identify -units PixelsPerInch -format '%[fx:int(resolution.x)]\n' image.png There's also exiftool : exiftool -p '$XResolution,$YResolution' image.png though it assumes the image file has those tags defined .
{ "source": [ "https://unix.stackexchange.com/questions/406454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
406,936
The zswap documentation says: Zswap seeks to be simple in its policies. Sysfs attributes allow for one user controlled policy: * max_pool_percent - The maximum percentage of memory that the compressed pool can occupy. This specifies the maximum percentage of memory the compressed pool can occupy. How do I find out: The current percentage of memory occupied by the compressed pool How much of this pool is in use Compression ratios, hit rates, and other useful info
Current statistics:

# grep -R . /sys/kernel/debug/zswap/

Compression ratio:

# cd /sys/kernel/debug/zswap
# perl -E "say $(cat stored_pages) * 4096 / $(cat pool_total_size)"

Current settings:

$ grep -R . /sys/module/zswap
{ "source": [ "https://unix.stackexchange.com/questions/406936", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
407,385
At first this was a bit funny, like playing "Bash Roulette" ...but now it's getting old lol Any command in my terminal that exits with non-zero code closes my terminal window I was told that perhaps I have set -e set in some bash script somewhere that my terminal sources. I have checked .bash_profile / .bashrc / .profile and it doesn't look like set -e is in there. Would there be any other obvious culprits?
Alright, so indeed, it was a wayward set -e that caused my trouble. The way I found the set -e was using bash -lx . The best thing to do would be to use:

bash -lx > lx.log 2>&1

then open that log file and do a search for set ... once you find that wayward set -e you can remove that line and your problem should be gone! (A machine restart might be a good idea, though.) In my case, the set -e was in a file that .bash_profile sources, but the line was not in .bash_profile itself.
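If you'd rather search the usual startup files directly instead of reading the whole trace, something along these lines works too (the file list is a reasonable guess; extend it with whatever your setup sources):

grep -n 'set -e' ~/.bash_profile ~/.bashrc ~/.profile ~/.bash_aliases /etc/profile /etc/bash.bashrc /etc/profile.d/*.sh 2>/dev/null

Any hit in a file that your interactive shell sources is a candidate for the wayward set -e .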
{ "source": [ "https://unix.stackexchange.com/questions/407385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
407,647
On RHEL 7 or CentOS 7, the systemctl or systemd command works fine. I know it won't work in RHEL 6 or CentOS 6. Can you tell me the alternative command for starting/stopping a service, for example: systemctl start iptables.service ?
In earlier versions of RHEL, use the service command as explained in the documentation here :

# service service_name start

Therefore, in your case:

# service iptables start

You can replace start with restart , stop , or status . List all services with:

# service --status-all
{ "source": [ "https://unix.stackexchange.com/questions/407647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261580/" ] }
407,649
I recently installed Zorin OS on my Lenovo Yoga 2. I completely got rid of Windows and am not dual booting. I am now trying to get back to Windows 10. I created a Windows 10 USB drive, but I can't get the computer to boot from it. When I change the UEFI boot order, it still boots into Zorin, and when I go back to the UEFI, it has changed the boot order back. I also created another Zorin USB drive, thinking I would boot from it, then format the Zorin partition so it would have to boot from USB. Same thing, it just won't boot from USB. Is there a way to trigger booting from USB from within Zorin? If not, any ideas on how to get rid of Zorin some other way?
{ "source": [ "https://unix.stackexchange.com/questions/407649", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263288/" ] }
408,072
I don't necessarily want the answer, but if someone could point me to some literature or examples, I would like to figure it out. When I run the script I receive an error:

Syntax error near unexpected token fi

I have deduced that my problem is in my if statement, by commenting out the if statements and adding echo "$NAME" , which displays the names in /etc/ . When I remove the # from if and fi and add # to wc -c "$NAME" , I receive the syntax error listed above. I have added ; between ] then . I have also moved then to the next line, with no resolution.

#!/bin/bash
for NAME in /etc/*
do
    if [ -r "$NAME" -af "$NAME" ]
    then
        wc -c "$NAME"
    fi
done
Keywords like if , then , else , fi , for , case and so on need to be in a place where the shell expects a command name. Otherwise they are treated as ordinary words. For example, echo if just prints if , it doesn't start a conditional instruction. Thus, in the line

if [ -r "$NAME" -af "$NAME" ] then

the word then is an argument of the command [ (which it would complain about if it ever got to run). The shell keeps looking for the then , and finds a fi in command position. Since there's an if that's still looking for its then , the fi is unexpected, there's a syntax error. You need to put a command terminator before then so that it's recognized as a keyword. The most common command terminator is a line break, but before then , it's common to use a semicolon (which has exactly the same meaning as a line break).

if [ -r "$NAME" -af "$NAME" ]; then

or

if [ -r "$NAME" -af "$NAME" ]
then

Once you fix that you'll get another error from the command [ because it doesn't understand -af . You presumably meant

if [ -r "$NAME" -a -f "$NAME" ]; then

Although the test commands look like options, you can't bundle them like this. They're operators of the [ command and they need to each be a separate word (as do [ and ] ). By the way, although [ -r "$NAME" -a -f "$NAME" ] works, I recommend writing either

[ -r "$NAME" ] && [ -f "$NAME" ]

or

[[ -r $NAME && -f $NAME ]]

It's best to keep [ … ] conditionals simple because the [ command can't distinguish operators from operands easily. If $NAME looks like an operator and appears in a position where the operator is valid, it could be parsed as an operator. This won't happen in the simple cases seen in this answer, but more complex cases can be risky. Writing this with separate calls to [ and using the shell's logical operators avoids this problem. The second syntax uses the [[ … ]] conditional construct which exists in bash (and ksh and zsh, but not plain sh). This construct is special syntax, whereas [ is parsed like any other command, thus you can use things like && inside and you don't need to quote variables except in arguments to some string operators ( = , == , != , =~ ) (see When is double-quoting necessary? for details).
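Putting those fixes together, a corrected version of the original script (sketched here, not from the original answer) would be:

#!/bin/bash
for NAME in /etc/*
do
    if [[ -r $NAME && -f $NAME ]]; then
        wc -c "$NAME"
    fi
done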
{ "source": [ "https://unix.stackexchange.com/questions/408072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263688/" ] }
408,192
I find that under my root directory, there are some directories that have the same inode number: $ ls -aid */ .*/ 2 home/ 2 tmp/ 2 usr/ 2 var/ 2 ./ 2 ../ 1 sys/ 1 proc/ I only know that the directories' names are kept in the parent directory, and their data is kept in the inode of the directories themselves. I'm confused here. This is what I think when I trace the pathname /home/user1. First I get into the inode 2 which is the root directory which contains the directory lists. Then I find the name home paired with inode 2. So I go back to the disk to find inode 2? And I get the name user1 here?
They're on different devices. If we look at the output of stat , we can also see the device the file is on:

# stat / | grep Inode
Device: 801h/2049d      Inode: 2      Links: 24
# stat /opt | grep Inode
Device: 803h/2051d      Inode: 2      Links: 5

So those two are on separate devices/filesystems. Inode numbers are only unique within a filesystem so there is nothing unusual here. On ext2/3/4 inode 2 is also always the root directory , so we know they are the roots of their respective filesystems. The combination of device number + inode is likely to be unique over the whole system. (There are filesystems that don't have inodes in the traditional sense, but I think they still have to fake some sort of a unique identifier in their place anyway.) The device numbers there appear to be the same as those shown on the device nodes, so /dev/sda1 holds the filesystem where / is on:

# ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Sep 21 10:45 /dev/sda1
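A compact way to print both numbers at once for a set of paths is stat 's format option (GNU stat; %d is the device number in decimal, %i the inode; output sketched to match the example above):

$ stat -c 'dev=%d ino=%i %n' / /opt
dev=2049 ino=2 /
dev=2051 ino=2 /opt

Two paths refer to the same file only when both the device and the inode match.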
{ "source": [ "https://unix.stackexchange.com/questions/408192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/249654/" ] }
408,338
When I log in to an SSH server/host I get asked whether the hash of its public key is correct, like this: # ssh 1.2.3.4 The authenticity of host '[1.2.3.4]:22 ([[1.2.3.4]:22)' can't be established. RSA key fingerprint is SHA256:CxIuAEc3SZThY9XobrjJIHN61OTItAU0Emz0v/+15wY. Are you sure you want to continue connecting (yes/no)? no Host key verification failed. In order to be able to compare, I used this command on the SSH server previously and saved the results to a file on the client: # ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub 2048 f6:bf:4d:d4:bd:d6:f3:da:29:a3:c3:42:96:26:4a:41 /etc/ssh/ssh_host_rsa_key.pub (RSA) For some great reason (no doubt) one of these commands uses a different (newer?) way of displaying the hash, thereby helping man-in-the-middle attackers enormously because it requires a non-trivial conversion to compare these. How do I compare these two hashes, or better: force one command to use the other's format? The -E option to ssh-keygen is not available on the server.
ssh # ssh -o "FingerprintHash sha256" testhost The authenticity of host 'testhost (256.257.258.259)' can't be established. ECDSA key fingerprint is SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg. # ssh -o "FingerprintHash md5" testhost The authenticity of host 'testhost (256.257.258.259)' can't be established. ECDSA key fingerprint is MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a. ssh-keyscan & ssh-keygen Another approach is to download the public key to a system which supports both MD5 and SHA256 hashes: # ssh-keyscan testhost >testhost.ssh-keyscan # cat testhost.ssh-keyscan testhost ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItb... testhost ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0U... testhost ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMKHh... # ssh-keygen -lf testhost.ssh-keyscan -E sha256 256 SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg testhost (ECDSA) 2048 SHA256:bj+7fjKSRldiv1LXOCTudb6piun2G01LYwq/OMToWSs testhost (RSA) 256 SHA256:hZ4KFg6D+99tO3xRyl5HpA8XymkGuEPDVyoszIw3Uko testhost (ED25519) # ssh-keygen -lf testhost.ssh-keyscan -E md5 256 MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a testhost (ECDSA) 2048 MD5:d5:6b:eb:71:7b:2e:b8:85:7f:e1:56:f3:be:49:3d:2e testhost (RSA) 256 MD5:e6:16:94:b5:16:19:40:41:26:e9:f8:f5:f7:e7:04:03 testhost (ED25519)
{ "source": [ "https://unix.stackexchange.com/questions/408338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }
408,346
sudo apt-get install pppoe will download the pppoe package and install it. Is it possible to just download the pppoe package and not install it with the apt-get command?

wget http://ftp.us.debian.org/debian/pool/main/p/ppp/ppp_2.4.7-1+4_amd64.deb

ppp_2.4.7-1+4_amd64.deb is in the current directory now.

cd /tmp
sudo apt-get install -d ppp
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  ppp
0 upgraded, 1 newly installed, 0 to remove and 95 not upgraded.
Need to get 0 B/346 kB of archives.
After this operation, 949 kB of additional disk space will be used.
Download complete and in download only mode

No ppp_2.4.7-1+4_amd64.deb or ppp-related package is in /tmp:

sudo find /tmp -name ppp*

Nothing found. Where is the ppp package downloaded by cd /tmp; sudo apt-get install -d ppp ?
Use --download-only :

sudo apt-get install --download-only pppoe

This will download pppoe and any dependencies you need, and place them in /var/cache/apt/archives . That way a subsequent apt-get install pppoe will be able to complete without any extra downloads.
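Alternatively, if you want the .deb to land in the current directory rather than in the APT cache, apt-get has a dedicated subcommand for that (no root needed):

apt-get download pppoe

Note that apt-get download fetches only the named package itself, without its dependencies.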
{ "source": [ "https://unix.stackexchange.com/questions/408346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
408,413
[4.13.12-1-ARCH with gnome3 and gdm on Xorg] I already have set my VISUAL and EDITOR env-vars to vim . Similarly I did try SYSTEMD_EDITOR="vim"; export SYSTEMD_EDITOR in my ~/.bashrc, to no avail. When modifying unit files in Arch (systemd) via $ sudo systemctl edit _unit_ I find myself staring at nano . Life is too short and I want vim by all means. How do I do this ?
First method, you can add this line to ~/.bashrc :

export SYSTEMD_EDITOR=vim

And then sudo visudo and add this line:

Defaults env_keep += "SYSTEMD_EDITOR"

Start a new bash session for this to take effect, then run sudo systemctl edit <foo> as usual.

Second method is to use update-alternatives . Install your desired editor , e.g. vim.gtk3 :

$ which editor
editor is /usr/bin/editor
$ sudo update-alternatives --install "$(which editor)" editor "$(which vim.gtk3)" 15

Then choose your desired editor :

$ sudo update-alternatives --config editor
There are 7 choices for the alternative editor (providing /usr/bin/editor).

  Selection    Path                  Priority   Status
------------------------------------------------------------
  0            /usr/bin/vim.gtk3      50        auto mode
  1            /bin/ed               -100       manual mode
* 2            /bin/nano              40        manual mode
  3            /usr/bin/code           0        manual mode
  4            /usr/bin/gedit          5        manual mode
  5            /usr/bin/vim.basic     30        manual mode
  6            /usr/bin/vim.gtk3      50        manual mode
  7            /usr/bin/vim.tiny      15        manual mode

Press <enter> to keep the current choice[*], or type selection number: 6
update-alternatives: using /usr/bin/vim.gtk3 to provide /usr/bin/editor (editor) in manual mode

Third method is to set EDITOR directly at runtime:

sudo EDITOR=vim systemctl edit <foo>

The precedence is: first method > third method > second method. Don't try to set a "GUI" editor such as gedit because Why don't gksu/gksudo or launching a graphical application with sudo work with Wayland? and Gedit uses 100% of the CPU while editing files
{ "source": [ "https://unix.stackexchange.com/questions/408413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72707/" ] }
408,424
Some processes seem to eat up the disk space on Linux (Ubuntu 16). I upgraded the disk of a laptop to 400 GB one month ago. Now I have around 5 GB of free space. I spent many hours reading different posts on this forum and trying different commands. For example:

sudo du -x -d1 -h /var | sort -hr
297G    /var
296G    /var/lib
207M    /var/cache
154M    /var/dell
118M    /var/log
59M     /var/opt
18M     /var/backups
17M     /var/tmp
7,9M    /var/crash
92K     /var/spool
20K     /var/www
4,0K    /var/snap
4,0K    /var/metrics
4,0K    /var/mail
4,0K    /var/local

I tried to use du , find and Disk Usage Analyzer, but haven't found the issue:

find . -size +1G
find: ‘./.ssh/typos_ssh_keys/id_rsa.pub’: Permission denied
find: ‘./.ssh/typos_ssh_keys/id_rsa’: Permission denied
find: ‘./.local/share/Trash/expunged/3448374582/work/Catalina’: Permission denied
find: ‘./.local/share/Trash/expunged/3448374582/conf/Catalina’: Permission denied
find: ‘./.local/share/jupyter/runtime’: Permission denied
find: ‘./.dbus’: Permission denied
find: ‘./.cache/dconf’: Permission denied
find: ‘./.gvfs’: Permission denied

I was reading that there might be some logs that consume a lot of space, but I have not found such logs. Any help will be highly appreciated.
{ "source": [ "https://unix.stackexchange.com/questions/408424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264003/" ] }
408,859
I am seeing some strange behaviour with zsh (5.4.2_1, installed with Homebrew) on macOS: it is not using the first occurrence of an executable in the path. Here is the scenario:

echo $PATH returns:

/usr/local/Cellar/zplug/HEAD-9fdb388/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

brew is in both /usr/local/Cellar/zplug/HEAD-9fdb388/bin and /usr/local/bin . This is confirmed by running which -a brew , which returns:

/usr/local/Cellar/zplug/HEAD-9fdb388/bin/brew
/usr/local/bin/brew

But when I run which brew it returns:

/usr/local/bin/brew

and brew does run /usr/local/bin/brew rather than /usr/local/Cellar/zplug/HEAD-9fdb388/bin/brew . How can this happen when brew is earlier in the path? Help appreciated.
which -a cmd looks for all regular files named cmd which you have execute permission for in the directories in $path (in addition to aliases, functions, builtins...). While which cmd returns the command that zsh would run ( which is a builtin in zsh like in tcsh but unlike most other shells). zsh , like most other shells remembers the paths of executables in a hash table so as not to have to look them up in all the directories in $path each time you invoke them. That hash table (exposed in the $commands associative array in zsh ) can be manipulated with the hash command (standard POSIX shell command). If you have run the brew command (or which/type/whence brew , or used command completion or anything that would have primed that hash/cache) before it was added to /usr/local/Cellar/zplug/HEAD-9fdb388/bin or before /usr/local/Cellar/zplug/HEAD-9fdb388/bin was added to $path , zsh would have remembered its path and stored it as $commands[brew]=/usr/local/bin/brew . In that case, you can use hash -r (as in the Bourne shell) or rehash (as in csh) to have zsh forget the remembered commands (invalidate that cache ), so it can look it up next time and find it in the new location.
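A quick way to see the cached entry and the effect of invalidating it (output sketched on the assumption that a stale hash entry is indeed the cause):

$ print -r -- $commands[brew]
/usr/local/bin/brew
$ rehash    # or: hash -r
$ which brew
/usr/local/Cellar/zplug/HEAD-9fdb388/bin/brew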
{ "source": [ "https://unix.stackexchange.com/questions/408859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264351/" ] }
409,225
This answer reveals that one can copy all files - including hidden ones - from directory src into directory dest like so: mkdir dest cp -r src/. dest There is no explanation in the answer or its comments as to why this actually works, and nobody seems to find documentation on this either. I tried out a few things. First, the normal case: $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src dest $ ls -A dest dest_file src Then, with /. at the end: $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src/. dest $ ls -A dest dest_file .dotfile src_dir src_file So, this behaves simlarly to * , but also copies hidden files. $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src/* dest $ ls -A dest dest_file src_dir src_file . and .. are proper hard-links as explained here , just like the directory entry itself. Where does this behaviour come from, and where is it documented?
The behaviour is a logical result of the documented algorithm for cp -R . See POSIX , step 2f: The files in the directory source_file shall be copied to the directory dest_file , taking the four steps (1 to 4) listed here with the files as source_files . . and .. are directories, respectively the current directory, and the parent directory. Neither are special as far as the shell is concerned, so neither are concerned by expansion, and the directory will be copied including hidden files. * , on the other hand, will be expanded to a list of files, and this is where hidden files are filtered out. src/. is the current directory inside src , which is src itself; src/src_dir/.. is src_dir ’s parent directory, which is again src . So from outside src , if src is a directory, specifying src/. or src/src_dir/.. as the source file for cp are equivalent, and copy the contents of src , including hidden files. The point of specifying src/. is that it will fail if src is not a directory (or symbolic link to a directory), whereas src wouldn’t. It will also copy the contents of src only, without copying src itself; this matches the documentation too: If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target , a single slash character if target did not end in a slash, and the pathname of the file relative to the directory containing source_file . So cp -R src/. dest copies the contents of src to dest/. (the source file is . in src ), whereas cp -R src dest copies the contents of src to dest/src (the source file is src ). Another way to think of this is to compare copying src/src_dir and src/. , rather than comparing src/. and src . . behaves just like src_dir in the former case.
{ "source": [ "https://unix.stackexchange.com/questions/409225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67771/" ] }
409,407
I want to list all the users’ directories on the machine. Usually, I will do: ls -l /home But I use it in a script that will be deployed on others’ machines and maybe on those machines they don't call it home (e.g. myHome). So I want to generalize it to ls -l ~ . But it just lists my user’s home directory instead of all users’ home directories (basically I want to get a list of the users’ names on the machine). How can I generalize it?
Many systems have a getent command to list or query the content of the Name Service databases like passwd , group , services , protocols ... getent passwd | cut -d: -f6 would list the home directories (the 6th colon-delimited field) of all the users in databases that can be enumerated . The user name itself is in the first field, so for the list of user names: getent passwd | cut -d: -f1 (note that it doesn't mean those users can login to the system or their home directory has been created, but that they are known to the system and can be translated to a user id). For databases that can't be enumerated, you can try and query each possible user id individually: getent passwd {0..65535} | cut -d: -f1,6 (here assuming uids stop at 65535 (some systems support more) and a shell that supports zsh's {x..y} form of brace expansion). But you wouldn't want to do that often on systems where the user database is networked (and there's limited local caching) like LDAP, NIS+, SQL... as that could imply a lot of network traffic (and load on the directory server) to make all those queries. That also means that if there are several users sharing the same uid, you'll only get one entry for each uid, so you miss the others. If you don't have getent , you could resort to perl : perl -le 'while (@e = getpwent) {print $e[7]}' for getent passwd ( $e[0] for the user names), or: perl -le 'for ($i=0;$i<65536;++$i) { if (@e = getpwuid $i) {print $e[0], ": ", $e[7]}}' for getent passwd {0..65535} with the same caveats. In shells, you can use ~user to get the home directory of user , but in most shells, that only works for a limited set of user names (the list of allowed characters in user names supported for that ~ expansion operator varies from shell to shell) and with several shells (including bash ), ~$user won't work (you'd need to resort to eval when the name of the user is stored in a variable there). And you'd still have to find a way to get the list of user names. Some shells have builtin support to get that list of usernames. bash : compgen -u would return the list of users in databases that can be enumerated. zsh : the $userdirs associative array maps user names to their home directory (also limited to databases that can be enumerated, but if you do a ~user expansion for a user that is in a non-enumerable database, an entry will be added to $userdirs ). So you can do: printf '%s => %s\n' "${(kv@)userdirs}" to list users with their home directory. That only works when zsh is interactive though . tcsh , fish and yash are three other shells that can complete user names (for instance when completing ~<Tab> arguments), but it doesn't look like they let you obtain that list of user names programmatically.
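As an illustration, the combined name/home listing would look something like this (the entries shown here are made up):
$ getent passwd | cut -d: -f1,6
root:/root
daemon:/usr/sbin
alice:/home/alice
bob:/home/bob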
{ "source": [ "https://unix.stackexchange.com/questions/409407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251897/" ] }
409,462
The following shell command was expected to print only odd lines of the input stream: echo -e "aaa\nbbb\nccc\nddd\n" | (while true; do head -n 1; head -n 1 >/dev/null; done) But instead it just prints the first line: aaa . The same doesn't happen when it is used with the -c ( --bytes ) option: echo 12345678901234567890 | (while true; do head -c 5; head -c 5 >/dev/null; done) This command outputs 1234512345 as expected. But this works only in the coreutils implementation of the head utility. The busybox implementation still eats extra characters, so the output is just 12345 . I guess this specific way of implementation is done for optimization purposes. You can't know where the line ends, so you don't know how many characters you need to read. The only way not to consume extra characters from the input stream is to read the stream byte by byte. But reading from the stream one byte at a time may be slow. So I guess head reads the input stream into a big enough buffer and then counts lines in that buffer. The same can't be said for the case when the --bytes option is used. In this case you know how many bytes you need to read. So you may read exactly this number of bytes and not more than that. The coreutils implementation uses this opportunity, but the busybox one does not; it still reads more bytes than required into a buffer. It is probably done to simplify the implementation. So the question: Is it correct for the head utility to consume more characters from the input stream than it was asked? Is there some kind of standard for Unix utilities? And if there is, does it specify this behavior? PS You have to press Ctrl+C to stop the commands above. The Unix utilities do not fail on reading beyond EOF . If you don't want to press it, you may use a more complex command: echo 12345678901234567890 | (while true; do head -c 5; head -c 5 | [ `wc -c` -eq 0 ] && break >/dev/null; done) which I didn't use for simplicity.
Is it correct for the head utility to consume more characters from the input stream than it was asked? Yes, it’s allowed (see below). Is there some kind of standard for Unix utilities? Yes, POSIX volume 3, Shell & Utilities . And if there is, does it specify this behavior? It does, in its introduction: When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility. For files that are not seekable, the state of the file offset in the open file description for that file is unspecified. head is one of the standard utilities , so a POSIX-conforming implementation has to implement the behaviour described above. GNU head does try to leave the file descriptor in the correct position, but it’s impossible to seek on pipes, so in your test it fails to restore the position. You can see this using strace : $ echo -e "aaa\nbbb\nccc\nddd\n" | strace head -n 1 ... read(0, "aaa\nbbb\nccc\nddd\n\n", 8192) = 17 lseek(0, -13, SEEK_CUR) = -1 ESPIPE (Illegal seek) ... The read returns 17 bytes (all the available input), head processes four of those and then tries to move back 13 bytes, but it can’t. (You can also see here that GNU head uses an 8 KiB buffer.) When you tell head to count bytes (which is non-standard), it knows how many bytes to read, so it can (if implemented that way) limit its read accordingly. This is why your head -c 5 test works: GNU head only reads five bytes and therefore doesn’t need to seek to restore the file descriptor’s position. If you write the document to a file, and use that instead, you’ll get the behaviour you’re after: $ echo -e "aaa\nbbb\nccc\nddd\n" > file $ < file (while true; do head -n 1; head -n 1 >/dev/null; done) aaa ccc
{ "source": [ "https://unix.stackexchange.com/questions/409462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152606/" ] }
409,609
I believe this should be simple but I can't get it to work properly. These are the commands I can run on command line: cd /home/debian/ap # Start a virtualenv source venv-ap/bin/activate # This needs to happen inside the virtualenv and takes ~20 seconds crossbar start # Outside the virtualenv, perhaps in a different command line window python3 /home/debian/myscript.py These commands have to be done in this order. Due to the virtualenv, the non-executable for crossbar, and the separate python script afterwards, I haven't been able to figure out the best way to get this to work. My current work-in-progress: [Unit] Description=Start CB After=network.target [Service] Type=simple User=debian ExecStartPre=source /home/debian/ap/venv-ap/bin/activate ExecStart=cd /home/debian/ap/ && crossbar start Restart=always [Install] WantedBy=multi-user.target
This doesn't work because source is a shell command, so systemd's ExecStart= or ExecStartPre= won't understand them directly... (BTW, the same is true for cd and the && .) You could achieve that by running a shell explicitly and running all your commands together there: ExecStart=/bin/sh -c 'cd /home/debian/ap/ && source venv-ap/bin/activate && crossbar start' But a better approach is, instead of sourcing the "activate" script, to use the python executable in the bin/ of your virtualenv directly. If you look at virtualenv's usage document , you'll notice it says: ENV/bin is created, where executables live - noticeably a new python . Thus running a script with #! /path/to/ENV/bin/python would run that script under this virtualenv’s python. In other words, assuming crossbar is the Python script you want to run that requires the venv-ap virtualenv, simply begin crossbar with: #!/home/debian/ap/venv-ap/bin/python And it will automatically use the virtualenv whenever invoked. Also possible, invoking the Python interpreter from the virtualenv directly, with: ExecStart=/home/debian/ap/venv-ap/bin/python /path/to/crossbar start (Also, regarding running in a specific directory, setting WorkingDirectory=/home/debian/ap is better than using a cd command. You don't need a shell that way, and systemd can do better error handling for you.)
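Putting the pieces together, a minimal unit might look like this (a sketch only: the paths come from the question, and it assumes crossbar was pip-installed into the virtualenv, so its entry-point script in venv-ap/bin already has a shebang pointing at the venv's python):
[Unit]
Description=Crossbar router
After=network.target

[Service]
Type=simple
User=debian
WorkingDirectory=/home/debian/ap
# no "activate" needed: the entry point's shebang selects the venv interpreter
ExecStart=/home/debian/ap/venv-ap/bin/crossbar start
Restart=always

[Install]
WantedBy=multi-user.target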
{ "source": [ "https://unix.stackexchange.com/questions/409609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146761/" ] }
410,269
I am wondering if it is theoretically possible to build a Linux distro that can both support rpm and debian packages. Are there any distros live out there that support both? And if not is it even possible?
Bedrock Linux does this. Not saying I've done this, or that it is a good idea, but it is being done.
{ "source": [ "https://unix.stackexchange.com/questions/410269", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100193/" ] }
410,281
I'm trying to manually create my own custom usb drive, with a bunch of iso files on it, and a partition for data. I used the instruction I put here to create my key, but to sum-up, I have done a partition /dev/sda1 for data a partition /dev/sda2 that has grub installed a partition /dev/sda3 that contains my iso files in the folder linux-iso/ I put in the file grub2/grub/conf (on /dev/sda2 ) the following file : insmod loopback insmod iso9660 menuentry 'XUbuntu 16.04 "Xenial Xerus" -- amd64' { set isofile="/linux-iso/xubuntu-16.04.1-desktop-amd64.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/casper/vmlinuz.efi locale=fr_FR bootkbd=fr console-setup/layoutcode=fr iso-scan/filename=$isofile boot=casper persistent file=/cdrom/preseed/ubuntu.seed noprompt ro quiet splash noeject -- initrd (loop)/casper/initrd.lz } menuentry 'Debian 9.3.0 amd64 netinst test 3' { set isofile="/linux-iso/debian-9.3.0-amd64-netinst.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/install.amd/vmlinuz priority=low config fromiso=/dev/sdb3/$isofile initrd (loop)/install.amd/initrd.gz } This way, when I load ubuntu everything works great... But when I load debian it fails at the step "Configure CD-Rom", with the error: Incorrect CD-ROM detected. The CD-ROM drive contains a CD which cannot be used for installation. Please insert a suitable CD to continue with the installation." I also tried to mount /dev/sdb3 at /cdrom , but in that case I've an error on the next step: Load installer components from CD: There was a problem reading data from the CD-ROM. Please make sure it is in the drive. Failed to copy file from CD-ROM. Retry?" Do you know how to solve this problem? Thank you!
{ "source": [ "https://unix.stackexchange.com/questions/410281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169695/" ] }
410,471
I'm trying to watch for any new output of a log file. Another script (not under my control) is deleting the file then creating a new one with the same name. Using tail -f doesn't work because the file is being deleted.
If your tail supports it, use tail -F ; it works nicely with disappearing and re-appearing files. Just make sure you start tail from a directory which will stay in place. -F is short-hand for --follow=name --retry : tail will follow files by name rather than file descriptor, and will retry when files are inaccessible ( e.g. because they’ve been deleted). (A number of bugs relating to --follow=name with --retry were fixed in coreutils 8.26, so you may run into issues with earlier versions; e.g. retrying when the directory containing the tailed file is deleted appears to only work in all cases with version 8.26 or later.)
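For example (the log path here is hypothetical):
$ tail -F /var/log/myapp/current.log
tail: '/var/log/myapp/current.log' has become inaccessible: No such file or directory
tail: '/var/log/myapp/current.log' has appeared;  following new file
Those two stderr messages are what GNU tail typically prints when the file is deleted and then recreated; output then continues from the new file.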
{ "source": [ "https://unix.stackexchange.com/questions/410471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/265540/" ] }
410,474
I am writing a set of bash scripts. The first, wrapper calls two scripts: do_something and do_something_else . In pseudo code: $ wrapper do_something if exitcode of do_something = 0 then do_something_else else exit with error fi exit success This would generate a log file: $ cat /var/logs/wrapper.log | tail -3 Deleting file 299 Deleting file 300 wrapper ran successfully on 01/01/18 00:01:00 GMT I have two goals: create a log of the entire process. In other words, everything that do_something , do_something_else and wrapper send to stdout and stderr I want in one log file that shows the daily run of this script so I can grep for errors. I want to pre-compile do_something , do_something_else and wrapper so I can put them in /usr/bin and scp them to all my systems. This way I have one source in dev and quick running un-editable code in prod. Is this possible?
{ "source": [ "https://unix.stackexchange.com/questions/410474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231139/" ] }
410,477
I'm running Ubuntu Server 16.04 and my upgrade to linux-image-4.4.0-103-generic fails because my /boot directly is almost full (188MB out of 200MB). gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-4.4.0-103-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-4.4.0-103-generic.postinst line 1052. dpkg: error processing package linux-image-4.4.0-103-generic (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-image-extra-4.4.0-103-generic: linux-image-extra-4.4.0-103-generic depends on linux-image-4.4.0-103-generic; however: Package linux-image-4.4.0-103-generic is not configured yet. dpkg shows that I only have the 2 most recent kernels installed (4.4.0-96-generic and 4.4.0-97-generic). claude@shannon:~$ sudo dpkg --list 'linux-image*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trigpend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-==============================================-============================- ============================-=================================================== =============================================== un linux-image <none> <none> (no description available) un linux-image-4.2.0-27-generic <none> <none> (no description available) un linux-image-4.2.0-42-generic <none> <none> (no description available) iF linux-image-4.4.0-103-generic 4.4.0-103.126 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP un linux-image-4.4.0-59-generic <none> <none> (no description available) un linux-image-4.4.0-62-generic <none> <none> (no description available) un linux-image-4.4.0-63-generic <none> <none> (no description available) un linux-image-4.4.0-64-generic <none> <none> (no description available) un linux-image-4.4.0-72-generic <none> <none> (no description available) un linux-image-4.4.0-77-generic <none> <none> (no description available) rc linux-image-4.4.0-81-generic 4.4.0-81.104 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP rc linux-image-4.4.0-83-generic 4.4.0-83.106 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP ii linux-image-4.4.0-96-generic 4.4.0-96.119 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP ii linux-image-4.4.0-97-generic 4.4.0-97.120 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.2.0-27-generic 4.2.0-27.32~14.04.1 amd64 Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMP rc linux-image-extra-4.2.0-42-generic 4.2.0-42.49~14.04.1 amd64 Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMP iU linux-image-extra-4.4.0-103-generic 4.4.0-103.126 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-59-generic 4.4.0-59.80 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-62-generic 4.4.0-62.83 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-63-generic 4.4.0-63.84 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-64-generic 4.4.0-64.85 amd64 Linux kernel extra modules 
for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-72-generic 4.4.0-72.93 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-77-generic 4.4.0-77.98 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-81-generic 4.4.0-81.104 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP rc linux-image-extra-4.4.0-83-generic 4.4.0-83.106 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP ii linux-image-extra-4.4.0-96-generic 4.4.0-96.119 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP ii linux-image-extra-4.4.0-97-generic 4.4.0-97.120 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP iU linux-image-generic 4.4.0.103.108 amd64 Generic Linux kernel image I thought about uninstalling one of them to make room for the new one, but uname -r shows 4.4.0.96-generic as the current kernel, not 4.4.0-97-generic. I'm not sure why the more recent kernel isn't being used, and I don't want to uninstall either one if I don't have to. claude@shannon:~$ uname -r 4.4.0-96-generic sudo apt-get autoremove fails because /boot is too full gzip: stdout: No space left on device (and so on) How do I install the latest kernel and remove the old kernel packages?
{ "source": [ "https://unix.stackexchange.com/questions/410477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192353/" ] }
410,550
ls returns output in several columns, whereas ls|cat returns byte-identical output with ls -1 for directories I've tried. Still I see ls -1 piped in answers, like ls -1|wc -l . Is there ever a reason to prefer ls -1 ? Why does ...|cat change the output of ls ?
ls tests whether output is going to a terminal. If the output isn't going to a terminal, then -1 is the default. (This can be overridden by one of the -C , -m , or -x options.) Thus, when ls is used in a pipeline and you haven't overridden it with another option, ls will use -1 . You can rely on this because this behavior is required by POSIX. POSIX specification: POSIX requires -1 as the default whenever output is not going to a terminal. From the POSIX spec : The default format shall be to list one entry per line to standard output; the exceptions are to terminals or when one of the -C, -m, or -x options is specified. If the output is to a terminal, the format is implementation-defined. Those three options which override the default single-column format are: -C Write multi-text-column output with entries sorted down the columns, according to the collating sequence. The number of text columns and the column separator characters are unspecified, but should be adapted to the nature of the output device. This option disables long format output. -m Stream output format; list pathnames across the page, separated by a <comma> character followed by a <space> character. Use a <newline> character as the list terminator and after the separator sequence when there is not room on a line for the next list entry. This option disables long format output. -x The same as -C, except that the multi-text-column output is produced with entries sorted across, rather than down, the columns. This option disables long format output. GNU documentation: from the GNU ls manual : ‘-1’ ‘--format=single-column’ List one file per line. This is the default for ls when standard output is not a terminal . See also the -b and -q options to suppress direct output of newline characters within a file name. [Emphasis added] Examples: let's create three files: $ touch file{1..3} When output goes to a terminal, GNU ls chooses to use a multi-column format: $ ls file1 file2 file3 When output goes to a pipeline, the POSIX spec requires that single-column is the default: $ ls | cat file1 file2 file3 The three exceptions which override the default single-column behavior are -m for comma-separated, -C for columns sorted down, and -x for columns sorted across: $ ls -m | cat file1, file2, file3 $ ls -C | cat file1 file2 file3 $ ls -x | cat file1 file2 file3
{ "source": [ "https://unix.stackexchange.com/questions/410550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263916/" ] }
410,579
I am using Ubuntu 16.04 LTS . I have python3 installed. There are two versions installed, python 3.4.3 and python 3.6 . Whenever I use python3 command, it takes python 3.4.3 by default. I want to use python 3.6 with python3 . python3 --version shows version 3.4.3 I am installing ansible which supports version > 3.5 . So, whenever, I type ansible in the terminal, it throws error because of python 3.4 sudo update-alternatives --config python3 update-alternatives: error: no alternatives for python3
From the comment: sudo update-alternatives --config python will show you an error: update-alternatives: error: no alternatives for python3 You need to register your Python versions with update-alternatives ; then you will be able to set your default python version: sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.4 1 sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2 Then run: sudo update-alternatives --config python and set python3.6 as the default. Or use the following command to set python3.6 as the default directly: sudo update-alternatives --set python /usr/bin/python3.6
{ "source": [ "https://unix.stackexchange.com/questions/410579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/265641/" ] }
410,636
I know how to create an arithmetic for loop in bash . How can one do an equivalent loop in a POSIX shell script? As there are various ways of achieving the same goal, feel free to add your own answer and elaborate a little on how it works. An example of one such bash loop follows: #!/bin/bash for (( i=1; i != 10; i++ )) do echo "$i" done
I have found useful information in the Shellcheck.net wiki; I quote: Bash¹: for ((init; test; next)); do foo; done POSIX: : "$((init))" while [ "$((test))" -ne 0 ]; do foo; : "$((next))"; done though beware that i++ is not POSIX and so would have to be translated, for instance to i += 1 or i = i + 1 . : is a null command that always has a successful exit code. "$((expression))" is an arithmetic expansion that is being passed as an argument to : . You can assign to variables or do arithmetic/comparisons in the arithmetic expansion. So the above script in the question can be POSIX-wise re-written using those rules like this: #!/bin/sh : "$((i=1))" while [ "$((i != 10))" -ne 0 ] do echo "$i" : "$((i = i + 1))" done Though here, you can make it more legible with: #!/bin/sh i=1 while [ "$i" -ne 10 ] do echo "$i" i=$((i + 1)) done as in init , we're assigning a constant value, so we don't need to evaluate an arithmetic expression. The i != 10 in test can easily be translated to a [ expression, and for next , using a shell variable assignment as opposed to a variable assignment inside an arithmetic expression, lets us get rid of : and the need for quoting. Besides i++ -> i = i + 1 , there are more translations of ksh/bash-specific constructs that are not POSIX that you might have to do: i=1, j=2 . The , arithmetic operator is not really POSIX (and conflicts with the decimal separator in some locales with ksh93). You could replace it with another operator like + as in : "$(((i=1) + (j=2)))" but using i=1 j=2 would be a lot more legible. a[0]=1 : no arrays in POSIX shells i = 2**20 : no power operator in POSIX shell syntax. << is supported though so for powers of two, one can use i = 1 << 20 . For other powers, one can resort to bc : i=$(echo "3 ^ 20" | bc) i = RANDOM % 3 : not POSIX. The closest in the POSIX toolchest is i=$(awk 'BEGIN{srand(); print int(rand() * 3)}') . ¹ technically, that syntax is from the ksh93 shell and is also available in zsh in addition to bash
{ "source": [ "https://unix.stackexchange.com/questions/410636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
410,668
Installed Debian Stretch (9.3). Installed Vim and removed Nano. Vim is selected as the default editor. Every time I run crontab -e , I get these warnings: root@franklin:~# crontab -e no crontab for root - using an empty one /usr/bin/sensible-editor: 25: /usr/bin/sensible-editor: /bin/nano: not found /usr/bin/sensible-editor: 28: /usr/bin/sensible-editor: nano: not found /usr/bin/sensible-editor: 31: /usr/bin/sensible-editor: nano-tiny: not found No modification made I've tried reconfiguring the sensible-utils package, but it gives no input (indicating success with whatever it's doing), but the warnings still appear. root@franklin:~# dpkg-reconfigure sensible-utils root@franklin:~# Although these warnings don't prevent me from doing anything, I find them quite annoying. How can I get rid of them?
I found my own answer and so I'm posting it here, in case it helps someone else. In the root user's home directory, /root , there was a file called .selected_editor , which still retained this content: # Generated by /usr/bin/select-editor SELECTED_EDITOR="/bin/nano" The content suggests that the command select-editor is used to select a new editor, but at any rate, I removed the file (being in a bad mood and feeling the urge to obliterate something) and was then given the option of selecting the editor again when running crontab -e , at which point I selected vim.basic , and all was fine after that. The new content of the file reflects that selection now: # Generated by /usr/bin/select-editor SELECTED_EDITOR="/usr/bin/vim.basic"
{ "source": [ "https://unix.stackexchange.com/questions/410668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5838/" ] }
411,159
I know there are two "levels" of programs: User space and kernel space. My question is: I want to see only kernel programs,or better: programs on kernel space. Is this approach correct? ps -ef|grep "\[" root 1 0 0 20:23 ? 00:00:00 init [4] root 2 0 0 20:23 ? 00:00:00 [kthreadd] root 3 2 0 20:23 ? 00:00:00 [ksoftirqd/0] root 5 2 0 20:23 ? 00:00:00 [kworker/0:0H] root 7 2 0 20:23 ? 00:00:06 [rcu_sched] root 8 2 0 20:23 ? 00:00:00 [rcu_bh] root 9 2 0 20:23 ? 00:00:00 [migration/0] root 10 2 0 20:23 ? 00:00:00 [migration/1] root 11 2 0 20:23 ? 00:00:00 [ksoftirqd/1] root 13 2 0 20:23 ? 00:00:00 [kworker/1:0H] root 14 2 0 20:23 ? 00:00:00 [migration/2] ....
Kernel processes (or "kernel threads") are children of PID 2 ( kthreadd ), so this might be more accurate: ps --ppid 2 -p 2 -o uname,pid,ppid,cmd,cls Add --deselect to invert the selection and see only user-space processes. (This question was pretty much an exact inverse of this one .) In 2.4.* and older kernels, this PID 2 convention did not exist yet.
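Typical output looks something like this (PIDs, column alignment and the exact set of kernel threads vary by system):
$ ps --ppid 2 -p 2 -o uname,pid,ppid,cmd,cls
USER       PID  PPID CMD            CLS
root         2     0 [kthreadd]      TS
root         3     2 [ksoftirqd/0]   TS
root         5     2 [kworker/0:0H]  TS
...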
{ "source": [ "https://unix.stackexchange.com/questions/411159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
411,164
I know that locate has to have a database generated and is much faster, and that find does not need a database generated and is not as fast. So in what situations are find and locate each more efficient/effective, or give a better end result?
{ "source": [ "https://unix.stackexchange.com/questions/411164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241691/" ] }
411,304
Suppose I have a non-associative array that has been defined like my_array=(foo bar baz) How can I check whether the array contains a given string? I’d prefer a solution that can be used within the conditional of an if block (e.g. if contains $my_array "something"; then ... ).
array=(foo bar baz foo) pattern=f* value=foo if (($array[(I)$pattern])); then echo array contains at least one value that matches the pattern fi if (($array[(Ie)$value])); then echo value is amongst the values of the array fi $array[(I)foo] returns the index of the last occurrence of foo in $array and 0 if not found. The e flag is for it to be an exact match instead of a pattern match. To check that $value is among a literal list of values, you could pass that list of values to an anonymous function and look for the $value in $@ in the body of the function: if ()(( $@[(Ie)$value] )) foo bar baz and some more; then echo "It's one of those" fi To know how many times the value is found in the array, you could use the ${A:*B} operator (elements of array A that are also in array B ): array=(foo bar baz foo) value=foo search=("$value") (){print -r $# occurrence${2+s} of $value in array} "${(@)array:*search}" Or using pattern matching on the array elements: (){print -r $# occurrence${2+s} of $value in array} "${(M@)array:#$value}"
{ "source": [ "https://unix.stackexchange.com/questions/411304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88560/" ] }
411,664
I recently saw a video where someone executed ^foo^bar in Bash. What is that combination for?
Bash calls this a quick substitution . It's in the "History Expansion" section of the Bash man page, under the "Event Designators" section ( online manual ): ^ string1 ^ string2 ^ Quick substitution. Repeat the previous command, replacing string1 with string2. Equivalent to !!:s/ string1 / string2 / So ^foo^bar would run the previously executed command, but replace the first occurence of foo with bar . Note that for s/old/new/ , the bash man page says "The final delimiter is optional if it is the last character of the event line." This is why you can use ^foo^bar and aren't required to use ^foo^bar^ . (See this answer for a bunch of other designators, although I didn't mention this one there).
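For example, fixing a typo in the previous command (bash echoes the expanded line before running it):
$ echo hello wrld
hello wrld
$ ^wrld^world
echo hello world
hello world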
{ "source": [ "https://unix.stackexchange.com/questions/411664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
411,691
I want to walk a file and compare two lines to see if they begin with the same 12 characters. If they do, I want to delete the first line and then compare the remaining line with the next line in the file until all lines have been compared. The file contains the list of files in the directory, already sorted. There can be two or more files (always in sequence) that start with the same 12 characters. I only want the last one. I saw a similar solution, in an early post: sed '$!N; /\(.*\)\n\1:FOO/D; P;D' file but I could not modify it to work for me.
{ "source": [ "https://unix.stackexchange.com/questions/411691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266472/" ] }
411,811
As far as I know, every operating system has a different way to mark the end of line (EOL) character. Commercial operating systems use carriage return for EOL (carriage return and line feed on Windows, carriage return only on Mac). Linux, on the other hand, just uses line feed for EOL. Why doesn't Linux use carriage return for EOL (and solely line feed instead)?
Windows uses CR LF because it inherited it from MS-DOS. MS-DOS uses CR LF because it was inspired by CP/M which was already using CR LF . CP/M and many operating systems from the eighties and earlier used CR LF because it was the way to end a line printed on a teletype (return to the beginning of the line and jump to the next line, just like regular typewriters). This simplified printing a file because there was less or no pre-processing required. There were also mechanical requirements that prevented a single character from being usable. Some time might be required to allow the carriage to return and the platen to rotate. Gnu/Linux uses LF because it is a Unix clone . 1 Unix used a single character, LF , from the beginning to save space and standardize to a canonical end-of-line; using two characters was inefficient and ambiguous. This choice was inherited from Multics which used it as early as 1964. Memory, storage, CPU power and bandwidth were very scarce so saving one byte per line was worth doing. When a file was printed, the driver was converting the line feed (new-line) to the control characters required by the target device. LF was preferred to CR because the latter still had a specific usage. By repositioning the printed character to the beginning of the same line, it allowed overstriking already typed characters. Apple initially decided to also use a single character but for some reason picked the other one: CR . When it switched to a BSD interface, it moved to LF . These choices have nothing to do with whether an OS is commercial or not. 1 This is the answer to your question.
{ "source": [ "https://unix.stackexchange.com/questions/411811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255231/" ] }
411,822
So I wrote a systemd service file for Manjaro Linux to run two shell commands to set some kernel parameters at runtime to enable custom power saving actions. BTW: This was a tip by a german computer magazine. Originally I should place the shell commands in the /etc/rc.local file but I want to make it with a systemd service. Because rc.local is considered as deprecated and I want to learn something new. Below you see my service file saved as /etc/systemd/system/power-savings.service . Because there are two ExecStart directives I have chosen Type=oneshot . [Unit] Description=Enable custom power saving actions provided by c't magazine # Quelle(n): c't 25/2016, S. 77 # c't 26/2016, S. 12 [Service] Type=oneshot # SATA Link Power Management aktivieren ExecStart=/usr/bin/sh -c 'for I in /sys/class/scsi_host/host?/link_power_management_policy; do echo min_power > $I; done' # Energieverwaltung für den Audiocodec aktivieren ExecStart=/usr/bin/sh -c 'echo 1 > /sys/module/snd_hda_intel/parameters/power_save' [Install] WantedBy=multi-user.target I verified the service file with: $ sudo systemd-analyze verify /etc/systemd/system/power-savings.service Then I reloaded the daemon with: $ sudo systemctl daemon-reload I enabled it with: $ sudo systemctl enable power-savings.service Then I have run the service with: $ sudo systemctl start power-savings.service And it worked! The kernel parameters have been set. But then, after rebooting my system the service seems not to have any effect. Although, the service status said success... $ systemctl status power-savings.service Process: 412 ExecStart=/usr/bin/bash -c echo 1 > /sys/module/snd_hda_intel/parameters/power_save (code=exited, status=0/SUCCESS) Process: 404 ExecStart=/usr/bin/bash -c for I in /sys/class/scsi_host/host?/link_power_management_policy; do echo min_power > $I; done (code=exited, status=0/SUCCESS) Main PID: 412 (code=exited, status=0/SUCCESS) But the kernel parameter were not set. So my service works during a user session, only. Unfortunately not during system boot up as it was intended to work. Is there something I might have missed? Do I need some of those After or Require directives? How am I able to debug what really happens with the ExecStart directives?
{ "source": [ "https://unix.stackexchange.com/questions/411822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208465/" ] }
411,825
I found this cool trick from Postgres.app website echo /Users/user1/latest/bin | sudo tee /etc/paths.d/postgresapp I want to know, how can I make this work for any logged in user. I want something like echo {whatever the home directory of the logged in user at runtime}/latest/bin | sudo tee /etc/paths.d/postgresapp My first thought was to try the $HOME variable, but my home variable points to my home directory, whereas I want this to work when any user logs in to Mac and uses terminal.app.
{ "source": [ "https://unix.stackexchange.com/questions/411825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180441/" ] }
412,002
When inserting a USB stick or device to computer, there is always the risk that the device is malicious, will act as an HID and potentially do some damage on the computer. How can I prevent this problem? Is disabling HID on specific USB port sufficient? How do I do that?
Install USBGuard — it provides a framework for authorising USB devices before activating them. With the help of a tool such as USBGuard Notifier or the USBGuard Qt applet , it can pop up a notification when you connect a new device, asking you what to do; and it can store permanent rules for known devices so you don’t have to confirm over and over. Rules are defined using a comprehensive language with support for any USB attribute (including serial number, insertion port...), so you can write rules that are as specific as you want — whitelist this keyboard if it has this identifier, this serial number, is connected to this port, etc.
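As a rough sketch of the workflow (the device number is hypothetical and will differ on your system):
# whitelist everything currently attached, then enforce the policy
$ sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'
$ sudo systemctl enable --now usbguard
# when a new device shows up blocked, inspect it and decide
$ sudo usbguard list-devices --blocked
$ sudo usbguard allow-device 7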
{ "source": [ "https://unix.stackexchange.com/questions/412002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141945/" ] }
412,065
In trying to access a cluster in my lab by ssh and it work. but then I'm not able to do anything : user@users:~> nautilus X11 connection rejected because of wrong authentication. Could not parse arguments: Cannot open display or user@users:~> gedit X11 connection rejected because of wrong authentication. (gedit:151222): Gtk-WARNING **: cannot open display: localhost:11.0 It worked until today... and I don't know how to check if something had change. I don't have the root password for this machine, is there anything i can do ? I have read lot of thing about this error such as this but nothing solved... EDIT : The local OS is Ubuntu 16 and the server is OpenSuse. I'm connecting this way : ssh -XY -p22 [email protected] EDIT 2 : user@users:~> env MODULE_VERSION_STACK=3.1.6 LESSKEY=/etc/lesskey.bin NNTPSERVER=news INFODIR=/usr/local/info:/usr/share/info:/usr/info MANPATH=/usr/local/man:/usr/share/man HOSTNAME=users XKEYSYMDB=/usr/share/X11/XKeysymDB HOST=users TERM=xterm-256color SHELL=/bin/bash PROFILEREAD=true HISTSIZE=1000 SSH_CLIENT=10.44.0.1 49729 22 MORE=-sl SSH_TTY=/dev/pts/2 JRE_HOME=/usr/lib64/jvm/jre USER=user LS_COLORS=no=00:fi=00:di=01;34:ln=00;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=41;33;01:ex=00;32:*.cmd=00;32:*.exe=01;32:*.com=01;32:*.bat=01;32:*.btm=01;32:*.dll=01;32:*.tar=00;31:*.tbz=00;31:*.tgz=00;31:*.rpm=00;31:*.deb=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.lzma=00;31:*.zip=00;31:*.zoo=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.tb2=00;31:*.tz2=00;31:*.tbz2=00;31:*.avi=01;35:*.bmp=01;35:*.fli=01;35:*.gif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mng=01;35:*.mov=01;35:*.mpg=01;35:*.pcx=01;35:*.pbm=01;35:*.pgm=01;35:*.png=01;35:*.ppm=01;35:*.tga=01;35:*.tif=01;35:*.xbm=01;35:*.xpm=01;35:*.dl=01;35:*.gl=01;35:*.wmv=01;35:*.aiff=00;32:*.au=00;32:*.mid=00;32:*.mp3=00;32:*.ogg=00;32:*.voc=00;32:*.wav=00;32: LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib:/usr/local/cuda-5.5/lib64: XNLSPATH=/usr/share/X11/nls ENV=/etc/bash.bashrc HOSTTYPE=x86_64 FROM_HEADER= MSM_PRODUCT=MSM PAGER=less CSHEDIT=emacs XDG_CONFIG_DIRS=/etc/xdg MINICOM=-c on MODULE_VERSION=3.1.6 MAIL=/var/mail/user PATH=/usr/local/cuda-5.5/bin:/home/user/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin CPU=x86_64 JAVA_BINDIR=/usr/lib64/jvm/jre/bin INPUTRC=/home/user/.inputrc PWD=/home/user JAVA_HOME=/usr/lib64/jvm/jre LANG=en_US.UTF-8 PYTHONSTARTUP=/etc/pythonstart MODULEPATH=/usr/share/modules:/usr/share/modules/modulefiles LOADEDMODULES= QT_SYSTEM_DIR=/usr/share/desktop-data SHLVL=1 HOME=/home/user LESS_ADVANCED_PREPROCESSOR=no OSTYPE=linux LS_OPTIONS=-N --color=tty -T 0 XCURSOR_THEME=DMZ MSM_HOME=/usr/local/MegaRAID Storage Manager WINDOWMANAGER=/usr/bin/gnome G_FILENAME_ENCODING=@locale,UTF-8,ISO-8859-15,CP1252 LESS=-M -I MACHTYPE=x86_64-suse-linux LOGNAME=user XDG_DATA_DIRS=/usr/share:/etc/opt/kde3/share:/opt/kde3/share SSH_CONNECTION=172.17.10.15 22 MODULESHOME=/usr/share/modules LESSOPEN=lessopen.sh %s INFOPATH=/usr/local/info:/usr/share/info:/usr/info DISPLAY=localhost:12.0 XAUTHLOCALHOSTNAME=users LESSCLOSE=lessclose.sh %s %s G_BROKEN_FILENAMES=1 JAVA_ROOT=/usr/lib64/jvm/jre COLORTERM=1 _=/usr/bin/env
Xauthority Mini How To: On GNU/Linux systems running an X11 display server, the file ~/.Xauthority stores authentication cookies or cryptographic keys used to authorize connection to the display. In most cases, the authentication mechanism is a symmetric cookie which is referred to as a Magic Cookie . The same cookie is used by the server as well as the client. Each X11 authentication cookie is under the control of the individual system authenticated user. Since the authentication cookie is stored as a plain text security token, the permissions on the ~/.Xauthority file should be rw for the owner only, 600 in octal format. However, the permissions on the authorization file are not enforced. A user can list, export, create, or delete authentication cookies using the xauth program. The following command will create an authorization cookie for DISPLAY 32 . xauth add localhost:32 - `mcookie` Manual creation and manipulation of cookies is usually not needed when using X11 forwarding with ssh , because ssh starts an X11 proxy on the remote machine and automatically generates authorization cookies on the local display. However, for certain configurations the authorization cookie may need to be manually created and copied to the local machine. This can be done in an ssh session, and then scp can be used to copy the cookie. ssh into the remote machine: ssh -XY user@remote Check if an authorization cookie is present for the current X11 display: echo $DISPLAY xauth list If there's no environment variable named $DISPLAY then the X11 proxy did not start properly. It's important to note that DISPLAY 0 is typically for locally logged in users and is only running if an xserver has been locally started via xinit . There is no requirement for a locally started X11 server in order for X11 forwarding to function through ssh . If there's a $DISPLAY environment variable set but no corresponding authorization cookie for that display number, you can create one: xauth add $DISPLAY - `mcookie` And verify that there is now a cookie: xauth list You can copy that cookie and merge it into the local machine: user@remote> xauth nextract ~/xcookie $DISPLAY user@remote> exit user@local> scp user@remote:~/xcookie ~/xcookie user@local> xauth nmerge ~/xcookie And then verify that the cookie has been installed: user@local> xauth list Try out your X11 forwarding ssh connection. Notes on ~/.Xauthority: ~/.Xauthority is a binary file which contains all the authorization information for each display the user may access. Each record is delimited by the two bytes 0x0100 . Each field is preceded by a hexadecimal count of the field's number of bytes. All text is encoded in hexadecimal ASCII.
The following table is the basic structure of the most common configuration of a MIT MAGIC COOKIE authorization: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0100 0004 61616161 0002 3435 0012 4d49542d4d414749432d434f4f4b49452d31 0010 c0bdd1c539be89a2090f1bbb6b414c2c ----------------- ----------- ------------------ ------------ ---------------------- ------------- -------------------------------------- ------------ --------------------------------------- start-of-record 0xNumBytes 0xASCII Hostname 0xNumBytes 0xASCII Display Num 0xNumBytes 0xASCII Auth Type 0xNumBytes 0xkey ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The top line is retrievable from the ~/.Xauthority file via the xauth nlist command. Of course, your authorization file will have different information from my example. If the Security Extensions are in use with the X11 server, there are several configuration options for each authorization line including time limited authorization per cookie.
{ "source": [ "https://unix.stackexchange.com/questions/412065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262164/" ] }
412,234
I have the following file: ---------- 1 Steve Steve 341 2017-12-21 01:51 myFile.txt I switched the user to root in the terminal, and I have noticed the following behaviors: I can read this file and write to it. I can't execute this file. If I set the x bit in the user permissions ( ---x------ ) or the group permissions ( ------x--- ) or the others permissions ( ---------x ) of the file, then I would be able to execute this file. Can anyone explain to me or point me to a tutorial that explains all of the rules that apply when the root user is dealing with files and directories?
Privileged access to files and directories is actually determined by capabilities, not just by being root or not. In practice, root usually has all possible capabilities, but there are situations where all/many of them could be dropped, or some given to other users (their processes). In brief, you already described how the access control checks work for a privileged process. Here's how the different capabilities actually affect it: The main capability here is CAP_DAC_OVERRIDE ; a process that has it can "bypass file read, write, and execute permission checks". That includes reading and writing to any files, as well as reading, writing and accessing directories. It doesn't actually apply to executing files that are not marked as executable. The comment in generic_permission ( fs/namei.c ), before the access checks for files, says that Read/write DACs are always overridable. Executable DACs are overridable when there is at least one exec bit set. And the code checks that there's at least one x bit set if you're trying to execute the file. I suspect that's only a convenience feature, to prevent accidentally running random data files and getting errors or odd results. Anyway, if you can override permissions, you could just make an executable copy and run that. (Though it might make a difference in theory for setuid files if a process was capable of overriding file permissions ( CAP_DAC_OVERRIDE ), but didn't have other related capabilities ( CAP_FSETID / CAP_FOWNER / CAP_SETUID ). But having CAP_DAC_OVERRIDE allows editing /etc/shadow and stuff like that, so it's approximately equal to just having full root access anyway.) There's also the CAP_DAC_READ_SEARCH capability that allows reading any files and accessing any directories, but not executing or writing to them; and CAP_FOWNER that allows a process to do stuff that's usually reserved only for the file owner, like changing the permission bits and file group. Overriding the sticky bit on directories is mentioned only under CAP_FOWNER , so it seems that CAP_DAC_OVERRIDE would not be enough to ignore that. (It would give you write permission, but usually in sticky directories you have that anyway, and +t limits it.) (I think special devices count as "files" here. At least generic_permission() only has a type check for directories, but I didn't check outside of that.) Of course, there are still situations where even capabilities will not help you modify files: some files in /proc and /sys , since they're not really actual files; SELinux and other security modules that might limit root; chattr immutable +i and append-only +a flags on ext2/ext3/ext4, both of which stop even root, and also prevent file renames etc.; network filesystems, where the server can do its own access control, e.g. root_squash in NFS maps root to nobody; FUSE, which I assume could do anything; read-only mounts; read-only devices.
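For example, the immutable flag really does stop even a fully privileged root shell (a quick demo on ext4; the file name is arbitrary):
# touch /root/locked && chattr +i /root/locked
# echo data >> /root/locked
-bash: /root/locked: Operation not permitted
# rm /root/locked
rm: cannot remove '/root/locked': Operation not permitted
# chattr -i /root/locked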
{ "source": [ "https://unix.stackexchange.com/questions/412234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228808/" ] }
412,259
Currently, I'm running these two commands to create a quick backup of the directory. Is there a way to combine the two commands into one, so that I am copying and renaming the new directory in one command? #cp -R /tf/Custom_App /tf/Custom_App_backups/ #mv /tf/Custom_App_backups/Custom_App /tf/Custom_App_backups/Custom_App_2017-12-21
You should be able to do just cp -R /tf/Custom_App /tf/Custom_App_backups/Custom_App_2017-12-21 However , if the target directory already exists, this would append the final part of the source path to the destination path, creating /tf/Custom_App_backups/Custom_App_2017-12-21/Custom_App , and then copy the rest of the tree within that. To prevent this, use /tf/Custom_App/. as the source. Of course, in that case you might want to rm -r /tf/Custom_App_backups/Custom_App_2017-12-21 first, if you don't want older files lying around there after the copy. The difference between /some/dir and /some/dir/. was discussed a while back in cp behaves weirdly when . (dot) or .. (dot dot) are the source directory
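To avoid typing the date by hand, the two original steps can also be collapsed into one command (a sketch using the paths from the question): $ cp -R /tf/Custom_App "/tf/Custom_App_backups/Custom_App_$(date +%Y-%m-%d)" This only behaves as intended when the date-stamped target does not already exist; see the caveat above.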
{ "source": [ "https://unix.stackexchange.com/questions/412259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106228/" ] }
412,446
I want to disable ping response all the time on my Ubuntu operating system, the following commands work but only until the system reboots: Ping off: echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all Ping on: echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all How would I be able to leave echo off even after having rebooted my laptop?
How would I be able to leave echo off even when I am rebooting my laptop? You can use one of the following three ways (as root): 1. Edit /etc/sysctl.conf: add the following line to your /etc/sysctl.conf : net.ipv4.icmp_echo_ignore_all=1 Then: sysctl -p 2. Using iptables: iptables -I INPUT -p icmp --icmp-type echo-request -j DROP 3. With cron: run crontab -e as root, then add the following line: @reboot echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all Start and enable the service: systemctl start cron.service systemctl enable cron.service
{ "source": [ "https://unix.stackexchange.com/questions/412446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267110/" ] }
412,716
I have read that bash can do integer arithmetic without using an external command, for example: echo "$((3 * (2 + 1)))" Can bash also do floating-point arithmetic without using an external command?
No. Bash cannot perform floating point arithmetic natively. This is not what you're looking for but may help someone else: Alternatives bc bc allows floating point arithmetic, and can even convert whole numbers to floating point by setting the scale value. (Note the scale value only affects division within bc , but a workaround for this is ending any formula with division by 1.) $ echo '10.1 / 1.1' | bc -l 9.18181818181818181818 $ echo '55 * 0.111111' | bc -l 6.111105 $ echo 'scale=4; 1 + 1' | bc -l 2 $ echo 'scale=4; 1 + 1 / 1' | bc -l 2.0000 awk awk is a programming language in itself, but is easily leveraged to perform floating point arithmetic in your bash scripts, but that's not all it can do! $ echo | awk '{print 10.1 / 1.1}' 9.18182 $ awk 'BEGIN{print 55 * 0.111111}' 6.111105 $ echo | awk '{print log(100)}' 4.60517 $ awk 'BEGIN{print sqrt(100)}' 10 I used both echo piped to awk and a BEGIN statement to show two ways of doing this. Anything within an awk BEGIN statement will be executed before input is read; however, without input or a BEGIN statement awk wouldn't execute, so you need to feed it input. Perl Another programming language that can be leveraged within a bash script. $ perl -l -e 'print 10.1 / 1.1' 9.18181818181818 $ somevar="$(perl -e 'print 55 * 0.111111')"; echo "$somevar" 6.111105 Python Another programming language that can be leveraged within a bash script (Python 2 syntax shown; for Python 3, use e.g. python3 -c 'print(10.1 / 1.1)' ). $ python -c 'print 10.1 / 1.1' 9.18181818182 $ somevar="$(python -c 'print 55 * 0.111111')"; echo "$somevar" 6.111105 Ruby Another programming language that can be leveraged within a bash script. $ ruby -l -e 'print 10.1 / 1.1' 9.18181818181818 $ somevar="$(ruby -e 'print 55 * 0.111111')"; echo "$somevar" 6.111105
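If you also need to control the number of decimals shown, the shell's own printf pairs well with any of the above; a small sketch using bc:

printf '%.2f\n' "$(echo '10/3' | bc -l)"    # 3.33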
{ "source": [ "https://unix.stackexchange.com/questions/412716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267288/" ] }
413,012
When I run date +"%Y%m%d%H%M%S" I receive 20171225203309 here in CET time zone. Can I use date to obtain the current time in the same format, but for timezone GMT?
You can use date -u ( universal time ) which is equivalent to GMT. Quoting date manual: ‘-u’ ‘--utc’ ‘--universal’ Use Universal Time by operating as if the ‘TZ’ environment variable were set to the string ‘UTC0’. UTC stands for Coordinated Universal Time, established in 1960. Universal Time is often called “Greenwich Mean Time” (GMT) for historical reasons. Typically, systems ignore leap seconds and thus implement an approximation to UTC rather than true UTC.
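Applied to the format from the question, either of the following should work (the TZ variant behaves the same way):

date -u +"%Y%m%d%H%M%S"
TZ=UTC0 date +"%Y%m%d%H%M%S"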
{ "source": [ "https://unix.stackexchange.com/questions/413012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105620/" ] }
413,204
Say I log into a shell on a unix system and begin tapping away commands. I initially begin in my user's home directory ~ . I might from there cd down to the directory Documents . The command to change working directory here is intuitively very simple to understand: the parent node has a list of child nodes that it can access, and presumably it uses an (optimised) variant of a search to locate the existence of a child node with the name the user entered, and the working directory is then "altered" to match this — correct me if I'm wrong there. It may even be simpler than that: the shell simply "naively" attempts to access the directory exactly as per the user's wishes, and when the file system returns some type of error, the shell displays a response accordingly. What I am interested in, however, is how the same process works when I navigate up a directory, i.e. to a parent, or a parent's parent. Given my unknown, presumably "blind" location of Documents , one of possibly many directories in the entire file system tree with that name, how does Unix determine where I should be placed next? Does it make a reference to pwd and examine that? If yes, how does pwd track the current navigational state?
The other answers are oversimplifications, each presenting only parts of the story, and are wrong on a couple of points. There are two ways in which the working directory is tracked: For every process, in the kernel-space data structure that represents that process, the kernel stores two vnode references to the vnodes of the working directory and the root directory for that process. The former reference is set by the chdir() and fchdir() system calls, the latter by chroot() . One can see them indirectly in /proc on Linux operating systems or via the fstat command on FreeBSD and the like: % fstat -p $$|head -n 5 USER CMD PID FD MOUNT INUM MODE SZ|DV R/W JdeBP zsh 92648 text / 24958 -r-xr-xr-x 702360 r JdeBP zsh 92648 ctty /dev 148 crw--w---- pts/4 rw JdeBP zsh 92648 wd /usr/home/JdeBP 4 drwxr-xr-x 124 r JdeBP zsh 92648 root / 4 drwxr-xr-x 35 r % When pathname resolution operates, it begins at one or the other of those referenced vnodes, according to whether the path is relative or absolute. (There is a family of …at() system calls that allow pathname resolution to begin at the vnode referenced by an open (directory) file descriptor as a third option.) In microkernel Unices the data structure is in application space, but the principle of holding open references to these directories remains the same. Internally, within shells such as the Z, Korn, Bourne Again, C, and Almquist shell, the shell additionally keeps track of the working directory using string manipulation of an internal string variable. It does this whenever it has cause to call chdir() . If one changes to a relative pathname, it manipulates the string to append that name. If one changes to an absolute pathname, it replaces the string with the new name. In both cases, it adjusts the string to remove . and .. components and to chase down symbolic links replacing them with their linked-to names. ( Here is the Z shell's code for that , for example.) The name in the internal string variable is tracked by a shell variable named PWD (or cwd in the C shells). This is conventionally exported as an environment variable (named PWD ) to programs spawned by the shell. These two methods of tracking things are revealed by the -P and -L options to the cd and pwd shell built-in commands, and by the differences between the shells' built-in pwd commands and both the /bin/pwd command and the built-in pwd commands of things like (amongst others) VIM and NeoVIM. % mkdir a ; ln -s a b % (cd b; pwd; /bin/pwd; printenv PWD) /usr/home/JdeBP/b /usr/home/JdeBP/a /usr/home/JdeBP/b % (cd b; pwd -P; /bin/pwd -P) /usr/home/JdeBP/a /usr/home/JdeBP/a % (cd b; pwd -L; /bin/pwd -L) /usr/home/JdeBP/b /usr/home/JdeBP/b % (cd -P b; pwd; /bin/pwd; printenv PWD) /usr/home/JdeBP/a /usr/home/JdeBP/a /usr/home/JdeBP/a % (cd b; PWD=/hello/there /bin/pwd -L) /usr/home/JdeBP/a % As you can see: obtaining the "logical" working directory is a matter of looking at the PWD shell variable (or environment variable if one is not the shell program); whereas obtaining the "physical" working directory is a matter of calling the getcwd() library function. The operation of the /bin/pwd program when the -L option is used is somewhat subtle. It cannot trust the value of the PWD environment variable that it has inherited. After all, it need not have been invoked by a shell and intervening programs may not have implemented the shell's mechanism of making the PWD environment variable always track the name of the working directory. Or someone may do what I did just there. 
So what it does is (as the POSIX standard says) check that the name given in PWD yields the same thing as the name . , as can be seen with a system call trace: % ln -s a c % (cd b; truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd') stat("/usr/home/JdeBP/b",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0) stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0) /usr/home/JdeBP/b % (cd b; PWD=/usr/local/etc truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd') stat("/usr/local/etc",{ mode=drwxr-xr-x ,inode=14835,size=158,blksize=10240 }) = 0 (0x0) stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0) __getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0) /usr/home/JdeBP/a % (cd b; PWD=/hello/there truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd') stat("/hello/there",0x7fffffffe730) ERR#2 'No such file or directory' __getcwd("/usr/home/JdeBP/a",1024) = 0 (0x0) /usr/home/JdeBP/a % (cd b; PWD=/usr/home/JdeBP/c truss /bin/pwd -L 3>&1 1>&2 2>&3 | grep -E '^stat|__getcwd') stat("/usr/home/JdeBP/c",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0) stat(".",{ mode=drwxr-xr-x ,inode=120932,size=2,blksize=131072 }) = 0 (0x0) /usr/home/JdeBP/c % As you can see: it only calls getcwd() if it detects a mismatch; and it can be fooled by setting PWD to a string that does indeed name the same directory, but by a different route. The getcwd() library function is a subject in its own right. But to précis: Originally it was purely a library function, that built up a pathname from the working directory back up to the root by repeatedly trying to look up the working directory in the .. directory. It stopped when it reached a loop where .. was the same as its working directory or when there was an error trying to open the next .. up. This would be a lot of system calls under the covers. Nowadays the situation is slightly more complex. On FreeBSD, for example (this being true for other operating systems as well), it is a true system call, as you can see in the system call trace given earlier. All of the traversal from the working directory vnode up to the root is done in a single system call, which takes advantage of things like kernel mode code's direct access to the directory entry cache to do the pathname component lookups much more efficiently. However, note that even on FreeBSD and those other operating systems the kernel does not keep track of the working directory with a string. Navigating to .. is again a subject in its own right. Another précis: Although directories conventionally (albeit, as already alluded to, this is not required) contain an actual .. in the directory data structure on disc, the kernel tracks the parent directory of each directory vnode itself and can thus navigate to the .. vnode of any working directory. This is somewhat complicated by the mountpoint and changed root mechanisms, which are beyond the scope of this answer. Aside Windows NT in fact does a similar thing. There is a single working directory per process, set by the SetCurrentDirectory() API call and tracked per process by the kernel via an (internal) open file handle to that directory; and there is a set of environment variables that Win32 programs (not just the command interpreters, but all Win32 programs) use to track the names of multiple working directories (one per drive), appending to or overwriting them whenever they change directory. 
Conventionally, unlike the case with Unix and Linux operating systems, Win32 programs do not display these environment variables to users. One can sometimes see them in Unix-like subsystems running on Windows NT, though, as well as by using the command interpreters' SET commands in a particular way. Further reading " pwd " . The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016. "Pathname Resolution" . The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group. 2016. https://askubuntu.com/a/636001/43344 How are files opened in unix? what is inode for, in FreeBSD or Solaris Strange environment variable !::=::\ in Cygwin Why does CDPATH not work as documented in the manuals? How can I set zsh to use physical paths? Going into a directory linked by a link
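To make the classic getcwd() algorithm concrete, here is a rough shell re-implementation of the walk-up-via- .. idea. It is purely illustrative: it breaks on filenames containing whitespace and skips the device checks a real implementation does at mount points:

buildcwd() {
  dir=. path=
  while true; do
    ino=$(ls -di "$dir" | awk '{print $1}')
    pino=$(ls -di "$dir/.." | awk '{print $1}')
    [ "$ino" = "$pino" ] && break              # . and .. only match at the root
    # find our own name in the parent directory by inode number
    name=$(ls -iA "$dir/.." | awk -v i="$ino" '$1 == i {print $2; exit}')
    path=/$name$path
    dir=$dir/..
  done
  printf '%s\n' "${path:-/}"
}
buildcwd    # prints the physical working directory, like pwd -P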
{ "source": [ "https://unix.stackexchange.com/questions/413204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267637/" ] }
413,449
Simple question. Does the bash shell have any support for using pointers when writing a shell script? I am familiar with expansion notation, ${var[@]} when iterating over the array $var , but it is not clear this is utilizing pointers to iterate over the array indices. Does bash provide access to memory addresses like other languages? If bash does not support using pointers, what other shells do?
A pointer (to a location of memory ) is not really a useful concept in anything higher-level than C, be it something like Python or the shell. References to objects are of course useful in high-level languages, perhaps even necessary for building complex data structures. But in most cases thinking in terms of memory addresses is too low level to be very useful. In Bash (and other shells), you can get the values of array elements with the ${array[index]} notation, assign them with array[index]=... and get the number of elements in the array with ${#array[@]} . The expression inside the brackets is an arithmetic expression. As a made-up example, we could add a constant prefix to all array members: for ((i=0 ; i < ${#array[@]} ; i++ )) ; do array[i]="foo-${array[i]}" done (If we only cared about the values, and not the indexes, just for x in "${array[@]}" ; do... would be fine.) With associative or sparse arrays , a numerical loop doesn't make much sense, but instead we'd need to fetch the array keys/indexes with ${!array[@]} . E.g. declare -A assoc=([foo]="123" [bar]="456") for i in "${!assoc[@]}" ; do echo "${assoc[$i]}" done In addition to that, Bash has two ways to point indirectly to another variable: indirect expansion , using the ${!var} syntax , which uses the value of the variable whose name is in var , and namerefs , which need to be created with the declare builtin (or the ksh -compatible synonym, typeset ). declare -n ref=var makes ref a reference to the variable var . Namerefs also support indexing, in that if we have arr=(a b c); declare -n ref=arr; then ${ref[1]} will expand to b . Using ${!p[1]} would instead take p as an array, and refer to the variable named by its second element. In Bash, namerefs are literally that, references by name , and using a nameref from inside a function will use the local value of the named variable. This will print local value of var . #!/bin/bash fun() { local var="local value of var" echo "$ref"; } var="global var" declare -n ref=var fun BashFAQ has a longer article on indirection , too.
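A minimal contrast of the two indirection mechanisms (plain Bash; namerefs need Bash 4.3 or later):

var="hello"
name=var
echo "${!name}"    # hello  - indirect expansion via the value of name

declare -n ref=var
ref="changed"      # assigning through the nameref writes to var itself
echo "$var"        # changed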
{ "source": [ "https://unix.stackexchange.com/questions/413449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42920/" ] }
413,576
I have been told that the spaces are important in bash or other shell scripts and I should not change the existence of spaces unless I know what I am doing. By "changing the existence" I mean either inserting a space between two non-space characters or removing a space between two non-space characters, e.g. changing var="$val" to var ="$val" or vice versa. I want to ask Are there any cases in which using a single space or using multiple consecutive spaces in a shell script makes a difference? . (Of course, inserting/deleting a space in quotes makes a difference ,like changing from echo "a b" to echo "a b" or vice versa. I am looking for examples other than this trivial example.) I have come across this question but that one is about adding and removing spaces between two non-space characters for which I know many examples that it would make a difference. Any help would be appreciated. Include more varieties of shells if possible.
Outside of quotes, the shell uses whitespace (spaces, tabs, newline, carriage-return, etc) as a word/token separator. That means: Things not separated by whitespace are considered to be one "word". Things separated by one-or-more whitespace characters are considered to be two (or more) words. The actual number of whitespace chars between each "thing" doesn't matter, as long as there is at least one.
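Two quick illustrations of that rule:

echo one    two     # prints: one two   - runs of unquoted whitespace collapse into one separator
echo "one    two"   # prints: one    two - quoting preserves the exact spaces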
{ "source": [ "https://unix.stackexchange.com/questions/413576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
413,664
I have a huge (70GB), one line , text file and I want to replace a string (token) in it. I want to replace the token <unk> , with another dummy token ( glove issue ). I tried sed : sed 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new but the output file corpus.txt.new has zero-bytes! I also tried using perl: perl -pe 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new but I got an out of memory error. For smaller files, both of the above commands work. How can I replace a string in such a file? This is a related question, but none of the answers worked for me. Edit : What about splitting the file in chunks of 10GBs (or whatever) each and applying sed on each one of them and then merging them with cat ? Does that make sense? Is there a more elegant solution?
The usual text processing tools are not designed to handle lines that don't fit in RAM. They tend to work by reading one record (one line), manipulating it, and outputting the result, then proceeding to the next record (line). If there's an ASCII character that appears frequently in the file and doesn't appear in <unk> or <raw_unk> , then you can use that as the record separator. Since most tools don't allow custom record separators, swap between that character and newlines. tr processes bytes, not lines, so it doesn't care about any record size. Supposing that ; works: <corpus.txt tr '\n;' ';\n' | sed 's/<unk>/<raw_unk>/g' | tr '\n;' ';\n' >corpus.txt.new You could also anchor on the first character of the text you're searching for, assuming that it isn't repeated in the search text and it appears frequently enough. If the file may start with unk> , change the sed command to sed '2,$ s/… to avoid a spurious match. <corpus.txt tr '\n<' '<\n' | sed 's/^unk>/raw_unk>/g' | tr '\n<' '<\n' >corpus.txt.new Alternatively, use the last character. <corpus.txt tr '\n>' '>\n' | sed 's/<unk$/<raw_unk/g' | tr '\n>' '>\n' >corpus.txt.new Note that this technique assumes that sed operates seamlessly on a file that doesn't end with a newline, i.e. that it processes the last partial line without truncating it and without appending a final newline. It works with GNU sed. If you can pick the last character of the file as the record separator, you'll avoid any portability trouble.
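A tiny sanity check of the swap-and-swap-back idea on a throwaway file (assuming, as above, that ; never occurs in the data):

printf 'aaa;bbb <unk> ccc;<unk> ddd;' > small.txt
<small.txt tr '\n;' ';\n' | sed 's/<unk>/<raw_unk>/g' | tr '\n;' ';\n'
# aaa;bbb <raw_unk> ccc;<raw_unk> ddd;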
{ "source": [ "https://unix.stackexchange.com/questions/413664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47587/" ] }
413,671
How can I force pcmanfm to refresh its thumbnails? I have a directory of photos in JPG format (taken with an iPhone). I have rotated some of these using Ubuntu Image Viewer. When I rotate an image the thumbnail does not update. How can I force it to update? I have tried deleting all thumbnails from ~/.cache/thumbnails and selecting "reload folder" in pcmanfm but no joy. Any suggestions? Where are the thumbnails actually stored? Using pcmanfm 1.2.4 on Ubuntu 16.04.
{ "source": [ "https://unix.stackexchange.com/questions/413671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268027/" ] }
413,840
I have a LINUX machine (remote), and a MAC machine (local). Our system administrator set up an "SSH" method, whereby I can ssh from my MAC, to my LINUX machine, via this command on my MAC: ssh [email protected] -p 12345 When I do this, I am prompted to put in the password for my LINUX machine, and when I do, I have access, which is great. What I want to do now though, is be able to scp from my MAC machine, to my LINUX machine, so that I can transfer files over. How do I do that? I have googled around but I am not sure what to do. Thank you
To copy from REMOTE to LOCAL : scp -P 12345 user@server:/path/to/remote/file /path/to/local/file To copy from LOCAL to REMOTE : scp -P 12345 /path/to/local/file user@server:/path/to/remote/file Note: The switch to specify port for scp is -P instead of -p If you want to copy all files in a directory you can use wildcards like below: scp -P 12345 user@server:/path/to/remote/dir/* /path/to/local/dir/ or even scp -P 12345 user@server:/path/to/remote/dir/*.txt /path/to/local/dir/
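And for whole directories, scp 's -r flag recurses (same placeholder host and port as above):

scp -r -P 12345 user@server:/path/to/remote/dir /path/to/local/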
{ "source": [ "https://unix.stackexchange.com/questions/413840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54970/" ] }
413,844
I use the source command in my bash script in order to read/print the variables' values: more linuxmachines_mount_point.txt export linuxmachine01="sdb sdc sdf sdd sde sdg" export linuxmachine02="sde sdd sdb sdf sdc" export linuxmachine03="sdb sdd sdc sde sdf" export linuxmachine06="sdb sde sdf sdd" source linuxmachines_mount_point.txt echo $linuxmachine01 sdb sdc sdf sdd sde sdg What is the opposite of source , in order to unset the variables? Expected results: echo $linuxmachine01 < no output >
Using a subshell (Recommended) Run the source command in a subshell: ( source linuxmachines_mount_point.txt cmd1 $linuxmachine02 other_commands_using_variables etc ) echo $linuxmachine01 # Will return nothing Subshells are defined by parens: (...) . Any shell variables set within the subshell are forgotten when the subshell ends. Using unset This unsets any variable exported by linuxmachines_mount_point.txt : unset $(awk -F'[ =]+' '/^export/{print $2}' linuxmachines_mount_point.txt) -F'[ =]+' tells awk to use any combination of spaces and equal signs as the field separator. /^export/{print $2} This tells awk to select lines that begin with export and then print the second field. unset $(...) This runs the command inside $(...) , captures its stdout, and unsets any variables named by its output.
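A quick check that the unset approach really clears everything the file exported (using the same file as in the question):

source linuxmachines_mount_point.txt
echo "$linuxmachine01"    # sdb sdc sdf sdd sde sdg
unset $(awk -F'[ =]+' '/^export/{print $2}' linuxmachines_mount_point.txt)
echo "$linuxmachine01"    # (empty)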
{ "source": [ "https://unix.stackexchange.com/questions/413844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
413,878
I've got a JSON array like so: { "SITE_DATA": { "URL": "example.com", "AUTHOR": "John Doe", "CREATED": "10/22/2017" } } I'm looking to iterate over this array using jq so I can set the key of each item as the variable name and the value as its value. Example: URL="example.com" AUTHOR="John Doe" CREATED="10/22/2017" What I've got so far iterates over the array but creates a string: constants=$(cat ${1} | jq '.SITE_DATA' | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]") Which outputs: URL=example.com AUTHOR=John Doe CREATED=10/22/2017 I am looking to use these variables further down in the script: echo ${URL} But this echoes an empty output at the moment. I'm guessing I need an eval or something in there but can't seem to put my finger on it.
Your original version isn't going to be eval able because the author name has spaces in it - it would be interpreted as running a command Doe with the environment variable AUTHOR set to John . There's also virtually never a need to pipe jq to itself - the internal piping & dataflow can connect different filters together. All of this is only sensible if you completely trust the input data (e.g. it's generated by a tool you control). There are several possible problems otherwise detailed below, but let's assume the data itself is certain to be in the format you expect for the moment. You can make a much simpler version of your jq program: jq -r '.SITE_DATA | to_entries | .[] | .key + "=" + (.value | @sh)' which outputs: URL='example.com' AUTHOR='John Doe' CREATED='10/22/2017' There's no need for a map : .[] deals with taking each object in the array through the rest of the pipeline as a separate item , so everything after the last | is applied to each one separately. At the end, we just assemble a valid shell assignment string with ordinary + concatenation, including appropriate quotes & escaping around the value with @sh . All the pipes matter here - without them you get fairly unhelpful error messages, where parts of the program are evaluated in subtly different contexts. This string is eval able if you completely trust the input data and has the effect you want: eval "$(jq -r '.SITE_DATA | to_entries | .[] | .key + "=" + (.value | @sh)' < data.json)" echo "$AUTHOR" As ever when using eval , be careful that you trust the data you're getting, since if it's malicious or just in an unexpected format things could go very wrong. In particular, if the key contains shell metacharacters like $ or whitespace, this could create a running command. It could also overwrite, for example, the PATH environment variable unexpectedly. If you don't trust the data, either don't do this at all or filter the object to contain just the keys you want first: jq '.SITE_DATA | { AUTHOR, URL, CREATED } | ...' You could also have a problem in the case that the value is an array, so .value | tostring | @sh will be better - but this list of caveats may be a good reason not to do any of this in the first place. It's also possible to build up an associative array instead where both keys and values are quoted: eval "declare -A data=($(jq -r '.SITE_DATA | to_entries | .[] | @sh "[\(.key)]=\(.value)"' < test.json))" After this, ${data[CREATED]} contains the creation date, and so on, regardless of what the content of the keys or values are. This is the safest option, but doesn't result in top-level variables that could be exported. It may still produce a Bash syntax error when a value is an array, or a jq error if it is an object, but won't execute code or overwrite anything.
{ "source": [ "https://unix.stackexchange.com/questions/413878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227429/" ] }
414,042
I have a script mycommand.sh that I can't run twice. I want to split its output into two different files: one containing the lines that match a regex, and one containing the lines that don't. What I wish to have is basically something like this: ./mycommand.sh | grep -E 'some|very*|cool[regex].here;)' --match file1.txt --not-match file2.txt I know I can just redirect the output to a file and then run two different greps with and without the -v option, redirecting their output to two different files. But I was just wondering if it was possible to do it with one grep. So, is it possible to achieve what I want in a single line?
There are many ways to accomplish this. Using awk The following sends any lines matching coolregex to file1. All other lines go to file2: ./mycommand.sh | awk '/[coolregex]/{print>"file1";next} 1' >file2 How it works: /[coolregex]/{print>"file1";next} Any lines matching the regular expression coolregex are printed to file1 . Then, we skip all remaining commands and jump to start over on the next line. 1 All other lines are sent to stdout. 1 is awk's cryptic shorthand for print-the-line. Splitting into multiple streams is also possible: ./mycommand.sh | awk '/regex1/{print>"file1"} /regex2/{print>"file2"} /regex3/{print>"file3"}' Using process substitution This is not as elegant as the awk solution but, for completeness, we can also use multiple greps combined with process substitution: ./mycommand.sh | tee >(grep 'coolregex' >File1) | grep -v 'coolregex' >File2 We can also split up into multiple streams: ./mycommand.sh | tee >(grep 'coolregex' >File1) >(grep 'otherregex' >File3) >(grep 'anotherregex' >File4) | grep -v 'coolregex' >File2
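A self-contained check of the awk version (the regex and file names are just placeholders):

printf 'match me\nskip me\nmatch again\n' | awk '/match/{print>"file1";next} 1' > file2
cat file1    # match me / match again
cat file2    # skip me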
{ "source": [ "https://unix.stackexchange.com/questions/414042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231067/" ] }
414,226
In the Wikipedia article on Regular expressions , it seems that [[:digit:]] = [0-9] = \d . What are the circumstances where they do not equal? What is the difference? After some research, I think one difference is that bracket expression [:expr:] is locale dependent.
Yes, it is [[:digit:]] ~ [0-9] ~ \d (where ~ means approximate). In most programming languages (where it is supported) \d ≡ `[[:digit:]]` # (is identical to; it is a shorthand for it). The \d exists in fewer instances than [[:digit:]] (available in grep -P but not in POSIX). Unicode digits There are many digits in UNICODE , for example: 123456789 # Hindu-Arabic Arabic numerals ٠١٢٣٤٥٦٧٨٩ # ARABIC-INDIC ۰۱۲۳۴۵۶۷۸۹ # EXTENDED ARABIC-INDIC/PERSIAN ߀߁߂߃߄߅߆߇߈߉ # NKO DIGIT ०१२३४५६७८९ # DEVANAGARI All of which may be included in [[:digit:]] or \d , and even, in some cases, in [0-9] . POSIX For the specific POSIX BRE or ERE: The \d is not supported (not in POSIX but is in GNU grep -P ). [[:digit:]] is required by POSIX to correspond to the digit character class, which in turn is required by ISO C to be the characters 0 through 9 and nothing else. So only in the C locale do [0-9] , [0123456789] , \d and [[:digit:]] all mean exactly the same. The [0123456789] has no possible misinterpretations, [[:digit:]] is available in more utilities and in some cases means only [0123456789] . The \d is supported by few utilities. As for [0-9] , the meaning of range expressions is only defined by POSIX in the C locale; in other locales it might be different (might be codepoint order or collation order or something else). [0123456789] The most basic option for all ASCII digits. Always valid, (AFAICT) no known instance where it fails. It matches only English digits: 0123456789 . [0-9] It is generally believed that [0-9] is only the ASCII digits 0123456789 . That is painfully false in some instances, e.g. on Linux systems (as of June 2020) in some locale that is not "C", for example: Assume: str='0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९' Try grep to discover that it allows most of them: $ echo "$str" | grep -o '[0-9]\+' 0123456789 ٠١٢٣٤٥٦٧٨ ۰۱۲۳۴۵۶۷۸ ߀߁߂߃߄߅߆߇߈ ०१२३४५६७८ sed has some troubles: it should remove only 0123456789 but removes almost all digits. That means it accepts most digits, but not some of the nines (???): $ echo "$str" | sed 's/[0-9]\{1,\}//g' ٩ ۹ ߉ ९ Even expr suffers from the same issues as sed: expr "$str" : '\([0-9 ]*\)' # also matching spaces. 0123456789 ٠١٢٣٤٥٦٧٨ And so does ed : printf '%s\n' 's/[0-9]/x/g' '1,p' Q | ed -v <(echo "$str") 105 xxxxxxxxxx xxxxxxxxx٩ xxxxxxxxx۹ xxxxxxxxx߉ xxxxxxxxx९ [[:digit:]] There are many languages (Perl, Java, Python, C) in which [[:digit:]] (and \d ) calls for an extended meaning.
For example, this Perl code will match all the digits from above: $ str='0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९' $ echo "$str" | perl -C -pe 's/[^\d]//g;' ; echo 0123456789٠١٢٣٤٥٦٧٨٩۰۱۲۳۴۵۶۷۸۹߀߁߂߃߄߅߆߇߈߉०१२३४५६७८९ Which is equivalent to selecting all characters that have the Unicode properties Numeric and digit : $ echo "$str" | perl -C -pe 's/[^\p{Nd}]//g;' ; echo 0123456789٠١٢٣٤٥٦٧٨٩۰۱۲۳۴۵۶۷۸۹߀߁߂߃߄߅߆߇߈߉०१२३४५६७८९ Which grep could reproduce (the specific version of pcre may have a different internal list of numeric code points than Perl): $ echo "$str" | grep -oP '\p{Nd}+' 0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९ shells Some implementations may understand a range to be something different from plain ASCII order (ksh93 for example) (when tested on the May 2018 version (AT&T Research) 93u+ 2012-08-01): $ LC_ALL=en_US.utf8 ksh -c 'echo "${1//[0-9]}"' sh "$str" ۹ ߀߁߂߃߄߅߆߇߈߉ ९
{ "source": [ "https://unix.stackexchange.com/questions/414226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267519/" ] }
414,636
I've seen the questions and answers about needing to double-escape the arguments to remote ssh commands. My question is: Exactly where and when does the second parsing get done? If I run the following: $ ssh otherhost pstree -a -p I see the following in the output: |-sshd,3736 | `-sshd,1102 | `-sshd,1109 | `-pstree,1112 -a -p The parent process for the remote command ( pstree ) is sshd , there doesn't appear to be any shell there that would be parsing the command line arguments to the remote command, so it doesn't seem as if double quoting or escaping would be necessary (but it definitely is). If instead I ssh there first and get a login shell, and then run pstree -a -p I see the following in the output: ├─sshd,3736 │ └─sshd,3733 │ └─sshd,3735 │ └─bash,3737 │ └─pstree,4130 -a -p So clearly there's a bash shell there that would do command line parsing in that case. But the case where I use a remote command directly, there doesn't seem to be a shell, so why is double quoting necessary?
There is always a remote shell. In the SSH protocol, the client sends the server a string to execute. The SSH command line client takes its command line arguments and concatenates them with a space between the arguments. The server takes that string, runs the user's login shell and passes it that string. (More precisely: the server runs the program that is registered as the user's shell in the user database, passing it two command line arguments: -c and the string sent by the client. The shell is not invoked as a login shell: the server does not set the zeroth argument to a string beginning with - .) It is impossible to bypass the remote shell. The protocol doesn't have anything like sending an array of strings that could be parsed as an argv array on the server. And the SSH server will not bypass the remote shell because that could be a security restriction: using a restricted program as the user's shell is a way to provide a restricted account that is only allowed to run certain commands (e.g. an rsync-only account or a git-only account). You may not see the shell in pstree because it may be already gone. Many shells have an optimization where if they detect that they are about to do “run this external command, wait for it to complete, and exit with the command's status”, then the shell runs “ execve of this external command” instead. This is what's happening in your first example. Contrast the following three commands: ssh otherhost pstree -a -p ssh otherhost 'pstree -a -p' ssh otherhost 'pstree -a -p; true' The first two are identical: the client sends exactly the same data to the server. The third one sends a shell command which defeats the shell's exec optimization.
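The single-remote-shell model is also what explains the usual double-quoting advice; a quick illustration ( otherhost is a placeholder):

ssh otherhost echo 'one   two'      # prints: one two   (the remote shell re-splits the joined string)
ssh otherhost "echo 'one   two'"    # prints: one   two (the inner quotes survive to the remote shell)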
{ "source": [ "https://unix.stackexchange.com/questions/414636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52104/" ] }
414,639
I don't know how vlc is able to do it; I guess it takes a sort of time-stamp of the movie and puts it in a cache or somewhere like that. This is the way it works in vlc - a. You see a media file, say it is 1.5 hours long. b. At some point, say after 15-30 minutes or whenever, you stop because you have some other work, a call comes, or something else disrupts your viewing. c. After some time you start the media file again. In vlc, in the top-right corner, a small button appears saying continue from where you left off. d. If you select that button/option, it starts playing the media file from where you last left off. I have also seen this work across 2-3 media files in succession, and even then it remembers the positions. Is it possible to have similar functionality in mpv? Is there a way this already works, or would this be a feature request I would need to make at the mplayer github?
You can run mpv with the --save-position-on-quit option. e.g. mpv --save-position-on-quit /path/to/video.mkv Alternatively, if you want mpv to do that by default, you can add that option to its config file. For example: echo "save-position-on-quit" >> ~/.config/mpv/mpv.conf Or use your favourite text editor to add the same line. The -- option prefix is not needed in the config file. If you want this option to be the default for all users on the system rather than just your own user, the config file to edit (as root) is /etc/mpv/mpv.conf if mpv was installed as a package. And probably /usr/local/etc/mpv/mpv.conf if installed by compiling the source.
{ "source": [ "https://unix.stackexchange.com/questions/414639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
414,892
I have a bash script that runs as long as the Linux machine is powered on. I start it as shown below: ( /mnt/apps/start.sh 2>&1 | tee /tmp/nginx/debug_log.log ) & After it launches, I can see the tee command in my ps output as shown below: $ ps | grep tee 418 root 0:02 tee /tmp/nginx/debug_log.log 3557 root 0:00 grep tee I have a function that monitors the size of the log that tee produces and kills the tee command when the log reaches a certain size: monitor_debug_log_size() { ## Monitor the file size of the debug log to make sure it does not get too big while true; do cecho r "CHECKING DEBUG LOG SIZE... " debugLogSizeBytes=$(stat -c%s "/tmp/nginx/debug_log.log") cecho r "DEBUG LOG SIZE: $debugLogSizeBytes" if [ $((debugLogSizeBytes)) -gt 100000 ]; then cecho r "DEBUG LOG HAS GROWN TO LARGE... " sleep 3 #rm -rf /tmp/nginx/debug_log.log 1>/dev/null 2>/dev/null kill -9 `pgrep -f tee` fi sleep 30 done } To my surprise, killing the tee command also kills my start.sh instance. Why is this? How can I end the tee command but have my start.sh continue to run? Thanks.
When tee terminates, the command feeding it will continue to run, until it attempts to write more output. Then it will get a SIGPIPE (13 on most systems) for trying to write to a pipe with no readers. If you modify your script to trap SIGPIPE and take some appropriate action (like, stop writing output), then you should be able to have it continue after tee is terminated. Better yet, rather than killing tee at all, use logrotate with the copytruncate option for simplicity. To quote logrotate(8) : copytruncate Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
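A minimal logrotate stanza along those lines; the path and size threshold are copied from the question, the rotation count is arbitrary:

/tmp/nginx/debug_log.log {
    size 100k
    rotate 3
    copytruncate
    missingok
}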
{ "source": [ "https://unix.stackexchange.com/questions/414892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63801/" ] }
415,422
Using Kazam 1.4.5 on Debian stretch, how do I stop recording with Kazam? The problem is: the icon on the task-bar does not allow any interaction, so I am looking for some keyboard shortcut; however, I could not find any. The result is that currently the video keeps recording forever until I kill the process.
Obviously, I found the solution 5 minutes after to post the question. start recording: Super + Control + r pause recording: Super + Control + p finish recording: Super + Control + f show Kazam: Super + Control + s quit Kazam: Super + Control + q Note: Super is usually this "Windows logo" key.
{ "source": [ "https://unix.stackexchange.com/questions/415422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142142/" ] }
415,433
When I moved my SSD on Ubuntu to a larger SSD I managed to end up with this. So I assumed GParted would allow me to remove /dev/sda3 (it's empty) and then grow /dev/sda5 into the space created, but I'm clearly not understanding this process, as I can't find a way to do it. Data in /dev/sda5 must be kept.
{ "source": [ "https://unix.stackexchange.com/questions/415433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269320/" ] }
415,477
Both ALAC and FLAC are lossless audio formats and files will usually have more or less the same size when converted from one format to the other. I use ffmpeg -i track.flac track.m4a to convert between these two formats but I notice that the resulting ALAC files are much smaller than the original ones. When using a converter software like the MediaHuman Audio Converter, the size of the ALACs will remain around the same size as the FLACs so I guess I'm missing some flags here that are causing ffmpeg to downsample the signal.
Ok, I was probably a little quick to ask here but for the sake of future reference here is the answer: One should pass the flag -acodec alac to ffmpeg for a lossless conversion between FLAC and ALAC: ffmpeg -i track.flac -acodec alac track.m4a
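To convert a whole directory the same way, a plain shell loop works (assumes the *.flac files are in the current directory):

for f in *.flac; do
    ffmpeg -i "$f" -acodec alac "${f%.flac}.m4a"
done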
{ "source": [ "https://unix.stackexchange.com/questions/415477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235207/" ] }
415,679
I'm familiar with "jq" for parsing json. I work with one service that produces a json response where one of the properties is itself a json string. How do I convert that quoted value to a valid json string so I can then process it with jq? For instance, if I just view the plain pretty-printed json from "jq .", here's a short excerpt of the output: "someJsonString": "{\"date\":\"2018-01-08\", ... I can use jq to get the value of that property, but I need to convert the quoted string to valid json by "unescaping" it. I suppose I could pipe it into sed, removing the opening and ending double quotes, and removing all backslashes (" sed -e 's/^"//' -e 's/"$//' -e 's/\\//g' "). That seems to work, but that doesn't seem like the most robust solution. Update : Just to be a little clearer on what I'm doing, here are a couple of elided samples that show what I've tried: % curl -s -q -L 'http://.../1524.json' | jq '.results[0].someJsonString' | jq . "{\"date\":\"2018-01-08\",... % echo $(curl -s -q -L 'http:/.../1524.json' | jq '.results[0].someJsonString') | jq . "{\"date\":\"2018-01-08\",... Update : Here's a completely standalone example: % cat stuff.json | jq . { "stuff": "{\"date\":\"2018-01-08\"}" } % cat stuff.json | jq '.stuff' "{\"date\":\"2018-01-08\"}" % cat stuff.json | jq '.stuff' | jq . "{\"date\":\"2018-01-08\"}" Update : If I tried to process that last output with a real jq expression, it does something like this: % cat stuff.json | jq '.stuff' | jq '.date' assertion "cb == jq_util_input_next_input_cb" failed: file "/usr/src/ports/jq/jq-1.5-3.x86_64/src/jq-1.5/util.c", line 371, function: jq_util_input_get_position Aborted (core dumped)
With jq 's fromjson function: Sample stuff.json contents: { "stuff": "{\"date\":\"2018-01-08\"}" } jq -c '.stuff | fromjson' stuff.json The output: {"date":"2018-01-08"}
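And once parsed with fromjson , nested fields chain on directly:

jq -r '.stuff | fromjson | .date' stuff.json
# 2018-01-08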
{ "source": [ "https://unix.stackexchange.com/questions/415679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123728/" ] }
415,787
Let's say I have a machine (Arago dist) with a user password of 12 alphanumerical characters. When I log myself in via ssh using password authentication, I noticed a couple of days ago, that I can either only input 8 of the password characters or the whole password followed with whatever I'd like. The common outcome in both situations is a successful login. Why is this happening? In this particular case, I don't want to use Public key authentication based on multiple reasons. As an additional info, in this distro the files /etc/shadow and /etc/security/policy.conf are missing. Here the server ssh config: [user@machine:~] cat /etc/ssh/sshd_config # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Banner /etc/ssh/welcome.msg #Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m PermitRootLogin no #StrictModes yes #MaxAuthTries 6 #MaxSessions 10 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. 
#UsePAM no #AllowAgentForwarding yes #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation no #PermitUserEnvironment no Compression no ClientAliveInterval 15 ClientAliveCountMax 4 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/libexec/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs # X11Forwarding no # AllowTcpForwarding no # ForceCommand cvs server Here the ssh client output: myself@ubuntu:~$ ssh -vvv [email protected] OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.1.1 [192.168.1.1] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/home/myself/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/myself/.ssh/id_rsa type 1 debug1: identity file /home/myself/.ssh/id_rsa-cert type -1 debug1: identity file /home/myself/.ssh/id_dsa type -1 debug1: identity file /home/myself/.ssh/id_dsa-cert type -1 debug1: identity file /home/myself/.ssh/id_ecdsa type -1 debug1: identity file /home/myself/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/myself/.ssh/id_ed25519 type -1 debug1: identity file /home/myself/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.6 debug1: match: OpenSSH_5.6 pat OpenSSH_5* compat 0x0c000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none debug2: kex_parse_kexinit: none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1481/3072 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 91:66:c0:07:e0:c0:df:b7:8e:49:97:b5:36:12:12:ea debug3: load_hostkeys: loading entries for host "192.168.1.1" from file "/home/myself/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/myself/.ssh/known_hosts:26 debug3: load_hostkeys: loaded 1 keys debug1: Host '192.168.1.1' is known and matches the RSA host key. 
debug1: Found key in /home/myself/.ssh/known_hosts:26 debug2: bits set: 1551/3072 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/myself/.ssh/id_rsa (0x802b9240), debug2: key: /home/myself/.ssh/id_dsa ((nil)), debug2: key: /home/myself/.ssh/id_ecdsa ((nil)), debug2: key: /home/myself/.ssh/id_ed25519 ((nil)), debug3: input_userauth_banner debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/myself/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: /home/myself/.ssh/id_dsa debug3: no such identity: /home/myself/.ssh/id_dsa: No such file or directory debug1: Trying private key: /home/myself/.ssh/id_ecdsa debug3: no such identity: /home/myself/.ssh/id_ecdsa: No such file or directory debug1: Trying private key: /home/myself/.ssh/id_ed25519 debug3: no such identity: /home/myself/.ssh/id_ed25519: No such file or directory debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: userauth_kbdint: disable: no info_req_seen debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 57 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.1.1 ([192.168.1.1]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x10 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env XDG_VTNR debug3: Ignored env MANPATH debug3: Ignored env XDG_SESSION_ID debug3: Ignored env CLUTTER_IM_MODULE debug3: Ignored env SELINUX_INIT debug3: Ignored env XDG_GREETER_DATA_DIR debug3: Ignored env COMP_WORDBREAKS debug3: Ignored env SESSION debug3: Ignored env NVM_CD_FLAGS debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env XDG_MENU_PREFIX debug3: Ignored env VTE_VERSION debug3: Ignored env NVM_PATH debug3: Ignored env GVM_ROOT debug3: Ignored env WINDOWID debug3: Ignored env UPSTART_SESSION debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env GTK_MODULES debug3: Ignored env NVM_DIR debug3: Ignored env USER debug3: Ignored env LD_LIBRARY_PATH debug3: Ignored env LS_COLORS debug3: Ignored env XDG_SESSION_PATH debug3: Ignored env XDG_SEAT_PATH debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env SESSION_MANAGER debug3: Ignored env DEFAULTS_PATH debug3: Ignored env XDG_CONFIG_DIRS debug3: Ignored env PATH debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env QT_QPA_PLATFORMTHEME debug3: Ignored env NVM_NODEJS_ORG_MIRROR debug3: Ignored env GVM_VERSION debug3: Ignored env JOB debug3: Ignored env PWD debug3: Ignored env XMODIFIERS debug3: Ignored env GNOME_KEYRING_PID debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env gvm_pkgset_name debug3: Ignored env GDM_LANG debug3: Ignored env MANDATORY_PATH debug3: Ignored env IM_CONFIG_PHASE debug3: Ignored env COMPIZ_CONFIG_PROFILE debug3: Ignored env GDMSESSION debug3: Ignored env SESSIONTYPE debug3: Ignored env XDG_SEAT debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GOROOT debug3: Ignored env LANGUAGE debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env DYLD_LIBRARY_PATH debug3: Ignored env gvm_go_name debug3: Ignored env LOGNAME debug3: Ignored env GVM_OVERLAY_PREFIX debug3: Ignored env COMPIZ_BIN_PATH debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env QT4_IM_MODULE debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env PrlCompizSessionClose debug3: Ignored env PKG_CONFIG_PATH debug3: Ignored env GOPATH debug3: Ignored env NVM_BIN debug3: Ignored env LESSOPEN debug3: Ignored env NVM_IOJS_ORG_MIRROR debug3: Ignored env INSTANCE debug3: Ignored env TEXTDOMAIN debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env XDG_CURRENT_DESKTOP debug3: Ignored env GTK_IM_MODULE debug3: Ignored env LESSCLOSE debug3: Ignored env TEXTDOMAINDIR debug3: Ignored env GVM_PATH_BACKUP debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Here the sshd server output: debug1: sshd version OpenSSH_5.6p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. socket: Address family not supported by protocol debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 4 out 4 newsock 4 pipe -1 sock 7 debug1: inetd sockets after dupping: 3, 3 Connection from 192.168.1.60 port 53445 debug1: Client protocol version 2.0; client software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user user service ssh-connection method none debug1: attempt 0 failures 0 debug1: userauth_send_banner: sent Failed none for user from 192.168.1.60 port 53445 ssh2 debug1: userauth-request for user user service ssh-connection method publickey debug1: attempt 1 failures 0 debug1: test whether pkalg/pkblob are acceptable debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file //.ssh/authorized_keys debug1: Could not open authorized keys '//.ssh/authorized_keys': No such file or directory debug1: restore_uid: 0/0 debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file //.ssh/authorized_keys2 debug1: Could not open authorized keys '//.ssh/authorized_keys2': No such file or directory debug1: restore_uid: 0/0 Failed publickey for user from 192.168.1.60 port 53445 ssh2 debug1: userauth-request for user user service ssh-connection method keyboard-interactive debug1: attempt 2 failures 1 debug1: keyboard-interactive devs debug1: auth2_challenge: user=user devs= debug1: kbdint_alloc: devices '' Failed keyboard-interactive for user from 192.168.1.60 port 53445 ssh2 debug1: Unable to open the btmp file /var/log/btmp: No such file or directory debug1: userauth-request for user user service ssh-connection method password debug1: attempt 3 failures 2 Could not get shadow information for user Accepted password for user from 192.168.1.60 port 53445 ssh2 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request env reply 0 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req env debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. 
/etc/pam.d/sshd: # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # Standard Un*x authentication. auth include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. account include common-accountt # Standard Un*x session setup and teardown. session include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Standard Un*x password updating. password include common-password
In the chat, it turned out the system was using traditional (non-shadow) password storage and traditional Unix password hashing algorithm. Both are poor choices in today's security environment. Since the traditional password hashing algorithm only stores and compares the first 8 characters of the password, that explains the behavior noticed in the original question. The posted sshd output includes the line: Could not get shadow information for user I would assume this means at least sshd (or possibly the PAM Unix password storage library) on this system includes shadow password functionality, but for some reason, the system vendor has chosen not to use it.
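If you want to check whether a system is in this situation, the stored hash format gives it away: traditional DES crypt hashes are exactly 13 characters with no $id$ prefix, while modern schemes look like $6$salt$.... A sketch (on a non-shadow system like this one, the hashes sit in field 2 of /etc/passwd; adapt to /etc/shadow otherwise):

awk -F: '$2 != "x" && $2 != "*" && $2 != "" { print $1, length($2) }' /etc/passwd

A length of 13 strongly suggests traditional crypt, and with it the 8-character password limit described above.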
{ "source": [ "https://unix.stackexchange.com/questions/415787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160654/" ] }
415,799
I have a zip file with a size of 1.5 GB. Its content is one ridiculously large plain-text file (60 GB) and I currently do not have enough space left on my disk to extract it all, nor do I want to extract it all even if I had the space. As for my use case, it would suffice if I can inspect parts of the content. Hence I want to unzip the file as a stream and access a range of the file (like one can via head and tail on a normal text file), either by byte offset (e.g. extract max 100 kB starting from the 32 GB mark) or by lines (give me the plain-text lines 3700-3900). Is there a way to achieve that?
Note that gzip can extract zip files (at least the first entry in the zip file). So if there's only one huge file in that archive, you can do:

gunzip < file.zip | tail -n +3000 | head -n 20

to extract the 20 lines starting with the 3000th one, for instance. Or:

gunzip < file.zip | tail -c +3000 | head -c 20

for the same thing with bytes (assuming a head implementation that supports -c).

For any arbitrary member in the archive, in a Unixy way:

bsdtar xOf file.zip file-to-extract | tail... | head...

With the head builtin of ksh93 (like when /opt/ast/bin is ahead in $PATH), you can also do:

.... | head -s 2999 -c 20
.... | head --skip=2999 --bytes=20

Note that in any case gzip / bsdtar / unzip will always need to uncompress (and discard here) the entire section of the file that leads to the portion that you want to extract. That's down to how the compression algorithm works.
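If bsdtar or a suitable head aren't available, unzip can also stream a single member to stdout with its -p option. A sketch (archive and member names are placeholders; sed does the line selection and quits early):

unzip -p file.zip file-to-extract | sed -n '3700,3900p; 3900q'

As before, everything up to line 3700 still has to be decompressed and discarded; only the writing to disk is avoided.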
{ "source": [ "https://unix.stackexchange.com/questions/415799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12471/" ] }
415,990
Say I have a script doing:

some-command "$var1" "$var2" ...

And, in the event that var1 is empty, I'd rather that it be replaced with nothing instead of the empty string, so that the command executed is:

some-command "$var2" ...

and not:

some-command '' "$var2" ...

Is there a simpler way than testing the variable and conditionally including it?

if [ -n "$var1" ]; then
    some-command "$var1" "$var2" ...
    # or some variant using arrays to build the command
    # args+=("$var1")
else
    some-command "$var2" ...
fi

Is there a parameter substitution that can expand to nothing in bash, zsh, or the like? I might still want to use globbing in the rest of the arguments, so disabling that and unquoting the variable is not an option.
POSIX-compliant shells and Bash have ${parameter:+word}:

If parameter is unset or null, null shall be substituted; otherwise, the expansion of word (or an empty string if word is omitted) shall be substituted.

So you can just do:

${var1:+"$var1"}

and have var1 be checked, and "$var1" be used if it's set and non-empty (with the ordinary double-quoting rules). Otherwise it expands to nothing. Note that only the inner part is quoted here, not the whole thing. The same also works in zsh. You have to repeat the variable, so it's not ideal, but it works out exactly as you wanted. If you want a set-but-empty variable to expand to an empty argument, use ${var1+"$var1"} instead.
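A quick way to convince yourself of the behaviour is to count the arguments that actually get passed. A sketch (count is just a throwaway helper):

count() { echo "$# argument(s)"; }
var2='b'
unset var1; count ${var1:+"$var1"} "$var2"   # 1 argument(s)
var1='';    count ${var1:+"$var1"} "$var2"   # 1 argument(s)
var1='a b'; count ${var1:+"$var1"} "$var2"   # 2 argument(s), and 'a b' stays one word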
{ "source": [ "https://unix.stackexchange.com/questions/415990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
416,180
I have installed Ubuntu 17.10 on my notebook. However, I cannot connect to wi-fi because there is a "No Wi-Fi Adapter Found" message. I don't have any idea what to do next. My notebook : Asus X555LN-XX507H Network Adapter : Broadcom 802.11n BCM43142 (14e4:4365) (This is a follow-on from my earlier post, https://unix.stackexchange.com/questions/415639/kali-linux-no-wifi-adapter-found , where I was advised to try an easier system than Kali.)
Connect your phone with a USB cable and enable USB tethering (this gives the machine a temporary internet connection), then open a terminal with Ctrl+Alt+T and type:

sudo apt-get install --reinstall bcmwl-kernel-source

Then reboot.
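If the machine has no internet connection at all (so USB tethering is not an option), a possible workaround, sketched here, is to fetch the package on another computer and install it offline; note that bcmwl-kernel-source may pull in dependencies such as dkms, which would have to be downloaded the same way:

# On a machine with internet access:
apt-get download bcmwl-kernel-source
# Copy the .deb over (e.g. on a USB stick), then on the offline machine:
sudo dpkg -i bcmwl-kernel-source_*.deb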
{ "source": [ "https://unix.stackexchange.com/questions/416180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269490/" ] }
416,617
When scripting, I usually write my ifs with the following syntax as it is easier for me to understand that what comes next is not true. if [ ! "$1" = "$2" ]; then Others say that the way below is better if [ "$1" != "$2" ]; then The thing is when I ask why and whether there are any differences no one seems to have any answer. So, are there any differences between the two syntaxes? Is one of them safer than the other? Or is it just a matter of preference/habit?
Besides the cosmetic/preference arguments, one reason could be that there are more implementations where [ ! "$a" = "$b" ] fails in corner cases than with [ "$a" != "$b" ].

Both cases should be safe if implementations follow the POSIX algorithm, but even today (early 2018 as of writing), there are still implementations that fail. For instance, with a='(' b=')':

$ (a='(' b=')'; busybox test "$a" != "$b"; echo "$?")
0
$ (a='(' b=')'; busybox test ! "$a" = "$b"; echo "$?")
1

With dash versions prior to 0.5.9, like the 0.5.8 found as sh on Ubuntu 16.04 for instance:

$ a='(' b=')' dash -c '[ "$a" != "$b" ]; echo "$?"'
0
$ a='(' b=')' dash -c '[ ! "$a" = "$b" ]; echo "$?"'
1

(fixed in 0.5.9, see https://www.mail-archive.com/[email protected]/msg00911.html )

Those implementations treat [ ! "(" = ")" ] as [ ! "(" "text" ")" ], that is [ ! "text" ] (test whether "text" is the null string), while POSIX mandates it to be [ ! "x" = "y" ] (test "x" and "y" for equality). Those implementations fail because they perform the wrong test in that case.

Note that there's yet another form:

! [ "$a" = "$b" ]

That one requires a POSIX shell (it won't work with the old Bourne shell).

Note that several implementations have had problems with [ "$a" = "$b" ] (and [ "$a" != "$b" ]) as well, and still do, like the [ builtin of /bin/sh on Solaris 10 (a Bourne shell, the POSIX shell being in /usr/xpg4/bin/sh). That's why you see things like:

[ "x$a" != "x$b" ]

in scripts trying to be portable to old systems.
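As an aside, string (in)equality can also be tested without [ at all, using a case construct; this sidesteps all of the [ corner cases above and works even in the old Bourne shell. A sketch:

case $a in
  "$b") echo 'a and b are equal';;
  *)    echo 'a and b differ';;
esac

(The word after case is not subject to split+glob, so $a doesn't need quoting there, while the "$b" pattern must be quoted to be taken literally.)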
{ "source": [ "https://unix.stackexchange.com/questions/416617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264448/" ] }
416,820
I used to think deleting my bash history was enough to clear my bash history, but yesterday my cat was messing around the right side of my keyboard and when I got back into my computer I saw something I typed a month ago, then I started to press all the keys like crazy looking for what could've triggered it. Turns out UPARROW key shows my bash history even after deleting .bash_history. How can I delete my bash history for real?
In some cases (some bash versions), doing a:

$ history -c; history -w

or simply:

$ history -cw

will clear the history in memory (the up and down arrows will have no commands to list) and then write that to the $HISTFILE file (if the $HISTFILE gets truncated by the running bash instance). Sometimes bash chooses not to truncate the $HISTFILE file even with the histappend option unset and $HISTFILESIZE set to 0. In such cases, the nuke option always works:

history -c; >$HISTFILE

That clears the history list of commands recorded in memory and all commands previously recorded to file. It ensures that the running shell has no recorded history either in memory or on disk; however, other running instances of bash (where history is active) may have a full copy of commands read from $HISTFILE when bash was started (or when a history -r is executed). If it is also required that nothing else (no new commands) of the present session be written to the history file, then:

unset HISTFILE

will prevent any such logging.
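To keep sensitive commands out of the history in the first place, rather than cleaning up afterwards, bash's HISTCONTROL can help. A sketch:

HISTCONTROL=ignorespace   # or ignoreboth, which also drops duplicates
 some-secret-command      # note the leading space: this line is not recorded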
{ "source": [ "https://unix.stackexchange.com/questions/416820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270184/" ] }
416,877
I know what's ugoa (owner, group, others, all) or rwx (read/right/execute) or 4,2,1 or - , f , d , l , and I tried to read in man chmod to understand what's a capital X in chmod but there wasn't an entry for it. I then read in this article in posix/chmod but was stuck in this passage: Set the executable bit only if the target a) is a directory b) has already at least one executable bit set for any one of user, group, others. I also read in this article that gives this code example: chmod -R u=rwX,g=rX,o=rX testdir/ I understand there is a recursive permission on the testdir/ , in regards to the owner (u), group (g), and others (o) but I admit I still miss the intention of the capital X. Maybe a didactic phrasing here could shed some light on this (the main reason I publish this here is because I didn't find an SE session on this). Update Sorry all, I missed that in the man. I didn't imagine the X would appear before the list of arguments and I thought the search returns x instead X, my bad.
The manpage says: execute/search only if the file is a directory or already has execute permission for some user ( X ) POSIX says: The perm symbol X shall represent the execute/search portion of the file mode bits if the file is a directory or if the current (unmodified) file mode bits have at least one of the execute bits (S_IXUSR, S_IXGRP, or S_IXOTH) set. It shall be ignored if the file is not a directory and none of the execute bits are set in the current file mode bits. This is a conditional permission flag: chmod looks at whatever it is currently processing, and if it’s a directory, or if it has any execute bit set in its current permissions (owner, group or other), it acts as if the requested permission was x , otherwise it ignores it. The condition is verified at the time chmod applies the specific X instruction, so you can clear execute bits in the same run with a-x,a=rwX to only set the executable bit on directories. You can see whether a file has an execute bit set by looking at the “access” part of stat ’s output, or the first column of ls -l . Execute bits are represented by x . -rwxr-xr-x is common for executables and indicates that the executable bit is set for the owner, group and other users; -rw-r--r-- is common for other files and indicates that the executable bit is not set (but the read bit is set for everyone, and the write bit for the owner). See Understanding UNIX permissions and their attributes which has much more detail. Thus in your example, u=rwX sets the owner permissions to read and write in all cases, and for directories and executable files, execute; likewise for group ( g=rX ) and other ( o=rX ), read, and execute for directories and executable files. The intent of this operator is to allow the user to give chmod a variety of files and directories, and get the correct execute permissions (assuming none of the files had an invalid execute bit set). It avoids having to distinguish between files and directories (as in the traditional find . -type f -exec chmod 644 {} + and find . -type d -exec chmod 755 {} + commands), and attempts to deal with executables in a sensible way. (Note that macOS chmod apparently only supports X for + operations.)
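A small demonstration of the conditional behaviour (the names are throwaway test objects created with a typical umask; resulting modes are shown as comments):

mkdir d; touch plain script; chmod u+x script
chmod a=rwX d plain script
ls -ld d plain script
# d      -> drwxrwxrwx  (directory, so X applies)
# plain  -> -rw-rw-rw-  (no execute bit beforehand, so X is ignored)
# script -> -rwxrwxrwx  (had u+x beforehand, so X applies for u, g and o)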
{ "source": [ "https://unix.stackexchange.com/questions/416877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
416,886
For resizing LVM2 partition, one needs to perform the following 2 commands: # lvextend -L+1G /dev/myvg/homevol # resize2fs /dev/myvg/homevol However, when I perform lvextend , I see that the changes are already applied to the partition (as shown in Gnome Disks). So why do I still need to do resize2fs ?
The lvextend command (without the --resizefs option) only makes the LVM-side arrangements to enlarge the block device that is the logical volume. No matter what the filesystem type on the LV (or even whether there is a filesystem at all), these operations are always similar.

If the LV contains an ext2/3/4 filesystem, the next step is to update the filesystem metadata to make the filesystem aware that it has more space available, and to create/extend the necessary metadata structures to manage the added space. In the case of ext2/3/4 filesystems, this involves at least:

- creating new inodes for the added space
- extending the block allocation data structures so that the filesystem can tell whether any block of the added space is in use or free
- potentially moving some data blocks around if they are in the way of the previously-mentioned data structure extension

This part is specific to the filesystem type, although the ext2/3/4 filesystem types are similar enough that they can all be resized with a single resize2fs tool. For XFS filesystems, you would use the xfs_growfs tool instead. Other filesystems may have their own extension tools. And if the logical volume did not contain a filesystem but instead something like a "raw" database or an Oracle ASM volume, yet another procedure would need to be applied.

Each filesystem has different internal workings, and so the conditions for extending a filesystem will be different for each. It took a while until a common API was designed for filesystem extension; that made it possible to implement the fsadm resize command, which provides a unified syntax for extending several filesystem types. The --resizefs option of lvextend just uses the fsadm resize command.

In a nutshell: after lvextend, LVM-level tools such as lvs, vgs, lvdisplay and vgdisplay will see the updated size, but the filesystem and any tools operating on it, like df, won't see it yet.
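As a convenience, lvextend can run that filesystem-specific second step for you, via fsadm, with its -r/--resizefs option. A sketch using the volume from the question:

lvextend -r -L+1G /dev/myvg/homevol
# roughly equivalent to:
# lvextend -L+1G /dev/myvg/homevol && fsadm resize /dev/myvg/homevol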
{ "source": [ "https://unix.stackexchange.com/questions/416886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199779/" ] }
416,945
At the command line I often use "simple" commands like

mv foo/bar baz/bar

but I don't know what to call all the parts of this:

┌1┐ ┌──2───┐
git checkout master
│   └──────3──────┘
└───────4─────────┘

I (think I) know that 1 is a command and 2's an argument, and I'd probably call 3 an argument list (is that correct?). However, I don't know what to call 4. How are more complex "commands" labelled?

find transcripts/?.? -name '*.txt' | parallel -- sh -c 'echo $1 $2' {} {/}

I'd appreciate an answer that breaks down what to call 1, 2, 3, 4 and what to call each part of e.g. this "command" above. It would be great to learn also about other things that are unique/surprising that I haven't included here.
The common names for each part are as follows:

┌1┐ ┌──2───┐
git checkout master
│   └──────3──────┘
└───────4─────────┘

1. Command name (first word or token of the command line that is not a redirection or variable assignment, and after aliases have been expanded).
2. Token, word, or argument to the command. From man bash: "word: A sequence of characters considered as a single unit by the shell. Also known as a token."
3. Generally: arguments.
4. Command line.

The concatenation of two simple commands with a | is a pipe sequence or pipeline:

┌─1┐ ┌──────2──────┐ ┌─2─┐ ┌──2──┐   ┌──1───┐ ┌2┐┌2┐┌2┐┌────2─────┐ ┌2┐ ┌2┐
find transcripts/?.? -name '*.txt' | parallel -- sh -c 'echo $1 $2' {} {/}
│    └────────────3──────────────┘            └────────────3──────────────┘
└───────────────────────────────────4─────────────────────────────────────┘

Mind that there are also redirections and variable assignments:

┌──5──┐ ┌1┐ ┌─2─┐ ┌─2─┐   ┌───6──┐ ┌1┐ ┌─5─┐
<infile tee file1 file2 | LC_ALL=C cat >file
└─────────7───────────┘   └───────7────────┘
└─────────────────────4────────────────────┘

Where (besides the numbers from above):

5. Redirection.
6. Variable assignment.
7. Simple command.

This is not an exhaustive list of all the elements a command line could have. Such a list is too complex for this short answer.
{ "source": [ "https://unix.stackexchange.com/questions/416945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172877/" ] }
417,052
Apologies, this title is not the most elegant I've ever devised. But I assume a lot of people will have wondered this, and my question may be a dupe... all I can say is I haven't found it. When I say "scrolling" up, I mean using the "up arrow" key on the keyboard, which obviously scrolls you up through the history, starting at the most recent command. So you find a command maybe 30 commands back... and you run it. And then you want to run the command which originally came after it... is there is a snappy way of doing this? Or how do those fluent in BASH do this?
Running the command with Ctrl + o instead of Enter will run a command from history and then queue up the next one instead of returning to the front of the bash history.
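If you already know the history numbers involved (the history builtin prints them), history expansion is another route; a sketch (123 is a placeholder number):

history | tail       # look up the numbers of the old commands
!123                 # re-run history entry 123
!124                 # then the entry after it, and so on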
{ "source": [ "https://unix.stackexchange.com/questions/417052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220752/" ] }
417,323
What is the main difference between the directory cron.d (as in /etc/cron.d/ ) and crontab ? As far as I understand, one could create a file like /etc/cron.d/my_non_crontab_cronjobs and put whatever one wants inside it, just as one would put them in crontab via crontab -e . So what is the main difference between the two?
The differences are documented in detail in the cron(8) manpage in Debian. The main difference is that /etc/cron.d is populated with separate files, whereas crontab manages one file per user; it’s thus easier to manage the contents of /etc/cron.d using scripts (for automated installation and updates), and easier to manage crontab using an editor (for end users really). Other important differences are that not all distributions support /etc/cron.d , and that the files in /etc/cron.d have to meet a certain number of requirements (beyond being valid cron jobs): they must be owned by root, and must conform to run-parts ’ naming conventions ( no dots , only letters, digits, underscores, and hyphens). If you’re considering using /etc/cron.d , it’s usually worth considering one of /etc/cron.hourly , /etc/cron.daily , /etc/cron.weekly , or /etc/cron.monthly instead.
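One concrete difference worth keeping in mind when moving jobs between the two: entries in /etc/cron.d (like /etc/crontab) take an extra user field that per-user crontabs don't have. A sketch of /etc/cron.d/myjob (the script path is a placeholder):

# m  h dom mon dow user command
*/5  * *   *   *   root /usr/local/bin/myjob.sh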
{ "source": [ "https://unix.stackexchange.com/questions/417323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
417,426
I submitted lots of SLURM job script with debug time limit (I forgot to change the time for actual run). Now they are all submitted at the same time, so they all start with job ID 197xxxxx. Now, I can do squeue -u $USER | grep 197 | awk '{print $1}' to print the job ID's I want to delete. But how do I use scancel command on all these ID's. The output from the above shell command would look like 19726664 19726663 19726662 19726661 19726660 19726659 19726658 19726657 19726656 19726655 19726654 19726653 19726652 19726651 19726650
squeue -u $USER | grep 197 | awk '{print $1}' | xargs -n 1 scancel

Check the documentation for xargs for details. If scancel accepts multiple job ids (it should), you may omit the -n 1 part.
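Depending on your Slurm version, squeue can also emit just the job IDs itself, which removes the need for grep and awk; a sketch (the 197 prefix is from the question):

squeue -h -u "$USER" -o '%i' | grep '^197' | xargs scancel
# or, if every one of your jobs should go:
scancel -u "$USER"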
{ "source": [ "https://unix.stackexchange.com/questions/417426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115661/" ] }
417,428
I want to copy over .jpg and .png files with scp, but there are files with different extensions in the same folder I'm copying from. I am doing the following:

scp [email protected]:/folder/*.{jpg,png} .

I am asked to enter my password for each extension type. Is there a way to do this such that I enter my password only once?
Just replace it with:

scp [email protected]:'/folder/*.{jpg,png}' .

Please note the pair of single quotes. In your case, your local shell is evaluating the expression, really turning it into:

scp [email protected]:/folder/*.jpg [email protected]:/folder/*.png .

hence the two password prompts. In this solution, the pair of single quotes protects the expression from evaluation by the local shell, so it's the remote shell called by (the remote) scp which evaluates it.
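If you often run several scp or ssh commands against the same host, SSH connection multiplexing is another way to type the password only once per session; a sketch for ~/.ssh/config (the host name is a placeholder, and the ControlPath value is just a common convention):

Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

The first connection asks for the password; subsequent scp/ssh commands to the same host reuse the connection while the master persists.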
{ "source": [ "https://unix.stackexchange.com/questions/417428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
417,906
I am running a fresh install of CentOS 7 GNOME so I could RDP from Windows. I followed the “Connect to GNOME desktop environment via XRDP” instructions, but when I connect I get an additional login that says "authentication is required to create a color profile". How do I remove this additional login?

In an attempt to solve this problem I tried a solution at “Griffon's IT Library”, but it did not work. Because the linked page contains a lot more than just the solution to this problem, I pasted the solution below.

When you login into your system via remote session, you will see this message popping up. You can simply cancel and you will be able to proceed till the next time you login and start a new session. To avoid this prompt, we will need to change the polkit configuration. Using admin privileges, create a file called 02-allow-colord.conf under the following directory /etc/polkit-1/localauthority.conf.d/ The file should contains [sic] the following instructions and you should not be prompted anymore with such authentication request while remoting into your system

polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.color-manager.create-device" ||
         action.id == "org.freedesktop.color-manager.create-profile" ||
         action.id == "org.freedesktop.color-manager.delete-device" ||
         action.id == "org.freedesktop.color-manager.delete-profile" ||
         action.id == "org.freedesktop.color-manager.modify-device" ||
         action.id == "org.freedesktop.color-manager.modify-profile") &&
        subject.isInGroup("{group}")) {
        return polkit.Result.YES;
    }
});
I had the same problem and found a different work-around here: https://github.com/TurboVNC/turbovnc/issues/47#issuecomment-412005377

This variant is claimed to work independently of the authentication scheme (e.g. LDAP). Create /etc/polkit-1/localauthority/50-local.d/color.pkla (note: the .pkla extension is required) with the following contents:

[Allow colord for all users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile;org.freedesktop.packagekit.system-sources-refresh
ResultAny=yes
ResultInactive=yes
ResultActive=yes

Worked for me.

Update: see the next comment in the linked github thread... 18.04 users may want to try the above answer, but with the following changes:

[Allow colord for all users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile;org.freedesktop.packagekit.system-sources-refresh
ResultAny=no
ResultInactive=no
ResultActive=yes
{ "source": [ "https://unix.stackexchange.com/questions/417906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270255/" ] }
418,117
For example, for managing a disk partition for another system where the user exists. I know I can simply create a user temporarily but I find this question interesting.
Yes, you can chown to a numerical UID that does not have a corresponding user.
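A quick demonstration (1005 is an arbitrary unused UID/GID; since there is no name to map the numbers to, ls just shows them raw, roughly as in the comment):

touch testfile
sudo chown 1005:1005 testfile
ls -l testfile
# -rw-r--r-- 1 1005 1005 0 Jan 26 12:00 testfile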
{ "source": [ "https://unix.stackexchange.com/questions/418117", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136914/" ] }
418,131
I'm running Pop_OS on a System76 laptop. It's running Gnome, and for some reason, after re-installing the OS on a new drive (the original SSD borked on me), the font in the notifications is HUGE! We're talking 72pt here! Anyway, after a couple of hours of looking around the interwebs and poking around the system, I've found nothing! A possible cause is a Gnome extension that I installed and later removed. I've tried removing the extensions I installed, and I've also tried adding and re-removing the extensions that I tried. No luck. Here is an image of what I'm dealing with. I'd just like to reset the notifications back to default.
{ "source": [ "https://unix.stackexchange.com/questions/418131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271287/" ] }
418,401
I'm installing Debian 9 on an HP ProLiant DL180. When I boot from a USB drive, it opens grub2, and when I type boot it gives an error: you need to load kernel first.
From grub-rescue, type set and then hit Tab; it will help you to set the first parameters, e.g.:

set prefix=(hd0,gpt2)/boot/grub
set root=(hd0,gpt2)
insmod normal
normal

If you then still see the error "you need to load kernel first", load the kernel with the following commands:

insmod linux
linux /vmlinuz root=/dev/sda2
initrd /initrd.img
boot

Change /dev/sda2 to your root partition, and change gpt2 to the corresponding msdos2 if you don't have a GUID partition table. To correctly set your boot parameters, see the Ubuntu documentation: search and set
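If you are not sure which (hdX,gptY) holds your root filesystem, grub's ls command lets you explore before setting the parameters above; a sketch (device names and contents will differ on your machine):

grub> ls
(hd0) (hd0,gpt2) (hd0,gpt1)
grub> ls (hd0,gpt2)/
lost+found/ boot/ etc/ home/ ...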
{ "source": [ "https://unix.stackexchange.com/questions/418401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215550/" ] }
418,424
I want to configure lamp stack for my ubuntu distro, but I have some troubles. After sudo apt-get install lamp-server^ I get: Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'libhttp-message-perl' for task 'lamp-server' Note, selecting 'libencode-locale-perl' for task 'lamp-server' Note, selecting 'php7.0-cli' for task 'lamp-server' Note, selecting 'mysql-client-5.7' for task 'lamp-server' Note, selecting 'libapache2-mod-php' for task 'lamp-server' Note, selecting 'rename' for task 'lamp-server' Note, selecting 'mysql-server-5.7' for task 'lamp-server' Note, selecting 'php-common' for task 'lamp-server' Note, selecting 'libaprutil1' for task 'lamp-server' Note, selecting 'mysql-server' for task 'lamp-server' Note, selecting 'php7.0-opcache' for task 'lamp-server' Note, selecting 'libcgi-fast-perl' for task 'lamp-server' Note, selecting 'libwrap0' for task 'lamp-server' Note, selecting 'libhttp-date-perl' for task 'lamp-server' Note, selecting 'perl-modules-5.22' for task 'lamp-server' Note, selecting 'liblwp-mediatypes-perl' for task 'lamp-server' Note, selecting 'libfcgi-perl' for task 'lamp-server' Note, selecting 'libcgi-pm-perl' for task 'lamp-server' Note, selecting 'libaprutil1-dbd-sqlite3' for task 'lamp-server' Note, selecting 'php7.0-common' for task 'lamp-server' Note, selecting 'libaio1' for task 'lamp-server' Note, selecting 'libio-html-perl' for task 'lamp-server' Note, selecting 'ssl-cert' for task 'lamp-server' Note, selecting 'apache2-data' for task 'lamp-server' Note, selecting 'libperl5.22' for task 'lamp-server' Note, selecting 'libapr1' for task 'lamp-server' Note, selecting 'libaprutil1-ldap' for task 'lamp-server' Note, selecting 'libhtml-tagset-perl' for task 'lamp-server' Note, selecting 'mysql-client-core-5.7' for task 'lamp-server' Note, selecting 'php7.0-json' for task 'lamp-server' Note, selecting 'php7.0-readline' for task 'lamp-server' Note, selecting 'tcpd' for task 'lamp-server' Note, selecting 'liblua5.1-0' for task 'lamp-server' Note, selecting 'mysql-common' for task 'lamp-server' Note, selecting 'libhtml-template-perl' for task 'lamp-server' Note, selecting 'libtimedate-perl' for task 'lamp-server' Note, selecting 'apache2-bin' for task 'lamp-server' Note, selecting 'perl' for task 'lamp-server' Note, selecting 'apache2' for task 'lamp-server' Note, selecting 'php-mysql' for task 'lamp-server' Note, selecting 'apache2-utils' for task 'lamp-server' Note, selecting 'libhtml-parser-perl' for task 'lamp-server' Note, selecting 'libapache2-mod-php7.0' for task 'lamp-server' Note, selecting 'liburi-perl' for task 'lamp-server' Note, selecting 'mysql-server-core-5.7' for task 'lamp-server' Note, selecting 'php7.0-mysql' for task 'lamp-server' libaio1 is already the newest version (0.3.110-2). libapache2-mod-php is already the newest version (1:7.0+35ubuntu6). libapr1 is already the newest version (1.5.2-3). libaprutil1 is already the newest version (1.5.4-1build1). libaprutil1-dbd-sqlite3 is already the newest version (1.5.4-1build1). libaprutil1-ldap is already the newest version (1.5.4-1build1). libcgi-fast-perl is already the newest version (1:2.10-1). libcgi-pm-perl is already the newest version (4.26-1). libencode-locale-perl is already the newest version (1.05-1). libfcgi-perl is already the newest version (0.77-1build1). libhtml-parser-perl is already the newest version (3.72-1). libhtml-tagset-perl is already the newest version (3.20-2). 
libhtml-template-perl is already the newest version (2.95-2). libhttp-date-perl is already the newest version (6.02-1). libhttp-message-perl is already the newest version (6.11-1). libio-html-perl is already the newest version (1.001-1). liblua5.1-0 is already the newest version (5.1.5-8ubuntu1). liblwp-mediatypes-perl is already the newest version (6.02-1). libtimedate-perl is already the newest version (2.3000-2). liburi-perl is already the newest version (1.71-1). libwrap0 is already the newest version (7.6.q-25). php-common is already the newest version (1:35ubuntu6). php-mysql is already the newest version (1:7.0+35ubuntu6). rename is already the newest version (0.20-4). ssl-cert is already the newest version (1.0.37). tcpd is already the newest version (7.6.q-25). apache2 is already the newest version (2.4.18-2ubuntu3.5). apache2-bin is already the newest version (2.4.18-2ubuntu3.5). apache2-data is already the newest version (2.4.18-2ubuntu3.5). apache2-utils is already the newest version (2.4.18-2ubuntu3.5). libapache2-mod-php7.0 is already the newest version (7.0.22-0ubuntu0.16.04.1). libperl5.22 is already the newest version (5.22.1-9ubuntu0.2). mysql-client-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1). mysql-client-core-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1). mysql-common is already the newest version (5.7.20-0ubuntu0.16.04.1). mysql-server is already the newest version (5.7.20-0ubuntu0.16.04.1). mysql-server-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1). mysql-server-core-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1). perl is already the newest version (5.22.1-9ubuntu0.2). perl-modules-5.22 is already the newest version (5.22.1-9ubuntu0.2). php7.0-cli is already the newest version (7.0.22-0ubuntu0.16.04.1). php7.0-common is already the newest version (7.0.22-0ubuntu0.16.04.1). php7.0-json is already the newest version (7.0.22-0ubuntu0.16.04.1). php7.0-mysql is already the newest version (7.0.22-0ubuntu0.16.04.1). php7.0-opcache is already the newest version (7.0.22-0ubuntu0.16.04.1). php7.0-readline is already the newest version (7.0.22-0ubuntu0.16.04.1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue? [Y/n] y Setting up mysql-server-5.7 (5.7.20-0ubuntu0.16.04.1) ... Renaming removed key_buffer and myisam-recover options (if present) Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details. invoke-rc.d: initscript mysql, action "start" failed. ● mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since sob 2018-01-20 10:55:17 CET; 17ms ago Process: 4551 ExecStartPost=/usr/share/mysql/mysql-systemd-start post (code=exited, status=0/SUCCESS) Process: 4550 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE) Process: 4542 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS) Main PID: 4550 (code=exited, status=1/FAILURE) sty 20 10:55:17 len-machine systemd[1]: Failed to start MySQL Community Server. sty 20 10:55:17 len-machine systemd[1]: mysql.service: Unit entered failed s.... sty 20 10:55:17 len-machine systemd[1]: mysql.service: Failed with result 'e.... Hint: Some lines were ellipsized, use -l to show in full. 
dpkg: error processing package mysql-server-5.7 (--configure): subprocess installed post-installation script returned error exit status 1 Setting up oracle-java8-installer (8u151-1~webupd8~0) ... Using wget settings from /var/cache/oracle-jdk8-installer/wgetrc Downloading Oracle Java 8... --2018-01-20 10:55:18-- http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz Resolving download.oracle.com (download.oracle.com)... 104.104.142.192 Connecting to download.oracle.com (download.oracle.com)|104.104.142.192|:80... connected. HTTP request sent, awaiting response... 302 Moved Temporarily Location: https://edelivery.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz [following] --2018-01-20 10:55:18-- https://edelivery.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz Resolving edelivery.oracle.com (edelivery.oracle.com)... 2a02:26f0:d8:39a::2d3e, 2a02:26f0:d8:389::2d3e, 104.81.108.164 Connecting to edelivery.oracle.com (edelivery.oracle.com)|2a02:26f0:d8:39a::2d3e|:443... connected. HTTP request sent, awaiting response... 302 Moved Temporarily Location: http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516442239_54c9d78d4d9e3a8f11df3af6b410580b [following] --2018-01-20 10:55:19-- http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516442239_54c9d78d4d9e3a8f11df3af6b410580b Connecting to download.oracle.com (download.oracle.com)|104.104.142.192|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2018-01-20 10:55:20 ERROR 404: Not Found. download failed Oracle JDK 8 is NOT installed. dpkg: error processing package oracle-java8-installer (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.7; however: Package mysql-server-5.7 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.7 oracle-java8-installer mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1)

I have no idea what is going on. Do you have any tips on how to solve this?
{ "source": [ "https://unix.stackexchange.com/questions/418424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190580/" ] }
418,509
I sshed to my server and ran wget -r -np zzz.aaa/bbb/ccc and it started working. Then my Internet connection (at my home) got interrupted, and I got worried, assuming that wget had been HUPped because the ssh connection was lost and therefore the terminal had died. But then I sshed to my server and realized that it was still running, putting its output in wget.log and downloading stuff. Can someone please explain to me what might have happened here? This is what ps gives me:

PID %CPU %MEM    VSZ    RSS TTY   STAT START  TIME COMMAND
32283  0.6 29.4 179824 147088 ?   S    14:00  1:53 wget -r -np zzz.aaa/bbb/ccc

What does the question mark (?) mean in the TTY column?
Programs (and scripts) can choose to ignore most signals, except a few like KILL. The HUP signal can be caught and ignored if the software so wishes. This is from src/main.c of the wget sources (version 1.19.2):

/* Hangup signal handler.  When wget receives SIGHUP or SIGUSR1, it
   will proceed operation as usual, trying to write into a log file.
   If that is impossible, the output will be turned off.  */

A bit further down the signal handler is installed:

/* Setup the signal handler to redirect output when hangup is
   received. */
if (signal(SIGHUP, SIG_IGN) != SIG_IGN)
    signal(SIGHUP, redirect_output_signal);

So it looks like wget is not ignoring the HUP signal, but it chooses to continue processing with its output redirected to the log file.

Requested in comments: the meaning of the ? in the TTY column of the output from ps in the question is that the wget process is no longer associated with a terminal/TTY. The TTY went away when the SSH connection went down.
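For programs that do not install their own SIGHUP handler the way wget does, you have to arrange the protection yourself before the connection drops; a sketch:

nohup some-long-command >out.log 2>&1 &   # immune to hangups from the start
# or, for a job already running in the current bash session:
# press Ctrl-Z to suspend it, then
bg
disown -h %1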
{ "source": [ "https://unix.stackexchange.com/questions/418509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231067/" ] }
418,765
I got two NICs on the server side, eth0 -> 192.168.8.140 and eth1 -> 192.168.8.142. The client sends data to 192.168.8.142, and I expect iftop to show the traffic for eth1, but it does not. All the traffic goes through eth0, so how can I test the two NICs? Why does all the traffic go through eth0 instead of eth1? I expected I could get 1 Gbit/s per interface. What's wrong with my setup or configuration?

Server ifconfig:

eth0      Link encap:Ethernet  HWaddr 00:00:00:19:26:B0
          inet addr:192.168.8.140  Bcast:0.0.0.0  Mask:255.255.252.0
          inet6 addr: 0000::0000:0000:fe19:26b0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:45287446 errors:0 dropped:123343 overruns:2989 frame:0
          TX packets:3907747 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:66881007720 (62.2 GiB)  TX bytes:261053436 (248.9 MiB)
          Memory:f7e00000-f7efffff

eth1      Link encap:Ethernet  HWaddr 00:00:00:19:26:B1
          inet addr:192.168.8.142  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: 0000::0000:0000:fe19:26b1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19358 errors:0 dropped:511 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1772275 (1.6 MiB)  TX bytes:1068 (1.0 KiB)
          Memory:f7c00000-f7cfffff

Server side:

# Listen for incoming data on 192.168.8.142
nc -v -v -n -k -l 192.168.8.142 8000 | pv > /dev/null
Listening on [192.168.8.142] (family 0, port 8000)
Connection from 192.168.8.135 58785 received!

Client:

# Send to 192.168.8.142
time yes | pv | nc -s 192.168.8.135 -4 -v -v -n 192.168.8.142 8000 >/dev/null
Connection to 192.168.8.142 8000 port [tcp/*] succeeded!

Server side:

$ iftop -i eth0
interface: eth0
IP address is: 192.168.8.140
TX:  cumm: 6.34MB  peak: 2.31Mb  rates: 2.15Mb 2.18Mb 2.11Mb
RX:        2.55GB        955Mb          874Mb  892Mb  872Mb
TOTAL:     2.56GB        958Mb          877Mb  895Mb  874Mb

$ iftop -i eth1
interface: eth1
IP address is: 192.168.8.142
TX:  cumm: 0B      peak: 0b      rates: 0b     0b     0b
RX:        4.51KB        3.49Kb         3.49Kb 2.93Kb 2.25Kb
TOTAL:     4.51KB        3.49Kb         3.49Kb 2.93Kb 2.25Kb

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:00:19:26:b0 brd ff:ff:ff:ff:ff:ff

$ ip link show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:00:19:26:b1 brd ff:ff:ff:ff:ff:ff
There are two possible design models for a TCP/IP network stack: a strong host model and a weak host model. You're expecting behavior that would match the strong host model. Linux is designed to use the weak host model. In general the weak host model is more common as it reduces the complexity of the routing code and thus might offer better performance. Otherwise the two host models are just different design principles: neither is inherently better than the other. Basically, the weak host model means that outgoing traffic will be sent out the first interface listed in the routing table that matches the IP address of the destination (or selected gateway, if the destination is not reachable directly), without regard to the source IP address . This is basically why it's generally inadvisable to use two separate physical interfaces if you need two IP addresses on the same network segment. Instead assign two IP addresses for one interface (IP aliases: e.g. eth1 = 192.168.8.142 and eth1:0 = 192.168.8.140). If you need more bandwidth than a single interface can provide, bond (or team, if applicable) two or more interfaces together, and then run both IPs on the bond/team. By tweaking a number of sysctl settings and using the "advanced routing" functionality to set up independent routing tables for each NIC, it is possible to make Linux behave like a strong-host-model system. But that is a very special configuration, and I would recommend thinking twice before implementing it. See the answers at Linux Source Routing, Strong End System Model / Strong Host Model? if you really need it.
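For reference, the "advanced routing" approach mentioned above boils down to one extra routing table per NIC, selected by source address. A sketch only, using the addresses from the question (the table number 101 is arbitrary, and sysctls such as rp_filter and the arp_* settings usually need adjusting as well):

ip route add 192.168.8.0/22 dev eth1 src 192.168.8.142 table 101
ip rule add from 192.168.8.142 table 101

With that in place, traffic sourced from 192.168.8.142 leaves via eth1 instead of following the main table out of eth0.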
{ "source": [ "https://unix.stackexchange.com/questions/418765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47774/" ] }
418,784
What are the min and max values of the following exit codes in Linux?

- The exit code returned from a binary executable (for example: a C program).
- The exit code returned from a bash script (when calling exit).
- The exit code returned from a function (when calling return).

I think this is between 0 and 255.
The number passed to the _exit() / exit_group() system call (sometimes referred to as the exit code, to avoid the ambiguity with exit status, which also refers to an encoding of either the exit code or signal number and additional info depending on whether the process was killed or exited normally) is of type int, so on Unix-like systems like Linux, typically a 32-bit integer with values from -2147483648 (-2^31) to 2147483647 (2^31 - 1).

However, on all systems, when the parent process (or the child subreaper or init if the parent died) uses the wait(), waitpid(), wait3() or wait4() system calls to retrieve it, only the lower 8 bits of it are available (values 0 to 255 (2^8 - 1)).

When using the waitid() API (or a signal handler on SIGCHLD), on most systems (and as POSIX now more clearly requires in the 2016 edition of the standard (see the _exit() specification)), the full number is available (in the si_status field of the returned structure). That is not the case on Linux yet though, which also truncates the number to 8 bits with the waitid() API, though that's likely to change in the future.

Generally, you'd want to use only values 0 (generally meaning success) to 125, as many shells use values above 128 in their $? representation of the exit status to encode the signal number of a process being killed, and 126 and 127 for special conditions.

You may want to use 126 to 255 on exit() to mean the same thing as they do for the shell's $? (like when a script does ret=$?; ...; exit "$ret"). Using values outside 0 -> 255 is generally not useful. You'd generally only do that if you know the parent will use the waitid() API on systems that don't truncate and you happen to have a need for the 32-bit range of values.

Note that if you do an exit(2048) for instance, that will be seen as success by parents using the traditional wait*() APIs.

More info at: Default exit code when process is terminated? That Q&A should hopefully answer most of your other questions and clarify what is meant by exit status. I'll add a few more things:

A process cannot terminate unless it's killed or calls the _exit() / exit_group() system calls. When you return from main() in C, the libc calls that system call with the return value. Most languages have an exit() function that wraps that system call, and the value they take, if any, is generally passed as-is to the system call. (Note that those generally do more things, like the clean-up done by C's exit() function that flushes the stdio buffers, runs the atexit() hooks...)

That's the case of at least:

$ strace -e exit_group awk 'BEGIN{exit(1234)}'
exit_group(1234) = ?
$ strace -e exit_group mawk 'BEGIN{exit(1234)}'
exit_group(1234) = ?
$ strace -e exit_group busybox awk 'BEGIN{exit(1234)}'
exit_group(1234) = ?
$ echo | strace -e exit_group sed 'Q1234'
exit_group(1234) = ?
$ strace -e exit_group perl -e 'exit(1234)'
exit_group(1234) = ?
$ strace -e exit_group python -c 'exit(1234)'
exit_group(1234) = ?
$ strace -e exit_group expect -c 'exit 1234'
exit_group(1234) = ?
$ strace -e exit_group php -r 'exit(1234);'
exit_group(1234) = ?
$ strace -e exit_group zsh -c 'exit 1234'
exit_group(1234)

You occasionally see some that complain when you use a value outside of 0-255:

$ echo 'm4exit(1234)' | strace -e exit_group m4
m4:stdin:1: exit status out of range: `1234'
exit_group(1) = ?

Some shells complain when you use a negative value:

$ strace -e exit_group dash -c 'exit -1234'
dash: 1: exit: Illegal number: -1234
exit_group(2) = ?
$ strace -e exit_group yash -c 'exit -- -1234'
exit: `-1234' is not a valid integer
exit_group(2) = ?

POSIX leaves the behaviour undefined if the value passed to the exit special builtin is outside 0->255, and some shells show unexpected behaviours there. bash (and mksh, but not pdksh on which it is based) takes it upon itself to truncate the value to 8 bits:

$ strace -e exit_group bash -c 'exit 1234'
exit_group(210) = ?

So in those shells, if you do want to exit with a value outside of 0-255, you have to do something like:

exec zsh -c 'exit -- -12345'
exec perl -e 'exit(-12345)'

That is, execute another command in the same process that can call the system call with the value you want.

As mentioned at that other Q&A, ksh93 has the weirdest behaviour for exit values from 257 to 256+max_signal_number, where instead of calling exit_group(), it kills itself with the corresponding signal¹.

$ ksh -c 'exit "$((256 + $(kill -l STOP)))"'
zsh: suspended (signal)  ksh -c 'exit "$((256 + $(kill -l STOP)))"'

and otherwise truncates the number like bash / mksh.

¹ That's likely to change in the next version though. Now that the development of ksh93 has been taken over as a community effort outside of AT&T, that behaviour, even though somehow encouraged by POSIX, is being reverted.
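The 8-bit truncation seen through the wait*() APIs is easy to demonstrate from a shell (300 mod 256 is 44, and -1 maps to 255):

$ bash -c 'exit 300'; echo "$?"
44
$ perl -e 'exit(-1)'; echo "$?"
255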
{ "source": [ "https://unix.stackexchange.com/questions/418784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271801/" ] }
418,820
I have a Debian stretch where I'm running transmission-daemon as a service. I keep my seeded files on an external USB hard disk drive mounted on /mnt/external-disk. This disk has an ext4 filesystem, and I mapped it in /etc/fstab by UUID. The problem is: when the transmission-daemon service starts at boot, it doesn't check whether the external filesystem is already mounted, so it doesn't find the files on it; I get a data error and the torrent files are not seeded, although the service starts. To resolve this problem I checked the systemd documentation and found what was missing: a RequiresMountsFor= line in the [Unit] section of the transmission-daemon.service file, which is located in the tree below /lib/systemd/. After I added that line with the path of the mountpoint /mnt/external-disk, the problem disappeared and the service was working fine. If I rebooted the machine, the service was working, and the files were seeded. This worked until an apt-get dist-upgrade in which the package transmission-daemon was involved; afterwards it stopped working. So I checked transmission-daemon.service and found that the modification I had made was missing. I added the RequiresMountsFor= line another time with the proper path, and the problem was fixed again. My question is: how can I make this modification persistent?
You should override the unit with a unit in /etc . The easiest way to do this is to use systemctl edit : sudo systemctl edit transmission-daemon will open an editor and allow you to create a override snippet. An override snippet ensures that future changes to the package’s unit (in /lib ) are taken into account: the reference will be the package’s unit, with your overrides applied on top. All you need to use this in your case is a .conf file in /etc/systemd/system/transmission-daemon.service.d/ , containing only the section and RequiresMountsFor line. systemctl edit will do this for you, creating an override.conf file in the appropriate location. Alternatively, you can copy the full /lib/systemd/system/transmission-daemon.service unit to /etc/systemd/system and edit that. Again, systemctl edit can take care of this for you, with the --full option. Look for “Example 2. Overriding vendor settings” in the systemd.unit documentation for details.
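In this particular case, the override snippet only needs the one directive; after sudo systemctl edit transmission-daemon, the resulting /etc/systemd/system/transmission-daemon.service.d/override.conf would contain something like:

[Unit]
RequiresMountsFor=/mnt/external-disk

followed by a systemctl daemon-reload (systemctl edit does this for you) so the change takes effect.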
{ "source": [ "https://unix.stackexchange.com/questions/418820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172889/" ] }
419,374
I'm trying to restart services after a yum update on RHEL 7.4. I could restart every service using systemctl, but needs-restarting from yum-utils tells me that I should also restart systemd itself:

# needs-restarting
1 : /usr/lib/systemd/systemd --system --deserialize 21

Can I restart systemd without rebooting the server, and how? I found a few mentions of systemctl daemon-reload, but this doesn't make it disappear from the needs-restarting list.
To restart the daemon, run systemctl daemon-reexec This is documented in the systemctl manpage : Reexecute the systemd manager. This will serialize the manager state, reexecute the process and deserialize the state again. This command is of little use except for debugging and package upgrades. Sometimes, it might be helpful as a heavy-weight daemon-reload . While the daemon is being reexecuted, all sockets systemd listening on behalf of user configuration will stay accessible. Unfortunately needs-restarting can’t determine that systemd has actually restarted. systemd execs itself to restart, which doesn’t reset the process’s start time; but needs-restarting compares the executable’s modification time with the process’s start time to determine whether a process needs to be restarted (among other things), and as a result it always considers that systemd needs to be restarted... To determine whether systemd really needs to be restarted, you can check the output of lsof -p1 | grep deleted : systemd uses a library, libsystemd-shared , which is shipped in the same package and is thus upgraded along with the daemon, so if systemd needs to be restarted you’ll see it using a deleted version of the library. If lsof shows no deleted files, systemd doesn’t need to be restarted. (Thanks to Jeff Schaller for the hint!)
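Putting the check and the restart together, a sketch:

if lsof -p 1 | grep -q deleted; then
    systemctl daemon-reexec
fi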
{ "source": [ "https://unix.stackexchange.com/questions/419374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30018/" ] }
419,697
After finding out that several common commands (such as read ) are actually Bash builtins (and when running them at the prompt I'm actually running a two-line shell script which just forwards to the builtin), I was looking to see if the same is true for true and false . Well, they are definitely binaries. sh-4.2$ which true /usr/bin/true sh-4.2$ which false /usr/bin/false sh-4.2$ file /usr/bin/true /usr/bin/true: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=2697339d3c19235 06e10af65aa3120b12295277e, stripped sh-4.2$ file /usr/bin/false /usr/bin/false: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=b160fa513fcc13 537d7293f05e40444fe5843640, stripped sh-4.2$ However, what I found most surprising was their size. I expected them to be only a few bytes each, as true is basically just exit 0 and false is exit 1 . sh-4.2$ true sh-4.2$ echo $? 0 sh-4.2$ false sh-4.2$ echo $? 1 sh-4.2$ However I found to my surprise that both files are over 28KB in size. sh-4.2$ stat /usr/bin/true File: '/usr/bin/true' Size: 28920 Blocks: 64 IO Block: 4096 regular file Device: fd2ch/64812d Inode: 530320 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2018-01-25 19:46:32.703463708 +0000 Modify: 2016-06-30 09:44:27.000000000 +0100 Change: 2017-12-22 09:43:17.447563336 +0000 Birth: - sh-4.2$ stat /usr/bin/false File: '/usr/bin/false' Size: 28920 Blocks: 64 IO Block: 4096 regular file Device: fd2ch/64812d Inode: 530697 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2018-01-25 20:06:27.210764704 +0000 Modify: 2016-06-30 09:44:27.000000000 +0100 Change: 2017-12-22 09:43:18.148561245 +0000 Birth: - sh-4.2$ So my question is: Why are they so big? What's in the executable other than the return code? PS: I am using RHEL 7.4
In the past, /bin/true and /bin/false in the shell were actually scripts. For instance, in a PDP/11 Unix System 7: $ ls -la /bin/true /bin/false -rwxr-xr-x 1 bin 7 Jun 8 1979 /bin/false -rwxr-xr-x 1 bin 0 Jun 8 1979 /bin/true $ $ cat /bin/false exit 1 $ $ cat /bin/true $ Nowadays, at least in bash , the true and false commands are implemented as shell built-in commands. Thus no executable binary files are invoked by default, both when using the false and true directives in the bash command line and inside shell scripts. From the bash source, builtins/mkbuiltins.c : char *posix_builtins[] = { "alias", "bg", "cd", "command", "**false**", "fc", "fg", "getopts", "jobs", "kill", "newgrp", "pwd", "read", "**true**", "umask", "unalias", "wait", (char *)NULL }; Also per @meuh comments: $ command -V true false true is a shell builtin false is a shell builtin So it can be said with a high degree of certainty that the true and false executable files exist mainly for being called from other programs . From now on, the answer will focus on the /bin/true binary from the coreutils package in Debian 9 / 64 bits ( /usr/bin/true on RedHat; both RedHat and Debian use the coreutils package, and the Debian build was analysed simply because it was more readily at hand). As can be seen in the source file false.c , /bin/false is compiled from (almost) the same source code as /bin/true , just returning EXIT_FAILURE (1) instead, so this answer applies to both binaries: #define EXIT_STATUS EXIT_FAILURE #include "true.c" This is also confirmed by both executables having the same size: $ ls -l /bin/true /bin/false -rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/false -rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/true Alas, the direct answer to the question why are true and false so large? could be: because there are no longer any pressing reasons to care about their top performance. They are not essential to bash performance, not being used anymore by bash (scripting). Similar comments apply to their size: 26KB, for the kind of hardware we have nowadays, is insignificant. Space is not at a premium for the typical server/desktop anymore, and they do not even bother to use the same binary for false and true , as it is simply deployed twice in distributions using coreutils . Focusing, however, on the real spirit of the question: why does something that should be so simple and small get so large? The real distribution of the sections of /bin/true is as these charts show; the main code+data amounts to roughly 3KB out of a 26KB binary, which is only about 12% of the size of /bin/true . The true utility did indeed gain more cruft code over the years, most notably the standard support for --version and --help . However, that is not the (only) main justification for it being so big; rather, while being dynamically linked (using shared libs), it also has part of a generic library commonly used by coreutils binaries linked in as a static library. The metadata for building an elf executable file also accounts for a significant part of the binary, it being a relatively small file by today's standards. The rest of the answer explains how we built the following charts detailing the composition of the /bin/true executable binary file and how we arrived at that conclusion. As @Maks says, the binary was compiled from C; as per my comment, it is also confirmed that it is from coreutils. 
We are pointing directly to the author(s) git https://github.com/wertarbyte/coreutils/blob/master/src/true.c , instead of the GNU git as @Maks did (same sources, different repositories - this repository was selected as it has the full source of the coreutils libraries). We can see the various building blocks of the /bin/true binary here (Debian 9 - 64 bits from coreutils ): $ file /bin/true /bin/true: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=9ae82394864538fa7b23b7f87b259ea2a20889c4, stripped $ size /bin/true text data bss dec hex filename 24583 1160 416 26159 662f true Of those: text (usually code) is around 24KB data (initialised variables, mostly strings) are around 1KB bss (uninitialized data) 0.5KB Of the 24KB, around 1KB is for fixing up the 58 external functions. That still leaves roughly 23KB for the rest of the code. We will show below that the actual main file - the main()+usage() code - is around 1KB compiled, and explain what the other 22KB are used for. Drilling further down the binary with readelf -S true , we can see that while the binary is 26159 bytes, the actual compiled code is 13017 bytes, and the rest is assorted data/initialisation code. However, true.c is not the whole story, and 13KB seems pretty excessive if it were only that file; we can see functions called in main() that are not listed among the external functions seen in the elf with objdump -T true ; functions that are present at: https://github.com/coreutils/gnulib/blob/master/lib/progname.c https://github.com/coreutils/gnulib/blob/master/lib/closeout.c https://github.com/coreutils/gnulib/blob/master/lib/version-etc.c Those extra functions not linked externally in main() are: set_program_name() close_stdout() version_etc() So my first suspicion was partly correct: whilst the binary is using dynamic libraries, /bin/true is big *because it has some static libraries included with it* (but that is not the only cause). Compiling C code is not usually so inefficient as to leave such space unaccounted for, hence my initial suspicion that something was amiss. The extra space, almost 90% of the size of the binary, is indeed extra libraries/elf metadata. While using Hopper to disassemble/decompile the binary to understand where the functions are, it can be seen that the compiled binary code of the true.c/usage() function is actually 833 bytes, and that of the true.c/main() function is 225 bytes, which together is roughly 1KB. The logic for the version functions, which is buried in the static libraries, is around 1KB. The actual compiled main()+usage()+version()+strings+vars only use up around 3KB to 3.5KB. It is indeed ironic that such small and humble utilities have become bigger for the reasons explained above. related question: Understanding what a Linux binary is doing true.c main() with the offending function calls: int main (int argc, char **argv) { /* Recognize --help or --version only if it's the only command-line argument. 
*/ if (argc == 2) { initialize_main (&argc, &argv); set_program_name (argv[0]); <----------- setlocale (LC_ALL, ""); bindtextdomain (PACKAGE, LOCALEDIR); textdomain (PACKAGE); atexit (close_stdout); <----- if (STREQ (argv[1], "--help")) usage (EXIT_STATUS); if (STREQ (argv[1], "--version")) version_etc (stdout, PROGRAM_NAME, PACKAGE_NAME, Version, AUTHORS, <------ (char *) NULL); } exit (EXIT_STATUS); } The decimal size of the various sections of the binary: $ size -A -t true true : section size addr .interp 28 568 .note.ABI-tag 32 596 .note.gnu.build-id 36 628 .gnu.hash 60 664 .dynsym 1416 728 .dynstr 676 2144 .gnu.version 118 2820 .gnu.version_r 96 2944 .rela.dyn 624 3040 .rela.plt 1104 3664 .init 23 4768 .plt 752 4800 .plt.got 8 5552 .text 13017 5568 .fini 9 18588 .rodata 3104 18624 .eh_frame_hdr 572 21728 .eh_frame 2908 22304 .init_array 8 2125160 .fini_array 8 2125168 .jcr 8 2125176 .data.rel.ro 88 2125184 .dynamic 480 2125272 .got 48 2125752 .got.plt 392 2125824 .data 128 2126240 .bss 416 2126368 .gnu_debuglink 52 0 Total 26211 Output of readelf -S true $ readelf -S true There are 30 section headers, starting at offset 0x7368: Section Headers: [Nr] Name Type Address Offset Size EntSize Flags Link Info Align [ 0] NULL 0000000000000000 00000000 0000000000000000 0000000000000000 0 0 0 [ 1] .interp PROGBITS 0000000000000238 00000238 000000000000001c 0000000000000000 A 0 0 1 [ 2] .note.ABI-tag NOTE 0000000000000254 00000254 0000000000000020 0000000000000000 A 0 0 4 [ 3] .note.gnu.build-i NOTE 0000000000000274 00000274 0000000000000024 0000000000000000 A 0 0 4 [ 4] .gnu.hash GNU_HASH 0000000000000298 00000298 000000000000003c 0000000000000000 A 5 0 8 [ 5] .dynsym DYNSYM 00000000000002d8 000002d8 0000000000000588 0000000000000018 A 6 1 8 [ 6] .dynstr STRTAB 0000000000000860 00000860 00000000000002a4 0000000000000000 A 0 0 1 [ 7] .gnu.version VERSYM 0000000000000b04 00000b04 0000000000000076 0000000000000002 A 5 0 2 [ 8] .gnu.version_r VERNEED 0000000000000b80 00000b80 0000000000000060 0000000000000000 A 6 1 8 [ 9] .rela.dyn RELA 0000000000000be0 00000be0 0000000000000270 0000000000000018 A 5 0 8 [10] .rela.plt RELA 0000000000000e50 00000e50 0000000000000450 0000000000000018 AI 5 25 8 [11] .init PROGBITS 00000000000012a0 000012a0 0000000000000017 0000000000000000 AX 0 0 4 [12] .plt PROGBITS 00000000000012c0 000012c0 00000000000002f0 0000000000000010 AX 0 0 16 [13] .plt.got PROGBITS 00000000000015b0 000015b0 0000000000000008 0000000000000000 AX 0 0 8 [14] .text PROGBITS 00000000000015c0 000015c0 00000000000032d9 0000000000000000 AX 0 0 16 [15] .fini PROGBITS 000000000000489c 0000489c 0000000000000009 0000000000000000 AX 0 0 4 [16] .rodata PROGBITS 00000000000048c0 000048c0 0000000000000c20 0000000000000000 A 0 0 32 [17] .eh_frame_hdr PROGBITS 00000000000054e0 000054e0 000000000000023c 0000000000000000 A 0 0 4 [18] .eh_frame PROGBITS 0000000000005720 00005720 0000000000000b5c 0000000000000000 A 0 0 8 [19] .init_array INIT_ARRAY 0000000000206d68 00006d68 0000000000000008 0000000000000008 WA 0 0 8 [20] .fini_array FINI_ARRAY 0000000000206d70 00006d70 0000000000000008 0000000000000008 WA 0 0 8 [21] .jcr PROGBITS 0000000000206d78 00006d78 0000000000000008 0000000000000000 WA 0 0 8 [22] .data.rel.ro PROGBITS 0000000000206d80 00006d80 0000000000000058 0000000000000000 WA 0 0 32 [23] .dynamic DYNAMIC 0000000000206dd8 00006dd8 00000000000001e0 0000000000000010 WA 6 0 8 [24] .got PROGBITS 0000000000206fb8 00006fb8 0000000000000030 0000000000000008 WA 0 0 8 [25] .got.plt PROGBITS 0000000000207000 
00007000 0000000000000188 0000000000000008 WA 0 0 8 [26] .data PROGBITS 00000000002071a0 000071a0 0000000000000080 0000000000000000 WA 0 0 32 [27] .bss NOBITS 0000000000207220 00007220 00000000000001a0 0000000000000000 WA 0 0 32 [28] .gnu_debuglink PROGBITS 0000000000000000 00007220 0000000000000034 0000000000000000 0 0 1 [29] .shstrtab STRTAB 0000000000000000 00007254 000000000000010f 0000000000000000 0 0 1 Key to Flags: W (write), A (alloc), X (execute), M (merge), S (strings), I (info), L (link order), O (extra OS processing required), G (group), T (TLS), C (compressed), x (unknown), o (OS specific), E (exclude), l (large), p (processor specific) Output of objdump -T true (external functions dynamically linked on run-time) $ objdump -T true true: file format elf64-x86-64 DYNAMIC SYMBOL TABLE: 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __uflow 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getenv 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 free 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 abort 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __errno_location 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strncmp 0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 _exit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __fpending 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 textdomain 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fclose 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 bindtextdomain 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 dcgettext 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __ctype_get_mb_cur_max 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.4 __stack_chk_fail 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbrtowc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strrchr 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 lseek 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memset 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fscanf 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 close 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcmp 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fputs_unlocked 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 calloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strcmp 0000000000000000 w D *UND* 0000000000000000 __gmon_start__ 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fileno 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 malloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fflush 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 nl_langinfo 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 ungetc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __freading 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 realloc 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fdopen 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 setlocale 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __printf_chk 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 error 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 open 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fseeko 0000000000000000 w D *UND* 0000000000000000 
_Jv_RegisterClasses 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_atexit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 exit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fwrite 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __fprintf_chk 0000000000000000 w D *UND* 0000000000000000 _ITM_registerTMCloneTable 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbsinit 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 iswprint 0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3 __ctype_b_loc 0000000000207228 g DO .bss 0000000000000008 GLIBC_2.2.5 stdout 0000000000207220 g DO .bss 0000000000000008 GLIBC_2.2.5 __progname 0000000000207230 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_name 0000000000207230 g DO .bss 0000000000000008 GLIBC_2.2.5 __progname_full 0000000000207220 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_short_name 0000000000207240 g DO .bss 0000000000000008 GLIBC_2.2.5 stderr
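As a rough cross-check of the ~3KB code+data figure above, you can compile a bare-bones true yourself and compare. This is only a sketch: the sizes below are illustrative assumptions, since the result depends on the compiler, libc and distribution, but even a do-nothing dynamically linked binary carries a few KB of elf metadata and startup glue:

$ echo 'int main(void) { return 0; }' > mytrue.c
$ gcc -Os -s -o mytrue mytrue.c    # optimise for size, strip symbols
$ ls -l mytrue /bin/true
-rwxr-xr-x 1 user user  6056 ...  mytrue       (illustrative)
-rwxr-xr-x 1 root root 31464 Feb 22  2017 /bin/true
$ ./mytrue; echo $?
0

The difference between the two is essentially the --help/--version handling plus the statically linked gnulib helpers discussed above.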
{ "source": [ "https://unix.stackexchange.com/questions/419697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254388/" ] }
420,211
Which distribution is the one in the picture below? More precisely, in which distribution can I find that top bar with the navigation numbers on the left?
Some random distro that happens to be running the i3 window manager. https://i3wm.org/ Per the i3wm site, the window manager is distributed in Debian, Arch, Gentoo, Ubuntu, FreeBSD, NetBSD, OpenBSD, OpenSUSE, Mageia, Fedora, Exherbo, PiBang and Slackware.
{ "source": [ "https://unix.stackexchange.com/questions/420211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272913/" ] }
420,365
Suppose I have a folder containing files with names like file1.txt file2.txt file3.txt etc. I would like to run a command on each of them, like so: mycommand file1.txt -o file1-processed.txt mycommand file2.txt -o file2-processed.txt mycommand file3.txt -o file3-processed.txt etc. There are several similar questions on this site - the difference is that I want to insert the -processed text into the middle of the file name, before the extension. It seems like find should be the tool for the job. If it wasn't for the -o flag I could do find *.txt -exec mycommand "{}" ";" However, the {} syntax gives the whole file name, e.g. file1.txt etc., so I can't add the " -processed " in between the filename and its extension. A similar problem exists with using a simple bash for loop. Is there a simple way to accomplish this task, using find or otherwise?
If all the files to be processed are in the same folder, you don't need to use find , and can make do with native shell globbing. for foo in *.txt ; do mycommand "${foo}" -o "${foo%.txt}-processed.txt" done The shell idiom ${foo%bar} removes the smallest suffix string matching the pattern bar from the value of foo , in this case the .txt extension, so we can replace it with the suffix you want.
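If you do prefer find after all - for instance, to recurse into subdirectories - a sketch of the same idea is to hand the loop to an inline sh -c script (here mycommand is still the asker's placeholder):

find . -type f -name '*.txt' ! -name '*-processed.txt' -exec sh -c '
  for f do
    mycommand "$f" -o "${f%.txt}-processed.txt"
  done' find-sh {} +

The ! -name '*-processed.txt' test keeps a second run from reprocessing the output files; find-sh is just a label used as $0 inside the inline script.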
{ "source": [ "https://unix.stackexchange.com/questions/420365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273047/" ] }