Columns: source_id (int64, 1 – 4.64M), question (string, lengths 0 – 28.4k), response (string, lengths 0 – 28.8k), metadata (dict).
318,281
I'm using tmux and OSX. When copying and pasting from the terminal with tmux I'm able to hold down Option and select text. However, I can't get the text to stay inside the pane. So when I want to copy text I either need to cycle the pane to the far left, or zoom the pane, as shown below. This, in addition to having to hold down the Option key, is a pain. I know I can enter visual mode and use vim movements to get there, but I'd rather have a way to use my mouse. Has anyone found a workaround for this?
Put this block of code in your ~/.tmux.conf . This will enable mouse integration, letting you copy from a pane with your mouse without having to zoom. set -g mouse on bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'" bind -n WheelDownPane select-pane -t= \; send-keys -M bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M bind -t vi-copy C-WheelUpPane halfpage-up bind -t vi-copy C-WheelDownPane halfpage-down bind -t emacs-copy C-WheelUpPane halfpage-up bind -t emacs-copy C-WheelDownPane halfpage-down # To copy, drag to highlight text in yellow, press Enter and then release mouse # Use vim keybindings in copy mode setw -g mode-keys vi # Update default binding of `Enter` to also use copy-pipe unbind -t vi-copy Enter bind-key -t vi-copy Enter copy-pipe "pbcopy" After that, restart your tmux session. Highlight some text with the mouse, but don't let go of the mouse button. Now, while the text is still highlighted and the mouse button is pressed, press the Return key. The highlighted text will disappear and will be copied to your clipboard. Now release the mouse. Apart from this, there are also some cool things you can do with the mouse, like scrolling up and down, selecting the active pane, etc. If you are using a newer version of tmux on macOS, try the following instead of the one above: # macOS only set -g mouse on bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'" bind -n WheelDownPane select-pane -t= \; send-keys -M bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M bind -T copy-mode-vi C-WheelUpPane send-keys -X halfpage-up bind -T copy-mode-vi C-WheelDownPane send-keys -X halfpage-down bind -T copy-mode-emacs C-WheelUpPane send-keys -X halfpage-up bind -T copy-mode-emacs C-WheelDownPane send-keys -X halfpage-down # To copy, left click and drag to highlight text in yellow, # once you release left click yellow text will disappear and will automatically be available in clipboard # # Use vim keybindings in copy mode setw -g mode-keys vi # Update default binding of `Enter` to also use copy-pipe unbind -T copy-mode-vi Enter bind-key -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel "pbcopy" bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "pbcopy" If using iTerm on macOS, go to iTerm2 > Preferences > “General” tab, and in the “Selection” section, check “Applications in terminal may access clipboard”.
And if you are using Linux and a newer version of tmux, then use: # Linux only set -g mouse on bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'" bind -n WheelDownPane select-pane -t= \; send-keys -M bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M bind -T copy-mode-vi C-WheelUpPane send-keys -X halfpage-up bind -T copy-mode-vi C-WheelDownPane send-keys -X halfpage-down bind -T copy-mode-emacs C-WheelUpPane send-keys -X halfpage-up bind -T copy-mode-emacs C-WheelDownPane send-keys -X halfpage-down # To copy, left click and drag to highlight text in yellow, # once you release left click yellow text will disappear and will automatically be available in clipboard # # Use vim keybindings in copy mode setw -g mode-keys vi # Update default binding of `Enter` to also use copy-pipe unbind -T copy-mode-vi Enter bind-key -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel "xclip -selection c" bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -in -selection clipboard" In Debian and Debian-based distros (Ubuntu, Kali), you might need to install xclip : sudo apt-get install -y xclip (You may also check out https://github.com/gpakosz/.tmux for many other tmux options.)
{ "source": [ "https://unix.stackexchange.com/questions/318281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173557/" ] }
318,382
I have a LUKS-encrypted partition that was protected by a passphrase and a key file. The key file was for routine access and the passphrase was in a sealed envelope for emergencies. Many months went by and I accidentally shredded the key file, so I recovered by using the passphrase from the envelope. Now I have two active key slots, but I don't know which contains the now-useless key-file passphrase and which has my emergency passphrase in it. Obviously, if I remove the wrong one I'll lose all the data on the drive. #cryptsetup luksDump /dev/sda2 LUKS header information for /dev/sda2 Version: 1 Cipher name: aes Cipher mode: xts-plain64 Hash spec: sha256 Payload offset: 4096 MK bits: 256 MK digest: xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx MK salt: xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx MK iterations: 371000 UUID: 28c39f66-dcc3-4488-bd54-11ba239f7e68 Key Slot 0: ENABLED Iterations: 2968115 Salt: xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx Key material offset: 8 AF stripes: 4000 Key Slot 1: ENABLED Iterations: 2968115 Salt: xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx Key material offset: 264 AF stripes: 4000 Key Slot 2: DISABLED Key Slot 3: DISABLED Key Slot 4: DISABLED Key Slot 5: DISABLED Key Slot 6: DISABLED Key Slot 7: DISABLED
As you've discovered, you can use cryptsetup luksDump to see which key slots have keys. You can check the passphrase for a particular slot with cryptsetup luksOpen --test-passphrase --key-slot 0 /dev/sda2 && echo correct This succeeds if you enter the correct passphrase for key slot 0 and fails otherwise (including if the passphrase is correct for some other key slot). If you've forgotten one of the passphrases then you can only find which slot it's in by elimination, and if you've forgotten two of the passphrases then there's no way to tell which is which (otherwise the passphrase hash would be broken). To remove the passphrase you've forgotten, you can safely run cryptsetup luksKillSlot /dev/sda2 0 and enter the passphrase you remember. To wipe a key slot, cryptsetup requires the passphrase for a different key slot, at least when it isn't running in batch mode (i.e. no --batch-mode , --key-file=- or equivalent option).
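If you want to check which slot a passphrase you still remember occupies, a small loop over all eight slots built on the same --test-passphrase call is enough; this is only a sketch using the device path from the question, and it prompts for the passphrase once per slot:

    for slot in 0 1 2 3 4 5 6 7; do
        # Succeeds only if the passphrase you type matches this particular slot
        if cryptsetup luksOpen --test-passphrase --key-slot "$slot" /dev/sda2 2>/dev/null; then
            echo "passphrase matches key slot $slot"
        fi
    done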
{ "source": [ "https://unix.stackexchange.com/questions/318382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17749/" ] }
318,385
I'm trying to generate a GPG key $ gpg --full-gen-key but eventually I get an error gpg: agent_genkey failed: No such file or directory Key generation failed: No such file or directory I'm on Arch Linux. $ gpg --version gpg (GnuPG) 2.1.15 libgcrypt 1.7.3 Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Home: /home/me123/.gnupg ............. The directory /home/me123/.gnupg exists
Did you delete the /home/me123/.gnupg directory and then it was recreated by gpg? If so, that's likely what is confusing the agent. Either restart the agent ( gpgconf --kill gpg-agent ) or, more drastically, reboot your machine and try again.
{ "source": [ "https://unix.stackexchange.com/questions/318385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198038/" ] }
318,386
Currently, I have my server output my uptime to an HTML page using: TIME=$(uptime -p) echo ""${TIME}"!" >> /var/www/html/index.new Which generates an output of: up 1 day, 1 hour, 2 minutes! I would also like (for the sake of curiosity) to be able to display my system's record uptime, though I am uncertain as to the best way to log this and display it back in the (uptime -p) [day, hr, min] format. Is there a pre-existing tool which can do this? Or would I need to log uptime to a file and pull out the highest value with grep or something similar?
{ "source": [ "https://unix.stackexchange.com/questions/318386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196651/" ] }
318,824
We upgraded a few VM servers to Debian 9. Now when using ssh , we cannot copy and paste between remote terminals. The cursor seems to be doing the movements, and marking the text, albeit in a different way than usual, but nothing gets copied to the clipboard when doing command-C / command-V or copy and paste in the respective menu. We also tried doing the mouse movements with Shift and other keyboard combinations, without positive results. This is happening in OS/X, namely Sierra and El Capitan, and in Windows, using mobaXterm terminals too. The situation is due to vim's awareness of having a mouse. Following other questions in Stack Overflow, I created /etc/vim/vimrc.local with set mouse="r" and set mouse="v" ; it did not work out well. Finally I set up set mouse="" in the same file, with some moderate success. However, it also does not work well 100% of the time. What else can be done?
Solution: change mouse=a to mouse=r in your local .vimrc . The problem with setting this in /usr/share/vim/vim80/defaults.vim , as the accepted answer says, is that it will be overwritten on every update. I searched for a long time and ended up on this one: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864074 LOCAL SOLUTION (flawed): The first solution is to use local .vimrc files and set it there. So you could create a local .vimrc ( ~/.vimrc ) for every user and set your options there. Or create one in /etc/skel so it will be automatically created for every new user you create. But when you use local .vimrc files, you have to set all options there, because if there is a local .vimrc , the defaults.vim doesn't get loaded at all! And if there is no local .vimrc , all your settings are overwritten by defaults.vim . GLOBAL SOLUTION (preferable): I wanted a global configuration for all users, which loads the default options and then adds or overwrites the defaults with my personal settings. Luckily there is an option for that in Debian: The /etc/vim/vimrc.local will be loaded after the /etc/vim/vimrc . So you can create this file and load defaults, preventing them from being loaded again (at the end) and then add your personal options: Please create the following file: /etc/vim/vimrc.local " This file loads the default vim options at the beginning and prevents " them from being loaded again later. All other options that will be set, " are added, or overwrite the default settings. Add as many options as you " wish at the end of this file. " Load the defaults source $VIMRUNTIME/defaults.vim " Prevent the defaults from being loaded again later, if the user doesn't " have a local vimrc (~/.vimrc) let skip_defaults_vim = 1 " Set more options (overwrites settings from /usr/share/vim/vim80/defaults.vim) " Add as many options as you wish " Set the mouse mode to 'r' if has('mouse') set mouse=r endif (Note that $VIMRUNTIME used in the above snippet has a value like /usr/share/vim/vim80 , so $VIMRUNTIME/defaults.vim resolves to /usr/share/vim/vim80/defaults.vim .) If you also want to enable the "old copy/paste behavior", add the following lines at the end of that file as well: " Toggle paste/nopaste automatically when copy/paste with right click in insert mode: let &t_SI .= "\<Esc>[?2004h" let &t_EI .= "\<Esc>[?2004l" inoremap <special> <expr> <Esc>[200~ XTermPasteBegin() function! XTermPasteBegin() set pastetoggle=<Esc>[201~ set paste return "" endfunction
{ "source": [ "https://unix.stackexchange.com/questions/318824", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
318,859
I usually use the watch Linux utility to watch the output of a command repeatedly every n seconds, like in watch df -h /some_volume/ . But I seem not to be able to use watch with a piped series of commands like: $ watch ls -ltr|tail -n 1 If I do that, watch is really watching ls -ltr and the output is being passed to tail -n 1 which doesn't output anything. If I try this: $ watch (ls -ltr|tail -n 1) I get $ watch: syntax error near unexpected token `ls' And any of the following fails for some reason or another: $ watch <(ls -ltr|tail -n 1) $ watch < <(ls -ltr|tail -n 1) $ watch $(ls -ltr|tail -n 1) $ watch `ls -ltr|tail -n 1` And finally if I do this: $ watch echo $(ls -ltr|tail -n 1) I see no change in the output at the given interval because the command inside $() is run just once and the resulting output string is always printed ("watched") as a literal. So, how do I make the watch command work with a piped chain of commands [other than putting them inside a script]?
watch 'command | othertool | yet-another-tool'
{ "source": [ "https://unix.stackexchange.com/questions/318859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40195/" ] }
319,257
I have (well, I had ) a directory: /media/admin/my_data It was approximately 49GB in size and had tens of thousands of files in it. The directory is the mount point of an active LUKS partition. I wanted to rename the directory to: /media/admin/my_data_on_60GB_partition I didn't realise at the time, but I issued the command from my home directory so I ended up doing: ~% sudo mv /media/admin/my_data my_data_on_60GB_partition So then the mv program started to move /media/admin/my_data and its contents to a new directory ~/my_data_on_60GB_partition . I used Ctrl + C to cancel the command part way through, so now I have a whole bunch of files split across directories: ~/my_data_on_60GB_partition <--- about 2GB worth of files in here and /media/admin/my_data <---- about 47GB of orig files in here The new directory ~/my_data_on_60GB_partition and some of its subdirectories are owned by root. I'm assuming the mv program must have copied the files as root initially and then, after the transfer, chown'ed them back to my user account. I have a somewhat old backup of the directory/partition. My question is, is it possible to reliably restore the bunch of files that were moved? That is, can I just run: sudo mv ~/my_data_on_60GB_partition/* /media/admin/my_data or should I give up trying to recover, as the files are possibly corrupted and partially complete, etc.? OS - Ubuntu 16.04 mv --version mv (GNU coreutils) 8.25
When moving files between filesystems, mv doesn't delete a file before it's finished copying it, and it processes files sequentially (I initially said it copies then deletes each file in turn, but that's not guaranteed — at least GNU mv copies then deletes each command-line argument in turn, and POSIX specifies this behaviour ). So you should have at most one incomplete file in the target directory, and the original will still be in the source directory. To move things back, add the -i flag so mv doesn't overwrite anything: sudo mv -i ~/my_data_on_60GB_partition/* /media/admin/my_data/ (assuming you don't have any hidden files to restore from ~/my_data_on_60GB_partition/ ), or better yet (given that, as you discovered, you could have many files waiting to be deleted), add the -n flag so mv doesn't overwrite anything but doesn't ask you about it: sudo mv -n ~/my_data_on_60GB_partition/* /media/admin/my_data/ You could also add the -v flag to see what's being done. With any POSIX-compliant mv , the original directory structure should still be intact, so alternatively you could check that — and simply delete /media/admin/my_data ... (In the general case though, I think the mv -n variant is the safe approach — it handles all forms of mv , including e.g. mv /media/admin/my_data/* my_data_on_60GB_partition/ .) You'll probably need to restore some permissions; you can do that en masse using chown and chmod , or restore them from backups using getfacl and setfacl (thanks to Sato Katsura for the reminder ).
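If you want extra reassurance before deleting or overwriting anything, you can compare the two trees first; a quick, read-only sketch using the paths from the question (files that were never copied simply show up as "Only in" the source side):

    # Report files that exist on only one side or whose contents differ; nothing is modified.
    diff -rq ~/my_data_on_60GB_partition /media/admin/my_data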
{ "source": [ "https://unix.stackexchange.com/questions/319257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
319,363
I have a problem with the Evince PDF document viewer. I have a printer that is well configured with CUPS, and I can print PDFs from other PDF viewers such as Okular, but not with Evince. There are simply no printers listed when I want to print with Evince, only "print to a file", or "print with lpr". I can use lpr to print with Evince, but I have to type the command with the options I want, which is not very practical. I'm running Debian Testing (Stretch) with Evince 3.22.1. I tried to delete the files ~/.cups/lpoptions and ~/.config/evince/print-settings but it did not solve the problem.
I had the same issue and I couldn't print any images either with most GTK+ applications. The latest GTK3 (3.22) requires the package gtk3-print-backends for printers to be listed in GTK3 print dialogs. Installing that package did the trick for me. I'm running Arch Linux.
{ "source": [ "https://unix.stackexchange.com/questions/319363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159872/" ] }
319,740
I am trying to set the eth0 interface to use dhcp to get an ipv4 address, using the command line. I can manually change the ip address using sudo ifconfig eth0 x.x.x.x netmask x.x.x.x Is there a similar command to use to set eth0 to get an address using dhcp? I tried typing: sudo dhclient eth0 however the ip address doesn't change when I type this. The /etc/network/interfaces file was set to iface eth0 inet manual which I then changed to: auto eth0 iface eth0 inet dhcp however this doesn't change the eth0 ip address even if the system is rebooted.
If your DHCP server is properly configured to give you an IP address, the command: dhclient eth0 -v should work. The option -v enables verbose log messages, which can be useful. If your eth0 is already up, before asking for a new IP address, try to deconfigure eth0 . To configure the network interfaces based on the interface definitions in the file /etc/network/interfaces you can use the ifup and ifdown commands.
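If eth0 is already holding an old lease or a manually set address, it can help to release it and start clean before asking again; a minimal sketch (interface name as in the question):

    sudo dhclient -r eth0          # release the current lease and stop dhclient for eth0
    sudo ip addr flush dev eth0    # optionally drop any manually configured address
    sudo dhclient -v eth0          # request a new lease, with verbose output

With the auto eth0 / iface eth0 inet dhcp stanza in place, sudo ifdown eth0 && sudo ifup eth0 achieves the same thing via /etc/network/interfaces .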
{ "source": [ "https://unix.stackexchange.com/questions/319740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
319,979
Say I create a bridge interface on Linux ( br0 ) and add to it some interfaces ( eth0 , tap0 , etc.). My understanding is that this interface acts like a virtual switch with all the interfaces/ports that I add to it. What is the meaning of assigning a MAC and an IP address to that interface? Does the interface act as an additional port on the switch/bridge which allows other ports to access the host machine? I have seen some pages talk about assigning an IP address to a bridge. Is the MAC assignment implied (or automatic)?
Because a bridge is an ethernet device it needs a MAC address. A linux bridge can originate things like spanning-tree protocol frames, and traffic like that needs an origin MAC address. A bridge does not require an ip address. There are many situations in which you won't have one. However, in many cases you may have one, such as: When the bridge is acting as the default gateway for a group of containers or virtual machines (or even physical interfaces). In this case it needs an ip address (because routing happens at the IP layer). When your "primary" NIC is a member of the bridge, such that the bridge is your connectivity to the outside world. In this case, rather than assigning an ip address to (for example) eth0 , you would assign it to the bridge device instead. If the bridge is not required for ip routing, then it doesn't need an ip address. Examples of this situation include: When the bridge is being used to create a private network of devices with no external connectivity, or with external connectivity provided through a device other than the bridge.
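For illustration, a minimal iproute2 sketch that creates a bridge, enslaves an interface to it, and (only if the host itself needs to be reachable through it) gives the bridge an IP address; the names and the address are placeholders:

    ip link add name br0 type bridge     # create the bridge device (it gets a MAC automatically)
    ip link set eth0 master br0          # add eth0 as a bridge port
    ip link set eth0 up
    ip link set br0 up
    ip addr add 192.168.1.10/24 dev br0  # optional: only if the host should have an IP on this bridge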
{ "source": [ "https://unix.stackexchange.com/questions/319979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36146/" ] }
320,103
Can someone please explain to me, what the difference is between creating mdadm array using partitions or the whole disks directly? Supposing I intend to use the whole drives. Imagine a RAID6 created in two ways, either: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 or: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd What is the difference, and possible problems arising from any of the two variants? For example, I mean the reliability or manageability or recovery operations on such arrays, etc.
The most important difference is that it allows you to increase the flexibility for disk replacement. This is detailed below, along with a number of other recommendations. One should consider using a partition instead of the entire disk. This should be under the general recommendations for setting up an array and may certainly spare you some headaches in the future when further disk replacements become necessary. The most important argument is: Disks from different manufacturers (or even different models of the "same" capacity from the same manufacturer) don't necessarily have the exact same disk size and, even the smallest size difference will prevent you from replacing a failed disk with a newer one if the second is smaller than the first. Partitioning allows you to work around this; Side note on why to use disks from different manufacturers: Disks will fail; this is not a matter of "if" but "when". Disks of the same manufacturer and the same model have similar properties, and so higher chances of failing together under the same conditions and time of use. The suggestion, then, is to use disks from different manufacturers, of different models and, especially, that do not belong to the same batch (consider buying from different stores if you are buying disks of the same manufacturer and model). It is not uncommon for a second disk failure to happen during a restore after a disk replacement when disks of the same batch are used. You certainly don't want this to happen to you. So the recommendations: 1) Partition the disks that will be used with a slightly smaller capacity than the overall disk space (e.g., I have a RAID5 array of 2TB disks and I intentionally partitioned them wasting about 100MB in each). Then, use /dev/sd?1 of each one for composing the array - This will add a safety margin in case a new replacing disk has less space than the original ones used to assemble the array when it was created; 2) Use disks from different manufacturers; 3) Use disks of different models if different manufacturers are not an option for you; 4) Use disks from different batches; 5) Proactively replace disks before they fail and not all at the same time. This may be a little paranoid and really depends on the criticality of the data you have. I tend to have disks that differ by about 6 months in age from each other; 6) Make regular backups (always, regardless of whether you use an array or not). RAID doesn't serve the same purpose as backups. Arrays give you high availability; backups allow you to restore lost files (including the ones that get accidentally deleted or are damaged by viruses, some examples of something that using arrays will not protect you from). Note: apart from the non-negligible rationale above, there aren't many further technical differences between using /dev/sd? vs /dev/sd?#. Good luck
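To make recommendation 1 concrete, here is one possible sketch (device names and the 100MiB margin are only illustrative): each disk gets a single partition that stops short of the end, and the array is then built from the partitions rather than the whole disks.

    # Repeat for each member disk (sda, sdb, sdc, sdd): leave ~100MiB unused at the end.
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart primary 1MiB -100MiB
    parted -s /dev/sda set 1 raid on

    # Build the RAID6 from the partitions instead of the bare disks:
    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1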
{ "source": [ "https://unix.stackexchange.com/questions/320103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
320,465
Summary When I create a new tmux session, my prompt pulls from a default bash configuration and I have to manually run source ~/.bashrc for my customized prompt. Analysis I am using a RHEL 7 machine. I began noticing this behavior after a bash update a while back, but haven't gotten around to asking the question until now (and am not sure which update this began happening around). For example, I've customized my prompt to look like: [user@hostname ~]$ Whenever I start a new tmux session, it uses what appears to be the bash default: -sh-4.2$ A quick run of source ~/.bashrc always fixes the issue, but it's annoying that I have to do this every time I want to fix something small. Any ideas on how to get tmux to do this automatically again? If any more information is needed, I am happy to provide. tmux.conf For reference, I have my tmux.conf file below, although it is hardly what you could call custom. setw -g mode-keys vi # reload tmux.conf bind r source-file ~/.tmux.conf \; display-message " ✱ ~/.tmux.conf is reloaded"
As far as I know, by default tmux runs a login shell. When bash is invoked as an interactive login shell, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile . So you have to put source ~/.bashrc in one of those files. Another way to solve this issue is to put in your file .tmux.conf the line: set-option -g default-shell "/bin/bash"
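A common way to cover both cases is to have the login-shell startup file pull in ~/.bashrc , for example in ~/.bash_profile :

    # ~/.bash_profile: make login shells (such as the ones tmux spawns) read ~/.bashrc too
    if [ -f ~/.bashrc ]; then
        . ~/.bashrc
    fi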
{ "source": [ "https://unix.stackexchange.com/questions/320465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47463/" ] }
320,783
So if I've got a variable VAR='10 20 30 40 50 60 70 80 90 100' and echo it out echo "$VAR" 10 20 30 40 50 60 70 80 90 100 However, further down the script I need to reverse the order of this variable so it shows as something like echo "$VAR" | <code to reverse it> 100 90 80 70 60 50 40 30 20 10 I tried using rev and it literally reversed everything so it came out as echo "$VAR" | rev 001 09 08 07 06 05 04 03 02 01
On GNU systems, the reverse of cat is tac: $ tac -s" " <<< "$VAR " # Please note the added final space. 100 90 80 70 60 50 40 30 20 10
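On systems without GNU tac , an awk one-liner gives the same result by printing the whitespace-separated fields in reverse order (just a sketch of one portable alternative):

    echo "$VAR" | awk '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? " " : "\n") }'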
{ "source": [ "https://unix.stackexchange.com/questions/320783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198441/" ] }
320,957
I recently upgraded my disk from a 128GB SSD to 512GB SSD. The / partition is encrypted with LUKS. I'm looking for help extending the partition to use all the free space on the new disk. I've already dd'd the old drive onto the new one: [root@localhost ~]# fdisk -l /dev/sda Disk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: dos Disk identifier: 0x00009f33 Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 1026047 1024000 500M 83 Linux /dev/sda2 1026048 250064895 249038848 118.8G 83 Linux There's about 380GB of unused space after sda2. More relevant info: [root@localhost ~]# vgs VG #PV #LV #SN Attr VSize VFree fedora_chocbar 1 3 0 wz--n- 118.75g 4.00m [root@localhost ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home fedora_chocbar -wi-a----- 85.55g root fedora_chocbar -wi-a----- 29.30g swap fedora_chocbar -wi-a----- 3.89g [root@localhost ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/encrypted fedora_chocbar lvm2 a-- 118.75g 4.00m There seems to be a lot of info regarding how to do this, but very little explanation. I appreciate any help on this.
OK! The definitive answer finally. My steps to expand a LUKS encrypted volume... cryptsetup luksOpen /dev/sda2 crypt-volume to open the encrypted volume. parted /dev/sda to extend the partition. resizepart NUMBER END . vgchange -a n fedora_chocbar . Stop using the VG so you can do the next step. cryptsetup luksClose crypt-volume . Close the encrypted volume for the next steps. cryptsetup luksOpen /dev/sda2 crypt-volume . Open it again. cryptsetup resize crypt-volume . Will automatically resize the LUKS volume to the available space. vgchange -a y fedora_chocbar . Activate the VG. pvresize /dev/mapper/crypt-volume . Resize the PV. lvresize -l+100%FREE /dev/fedora_chocbar/home . Resize the LV for /home to 100% of the free space. e2fsck -f /dev/mapper/fedora_chocbar-home . Throw some fsck magic at the resized fs. resize2fs /dev/mapper/fedora_chocbar-home . Resize the filesystem in /home (automatically uses 100% free space) I hope someone else finds this useful. I now have 300+GB for my test VMs on my laptop!
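One extra precaution worth taking before any of these steps: keep a copy of the LUKS header somewhere off the disk, since a damaged header makes the data unrecoverable. A minimal sketch (device path as above, output file name is just an example):

    cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file /root/sda2-luks-header.img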
{ "source": [ "https://unix.stackexchange.com/questions/320957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198571/" ] }
321,219
Suppose I have a directory on a local machine, behind a firewall: local:/home/meee/workdir/ And a directory on a remote machine, on the other side of the firewall: remote:/a1/a2/.../aN/one/two/ remote:/a1/a2/.../aN/one/dont-copy-me{1,2,3,...}/ ...such that N >= 0. My local machine has a script that uses rsync . I want this script to copy only one/two/ from the remote machine for a variable-but-known 'N' such that I end up with: local:/home/meee/workdir/one/two/ If I use rsync remote:/a1/a2/.../aN/one/two/ ~/workdir/ , I end up with: local:/home/meee/workdir/two/ If I use rsync --relative remote:/a1/a2/.../aN/one/two/ ~/workdir/ , I end up with: local:/home/meee/workdir/a1/a2/.../aN/one/two/ Neither one of these is what I want. Are there rsync flags which can achieve the desired result? If not, can anyone think of a straightforward solution?
For --relative you have to insert a dot into the source directory path: rsync -av --relative remote:/a1/a2/.../aN/./one/two ~/workdir/ See the manual: -R, --relative [...] It is also possible to limit the amount of path information that is sent as implied directories for each path you specify. With a modern rsync on the sending side (beginning with 2.6.7), you can insert a dot and a slash into the source path, like this: rsync -avR /foo/./bar/baz.c remote:/tmp/
{ "source": [ "https://unix.stackexchange.com/questions/321219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108009/" ] }
321,427
The problem is that I really don't know whether I am confused about PermitRootLogin or it is not working well. I put it in the sshd_config, and when I connect via ssh, I am able to do su - in order to have root permissions. So shouldn't PermitRootLogin no prevent me from doing that?
PermitRootLogin only configures whether root can login directly via ssh (e.g. ssh [email protected] ). When you login using a different user account, whatever you do in your shell is not influenced by sshd 's config. From man sshd_config : PermitRootLogin Specifies whether root can log in using ssh(1). The argument must be “yes”, “without-password”, “forced-commands-only”, or “no”. The default is “yes”. […] If this option is set to “no”, root is not allowed to log in. You can however use your login.defs or pam config to limit which users can use the su command: Server Fault: Disable su on machine
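As an illustration of that last point, on many PAM-based systems you can restrict su to members of a particular group by enabling the pam_wheel module in /etc/pam.d/su (the group name and exact file layout vary by distribution, so treat this as a sketch):

    # /etc/pam.d/su — only members of the "wheel" group may use su to become root
    auth       required   pam_wheel.so use_uid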
{ "source": [ "https://unix.stackexchange.com/questions/321427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194664/" ] }
321,492
I am using wget to download a static html page. The W3C Validator tells me the page is encoded in UTF-8. Yet when I cat the file after download, I get a bunch of binary nonsense. I'm on Ubuntu, and I thought the default encoding was UTF-8? That's what my locale file seems to say. Why is this happening and how can I correct it? Also, looks like Content-Encoding: gzip . Perhaps this makes a diff? This is the simple request: wget https://www.example.com/page.html I also tried this: wget https://www.example.com/page.html -q -O - | iconv -f utf-16 -t utf-8 > output.html Which returned: iconv: illegal input sequence at position 40 cat'ing the file returns binary that looks like this: l�?חu�`�q"�:)s��dġ__��~i��6n)T�$H�#���QJ Result of xxd output.html | head -20 : 00000000: 1f8b 0800 0000 0000 0003 bd56 518f db44 ...........VQ..D 00000010: 107e a6bf 62d4 8a1e 48b9 d8be 4268 9303 .~..b...H...Bh.. 00000020: 8956 082a 155e 7a02 21dd cbd8 3bb6 97ae .V.*.^z.!...;... 00000030: 77cd ee38 39f7 a1bf 9d19 3bb9 0bbd 9c40 w..89.....;....@ 00000040: 2088 12c5 de9d 9df9 be99 6f67 f751 9699 .........og.Q.. 00000050: 500d 1d79 5eee a265 faec 7151 e4ab 6205 P..y^..e..qQ..b. 00000060: 4dd3 0014 1790 e7d0 77c0 ef2f cbf8 cde3 M.......w../.... 00000070: cf1f 7d6c 7d69 ec16 d0d9 c67f 7d7d 56c9 ..}l}i......}}V. 00000080: 04c5 eb33 35fc e49e 2563 e908 ca10 0d45 ...35...%c.....E 00000090: 31ce afcf a022 e77a 34c6 fa46 46be d88f 1....".z4..FF... 000000a0: a41e ab79 446d 76d6 702b cf45 9e7f ba77 ...yDmv.p+.E...w 000000b0: 7dc2 779c 274e cc18 483c 3a12 0f75 f07c }.w.'N..H<:..u.| 000000c0: 5e63 67dd b886 ab48 e550 b5c4 f0e3 db0d ^cg....H.P...... 000000d0: 54c1 85b8 8627 2ff3 2ff3 17f9 0626 d31d T....'/./....&.. 000000e0: d9a6 e5b5 4076 663f 94ec 7b5a 17cf 7ade ....@vf?..{Z..z. 000000f0: 00d3 0d9f 4fcc d733 ef8d a0bb 0a06 c7eb ....O..3........ 00000100: b304 6fb1 b1cc 18ed 90e0 8710 43aa 424f ..o.........C.BO 00000110: 50c7 d0c1 2bac 09be 4d1c 2566 335e 666c P...+...M.%f3^fl 00000120: 1e20 951d 58fd 6774 f3e9 f317 749f 7fc4 . ..X.gt....t... 00000130: d651 cdca f5a7 b0a5 aea4 08ab 055c e4c5 .Q...........\.. Also, strangely, the output file seems to open properly in TextWrangler!
This is a gzip compressed file. You can find this out by running the file command, which figures out the file format from magic numbers in the data (this is how programs such as Text Wrangler figure out that the file is compressed as well): file output.html wget -O - … | file - The server (I guessed it from the content you showed) is sending gzipped data and correctly setting the header Content-Encoding: gzip but wget doesn't support that. In recent versions, wget sends Accept-encoding: identity , to tell the server not to compress or otherwise encode the data. In older versions, you can send the header manually: wget --header 'Accept-encoding: identity' … However this particular server appears to be broken: it sends compressed data even when told not to encode the data in any way. So you'll have to decompress the data manually. wget -O output.html.gz … && gunzip output.html.gz
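Alternatively, you can let the HTTP client deal with the compression itself; for example, curl can request and transparently decode gzip with --compressed , or you can decompress the wget stream manually (a sketch using the URL from the question):

    # curl decodes the gzip-encoded body before writing it out
    curl --compressed -o output.html https://www.example.com/page.html

    # or keep wget and decompress the stream yourself
    wget -qO- https://www.example.com/page.html | gunzip > output.html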
{ "source": [ "https://unix.stackexchange.com/questions/321492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198965/" ] }
321,687
I need to add a route that won't be deleted after reboot. I read these two ways of doing it : Add ip route add -net 172.X.X.0/24 gw 172.X.X.X dev ethX to the file /etc/network/interfaces or Create the file /etc/network/if-up.d/route with: #!/bin/sh route add -net 172.X.X.0/24 gw 172.X.X.X dev ethX and make it executable : chmod +x /etc/network/if-up.d/route So I'm confused. What is the best way of doing it?
You mentioned /etc/network/interfaces , so it's a Debian system... Create a named routing table. As an example, I have used the name, "mgmt," below. echo '200 mgmt' >> /etc/iproute2/rt_tables Above, the kernel supports many routing tables and refers to these by unique integers numbered 0-255. A name, mgmt, is also defined for the table. Below, a look at a default /etc/iproute2/rt_tables follows, showing that some numbers are reserved. The choice in this answer of 200 is arbitrary; one might use any number that is not already in use, 1-252. # # reserved values # 255 local 254 main 253 default 0 unspec # # local # Below, a Debian 7/8 interfaces file defines eth0 and eth1 . eth1 is the 172 network. eth0 could use DHCP as well. 172.16.100.10 is the IP address to assign to eth1 . 172.16.100.1 is the IP address of the router. source /etc/network/interfaces.d/* # The loopback network interface auto lo iface lo inet loopback # The production network interface auto eth0 allow-hotplug eth0 # iface eth0 inet dhcp # Remove the stanzas below if using DHCP. iface eth0 inet static address 10.10.10.140 netmask 255.255.255.0 gateway 10.10.10.1 # The management network interface auto eth1 allow-hotplug eth1 iface eth1 inet static address 172.16.100.10 netmask 255.255.255.0 post-up ip route add 172.16.100.0/24 dev eth1 src 172.16.100.10 table mgmt post-up ip route add default via 172.16.100.1 dev eth1 table mgmt post-up ip rule add from 172.16.100.10/32 table mgmt post-up ip rule add to 172.16.100.10/32 table mgmt Reboot or restart networking. Update - Expounding on EL I noticed in a comment that you were "wondering for RHEL as well." In Enterprise Linux ("EL" - RHEL/CentOS/et al), create a named routing table as mentioned above. The EL /etc/sysconfig/network file: NETWORKING=yes HOSTNAME=host.sld.tld GATEWAY=10.10.10.1 The EL /etc/sysconfig/network-scripts/ifcfg-eth0 file, using a static configuration (without NetworkManager and not specifying "HWADDR" and "UUID" for the example, below) follows. DEVICE=eth0 TYPE=Ethernet ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none IPADDR=10.10.10.140 NETMASK=255.255.255.0 NETWORK=10.10.10.0 BROADCAST=10.10.10.255 The EL /etc/sysconfig/network-scripts/ifcfg-eth1 file (without NetworkManager and not specifying "HWADDR" and "UUID" for the example, below) follows. DEVICE=eth1 TYPE=Ethernet ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none IPADDR=172.16.100.10 NETMASK=255.255.255.0 NETWORK=172.16.100.0 BROADCAST=172.16.100.255 The EL /etc/sysconfig/network-scripts/route-eth1 file: 172.16.100.0/24 dev eth1 table mgmt default via 172.16.100.1 dev eth1 table mgmt The EL /etc/sysconfig/network-scripts/rule-eth1 file: from 172.16.100.0/24 lookup mgmt Update for RHEL8 The method described above works with RHEL 6 & RHEL 7 as well as the derivatives, but for RHEL 8 and derivatives, one must first install network-scripts to use the method described above. dnf install network-scripts The installation produces a warning that network-scripts will be removed in one of the next major releases of RHEL and that NetworkManager provides ifup / ifdown scripts as well.
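After a reboot or a networking restart you can verify that the policy routing is in place with iproute2 (table name as defined above):

    ip rule show               # should list the rules pointing at the "mgmt" table
    ip route show table mgmt   # should show the 172.16.100.0/24 route and the default via 172.16.100.1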
{ "source": [ "https://unix.stackexchange.com/questions/321687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103808/" ] }
321,697
This question is inspired by Why is using a shell loop to process text considered bad practice ? I see these constructs for file in `find . -type f -name ...`; do smth with ${file}; done and for dir in $(find . -type d -name ...); do smth with ${dir}; done being used here almost on a daily basis even if some people take the time to comment on those posts explaining why this kind of stuff should be avoided... Seeing the number of such posts (and the fact that sometimes those comments are simply ignored) I thought I might as well ask a question: Why is looping over find 's output bad practice and what's the proper way to run one or more commands for each file name/path returned by find ?
Why is looping over find 's output bad practice? The simple answer is: Because filenames can contain any character. Therefore, there is no printable character you can reliably use to delimit filenames. Newlines are often used (incorrectly) to delimit filenames, because it is unusual to include newline characters in filenames. However, if you build your software around arbitrary assumptions, you at best simply fail to handle unusual cases, and at worst open yourself up to malicious exploits that give away control of your system. So it's a question of robustness and safety. If you can write software in two different ways, and one of them handles edge cases (unusual inputs) correctly, but the other one is easier to read, you might argue that there is a tradeoff. (I wouldn't. I prefer correct code.) However, if the correct, robust version of the code is also easy to read, there is no excuse for writing code that fails on edge cases. This is the case with find and the need to run a command on each file found. Let's be more specific: On a UNIX or Linux system, filenames may contain any character except for a / (which is used as a path component separator), and they may not contain a null byte. A null byte is therefore the only correct way to delimit filenames. Since GNU find includes a -print0 primary which will use a null byte to delimit the filenames it prints, GNU find can safely be used with GNU xargs and its -0 flag (and -r flag) to handle the output of find : find ... -print0 | xargs -r0 ... However, there is no good reason to use this form, because: It adds a dependency on GNU findutils which doesn't need to be there, and find is designed to be able to run commands on the files it finds. Also, GNU xargs requires -0 and -r , whereas FreeBSD xargs only requires -0 (and has no -r option), and some xargs don't support -0 at all. So it's best to just stick to POSIX features of find (see next section) and skip xargs . As for point 2— find 's ability to run commands on the files it finds—I think Mike Loukides said it best: find 's business is evaluating expressions -- not locating files. Yes, find certainly locates files; but that's really just a side effect. --Unix Power Tools POSIX specified uses of find What's the proper way to run one or more commands for each of find 's results? To run a single command for each file found, use: find dirname ... -exec somecommand {} \; To run multiple commands in sequence for each file found, where the second command should only be run if the first command succeeds, use: find dirname ... -exec somecommand {} \; -exec someothercommand {} \; To run a single command on multiple files at once: find dirname ... -exec somecommand {} + find in combination with sh If you need to use shell features in the command, such as redirecting the output or stripping an extension off the filename or something similar, you can make use of the sh -c construct. You should know a few things about this: Never embed {} directly in the sh code. This allows for arbitrary code execution from maliciously crafted filenames. Also, it's actually not even specified by POSIX that it will work at all. (See next point.) Don't use {} multiple times, or use it as part of a longer argument. This isn't portable. For example, don't do this: find ... 
-exec cp {} somedir/{}.bak \; To quote the POSIX specifications for find : If a utility_name or argument string contains the two characters "{}", but not just the two characters "{}", it is implementation-defined whether find replaces those two characters or uses the string without change. ... If more than one argument containing the two characters "{}" is present, the behavior is unspecified. The arguments following the shell command string passed to the -c option are set to the shell's positional parameters, starting with $0 . Not starting with $1 . For this reason, it's good to include a "dummy" $0 value, such as find-sh , which will be used for error reporting from within the spawned shell. Also, this allows use of constructs such as "$@" when passing multiple files to the shell, whereas omitting a value for $0 would mean the first file passed would be set to $0 and thus not included in "$@" . To run a single shell command per file, use: find dirname ... -exec sh -c 'somecommandwith "$1"' find-sh {} \; However it will usually give better performance to handle the files in a shell loop so that you don't spawn a shell for every single file found: find dirname ... -exec sh -c 'for f do somecommandwith "$f"; done' find-sh {} + (Note that for f do is equivalent to for f in "$@"; do and handles each of the positional parameters in turn—in other words, it uses each of the files found by find , regardless of any special characters in their names.) Further examples of correct find usage: (Note: Feel free to extend this list.) Filter files generated by `find` by parsed output of `file` command substring removal in find -exec How to Do this List Comparison with Find? Using literal empty curly braces {} inside sed command from find -exec How do I delete file by filename that are set as dates? bash: Deleting directories not containing given strings Grep word within a file then copy the file Remove certain types of files except in a folder
{ "source": [ "https://unix.stackexchange.com/questions/321697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
321,709
I was trying to run flume on Ubuntu 16.04 as a systemd service and have the following in /etc/systemd/system/flume-ng.service [Unit] Description=Apache Flume [Service] ExecStart=/usr/bin/nohup /opt/flume/current/bin/flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf --name a1 & ExecStop=/opt/flume/current/bin/flume-ng agent stop [Install] WantedBy=multi-user.target I tried adding the following lines StandardOutput=/var/log/flume-ng/log1.log StandardError=/var/log/flume-ng/log2.log which didn't work for me. I did run systemctl daemon-reload and systemctl restart flume-ng . Does anyone know how this works?
ExecStart=/usr/bin/nohup … This is wrong. Remove it. This service is not running in an interactive login session. There is no controlling terminal, or session leader, to send a hangup signal to it in the first place. ExecStart=… & This is wrong. Remove it. This is not shell script. & has no special shell-like meaning, and in any case would be the wrong way to start a service. StandardOutput=/var/log/flume-ng/log1.log StandardError=/var/log/flume-ng/log2.log These are wrong. Do not use these. systemd already sends the standard output and error of the service process(es) to its journal, without any such settings in the service unit. You can view it with journalctl -e -u flume-ng.service
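Putting those corrections together, the unit might look like the sketch below; it reuses the paths from the question and assumes flume-ng stays in the foreground when started this way (if it forks into the background on its own, Type= and ExecStart would need revisiting):

    [Unit]
    Description=Apache Flume

    [Service]
    # Let systemd supervise the process directly: no nohup, no shell-style "&"
    ExecStart=/opt/flume/current/bin/flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf --name a1
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target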
{ "source": [ "https://unix.stackexchange.com/questions/321709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66640/" ] }
321,710
I am running Mac OS 10.11.6 El Capitan. There is a link I would like to download programmatically: https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg If I paste this URL into any browser (e.g. Safari) the download works perfectly. However, if I try to download the same URL from the command line using curl , it doesn't work—the result is an empty file: $ ls -lA $ curl -O https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 $ ls -lA total 0 -rw-r--r-- 1 myname staff 0 Nov 7 14:07 mysql-5.7.16-osx10.11-x86_64.dmg $ Of course I can get the file through the browser, but I would like to understand why the curl command above doesn't work. Why can't curl download this file correctly, when it is evidently present on the website and can be correctly accessed and downloaded through a graphical web browser?
There is a redirect on the webserver-side to the following URL: http://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg . Because it's a CDN, the exact behaviour (whether you get redirected or not) might depend on your location. curl does not follow redirects by default. To tell it to do so, add the -L argument: curl -L -O https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg
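You can see the redirect for yourself by asking curl for just the response headers (same URL as above):

    # -s silences the progress meter, -I requests headers only
    curl -sI https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg | grep -i '^location'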
{ "source": [ "https://unix.stackexchange.com/questions/321710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198344/" ] }
321,968
I created a passwordless ssh connection to my remote server from my mac. It worked(!) and then I closed my terminal, re-opened it, tried again, and got the following (username, my_ip are not real): ssh -vvv username@my_ip OpenSSH_7.2p2, LibreSSL 2.4.1 debug1: Reading configuration data /Users/Me/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 20: Applying options for * debug1: /etc/ssh/ssh_config line 53: Applying options for * debug2: resolving "my_ip" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to my_ip [my_ip] port 22. debug1: Connection established. debug1: identity file /Users/Me/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Mes/.ssh/id_dsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_ecdsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Me/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.2 ssh_exchange_identification: read: Connection reset by peer When I checked my .ssh folder, id_rsa was there but none of the others were. From the error, it looks like I need to somehow create these files but am not sure how to do so. Any help would be appreciated.
debug1: key_load_public: No such file or directory The line above is not an error, but just a simple debug log saying that the ssh client is not able to find a separate public key (named ~/.ssh/id_rsa.pub ). This file is not needed to connect to the remote server, but it can be useful. The actual error ssh_exchange_identification: read: Connection reset by peer points to an error in the server configuration. The server is running, but fails to accept the SSH connection. Check the server log for more information.
{ "source": [ "https://unix.stackexchange.com/questions/321968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138677/" ] }
322,035
/bin/sh , the Bourne shell created in 1977, used to be the default shell for Unix systems. Nowadays this file still exists but mostly just as a symbolic link to the default POSIX-compatible shell installed on the system: on RHEL/CentOS it points to /bin/bash , the Bourne Again shell on Ubuntu Linux it points to /bin/dash , the Debian Almquist shell on Debian it points to /bin/dash (6.0 and later; older Debian releases had it point to /bin/bash ) Which made me curious: Is there a Unix system, or Linux distro, that still provides a binary for /bin/sh ?
/bin/sh is not always a symlink NetBSD is one system where /bin/sh is not a symlink. The default install includes three shells: the Korn shell, the C shell, and a modified Almquist shell. Of these, the latter is installed only as /bin/sh . Interix (the second POSIX subsystem for Windows NT) does not have /bin/sh as a symlink. A single binary of the MirBSD Korn shell is linked twice as /bin/sh and /bin/mksh . FreeBSD and its derivative TrueOS (formerly PC-BSD) have the TENEX C shell as both /bin/csh and /bin/tcsh , and the Almquist shell as (only) /bin/sh . No symlink there, either. OpenBSD has the (original) C shell as /bin/csh and the PD Korn shell linked thrice as /bin/sh , /bin/ksh , and /bin/rksh . Also no symlink.
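You can check what your own system does with a few standard commands; if /bin/sh is a hard link rather than a symlink, the inode numbers match and readlink prints nothing (a small sketch):

    ls -li /bin/sh /bin/ksh /bin/bash 2>/dev/null   # compare inode numbers (hard links share one)
    readlink /bin/sh                                # no output means /bin/sh is not a symlink
    file /bin/sh                                    # reports either "symbolic link to ..." or an executable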
{ "source": [ "https://unix.stackexchange.com/questions/322035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
322,223
I need to copy one disk to another. I tried with the command below and it takes nearly a day to copy a 1 TB disk in Fedora. dd if=/dev/sda of=/dev/sdb I have tried the same on a Unix (HP-UX) system with the command below and it completes within a few hours dd if=/dev/sda of=/dev/rdsk What alternative could I use to copy from disk to disk faster?
dd has many (weird) options, see dd(1) . You should explicitly state the buffer size, so try dd if=/dev/sda of=/dev/sdb bs=16M IIRC, the default buffer size is only 512 bytes. The command above sets it to 16 megabytes. You could try something smaller (e.g. bs=1M ) but you should use more than the default (especially on recent disk hardware with sectors of 4Kbytes, i.e. Advanced Format ). I naively recommend some power of two which is at least a megabyte. With the default 512-byte buffer size, I guess (but I could be very wrong) that the hardware requires the kernel to transfer 4K for each 512-byte block. Regarding rdsk , the sd(4) man pages say: At this time, only block devices are provided. Raw devices have not yet been implemented. Increasing dd's buffer size will give you better performance for read and write operations. All modern disks have a hardware read/write buffer. But if you increase dd's buffer size beyond the size of the hardware buffer, performance will decrease, because dd will still be reading from the first disk into its buffer after the second disk has already written out everything from its own hardware buffer. You may need to set a different bs value for different devices.
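With a reasonably recent GNU coreutils dd (8.24 or later) you can also watch the throughput while the copy runs, which makes it easier to experiment with bs values; a sketch of the same copy:

    dd if=/dev/sda of=/dev/sdb bs=16M status=progress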
{ "source": [ "https://unix.stackexchange.com/questions/322223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125769/" ] }
322,459
An alias, such as ll , is defined with the alias command. I can check the command with things like type ll which prints ll is aliased to `ls -l --color=auto' or command -v ll which prints alias ll='ls -l --color=auto' or alias ll which also prints alias ll='ls -l --color=auto' but I can't seem to find where the alias was defined, i.e. in a file such as .bashrc , or perhaps manually in the running shell. At this point I'm unsure if this is even possible. Should I simply go through all files that are loaded by bash and check every one of them?
A manual definition will be hard to spot (the history logs, maybe), though asking the shell to show what it is doing and then grepping should help find those set in an rc file: bash -ixlc : 2>&1 | grep ... zsh -ixc : 2>&1 | grep ... If the shell isn't precisely capturing the necessary options with one of the above invocations (that interactively run the null command), then use script : script somethingtogrep thatstrangeshell -x ... grep ... somethingtogrep Another option would be to use something like strace or sysdig to find all the files the shell touches, then go grep those manually (handy if the shell or program does not have an -x flag); the standard RC files are not sufficient for a manual filename check if something like oh-my-zsh or site-specific configurations are pulling in code from who knows where (or also there may be environment variables, as sorontar points out in their answer).
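If you would rather not trace the shell at all, grepping the files bash normally reads (plus anything they source) often finds the definition directly; the paths below are just the usual suspects and may differ on your system:

    grep -nR "alias ll=" ~/.bashrc ~/.bash_aliases ~/.profile ~/.bash_profile \
        /etc/bash.bashrc /etc/profile /etc/profile.d/ 2>/dev/null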
{ "source": [ "https://unix.stackexchange.com/questions/322459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1290/" ] }
322,720
I have the following systemd unit file in /etc/systemd/system/emacs.service : [Unit] Description=Emacs: the extensible, self-documenting text editor Documentatin=man:emacs(1) info:Emacs [Service] Type=forking ExecStart=/usr/bin/emacs --daemon ExecStop=/usr/bin/emacsclient --eval "(progn (setq kill-emacs-hook nil) (kill-emacs))" Restart=always Environment=DISPLAY=:%i TimeoutStartSec=0 [Install] WantedBy=default.target I want this to start on boot, so I have entered systemctl enable emacs However, each time my service reboots, systemctl status emacs shows: ● emacs.service - Emacs: the extensible, self-documenting text editor Loaded: loaded (/etc/systemd/system/emacs.service; disabled; vendor preset: enabled) Active: inactive (dead) But then entering systemctl start emacs and checking the status returns: ● emacs.service - Emacs: the extensible, self-documenting text editor Loaded: loaded (/etc/systemd/system/emacs.service; disabled; vendor preset: enabled) Active: active (running) since Fri 2016-11-11 23:03:59 UTC; 4s ago Process: 3151 ExecStart=/usr/bin/emacs --daemon (code=exited, status=0/SUCCESS) Main PID: 3154 (emacs) Tasks: 2 Memory: 7.6M CPU: 53ms CGroup: /system.slice/emacs.service └─3154 /usr/bin/emacs --daemon How can I get this process to successfully start on boot?
I have no idea why but to get this to work I: deleted Environment=DISPLAY=:%i added a User= variable Made sure the correct file was in /etc/systemd/system/emacs.service (earlier it had been a hard link) and re-ran systemctl enable emacs This made it work. EDIT The real problem here is that I had a typo at line 3: Documentatin I found this by checking journalctl . I suggest anyone who has issues with a systemd script do the same as there was no error sent to stderr.
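For reference, a corrected unit along those lines might look like the sketch below (the Documentation= typo fixed, the DISPLAY line dropped, and a User= added — the username is a placeholder):

    [Unit]
    Description=Emacs: the extensible, self-documenting text editor
    Documentation=man:emacs(1) info:Emacs

    [Service]
    Type=forking
    User=youruser
    ExecStart=/usr/bin/emacs --daemon
    ExecStop=/usr/bin/emacsclient --eval "(progn (setq kill-emacs-hook nil) (kill-emacs))"
    Restart=always
    TimeoutStartSec=0

    [Install]
    WantedBy=default.target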
{ "source": [ "https://unix.stackexchange.com/questions/322720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73671/" ] }
322,724
I have data that looks like this; each SNP should be repeated 5 times with a different beta. But SNP rs11704961 is only repeated twice, so I want to delete rows for SNPs that are repeated fewer than 5 times. I tried to use sort -k 1 | uniq -c , but it considers the whole line for checking duplicates, not the first column. SNP R K BETA rs767249 1 1 0.1065 rs767249 1 2 -0.007243 rs767249 1 3 0.02771 rs767249 1 4 -0.008233 rs767249 1 5 0.05073 rs11704961 2 1 0.2245 rs11704961 2 2 0.009203 rs1041894 3 1 0.1238 rs1041894 3 2 0.002522 rs1041894 3 3 0.01175 rs1041894 3 4 -0.01122 rs1041894 3 5 -0.009195
{ "source": [ "https://unix.stackexchange.com/questions/322724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199893/" ] }
322,817
I cannot figure out how to find the file where a bash function is defined ( __git_ps1 in my case). I experimented with declare , type , which , but nothing tells me the source file. I read somewhere that declare can print the file name and the line number, but it was not explained how. The help page for declare does not say it either. How can I get this information?
If you are prepared to run the function, then you can get the information by using set -x to trace the execution and setting the PS4 variable. Start bash with --debugger or else use shopt -s extdebug to record extra debugging info. Set PS4 , the 'prompt' printed when tracing, so that it shows the source file. Turn on tracing; you can then run your function, and for each line traced you will get the file the function came from. Use set +x to turn off tracing. So for this case you would run bash --debugger PS4='+ ${BASH_SOURCE[0]} ' set -x ; __git_ps1 ; set +x
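If you would rather not run the function, bash can also report where it was defined: with extdebug enabled, declare -F prints the function name, the defining line number, and the source file (this assumes the function has already been loaded into the shell you query):
# in an interactive shell where __git_ps1 is defined
shopt -s extdebug
declare -F __git_ps1    # prints: __git_ps1 <line-number> <source-file>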
{ "source": [ "https://unix.stackexchange.com/questions/322817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54221/" ] }
322,883
I am renting a server, running Ubuntu 16.04 at a company, let's name it company.org. Currently, my server is configured like this: hostname: server737263 domain name: company.org Here's my FQDN: user@server737263:~ $ hostname --fqdn server737263.company.org This is not surprising. I am also renting a domain name, let's name it domain.org . What I would like to do would be to rename my server as server1.domain.org . This means configuring my hostname as server1 and my domain name as domain.org . How can I do it correctly? Indeed, the manpage for hostname is not clear. To me at least: HOSTNAME(1) [...] SET NAME When called with one argument or with the --file option, the commands set the host name or the NIS/YP domain name. hostname uses the sethostname(2) function, while all of the three domainname, ypdomainname and nisdomainname use setdomainname(2). Note, that this is effective only until the next reboot. Edit /etc/hostname for permanent change. [...] THE FQDN You cannot change the FQDN with hostname or dnsdomainname. [...] So it seems that editing /etc/hostname is not enough? Because if it really changed the hostname, it would have changed the FQDN. There's also a trick I read to change the hostname with the command sysctl kernel.hostname=server1 , but nothing says whether this is the correct way or an ugly trick. So: What is the correct way to set the hostname? What is the correct way to set the domain name?
Setting your hostname: You'll want to edit /etc/hostname with your new hostname. Then, run sudo hostname $(cat /etc/hostname) . Setting your domain, assuming you have a resolvconf binary: In /etc/resolvconf/resolv.conf.d/head , add the line domain your.domain.name (not your FQDN, just the domain name). Then, run sudo resolvconf -u to update your /etc/resolv.conf (alternatively, just reproduce the previous change in your /etc/resolv.conf ). If you do not have resolvconf , just edit /etc/resolv.conf , adding the domain your.domain.name line. Either way: Finally, update your /etc/hosts file. There should be at least one line starting with one of your IPs (loopback or not), followed by your FQDN and your hostname. Grepping out IPv6 addresses, your hosts file could look like this: 127.0.0.1 localhost 1.2.3.4 service.domain.com service In response to hostnamectl suggestions piling up in comments: it is not mandatory, nor exhaustive. It can be used as a replacement for steps 1 & 2, if your OS ships with systemd. The steps given above, however, are valid regardless of systemd being present (PCLinuxOS, Devuan, ...).
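For completeness, on a systemd-based system the hostnamectl replacement for steps 1 and 2 would look something like this, using the hostname from the question:
# sets the static hostname (written to /etc/hostname) as well as the running one
sudo hostnamectl set-hostname server1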
{ "source": [ "https://unix.stackexchange.com/questions/322883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200018/" ] }
323,085
Let's take the two lines below which give us two different results. p=$(cd ~ && pwd) ; echo $p p=$(cd ~ | pwd) ; echo $p How do the two differ?
In p=$(cd ~ && pwd) : The command substitution, $() , runs in a subshell. cd ~ changes directory to ~ (your home); if cd succeeds ( && ), then pwd prints the directory name on STDOUT. Hence the string saved in p will be your home directory, e.g. /home/foobar . In p=$(cd ~ | pwd) : Again $() spawns a subshell. The commands on both sides of | run in their own respective subshells (and both start off at the same time), so cd ~ is done in one subshell and pwd in a separate subshell. You therefore get only the STDOUT of pwd , i.e. the directory you ran the command from, which can be any directory, as you can imagine. Hence p will contain the name of the directory from which the command was invoked, not your home directory.
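A quick way to see the difference for yourself, assuming your home directory is /home/foobar and you start from /tmp (both paths are only placeholders for the demonstration):
cd /tmp
p=$(cd ~ && pwd); echo "$p"    # prints /home/foobar
p=$(cd ~ | pwd);  echo "$p"    # prints /tmp, the directory the command was run from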
{ "source": [ "https://unix.stackexchange.com/questions/323085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17671/" ] }
323,109
I'm using OpenSuse Leap 42.1 and I would like to remap my keyboard. I want to remap one key (the Sleep key) to lock the screen instead of sending the entire PC to sleep. For that reason, I need a console command that locks the screen. I Googled and only found commands that work for Ubuntu/Debian/Fedora/KDE4 , but I was unable to find anything that worked for my OpenSuse version. Would you please provide any suggestions?
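Two commands that commonly lock the screen on systemd-based desktops and on KDE Plasma respectively; treat them as a starting point, since I have not verified them on Leap 42.1 specifically:
# ask logind to lock the current session
loginctl lock-session
# or ask the freedesktop screensaver service (used by KDE) over D-Bus
qdbus org.freedesktop.ScreenSaver /ScreenSaver Lock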
{ "source": [ "https://unix.stackexchange.com/questions/323109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200195/" ] }
323,610
I understand that reads to /dev/random may block, while reading /dev/urandom is guaranteed not to block. Where does the letter u come into this? What does it signify? Userspace? Unblocking? Micro? Update: Based on the initial wording of the question, there has been some debate over the usefulness of /dev/random vs /dev/urandom . The link Myths about /dev/urandom has been posted three times below, and is summarised in this answer to the question When to use /dev/random vs /dev/urandom .
Unlimited. In Linux, comparing the kernel functions named random_read and random_read_unlimited indicates that the etymology of the letter u in urandom is unlimited . This is confirmed by line 114 : The /dev/urandom device does not have this limit [...] Update: Regarding which came first for Linux, /dev/random or /dev/urandom , @Stéphane Chazelas gave the post with the original patch and @StephenKitt showed they were both introduced simultaneously .
{ "source": [ "https://unix.stackexchange.com/questions/323610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
323,845
I tried a bash script, but it took too long to create a simple 1 MB file. I think the answer lies in using /dev/random or /dev/urandom , but other posts here only show how to add all kinds of data to a file using these things, but I want to add only numbers. So, is there a command that I can use to create a random file of size 1 GB containing only numbers between 0 and 9? Edit: I want the output to be something like this 0 1 4 7 ..... 9 8 7 5 8 ..... 8 .... .... 8 7 5 3 ..... 3 The range is 0 - 9 meaning only numbers 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Also I need them to be space separated and 100 per line, up to n number of lines. This n is something I don't care, I want my final size to be 1 GB. Edit: I am using Ubuntu 16.04 LTS
This: LC_ALL=C tr '\0-\377' \ '[0*25][1*25][2*25][3*25][4*25][5*25][6*25][7*25][8*25][9*25][x*]' \ < /dev/urandom | tr -d x | fold -w 1 | paste -sd "$(printf '%99s\\n')" - | head -c1G (assuming a head implementation that supports -c ) appears to be reasonably fast on my system. tr translates the whole byte range (0 to 255, 0 to 0377 in octal): the first 25 bytes as 0, the next 25 as 1, and so on up to 9; the rest (250 to 255) become "x", which we then discard (with tr -d x ) as we want a uniform distribution (assuming /dev/urandom has a uniform distribution itself) and so do not want to bias some digits. That produces one digit for 97% of the bytes of /dev/urandom . fold -w 1 makes it one digit per line. paste -s is called with a list of separators that consists of 99 space characters and one newline character, so as to have 100 space-separated digits on each line. head -c1G will get the first GiB (2^30) of that. Note that the last line will be truncated and undelimited. You could truncate to 2^30 - 1 and add the missing newline by hand, or truncate to 10^9 bytes instead, which is 50 million of those 200-byte lines ( head -n 50000000 would also make it a standard/portable command). These timings (obtained by zsh on a quad-core system) give an indication of where the CPU time is spent: LC_ALL=C tr '\0-\377' < /dev/urandom 0.61s user 31.28s system 99% cpu 31.904 total tr -d x 1.00s user 0.27s system 3% cpu 31.903 total fold -w 1 14.93s user 0.48s system 48% cpu 31.902 total paste -sd "$(printf '%99s\\n')" - 7.23s user 0.08s system 22% cpu 31.899 total head -c1G > /dev/null 0.49s user 1.21s system 5% cpu 31.898 total The first tr is the bottleneck, with most of the time spent in the kernel (I suppose for the random number generation). The timing is roughly in line with the rate I can get bytes from /dev/urandom (about 19MiB/s and here we produce 2 bytes for each 0.97 byte of /dev/urandom at a rate of 32MiB/s). fold seems to be spending an unreasonable amount of CPU time (15s) just to insert a newline character after every byte but that doesn't affect the overall time as it works on a different CPU in my case (adding the -b option makes it very slightly more efficient, dd cbs=1 conv=unblock seems like a better alternative). You can do away with the head -c1G and shave off a few seconds by setting a limit on the file size ( limit filesize 1024m with zsh or ulimit -f "$((1024*1024))" with most other shells (including zsh )) instead in a subshell. That could be improved if we extracted 2 digits for each byte, but we would need a different approach for that. The above is very efficient because tr just looks up each byte in a 256-byte array. It can't do that for 2 bytes at a time, and using things like hexdump -e '1/1 "%02u"' that compute the text representation of a byte using more complex algorithms would be more expensive than the random number generation itself.
Still, if like in my case, you have CPU cores whose time to spare, it may still manage to shave off a few seconds: With: < /dev/urandom LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' | tr -d x | hexdump -n250000000 -ve '500/1 "%02u" "\n"' | fold -w1 | paste -sd "$(printf '%99s\\n')" - > /dev/null I get (note however that here it's 1,000,000,000 bytes as opposed to 1,073,741,824): LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' < /dev/urandom 0.32s user 18.83s system 70% cpu 27.001 total tr -d x 2.17s user 0.09s system 8% cpu 27.000 total hexdump -n250000000 -ve '500/1 "%02u" "\n"' 26.79s user 0.17s system 99% cpu 27.000 total fold -w1 14.42s user 0.67s system 55% cpu 27.000 total paste -sd "$(printf '%99s\\n')" - > /dev/null 8.00s user 0.23s system 30% cpu 26.998 total More CPU time overall, but better distributed between my 4 CPU cores, so it ends up taking less wall-clock time. The bottleneck is now hexdump . If we use dd instead of the line-based fold , we can actually reduce the amount of work hexdump needs to do and improve the balance of work between CPUs: < /dev/urandom LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' | tr -d x | hexdump -ve '"%02u"' | dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock | paste -sd "$(printf '%99s\\n')" - (here assuming GNU dd for its iflag=fullblock and status=none ) which gives: LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' < /dev/urandom 0.32s user 15.58s system 99% cpu 15.915 total tr -d x 1.62s user 0.16s system 11% cpu 15.914 total hexdump -ve '"%02u"' 10.90s user 0.32s system 70% cpu 15.911 total dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock 5.44s user 0.19s system 35% cpu 15.909 total paste -sd "$(printf '%99s\\n')" - > /dev/null 5.50s user 0.30s system 36% cpu 15.905 total Back to the random-number generation being the bottleneck. Now, as pointed out by @OleTange, if you have the openssl utility, you could use it to get a faster (especially on processors that have AES instructions) pseudo-random generator of bytes. </dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom on my system spews 15 times as many bytes per second than /dev/urandom . (I can't comment on how it compares in terms of cryptographically secure source of randomness if that applies to your use case). </dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom 2> /dev/null | LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' | tr -d x | hexdump -ve '"%02u"' | dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock | paste -sd "$(printf '%99s\\n')" - Now gives: openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom < /dev/zero 2> 1.13s user 0.16s system 12% cpu 10.174 total LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' 0.56s user 0.20s system 7% cpu 10.173 total tr -d x 2.50s user 0.10s system 25% cpu 10.172 total hexdump -ve '"%02u"' 9.96s user 0.19s system 99% cpu 10.172 total dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock 4.38s user 0.20s system 45% cpu 10.171 total paste -sd "$(printf '%99s\\n')" - > /dev/null back to hexdump being the bottleneck. As I still have CPUs to spare, I can run 3 of those hexdump in parallel. 
</dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom 2> /dev/null | LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' | tr -d x | (hexdump -ve '"%02u"' <&3 & hexdump -ve '"%02u"' <&3 & hexdump -ve '"%02u"') 3<&0 | dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock | paste -sd "$(printf '%99s\\n')" - (the <&3 is needed for shells other than zsh that close commands' stdin on /dev/null when run in background). Now down to 6.2 seconds and my CPUs almost fully utilised.
{ "source": [ "https://unix.stackexchange.com/questions/323845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190496/" ] }
323,901
I have seen this answer : You should consider using inotifywait, as an example: inotifywait -m /path -e create -e moved_to | while read path action file; do echo "The file '$file' appeared in directory '$path' via '$action'" # do something with the file done The above script watches a directory for creation of files of any type. My question is how to modify the inotifywait command to report only when a file of a certain type/extension is created (or moved into the directory). For example, it should report when any .xml file is created. What I tried : I have run the inotifywait --help command, and have read the command line options. It has --exclude <pattern> and --excludei <pattern> options to EXCLUDE files of certain types (by using regular expressions), but I need a way to INCLUDE just the files of a certain type/extension.
how do I modify the inotifywait command to report only when a file of a certain type/extension is created Please note that this is untested code since I don't have access to inotify right now. But something akin to this ought to work: inotifywait -m /path -e create -e moved_to | while read directory action file; do if [[ "$file" =~ \.xml$ ]]; then # Does the file end with .xml? echo "xml file" # If so, do your thing here! fi done
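If you prefer to avoid the bash-specific regex test, a case statement does the same filtering in any POSIX shell; a sketch along the same lines, with the echo standing in for whatever you actually want to do with the file:
inotifywait -m /path -e create -e moved_to |
while read -r path action file; do
    case $file in
        *.xml) echo "xml file" ;;    # handle the new .xml file here
    esac
done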
{ "source": [ "https://unix.stackexchange.com/questions/323901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200776/" ] }
323,914
Is there a way to dynamically assign environment variables in a systemd service unit file? We have a machine that has 4 GPUs, and we want to spin up multiple instances of a certain service per GPU. E.g.: gpu_service@1:1.service gpu_service@2:1.service gpu_service@3:1.service gpu_service@4:1.service gpu_service@1:2.service gpu_service@2:2.service gpu_service@3:2.service gpu_service@4:2.service ad nauseam So the 1:1, 2:1, etc. are effectively the %i in the service unit file. In order for the service to bind to a particular GPU, the service executable checks a certain environment variable, e.g.: USE_GPU=4 Is there a way I can take %i inside the service unit file and run it through some (shell) function to derive the GPU number, and then I can set the USE_GPU environment variable accordingly? Most importantly, I don't want the hassle of writing multiple /etc/systemd/system/gpu_service@x:y.service/local.conf files just so I can spin up more instances.
If you are careful you can incorporate a small bash script sequence as your exec command in the instance service file. Eg ExecStart=/bin/bash -c 'v=%i; USE_GPU=$${v%:*} exec /bin/mycommand' The $$ in the string will become a single $ in the result passed to bash, but more importantly will stop ${...} from being interpolated by systemd. (Earlier versions of systemd did not document the use of $$ , so I don't know if it was supported then).
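To make the expansion concrete, here is roughly how one instance from the question's naming scheme would be started (the unit and command names are the question's own, not anything standard):
# start one instance; inside the unit, %i expands to "2:1"
systemctl start 'gpu_service@2:1.service'
# in the bash -c snippet above, v becomes 2:1 and ${v%:*} strips ":1", so USE_GPU=2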
{ "source": [ "https://unix.stackexchange.com/questions/323914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26714/" ] }
324,209
Should I use /dev/random or /dev/urandom ? In which situations would I prefer one over the other?
TL;DR Use /dev/urandom for most practical purposes. The longer answer depends on the flavour of Unix that you're running. Linux Historically, /dev/random and /dev/urandom were introduced at the same time. As @DavidSchwartz pointed out in a comment , using /dev/urandom is preferred in the vast majority of cases. He and others also provided a link to the excellent Myths about /dev/urandom article which I recommend for further reading. In summary: The manpage is misleading. Both are fed by the same CSPRNG to generate randomness ( diagrams 2 and 3 ) /dev/random blocks when it runs out of entropy, so reading from /dev/random can halt process execution. The amount of entropy is conservatively estimated, but not counted /dev/urandom will never block. In rare cases very shortly after boot, the CSPRNG may not have had enough entropy to be properly seeded and /dev/urandom may not produce high-quality randomness. Entropy running low is not a problem if the CSPRNG was initially seeded properly. The CSPRNG is being constantly re-seeded. In Linux 4.8 and onward, /dev/urandom does not deplete the entropy pool (used by /dev/random ) but uses the CSPRNG output from upstream. Use /dev/urandom . Exceptions to the rule In the Cryptography Stack Exchange's When to use /dev/random over /dev/urandom in Linux @otus gives two use cases : Shortly after boot on a low entropy device, if enough entropy has not yet been generated to properly seed /dev/urandom . Generating a one-time pad with information theoretic security If you're worried about (1), you can check the entropy available in /dev/random . If you're doing (2) you'll know it already :) Note: You can check if reading from /dev/random will block , but beware of possible race conditions. Alternative: use neither! @otus also pointed out that the getrandom() system will read from /dev/urandom and only block if the initial seed entropy is unavailable. There are issues with changing /dev/urandom to use getrandom() , but it is conceivable that a new /dev/xrandom device is created based upon getrandom() . macOS It doesn't matter, as Wikipedia says : macOS uses 160-bit Yarrow based on SHA1. There is no difference between /dev/random and /dev/urandom; both behave identically. Apple's iOS also uses Yarrow. FreeBSD It doesn't matter, as Wikipedia says : /dev/urandom is just a link to /dev/random and only blocks until properly seeded. This means that after boot, FreeBSD is smart enough to wait until enough seed entropy has been gathered before delivering a never-ending stream of random goodness. NetBSD Use /dev/urandom , assuming your system has read at least once from /dev/random to ensure proper initial seeding. The rnd(4) manpage says : /dev/urandom never blocks. /dev/random sometimes blocks. Will block early at boot if the system's state is known to be predictable. Applications should read from /dev/urandom when they need randomly generated data, e.g. cryptographic keys or seeds for simulations. Systems should be engineered to judiciously read at least once from /dev/random at boot before running any services that talk to the internet or otherwise require cryptography, in order to avoid generating keys predictably.
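On Linux, checking the entropy estimate mentioned above (relevant to the shortly-after-boot case) is a one-liner:
# the kernel's current estimate of entropy bits available to /dev/random
cat /proc/sys/kernel/random/entropy_avail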
{ "source": [ "https://unix.stackexchange.com/questions/324209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
324,296
I would expect find . -delete to delete the current directory, but it doesn't. Why not?
The findutils maintainers are aware of it ; it's for compatibility with *BSD: One of the reasons that we skip deletion of "." is for compatibility with *BSD, where this action originated. The NEWS in the findutils source code shows that they decided to keep the behavior: #20802: If -delete fails, find's exit status will now be non-zero. However, find still skips trying to delete ".". [UPDATE] Since this question became one of the hot topics, I dived into the FreeBSD source code and came up with a more convincing reason. Let's look at the source code of the find utility in FreeBSD : int f_delete(PLAN *plan __unused, FTSENT *entry) { /* ignore these from fts */ if (strcmp(entry->fts_accpath, ".") == 0 || strcmp(entry->fts_accpath, "..") == 0) return 1; ... /* rmdir directories, unlink everything else */ if (S_ISDIR(entry->fts_statp->st_mode)) { if (rmdir(entry->fts_accpath) < 0 && errno != ENOTEMPTY) warn("-delete: rmdir(%s)", entry->fts_path); } else { if (unlink(entry->fts_accpath) < 0) warn("-delete: unlink(%s)", entry->fts_path); } ... As you can see, if it didn't filter out dot and dot-dot, it would reach the rmdir() C function defined by POSIX's unistd.h . Do a simple test: rmdir with a dot/dot-dot argument returns -1: printf("%d\n", rmdir("..")); Let's take a look at how POSIX describes rmdir : If the path argument refers to a path whose final component is either dot or dot-dot, rmdir() shall fail. No reason is given for why it shall fail. I found that rename explains some of the reason : Renaming dot or dot-dot is prohibited in order to prevent cyclical file system paths. Cyclical file system paths? I looked through The C Programming Language (2nd Edition) and searched for the directory topic; surprisingly, I found that its code is similar : if(strcmp(dp->name,".") == 0 || strcmp(dp->name,"..") == 0) continue; And the comment! Each directory always contains entries for itself, called ".", and its parent, ".."; these must be skipped, or the program will loop forever . "Loop forever" is the same thing that rename describes as "cyclical file system paths" above. I slightly modified the code to make it run on Kali Linux, based on this answer : #include <stdio.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <dirent.h> #include <unistd.h> void fsize(char *); void dirwalk(char *, void (*fcn)(char *)); int main(int argc, char **argv) { if (argc == 1) fsize("."); else while (--argc > 0) { printf("start\n"); fsize(*++argv); } return 0; } void fsize(char *name) { struct stat stbuf; if (stat(name, &stbuf) == -1 ) { fprintf(stderr, "fsize: can't access %s\n", name); return; } if ((stbuf.st_mode & S_IFMT) == S_IFDIR) dirwalk(name, fsize); printf("%81d %s\n", stbuf.st_size, name); } #define MAX_PATH 1024 void dirwalk(char *dir, void (*fcn)(char *)) { char name[MAX_PATH]; struct dirent *dp; DIR *dfd; if ((dfd = opendir(dir)) == NULL) { fprintf(stderr, "dirwalk: can't open %s\n", dir); return; } while ((dp = readdir(dfd)) != NULL) { sleep(1); printf("d_name: S%sG\n", dp->d_name); if (strcmp(dp->d_name, ".") == 0 || strcmp(dp->d_name, "..") == 0) { printf("hole dot\n"); continue; } if (strlen(dir)+strlen(dp->d_name)+2 > sizeof(name)) { printf("mocha\n"); fprintf(stderr, "dirwalk: name %s/%s too long\n", dir, dp->d_name); } else { printf("ice\n"); (*fcn)(dp->d_name); } } closedir(dfd); } Let's see: xb@dnxb:/test/dot$ ls -la total 8 drwxr-xr-x 2 xiaobai xiaobai 4096 Nov 20 04:14 . drwxr-xr-x 3 xiaobai xiaobai 4096 Nov 20 04:14 ..
xb@dnxb:/test/dot$ xb@dnxb:/test/dot$ cc /tmp/kr/fsize.c -o /tmp/kr/a.out xb@dnxb:/test/dot$ /tmp/kr/a.out . start d_name: S..G hole dot d_name: S.G hole dot 4096 . xb@dnxb:/test/dot$ It works correctly. Now, what if I comment out the continue instruction: xb@dnxb:/test/dot$ cc /tmp/kr/fsize.c -o /tmp/kr/a.out xb@dnxb:/test/dot$ /tmp/kr/a.out . start d_name: S..G hole dot ice d_name: S..G hole dot ice d_name: S..G hole dot ice ^C xb@dnxb:/test/dot$ As you can see, I have to use Ctrl + C to kill this infinitely looping program. The program reads the '..' entry, descends into it, reads '..' again, and loops forever. Conclusion: GNU findutils tries to stay compatible with the find utility in *BSD. The find utility in *BSD internally uses the POSIX-compliant rmdir C function, for which dot/dot-dot arguments are not allowed. The reason rmdir does not allow dot/dot-dot is to prevent cyclical file system paths. The C Programming Language by K&R shows an example of how dot/dot-dot would lead to a program that loops forever.
{ "source": [ "https://unix.stackexchange.com/questions/324296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181058/" ] }
324,359
I have a basic understanding of dotfiles in *nix systems, but I am still quite confused about this: Difference between Login Shell and Non-Login Shell? A bunch of different answers (including multiple duplicates) have already addressed the following bullets: How to invoke a login or non-login shell How to detect a login or non-login shell What startup files will be consumed by a login or non-login shell Referred to documentation (e.g., man bash ) for more details What the answers didn't tell (and what I'm still confused about) is: What is the use case of a login or non-login shell? (e.g., I only configured zshrc for zsh and that's enough for most of my personal dev requirements; I know it's not as simple as what vimrc is to vim ) What is the reason to use a login over a non-login shell (besides them consuming different startup files & having different life cycles)?
The idea is that a user should have (at most) one login shell per host.  (Perhaps I should say, one login shell per host per terminal — if you are simultaneously logged in to a host through multiple terminals, you would expect to have multiple login shells.)  This would typically (always?) be the first shell you get upon logging in (hence the name).  So, this scheme allows you to specify actions that you want to happen only once per login and things that you want to happen every time you start a new (interactive) shell. Normally, every other shell you run after logging in will be a descendant (a child of a child of a child …) of the login shell, and therefore will inherit many settings (environment variables, umask , etc.) from the login shell.  And, accordingly, the idea is that the login initialization files ( .login , .profile , etc.) should set the settings that are inheritable, and let .bashrc (or whatever else you use) handle the ones that aren’t ( set , shopt , non-exported shell variables, etc.) Another notion is that the login initialization files (and only they) should do “heavy lifting”, i.e., resource-intensive actions.  For example, you might want to have certain processes running in the background whenever you’re logged in (but only one copy (instance) of them).  You might want to have some status information (e.g., df or who ) displayed when you login, but not every time you start a new interactive shell.  Especially if you have an interactive program/dialog (i.e., one that demands input from you) that you want to run every time you login, you probably don’t want to have it run every time you start a new shell.  As an extreme example, twenty years ago Solaris logged you in to a single, non-graphical, non-windowed shell.  (I believe that it has changed since then.)  It was the job of .login or .profile (or whatever) to start the windowing system, with a command like startx .  (This was useful partly because there were multiple windowing systems available.  Different users had different preferences.  Some users used different systems in different situations, and we had a dialog in our .profile that asked “Which windowing system do you want to use today?”)  Obviously, you wouldn’t want that to run every time you opened a new window or typed sh . It’s been ages since I’ve used anything other than bash except for edge cases.  (For example, I write scripts with #!/bin/sh , so on some systems, my scripts run with dash , and on others they run with bash in POSIX mode.  A few times a year I run csh / tcsh for a few minutes to see how it handles something, or to answer a question.)  If you use multiple shells (e.g., bash and zsh ) on a daily basis, your patterns may be different.  If your primary shell (as defined in /etc/passwd ) is bash , you might want to invoke a zsh login shell, and then perhaps invoke some interactive non-login zsh shells subordinate to that.  You should probably avoid having a login shell that is subordinate to another login shell of the same type. As mentioned in Difference between Login Shell and Non-Login Shell? , the OS X Terminal application runs a login shell, so a typical user will typically have several “login shells” running simultaneously.  This is a somewhat different model from the one I have described above, and may require the user to rethink what he does in his .login or .profile (or whatever) file.  I don’t know whether the OS X developers have documented their rationale for this design decision.  
But I can imagine a situation in which this would be useful.  There was a time when I habitually opened a handful of shell windows when I logged in, and I would set them to different text and background colors (by writing ANSI escape sequences to the screen) to help me keep track of which was which.  Terminal colors are an example of something that is not inherited by children-of-children, but does persist within a window.  So this is the sort of thing that you would want to do every time you started a new Terminal window, but not every time you start a new interactive shell.
{ "source": [ "https://unix.stackexchange.com/questions/324359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198157/" ] }
324,515
OverlayFS has a workdir option, besides the two other directories lowerdir and upperdir , and it needs to be an empty directory. Unfortunately the kernel documentation of overlayfs does not say much about the purpose of this option. The "workdir" needs to be an empty directory on the same filesystem as upperdir. For read-only overlays the workdir may be omitted, along with the upperdir. This gives me the clue that it has to do with writing the merged files. Please explain what's happening in the workdir when files are written or changed in the merged directory. Why is the writable upperdir not enough?
The workdir option is required, and used to prepare files before they are switched to the overlay destination in an atomic action (the workdir needs to be on the same filesystem as the upperdir). Source: http://windsock.io/the-overlay-filesystem/ I would hazard a guess that "the overlay destination" means upperdir . So... certain files (maybe "whiteout" files?) are non-atomically created and configured in workdir and then atomically moved into upperdir .
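For context, here is what a typical mount with all three directories looks like; the paths are placeholders, and the only constraints taken from the documentation above are that workdir is empty and on the same filesystem as upperdir:
# lowerdir stays read-only, writes land in upperdir, workdir is overlayfs' scratch area
mount -t overlay overlay \
      -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged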
{ "source": [ "https://unix.stackexchange.com/questions/324515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29510/" ] }
325,206
This is an exploration question, meaning I'm not completely sure what this question is about, but I think it's about the biggest integer in Bash. Anyhow, I'll define it ostensively. $ echo $((1<<8)) 256 I'm producing an integer by shifting a bit. How far can I go? $ echo $((1<<80000)) 1 Not this far, apparently. (1 is unexpected, and I'll return to it.) But, $ echo $((1<<1022)) 4611686018427387904 is still positive. Not this, however: $ echo $((1<<1023)) -9223372036854775808 And one step further afield, $ echo $((1<<1024)) 1 Why 1? And why the following? $ echo $((1<<1025)) 2 $ echo $((1<<1026)) 4 Would someone like to analyse this series? UPDATE My machine: $ uname -a Linux tomas-Latitude-E4200 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Bash uses intmax_t variables for arithmetic . On your system these are 64 bits in length, so: $ echo $((1<<62)) 4611686018427387904 which is 100000000000000000000000000000000000000000000000000000000000000 in binary (1 followed by 62 0s). Shift that again: $ echo $((1<<63)) -9223372036854775808 which is 1000000000000000000000000000000000000000000000000000000000000000 in binary (63 0s), in two's complement arithmetic. To get the biggest representable integer, you need to subtract 1: $ echo $(((1<<63)-1)) 9223372036854775807 which is 111111111111111111111111111111111111111111111111111111111111111 in binary. As pointed out in ilkkachu 's answer , shifting takes the offset modulo 64 on 64-bit x86 CPUs (whether using RCL or SHL ), which explains the behaviour you're seeing: $ echo $((1<<64)) 1 is equivalent to $((1<<0)) . Thus $((1<<1025)) is $((1<<1)) , $((1<<1026)) is $((1<<2)) ... You'll find the type definitions and maximum values in stdint.h ; on your system: /* Largest integral types. */ #if __WORDSIZE == 64 typedef long int intmax_t; typedef unsigned long int uintmax_t; #else __extension__ typedef long long int intmax_t; __extension__ typedef unsigned long long int uintmax_t; #endif /* Minimum for largest signed integral type. */ # define INTMAX_MIN (-__INT64_C(9223372036854775807)-1) /* Maximum for largest signed integral type. */ # define INTMAX_MAX (__INT64_C(9223372036854775807))
{ "source": [ "https://unix.stackexchange.com/questions/325206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
325,490
I have an http link : http://www.test.com/abc/def/efg/file.jar and I want to save the last part, file.jar, to a variable, so the output string is "file.jar". Condition : the link can have a different length, e.g.: http://www.test.com/abc/def/file.jar. I tried it this way: awk -F'/' '{print $7}' , but the problem is the length of the URL, so I need a command which works for any URL length.
Using awk for this would work, but it's kind of deer hunting with a howitzer. If you already have your URL bare, it's pretty simple to do what you want if you put it into a shell variable and use bash 's built-in parameter substitution: $ myurl='http://www.example.com/long/path/to/example/file.ext' $ echo ${myurl##*/} file.ext The way this works is by removing a prefix that greedily matches '*/', which is what the ## operator does: ${haystack##needle} # removes any matching 'needle' from the # beginning of the variable 'haystack'
{ "source": [ "https://unix.stackexchange.com/questions/325490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168140/" ] }
325,494
I have this: 00:05:40.005 id=32214483 Src=PIPE <[email protected]> (received) [email protected] relayed (1234 bytes) I need to achieve this: 00:05:40.005 id=32214483 [email protected] <[email protected]> (received) [email protected] relayed (1234 bytes) NOTE - I can't swap the data "by column" and apply that to the entire file as I have other data in the file that has the correct format I need. I simply wish to swap out all instances of Src=PIPE with the data in the next column without the <> symbols.
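A hedged sketch of one way to do such a swap with GNU sed, capturing whatever sits between the angle brackets that follow Src=PIPE (input.log is a placeholder file name; lines without Src=PIPE pass through untouched):
sed -E 's/Src=PIPE <([^>]+)>/\1 <\1>/' input.log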
{ "source": [ "https://unix.stackexchange.com/questions/325494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81926/" ] }
325,705
I am currently exploring Debian packages, and I have been reading some code samples. In, for example, the postinst script, every line follows the same pattern: some command || true another command || true So if some command fails, the line still returns true, but I don't see how this affects the output of the program.
The reason for this pattern is that maintainer scripts in Debian packages tend to start with set -e , which causes the shell to exit as soon as any command (strictly speaking, pipeline, list or compound command) exits with a non-zero status. This ensures that errors don't accumulate: as soon as something goes wrong, the script aborts. In cases where a command in the script is allowed to fail, adding || true ensures that the resulting compound command always exits with status zero, so the script doesn't abort. For example, removing a directory shouldn't be a fatal error (preventing a package from being removed); so we'd use rmdir ... || true since rmdir doesn't have an option to tell it to ignore errors.
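A tiny self-contained illustration of how set -e and || true interact (rmdir on a non-existent path is just a convenient command that always fails):
#!/bin/sh
set -e
rmdir /nonexistent || true    # the failure is tolerated, the script keeps going
echo "still running"
rmdir /nonexistent            # without || true, set -e aborts the script here
echo "never printed"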
{ "source": [ "https://unix.stackexchange.com/questions/325705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145899/" ] }
325,932
VirtualBox: As I have a Hyper-Threading-capable CPU, I wonder: is it a bad idea to assign more virtual CPU cores than the number of physical CPU cores, as the following warning suggests (simply using all 8 virtual cores of a 4-physical-core CPU, for instance)? Transcript: More virtual CPUs are assigned to the virtual machine than the number of physical CPUs on the host system (4). This is likely to degrade the performance of your virtual machine. Please consider reducing the number of virtual CPUs. Can someone put some reasoning to this topic? The CPU in question is the Intel Core i7-4700HQ, Ark Intel , CPU Benchmark . Suppose there is no obsolete HW, like an HDD (instead of an SSD) and/or low RAM (16GB here, minimum vm.swappiness , 4GB for this VM), and so on.
Hardware / OS / Software Host : Linux Mint 18 Cinnamon 64-bit (fully updated); Kernel version 4.4.0-47-generic Guest : Windows 8.1 Pro 64-bit (fully updated) Processor : Intel Core i7-4700HQ , (6MB cache, 4 physical cores, or 8 using Hyper-Threading ), CPU Benchmark VirtualBox : Version 5.1.10 r112026 (Qt5.5.1) Guest Additions : Installed and up-to-date Benchmark Tool #1 : WinRAR version 5.40 final 64-bit Benchmark Tool #2 : VeraCrypt version 1.19 final 64-bit Preparation In both cases I waited after boot until the CPU, RAM, disk drive are at stable near zero-point hits. Method Cloning the original virtual machine to have two identical ones. I have, for the second pass, since the reboot disabled Antivirus pointed out at the bottom of this answer and updated WinRAR in both cases from a Beta to the Final version. I have done the same Preparation as pointed out earlier. The virtual machine ran in foreground, without any other CPU time hungry application running, I have disabled what I could for the purpose of the test not being influenced. To include potential caching inside or outside the system, I ran the same test twice consequently. The benefit being almost none. Results WinRAR 4 cores => 7.5 minutes ( shorter time is better) WinRAR with 4 cores enabled, 1.5GiB processed in 7.5 minutes. 8 cores => 4.5 minutes ( shorter time is better) WinRAR with 8 cores enabled, 1.5GiB processed in 4.5 minutes. VeraCrypt 4 cores => speed 2.6 GiB/s ( higher speed is better) VeraCrypt with 4 cores enabled, HW-accelerated AES (AES-NI) speed 2.6 GiB/s. 8 cores => speed 3.9 GiB/s ( higher speed is better) VeraCrypt with 8 cores enabled, HW-accelerated AES (AES-NI) speed 3.9 GiB/s. Conclusion I could run as many tests as necessary. But I figure, if these two, one of which is rather complex compression test, the second being a set of rather complex encryption tests, what would be the point. Both of the benchmarks show a marked difference. I see no reason to believe, that their results are inaccurate, as I followed a rather rigorous preparation and method, moreover these tests have taken place in RAM to rule out I/O bottleneck. From my standpoint, the warning mentioned in the question may apply to some conditions, but certainly not all of them. Having shared with you these pretty remarkable results, I am certain for you to agree with me, that this warning probably should not be taken so seriously on modern CPUs featuring Hyper-Threading with the latest VirtualBox version. One thing for sure: Don't take me for the word and test it under your own conditions, before you decide to apply this setting permanently. New benchmark Host + Guest : Linux Mint 19.2 "Tina" - Cinnamon (64-bit) ; both with kernel: 5.3.0-24-generic . Processor : Intel® Core™ i7-7700HQ ; 6 MB Cache, up to 3.80 GHz, 4 physical cores, or 8 using Hyper-Threading, CPU Benchmark comparison VirtualBox : Version 6.1.0 r135406 (Qt5.9.5) Guest Additions : Installed and up-to-date Benchmark Tool : VeraCrypt version 1.24 Hotfix1 64-bit final ( web page , direct deb download link ) Preparation and Method Same as previous benchmark. Results VeraCrypt AES encryption with 4 cores ⟶ speed 4.8 GiB/s (higher speed is better) VeraCrypt AES encryption with 8 cores ( Hyper-Threading warning issued) ⟶ speed 7.2 GiB/s (higher speed is better) Conclusion Wonderful 50% performance increase with Hyper-Threading enabled, but only with the AES sadly, I will have to run some more comprehensive test. Will be back in a few days with results.
{ "source": [ "https://unix.stackexchange.com/questions/325932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
325,985
I have a large file, and from each sequential block of 50 lines I would like to print the 15th and 25th lines. sed -n '15,25p' inputfile How can I modify this command to print only lines 15 and 25, and to repeat that for every 50 lines in the file?
awk 'NR % 50 == 15 || NR % 50 == 25' would be the obvious portable way. Note a GNU sed alternative: sed '15~50b;25~50b;d' With any sed , you can always do: sed -n 'n;n;n;n;n;n;n;n;n;n;n;n;n;n;p;n;n;n;n;n;n;n;n;n;n;p;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n;n' (get next line 14 times, print, next line 10 times, print, next line 25 times, back to the next cycle (which grabs the missing extra line to make 50)).
{ "source": [ "https://unix.stackexchange.com/questions/325985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112129/" ] }
326,579
At first, the question seems to be a little bit silly/confusing as the OS does the job of managing process execution. However, I want to measure how much some processes are CPU/IO-bound and I feel like my OS is interfering on my experiments with, for instance, scheduled OS processes. Take as an example the following situation: I ran the process A twice and got the following output from the tool "time" (time columns in seconds): +---+-------+---------+-----------+---------+ |Run|Process|User Time|System Time|Wall time| +---+-------+---------+-----------+---------+ |1 |A |196.3 |5.12 |148.86 | |2 |A |190.79 |4.93 |475.46 | +---+-------+---------+-----------+---------+ As we can see, although the user and sys time are similar, the elapsed time of both drastically changes (diff. of ~5 min). Feels like something in my environment caused some sort of contention. I want to stop every possible background process/services to avoid any kind of noise during my experiments but I consider myself a novice/intermediate unix-user and I don't know how to guarantee this. I'm using Linux 4.4.0-45-generic with Ubuntu 14.04 LTS 64 bit. I really appreciate the assistance. If you guys need any missing information, I will promptly edit my post. CPU Info $ grep proc /proc/cpuinfo | wc -l 8 $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 60 Stepping: 3 CPU MHz: 4002.609 BogoMIPS: 7183.60 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 8192K NUMA node0 CPU(s): 0-7
You have a kernel option configuration where a CPU won't be used by the OS, it is called isolcpus . isolcpus — Isolate CPUs from the kernel scheduler. Synopsis isolcpus= cpu_number [, cpu_number ,...] Description Remove the specified CPUs, as defined by the cpu_number values, from the general kernel SMP balancing and scheduler algroithms. The only way to move a process onto or off an "isolated" CPU is via the CPU affinity syscalls. cpu_number begins at 0, so the maximum value is 1 less than the number of CPUs on the system. This configuration I am about to describe how to setup, can have far more uses than for testing. Meru for instance, uses this technology in their Linux-based AP controllers, to keep the network traffic from interfering with the inner workings of the OS, namely I/O operations. I also use it in a very busy web frontend, for quite the same reasons: I have found out from life experience that I lost control too regularly for my taste of that server ; had to reboot it forcefully until I separated the front end daemon on it´s own dedicated CPUs. As you have 8 CPUs, that you can check with the output of the command: $ grep -c proc /proc/cpuinfo 8 or $ lscpu | grep '^CPU.s' CPU(s): 8 Add in Debian/Ubuntu in the file /etc/default/grub to the option GRUB_CMDLINE_LINUX : GRUB_CMDLINE_LINUX="isolcpus=7" (it is 7, because it starts in 0, and you have 8 cores) Then run, sudo update-grub This is telling the kernel to not use one of your cores. Reboot the system. Then start your process. Immediately after starting it, you can change for the 8th CPU (7 because 0 is the 1st), and be quite sure you are the only one using that CPU. For that, use the command: taskset -cp 7 PID_number taskset - retrieve or set a processes’s CPU affinity SYNOPSIS taskset [options] [mask | list ] [pid | command [arg]...] DESCRIPTION taskset is used to set or retrieve the CPU affinity of a running pro cess given its PID or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. For reading more about it, see: isolcpus, numactl and taskset Also using ps -eF you should see in the PSR column the processor being used. I have a server with CPU 2 and 3 isolated, and indeed, it can be seen with ps -e the only process in userland as intended, is pound . 
# ps -eo psr,command | tr -s " " | grep "^ [2|3]" 2 [cpuhp/2] 2 [watchdog/2] 2 [migration/2] 2 [ksoftirqd/2] 2 [kworker/2:0] 2 [kworker/2:0H] 3 [cpuhp/3] 3 [watchdog/3] 3 [migration/3] 3 [ksoftirqd/3] 3 [kworker/3:0] 3 [kworker/3:0H] 2 [kworker/2:1] 3 [kworker/3:1] 3 [kworker/3:1H] 3 /usr/sbin/pound If you compare it with the non-isolated CPUs, they are running many more things (the window below slides ): # ps -eo psr,command | tr -s " " | grep "^ [0|1]" 0 init [2] 0 [kthreadd] 0 [ksoftirqd/0] 0 [kworker/0:0H] 0 [rcu_sched] 0 [rcu_bh] 0 [migration/0] 0 [lru-add-drain] 0 [watchdog/0] 0 [cpuhp/0] 1 [cpuhp/1] 1 [watchdog/1] 1 [migration/1] 1 [ksoftirqd/1] 1 [kworker/1:0] 1 [kworker/1:0H] 1 [kdevtmpfs] 0 [netns] 0 [khungtaskd] 0 [oom_reaper] 1 [writeback] 0 [kcompactd0] 0 [ksmd] 1 [khugepaged] 0 [crypto] 1 [kintegrityd] 0 [bioset] 1 [kblockd] 1 [devfreq_wq] 0 [watchdogd] 0 [kswapd0] 0 [vmstat] 1 [kthrotld] 0 [kworker/0:1] 0 [deferwq] 0 [scsi_eh_0] 0 [scsi_tmf_0] 1 [vmw_pvscsi_wq_0] 0 [bioset] 1 [jbd2/sda1-8] 1 [ext4-rsv-conver] 0 [kworker/0:1H] 1 [kworker/1:1H] 1 [bioset] 0 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 1 [bioset] 0 [jbd2/sda3-8] 1 [ext4-rsv-conver] 1 /usr/sbin/rsyslogd 0 /usr/sbin/irqbalance --pid=/var/run/irqbalance.pid 1 /usr/sbin/cron 0 /usr/sbin/sshd 1 /usr/sbin/snmpd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid 1 /sbin/getty 38400 tty1 1 /lib/systemd/systemd-udevd --daemon 0 /usr/sbin/xinetd -pidfile /run/xinetd.pid -stayalive 1 [kworker/1:2] 0 [kworker/u128:1] 0 [kworker/0:2] 0 [bioset] 1 [xfsalloc] 1 [xfs_mru_cache] 1 [jfsIO] 1 [jfsCommit] 0 [jfsCommit] 0 [jfsCommit] 0 [jfsCommit] 0 [jfsSync] 1 [bioset] 0 /usr/bin/monit -c /etc/monit/monitrc 1 /usr/sbin/pound 0 sshd: rui [priv] 0 sshd: rui@pts/0,pts/1 1 -bash 1 -bash 1 -bash 1 [kworker/u128:0] 1 -bash 0 sudo su 1 su 1 bash 0 bash 0 logger -t cmdline root[/home/rui] 1 ps -eo psr,command 0 tr -s 0 grep ^ [0|1] 0 /usr/bin/vmtoolsd
{ "source": [ "https://unix.stackexchange.com/questions/326579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202744/" ] }
326,598
Each shell has an environment variable $HOME set (ex: /Users/lotolo ). If I'm under csh I can unsetenv HOME and still, if I do cd , I'll be in my home. I've tested this also on bash ( unset HOME ) and it's the same behavior. So how does the shell know where my/another user's home is? Where does it read those values? This is not a duplicate since my question is not how do I know, but how does the shell know HOME . And this behavior extends to other users as well.
In the case of csh and tcsh , it records the value of the $HOME variable at the time the shell was started ( in its $home variable as noted by @JdeBP ). If you unset it before starting csh , you'll see something like: $ (unset HOME; csh -c cd) cd: No home directory. For bash (and most other Bourne-like shells), I see a different behaviour than yours. bash-4.4$ unset HOME; cd bash: cd: HOME not set The content of the $HOME variable is initialised by the login process based on information stored in the user database against your user name . The information about the user name itself is not always available. All a shell can know for sure is the userid of the process that is executing it and several users (with different home directories) can share the same userid. So, once $HOME is gone there is no reliable way to get it back. Querying the user database (with getpwxxx() standard API) for the home directory of the first user that has the same uid as the one running the shell would only be an approximation (not to mention the fact that the user database could have changed (or the home directory being defined as a one time value) since the login session started). zsh is the only shell that I know that does that: $ env -u HOME ltrace -e getpw\* zsh -c 'cd && pwd' zsh->getpwuid(1000, 0x496feb, 114, 0x7f9599004697) = 0x7f95992fddc0 /home/chazelas +++ exited (status 0) +++ All other shells I tried either complain about that unset HOME or use / as a default home value. Yet a different behaviour is fish 's, which seems to query the database for the user name stored in $USER if any or do a getpwuid() if not: $ env -u HOME USER=bin ltrace -e getpw\* fish -c 'cd;pwd' fish->getpwnam("bin") = 0x7fd2beba3d80 fish: Unable to create a configuration directory for fish. Your personal settings will not be saved. Please set the $XDG_CONFIG_HOME variable to a directory where the current user has write access. fish: Unable to create a configuration directory for fish. Your personal settings will not be saved. Please set the $XDG_CONFIG_HOME variable to a directory where the current user has write access. --- SIGCHLD (Child exited) --- /bin +++ exited (status 0) +++ $ env -u HOME -u USER ltrace -e getpw\* fish -c 'cd;pwd' fish->getpwuid(1000, 0x7f529eb4fb28, 0x12d8790, 0x7f529e858697) = 0x7f529eb51dc0 fish->getpwnam("chazelas") = 0x7f529eb51d80 --- SIGCHLD (Child exited) --- --- SIGCHLD (Child exited) --- /home/chazelas +++ exited (status 0) +++ SEGV when the user doesn't exist ( https://github.com/fish-shell/fish-shell/issues/3599 ): $ env -u HOME USER=foo fish -c '' zsh: segmentation fault env -u HOME USER=foo fish -c ''
{ "source": [ "https://unix.stackexchange.com/questions/326598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153056/" ] }
326,707
When you're editing a file in vim, it generates a swapfile with the same name as your current file, but with a .swp extension. If .swp is already taken, then it generates a .swo one. If that's already taken, then you get .swa , etc. etc. I couldn't find any documentation on what the exact naming-fallback order is for these files; can anyone clarify by what convention the extensions are chosen?
The particular piece of code that you're looking for (and comment) is in memline.c : /* * Change the ".swp" extension to find another file that can be used. * First decrement the last char: ".swo", ".swn", etc. * If that still isn't enough decrement the last but one char: ".svz" * Can happen when editing many "No Name" buffers. */ if (fname[n - 1] == 'a') /* ".s?a" */ { if (fname[n - 2] == 'a') /* ".saa": tried enough, give up */ { EMSG(_("E326: Too many swap files found")); vim_free(fname); fname = NULL; break; } --fname[n - 2]; /* ".svz", ".suz", etc. */ fname[n - 1] = 'z' + 1; } --fname[n - 1]; /* ".swo", ".swn", etc. */
{ "source": [ "https://unix.stackexchange.com/questions/326707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48496/" ] }
326,897
There seem to be various JavaScript+browser specific ways of decompressing this, but isn't there some way to transform jsonlz4 files into something unlz4 will read?
I was able to unpack the jsonlz4 by using lz4json : apt-get install liblz4-dev git clone https://github.com/andikleen/lz4json.git cd lz4json make ./lz4jsoncat ~/.mozilla/firefox/*/bookmarkbackups/*.jsonlz4
{ "source": [ "https://unix.stackexchange.com/questions/326897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
326,956
Problem with VirtualBox 5.x running on a GNU/Linux Debian 9.x host: an EFI-enabled guest suddenly boots only into the UEFI Interactive Shell. It waits for 5 seconds and then drops to Shell> . I don't remember making any modifications to the host, the guest, or VirtualBox itself.
Plausible fix: In the UEFI Interactive Shell, enter the file system: fs0: Follow up by creating this file: edit startup.nsh Enter this or a similar line into it: \EFI\debian\grubx64.efi Press CTRL + S to save the file. Press ENTER to confirm the file name. Press CTRL + Q to exit the editor. Restart the guest: reset Important notes: For some reason you have only a few seconds to edit and save the file. If it takes you longer, the guest may react with a significant delay, or it may even freeze. Replace debian with your system's id, e.g. ubuntu . You may verify this by simply going into the \EFI\ directory and running ls . Another way: If you don't succeed, and supposing your guest is a Linux type (I myself had to do it this way): Boot from a live USB with any Linux. Mount the root file system. Create this file on the mounted file system, adjusting the path to wherever you have mounted it: /boot/efi/startup.nsh Enter the line explained above into it.
{ "source": [ "https://unix.stackexchange.com/questions/326956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
327,298
I have a directory of 30 TB with billions of files in it, which are formally all JPEG files. I am deleting each folder of files like this: sudo rm -rf bolands-mills-mhcptz This command just runs and doesn't show anything about whether it's working or not. I want to see it as it's deleting files, or see the current status of the command.
You can use rm -v to have rm print one line per file deleted. This way you can see that rm is indeed working to delete files. But if you have billions of files then all you will see is that rm is still working. You will have no idea how many files are already deleted and how many are left. The tool pv can help you with a progress estimation. http://www.ivarch.com/programs/pv.shtml Here is how you would invoke rm with pv, with example output: $ rm -rv dirname | pv -l -s 1000 > logfile 562 0:00:07 [79,8 /s] [====================> ] 56% ETA 0:00:05 In this contrived example I told pv that there are 1000 files. The output from pv shows that 562 are already deleted, elapsed time is 7 seconds, and the estimated time to completion is 5 seconds. Some explanation: pv -l makes pv count by newlines instead of bytes pv -s number tells pv what the total is so that it can give you an estimation. The redirect to logfile at the end is for clean output. Otherwise the status line from pv gets mixed up with the output from rm -v . Bonus: you will have a logfile of what was deleted. But beware the file will get huge. You can also redirect to /dev/null if you don't need a log. To get the number of files you can use this command: $ find dirname | wc -l This also can take a long time if there are billions of files. You can use pv here as well to see how much it has counted: $ find dirname | pv -l | wc -l 278k 0:00:04 [56,8k/s] [ <=> ] 278044 Here it says that it took 4 seconds to count 278k files. The exact count at the end ( 278044 ) is the output from wc -l . If you don't want to wait for the counting then you can either guess the number of files or use pv without estimation: $ rm -rv dirname | pv -l > logfile Like this you will have no estimation to finish but at least you will see how many files are already deleted. Redirect to /dev/null if you don't need the logfile. Nitpick: do you really need sudo ? Usually rm -r is enough to delete recursively; no need for rm -f .
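If the count is worth waiting for, the two steps above can be combined so that pv gets an accurate total (a rough sketch; dirname is the directory from the question):
total=$(find dirname | wc -l)
rm -rv dirname | pv -l -s "$total" > /dev/null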
{ "source": [ "https://unix.stackexchange.com/questions/327298", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203109/" ] }
327,436
# su -l www-data ./http-app.py This account is currently not available. # su -l www-data -c ./http-app.py This account is currently not available. # su -c ./http-app.py www-data This account is currently not available. # su -lc ./http-app.py www-data This account is currently not available. # getent passwd www-data www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin # getent shadow www-data www-data:*:16842:0:99999:7::: # lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.6 (jessie) Release: 8.6 Codename: jessie What's wrong with my su or www-data ? It used to work... Presumably this is because of the /usr/sbin/nologin , but how then do I drop root for this one script, without compromising other services on the system ( nologin has been chosen by the Debian team for a good reason, I want to believe)?
You are using su , which is used to "switch user". Of course it won't work, because www-data is a user account which cannot be used to log in. You have told it so: /usr/sbin/nologin . Maybe what you want is sudo , which is used to "execute a command as another user". sudo -u www-data ./http-app.py
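If the script is started by root anyway (for example from a boot script), runuser from util-linux is a hedged alternative that drops privileges without involving the sudo policy at all:
runuser -u www-data -- ./http-app.py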
{ "source": [ "https://unix.stackexchange.com/questions/327436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111103/" ] }
327,488
I'm running Konsole 16.08.3-1 (installed via pacman ) as my default terminal emulator under Gnome 3.22.2. Normally, when I start Konsole, I hit Ctrl + Shift + M to hide the menu bar; I only sparingly use it, and generally the white menu bar distracts from my overall dark terminal. Is there any way to hide the menu bar persistently so that I don't have to hide it manually every time I start Konsole?
There are actually two different settings. The one you described in your question, Ctrl + Shift + M or Settings > Show Menubar is for the current window only. You can disable the menubar for newly created windows permanently by unchecking Settings > Configure Konsole > General > Show menubar by default or by changing/adding [KonsoleWindow] ShowMenuBarByDefault=false to ~/.config/konsolerc
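The same change can also be made from a shell, assuming KDE's kwriteconfig5 tool is installed:
kwriteconfig5 --file konsolerc --group KonsoleWindow --key ShowMenuBarByDefault false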
{ "source": [ "https://unix.stackexchange.com/questions/327488", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19463/" ] }
327,730
I'm seeing error messages like these below: Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: AER: Multiple Corrected error received: id=0018 Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0018(Receiver ID) Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: device [8086:6f08] error status/mask=00000040/00002000 Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: [ 6] Bad TLP These will cause degraded performance even though they have (so far) been corrected. Obviously, this issue needs to be resolved. However, I cannot find much about it on the Internet. (Maybe I'm looking in the wrong places.) I found only a few links which I will post below. Does anyone know more about these errors? Is it the motherboard, the Samsung 950 Pro, or the GPU (or some combination of these)? The hardware is: Asus X99 Deluxe II Samsung 950 Pro NVMe in the M2. slot on the mb (which shares PCIe port 3). Nothing else is plugged into PCIe port 3. A GeForce GTX 1070 in PCIe slot 1 Core i7 6850K CPU A couple of the links I found mentions the same hardware (X99 Deluxe II mb & Samsung950 Pro). I'm running Arch Linux. I do not find the string "8086:6f08" in journalctl or anywhere else I have thought to search so far. odd error message with nvme ssd (Bad TLP) : linuxquestions https://www.reddit.com/r/linuxquestions/comments/4walnu/odd_error_message_with_nvme_ssd_bad_tlp/ PCIe: Is your card silently struggling with TLP retransmits? http://billauer.co.il/blog/2011/07/pcie-tlp-dllp-retransmit-data-link-layer-error/ GTX 1080 Throwing Bad TLP PCIe Bus Errors - GeForce Forums https://forums.geforce.com/default/topic/957456/gtx-1080-throwing-bad-tlp-pcie-bus-errors/ drivers - PCIe error in dmesg log - Ask Ubuntu https://askubuntu.com/questions/643952/pcie-error-in-dmesg-log 780Ti X99 hard lock - PCIE errors - NVIDIA Developer Forums https://devtalk.nvidia.com/default/topic/779994/linux/780ti-x99-hard-lock-pcie-errors/
I can give at least a few details, even though I cannot fully explain what happens. As described for example here , the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects when there are faulty ones, and the Linux kernel reports that as messages. The kernel option pci=nommconf disables Memory-Mapped PCI Configuration Space, which is available in Linux since kernel 2.6. Very roughly, all PCI devices have an area that describes the device (which you see with lspci -vv ), and the original method to access this area involves going through I/O ports, while PCIe allows this space to be mapped to memory for simpler access. That means in this particular case, something goes wrong when the PCIe controller uses this method to access the configuration space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of those two, or something else. By using pci=nommconf , the configuration space of all devices will be accessed in the original way, and changing the access methods works around this problem. So if you want, it's both resolving and suppressing it.
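To try the workaround permanently on a GRUB-based system, the usual route is a hedged sketch like this (paths are the Arch defaults):
# add pci=nommconf to the kernel command line in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nommconf"
sudo grub-mkconfig -o /boot/grub/grub.cfg    # Debian-based systems use update-grub instead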
{ "source": [ "https://unix.stackexchange.com/questions/327730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
328,131
How would you go about finding the DNS servers used by systemd-resolved , for troubleshooting purposes? Generally I can use dig and test the DNS servers shown in /etc/resolv.conf . (Or windows - ipconfig /all + nslookup ). But that approach doesn't work when resolv.conf just points to a local resolver daemon on a loopback address. What method is used under systemd-resolved, to show the DNS servers it uses? ( unbound has config files I could look into. dnsmasq does too, though I'm not sure if servers can be added dynamically without a config file. Even NetworkManager, now has nmcli , and I see you can query nmcli d show wlan0 to show the DNS configuration for an interface.)
Use resolvectl status ( systemd-resolve --status when using systemd version earlier than 239 ) to show your global and per-link DNS settings .
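On a reasonably recent systemd, these related commands may also be handy (hedged on the resolvectl interface being available):
resolvectl dns                              # list only the DNS servers, per link
resolvectl query unix.stackexchange.com     # test a lookup through systemd-resolved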
{ "source": [ "https://unix.stackexchange.com/questions/328131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
328,553
I'm supposed to be accessing a server in order to link a company's staging and live servers into our deployment loop. An admin over on their side set up the two instances and then created a user on the server for us to SSH in as. This much I'm used to. In my mind now what would happen is I would send them my public key which could be placed inside their authorized keys folder. Instead, however, they sent me a file named id_rsa , which contains -----BEGIN RSA PRIVATE KEY----- , over email. Is this normal? I looked around and can find tonnes of resources on generating and setting up my own keys from scratch, but nothing about starting from the private keys of the server. Should I be using this to generate some key for myself or? I would ask the system admin directly but don't want to appear an idiot and waste the time of everybody in between us. Should I just ignore the key he sent me and ask them to put my public key inside their authorized folder?
In my mind now what would happen is I would send them my public key which could be placed inside their authorized keys folder. What's "in your mind" as what should now happen is correct. Email is not a secure channel of communication, so from a standpoint of proper security, you (and they) should consider that private key compromised. Depending on your technical skill and how diplomatic you want to be, you could do several different things. I would recommend one of the following: Generate your own key pair and attach the public key to an email you send to them, saying: Thanks! Since email isn't a secure distribution method for private keys, could you please put my public key in place, instead? It's attached. Thank them and ask them if they object to you installing your own keypair, since the private key they have sent should be considered compromised after having been sent over email. Generate your own keypair, use the key they sent you to log in the first time, and use that access to edit the authorized_keys file to contain the new public key (and remove the public key corresponding to the compromised private key.) Bottom line: You won't look like an idiot. But, the other admin could be made to look like an idiot very easily. Good diplomacy could avoid that. Edit in response to comments from MontyHarder: Neither of my suggested courses of action involves "fixing things without telling the other admin what he did wrong"; I just did so subtly without throwing him under the bus. However, I will add that I would also follow up (politely) if the subtle clues weren't picked up: Hello, I saw you didn't respond to my comment about email as an insecure channel. I do want to be confident that this won't happen again: Do you understand why I'm making this point about the secure handling of private keys? Best, Toby
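A hedged sketch of the third option; the file names, paths and user@host below are placeholders, not anything from the original exchange:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_client        # your own new key pair
chmod 600 ~/Downloads/id_rsa                             # the key they emailed
ssh-copy-id -i ~/.ssh/id_ed25519_client.pub -o 'IdentityFile=~/Downloads/id_rsa' user@host
ssh -i ~/.ssh/id_ed25519_client user@host                # verify the new key works
# then remove the compromised public key from ~/.ssh/authorized_keys on the server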
{ "source": [ "https://unix.stackexchange.com/questions/328553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204226/" ] }
328,882
I have an array containing some elements, but I want to push new items to the beginning of the array. How do I do that?
To add an element to the beginning of an array use: arr=("new_element" "${arr[@]}") Generally, you would do: arr=("new_element1" "new_element2" "..." "new_elementN" "${arr[@]}") To add an element to the end of an array use: arr=( "${arr[@]}" "new_element" ) Or instead arr+=( "new_element" ) Generally, you would do: arr=( "${arr[@]}" "new_element1" "new_element2" "..." "new_elementN") #Or arr+=( "new_element1" "new_element2" "..." "new_elementN" ) To add an element at a specific index of an array use: Let's say we want to add an element at the position of index 2, arr[2] ; we would actually merge the following sub-arrays: Get all elements before index position 2, arr[0] and arr[1] ; Add the new element; Get all elements from index position 2 to the last, arr[2] , arr[3] , .... arr=( "${arr[@]:0:2}" "new_element" "${arr[@]:2}" ) Removing an element from the array To remove an element from an array (let's say element #3), we need to concatenate two sub-arrays. The first sub-array will hold the elements before element #3 and the second sub-array will contain the elements after element #3. arr=( "${arr[@]:0:2}" "${arr[@]:3}" ) ${arr[@]:0:2} will get two elements, arr[0] and arr[1] , starting from the beginning of the array. ${arr[@]:3} will get all elements from index 3, arr[3] , to the last. One possible handy way to re-build arr excluding element #3 (arr[2]) is: del_element=3; arr=( "${arr[@]:0:$((del_element-1))}" "${arr[@]:$del_element}" ) Specify which element you want to exclude in del_element= . Another possibility to remove an element is using unset (which removes the element at that index, leaving a sparse array): unset -v 'arr[2]' Use a replace pattern if you know the value of your array elements and want to truncate their value (replace with the empty string): arr=( "${arr[@]/PATTERN/}" ) Print the array: printf '%s\n' "${arr[@]}"
{ "source": [ "https://unix.stackexchange.com/questions/328882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119942/" ] }
328,886
I've already seen this answer , but it didn't work! I tested both CentOS 6 and 7 and I got the same error. Interestingly enough when I try to install it on Vm, everything goes smoothly.
{ "source": [ "https://unix.stackexchange.com/questions/328886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183478/" ] }
328,906
What are the commands to find out fan speed and CPU temp in Linux (I know lm-sensors can do the task)? Is there any alternative for that?
For CPU temperature: On Debian: sudo apt-get install lm-sensors On CentOS: sudo yum install lm_sensors Run using: sudo sensors-detect Type sensors to get the CPU temp. For fan speed: sensors | grep -i fan This will output the fan speed. Alternatively, install psensor using: sudo apt-get install psensor One can also use hardinfo: sudo apt-get install hardinfo
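To keep an eye on the values as they change, the output can be refreshed periodically, assuming watch is available:
watch -n 2 sensors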
{ "source": [ "https://unix.stackexchange.com/questions/328906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202905/" ] }
328,911
I cannot fully kill a mysql service on CentOS 7. I tried to find all PIDs: ps -ef | grep 'mysql' and then kill them with kill -9 ... but mysql recreates after some time. Also I tried to kill it like this: killall -KILL mysql mysqld_safe mysqld The same effect. After several seconds mysql rejoins. Why it happens? EDITED: # ps aux | grep mysql root 15284 0.0 0.3 115384 1804 ? Ss 12:10 0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr --wsrep-new-cluster mysql 15743 0.1 40.3 1353412 202276 ? Sl 12:10 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-provider=/usr/lib64/galera3/libgalera_smm.so --wsrep-new-cluster --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=43de3d74-bca8-11e6-a178-57b39b925285:9 root 16303 0.0 0.1 112648 976 pts/0 R+ 12:56 0:00 grep --color=auto mysql I am using a mysql fork (Percona Xtradb Cluster) and it can't be stopped if the node is partitioned from the cluster. It can be stopped only if I disable a mysql service and reboot a node. But it is much better for me to kill the process without node rebooting. So systemctl stop mysql Doesn't work. It tries to stop it but without success. I have installed it from Percona repository via yum: yum install Percona-XtraDB-Cluster-57 Situation is the next: There was 3 nodes and they crashed. After some time only 2 nodes could start. But they are waiting for the 3rd node. They have state: activating. If I try to stop mysql service then it change its state to: deactivating. But it can't be stopped. So, I try to kill mysql service and provision a new cluster from 2 nodes. But I can't stop mysql without reboot (reboot isn't a solution for me).
{ "source": [ "https://unix.stackexchange.com/questions/328911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164881/" ] }
328,912
Since a few days I'm facing an issue while being connected to my server in ssh, for proxy/tunel usage. I - Setup Client Here is the machine : iMac:~ Luca$ sw_vers ProductName: Mac OS X ProductVersion: 10.11.6 BuildVersion: 15G1108 iMac:~ Luca$ sudo sysctl net.inet.ip.forwarding net.inet.ip.forwarding: 0 iMac:~ Luca$ sudo sysctl net.inet.ip.fw.enable net.inet.ip.fw.enable: 1 Tried on three different network. Browser I'm using Firefox 50.0.1 to browse internet, with the FoxyProxy extension configured like so : host address : 127.0.0.1 port : 9999 socks v5 SSH command I'm using Terminal.app to connect in ssh to my server. iMac:~ Luca$ ssh -p 53 -D 9999 luca@myIP Server luca@myServer:~$ ssh -V OpenSSH_6.7p1 Debian-5+deb8u3, OpenSSL 1.0.1t 3 May 2016 luca@myServer:~$ cat /proc/sys/net/ipv4/ip_forward 1 II - Expected Once the connection is open, I can browse any website without any issue (with my IP being my server one). This was fine until a few days. This is still fine if I try : same server (A), another computer (Y) same computer (X), another server (B) From what it looks like, it doesn't work with my computer (X) and my server (A). III - What happens luca@myServer:~$ ssh_dispatch_run_fatal: Connection to myIP: message authentication code incorrect The connection is then closed. This message appears at random time. But I can reproduce it easily with a big data load through the proxy : load multiple videos, download big files, etc... IV - Another way, similar problem If I connect to my server through sftp:// (with FileZilla) with the same login (luca) and same port (53). Then I try to download a file, every <30 seconds I get the following error : Error : Incorrect MAC received on packet Once again, this happen only with my computer (X) and my server (A). If I try another server (B) on the same computer (X) : no problem. If I try the same server (A) on another computer (Y) : no problem. V - What I've tried (and didn't fix) Reboot the server and the computer Restart ssh/sshd on both the server and the computer Delete the knowns_hosts file on the computer Specify a -m and -c with the ssh command Specify a -o GSSAPIKeyExchange=no within the ssh command Uncomment the Ciphers and/or MACs lines within /etc/ssh/ssh_config on the server or/and the computer Tried to look at -vvvvv option with the ssh command and read logs on server/computer, nothing looked related. Any help would be appreciated. APPENDIX Server ssh -Q mac luca@myServer:~$ ssh -Q mac hmac-sha1 hmac-sha1-96 hmac-sha2-256 hmac-sha2-512 hmac-md5 hmac-md5-96 hmac-ripemd160 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Computer ssh -Q mac iMac:~ Luca$ ssh -Q mac hmac-sha1 hmac-sha1-96 hmac-sha2-256 hmac-sha2-512 hmac-md5 hmac-md5-96 hmac-ripemd160 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Server ssh -v -p 53 -D 9999 luca@myIP iMac:~ Luca$ ssh -v -p 53 -D 9999 luca@myIP OpenSSH_6.9p1, LibreSSL 2.1.8 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 21: Applying options for * debug1: Connecting to myIP [myIP] port 53. debug1: Connection established. 
debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_rsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_dsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_ecdsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /Users/Luca/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.9 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u3 debug1: match: OpenSSH_6.7p1 Debian-5+deb8u3 pat OpenSSH* compat 0x04000000 debug1: Authenticating to myIP:53 as 'luca' debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client [email protected] <implicit> none debug1: kex: client->server [email protected] <implicit> none debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ecdsa-sha2-nistp256 SHA256:DUAAYL1r0QUDtRI89JozTTz+bm5wcg4cOSaFaRdbr/Y debug1: Host '[myIP]:53' is known and matches the ECDSA host key. debug1: Found key in /Users/Luca/.ssh/known_hosts:1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/Luca/.ssh/id_rsa debug1: Trying private key: /Users/Luca/.ssh/id_dsa debug1: Trying private key: /Users/Luca/.ssh/id_ecdsa debug1: Trying private key: /Users/Luca/.ssh/id_ed25519 debug1: Next authentication method: password luca@myIP's password: debug1: Authentication succeeded (password). Authenticated to myIP ([myIP]:53). debug1: Local connections to LOCALHOST:9999 forwarded to remote address socks:0 debug1: Local forwarding listening on ::1 port 9999. debug1: channel 0: new [port listener] debug1: Local forwarding listening on 127.0.0.1 port 9999. debug1: channel 1: new [port listener] debug1: channel 2: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = fr_FR.UTF-8 Debian GNU/Linux 8.6 Linux <server> #1 SMP Tue Mar 18 14:48:24 CET 2014 x86_64 GNU/Linux server : 274305 hostname : myServer eth0 IPv4 : myIPv4 eth0 IPv6 : myIPv6 Last login: Thu Dec 8 15:36:09 2016 from XXX.XXX.XXX.XXX luca@myServer:~$ Error I see sometime luca@myServer:~$ Bad packet length 3045540078. padding error: need -1249427218 block 8 mod 6 ssh_dispatch_run_fatal: Connection to 5.39.88.21: message authentication code incorrect Server ssh -o macs=hmac-sha1 -v -p 53 -D 9999 luca@myServer when crash happens iMac:~ Luca$ ssh -o macs=hmac-sha1 -v -p 53 -D 9999 luca@myIP // [...] luca@myServer:~$ debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 3: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. 
debug1: channel 4: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 5: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 6: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 7: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 8: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 9: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 10: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 11: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 12: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 13: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 14: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 15: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 16: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 17: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 18: new [dynamic-tcpip] debug1: Connection to port 9999 forwarding to socks port 0 requested. debug1: channel 19: new [dynamic-tcpip] ssh_dispatch_run_fatal: Connection to myIP : message authentication code incorrect iMac:~ Luca$ After updating SSH on client-side iMac:~ Luca$ ssh -V OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016 iMac:~ Luca$ ssh -p 53 -D 9999 luca@myIP luca@myIP's password: luca@ns3274305:~$ ssh_dispatch_run_fatal: Connection to myIP port 53: message authentication code incorrect iMac:~ Luca$ ssh -o macs=hmac-sha1 -p 53 -D 9999 luca@myIP luca@myIP's password: luca@ns3274305:~$ ssh_dispatch_run_fatal: Connection to myIP port 53: message authentication code incorrect iMac:~ Luca$
{ "source": [ "https://unix.stackexchange.com/questions/328912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204508/" ] }
328,913
I have the following function : GetHostName () { NODE01_CHECK=`cat /etc/hosts | grep -w "node01" | awk '{print $1}'` NODE02_CHECK=`cat /etc/hosts | grep -w "node02" | awk '{print $1}'` IS_NODE1=`ifconfig -a | grep -w $NODE01_CHECK` IS_NODE2=`ifconfig -a | grep -w $NODE02_CHECK` if [[ ! -z $IS_NODE1 ]]; then echo "This is NODE 1" fi if [[ ! -z $IS_NODE2 ]]; then echo "This is Node 2" fi } This script will identify if a certain ip is configured on one of the two nodes belonging to a cluster. This works fine locally, but I need to run it remotely from a server that only knows of the VIP of the cluster. The goal is to transfer some files to both nodes. So when I run : scp -r /tmp/files CLUST_VIP ssh CLUST_VIP <<EOF NODE01_CHECK=`cat /etc/hosts | grep -w "node01" | awk '{print $1}'` NODE02_CHECK=`cat /etc/hosts | grep -w "node02" | awk '{print $1}'` IS_NODE1=`ifconfig -a | grep -w $NODE01_CHECK` IS_NODE2=`ifconfig -a | grep -w $NODE02_CHECK` if [[ ! -z $IS_NODE1 ]]; then scp -r /tmp/files node02 fi if [[ ! -z $IS_NODE2 ]]; then scp -r /tmp/files node01 fi EOF However, now while running the same commands in a ssh block, I get the following messages : Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep --help' for more information. Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep --help' for more information. Pseudo-terminal will not be allocated because stdin is not a terminal. I have also tried using ssh -t and that removed the above errors regarding grep , but the environment variables do not seem to work. Is there a way to use environment variables over a ssh block?
{ "source": [ "https://unix.stackexchange.com/questions/328913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100918/" ] }
329,926
I installed Linux Mint on my laptop along with a pre-installed Windows 10. When I turn on the computer, the normal GRUB menu appears most of the time: But after booting either Linux or Windows and then rebooting, GRUB starts in command line mode, as seen in the following screenshot: There is probably a command that I can type to boot from that prompt, but I don't know it. What works is to reboot using Ctrl+Alt+Del, then pressing F12 repeatedly until the normal GRUB menu appears. Using this technique, it always loads the menu. Rebooting without pressing F12 always reboots in command line mode. I think that the BIOS has EFI enabled, and I installed the GRUB bootloader in /dev/sda. Why is this happening and how can I ensure that GRUB always loads the menu? Edit As suggested in the comments, I tried purging the grub-efi package and reinstalling it. This did not fix the problem, but now when it starts in command prompt mode, GRUB shows the following message: error: no such device: 6fxxxxx-xxxx-xxxx-xxxx-xxxxxee. Entering rescue mode... grub rescue> I checked with the blkid command and that is the identifier of my Linux partition. Maybe this additional bit of information can help figure out what is going on?
The boot process can't find the root partition (the part of the disk that contains the information for starting up the system), so you have to specify its location yourself. I think you have to look at something like this article: how-rescue-non-booting-grub-2-linux Short summary: in the grub rescue> command line, type ls ... to list all available devices. Then you have to go through each and type something like (depending on what is shown by the ls command): ls (hd0,1)/ ls (hd0,2)/ ... and so on, until you find: (hd0,1)/boot/grub OR (hd0,1)/grub ... or, in case of "UEFI", it looks something like: (hd0,1)/efi/boot/grub OR (hd0,1)/efi/grub Now you have to set the boot parameters accordingly - just type the following (with the correct numbers for your case) and after each line press return: set prefix=(hd0,1)/grub ... or (if grub is in a sub-directory): set prefix=(hd0,1)/boot/grub Then continue with set root=(hd0,1) insmod linux insmod normal normal Now it should boot: boot Go to the commandline (e.g. start a "terminal") now, and execute: sudo update-grub ... this should correct the missing information and it should boot next time. If NOT - you have to go through the steps again and might have to repair or install grub again: Please look at the "Boot-Repair"-tool from this article: https://help.ubuntu.com/community/Boot-Repair (I had positive experiences with it when previous steps wouldn't survive the reboot)
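A hedged manual alternative to the Boot-Repair tool, once you are booted into the installed system (the device name is an assumption; point it at the whole disk, not a partition):
sudo grub-install /dev/sda
sudo update-grub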
{ "source": [ "https://unix.stackexchange.com/questions/329926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271/" ] }
329,994
From the bash manual The rules concerning the definition and use of aliases are somewhat confusing. Bash always reads at least one complete line of input before executing any of the commands on that line. Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. The commands following the alias definition on that line are not affected by the new alias. This behavior is also an issue when functions are executed. Aliases are expanded when a function definition is read, not when the function is executed , because a function definition is itself a compound command. As a consequence, aliases defined in a function are not available until after that function is executed . To be safe, always put alias definitions on a separate line, and do not use alias in compound commands. The two sentences "Aliases are expanded when a function definition is read, not when the function is executed" and "aliases defined in a function are not available until after that function is executed" seem to be contrary to each other. Can you explain what they mean respectively?
Aliases are expanded when a function definition is read, not when the function is executed … $ echo "The quick brown fox jumps over the lazy dog." > myfile   $ alias myalias=cat   $ myfunc() { > myalias myfile > }   $ myfunc The quick brown fox jumps over the lazy dog.   $ alias myalias="ls -l"   $ myalias myfile -rw-r--r-- 1 myusername mygroup 45 Dec 13 07:07 myfile   $ myfunc The quick brown fox jumps over the lazy dog. Even though myfunc was defined to call myalias , and I’ve redefined myalias , myfunc still executes the original definition of myalias .  Because the alias was expanded when the function was defined.  In fact, the shell no longer remembers that myfunc calls myalias ; it knows only that myfunc calls cat : $ type myfunc myfunc is a function myfunc () { cat myfile } … aliases defined in a function are not available until after that function is executed. $ echo "The quick brown fox jumps over the lazy dog." > myfile   $ myfunc() { > alias myalias=cat > }   $ myalias myfile -bash: myalias: command not found   $ myfunc   $ myalias myfile The quick brown fox jumps over the lazy dog. The myalias alias isn’t available until the myfunc function has been executed.  (I believe it would be rather odd if defining the function that defines the alias was enough to cause the alias to be defined.)
{ "source": [ "https://unix.stackexchange.com/questions/329994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
329,996
I am testing a systemd timer and trying to override its default timeout, but without success. I'm wondering whether there is a way to ask systemd to tell us when the service is going to be run next. Normal file ( /lib/systemd/system/snapbackend.timer ): # Documentation available at: # https://www.freedesktop.org/software/systemd/man/systemd.timer.html [Unit] Description=Run the snapbackend service once every 5 minutes. [Timer] # You must have an OnBootSec (or OnStartupSec) otherwise it does not auto-start OnBootSec=5min OnUnitActiveSec=5min # The default accuracy is 1 minute. I'm not too sure that either way # will affect us. I am thinking that since our computers will be # permanently running, it probably won't be that inaccurate anyway. # See also: # http://stackoverflow.com/questions/39176514/is-it-correct-that-systemd-timer-accuracysec-parameter-make-the-ticks-slip #AccuracySec=1 [Install] WantedBy=timers.target # vim: syntax=dosini The override file ( /etc/systemd/system/snapbackend.timer.d/override.conf ): # This file was auto-generated by snapmanager.cgi # Feel free to do additional modifications here as # snapmanager.cgi will be aware of them as expected. [Timer] OnUnitActiveSec=30min I ran the following commands and the timer still ticks once every 5 minutes. Could there be a bug in systemd? sudo systemctl stop snapbackend.timer sudo systemctl daemon-reload sudo systemctl start snapbackend.timer So I was also wondering, how can I know when the timer will tick next? Because that would immediately tell me whether it's in 5 min. or 30 min. but from the systemctl status snapbackend.timer says nothing about that. Just wondering whether there is a command that would tell me the delay currently used. For those interested, there is the service file too ( /lib/systemd/system/snapbackend.service ), although I would imagine that this should have no effect on the timer ticks... # Documentation available at: # https://www.freedesktop.org/software/systemd/man/systemd.service.html [Unit] Description=Snap! Websites snapbackend CRON daemon After=snapbase.service snapcommunicator.service snapfirewall.service snaplock.service snapdbproxy.service [Service] # See also the snapbackend.timer file Type=simple WorkingDirectory=~ ProtectHome=true NoNewPrivileges=true ExecStart=/usr/bin/snapbackend ExecStop=/usr/bin/snapstop --timeout 300 $MAINPID User=snapwebsites Group=snapwebsites # No auto-restart, we use the timer to start once in a while # We also want to make systemd think that exit(1) is fine SuccessExitStatus=1 Nice=5 LimitNPROC=1000 # For developers and administrators to get console output #StandardOutput=tty #StandardError=tty #TTYPath=/dev/console # Enter a size to get a core dump in case of a crash #LimitCORE=10G [Install] WantedBy=multi-user.target # vim: syntax=dosini
The state of currently active timers can be shown using systemctl list-timers : $ systemctl list-timers --all NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2016-12-14 08:06:15 CET 21h left Tue 2016-12-13 08:06:15 CET 2h 18min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service 1 timers listed.
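For a single unit, these are also useful on reasonably recent systemd versions (hedged, since the exact output depends on the version):
systemctl status snapbackend.timer                            # shows a "Trigger:" line with the next elapse time
systemctl show snapbackend.timer -p NextElapseUSecRealtime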
{ "source": [ "https://unix.stackexchange.com/questions/329996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57773/" ] }
329,997
I have a file with the following syntax: slave_master: '1.2.3.4' and I would like to replace it with sed or awk this way: slave_master: - '1.2.3.4' - '1.2.3.5' The file is a few hundred lines long and there are other such lines with other IP values which should not be affected. Is it possible to do it with one command? Thanks a lot.
{ "source": [ "https://unix.stackexchange.com/questions/329997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191994/" ] }
330,233
There is not much to explain here. Just want to know why echo $SHELL always gives the output /bin/bash even though I switch to other shells. What do I have to do to make sure the $SHELL gives the correct shell path that I am in. [root@localhost user]# echo $0 bash [root@localhost user]# echo $SHELL /bin/bash [root@localhost user]# csh [root@localhost user]# echo $0 csh [root@localhost user]# echo $SHELL /bin/bash [root@localhost user]# tcsh [root@localhost user]# echo $0 tcsh [root@localhost user]# echo $SHELL /bin/bash [root@localhost user]# sh sh-4.2# echo $0 sh sh-4.2# echo $SHELL /bin/bash sh-4.2# [root@localhost user]# which csh /bin/csh [root@localhost user]# which csh /bin/csh
$SHELL is the environment variable that holds your preferred shell , not the currently running shell. It's initialised by login or any other application that logs you in based on the shell field in your user entry in the passwd database (your login shell ). That variable is used to tell applications like xterm , vim ... what shell they should start for you when they start a shell. You typically change it when you want to use another shell than the one set for you in the passwd database. To get a path of the current shell interpreter, on Linux, and with Bourne or csh like shells, you can do: readlink "/proc/$$/exe" The rc equivalent: readlink /proc/$pid/exe The fish equivalent: set pid %self readlink /proc/$pid/exe csh/tcsh set the $shell variable to the path of the shell. In Bourne-like shells, $0 will contain the first argument that the shell received ( argv[0] ) which by convention is the name of the command being invoked (though login applications, again by convention make the first character a - to tell the shell it's a login shell and should for instance source the .profile or .login file containing your login session customisations) when not called to interpret a script or when called with shell -c '...' without extra arguments. In: $ bash -c 'echo "$0"' bash $ /bin/bash -c 'echo "$0"' /bin/bash It's my shell that calls /bin/bash in both cases, but in the first case with bash as its first argument, and in the second case with /bin/bash . Several shells allow passing arbitrary strings instead like: $ (exec -a whatever bash -c 'echo "$0"') whatever In ksh/bash/zsh/yash.
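A hedged, more portable way to see which shell is actually running the current session is to ask ps about the current process:
ps -p $$ -o comm=     # prints e.g. "tcsh" after you have started tcsh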
{ "source": [ "https://unix.stackexchange.com/questions/330233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205044/" ] }
330,366
When I used an X11 desktop, I could run graphical applications in docker containers by sharing the $DISPLAY variable and /tmp/X11-unix directory. For example: docker run -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some:ubuntu xclock Now, I'm on Fedora 25 running Wayland, so there is no X11 infrastructure to share with the container. How can I launch a graphical application in the container, and have it show up on my desktop? Is there some way to tie in XWayland?
As you say you are running Fedora 25 with Wayland, I assume you are using the Gnome-Wayland desktop. Gnome-Wayland runs Xwayland to support X applications. You can share Xwayland access like you did before with Xorg. Your example command misses XAUTHORITY , and you don't mention xhost . You need one of these ways to allow X applications in docker to access Xwayland (or any X). As all this is not related to Wayland, I refer to How can you run GUI applications in docker container? on how to run X applications in docker. In short, two solutions with xhost: Allow your local user access via xhost: xhost +SI:localuser:$(id -un) and create a similar user with docker run option: --user=$(id -u):$(id -g) Discouraged: Allow root access to X with xhost +SI:localuser:root Related Pitfall : X normally uses shared memory (X extension MIT-SHM ). Docker containers are isolated and cannot access shared memory. That can lead to rendering glitches and RAM access failures. You can avoid that with docker run option --ipc=host . That impacts container isolation as it disables IPC namespacing. Compare: https://github.com/jessfraz/dockerfiles/issues/359 To run Wayland applications in docker without X, you need a running wayland compositor like Gnome-Wayland or Weston. You have to share the Wayland socket. You find it in XDG_RUNTIME_DIR and its name is stored in WAYLAND_DISPLAY . As XDG_RUNTIME_DIR only allows access for its owner, you need the same user in the container as on the host. Example: docker run -e XDG_RUNTIME_DIR=/tmp \ -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \ -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY \ --user=$(id -u):$(id -g) \ imagename waylandapplication QT5 applications also need -e QT_QPA_PLATFORM=wayland and must be started with imagename dbus-launch waylandapplication x11docker for X and Wayland applications in docker is an all-in-one solution. It also takes care of preserving container isolation (which gets lost if you simply share the host X display as in your example).
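Putting the X/Xwayland pieces together, a hedged end-to-end version of the original command could look like this (the image name and xclock are taken from the question):
xhost +SI:localuser:$(id -un)
docker run --rm -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --user=$(id -u):$(id -g) --ipc=host \
    some:ubuntu xclock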
{ "source": [ "https://unix.stackexchange.com/questions/330366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21466/" ] }
330,414
I noticed bash has a short cut for ctrl + T which swaps the last two characters before the cursor. I'm wondering why the engineers decided to include this. Was it inherited from a previous convention? Or is there a practical purpose that this is commonly used for?
It's very useful to quickly fix typos: sl becomes ls with a single Ctrl T . You can use Alt T to swap words too ( e.g. when switching between service and systemctl ...). Historically speaking, the Ctrl T feature came to Bash from Emacs in all likelihood. It probably was copied to Emacs from some other editor; it was present in Stanford's E editor (see Essential E page 13) by 1980, and E had a strong impact on Richard Stallman (as described in Free as in Freedom ). It was implemented in very early versions of Bash, before its first release in 1989, when it was pulled out into the readline library where it lives today (the very first entry in the readline ChangeLog hints at this).
{ "source": [ "https://unix.stackexchange.com/questions/330414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
330,484
I wrote a bash script, and I executed it without compiling it first. It worked perfectly. It can work with or without permissions, but when it comes to C programs, we need to compile the source code. Why?
It means that shell scripts aren't compiled, they're interpreted: the shell interprets scripts one command at a time, and figures out every time how to execute each command. That makes sense for shell scripts since they spend most of their time running other programs anyway. C programs on the other hand are usually compiled: before they can be run, a compiler converts them to machine code in their entirety, once and for all. There have been C interpreters in the past (such as HiSoft 's C interpreter on the Atari ST) but they were very unusual. Nowadays C compilers are very fast; TCC is so fast you can use it to create "C scripts", with a #!/usr/bin/tcc -run shebang, so you can create C programs which run in the same way as shell scripts (from the users' perspective). Some languages commonly have both an interpreter and a compiler: BASIC is one example that springs to mind. You can also find so-called shell script compilers but the ones I've seen are just obfuscating wrappers: they still use a shell to actually interpret the script. As mtraceur points out though a proper shell script compiler would certainly be possible, just not very interesting. Another way of thinking about this is to consider that a shell's script interpreting capability is an extension of its command-line handling capability, which naturally leads to an interpreted approach. C on the other hand was designed to produce stand-alone binaries; this leads to a compiled approach. Languages which are usually compiled do tend to sprout interpreters too, or at least command-line-parsers (known as REPLs, read-eval-print loops ; a shell is itself a REPL).
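The TCC "C script" idea mentioned above can be tried with a few shell commands; this is a sketch that assumes tcc is installed at /usr/bin/tcc:
cat > hello.c <<'EOF'
#!/usr/bin/tcc -run
#include <stdio.h>
int main(void) { puts("hello from a C script"); return 0; }
EOF
chmod +x hello.c
./hello.c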
{ "source": [ "https://unix.stackexchange.com/questions/330484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98965/" ] }
330,660
This script does not echo "after": #!/bin/bash -e echo "before" echo "anything" | grep e # it would if I searched for 'y' instead echo "after" exit It also would if I removed the -e option on the shebang line, but I wish to keep it so my script stops if there is an error. I do not consider grep finding no match as an error. How may I prevent it from exiting so abruptly?
echo "anything" | { grep e || true; } Explanation: $ echo "anything" | grep e ### error $ echo $? 1 $ echo "anything" | { grep e || true; } ### no error $ echo $? 0 ### DopeGhoti's "no-op" version ### (Potentially avoids spawning a process, if `true` is not a builtin): $ echo "anything" | { grep e || :; } ### no error $ echo $? 0 The "||" means "or". If the first part of the command "fails" (meaning "grep e" returns a non-zero exit code) then the part after the "||" is executed, succeeds and returns zero as the exit code ( true always returns zero).
{ "source": [ "https://unix.stackexchange.com/questions/330660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87656/" ] }
330,690
I'm working on a script that runs a command as sudo and echoes a line of text ONLY if my sudo privileges have timed out, so only if running a command with sudo would require my user (not root) to type its password again. How do I verify that? Mind that $(id -u) even when running as sudo will return my current user id, so that can't be checked against 0... I need a method that would check this quietly.
Use the option -n to check whether you still have privileges; from man sudo : -n , --non-interactive Avoid prompting the user for input of any kind. If a password is required for the command to run, sudo will display an error message and exit. For example, sudo -n true 2>/dev/null && echo Privileges active || echo Privileges inactive Be aware that it is possible for the privileges to expire between checking with sudo -n true and actually using them. You may want to try directly with sudo -n command... and in case of failure display a message and possibly retry running sudo interactively. Edit: See also ruakh's comment below.
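Applied to the use case in the question (print a line only when the timestamp has expired), a hedged sketch looks like this; somecommand is a placeholder:
if ! sudo -n true 2>/dev/null; then
    echo "sudo privileges have timed out; you will be asked for your password again"
fi
sudo somecommand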
{ "source": [ "https://unix.stackexchange.com/questions/330690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183485/" ] }
330,742
I have an external hard drive which is encrypted via LUKS. It contains an ext4 fs. I just got an error from rsync for a file which is located on this drive: rsync: readlink_stat("/home/some/dir/items.json") failed: Structure needs cleaning (117) If I try to delete the file I get the same error: rm /home/some/dir/items.json rm: cannot remove ‘//home/some/dir/items.json’: Structure needs cleaning Does anyone know what I can do to remove the file and fix related issues with the drive/fs (if there are any)?
That is strongly indicative of file-system corruption. You should unmount, make a sector-level backup of your disk, and then run e2fsck to see what is up. If there is major corruption, you may later be happy that you did a sector-level backup before letting e2fsck tamper with the data.
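A hedged outline of those steps; the mapper name and backup paths are assumptions, adjust them to your LUKS volume:
umount /mnt/external
ddrescue /dev/mapper/external /backup/external.img /backup/external.map   # sector-level copy (gddrescue package)
e2fsck -f /dev/mapper/external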
{ "source": [ "https://unix.stackexchange.com/questions/330742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148560/" ] }
330,876
The Bash command cd - prints the previously used directory and changes to it. On the other hand, the Bash command cd ~- directly changes to the previously used directory, without echoing anything. Is that the only difference? What is the use case for each of the commands?
There are two things at play here. First, the - alone is expanded to your previous directory. This is explained in the cd section of man bash (emphasis mine): An argument of - is converted to $OLDPWD before the directory change is attempted. If a non-empty directory name from CDPATH is used, or if - is the first argument, and the directory change is successful, the absolute pathname of the new working directory is written to the standard output. The return value is true if the directory was successfully changed; false otherwise. So, a simple cd - will move you back to your previous directory and print the directory's name out. The other command is documented in the "Tilde Expansion" section: If the tilde-prefix is a ~+ , the value of the shell variable PWD replaces the tilde-prefix. If the tilde-prefix is a ~- , the value of the shell variable OLDPWD, if it is set, is substituted. If the characters following the tilde in the tilde-prefix consist of a number N, optionally prefixed by a + or a - , the tilde-prefix is replaced with the corresponding element from the directory stack, as it would be displayed by the dirs builtin invoked with the tilde-prefix as an argument. If the characters following the tilde in the tilde-prefix consist of a number without a leading + or - , + is assumed. This might be easier to understand with an example: $ pwd /home/terdon $ cd ~/foo $ pwd /home/terdon/foo $ cd /etc $ pwd /etc $ echo ~ ## prints $HOME /home/terdon $ echo ~+ ## prints $PWD /etc $ echo ~- ## prints $OLDPWD /home/terdon/foo So, in general, the - means "the previous directory". That's why cd - by itself will move you back to wherever you were. The main difference is that cd - is specific to the cd builtin. If you try to echo - it will just print a - . The ~- is part of the tilde expansion functionality and behaves similarly to a variable. That's why you can echo ~- and get something meaningful. You can also use it in cd ~- but you could just as well use it in any other command. For example cp ~-/* . which would be equivalent to cp "$OLDPWD"/* .
{ "source": [ "https://unix.stackexchange.com/questions/330876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
331,208
Suppose I have a folder: cd /home/cpm135/public_html and make a symbolic link ln -s /var/lib/class . Later, I'm in that directory: cd /home/cpm135/public_html/class The pwd is going to tell me I'm in /home/cpm135/public_html/class Is there any way to know that I'm "really" in /var/lib/class ? Thanks
Depending on how your pwd command is configured, it may default to showing the logical working directory (output by pwd -L ) which would show the symlink location, or the physical working directory (output by pwd -P ) which ignores the symlink and shows the "real" directory. For complete information you can do file "$(pwd -L)" Inside a symlink, this will return /path/of/symlink: symbolic link to /path/of/real/directory
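With the paths from the question, the difference looks roughly like this:
$ cd /home/cpm135/public_html/class
$ pwd -L
/home/cpm135/public_html/class
$ pwd -P
/var/lib/class
$ readlink -f .
/var/lib/class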
{ "source": [ "https://unix.stackexchange.com/questions/331208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104388/" ] }
331,211
I run a server that has Zabbix. Recently, I've noticed that it's running out of space. Is there any easy way to increase space without losing any data? CentOS is in a VM. I've allocated some space to the VM. I understand that /dev/sda2 is out of space. I assume that /dev/sda4 is unused space... Simply adding space via lvextend produces an error: lvextend -L+5G /dev/sda2 "/dev/sda2": Invalid path for Logical Volume. Run `lvextend --help' for more information. I assume that /dev/sda4 is the unallocated space that I need to add to /dev/sda2 Am I correct?
{ "source": [ "https://unix.stackexchange.com/questions/331211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206096/" ] }
331,216
I am a first-time user of Linux, and I have been able to fix all issues I have met until this one. Upon trying to upgrade the kernel version to something above 4.1 from Debian backports, I am met with the following message: The following packages have unmet dependencies: linux-image-4.7.0-0.bpo.1-amd64: Depends: linux-base (>=4.3~) but 3.5 is to be installed E: Unable to correct problems, you have held broken packages. Scouring the internet has told me that some users fixed it by doing a clean install from scratch, but I feel like I wouldn't learn anything from that if it is fixable - and I have done 5 clean installs already since yesterday.
{ "source": [ "https://unix.stackexchange.com/questions/331216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206101/" ] }
331,419
I found information that nvram is used for BIOS flashing/backup and that it contains some BIOS-related data. Would cat /dev/random > /dev/nvram permanently brick the computer? I'm quite tempted to type this command, but somehow I feel it's not going to end well for my machine, so I'd like to know how dangerous playing with this device is.
I'm curious as to exactly why you'd want to run such a command if you think it might damage your computer... /dev/nvram provides access to the non-volatile memory in the real-time clock on PCs and Ataris. On PCs this is usually known as CMOS memory and stores the BIOS configuration options; you can see the information stored there by looking at /proc/driver/nvram : Checksum status: valid # floppies : 4 Floppy 0 type : none Floppy 1 type : none HD 0 type : ff HD 1 type : ff HD type 48 data: 65471/255/255 C/H/S, precomp 65535, lz 65279 HD type 49 data: 3198/255/0 C/H/S, precomp 0, lz 0 DOS base memory: 630 kB Extended memory: 65535 kB (configured), 65535 kB (tested) Gfx adapter : monochrome FPU : installed All this is handled by the nvram kernel module, which takes care of checksums etc. Most of the information here is only present for historical reasons, and reflects the limitations of old operating systems: the computer I ran this on doesn't have four floppy drives, the hard drive information is incorrect, as is the memory information and display adapter information. I haven't tried writing random values to the device, but I suspect it wouldn't brick your system: at worst, you should be able to recover by clearing the CMOS (there's usually a button or jumper to do that on your motherboard). But I wouldn't try it! The only useful features in the CMOS memory nowadays are RTC-related. In particular, nvram-wakeup can program the CMOS alarm to switch your computer on at a specific time. (So that would be one reason to write to /dev/nvram .)
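If you just want to poke at the device, read-only inspection is the safer experiment. A minimal sketch, assuming the nvram driver is available as a module and you have root:

sudo modprobe nvram          # load the driver if it is not built in
cat /proc/driver/nvram       # decoded, human-readable view
sudo hexdump -C /dev/nvram   # the raw CMOS bytes, read-only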
{ "source": [ "https://unix.stackexchange.com/questions/331419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78925/" ] }
331,522
I'm confused how to include optional arguments/flags when writing a bash script for the following program: The program requires two arguments: run_program --flag1 <value> --flag2 <value> However, there are several optional flags: run_program --flag1 <value> --flag2 <value> --optflag1 <value> --optflag2 <value> --optflag3 <value> --optflag4 <value> --optflag5 <value> I would like to run the bash script such that it takes user arguments. If users only input two arguments in order, then it would be: #!/bin/sh run_program --flag1 $1 --flag2 $2 But what if any of the optional arguments are included? I would think it would be if [ --optflag1 "$3" ]; then run_program --flag1 $1 --flag2 $2 --optflag1 $3 fi But what if $4 is given but not $3?
This article shows two different ways - shift and getopts (and discusses the advantages and disadvantages of the two approaches). With shift your script looks at $1 , decides what action to take, and then executes shift , moving $2 to $1 , $3 to $2 , etc. For example: while :; do case $1 in -a|--flag1) flag1="SET" ;; -b|--flag2) flag2="SET" ;; -c|--optflag1) optflag1="SET" ;; -d|--optflag2) optflag2="SET" ;; -e|--optflag3) optflag3="SET" ;; *) break esac shift done With getopts you define the (short) options in the while expression: while getopts abcde opt; do case $opt in a) flag1="SET" ;; b) flag2="SET" ;; c) optflag1="SET" ;; d) optflag2="SET" ;; e) optflag3="SET" ;; esac done Obviously, these are just code-snippets, and I've left out validation - checking that the mandatory args flag1 and flag2 are set, etc. Which approach you use is to some extent a matter of taste - how portable you want your script to be, whether you can live with short (POSIX) options only or whether you want long (GNU) options, etc.
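Since the flags in the question take values, here is a sketch using getopts with OPTARG for the argument handling; the short option letters are invented for the example and run_program stands in for the real program:

#!/bin/sh
usage() { echo "usage: $0 -a value -b value [-c value]" >&2; exit 1; }

while getopts a:b:c: opt; do
    case $opt in
        a) flag1=$OPTARG ;;
        b) flag2=$OPTARG ;;
        c) optflag1=$OPTARG ;;
        *) usage ;;
    esac
done
shift $((OPTIND - 1))

# the two mandatory values must be present
[ -n "$flag1" ] && [ -n "$flag2" ] || usage

# build the argument list, adding optional flags only when set
set -- --flag1 "$flag1" --flag2 "$flag2"
[ -n "$optflag1" ] && set -- "$@" --optflag1 "$optflag1"
run_program "$@"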
{ "source": [ "https://unix.stackexchange.com/questions/331522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115891/" ] }
331,611
Is there an official POSIX, GNU, or other guideline on where progress reports and logging information (things like "Doing foo; foo done") should be printed? Personally, I tend to write them to stderr so I can redirect stdout and get only the program's actual output. I was recently told that this is not good practice since progress reports aren't actually errors and only error messages should be printed to stderr. Both positions make sense, and of course you can choose one or the other depending on the details of what you are doing, but I would like to know if there's a commonly accepted standard for this. I haven't been able to find any specific rules in POSIX, the GNU coding standards, or any other such widely accepted lists of best practices. We have a few similar questions, but they don't address this exact issue: When to use redirection to stderr in shell scripts : The accepted answer suggests what I tend to do, keep the program's final output on stdout and anything else to stderr. However, this is just presented as a user's opinion, albeit supported by arguments. Should the usage message go to stderr or stdout? : This is specific to help messages but cites the GNU coding standard. This is the sort of thing I'm looking for, just not restricted to help messages only. So, are there any official rules on where progress reports and other informative messages (which aren't part of the program's actual output) should be printed?
Posix defines the standard streams thus : At program start-up, three streams shall be predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device. The GNU C Library describes the standard streams similarly: Variable: FILE * stdout The standard output stream, which is used for normal output from the program. Variable: FILE * stderr The standard error stream, which is used for error messages and diagnostics issued by the program. Thus, standard definitions have little guidance for stream usage beyond “conventional/normal output” and “diagnostic/error output.” In practice, it’s common to redirect either or both of these streams to files and pipelines, where progress indicators will be a problem. Some systems even monitor stderr for output and consider it a sign of problems. Purely auxiliary progress information is therefore problematic on either stream. Instead of sending progress indicators unconditionally to either standard stream, it’s important to recognize that progress output is only appropriate for interactive streams. With that in mind, I recommend writing progress counters only after checking whether the stream is interactive (e.g., with isatty() ) or when explicitly enabled by a command-line option. That’s especially important for progress meters that rely on terminal update behavior to make sense, like %-complete bars. For certain very simple progress messages (“Starting X” ... “Done with X”) it’s more reasonable to include the output even for non-interactive streams. In that case, consider how users might interact with the streams, like searching with grep or paging with less or monitoring with tail -f . If it makes sense to see the progress messages in those contexts, they will be much easier to consume from stdout .
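In a shell script the same check can be done with test -t, which is roughly the shell-level counterpart of isatty(). A hedged sketch:

progress() {
    # draw the progress line only if stderr is an interactive terminal
    if [ -t 2 ]; then
        printf '\rProcessing %s (%d%%)' "$1" "$2" >&2
    fi
}

log() {
    # simple start/done messages go to stdout so they survive redirection
    printf '%s\n' "$*"
}

log "Starting X"
progress file.dat 42
log "Done with X"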
{ "source": [ "https://unix.stackexchange.com/questions/331611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
331,615
I have 2 external storage devices of 1TB each and I want to back all of this up to a server. I want to use rsync to do this, but I have found that of ~100,000 files on each device, ~80,000 files are the same (have the same name and directory path). I could rsync both of these separately, which would merge the files, but I want a way to find out whether the 'mutual' files contain the same content, because I don't want to lose a modified file if they have been modified. Is there a way of checking for this using rsync?
{ "source": [ "https://unix.stackexchange.com/questions/331615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173266/" ] }
331,645
I'd like to extract a file from a Docker image without having to run the image. The docker save option is not currently a viable option for me, as it saves too huge a file just to un-tar a specific file.
You can extract files from an image with the following commands: container_id=$(docker create "$image") docker cp "$container_id:$source_path" "$destination_path" docker rm "$container_id" According to the docker create documentation , this doesn't run the container: The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT . This is similar to docker run -d except the container is never started. You can then use the docker start <container_id> command to start the container at any point. For reference (my previous answer), a less efficient way of extracting a file from an image is the following: docker run some_image cat "$file_path" > "$output_path"
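Wrapped up as a small helper function (the function name and the example image/path are my own invention, not part of Docker):

# usage: docker_extract IMAGE PATH_IN_IMAGE DEST
docker_extract() {
    local image=$1 src=$2 dest=$3 cid
    cid=$(docker create "$image") || return 1
    docker cp "$cid:$src" "$dest"
    docker rm "$cid" > /dev/null
}

docker_extract nginx:latest /etc/nginx/nginx.conf ./nginx.conf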
{ "source": [ "https://unix.stackexchange.com/questions/331645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18009/" ] }
331,696
find . -type f -exec echo {} \; Using the above command, I want to get file names without leading "./" characters. So basically, I want to get: filename Instead of: ./filename Any way to do this?
Use * instead of . and the leading ./ disappears. find * -type f
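Note that find * skips dot-files and can run into argument-list limits in very large directories. If GNU find is available, -printf '%P\n' strips the starting point instead; a sed filter is a portable fallback:

find . -type f -printf '%P\n'      # GNU find only
find . -type f | sed 's|^\./||'    # portable post-processing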
{ "source": [ "https://unix.stackexchange.com/questions/331696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206439/" ] }
331,722
In bash, I am having some difficulty determining what I should use; all my scripts use ">>/dev/stderr". At the bash prompt, if I try: echo test >>/dev/stderr works echo test >> /dev/stderr works echo test >/dev/stderr works echo test > /dev/stderr works echo test >>&2 FAILS! echo test >> &2 FAILS! echo test >&2 works echo test > &2 FAILS! I am willing to change all my scripts to >&2 . It also seems to make a big difference over ssh (after su SomeUser ), where >>/dev/stderr will not work at all (permission denied) and only >&2 will work.
>& n is shell syntax to directly duplicate a file descriptor . File descriptor 2 is stderr; that's how that one works. You can duplicate other file descriptors as well, not just stderr. You can't use append mode here because duplicating a file descriptor never truncates (even if your stderr is a file) and >& is one token, that's why you can't put a space inside it—but >& 2 works. >> name is a different permitted syntax, where name is a file name (and the token is >> ). In this case, you're using the file name /dev/stderr , which by OS-specific handling (on Linux, it's a symlink to /proc/self/fd/2 ) also means standard error. Append and truncate mode both wind up doing the same thing when stderr is a terminal because that can't be truncated. If your standard error is a file, however, it will be truncated: anthony@Zia:~$ bash -c 'echo hi >/dev/stderr; echo bye >/dev/stderr' 2>/tmp/foo anthony@Zia:~$ cat /tmp/foo bye If you're seeing an error with /dev/stderr over ssh, it's possible the server admin has applied some security measure preventing that symlink from working. (E.g., you can't access /proc or /dev ). While I'd expect either to cause all kinds of weird breakage, using the duplicate file descriptor syntax is a perfectly reasonable (and likely slightly more efficient) approach. Personally I prefer it.
{ "source": [ "https://unix.stackexchange.com/questions/331722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30352/" ] }
331,837
Usually, if you edit a script, all running usages of the script are prone to errors. As far as I understand it, bash (other shells too?) reads the script incrementally, so if you modify the script file externally, it starts reading the wrong stuff. Is there any way to prevent this? Example: sleep 20 echo test If you execute this script, bash will read the first line (say 10 bytes) and go to sleep. When it resumes, there can be different contents in the script starting at the 10th byte. It may be in the middle of a line in the new script. Thus the running script will be broken.
Yes, shells, and bash in particular, are careful to read the file one line at a time, so it works the same as when you use it interactively. You'll notice that when the file is not seekable (like a pipe), bash even reads one byte at a time to be sure not to read past the \n character. When the file is seekable, it optimises by reading full blocks at a time, but seeks back to after the \n . That means you can do things like: bash << \EOF read var var's content echo "$var" EOF Or write scripts that update themselves. Which you wouldn't be able to do if it didn't give you that guarantee. Now, it's rare that you want to do things like that and, as you found out, that feature tends to get in the way more often than it is useful. To avoid it, you could try and make sure you don't modify the file in-place (for instance, modify a copy, and move the copy in place (like sed -i or perl -pi and some editors do for instance)). Or you could write your script like: { sleep 20 echo test }; exit (note that it's important that the exit be on the same line as } ; though you could also put it inside the braces just before the closing one). or: main() { sleep 20 echo test } main "$@"; exit The shell will need to read the script up until the exit before starting to do anything. That ensures the shell will not read from the script again. That means the whole script will be stored in memory though. That can also affect the parsing of the script. For instance, in bash : export LC_ALL=fr_FR.UTF-8 echo $'St\ue9phane' Would output that U+00E9 encoded in UTF-8. However, if you change it to: { export LC_ALL=fr_FR.UTF-8 echo $'St\ue9phane' } The \ue9 will be expanded in the charset that was in effect at the time that command was parsed, which in this case is before the export command is executed. Also note that if the source aka . command is used, with some shells, you'll have the same kind of problem for the sourced files. That's not the case with bash though, whose source command reads the file fully before interpreting it. If writing for bash specifically, you could actually make use of that, by adding at the start of the script: if [[ ! $already_sourced ]]; then already_sourced=1 source "$0"; exit fi (I wouldn't rely on that though, as you could imagine future versions of bash could change that behaviour, which can currently be seen as a limitation (bash and AT&T ksh are the only POSIX-like shells that behave like that as far as I can tell), and the already_sourced trick is a bit brittle as it assumes that variable is not in the environment, not to mention that it affects the content of the BASH_SOURCE variable)
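If the goal is simply to edit a script that might currently be running, the copy-and-rename approach mentioned above can look like this (myscript.sh is a placeholder name):

cp myscript.sh myscript.sh.new
"$EDITOR" myscript.sh.new          # edit the copy
mv myscript.sh.new myscript.sh     # atomic rename: any shell still running the
                                   # old file keeps reading the old inode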
{ "source": [ "https://unix.stackexchange.com/questions/331837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92541/" ] }
331,840
Where can I find information about when the unattended updates/upgrades run and what is done (or IF something was done)? I want to enable unattended-upgrades (for security updates) on a Debian virtual server and, yeah, on my Raspberry Pi, too. Do I have to search the /var/log/apt logs for info about WHAT was installed and /var/log/syslog for info about WHEN there was an action? I see no cron entry for when the update process will run, and the configs /etc/apt/apt.conf.d/20auto-upgrades and /etc/apt/apt.conf.d/50unattended-upgrades don't tell me either. Solution (credits to @bahamut): sudo cat /var/log/unattended-upgrades/unattended-upgrades.log 2016-12-22 06:35:26,489 INFO Initial whitelisted packages: 2016-12-22 06:35:26,489 INFO script for unattended-upgrades is executed 2016-12-22 06:35:26,489 INFO allowed sources are: ['origin=Debian,codename=jessie,label=Debian-Security'] 2016-12-22 06:35:35,518 INFO Packages that will be upgraded: libsmbclient libtevent0 libwbclient0 python-samba samba samba-common samba-common-bin samba-dsdb-modules samba-libs samba-vfs-modules smbclient winbind 2016-12-22 06:35:35,523 INFO dpkg-protocol written to »/var/log/unattended-upgrades/unattended-upgrades-dpkg.log« 2016-12-22 06:35:52,336 INFO all upgrades installed
Unattended-upgrades has its own log file in /var/log/unattended-upgrades/unattended-upgrades.log . It is scheduled by anacron: # These lines replace cron's entries 1 5 cron.daily run-parts --report /etc/cron.daily 7 10 cron.weekly run-parts --report /etc/cron.weekly @monthly 15 cron.monthly run-parts --report /etc/cron.monthly Additional information on what was done is located in /var/log/unattended-upgrades/unattended-upgrades-dpkg.log .
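On Debian releases that have moved to systemd, the schedule is usually driven by the apt-daily timers rather than anacron, so these are also worth checking (a hedged sketch; unit names can vary by release):

systemctl list-timers 'apt-daily*'     # when the next automatic run is due
zgrep -h 'Packages that will be upgraded' \
    /var/log/unattended-upgrades/unattended-upgrades.log*   # history, including rotated logs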
{ "source": [ "https://unix.stackexchange.com/questions/331840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161003/" ] }
331,952
Is there a way to use dpkg to view a changelog between different versions of a package? If I wanted to know e.g., why 'passwd' was being upgraded in a recent update is there a way to use dpkg to see what changed? $ dpkg -l passwd Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-==============-============-============-================================= ii passwd 1:4.2-3.1 amd64 change and administer password an It's being upgraded to 1:4.2-3.3... I know with Debian I can look at the package notes and from there at the linked Debian changelog . But this doesn't apply to all deb based distros, and it's awkward for a quick look at what's new.
dpkg does not provide any facility to read the changelog of a package. you should extract the package and read the changelog dpkg -X <package.deb> <folder> then you can read the changelog using the dpkg-parsechangelog utility dpkg-parsechangelog -l <folder>/usr/share/doc/<package>/changelog.Debian.gz Since that's a real pain , if your distro is using apt-get you can use apt-get changelog <packagename> or apt changelog <packagename>
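If you only have the .deb and don't want to unpack the whole tree to disk, dpkg-deb can stream the data archive and tar can pick out just the changelog. A sketch, with the file and package names as examples:

dpkg-deb --fsys-tarfile passwd_4.2-3.3_amd64.deb \
    | tar -xO ./usr/share/doc/passwd/changelog.Debian.gz \
    | gunzip | less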
{ "source": [ "https://unix.stackexchange.com/questions/331952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14792/" ] }
331,977
I just uploaded some files to my laptop running Ubuntu 16.04 LTS using Xender (which is great). The files are pictures and 1 video .MOV extension. When I try to view or play them, I get an error stating File reading failed: VLC could not open the file "/home/blah/Videos/IMG_0006.MOV" (Permission denied). Your input can't be opened: VLC is unable to open the MRL 'file:///home/blah/Videos/IMG_0006.MOV'. Check the log for details. Here is the output of ls -al : total 497040 drwxr-xr-x 2 blah blah 4096 Dec 21 11:31 . drwx------ 27 blah blah 4096 Dec 18 14:41 .. ---------- 1 blah blah 358905035 Sep 5 13:19 IMG_0002.MOV ---------- 1 blah blah 39697387 Sep 25 16:58 IMG_0003.MOV ---------- 1 blah blah 72482166 Sep 25 16:59 IMG_0004.MOV ---------- 1 blah blah 3468251 Sep 25 17:00 IMG_0005.MOV ---------- 1 blah blah 34355357 Sep 25 17:00 IMG_0006.MOV I have searched online, and don't find anything on this type of issue. Any help? Thanks. I installed VLC and mplayer and changed the permissions as follows: -r-------- 1 blah blah 358905035 Sep 5 13:19 IMG_0002.MOV -r-------- 1 blah blah 39697387 Sep 25 16:58 IMG_0003.MOV -r-------- 1 blah blah 72482166 Sep 25 16:59 IMG_0004.MOV -r-------- 1 blah blah 3468251 Sep 25 17:00 IMG_0005.MOV -r-------- 1 blah blah 34355357 Sep 25 17:00 IMG_0006.MOV And both mplayer and VLC play the file now. The fix seems to be a change in permissions.
{ "source": [ "https://unix.stackexchange.com/questions/331977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206636/" ] }
332,005
Is there a way to test whether a shell function exists that will work both for bash and zsh ?
If you want to check that there's a currently defined (or at least potentially marked for autoloading) function by the name foo regardless of whether a builtin/executable/keyword/alias may also be available by that name, you could do: if typeset -f foo > /dev/null; then echo there is a foo function fi Though note that if there's a keyword or alias called foo as well, it would take precedence over the function (when not quoted). The above should work in ksh (where it comes from), zsh and bash .
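Wrapped in a small helper with the output discarded (the helper name is arbitrary):

is_function() {
    typeset -f "$1" > /dev/null 2>&1
}

if is_function foo; then
    echo "there is a foo function"
else
    echo "no function called foo"
fi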
{ "source": [ "https://unix.stackexchange.com/questions/332005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
332,019
I'm running an application that writes to log.txt. The app was updated to a new version, making the supported plugins no longer compatible. It forces an enormous amount of errors into log.txt and does not seem to support writing to a different log file. How can I write them to a different log? I've considered replacing log.txt with a hard link (application can't tell the difference right?) Or a hard link that points to /dev/null. What are my options?
# cp -a /dev/null log.txt This copies your null device with the right major and minor dev numbers to log.txt so you have another null . Devices are not known by name at all in the kernel but rather by their major and minor numbers. Since I don't know what OS you have I found it convenient to just copy the numbers from where we already know they are. If you make it with the wrong major and minor numbers, you would most likely have made some other device, perhaps a disk or something else you don't want writing to.
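If the filesystem holding log.txt does not allow device nodes (for example it is mounted nodev), a symlink to the real null device is an alternative; note the application may replace the link if it deletes and recreates its log file:

ln -sf /dev/null log.txt    # writes to log.txt now land in /dev/null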
{ "source": [ "https://unix.stackexchange.com/questions/332019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82447/" ] }
332,048
I am sharing documents by running a hotspot in conjunction with dnsmasq, which redirects all name queries to an IP <IP> where the documents can be found: create_ap wlan0 wlan0 HereAreTheDocuments echo "address=/#/<IP>" >> /dev/dnsmasq.conf service dnsmasq start I need to force users connected to my hotspot to set my IP as their DNS. How can I force connected users to use the local DNS instead of a remote one? For instance, lots of machines are using Google DNS at 8.8.8.8 and 8.8.4.4
{ "source": [ "https://unix.stackexchange.com/questions/332048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
332,056
I would like to take a screenshot of the KDE Plasma 5 splash screen as I am creating a new splash theme. But pressing PrtSc during the splash doesn't launch spectacle (my screenshooter) until after the splash screen is gone and the screenshot it takes is of the desktop as it appears after the splash screen.
{ "source": [ "https://unix.stackexchange.com/questions/332056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
332,061
I have a dedicated server with 3 SSD drives in RAID 1. Output of cat /proc/mdstat : Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md4 : active raid1 sdc4[2] sdb4[1] sda4[0] 106738624 blocks [3/3] [UUU] bitmap: 0/1 pages [0KB], 65536KB chunk md2 : active raid1 sdc2[2] sda2[0] sdb2[1] 5497792 blocks [3/3] [UUU] md1 : active raid1 sda1[0] sdc1[2] sdb1[1] 259008 blocks [3/3] [UUU] unused devices: <none> How can a drive be safely removed from the soft RAID without losing any data? I would like to remove a drive from the array in order to reformat it and use it independently, while keeping the most important data mirrored.
You've got a three-way mirror there: each drive has a complete copy of all data. Assuming the drive you want to remove is /dev/sdc , and you want to remove it from all three arrays, you'd perform the following steps for /dev/sdc1 , /dev/sdc2 , and /dev/sdc4 . Step 1: Remove the drive from the array. You can't remove an active device from an array, so you need to mark it as failed first. mdadm /dev/md1 --fail /dev/sdc1 mdadm /dev/md1 --remove /dev/sdc1 Step 2: Erase the RAID metadata so the kernel won't try to re-add it: wipefs -a /dev/sdc1 Step 3: Shrink the array so it's only a two-way mirror, not a three-way mirror with a missing drive: mdadm --grow /dev/md1 --raid-devices=2 You may need to remove the write-intent bitmap from /dev/md4 before shrinking it (the manual isn't clear on this), in which case you'd do so just before step 3 with mdadm --grow /dev/md4 --bitmap=none , then put it back afterwards with mdadm --grow /dev/md4 --bitmap=internal .
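Before and after each step it is worth confirming what the arrays think is going on; a short checklist, again assuming /dev/sdc is the disk being pulled:

cat /proc/mdstat              # overall view: each array should go from [UUU] to [UU]
mdadm --detail /dev/md1       # per-array state and remaining members
mdadm --examine /dev/sdc1     # should report no md superblock after the wipefs step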
{ "source": [ "https://unix.stackexchange.com/questions/332061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202601/" ] }
332,419
I'm using tmux 2.1 and tried to turn mouse mode on with set -g mouse on And it works fine: I can switch across tmux window splits by clicking the appropriate window. But the downside of this is that I cannot select text with the mouse. Here is how it looks: As you can see, the selection just becomes red while I keep pressing the mouse button and disappears when I release the button. Without mouse mode enabled, the "selection with mouse" works completely fine. Is there some workaround to turn mouse mode on and still have the ability to select text?
If you press Shift while doing things with the mouse, that overrides the mouse protocol and lets you select/paste. It's documented in the xterm manual for instance, and most terminal emulators copy that behavior. Notes for OS X: In iTerm, use Option instead of Shift .  In Terminal.app, use Fn .
{ "source": [ "https://unix.stackexchange.com/questions/332419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56819/" ] }
332,457
I'm having a problem with the Esc key when I want to return to command mode from insert mode. Does another key exist that can be used to leave insert mode?
Ctrl - [ sends the same character to the terminal as the physical Esc key. The latter is simply a shortcut for the former, generally.
{ "source": [ "https://unix.stackexchange.com/questions/332457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198926/" ] }
332,531
Bash Manual says: Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd , or the secure shell daemon sshd . If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc , if that file exists and is readable. This Bash sources ~/.bashrc : ssh user@host : But this Bash sources ~/.bash_profile : ssh user@host I don't see a difference in these two commands according to the spec. Isn't stdin connected to a network connection in both cases?
A login shell first reads /etc/profile and then ~/.bash_profile . A non-login shell reads from /etc/bash.bashrc and then ~/.bashrc . Why is that important? Because of this line in man ssh : If command is specified, it is executed on the remote host instead of a login shell. In other words, if the ssh command only has options (not a command), like: ssh user@host It will start a login shell, a login shell reads ~/.bash_profile . An ssh command which does have a command , like: ssh user@host : Where the command is : (or do nothing). It will not start a login shell, therefore ~/.bashrc is what will be read. Remote stdin The supplied tty connection for /dev/stdin in the remote computer may be an actual tty or something else. For: $ ssh isaac@localhost /etc/profile sourced $ ls -la /dev/stdin lrwxrwxrwx 1 root root 15 Dec 24 03:35 /dev/stdin -> /proc/self/fd/0 $ ls -la /proc/self/fd/0 lrwx------ 1 isaac isaac 64 Dec 24 19:34 /proc/self/fd/0 -> /dev/pts/3 $ ls -la /dev/pts/3 crw--w---- 1 isaac tty 136, 3 Dec 24 19:35 /dev/pts/3 Which ends in a TTY (not a network connection) as the started bash sees it. For a ssh connection with a command: $ ssh isaac@localhost 'ls -la /dev/stdin' isaac@localhost's password: lrwxrwxrwx 1 root root 15 Dec 24 03:35 /dev/stdin -> /proc/self/fd/0 The list of TTY's start the same, but note that /etc/profile was not sourced. $ ssh isaac@localhost 'ls -la /proc/self/fd/0' isaac@localhost's password: lr-x------ 1 isaac isaac 64 Dec 24 19:39 /proc/self/fd/0 -> pipe:[6579259] Which tells the shell that the connection is a pipe (not a network connection). So, in both the test cases, the shell is unable to know that the connection is from a network and therefore does not read ~/.bashrc (if we only talk about the connection to a network). It does read ~/.bashrc, but for a different reason.
{ "source": [ "https://unix.stackexchange.com/questions/332531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7157/" ] }
332,532
Assuming user has /bin/bash as the shell in /etc/passwd . Then ssh user@host command runs the command using Bash. However, that shell is neither login nor interactive, which means neither ~/.bash_profile nor ~/.bashrc is sourced. In that case how to set the PATH environment variable so that executables can be found and executed? Is it recommended to prefix the actual command with source ~/.bashrc ? Edit. This question is trivial for Bash, because (as people pointed out) ~/.bashrc is sourced in such case. The definitive answer comes from this paragraph in man bash : Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc , if that file exists and is readable. It will not do this if invoked as sh . The --norc option may be used to inhibit this behavior, and the --rcfile option may be used to force another file to be read, but neither rshd nor sshd generally invoke the shell with those options or allow them to be specified.
You have a few possibilities:
Set the PATH on the server in ~/.ssh/environment (needs to be enabled by PermitUserEnvironment yes in sshd_config ).
Use the full path to the binary.
As you mentioned, manually source .bashrc : prefix the command with . ~/.bashrc (or source ).
Which way you go pretty much depends on the use case.
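A sketch of the first option, with example paths (the sshd restart command differs between distributions):

# on the server, as root: allow per-user environment files
echo 'PermitUserEnvironment yes' >> /etc/ssh/sshd_config
systemctl restart ssh      # or sshd, depending on the distribution

# in the target user's ~/.ssh/environment
PATH=/usr/local/bin:/usr/bin:/bin:/home/user/bin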
{ "source": [ "https://unix.stackexchange.com/questions/332532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7157/" ] }
332,579
Using eval is often discouraged because it allows execution of arbitrary code. However, if we use eval echo , then it looks like the rest of the string will become arguments of echo so it should be safe. Am I correct on this?
Counterexample: DANGEROUS=">foo" eval echo $DANGEROUS The arbitrary arguments to echo could have done something more nefarious than creating a file called "foo".
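If the goal was only to print the value of a variable whose contents you don't control, no eval is needed at all; plain quoting is enough:

DANGEROUS=">foo"
printf '%s\n' "$DANGEROUS"   # prints >foo literally, creates no file
echo "$DANGEROUS"            # also safe here, though printf copes better with values like -n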
{ "source": [ "https://unix.stackexchange.com/questions/332579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7157/" ] }
332,641
I'd like to install the latest Python, which is 3.6 at the time of this post. However, the repository is saying that Python 3.4.2 is the newest version. I've tried: $ sudo apt-get update $ sudo apt-get install python3 python3 is already the newest version. $ python -V Python 3.4.2 To upgrade to Python 3.6 on my Windows workstation, I simply downloaded an exe, clicked "next" a few times, and it's done. What's the proper and officially accepted procedure to install Python 3.6 on Debian Jessie?
You can install Python-3.6 on Debian 8 as follows: wget https://www.python.org/ftp/python/3.6.9/Python-3.6.9.tgz tar xvf Python-3.6.9.tgz cd Python-3.6.9 ./configure --enable-optimizations --enable-shared make -j8 sudo make altinstall python3.6 It is recommended to use make altinstall according to the official website . If you want pip to be included, you need to add --with-ensurepip=install to your configure call. For more details see ./configure --help . Warning: make install can overwrite or masquerade the python binary. make altinstall is therefore recommended instead of make install since it only installs exec_prefix/bin/pythonversion . Some packages need to be installed to avoid some known problems, see: Common build problems (updated) Ubuntu/Debian: sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \ libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \ xz-utils tk-dev libffi-dev liblzma-dev Alternative of libreadline-dev: sudo apt install libedit-dev Fedora/CentOS/RHEL(aws ec2): sudo yum install zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel \ openssl-devel xz xz-devel libffi-devel Alternative of openssl-devel: sudo yum install compat-openssl10-devel --allowerasing Update You can download the latest python-x.y.z.tar.gz from here . To set a default python version and easily switch between them , you need to update your update-alternatives with the multiple python version. Let's say you have installed the python3.7 on debian stretch , use the command whereis python to locate the binary ( */bin/python ). e,g: /usr/local/bin/python3.7 /usr/bin/python2.7 /usr/bin/python3.5 Add the python versions: update-alternatives --install /usr/bin/python python /usr/local/bin/python3.7 50 update-alternatives --install /usr/bin/python python /usr/bin/python2.7 40 update-alternatives --install /usr/bin/python python /usr/bin/python3.5 30 The python3.7 with the 50 priority is now your default python , the python -V will print: Python 3.7.0b2 To switch between them, use: update-alternatives --config python Sample output: There are 3 choices for the alternative python (providing /usr/bin/python). Selection Path Priority Status ------------------------------------------------------------ * 0 /usr/local/bin/python3.7 50 auto mode 1 /usr/bin/python2.7 40 manual mode 2 /usr/bin/python3.5 30 manual mode 3 /usr/local/bin/python3.7 50 manual mode Press <enter> to keep the current choice[*], or type selection number:
{ "source": [ "https://unix.stackexchange.com/questions/332641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207108/" ] }
332,691
I want to construct an xml string by inserting variables: str1="Hello" str2="world" xml='<?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2>' echo $xml The result should be <?xml version="1.0" encoding="iso-8859-1"?><tag1>Hello</tag1><tag2>world</tag2> But what I get is: <?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2> I also tried xml="<?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2>" But that removes the inner double quotes and gives: <?xml version=1.0 encoding=iso-8859-1?><tag1>hello</tag1><tag2>world</tag2>
You can embed variables only in double-quoted strings. An easy and safe way to make this work is to break out of the single-quoted string like this: xml='<?xml version="1.0" encoding="iso-8859-1"?><tag1>'"$str1"'</tag1><tag2>'"$str2"'</tag2>' Notice that after breaking out of the single-quoted string, I enclosed the variables within double-quotes. This is to make it safe to have special characters inside the variables. Since you asked for another way, here's an inferior alternative using printf : xml=$(printf '<?xml version="1.0" encoding="iso-8859-1"?><tag1>%s</tag1><tag2>%s</tag2>' "$str1" "$str2") This is inferior because it uses a sub-shell to achieve the same effect, which is an unnecessary extra process. As @steeldriver wrote in a comment, in modern versions of bash, you can write like this to avoid the sub-shell: printf -v xml ' ... ' "$str1" "$str2" Since printf is a shell builtin, this alternative is probably on par with my first suggestion at the top.
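When the XML grows beyond one line, a here-document read via command substitution is another option some find more readable; variables still expand because the EOF delimiter is unquoted (this shares the sub-shell caveat of the printf variant above):

str1="Hello"
str2="world"
xml=$(cat <<EOF
<?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2>
EOF
)
echo "$xml"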
{ "source": [ "https://unix.stackexchange.com/questions/332691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31464/" ] }