For your first question, you can read it here. For your second question, I'm currently using mount --bind.
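As a rough sketch of the bind-mount approach (the paths below are assumptions for illustration, not taken from the setup above), this exposes the Apache document root inside the FTP user's home without a symlink, which a chrooted/DefaultRoot login typically cannot follow:

$ sudo mkdir -p /home/FTP-shared/www
$ sudo mount --bind /var/www/html /home/FTP-shared/www

To make it survive reboots, an /etc/fstab line along these lines can be used:

/var/www/html  /home/FTP-shared/www  none  bind  0  0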
I'm setting up a proFTPd server so I can upload files to my webserver, but I've never tried this before. I've installed proftpd, added a user with the home folder /home/FTP-shared and given it /bin/false as its shell. But what do I do configuration-wise now in proftpd to be able to log in with this user and upload, download, delete and so on? Also, my idea was to symlink to the Apache www folder from the FTP user's directory; will that work?
Creating a proFTPd user
Castaglia's answer is easier to use with ProFTPd, and works on any system. As a more general solution for Debian packages (including Ubuntu), you can find the configure options in the debian/rules file (that link takes you directly to the version used in 14.04):

CONF_ARGS := --prefix=/usr \
 --with-includes=$(shell pg_config --includedir):$(shell mysql_config --include|sed -e 's/-I//') \
 --mandir=/usr/share/man --sysconfdir=/etc/$(NAME) --localstatedir=/var/run --libexecdir=/usr/lib/$(NAME) \
 --enable-sendfile --enable-facl --enable-dso --enable-autoshadow --enable-ctrls --with-modules=mod_readme \
 --enable-ipv6 --enable-nls --enable-memcache --with-lastlog=/var/log/lastlog --enable-pcre $(DEVELOPT)

To find this yourself, go to the Launchpad page for proftpd-dfsg, click on "Code" at the top of the screen, then on the branch for the release you're interested in, then on "Browse the code". Once you're there you can work your way down to debian/rules.
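If you would rather not browse Launchpad, a rough equivalent (assuming deb-src lines are enabled in your APT sources) is to pull the source package locally and read the same file; and, if memory serves, the installed binary can report its own build settings too:

$ apt-get source proftpd-dfsg
$ grep -A4 'CONF_ARGS' proftpd-dfsg-*/debian/rules
$ proftpd -V    # should print the compile-time settings of the packaged binary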
I need to upgrade proftpd, which is running on my Ubuntu 14.04 server. Since I want to keep all config files as they are, I thought the best option would be to compile the newer version 1.3.5b and just copy in the binary to replace the currently running one. That would work fine in theory, but I am running into issues because I probably do not have the right configure options. Is there a way to see the configure/compile options for the proftpd package?
How to find proftpd compile options Ubuntu 14.04
The strace output indicates that the error is caused by the attempt to create /run/proftpd.sock, which apparently already exists. Try fuser /run/proftpd.sock to see if any process is holding onto it; it will report the PID numbers of any such processes. Then use ps -fp <PID number here> to get more information about the process(es) in question. If it's systemd, you might need to do something like systemctl stop proftpd.socket; systemctl disable proftpd.socket to get rid of it. (In this case, DietPi's default ProFTPD configuration might have been tailored to use systemd's socket activation mechanism - essentially a mechanism that can replace the classic inetd/xinetd in running the FTP daemon on-demand only. As you seem to want to run ProFTPD as a classic stand-alone service, you would need to disable systemd's socket for it.) If it's some other process, you might want to kill it and figure out how to prevent it from getting started again. But if fuser lists no processes at all, it might be that the /run/proftpd.sock is simply a left-over from an earlier test run that did not start correctly; in that case, run rm /run/proftpd.sock and try systemctl start proftpd.service again.
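A condensed sketch of that procedure, using the socket path and unit names from the logs above:

$ fuser /run/proftpd.sock                # who, if anyone, holds the socket?
$ ps -fp <PID number here>               # inspect the reported process
$ sudo systemctl stop proftpd.socket     # only if systemd socket activation owns it
$ sudo systemctl disable proftpd.socket
$ sudo rm /run/proftpd.sock              # only if fuser reported no process at all
$ sudo systemctl start proftpd.service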
I am trying to set up an FTP server on one of my devices that runs DietPi and I selected proFTPD as a server. I have installed the software and followed some set-up information I found here. But then I noticed that the service was not running. After trying to find it via ps aux | grep proftpd I did not succeed. After issuing systemctl status proftpd.service I got the following:

● proftpd.service - LSB: Starts ProFTPD daemon
   Loaded: loaded (/etc/init.d/proftpd; generated)
   Active: failed (Result: exit-code) since Tue 2021-04-13 22:58:49 BST; 9s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 26998 ExecStart=/etc/init.d/proftpd start (code=exited, status=1/FAILURE)

Apr 13 22:58:48 DietPi systemd[1]: Starting LSB: Starts ProFTPD daemon...
Apr 13 22:58:49 DietPi proftpd[26998]: Starting ftp server: proftpd 2021-04-13 22:58:49,163 DietPi proftpd[27005]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use
Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,242 DietPi proftpd[27005]: error: unable to stat() /var/log/proftpd: No such file or directory
Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,244 DietPi proftpd[27005]: mod_ctrls/0.9.5: unable to open ControlsLog '/var/log/proftpd/controls.log': No such file or directory
Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,246 DietPi proftpd[27005]: fatal: ControlsLog: unable to open '/var/log/proftpd/controls.log': No such file or directory on line 68 of '/etc/proftpd/proftpd.conf'
Apr 13 22:58:49 DietPi proftpd[26998]: failed!
Apr 13 22:58:49 DietPi systemd[1]: proftpd.service: Control process exited, code=exited, status=1/FAILURE
Apr 13 22:58:49 DietPi systemd[1]: proftpd.service: Failed with result 'exit-code'.
Apr 13 22:58:49 DietPi systemd[1]: Failed to start LSB: Starts ProFTPD daemon.

So I dug a little bit here and it turns out that no other process runs or binds on port 21. So, what could be the issue of the service failing here? Furthermore, by issuing sudo lsof -i tcp:21 I do not get any response. Also, via nmap I get the following:

PORT   STATE SERVICE
22/tcp open  ssh
53/tcp open  domain
80/tcp open  http

No 21/tcp port here.
Debug via proftpd -nd10 on the cl: roftpd -nd10 2021-04-14 08:13:45,498 DietPi proftpd[951]: using PCRE 8.39 2016-06-14 2021-04-14 08:13:45,508 DietPi proftpd[951]: using TCP receive buffer size of 131072 bytes 2021-04-14 08:13:45,510 DietPi proftpd[951]: using TCP send buffer size of 16384 bytes 2021-04-14 08:13:45,513 DietPi proftpd[951]: testing Unix domain socket using S_ISFIFO 2021-04-14 08:13:45,517 DietPi proftpd[951]: testing Unix domain socket using S_ISSOCK 2021-04-14 08:13:45,519 DietPi proftpd[951]: using S_ISSOCK macro for Unix domain socket detection 2021-04-14 08:13:45,528 DietPi proftpd[951]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use 2021-04-14 08:13:45,532 DietPi proftpd[951]: using 'UTF-8' as local charset for UTF-8 conversion 2021-04-14 08:13:45,535 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:376 2021-04-14 08:13:45,537 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:378 2021-04-14 08:13:45,541 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:385 2021-04-14 08:13:45,544 DietPi proftpd[951]: ROOT PRIVS at parser.c:1187 2021-04-14 08:13:45,549 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ctrls_admin.c' 2021-04-14 08:13:45,554 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ctrls_admin' (from '/usr/lib/proftpd/mod_ctrls_admin.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,558 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_tls.c' 2021-04-14 08:13:45,562 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_tls' (from '/usr/lib/proftpd/mod_tls.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,565 DietPi proftpd[951]: mod_tls/2.7: using OpenSSL 1.1.1d 10 Sep 2019 2021-04-14 08:13:45,587 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_radius.c' 2021-04-14 08:13:45,591 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_radius' (from '/usr/lib/proftpd/mod_radius.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,594 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab.c' 2021-04-14 08:13:45,599 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab' (from '/usr/lib/proftpd/mod_quotatab.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,602 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab_file.c' 2021-04-14 08:13:45,607 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab_file' (from '/usr/lib/proftpd/mod_quotatab_file.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,609 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab_radius.c' 2021-04-14 08:13:45,612 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab_radius' (from '/usr/lib/proftpd/mod_quotatab_radius.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,617 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap.c' 2021-04-14 08:13:45,625 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap' (from '/usr/lib/proftpd/mod_wrap.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,628 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_rewrite.c' 2021-04-14 08:13:45,633 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_rewrite' (from '/usr/lib/proftpd/mod_rewrite.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,636 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_load.c' 2021-04-14 08:13:45,639 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_load' (from '/usr/lib/proftpd/mod_load.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,643 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ban.c' 
2021-04-14 08:13:45,648 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ban' (from '/usr/lib/proftpd/mod_ban.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,651 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap2.c' 2021-04-14 08:13:45,656 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap2' (from '/usr/lib/proftpd/mod_wrap2.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,660 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap2_file.c' 2021-04-14 08:13:45,664 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap2_file' (from '/usr/lib/proftpd/mod_wrap2_file.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,668 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_dynmasq.c' 2021-04-14 08:13:45,673 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_dynmasq' (from '/usr/lib/proftpd/mod_dynmasq.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,675 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_exec.c' 2021-04-14 08:13:45,681 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_exec' (from '/usr/lib/proftpd/mod_exec.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,683 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_shaper.c' 2021-04-14 08:13:45,688 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_shaper' (from '/usr/lib/proftpd/mod_shaper.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,692 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ratio.c' 2021-04-14 08:13:45,696 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ratio' (from '/usr/lib/proftpd/mod_ratio.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,699 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_site_misc.c' 2021-04-14 08:13:45,704 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_site_misc' (from '/usr/lib/proftpd/mod_site_misc.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,706 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_sftp.c' 2021-04-14 08:13:45,722 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_sftp' (from '/usr/lib/proftpd/mod_sftp.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,725 DietPi proftpd[951]: mod_sftp/1.0.0: using OpenSSL 1.1.1d 10 Sep 2019 2021-04-14 08:13:45,737 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_sftp_pam.c' 2021-04-14 08:13:45,741 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_sftp_pam' (from '/usr/lib/proftpd/mod_sftp_pam.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,744 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_facl.c' 2021-04-14 08:13:45,749 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_facl' (from '/usr/lib/proftpd/mod_facl.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,752 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_unique_id.c' 2021-04-14 08:13:45,757 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_unique_id' (from '/usr/lib/proftpd/mod_unique_id.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,762 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_copy.c' 2021-04-14 08:13:45,768 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_copy' (from '/usr/lib/proftpd/mod_copy.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,773 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_deflate.c' 2021-04-14 08:13:45,787 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_deflate' (from '/usr/lib/proftpd/mod_deflate.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,789 DietPi proftpd[951]: 
mod_deflate/0.5.7: using zlib 1.2.11 2021-04-14 08:13:45,792 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ifversion.c' 2021-04-14 08:13:45,798 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ifversion' (from '/usr/lib/proftpd/mod_ifversion.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,800 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_memcache.c' 2021-04-14 08:13:45,805 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_memcache' (from '/usr/lib/proftpd/mod_memcache.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,809 DietPi proftpd[951]: mod_memcache/0.1: using libmemcached-1.0.18 2021-04-14 08:13:45,812 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_tls_memcache.c' 2021-04-14 08:13:45,815 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_tls_memcache' (from '/usr/lib/proftpd/mod_tls_memcache.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,815 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_readme.c' 2021-04-14 08:13:45,823 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_readme' (from '/usr/lib/proftpd/mod_readme.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,825 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ifsession.c' 2021-04-14 08:13:45,831 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ifsession' (from '/usr/lib/proftpd/mod_ifsession.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,835 DietPi proftpd[951]: RELINQUISH PRIVS at parser.c:1190 2021-04-14 08:13:45,838 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:388 2021-04-14 08:13:45,844 DietPi proftpd[951]: DenyFilter: compiling regex '\*.*/' 2021-04-14 08:13:45,857 DietPi proftpd[951]: retrieved UID 1000 for user 'dietpi' 2021-04-14 08:13:45,862 DietPi proftpd[951]: retrieved GID 1000 for group 'dietpi' 2021-04-14 08:13:45,866 DietPi proftpd[951]: <IfModule>: using 'mod_quotatab.c' section at line 53 2021-04-14 08:13:45,868 DietPi proftpd[951]: <IfModule>: using 'mod_ratio.c' section at line 57 2021-04-14 08:13:45,871 DietPi proftpd[951]: <IfModule>: using 'mod_delay.c' section at line 61 2021-04-14 08:13:45,873 DietPi proftpd[951]: <IfModule>: using 'mod_ctrls.c' section at line 65 2021-04-14 08:13:45,874 DietPi proftpd[951]: ROOT PRIVS at mod_ctrls.c:114 2021-04-14 08:13:45,877 DietPi proftpd[951]: RELINQUISH PRIVS at mod_ctrls.c:117 2021-04-14 08:13:45,878 DietPi proftpd[951]: <IfModule>: using 'mod_ctrls_admin.c' section at line 73 2021-04-14 08:13:45,879 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:376 2021-04-14 08:13:45,879 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:378 2021-04-14 08:13:45,879 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:385 2021-04-14 08:13:45,879 DietPi proftpd[951]: processing configuration directory '/etc/proftpd/conf.d/' 2021-04-14 08:13:45,880 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:388 2021-04-14 08:13:45,907 DietPi proftpd[951]: UseReverseDNS off, returning IP address instead of DNS name 2021-04-14 08:13:45,907 DietPi proftpd[951] 127.0.0.1: 2021-04-14 08:13:45,907 DietPi proftpd[951] 127.0.0.1: Config for DietPi FTP: 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: IdentLookups 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: DeferWelcome 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: MultilineRFC2228 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: DefaultServer 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: ShowSymlinks 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: AllowRetrieveRestart 2021-04-14 
08:13:45,908 DietPi proftpd[951] 127.0.0.1: AllowStoreRestart 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutNoTransfer 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutStalled 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutIdle 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DisplayLogin 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DisplayChdir 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: ListOptions 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DenyFilter 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DefaultRoot 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: RootLogin 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: UserID 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: UserName 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: GroupID 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: GroupName 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: Umask 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: DirUmask 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: AllowOverwrite 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: TransferLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: SystemLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: WtmpLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: QuotaEngine 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: Ratios 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: DelayEngine 2021-04-14 08:13:45,912 DietPi proftpd[951] 127.0.0.1: mod_facl/0.6: registered 'facl' FS 2021-04-14 08:13:45,921 DietPi proftpd[951] 127.0.0.1: mod_tls/2.7: generating initial TLS session ticket key 2021-04-14 08:13:45,924 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_tls.c:4815 2021-04-14 08:13:45,927 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_tls.c:4818 2021-04-14 08:13:45,930 DietPi proftpd[951] 127.0.0.1: mod_tls/2.7: scheduling new TLS session ticket key every 3600 secs 2021-04-14 08:13:45,935 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: binding to text domain 'proftpd' using locale path '/usr/share/locale' 2021-04-14 08:13:45,936 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: using locale files in '/usr/share/locale' 2021-04-14 08:13:45,939 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ko_KR': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,943 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'bg_BG': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,945 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ja_JP': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,948 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'en_US': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,951 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'fr_FR': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,954 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'es_ES': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,958 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'zh_TW': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,960 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'zh_CN': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,964 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping 
possible language 'it_IT': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,968 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ru_RU': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,971 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_log.c:2151 2021-04-14 08:13:45,974 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_log.c:2154 2021-04-14 08:13:45,976 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_rlimit.c:555 2021-04-14 08:13:45,978 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_rlimit.c:558 2021-04-14 08:13:45,980 DietPi proftpd[951] 127.0.0.1: set core resource limits for daemon 2021-04-14 08:13:45,981 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_auth_unix.c:1338 2021-04-14 08:13:45,986 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_auth_unix.c:1341 2021-04-14 08:13:45,989 DietPi proftpd[951] 127.0.0.1: retrieved group ID: 1000 2021-04-14 08:13:45,991 DietPi proftpd[951] 127.0.0.1: setting group ID: 1000 2021-04-14 08:13:45,993 DietPi proftpd[951] 127.0.0.1: SETUP PRIVS at main.c:2594 2021-04-14 08:13:45,994 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at main.c:1862 2021-04-14 08:13:45,995 DietPi proftpd[951] 127.0.0.1: deleting existing scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:45,996 DietPi proftpd[951] 127.0.0.1: opening scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:45,998 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at main.c:1889 2021-04-14 08:13:46,002 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_ctrls_admin.c:1632 2021-04-14 08:13:46,002 DietPi proftpd[951] 127.0.0.1: opening scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:46,005 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_ctrls_admin.c:1634 2021-04-14 08:13:46,007 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at inet.c:409 2021-04-14 08:13:46,008 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at inet.c:459 2021-04-14 08:13:46,009 DietPi proftpd[951] 127.0.0.1: Failed binding to ::, port 21: Address already in use 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Check the ServerType directive to ensure you are configured correctly 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Check to see if inetd/xinetd, or another proftpd instance, is already using ::, port 21 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Unable to start proftpd; check logs for more detailsDebug via strace proftpd | grep -E "SOCKET|sock" getpeername(0, 0xbe8a6c1c, [16]) = -1 ENOTSOCK (Socket operation on non-socket) socket(AF_UNIX, SOCK_DGRAM, 0) = 3 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 4 getsockopt(4, SOL_SOCKET, SO_RCVBUF, [131072], [4]) = 0 getsockopt(4, SOL_SOCKET, SO_SNDBUF, [16384], [4]) = 0 socket(AF_UNIX, SOCK_STREAM, 0) = 4 bind(4, {sa_family=AF_UNIX, sun_path="/run/test.sock"}, 110) = 0 unlink("/run/test.sock") = 0 socket(AF_UNIX, SOCK_STREAM, 0) = 4 bind(4, {sa_family=AF_UNIX, sun_path="/run/proftpd.sock"}, 110) = -1 EADDRINUSE (Address already in use) write(2, "2021-04-14 11:08:40,739 DietPiHo"..., 1292021-04-14 11:08:40,739 DietPi proftpd[2682]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use socket(AF_UNIX, 
SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 6 connect(6, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 6 connect(6, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4 connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4 connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, NETLINK_ROUTE) = 4 getsockname(4, {sa_family=AF_NETLINK, nl_pid=2682, nl_groups=00000000}, [12]) = 0 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC, IPPROTO_IP) = 4 getsockname(4, {sa_family=AF_INET, sin_port=htons(44402), sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0 getsockname(4, {sa_family=AF_INET, sin_port=htons(40796), sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 4
proFTPD not working due to socket bind error
It's difficult to answer this question because of the lack of information provided (proftpd.conf, /etc/hosts, output of ifconfig and hostname). My guess is that it's a problem related to your changed hostname. If so, try to modify your /etc/hosts from:

x.y.z.t Debian

where x.y.z.t is your actual IP address, to:

x.y.z.t jon-virtual-machine

I've assumed your hostname output is jon-virtual-machine. Let me know if that's the case.
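A quick sanity check along those lines (a sketch; substitute whatever hostname actually prints on your machine):

$ hostname
jon-virtual-machine
$ grep -w "$(hostname)" /etc/hosts
x.y.z.t jon-virtual-machine

If the grep returns nothing (and DNS does not know the name either), ProFTPD may not be able to resolve the server's own name, which would match the "can't determine IP address" symptom.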
So far I have installed ProFTPD, turned IPv6 off in the conf, changed the server name from Debian to jon-virtual-machine, and jailed users to their home folders. But it says it can't determine the IP address or process the conf file.
ProFTPD won't start or restart [closed]
Many, but I will cite a few off the top of my head.

What if ssh/rsh are not available on the remote server, or if they are broken in terms of configuration or stricter network rules? Using rsh/ssh would still require the client (depending on the sender or receiver role); the remote side would have to fork the rsync binary locally and establish the connection with the rsync process running on the local side. rsh/ssh merely provide a connection tunnel; as far as rsync is concerned, it is communicating with the other rsync process over the pipe(s).

Having an rsync process in daemon mode makes the server a true FTP look-alike server, where selected filesystems can be made available through rsync modules and everything else kept out of reach. Say I want to make only /usr/local and /var available for download and refuse any rsync client's request for other downloads. I can use discretion at the host level or at the filesystem (module) level to allow either upload or download (read only).

I can control host/user level access, authentication, authorization, logging and filesystem (structure) modules for download/upload specifically through a configuration file. Every time a change is made to the configuration file, the rsync daemon does not need to be restarted or HUPped.

I can also control how many clients can connect to the rsync server process at a time. This is good, since I do not want my rsyncd server process to hog the host completely with CPU or disk-based I/O operations.

chroot functionality can be made available through the configuration for rsyncd in daemon mode. I can use this as a pretty neat security feature if I want to stop clients connecting to my rsyncd from reaching files/filesystems that must be secured on the host and should not have outside access.

I can outright refuse some of the options used by the rsync client and not honour them at the server end, such as not allowing the --delete option.

There is also an option to run commands/scripts before and after the rsync transfer. An example would be reporting and storing the rsync stats in post-transfer mode.

These are some of them, but I am sure the expert users of rsync can throw more light on this.
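A minimal rsyncd.conf sketch illustrating several of those points (the module name, network range and the post-transfer script are made up for the example; the directive names are standard rsyncd.conf ones):

# /etc/rsyncd.conf
max connections = 4
use chroot = yes

[usr-local]
    path = /usr/local
    read only = yes
    hosts allow = 192.168.1.0/24
    refuse options = delete
    post-xfer exec = /usr/local/bin/report-rsync-stats.sh   # hypothetical reporting hook

A client would then pull with something like rsync -av rsync://server/usr-local/ ./local-copy/, and the daemon re-reads this file for each connection, so edits take effect without a restart.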
I don't understand the need for an rsync server in daemon mode. What are the benefits from it if I can use rsync with SSH or telnet?
What is the need for rsync server in daemon mode
SCSI and ATA are entirely different standards. They are currently both developed under the aegis of the INCITS standards organization but by different groups. SCSI is under technical committee T10, while ATA is under T13.1 ATA was designed with hard disk drives in mind, only. SCSI is both broader and older, being a standard way of controlling mass storage devices, tape drives, removable optical media drives (CD, DVD, Blu-Ray...), scanners, and many other device types. It wasn't obvious in the mid-1980s — when IDE was introduced to the PC world — that SCSI would get pushed to the margins of the computing world. SCSI was well-established and more capable. Unix workstations and Macintosh computers shipped with SCSI hard disk drives for decades. High-end PCs often had a SCSI card for peripherals at least, and often for the system HDD, too. The early CD-ROM and tape drives for personal computers came out in SCSI form first. The PC industry being what it is, though, there was a push to use the less expensive ATA standard instead of SCSI. The initial compromise was called ATAPI, an extension to ATA that allows a device that understands SCSI internally to receive those SCSI commands over an ATA interface. More on this below. Several years later, SCSI got the ATA command pass-through feature, basically the inverse of ATAPI, allowing ATA commands over a SCSI bus. One use for this facility is to tunnel ATA SMART commands over SCSI. smartmontools does this, for example. Later still, the INCITS T10 committee developed a standard called the SCSI/ATA Translation (SAT), which translates SCSI commands to ATA commands and vice versa.2 The Linux kernel's libata library provides a SAT implementation for Linux, among other things. There is some logical overlap in the SCSI and ATA protocols, since they both control hard disk drives. Both obviously need a way to seek to a particular hard drive sector, retrieve that sector's contents, etc. Nevertheless, the command formats are entirely different; otherwise, we wouldn't need these translation and pass-through mechanisms.SATA actually "talks" SCSIThat is about as true as the assertion that "Cars are pink." Some cars are pink. ATAPI, ATA pass-through, and SAT are only part of the story. Read on.I assume it is taken for granted that they differ on the physical layer, as they do not share compatible cables.That was true in the old parallel SCSI world, but just as SATA replaced PATA, SAS replaced parallel SCSI. SAS and SATA share the same drive connectors, and they are electrically compatible. A SAS controller can talk to SAS and SATA devices, but a SAS drive cannot work with a SATA-only controller. The difference is in the negotiation, and in the commands you can use after the devices on each end of the cable figure out what they are talking to. In fact, a lot of "SATA RAID" controllers are really SAS RAID controllers. Such controllers often have one or more SFF-8087 SAS mating connectors on the card, but you can connect SATA drives to them with an SFF-8087 to 4× SATA breakout cable. So, a SAS/SATA RAID card with two SFF-8087 mating connectors controls up to 8 drives.3 Another common situation is a hot-swap drive enclosure or computer case with a SAS backplane. The backplane usually has an SFF-8087 connector on it, allowing use of a simple 8087-to-8087 cable from the backplane to the disk controller. If the drives in the hot-swap trays are SATA, that's of no matter. 
The SAS controller can talk to them over the SAS cabling, as they sit in drive sleds that plug the drives into the SAS backplane. The drives are still SATA drives, though, speaking the ATA protocol, not SCSI.I also know that ATAPI is an encapsulation for SCSITrue, but ATAPI is only used for devices other than hard disk drives. The main reason this standard exists is to allow an ATA interface to transport SCSI commands like the streaming data commands for a tape drive, the "eject media" command for an optical disk drive, or the "play track" command for a CD audio disc. This fact is becoming less relevant as the non-HDD devices that used to speak SCSI over ATAPI disappear or move on to other interfaces. Low-end tape drives no longer exist, so tape drives are all SAS now.4 Scanners are pretty much USB-only these days. Optical media drives are moving outside the computer case to be connected via USB, or disappearing entirely, leaving just the increasingly rare internal optical drives speaking ATAPI. Regardless, a SATA device that understands SCSI over ATAPI is a "SCSI device" only in a limited way. Such devices will not benefit from most of the advantages of SAS over SATA. These capabilities make SAS distinctly valuable compared to SATA, ATAPI notwithstanding. If you want another car analogy, the fact that I can run my car on an oval race track does not make it a race car.I've noticed that features from SCSI such as NCQ, FUA, DPO, etc (if I don't remember incorrectly) have been adopted from SCSI. But it is unclear how "much" of the SCSI command set is actually shared or similar.Mostly this amounts to low-end mimicry. NCQ is not the same thing as TCQ, for example. You will only get a hard drive with TCQ if it is a SAS device. Plug an NCQ-capable SATA drive into a SAS controller, and it doesn't suddenly gain TCQ capability. That said, a modern SATA device may well be much more capable than a SCSI device from a decade ago. It is certainly going to be capable of much higher levels of I/O. All of this is confusing and overlapping because that's the nature of the PC hardware world. There aren't clear lines because optical drive manufacturers — just to pick on one sub-industry — really don't want to build two entirely different drives, one speaking SAS to its highest expression, and the other speaking SATA. So, they compromise. They lobby in the committees defining such standards to create a single standard that lets them drop their SATA drive on a SAS bus, and everyone's mostly happy.Where can I find some clear information on this, and especially how it relates to the Linux kernel?Ultimately, you want to read the Linux sources. The libATA Developer's Guide should also be helpful. I'm not aware of any easy summary of how all this works. It wasn't designed to be easy. It was designed to accommodate three decades of hardware evolution, competing standards, and disparate goals. Further, it was designed without magical levels of foresight. In short, it's a mess. The only people who really have to know how the mess works are those building the OS kernels, those designing the hardware, and to a lesser extent, those writing the drivers for the OS kernels. For such a small cadre of highly capable people, standards and working code are sufficient. Today, Linux calls most rewritable mass-storage devices /dev/sd?. "SD" once stood for "SCSI disk," and existed merely to differentiate from /dev/hd? generically meaning "Hard Disk," but implying PATA in most cases. 
This distinction is another practical irrelevancy today. Now we have SSDs, USB thumb drives, virtual hard drives, iSCSI devices and more all called /dev/sd?. I suggest you start thinking of "SD" as short for "storage device," rather than worrying about whether the device speaks ATA over SATA, ATA over Ethernet, SCSI over USB, SCSI over ATAPI, SCSI over SAS, SCSI over IP (iSCSI), or what have you. The core problem is that naming schemes often outlast the reason behind the scheme. You see this in /dev/scd0. The device connected to that /dev node is more likely to be a DVD or Blu-Ray drive than a Compact Disc drive these days. The alternative — where you name each /dev node after the exact device type that's connected to it — has its own problems. Would it really be better if we named the /dev node after the low-level protocol it used? /dev/atapi0, /dev/sas0, etc? Or maybe you'd prefer /dev/atapibluray0 and such? What about multi-media drives? Does the same driver also need to expose /dev/atapicd0 in case you slide a Compact Disc into the Blu-Ray drive? That just replaces one confusing scheme with another. Linux's /dev/sd? abstraction is not perfect, but it is useful. For instance, you can learn the fact that /dev/sda is most likely the boot drive without bothering to worry about what cabling, interface protocol, and media are behind that name. If I tell you that a given Linux box has a single system drive, an optical drive, and sometimes has a USB thumb drive plugged into it, you can confidently guess that they are called /dev/sda, /dev/sdb and /dev/sdc, respectively.Footnotes:SCSI and ATA didn't start out sharing a parent standards organization. They both started out as proprietary hard disk controllers. SCSI evolved from Shugart Associates' SASI, and ATA/IDE came out of a much later design collaboration between Western Digital, Compaq and CDC. ANSI later standardized both, with ATA-1 following SCSI-1 about 8 years later. INCITS is a kind of sister organization to ANSI. INCITS publishes final standards through ANSI in the US, and ISO/IEC JTC 1 worldwide.The current standard is SAT-3, published in May 2015, with SAT-4 and SAT-5 in progress as I write this in mid-July 2018. The latter link takes you to drafts of the in-progress versions.I'm ignoring SATA port multipliers, SAS expanders, etc.Excepting the models made for compatibility with old parallel SCSI systems.
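If you want to see this layering on a live system, a small sketch (device names and output will differ): lsscsi shows the SCSI-level view the kernel presents, and smartmontools can be told explicitly to use SAT to tunnel ATA commands through the SCSI stack.

$ lsscsi -t                          # list devices with their transport (sata:, sas:, usb:, ...)
[0:0:0:0]    disk    sata:...    /dev/sda
$ sudo smartctl -i -d sat /dev/sda   # ATA IDENTIFY fetched via SCSI/ATA Translation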
This is nothing new to me at least, that SATA actually "talks" SCSI, hence why these SATA devices show up as SCSI devices in Linux. A related question has been asked before, e.g. Why do my SATA devices show up under /proc/scsi/scsi? However what fails to be mentioned where I've seen this discussed before is exactly in what sense SATA relates to SCSI, and how they differ. I assume it is taken for granted that they differ on the physical layer, as they do not share compatible cables. However what about higher up on the stack? I am aware of how Linux represents SATA and even IDE disks on modern kernels as just SCSI to the SCSI subsystem. But what about the actual protocol that is used on the bus? I also know that ATAPI is an encapsulation for SCSI, but what about regular ATA? I've noticed that features from SCSI such as NCQ, FUA, DPO, etc (if I don't remember incorrectly) have been adopted from SCSI. But it is unclear how "much" of the SCSI command set is actually shared or similar. Do modern SATA devices with their ATA specification implement a subset of the SCSI command set, but encapsulated (as in ATAPI)? An identical set? A superset? Or perhaps only selected features are implemented as variants that are not directly identical? Where can I find some clear information on this, and especially how it relates to the Linux kernel? Some kind of tutorial for driver development would be nice, but even just an overview that doesn't completely skip over all the details would suffice. I am aware I can just read the actual specification, but that is again much too detailed, hard to find what you're really looking for, and just not realistic for me and probably most other users in the temporal sense.
In what sense does SATA "talk" SCSI? How much is shared between SCSI and ATA?
I gave a try to lftp:

lftp -c "torrent $1"

where $1 is the .torrent file. Unlike lftp -e "torrent $1", lftp -c must exit when the command is done (lftp -e leaves you in its command prompt). It also does seeding. (I don't know yet how seeding interacts with -c.)

Seeding after the command finished

This is actually done by lftp -c: first, I started it. And the command finished after a while:

Name: lib.ru_2007-03-05.7z
dn:1.7G up:0 complete, ratio:0.000000
Seeding in background...
[15137] Moving to background to complete transfers...
$

Checking that it is still active (seeding) in the background:

$ ps x | fgrep lftp
15137 ?        Ss     0:37 lftp -c torrent lib.ru_2007-03-05.7z.4fb7e98d43804eca.torrent
67517 pts/3    S+     0:00 grep -F --color=auto lftp
$
I'm interested in a single command that would download the contents of a torrent (and perhaps participate as a seed following the download, until I stop it). Usually, there is a torrent-client daemon which should be started separately beforehand, and a client to control (like transmission-remote). But I'm looking for the simplicity of wget or curl: give one command, get the result after a while.
command-line tool for a single download of a torrent (like wget or curl)
Basically, it's because that was the tradition from way back when port numbers started being assigned, through until approximately 2011. See, for example, §7.1 "Past Principles" of RFC 6335:

"TCP and UDP ports were simultaneously assigned when either was requested."

It's possible they will be un-allocated someday, of course, as ports 1023 and below are the "system ports", treated specially by most operating systems, and most of that range is currently assigned. And, by the way, HTTP/3 runs over UDP. Though it can use any UDP port, not just 80/443. So really those are still unused.

As far as Debian is concerned, its /etc/services already had 22/udp in 1.0 (buzz, 1996). It was however removed in this commit in 2016, first released in version 5.4 of the netbase package. As of writing, the latest stable version of Debian (buster) has 5.6. And the latest Ubuntu LTS (18.04, bionic) netbase package is based on Debian netbase 5.4, and you can see its changelog also mentions the removal of udp/22.
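You can check what your own system's copy says without opening the file by hand; a small sketch (whether the 22/udp line is present depends on your netbase version, per the history above):

$ grep -w '22/udp' /etc/services     # present on older netbase, gone after the 2016 removal
$ getent services ssh                # how libc itself resolves the name (getservbyname)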
I'm reading a book on network programming with Go. One of the chapters deals with the /etc/services file. Something I noticed while exploring this file is that certain popular entries like HTTP and SSH, both of which use TCP at the transport layer, have a second entry for UDP. For example on Ubuntu 14.04:

ubuntu@vm1:~$ grep ssh /etc/services
ssh             22/tcp                          # SSH Remote Login Protocol
ssh             22/udp

ubuntu@vm1:~$ grep http /etc/services
http            80/tcp          www             # WorldWideWeb HTTP
http            80/udp                          # HyperText Transfer Protocol

Anyone know why these have two entries? I don't believe SSH or HTTP ever use UDP (confirmed by this question for SSH).
Why do popular TCP-using services have UDP as well as TCP entries in /etc/services?
Telnet is defined in RFC 854. What makes it (and anything else) a protocol is a set of rules/constraints. One such rule is that Telnet is done over TCP, and assigned port 23; this stuff might seem trivial, but it needs to be specified somewhere. You can't just send whatever you want; there are limitations and special meanings for some things. For example, it defines a "Network Virtual Terminal", because when telnet was established there could be many different terminals: a printer, a black/white monitor, a color monitor that supported ANSI codes, etc. Also, there's stuff like this (quoting the RFC):

"In summary, WILL XXX is sent, by either party, to indicate that party's desire (offer) to begin performing option XXX, DO XXX and DON'T XXX being its positive and negative acknowledgments; similarly, DO XXX is sent to indicate a desire (request) that the other party (i.e., the recipient of the DO) begin performing option XXX, WILL XXX and WON'T XXX being the positive and negative acknowledgments. Since the NVT is what is left when no options are enabled, the DON'T and WON'T responses are guaranteed to leave the connection in a state which both ends can handle. Thus, all hosts may implement their TELNET processes to be totally unaware of options that are not supported, simply returning a rejection to (i.e., refusing) any option request that cannot be understood."

In modern times, most of this stuff isn't really that important anymore (then again, telnet as a protocol isn't being used much anymore, not just because it lacks security), so in practice it boils down to send/echo unless you have to actually interface with terminals.
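To make the "rules" concrete: option negotiation travels in-band as byte sequences starting with IAC (byte 255). DO ECHO, for instance, is the three bytes 255 253 1 (IAC, DO, option 1 = ECHO, per RFC 857). A raw TCP chat client never emits these; a telnet implementation does. A tiny sketch that just shows those bytes:

$ printf '\377\375\001' | od -An -tu1    # IAC DO ECHO
 255 253   1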
This is more of a conceptual question; I need some clarification. Today I was learning some socket programming and wrote a simple chat server and chat client based on Beej's Guide to Network Programming (the chat server receives a client's message and sends it to all the other clients). I copied the chat server and wrote my own chat client. The chat client is just a program that sends stdin input to the server and prints socket data from the server. Later I noticed that the guide says I can just use telnet to connect to the server. I tried it and it worked. I was unfamiliar with telnet and for a long time didn't know what exactly it is. So now my experience confuses me: isn't telnet just a simple TCP send/echo program? What makes it so special as to be a protocol? My dumb chat client program doesn't create an [application] protocol. From Wikipedia, Communication_protocol:

"In telecommunication, a communication protocol is a system of rules that allow two or more entities of a communications system to transmit information via any kind of variation of a physical quantity."

What rules does Telnet create? "telnet host port" opens a TCP stream socket for raw input/output; that's not a rule.
Why is telnet considered a protocol? Isn't it just a simple TCP send/echo program?
By default sshd uses IPv4 and IPv6. You can configure the protocol sshd uses through the AddressFamily directive in /etc/ssh/sshd_config.

For IPv4 & IPv6 (default):
AddressFamily any

For IPv4 only:
AddressFamily inet

For IPv6 only:
AddressFamily inet6

After you make any changes to sshd_config, restart sshd for the changes to take effect.
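To verify the result after restarting (a sketch; with AddressFamily inet only the 0.0.0.0 listener should remain, and the tcp6 line disappears):

$ sudo service ssh restart            # or: sudo systemctl restart sshd on systemd systems
$ netstat -nat | grep ':22 '
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN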
$ netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN

Why are there two records for port 22 (:::22 and 0.0.0.0:22), and why does one show the protocol as tcp and the other as tcp6? This is on Ubuntu 12.04.4.
Why does SSH show protocol as tcp6 *and* tcp in netstat?
That server is clearly running a partial or broken implementation of WebDAV. Note that you need to connect to a URL like https://public.me.com/ix/rudchenko, not the normal URL https://public.me.com/rudchenko. I tried several clients:

With a normal HTTP downloader such as wget or curl, I could download a file knowing its name (e.g. wget https://public.me.com/ix/rudchenko/directory/filename), but was not able to obtain a directory listing.

FuseDAV, which would have been my first choice, is unable to cope with some missing commands. It apparently manages to list the root directory (visible in the output from fusedav -D) but eventually runs some request that returns "PROPFIND failed: 404 Not Found" and locks up.

Nd lacks a list command.

Cadaver works well, but lacks a recursive retrieval command. You could use it to obtain listings, then retrieve individual files as above. It's not perfect, and there is a problem specifically in this case: cadaver's mget fails to treat arguments with wildcards that expand to filenames with spaces.

Davfs2 works very well. I could mount that share and copy files from it. The only downside is that this is not a FUSE filesystem; you need root to mount it, or an entry in /etc/fstab.

The FUSE-based wdfs-1.4.2-alt0.M51.1 worked very well in this case, requiring no root (only permissions for /dev/fuse):

mkdir viewRemote
wdfs https://public.me.com/ix/rudchenko/ viewRemote
rsync -a viewRemote/SEM*TO\ PRINT* ./
fusermount -u viewRemote
rmdir viewRemote

(Of course, a simple cp instead of rsync would work well in this example; rsync was chosen merely for extra diagnostics about the difference when we would update the copy.) (Apart from wdfs, I tried these commands on a Debian squeeze system. Your mileage may vary.)
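For the davfs2 route, the mount can be delegated to ordinary users via /etc/fstab; a sketch along these lines (the mount point is an assumption, and on Debian-like systems the user typically also has to be in the davfs2 group):

# /etc/fstab
https://public.me.com/ix/rudchenko/  /mnt/rudchenko  davfs  ro,user,noauto  0  0

$ mount /mnt/rudchenko    # allowed as non-root because of the 'user' option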
How can I copy a folder from http://public.me.com/ (a service related to iDisk, or MobileMe) to my local filesystem with a Unix tool (like wget, a command-line non-interactive tool)? The problem is that the web interface is actually a complex Javascript-based thing rather than simply exposing the files. (Even w3m can't browse, e.g., https://public.me.com/rudchenko.) My goal is to update the local copy from time to time non-interactively, and to put the command to download the files to a script, so that other people can run the script and download the files. A wget-like (rsync-like, git pull-like) tool will suit me, or a combination of mounting a network filesystem via FUSE and then using standard Unix commands to copy the directories will do. I've read in the Wikipedia articles (which I refer to above) that Apple provides WebDAV access to these services, and I've also read about cadaver, a wget-like WebDAV client, but I can't figure out which address I should use to access the folders at http://public.me.com/ read-only (anonymously). Perhaps Gilles' comment (that WebDAV isn't currently used) is true, but still there seems to be some WebDAV stuff behind the scene: the URL passed to the browser for downloading an archive with a directory (after pressing the "download selected files" button at the top of the web interface) looks like this: https://public.me.com/ix/rudchenko/SEM%20Sep21%201%20TO%20PRINT.zip?webdav-method=ZIPGET&token=1g3s18hn-363p-13fryl0a20-17ial2zeu00&disposition=download-- note that it mentions "WebDAV". (If you are curious, I tried to re-use this URL as an argument for wget, but it failed: $ LC_ALL=C wget 'https://public.me.com/ix/rudchenko/SEM%20Sep21%201%20TO%20PRINT.zip?webdav-method=ZIPGET&token=1g3s18hn-363p-13fryl0a20-17ial2zeu00&disposition=download' --2011-11-21 01:21:48-- https://public.me.com/ix/rudchenko/SEM%20Sep21%201%20TO%20PRINT.zip?webdav-method=ZIPGET&token=1g3s18hn-363p-13fryl0a20-17ial2zeu00&disposition=download Resolving public.me.com... 23.32.106.105 Connecting to public.me.com|23.32.106.105|:443... connected. HTTP request sent, awaiting response... 404 Not Found 2011-11-21 01:21:48 ERROR 404: Not Found. $ ) (I'm using a GNU/Linux system.)
How to copy someone else's folders from public.me.com with a wget-like tool?
From RFC 5424 (which lays down the syslog protocol and refers to RFC 3339 for timestamps), "1. Introduction":

"This document describes the standard format for syslog messages and outlines the concept of transport mappings. It also describes structured data elements, which can be used to transmit easily parseable, structured information, and allows for vendor extensions. This document does not describe any storage format for syslog messages. It is beyond of the scope of the syslog protocol and is unnecessary for system interoperability."

A message here refers to what is to be logged, and NOT the format of the logging. Put another way: the log is not the message, and the RFC is about the message, not the log. The stuff you see in /var/log/syslog is the "stored format" messages. That format is determined by how you have configured your particular syslog, and as the preamble states, there is no real necessity for any protocol there, at least as far as "system interoperability" goes. Syslog daemons can serve as loggers for multiple systems. The RFC is intended to set a standard such that compliant systems can log to a remote syslog, regardless of which particular implementation is in use, etc. The syslog daemon receiving such messages will then write them to a file, but it doesn't write them verbatim -- it writes them in accordance with its configuration. If you look at the RFC further, you will notice there are many, many ways in which /var/log/syslog does not comply. Take a look at the ABNF at the beginning of section 6 -- this does not simply describe a line in a log file (notice the timestamp is not nearly the first item!). This is a structured format for serializing messages for transmission.
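To make the distinction concrete, compare a wire-format RFC 5424 message (adapted from the RFC's own example) with the way a typical daemon then stores it; the stored line is illustrative and depends entirely on your syslog configuration:

# on the wire (RFC 5424):
<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed for lonvick on /dev/pts/8

# as stored by a traditional file template in /var/log/syslog or /var/log/auth.log:
Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8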
Why does the Linux syslog file /var/log/syslog not follow the timestamp format defined in the protocol (https://www.rfc-editor.org/rfc/rfc5424#page-11)?
Why does the Linux syslog file not follow the RFC 3339 timestamp format?
One reason he may have said this is that if you look at the traffic that flows back and forth between a client and a server, it's fairly verbose. This doesn't present an issue when the traffic only has to go locally on a single box between the two, but when the traffic needs to go over a network connection it becomes painfully obvious that it's an inefficient protocol. The protocol is tolerable on a LAN, but as soon as you try to span it over a WAN connection, or introduce encryption in the form of a VPN or by using an SSH connection as a link between the client and the server, the protocol really starts to show its lack of scalability.

Benchmarking

You can use the tool x11perf to get a sense of the impact of running the applications localhosted vs. running them over an SSH connection to another X system. Here I'm running the -create test to give you a taste of what I'm talking about.

localhost

$ x11perf -create
x11perf - X11 performance program, version 1.2
Fedora Project server version 10905000 on :0.0 from grinchy
Mon Sep 16 21:08:28 2013
Sync time adjustment is 0.1340 msecs.
 2400 reps @ 0.0134 msec ( 74400.0/sec): Create and map subwindows (4 kids)
 2400 reps @ 0.0156 msec ( 64300.0/sec): Create and map subwindows (4 kids)
....
 2400 reps @ 0.0119 msec ( 83800.0/sec): Create and map subwindows (100 kids)
12000 trep @ 0.0063 msec (158000.0/sec): Create and map subwindows (100 kids)
....
 2400 reps @ 0.0029 msec (349000.0/sec): Create and map subwindows (200 kids)
12000 trep @ 0.0049 msec (205000.0/sec): Create and map subwindows (200 kids)

LAN host

$ ssh skinner "x11perf -create"
....
Sync time adjustment is 1.5461 msecs.
 2400 reps @ 0.0270 msec ( 37100.0/sec): Create and map subwindows (4 kids)
 2400 reps @ 0.0219 msec ( 45700.0/sec): Create and map subwindows (4 kids)
....
 2400 reps @ 0.0168 msec ( 59600.0/sec): Create and map subwindows (100 kids)
12000 trep @ 0.0211 msec ( 47300.0/sec): Create and map subwindows (100 kids)
....
 2400 reps @ 0.0159 msec ( 62900.0/sec): Create and map subwindows (200 kids)
12000 trep @ 0.0196 msec ( 50900.0/sec): Create and map subwindows (200 kids)

WAN host

$ ssh catbus-o "x11perf -create"
....
Mon Sep 16 21:12:22 2013
Sync time adjustment is 27.9911 msecs.
 2400 reps @ 0.0592 msec ( 16900.0/sec): Create and map subwindows (4 kids)
 2400 reps @ 0.0604 msec ( 16600.0/sec): Create and map subwindows (4 kids)
....
 2400 reps @ 0.0538 msec ( 18600.0/sec): Create and map subwindows (100 kids)
12000 trep @ 0.0558 msec ( 17900.0/sec): Create and map subwindows (100 kids)
....
 2400 reps @ 0.0697 msec ( 14400.0/sec): Create and map subwindows (200 kids)
12000 trep @ 0.0586 msec ( 17100.0/sec): Create and map subwindows (200 kids)

Notice the extreme drop off from:

localhost: 12000 trep @ 0.0049 msec (205000.0/sec): Create and map subwindows (200 kids)
LAN host:  12000 trep @ 0.0196 msec ( 50900.0/sec): Create and map subwindows (200 kids)
WAN host:  12000 trep @ 0.0586 msec ( 17100.0/sec): Create and map subwindows (200 kids)

That's a pretty steep decline in performance. Now realize that this isn't all X's fault. It is going over a 100MB network in the LAN test, and a ~20MB connection for the WAN test, but the point is still the same. X isn't helping itself with the overly beefy communications it throws back and forth between the X server and the X client.
Communications Breakdown (couldn't resist the Led Zeppelin reference)

This is more for effect, but just to give you an idea of the amount of data roughly flowing back and forth during the x11perf -create test I used above, I decided to run it on my LAN host again, only this time I used tcpdump to capture the SSH traffic and dump it to a file. I used this command:

$ sudo -i
$ tcpdump -lnni wlan0 -w dump.log -s 65535 host skinner and port ssh

The resulting log file:

$ ll dump.log
-rw-r--r-- 1 root root 5768821 Sep 16 22:30 dump.log

So the resulting amount of traffic was in the ballpark of ~5.5MB. Granted this is not all X traffic, but it gives you an idea of the amount of data flowing. This is really the Achilles' heel of X, and the major reason why it can't scale.
One of my professors was telling us about scalability problems and said that the X protocol was a prime example of a protocol that doesn't scale. Why is that? Is it because it is very hardware dependent? I know that X is used in modern Unix/Linux environments; if it's not scalable, then why is it used so widely?
Does the X windowing system suffer from scalability problems?
There are protocols like AoE (ATA over Ethernet) that allow communication without IP. The problem is that such protocols aren't that common. In fact, I can't think of any others in use at the moment, except for dinosaurs such as the old file-sharing protocols of yore like Banyan Vines, DECnet, etc. There's a reason why IP took over, after all: its overhead doesn't mean much to our hardware anymore, and it adds flexibility.
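As a sketch of what IP-less block storage over raw Ethernet looks like with AoE (the interface and image path are assumptions; vblade exports a block device, the aoe kernel module imports it):

# on the exporting host: shelf 0, slot 1, over eth0
$ sudo vblade 0 1 eth0 /srv/aoe/disk.img

# on the consuming host
$ sudo modprobe aoe
$ ls /dev/etherd/          # the export should appear as e0.1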
I've read a little about Internet protocols and gathered that on a local area network there is no need to use the IP protocol, though it's normally used. Is it possible to turn off the IP protocol in Linux and use only MAC (Ethernet) addresses for frame delivery? How would you do it? I guess there would be a problem with TCP; I'm not sure it can work on top of the LLC layer. Or is the IP protocol's overhead so small that it's worth using on LANs (with hubs) too?
Local area network without using the IP protocol in Linux
The file is documented in man 5 protocols:This file is a plain ASCII file, describing the various DARPA internet protocols that are available from the TCP/IP subsystem. It should be consulted instead of using the numbers in the ARPA include files, or, even worse, just guessing them. These numbers will occur in the protocol field of any IP header.It’s a list of protocols, not tied to protocols actually supported on your system. It’s the local equivalent of the IANA’s list of protocol numbers. It can be interrogated using the getprotobyname and getprotobynumber functions. It is typically used to provide a name for a protocol seen in use, or to determine the protocol number for a user-specified protocol name; see for example this use in the Unbound DNS resolver. It shouldn’t be modified:Keep this file untouched since changes would result in incorrect IP packages. Protocol numbers and names are specified by the IANA (Internet Assigned Numbers Authority).You would only need to change it if you were implementing a new protocol over IP — not a new protocol over TCP/UDP (which are listed in /etc/services): something like SCTP, not HTTP. If you were doing that then you might want to modify /etc/protocols temporarily; but before publication you’d request a new assignment from the IANA (which is quite straightforward), and then your protocol would be added to the IANA’s list and would eventually make its way into /etc/protocols updates.
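If you want to query the file the same way the C functions do, getent uses those getprotoby* lookups under the hood, so a quick check from the shell looks like this (output format varies slightly between systems): $ getent protocols ospf # look up by name $ getent protocols 89 # look up by number; both should print the ospf entry That is usually all the interaction this file ever needs.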
What is the use case and usage of /etc/protocols? I can see that it lists the available protocols, but what is the significance? For example, my Linux machine is not running OSPF, yet I see OSPF in /etc/protocols. What does that mean? What is the significance of that file? Do we edit it?
What is the significance of /etc/protocols in Linux?
In older NFS versions (v2 and v3) there are two distinct RPC services handled by separate software: the "MOUNT" protocol that's used only to obtain the initial filesystem handles from rpc.mountd, and the "NFS" protocol that's used for everything else. So the mountproto= option defines the transport used to access rpc.mountd whenever you mount an NFSv3 filesystem. It has no effect on performance, only compatibility (older mountd did not support TCP). NFSv4 doesn't have mountproto= anymore because the mount-related operations have been integrated into the core protocol. Meanwhile proto= defines the protocol used to transfer file data (the actual NFS operations.) NFSv3 supports UDP, so yes, proto=udp,vers=3 is possible, but keep in mind the caution message in the manual page – NFS via UDP over a Gigabit or faster connection risks data corruption, and the faster your connection is, the higher risk of corruption. NFSv4 supports TCP and RDMA only – it doesn't support UDP anymore.
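For illustration, the corresponding mount options look roughly like this (server and export names are made up): # NFSv3 over TCP, but contacting rpc.mountd over UDP (proto=tcp,mountproto=udp) mount -t nfs -o vers=3,proto=tcp,mountproto=udp server:/export /mnt # NFSv3 entirely over UDP -- possible, but note the corruption caveat above mount -t nfs -o vers=3,proto=udp,mountproto=udp server:/export /mnt # NFSv4.x: no separate MOUNT protocol, so no mountproto option at all mount -t nfs -o vers=4.2,proto=tcp server:/export /mnt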
In RHEL 8.8, while testing NFS TCP versus UDP and versions 3 versus 4.0, 4.1 and 4.2, I observe on my NFS client a mountproto= option in addition to proto= when typing mount. What is the significance of this and what does it mean? Should I be able, in RHEL 8.8, to have NFS operating with specifically vers=3 and proto=udp shown on the NFS client? What does it mean, on the NFS client, when I see proto=tcp and mountproto=udp?
Difference between NFS proto and mountproto
Where do application layer protocols reside? Protocols are an abstraction, so they don't really "reside" anywhere beyond specifications and other documentation. If you mean where they are implemented, there are a few common patterns: They may be implemented first in native C as libraries which can be wrapped for use in other languages (since most other languages are themselves implemented in C and have a C interface). E.g., encryption protocols are generally like this. They may be implemented from scratch as libraries or modules for use in a specific language, using just that language (and/or the language it is implemented in). E.g., high level networking protocols. They may be implemented from scratch by a given application. These are all pure userland implementations, but some protocols -- e.g., low level networking -- may be implemented in the kernel. This may include a corresponding native C userland library (as with networking and filesystems), or the kernel (including independent kernel modules) may provide a language agnostic interface via procfs, /dev, etc.
Where do application layer protocols reside? Are they part of library routines of language e.g. C, C++, Java? As goldilocks says in his answer, this is about the implementation of application layer protocols.
Are application layer protocols part of library routines?
The list you're looking for is most probably at http://www.oid-info.com/ Yes, this is some kind of standard: OIDs are objects in the MIB; the global root MIB was defined in RFC 1155. It has since been extended, and the SNMP protocol itself is defined in RFC 1157.
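If you have the net-snmp tools and MIB files installed, you can also decode an OID locally instead of looking it up online; for example, for one of the load-average OIDs from that article (a sketch; the exact output depends on which MIB packages are present): $ snmptranslate -m ALL -Td .1.3.6.1.4.1.2021.10.1.3.1 Here -Td prints the OID's description from the MIB, and -m ALL loads every MIB file found on the system.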
I was looking at this link here: http://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html and noticed that the OIDs are the same ones I see for the same stats for our appliance. Is this some kind of standard with SNMP maybe an RFC or something? Does anyone know where I can find the list that tells me what each OID describes?
Where do I find the OID descriptions for SNMPv2 in Linux?
Sure you can use lsof to see what activity is currently taking place on the server. Here's what the output would look like for an idle connection to an SFTP server. $ sudo /usr/sbin/lsof -p $(pgrep sftp) COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sftp-serv 30268 sam cwd DIR 0,19 20480 28312529 /home/sam (mulder:/export/raid1/home/sam) sftp-serv 30268 sam rtd DIR 253,0 4096 2 / sftp-serv 30268 sam txt REG 253,0 51496 48727430 /usr/libexec/openssh/sftp-server sftp-serv 30268 sam mem REG 253,0 109740 46368404 /lib/libnsl-2.5.so sftp-serv 30268 sam mem REG 253,0 613716 48382913 /usr/lib/libkrb5.so.3.3 sftp-serv 30268 sam mem REG 253,0 1205988 48387619 /usr/lib/libnss3.so sftp-serv 30268 sam mem REG 253,0 33968 48377969 /usr/lib/libkrb5support.so.0.1 sftp-serv 30268 sam mem REG 253,0 15556 48387614 /usr/lib/libplc4.so sftp-serv 30268 sam mem REG 253,0 11524 48387615 /usr/lib/libplds4.so sftp-serv 30268 sam mem REG 253,0 190712 48383685 /usr/lib/libgssapi_krb5.so.2.2 sftp-serv 30268 sam mem REG 253,0 1706232 46368382 /lib/libc-2.5.so sftp-serv 30268 sam mem REG 253,0 50848 46367899 /lib/libnss_files-2.5.so sftp-serv 30268 sam mem REG 253,0 46624 46367905 /lib/libnss_nis-2.5.so sftp-serv 30268 sam mem REG 253,0 1298276 46368392 /lib/libcrypto.so.0.9.8e sftp-serv 30268 sam mem REG 253,0 232156 48387613 /usr/lib/libnspr4.so sftp-serv 30268 sam mem REG 253,0 45432 46368394 /lib/libcrypt-2.5.so sftp-serv 30268 sam mem REG 253,0 121324 48387616 /usr/lib/libnssutil3.so sftp-serv 30268 sam mem REG 253,0 75088 46368385 /lib/libz.so.1.2.3 sftp-serv 30268 sam mem REG 253,0 137944 46368395 /lib/libpthread-2.5.so sftp-serv 30268 sam mem REG 253,0 15308 46368401 /lib/libutil-2.5.so sftp-serv 30268 sam mem REG 253,0 20668 46368384 /lib/libdl-2.5.so sftp-serv 30268 sam mem REG 253,0 130860 46368381 /lib/ld-2.5.so sftp-serv 30268 sam mem REG 253,0 157336 48382170 /usr/lib/libk5crypto.so.3.1 sftp-serv 30268 sam mem REG 253,0 93508 46368390 /lib/libselinux.so.1 sftp-serv 30268 sam mem REG 253,0 233296 46368389 /lib/libsepol.so.1 sftp-serv 30268 sam mem REG 253,0 7812 46368391 /lib/libcom_err.so.2.1 sftp-serv 30268 sam mem REG 253,0 84904 46368388 /lib/libresolv-2.5.so sftp-serv 30268 sam mem REG 253,0 7880 46368387 /lib/libkeyutils-1.2.so sftp-serv 30268 sam 0u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 1u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 2u unix 0xd8077580 0t0 104100870 socket sftp-serv 30268 sam 3u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 4u unix 0xcb014040 0t0 104100868 socketNow when some files are currently being copied from the server: $ sudo /usr/sbin/lsof -p $(pgrep sftp) COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sftp-serv 30268 sam cwd DIR 0,19 20480 28312529 /home/sam (mulder:/export/raid1/home/sam) ... sftp-serv 30268 sam 5r REG 0,19 3955027 9257067 /home/sam/which witch is wich-dDSr2oxZeAM.mp3 (mulder:/export/raid1/home/sam)The line that shows me copying the file which witch is wich-dDSr2oxZeAM.mp3 is at the bottom of the output. When I use SFTP to put a file it shows up like this: sftp-serv 30268 sam 5r REG 0,19 1933312 9257073 /home/sam/bob.mp3 (mulder:/export/raid1/home/sam)See a difference? Me neither, so this method can only tell you whether a file is currently being accessed via a put or get but it cannot distinguish between the two. However this will tell you if the connection is "active" in the sense if there's a file being read/written from/to the SFTP server. 
Watching the daemon I typically use this method when I want to watch the SFTP server. $ sudo watch "/usr/sbin/lsof -p $(pgrep sftp)"This will run the lsof command every 2 seconds, "polling" it for any activity. Multiple connections If you have more than 1 user connecting at a time, you may need to modify the $(pgrep sftp) and pick a specific PID, if there are multiple sftp-server instances. Also you'll have to identify which user is accessing the files via SFTP. For that though, you can look at the "USER" column in the lsof output.
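As a sketch of that, lsof can do the per-user filtering itself; the -a flag ANDs the selection criteria together ("alice" is a made-up username): $ sudo lsof -a -u alice -c sftp-server Combine it with watch as above if you want it refreshed continuously.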
I have a Linux machine (Red Hat 5.x). Please advise: with what command can I identify whether someone is trying to copy files from my machine via SFTP or FTP? Is it possible to verify this on my Linux machine? Thanks
How to know on my Linux machine if a connection via SFTP is active
The output of ssh-agent -s is some environment variable assignments, something like SSH_AUTH_SOCK=blahblah; export SSH_AUTH_SOCK etc. When you run eval $(ssh-agent -s), the shell executes that as code, and those variables get set in that shell. The variables there contain the information ssh-add needs to contact the agent, and they get inherited down from the shell to the ssh-add process. But here, you're running it from inside hello.sh. The shell running the script is an independent process, distinct from the upper interactive shell that started hello.sh, and the variables don't get inherited "upwards". Instead, if you source the script, with source hello.sh, or . hello.sh, it runs in the same shell, and the variables get assigned properly. Though, if you're running multiple shells (multiple terminal emulators, SSH sessions, screen/tmux windows, whatever), you really only need one ssh-agent. You'll have to save the variable assignments to a file somewhere, and load them from e.g. .bashrc. But I don't know what exactly you're doing.
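One common pattern for that (just a sketch; the file name and location are arbitrary) is to cache the agent's environment and re-check it from ~/.bashrc: # reuse a previously started agent if its socket still answers, # otherwise start a new one and save its environment for later shells if [ -f ~/.ssh/agent.env ]; then . ~/.ssh/agent.env >/dev/null fi ssh-add -l >/dev/null 2>&1 if [ "$?" -eq 2 ]; then # exit status 2 means "could not contact the agent" ssh-agent -s > ~/.ssh/agent.env . ~/.ssh/agent.env >/dev/null fi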
I know that ssh-add is a "front-end" to ssh-agent. But ssh-agent is already running on my computer (I can find it in top). When I type ssh-add, it says "Could not open a connection to your authentication agent". How does ssh-add communicate with ssh-agent, in detail? My situation is #! /bin/sh # hello.sh eval $(ssh-agent -s) ssh-add Then, bash ./hello.sh When I type ssh-add some-new-key in the terminal, it outputs that error message.
ssh-add not able to connect to ssh-agent
A UART (Universal Asynchronous Receiver Transmitter) is not a protocol; it's a piece of hardware capable of receiving and transmitting data over a serial interface. I presume you are selecting some design block that implements a UART for your FPGA design.
When I write programs for my own FPGA, I must select UART in my FPGA design to emulate a terminal, but I don't know exactly what that means. I believe that UART is a basic serial transmission protocol, isn't it? And is that the protocol between the program and the terminal, and is that why I must choose UART from my programming environment?
What is the relation between UART and the tty?
The Linux TCP stack and conntrack have two different views of the TCP connection. What you're seeing in /proc/net/ip_conntrack is different from what the kernel's TCP stack sees. The kernel state is stored in /proc/net/tcp and /proc/net/tcp6 and can be displayed with netstat. As seen here: https://serverfault.com/questions/313061/netstat-and-ip-conntrack-connection-count-differ-by-order-of-magnitude-why the two counts differ. I presume that if you look at netstat's output you will only see one end in TIME-WAIT.
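To compare the two views yourself on the same host, you can put the kernel's own TCP table next to conntrack's; something like this (ss is the modern replacement for netstat): $ ss -tn state time-wait # kernel TCP view $ sudo grep TIME_WAIT /proc/net/ip_conntrack # conntrack view Only the kernel view reflects the actual TCP state machine of that end of the connection.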
I am reading how the TCP states work and especially the connection termination part. All of the books or online material I read, shows that for the termination procedure these states are followed from the side initiated (active) the connection termination: ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, TIME-WAIT, CLOSED And these from the receiving (passive) side: ESTABLISHED, CLOSE-WAIT, LAST-ACK, CLOSED Now here comes the question: I have modprobed the nf_conntrack_ipv4 module to both sides to check the connection states in /proc/net/ip_conntrack. To my surprise, when the connection is terminated, both the initiator (active) and the receiver (passive) goes to the TIME-WAIT state. I would expect only the initiator to go through this state, and the receiver just close the connection. Can someone explain why this is happening? UPDATE: How do I perform this test I have a virtual machine with IP 10.0.0.1 (Ubuntu 12.04) and I start two ssh connections to 10.0.0.2 (Debian 6) from it (10.0.0.2 is a VM as well). I check the ip_conntrack of both ends and this is what I get. root@machine1:~# cat /proc/net/ip_conntrack | grep 10.0.0.1 tcp 6 431997 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53925 dport=22 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53925 [ASSURED] mark=0 use=2 tcp 6 431944 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53924 dport=22 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53924 [ASSURED] mark=0 use=2root@machine2:~# cat /proc/net/ip_conntrack | grep 10.0.0.1 tcp 6 432000 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53925 dport=22 packets=206 bytes=19191 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53925 packets=130 bytes=18177 [ASSURED] mark=0 secmark=0 use=2 tcp 6 431947 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53924 dport=22 packets=16 bytes=4031 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53924 packets=17 bytes=3741 [ASSURED] mark=0 secmark=0 use=2So far everything looks fine. Now I disconnect one of the ssh connections from machine2 and this is what I get: root@machine1:~# cat /proc/net/ip_conntrack | grep 10.0.0.1 tcp 6 431989 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53925 dport=22 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53925 [ASSURED] mark=0 use=2 tcp 6 117 TIME_WAIT src=10.0.0.1 dst=10.0.0.2 sport=53924 dport=22 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53924 [ASSURED] mark=0 use=2root@machine2:~# cat /proc/net/ip_conntrack | grep 10.0.0.1 tcp 6 432000 ESTABLISHED src=10.0.0.1 dst=10.0.0.2 sport=53925 dport=22 packets=211 bytes=19547 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53925 packets=133 bytes=18925 [ASSURED] mark=0 secmark=0 use=2 tcp 6 115 TIME_WAIT src=10.0.0.1 dst=10.0.0.2 sport=53924 dport=22 packets=31 bytes=5147 src=10.0.0.2 dst=10.0.0.1 sport=22 dport=53924 packets=25 bytes=4589 [ASSURED] mark=0 secmark=0 use=2
Why is the TCP TIME-WAIT state present at both ends after a connection termination?
The kernel does not decide the bInterfaceProtocol. The value is received from the connected USB device. A variety of protocols are supported by HID devices. The bInterfaceProtocol member of an Interface descriptor only has meaning if the bInterfaceSubClass member declares that the device supports a boot interface, otherwise it is 0. Check the USB Device Class Definition for HID 1.11 for more information.
Let's say: $ ls -l /dev/input/by-id lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-if01 -> ../event22 lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-kbd -> ../event21 $ ls -l /dev/input/by-path/ lrwxrwxrwx 1 root root 10 Feb 10 03:47 pci-0000:00:14.0-usb-0:1.1:1.0-event-kbd -> ../event21 lrwxrwxrwx 1 root root 10 Feb 10 03:47 pci-0000:00:14.0-usb-0:1.1:1.1-event -> ../event22I know Interface number 1 (event22) above is non-functional because of bInterfaceProtocol is None for bInterfaceNumber 1: $ sudo lsusb -v -d 046d:c31cBus 002 Device 005: ID 046d:c31c Logitech, Inc. Keyboard K120 Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc31c Keyboard K120 bcdDevice 64.00 iManufacturer 1 Logitech iProduct 2 USB Keyboard iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 3 U64.00_B0001 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 90mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 2 USB Keyboard HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 65 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 2 USB Keyboard HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 159 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 255 Device Status: 0x0000 (Bus Powered) $ I don't get it and raise up two possibility questions:If value of bInterfaceProtocol always None independent of Host, then what's the point of this unused Interface exists ? If value of bInterfaceProtocol decided by Kernel, then what's the condition did Kernel take to set it to None ?
Is the value of bInterfaceProtocol fixed or decided by the kernel?
The above iptables config will only let TCP and UDP packets get past the firewall (unless they came from loopback). The default rule of the INPUT chain has been set to DROP, meaning that every packet that isn't explicitly ACCEPTed will be discarded. There should be no weird packets from loopback, so only TCP/UDP packets are allowed in. There is one major thing about protocols which should be cleared up: Network communication happens on many (actually seven) layers and each layer has its own set of protocols. E.g., there are fundamental differences between the purpose of transport layer protocols (like TCP and UDP) and application layer protocols (like SMB). The scope of iptables is limited to the transport layer and below. Analysing packets for their application layer protocols requires deep packet inspection and is computationally much more expensive. One should also be careful not to confuse protocols and service names. Popular services have been assigned to specific ports. FTP services are typically available on port 21, while a web server will listen on port 80. This implies that the protocol in use will usually be FTP for traffic on port 21 and HTTP on port 80. However, traffic on any such port is in no way required to use the protocol associated with that service. Traffic on port 80 might as well be SSH or complete gibberish.
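To make that concrete, a minimal default-deny ruleset for a plain web server could look roughly like this (a sketch only; it omits SSH and anything else you may need, so adapt it before applying or you will lock yourself out): iptables -P INPUT DROP iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT iptables -A INPUT -p tcp --dport 80 -j ACCEPT # HTTP iptables -A INPUT -p tcp --dport 443 -j ACCEPT # HTTPS Everything not matched by an ACCEPT rule falls through to the DROP policy, which is exactly the "drop all other protocols" behaviour you are asking for.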
I have the desktop-server Debian Jessie machine running for testing purposes just for 19 hours now. I have already set a few rules as you can see above. But I am not really into networking. So it needs some revision. Here is my iptables -L -v: Chain INPUT (policy DROP 1429 packets, 233K bytes) pkts bytes target prot opt in out source destination 1360 61482 DROP all -- any any anywhere anywhere ctstate INVALID 25079 2528K DROP icmp -- any any anywhere anywhere 15 480 DROP igmp -- any any anywhere anywhere 14353 7379K ACCEPT all -- lo any anywhere anywhere 5848K 1157M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED 1632 86441 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:8333 9 472 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:33211 13801 804K ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:56874 58386 5659K ACCEPT udp -- eth0 any anywhere anywhere udp dpt:56874 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:63547 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:httpsHow can I drop all other incoming protocols than the ones I need, like HTTP(S), as the machine will serve mainly as a web server? When I run Etherape I see lots of protocols trying to connect through (or may have already penetrated) my firewall.
iptables - how to drop protocols [closed]
All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso. cat /dev/sr0 >image.iso cat </dev/sr0 >image.iso tee </dev/sr0 >image.iso dd </dev/sr0 >image.iso dd if=/dev/cdrom of=image.iso pv </dev/sr0 >image.iso cp /dev/sr0 image.iso tail -c +1 /dev/sr0 >image.isoWhy would you use one over the other?Simplicity. For example, if you already know cat or cp, you don't need to learn yet another command. Robustness. This one is a bit of a variant of simplicity. How much risk is there that changing the command is going to change what it does? Let's see a few examples:Anything with redirection: you might accidentally put a redirection the wrong way round, or forget it. Since the destination is supposed to be a non-existing file, set -o noclobber should ensure that you don't overwrite anything; however you might overwrite a device if you accidentally write >/dev/sda (for a CD, which is read-only, there's no risk, of course). This speaks in favor of cat /dev/sr0 >image.iso (hard to get wrong in a damaging way) over alternatives such as tee </dev/sr0 >image.iso (if you invert the redirections or forget the input one, tee will write to /dev/sr0). cat: you might accidentally concatenate two files. That leaves the data easily salvageable. dd: i and o are close on the keyboard, and somewhat unusual. There's no equivalent of noclobber, of= will happily overwrite anything. The redirection syntax is less error-prone. cp: if you accidentally swap the source and the target, the device will be overwritten (again, assuming a non read-only device). If cp is invoked with some options such as -R or -a which some people add via an alias, it will copy the device node rather than the device content.Additional functionality. The one tool here that has useful additional functionality is pv, with its powerful reporting options. But here you can check how much has been copied by looking at the size of the output file anyway. Performance. This is an I/O-bound process; the main influence in performance is the buffer size: the tool reads a chunk from the source, writes the chunk to the destination, repeats. If the chunk is too small, the computer spends its time switching between tasks. If the chunk is too large, the read and write operations can't be parallelized. The optimal chunk size on a PC is typically around a few megabytes but this is obviously very dependent on the OS, on the hardware, and on what else the computer is doing. I made benchmarks for hard disk to hard disk copies a while ago, on Linux, which showed that for copies within the same disk, dd with a large buffer size has the advantage, but for cross-disk copies, cat won over any dd buffer size.There are a few reasons why you find dd mentioned so often. Apart from performance, they aren't particularly good reasons.In very old Unix systems, some text processing tools couldn't cope with binary data (they used null-terminated strings internally, so they tended to have problems with null bytes; some tools also assumed that characters used only 7 bits and didn't process 8-bit character sets properly). I'm not sure if this ever was a problem with cat (it was with more line-oriented tools such as head, sed, etc.), but people tended to avoid it on binary data because of its association with text processing. This is not a problem on modern systems such as Linux, OSX, *BSD, or anything that's POSIX-compliant. 
There's a sort of myth that dd is somewhat “lower level” than other tools such as cat and accesses devices directly. This is completely false: dd and cat and tee and the others all read bytes from their input and write the bytes to their output. The real magic is in /dev/sr0. dd has an unusual command line syntax, so explaining how it works gives more of an opportunity to shine than just writing cat /dev/sr0 does. Using dd with a large buffer size can have better performance, but it is not always the case (see some benchmarks on Linux). A major risk with dd is that it can silently skip some data. I think dd is safe as long as skip or count are not passed but I'm not sure whether this is the case on all platforms. But it has no advantage except for performance. So just use pv if you want its fancy progress report, or cat if you don't.
Background I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive. I'm looking on the Net for procedures and I found a lot:Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html cat /dev/sr0 > image.isoUse of dd to do so (apparently the most widely used): http://www.linuxjournal.com/content/archiving-cds-iso-commandline dd if=/dev/cdrom bs=blocksize count=count of=/path/to/isoimage.isoUse of just pv to accomplish this: See man pv for more information, although here's an excerpt of it: Taking an image of a disk, skipping errors: pv -EE /dev/sda > disk-image.imgWriting an image back to a disk: pv disk-image.img > /dev/sdaZeroing a disk: pv < /dev/zero > /dev/sdaI don't know if all of them should be equivalent, although I tested some of them (using the md5sum tool) and, at least, dd and pv are not equivalent. Here's the md5sum of both the drive and generated files using each procedure: md5 of dd procedure: 71b676875b0194495060b38f35237c3c md5 of pv procedure: f3524d81fdeeef962b01e1d86e6acc04 EDIT: That output was from another CD than the output given. In fact, I realized there are some interesting facts I provide as an answer. In fact, the size of each file is different comparing to each other. So, is there a best procedure to copy a CD/DVD or am I just using the commands incorrectly?More information about the situation Here is more information about the test case I'm using to check the procedures I've found so far: isoinfo -d i /dev/sr0 Output: https://gist.github.com/JBFWP286/7f50f069dc5d1593ba62#file-isoinfo-output-19-aug-2015 dd to copy the media, with output checksums and file information Output: https://gist.github.com/JBFWP286/75decda0a67605590d32#file-dd-output-with-md5-and-sha256-19-aug-2015 pv to copy the media, with output checksums and file information Output: https://gist.github.com/JBFWP286/700a13fe0a2f06ce5e7a#file-pv-output-with-md5-and-sha256-19-aug-2015 Any help will be appreciated!
Is it better to use cat, dd, pv or another procedure to copy a CD/DVD?
With pv 1.2.0 (December 2010) and above, it's with the -a option: Here with both current and average, line-based: $ find / 2> /dev/null | pv -ral > /dev/null [6.28k/s] [70.1k/s]With 1.3.8 (October 2012) and newer, you can also use -F/--format with %a: $ find / 2> /dev/null | pv -lF 'current: %r, average: %a' > /dev/null current: [4.66k/s], average: [ 218k/s]Note that tail -f starts by dumping the last 10 lines of the file. Use tail -n 0 -f file | pv -la to avoid that bias in your average speed calculation.
If myfile is increasing over time, I can get the number of lines per second using tail -f myfile | pv -lr > /dev/null It gives the instantaneous speed, not the average. How can I get the average speed (i.e. the integral of the speed function v(t) over the monitoring time, divided by that time)?
How to get an average pipe flow speed
progress can do this for you — not quite a progress bar, but it will show progress (as a percentage) and the current file being processed (when multiple files are processed): gpg ... & progress -mp $!
I need to encrypt a large file using gpg. Is it possible to show a progress bar like when using the pv command?
How to show progress with GPG for large files?
A CD-ROM and USB stick use entirely different methods to boot. For an ISO9660 image on a CD-ROM, it's the El Torito Specification that makes it bootable; for a USB stick, it needs a Master Boot Record style boot sector. ISOLINUX, the bootloader that is used in ISO9660 CD-ROM images to boot Linux, has recently added a "isohybrid" hybrid mode that uses some clever tricks to create a single image that can be booted both ways. My guess is that your LiveCDs are actually isohybrid images, whereas the full installation DVDs are not. You may be able to use the isohybrid tool in the syslinux distribution to convert them, as described in the hybrid mode link above.
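If they are indeed plain (non-hybrid) images, the conversion plus the write to the stick would look roughly like this (image and device names are placeholders; triple-check the device before writing, since this overwrites it): $ isohybrid SL-install-DVD.iso # modifies the ISO in place $ pv SL-install-DVD.iso | sudo dd of=/dev/sdX bs=4M conv=fsync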
I want to install Scientific Linux from USB. I don't know why unetbootin doesn't work but I am not curious to find out: after all, I transferred to Linux from Windows to see and learn the underlying procedures. I format my USB drive to FAT32 and run this command as root: # pv -tpreb /path/to/the/downloaded/iso | sudo dd of=/path/to/the/USB/device While it works for Live-CDs or network installs (that are less than 1GB) it doesn't work for the actual installation DVDs that are about ~4GB. I would be really grateful if anyone can help me fix this problem. Considering the fact that it works for smaller .iso files, I guess it has to do with the File System, am I correct? What other options do I have?
Creating a bootable Linux installation USB without unetbootin
The pv utility is a "fancy cat", which means that you may use pv in most situations where you would use cat. Using cat with md5sum, you can compute the MD5 checksum of a single file with cat file | md5sumor, with pv, pv file | md5sumUnfortunately though, this does not allow md5sum to insert the filename into its output properly. Now, fortunately, pv is a really fancy cat, and on some systems (Linux), it's able to watch the data being passed through another process. This is done by using its -d option with the process ID of that other process. This means that you can do things like md5sum dir/* | sort >sums & sleep 1 pv -d "$(pgrep -n md5sum)"This would allow pv to watch the md5sum process. The sleep is there to allow md5sum, which is running in the background, to properly start. pgrep -n md5sum would return the PID of the most recently started md5sum process that you own. pv will exit as soon as the process that it is watching terminates. I've tested this particular way of running pv a few times and it seems to generally work well, but sometimes it seems to stop outputting anything as md5sum switches to the next file. Sometimes, it seems to spawn spurious background tasks in the shell. It would probably be safest to run it as md5sum dir/* >sums & sleep 1 pv -W -d "$!" sort -o sums sumsThe -W option will cause pv to wait until there's actual data being transferred, although this does also not always seem to work reliably.
I used md5sum with pv to check 4 GiB of files that are in the same directory: md5sum dir/* | pv -s 4g | sort The command completes successfully in about 28 seconds, but pv's output is all wrong. This is the sort of output that is displayed throughout: 219 B 0:00:07 [ 125 B/s ] [> ] 0% ETA 1668:01:09:02 It's like this without the -s 4g and | sort as well. I've also tried it with different files. I've tried using pv with cat and the output was fine, so the problem seems to be caused by md5sum.
Using pv with md5sum
You backed up the whole disk including the MBR (512 bytes), and not a simple partition which you can mount, so you have to skip the MBR. Please try with: sudo losetup -o 512 /dev/loop0 disk-image.img sudo mount -t ntfs-3g /dev/loop0 /mnt Edit: as suggested by @grawity (the --partscan option makes the kernel create per-partition devices such as /dev/loop0p1): sudo losetup --partscan /dev/loop0 disk-image.img sudo mount -t ntfs-3g /dev/loop0p1 /mnt
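If the filesystem does not start right after the MBR (modern partitioning usually starts it at 1 MiB), read the real offset out of the image first; a sketch, where START stands for the start sector reported for the NTFS partition and the sector size is 512 bytes: $ fdisk -l disk-image.img # note the Start column $ sudo losetup -o $((START * 512)) /dev/loop0 disk-image.img $ sudo mount -t ntfs-3g /dev/loop0 /mnt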
I was able to backup a drive using the following command. pv -EE /dev/sda > disk-image.imgThis is all well and good, but now I have no way of seeing the files unless I use this command pv disk-image.img > /dev/sdaThis, of course, writes the data back to the disk which is not what I want to do. My question is what can I do to mount the .img file itself instead of just writing back to a disk?I've tried mounting using loop but it seems to complain about an invalid NTFS. $ mount -o loop disk-image.img mount: disk-image.img: can't find in /etc/fstab. $ mount -o loop disk-image.img /mnt/disk-image/ NTFS signature is missing. Failed to mount '/dev/loop32': Invalid argument The device '/dev/loop32' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
With the command pv it is possible to clone a drive; how do I mount it? [duplicate]
man pv says:To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.The output you see comes from pv. The progress bar is on stderr, and the content you piped in is on stdout. You can redirect the output: cmd | pv > /dev/nulland you will still get the progress bar output. If the command produces its own text on stderr as well, you can redirect that explicitly to /dev/null, before passing on the output to pv: cmd 2>/dev/null | pv > /dev/null
I'm trying to use pv, but I want to hide the output of the command I piped in while still being able to see pv's output. Using command &> /dev/null | pv doesn't work (as in, pv doesn't receive any data). command produces output on both standard output and standard error, and I don't want to see either. I tried using a grep pipe (command &> /dev/null | pv | grep <=>) but that now and then outputs things to the terminal.
Pipe a command to pv but hide all the original command's output
Found that I can do this with xargs and the -P option: josh@subdivisions:/# seq 1 10 | xargs -P 4 -I {} bash -c "dd if=/dev/zero bs=1024 count=10000000 | pv -c -N {} | dd of=/dev/null" 3: 7.35GiB 0:00:29 [ 280MiB/s] [ <=> ] 1: 7.88GiB 0:00:29 [ 312MiB/s] [ <=> ] 4: 7.83GiB 0:00:29 [ 258MiB/s] [ <=> ] 2: 6.55GiB 0:00:29 [ 238MiB/s] [ <=> ]Send output of the array to iterate over into stdin of xargs; To run all commands simultaneously, use -P 0
I want to run a sequence of command pipelines with pv on each one. Here's an example: for p in 1 2 3 do cat /dev/zero | pv -N $p | dd of=/dev/null & doneThe actual commands in the pipe don't matter (cat/dd are just an example)... The goal being 4 concurrently running pipelines, each with their own pv output. However when I try to background the commands like this, pv stops and all I get are 4 stopped jobs. I've tried with {...|pv|...}&, bash -c "...|pv|..." & all with the same result. How can I run multiple pv command pipelines concurrently?
How can I run multiple pv commands in parallel?
In your setup the data has already passed pv while it is still being processed on the right side. You could try to move pv to the rightmost side like this: seq 20 | while read line; do sleep 1; echo ${line}; done | pv -l -s 20 > /dev/null Update: Regarding your update, maybe the easiest solution is to use a named pipe and a subshell to monitor the progress: #! /bin/bash trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT (rm -f /tmp/progress.pipe; mkfifo /tmp/progress.pipe; tail -f /tmp/progress.pipe | pv -l -s 20 > /dev/null)& limit=10 seq 20 | \ while read num do sleep 1 if [ $num -gt $limit ] then echo $num fi echo $num > /tmp/progress.pipe done
I would like to track progress of a slow operation using pv. The size of the input of this operation is known in advance, but the size of its output is not. This forced me to put pv to the left of the operation in the pipe. The problem is that the long-running command immediately consumes its whole input because of buffering. This is somewhat similar to the Turn off buffering in pipe question, but in my case it is the consuming operation that is slow, not the producing one and none of the answers to the other question seem to work in this case. Here is a simple example demonstrating the problem: seq 20 | pv -l -s 20 | while read line; do sleep 1; done 20 0:00:00 [13.8k/s] [=====================================>] 100%Instead of getting updated every second, the progress bar immediately jumps to 100% and stays there for the entire 20 seconds it takes to process the input. pv could only measure the progress if the lines were processed one by one, but the entire input of the last command seems to be read into a buffer. A somewhat longer example that also demonstrates the unknown number of output lines: #! /bin/bash limit=10 seq 20 | \ pv -l -s 20 | \ while read num do sleep 1 if [ $num -gt $limit ] then echo $num fi doneAny suggestions for a workaround? Thanks!
How to track progress of a command in a pipe if only the size of its input is known in advance?
This was asked recently but it was in the context of local disks. In that situation, there is a good reason to use a partition table on the disk even if you only intend to make it a single big partition spanning the entire disk: documenting the fact that the disk is actually in use, thus preventing accidents. I believe that the situation is different for managed disks, whether it is network block devices, paravirtualized disk images, SAN LUNs, iSCSI, etc... In this case, I think you should feel free to use whole disks directly, for the following reasons:Trying to use partition tables on these disks is apparently giving you trouble with resizing. Logical disks of this type can be dynamically resized in ways that physical disks never can. If it's giving you trouble, save yourself that trouble and don't do it. These types of disks often do not correspond to physical hard disks and won't be subject to the same kinds of "this disk appears to be blank" accidents that physical disks can. The logical disks is often backed by something that is already container-like: an outer partition, an LVM LV, a virtual block device in a storage cluster and an extra layer of partitioning is probably superfluous.
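As an illustration of why the whole-disk layout resizes so painlessly, the typical online-grow sequence is just (a sketch; the device, VG and LV names are invented, and the rescan path applies to SCSI/VMware-style disks): echo 1 | sudo tee /sys/class/block/sdc/device/rescan # pick up the new LUN size sudo pvresize /dev/sdc # no partition table to grow first sudo lvextend -r -L +50G /dev/vg_data/lv_data # -r also grows the filesystem With a partitioned PV you would have to grow the partition itself in between, which is the step that was giving you trouble.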
I've been able to define a physical volume (LVM) in two ways: Creating an 8e (Linux LVM type) partition and then # pvcreate /dev/sdb1 Using pvcreate directly on a non-partitioned disk and then # pvcreate /dev/sdc <-- note the lack of number since there aren't any partitions. My disks are not local; I use both scenarios: SAN-provided LUNs and VMware-provided disks on different systems. We are testing LUN/VMware disk online resizing, and everything went fine with the PVs defined on non-partitioned disks, but with the partition layer it was impossible to hot-resize them using parted or fdisk. My question is: why should I bother using 8e partitions if I can use raw disks for creating physical volumes and then resize them online?
Define physical volume inside non-partitioned disk
There are two reasons. In the first place, you don't tell it to quit. Consider: seq 10 | sed -ne1,5pIn that case, though it only prints the first half of input lines, it must still read the rest of them through to EOF. Instead: seq 10|sed 5qIt will quit right away there. You're also working with a delay between each process. So if pv buffers at 4kb, and sed buffers 4kb, then the last pv is 8kb behind input all the while. It is quite likely that the numbers are higher than that. You can try the -u switch w/ a GNU/BSD/AST sed but that's almost definitely not going to help performance on large inputs. If you call a GNU sed with -u it will read() for every byte of input. I haven't looked at what the others do in that situation, but I have no reason to believe they would do any differently. All three document -u to mean unbuffered - and that's a pretty generally understood concept where streams are concerned. Another thing you might do is explicitly line-buffer sed's output with the write command and one-or-more named write-file[s]. It will still slow things a little, but it probably will be better than the alternative. You can do this w/ any sed like: sed -n 'w outfile'sed's write command is always immediate - it is unbuffered output. And because (by default) sed applies commands once per line-cycle, sed can be easily used to effectively line-buffer i/o even within the middle of a pipeline. That way, at least, you can keep the second pv pretty much up to date w/ sed the whole time like: pv ... | sed -n '24629045,24629162!w /dev/fd/1' | pv ......though that assumes a system which provides the /dev/fd/[num] links (which is to say: practically any linux-based system - excepting Android - and many others besides). Failing said links' availability, to do the same thing you could just explicitly create your own pipe with mkfifo and use it as the last pv's stdin and name it as sed's write file.
I ran sed on a large file, and used the pv utility to see how quickly it's reading input and writing output. Although pv showed that sed read the input and wrote the output within about 5 seconds, sed did not exit for another 20-30 seconds. Why is this? Here's the output I saw: pv -cN source input.txt | sed "24629045,24629162d" | pv -cN output > output.txt source: 2.34GB 0:00:06 [ 388MB/s] [==========================================================================================================>] 100% output: 2.34GB 0:00:05 [ 401MB/s] [ <=> ]
Why doesn't sed exit immediately after writing the output?
You should try openssl enc -aes-256-cbc -d -salt -in "$input_filename" | pv -W >> "$output_filename"From the Manual:-W, --wait: Wait until the first byte has been transferred before showing any progress information or calculating any ETAs. Useful if the program you are piping to or from requires extra information before it starts, eg piping data into gpg(1) or mcrypt(1) which require a passphrase before data can be processed.which is exactly your case. If you need to see the progress bar, for the reason clearly explained by Weijun Zhou in a comment below, you can reverse the order of the commands in the pipe: pv -W "$input_filename" | openssl enc -aes-256-cbc -d -salt -out "$output_filename"
I need to encrypt and be able to decrypt files with openssl, currently I do this simply with: openssl enc -aes-256-cbc -salt -in "$input_filename" -out "$output_filename"and the decryption with: openssl enc -aes-256-cbc -d -salt -in "$input_filename" -out "$output_filename"But with large files, I would like to see progress. I tried different variations of the following (decryption): pv "$input_filename" | openssl enc -aes-256-cbc -d -salt | pv > "$output_filename"But this fails to ask me for a password. I am unsure as to how to go about it? EDIT1: I found this tar over openssl: https://stackoverflow.com/a/24704457/1997354 While it could be extremely helpful, I don't get it much. EDIT2: Regarding the named pipe: It almost works. Except for the blinking progress, which I can't show you obviously and the final result looking like this: enter aes-256-cbc decryption password: 1.25GiB 0:00:16 [75.9MiB/s] [==============================================================================================================================================================================================>] 100% 1.25GiB 0:00:10 [ 126MiB/s] [ <=> ]
How to use pv to show progress of openssl encryption / decryption?
The two pv processes in the pipe may start in any order. The output from the latest pv will be in the bottom line. Delay pv you want in the bottom line. Instead of pv … (where … denotes all its arguments) use a subshell: ( </dev/null sleep 1; exec pv … )In theory the other pv may still start after the delayed one, but in a not-totally-overloaded system it's almost certain the delayed pv will start last. sleep should not read from its stdin anyway; </dev/null is in case your sleep is weird. I'm not sure if some race condition can cause an extra (stale) line to appear. If so, delaying one pv should also (almost certainly) help. In my tests the output gets mangled when the terminal needs to be "additionally" updated. Therefore:Do not resize the terminal while pvs are running. Avoid scrolling:Before you run the script, invoke clear (or hit Ctrl+L). This will clear the screen, place the prompt on top and provide room below without the need of scrolling later. Do not type while pvs are running; especially multiple Enters (that could eventually scroll the text) should be avoided. In general do not let anything else than pvs print to the terminal until pvs finish. This applies to other parts of the pipe (e.g. via /dev/tty), asynchronous processes in the script (e.g. simply via their stdout), processes outside of the script (e.g. via /dev/tty* or /dev/pts/*).
I have a script on a Linux machine with a fancy pv piped to a second pv that count a subset of the outputted lines. Here's the script: max=1000 for (( i=0; i<max; i++ )); do [[ $(shuf -i 1-100 -n 1) -lt 20 ]] && echo REMOVE || echo LEAVE done | pv -F "%N %b / $(numfmt --to=si $max) %t %p %e" -c -N 'Lookups' -l -s $max \ | grep --line-buffered '^REMOVE' \ | pv -F "%N %b / $(numfmt --to=si $max)" -c -N 'Deletes' -l -s $max \ >/dev/null stty sanewhat I would expect is that the first pv always shows first, and the second always second. Like this example output: $ ./fancy_pv.sh Lookups: 1.00k / 1.0K 0:00:03 [===============================================================================================================================================================================================================================================================================================================================================================>] 100% Deletes: 189 / 1.0KBut that's not the case, sometimes they swap positions and I see something like this: $ ./fancy_pv.sh Deletes: 199 / 1.0K Lookups: 1.00k / 1.0K 0:00:03 [===============================================================================================================================================================================================================================================================================================================================================================>] 100% And sometimes I also see something like this: $ ./fancy_pv.sh Lookups: 321 / 1.0K 0:00:01 [===============================================================================================================> ] 32% ETA 0:00:02 Deletes: 198 / 1.0K Lookups: 1.00k / 1.0K 0:00:03 [===============================================================================================================================================================================================================================================================================================================================================================>] 100% I know it must be because of the way pv deletes the line and redraws it, but is there anything I can do to prevent it from messing with the order? stty sane is there to sanitize the prompt because sometimes pv leaves the terminal unusable. Thanks
Multiple pv order
The killer is the use of two processes. With cat | pv, cat reads and writes, and pv reads and writes, and both processes need to run: $ perf stat sh -c 'cat /dev/zero | pv -s 100G -S > /dev/null' 100GiB 0:00:26 [3.72GiB/s] [====================================================================================>] 100% Performance counter stats for 'sh -c cat /dev/zero | pv -s 100G -S > /dev/null': 34,048.63 msec task-clock # 1.267 CPUs utilized 1,676,706 context-switches # 0.049 M/sec 3,678 cpu-migrations # 0.108 K/sec 304 page-faults # 0.009 K/sec 119,270,941,758 cycles # 3.503 GHz (74.89%) 137,822,862,590 instructions # 1.16 insn per cycle (74.94%) 32,379,369,104 branches # 950.974 M/sec (75.14%) 216,658,446 branch-misses # 0.67% of all branches (75.04%) 26.865741948 seconds time elapsed 1.257950000 seconds user 38.893870000 seconds sysWith pv only, there’s just pv reading and writing, no context switching needed (or hardly any): $ perf stat sh -c '< /dev/zero pv -s 100G -S > /dev/null' 100GiB 0:00:07 [13.3GiB/s] [====================================================================================>] 100% Performance counter stats for 'sh -c < /dev/zero pv -s 100G -S > /dev/null': 7,501.68 msec task-clock # 1.000 CPUs utilized 37 context-switches # 0.005 K/sec 0 cpu-migrations # 0.000 K/sec 198 page-faults # 0.026 K/sec 27,916,420,023 cycles # 3.721 GHz (75.00%) 62,787,377,126 instructions # 2.25 insn per cycle (74.99%) 15,361,951,954 branches # 2047.801 M/sec (75.03%) 51,741,595 branch-misses # 0.34% of all branches (74.98%) 7.505304560 seconds time elapsed 1.768600000 seconds user 5.733786000 seconds sysThere’s some parallelism (“1.267 CPUs utilized”), but it doesn’t make up for the huge difference in the number of context switches. Things could be worse, considering the data path — in the first case, data seems to flow from the kernel (/dev/zero), to cat, back to the kernel (for the pipe), to pv, to the kernel (/dev/null). In the second, data flows from the kernel, to pv, back to the kernel. But in the first scenario, pv uses splice to copy data from the pipe, avoiding a trip through kernel-owned memory.
I was testing different methods to produce random garbage and comparing their speed by piping output to pv, as in: $ cmd | pv -s "$size" -S > /dev/nullI also wanted a "baseline reference", so I measured the the fastest "generator", cat, with the fastest source, /dev/zero: $ cat /dev/zero | pv -s 100G -S > /dev/null 100GiB 0:00:33 [2,98GiB/s] [=============================>] 100% 3GB/s, that's pretty impressive, specially compared to ~70MB I get from /dev/urandom. But hey, for the special case of /dev/zero I don't need cat! Just for the kicks I removed this textbook UUOC: $ < /dev/zero pv -s 100G -S > /dev/null 100GiB 0:00:10 [9,98GiB/s] [=============================>] 100% What??? Almost 10GB/s? How can removing cat and a pipe more than triple the speed? If using a slower source such as /dev/urandom the difference is negligible. Is pv doing some voodoo magic? So I tested: $ dd if=/dev/zero iflag=count_bytes count=200G of=/dev/null status=progress 205392969728 bytes (205 GB, 191 GiB) copied, 16 s, 12,8 GB/s12,8 GB/s! Same ballpark as pv, and 4 times faster than using pipes. Is cat to blame? Are pipes so much different than redirection? Afterall, both go to pv as stdin, right? What can explain this huge difference?
pipe and redirection speed, `pv` and UUOC
The progress bar is a feature of pv, it is written on standard error. From the pv manual:pv shows the progress of data through a pipeline by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA. To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.There is really no problem writing to the TTY while at the same time redirecting both standard output and standard error though: $ ( echo "out"; echo "error" >&2; echo "hi there" >$(tty) ) 2>&1 | cat >file hi there$ cat file out errorAlso, O_WRONLY and O_RDONLY are not nouns but adjectives. Standard output is write-only and standard input is read-only.
How does the following command work? pv file.tar.gz | tar -xzFrom my understanding the pipe operator | creates a pipe and stdout of pv is mapped to the O_WRONLY end of the pipe and tar's stdin is mapped to the O_RDONLY with both O_WRONLY and O_RDONLY existing in pipefs This is all well and good, but the following is being printed to my screen: 31.1MiB 0:00:05 [6.17MiB/s] [===================================>] 100%To the best of my knowledge this progress bar is not generated by tar because it would be available via an option if it was and I wouldn't need pv, thus pv has to be generating it. But how? pv's stdout is mapped to O_WRONLY. I also read that some shells use socket pairs for pipes in place of pipefs and socket pairs are bidirectional. But that just seems like it would tie up stdin and stdout of both commands until one or both completes. which is not the case in the above example since the progress bar updates in real time.
How does pv work?
Use pv -f … From man 1 pv:-f, --force Force output. Normally, pv will not output any visual display if standard error is not a terminal. This option forces it to do so.(pv -fF $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 2>&1 | tr -d ':[]'
Executing this command displays the output on the console. But when the output is piped to another command it does not work. See below. (pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 0:00:01 [25.2MiB/s] ETA 0:00:18 0:00:02 [23.7MiB/s] ETA 0:00:18 0:00:03 [ 100MiB/s] ETA 0:00:07 0:00:04 [ 199MiB/s] ETA 0:00:01 Now see below: the same command, with its output piped to another command, and it does not display anything at all. I have redirected stderr to stdout and passed it to tr -d so it can remove the ":[]" characters. (pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 2>&1 | tr -d ':[]' See below the same command again, but without redirecting stderr to stdout. In that case I don't get the desired result either: I am using tr -d to delete the characters ":[]", but it does not work. You can see the tr -d command is completely ignored. (pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) | tr -d ':[]' 0:00:01 [25.2MiB/s] ETA 0:00:18 0:00:02 [23.7MiB/s] ETA 0:00:18 0:00:03 [ 100MiB/s] ETA 0:00:07 0:00:04 [ 199MiB/s] ETA 0:00:01 I have spent countless hours trying to figure this out and have searched Stack Exchange and other forums, but I cannot get my head around how to fix this. I have also tried using a file descriptor (2>&3), but still no luck.
pv not printing to a pipe
Your first command should not be able to run at all as you can't use a pipe in an -exec like that (this was apparently a typo in the original question). Instead: find . -type f -exec md5sum {} + | sort -o ~/checksumsor, with pv, find . -type f -exec md5sum {} + | pv | sort -o ~/checksumsIn both of the above, md5sum would be called with as many pathnames as possible in batches. sort would take the output of find (which is the output of md5sum) and sort it into the given filename. The second variation additionally inserts pv between the find and the sort. You can't use -exec pv {} | md5sum for individual files as the pipe would need to be embedded in a in-line shell script that you call from -exec for each file. But even the correct -exec sh -c 'pv "$1" | md5sum' sh {} \; would discard the filename from the generated md5sum output, so that can't be used either. The pv utility acts like a drop-in replacement for cat.
I use the following command to verify ~700 GiB of backed-up files: $ find -type f -exec md5sum {} + | sort > ~/checksumsThis takes many hours, so I would like to integrate pv into the command to show the progress. I could do this: $ find -type f -exec pv {} + | md5sumBut it concatenates all of the files, resulting in only one checksum. So how could I include pv and still get a text file full of checksums at the end?
Use pv with find -exec
This is caused by accounting in pv, which effectively means its rate-limiting is read-limited rather than write-limited. Looking at the source code shows that rate-limiting is driven by a “target”, which is the amount remaining to send. If rate-limiting is on, once per rate limit evaluation cycle, the target is increased by however much we’re supposed to send according to the rate limit; the target is then decreased by however much is actually written. This means that if you set the rate limit to a value larger than the actual write capacity, the target will keep going up; reducing the rate limit won’t then have any effect until pv has caught up with its target (including what it’s allowed to write according to the new rate limit). To see this in action, start a basic pv: pv /dev/zero /dev/nullThen control that: pv -R 32605 -L 1M; sleep 10; pv -R 32605 -L 1G; sleep 1; pv -R 32605 -L 1MYou’ll see the impact of the target calculations by varying the duration of the second sleep... Because of the write limitation, this only causes an issue when you set the rate limit to a value greater than the write capacity. In a little more detail, here’s how the accounting works with a flow initially limited to 1M, then to 1G for 5s, then back to 1M, on a connection capable of transmitting 400M: Time Rate Target Sent Remaining 1 1M 1M 1M 0 2 1G 1G 400M 600M 3 1G 1.6G 400M 1.2G 4 1G 2.2G 400M 1.8G 5 1G 2.8G 400M 2.4G 6 1G 3.4G 400M 3G 7 1M 3001M 400M 2601M 8 1M 2602M 400M 2202M 9 1M 2203M 400M 1803M 10 1M 1804M 400M 1404M 11 1M 1405M 400M 1005M 12 1M 1006M 400M 606M 13 1M 607M 400M 207M 14 1M 208M 208M 0 15 1M 1M 1M 0It takes 7s for the rate limit to be applied again. The longer the time spent with a high rate limit, the longer it takes for the reduced rate limit to be enforced... The fix for this is quite straightforward, if you can recompile pv: in loop.c, change line 154 to target = (from target +=), resulting in || (cur_time.tv_sec == next_ratecheck.tv_sec && cur_time.tv_usec >= next_ratecheck.tv_usec)) { target = ((long double) (state->rate_limit)) / (long double) (1000000 / RATE_GRANULARITY);Once that’s done, rate limit reductions are applied immediately (well, within one rate-limit cycle).
I'm using pv for sending files via ssh. I can change the rate limit of a running pv without any problem as long as it stays under 100M. But when I set the running pv process to 100M or 1G or higher, I can't change the rate anymore... Oddly, if I toggle the limit back and forth 5-10 times (1M to 2M, 2M to 1M), pv sometimes does pick up the new rate. I couldn't find any solution for the problem. Any idea? Examples:

pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1M
pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1G
pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1M   (not working anymore)
Pipe-Viewer problem with changing Rate-Limit
There will typically (lacking zero-copy trickery) be measurable overhead due to the extra IPC: copying the data from one process to another, rather than the "workhorse" process reading files directly. A pipe may also result in loss of performance (or functionality) for other reasons: with piped input a process cannot seek() on its input, and cannot mmap() it. Generally though, the main performance bottlenecks are probably disk I/O and CPU compute time (which presumably is intensive in your case). These may be very much larger than the IPC overhead, but there are many variables here (CPU type, disk type and filesystem type, available physical RAM, OS and version, libc and version — at least). You can get a rough idea of performance with some quick tests, taking care to flush the disk cache before each one (I'm using Linux, I use this method).

# time ( pv -pt somethinglarge.iso | sha256sum )
[...]
real    0m8.066s
user    0m5.146s
sys     0m1.075s

# time ( sha256sum somethinglarge.iso )
[...]
real    0m7.913s
user    0m5.064s
sys     0m0.309s

Note the similar real and user times, and the marked increase in system time for the piped case due to the extra copying. On some OSes, specifically Linux, you may be able to read per-process I/O stats from /proc (see 3.3) (you'll need CONFIG_TASKSTATS enabled in the kernel for this). This isn't as easy or as slick as pv, but it's low-overhead. pidstat uses this; it can be used to show real-time throughput (rate) for a PID, but it's less useful as a completion indicator. A similar Linux option (this one doesn't need CONFIG_TASKSTATS): given a process and file descriptor, you can track the file descriptor's offset in /proc/PID/fdinfo/FD (the pos: field). Here's a toy script that shows this:

FILE=/tmp/some-large-input
SZ=$(stat -c "%s" "$FILE")

# start slow process in background
( some-slow-command $FILE ) &
PID=$!
FD=/proc/$PID/fdinfo/3   # some experimentation required
                         # or iterate over /proc/$PID/fd/* with readlink

# start %-ometer in background, exits when FD disappears
( while nawk '/^pos:/{printf("%i\n",$2*100/'$SZ')}' $FD 2>/dev/null ; do
      sleep 5   # adjust
  done | dialog --gauge "$PID: processing $FILE ($SZ bytes)" 10 60 ) &

wait $PID
if [ $? -eq 0 ]; then
    echo 100 | dialog --gauge "$PID: completed $FILE ($SZ bytes)" 10 60
else
    echo ...
fi

(Caveat: not accurate for small files, libc stdio buffering will skew the results.) Other options that occur to me right now:

- use lsof to monitor a process's fd offsets: not exactly lightweight, but multi-platform, and you can start it on any long-running process after the fact, which you cannot do with pv (it's not pretty either, since lsof refuses to give both the size and offset in one go)
- something hackish with LD_PRELOAD and some stubs which track data read/write; this too is multi-platform, but I think you'd have to write your own (I don't know any that does exactly this, but here's a related answer of mine)

Update: someone has gone to the trouble of writing a general purpose transfer monitor tool cv for use with coreutils commands on Linux. It uses similar logic to the /proc fdinfo approach (as demonstrated in the shell hack above). It also has a background mode where it scans /proc and reports on transfers in progress as it finds them. See related question Is it possible to see cp speed and percent copied?
I am writing a batch script to sort through gigs and gigs of data. All of the data is text, but the script will take a very long time to execute. I would like to give some visual indication that the script is running. I found the program called pv, which allows you to create progress bars and other nice CLI progress indicators. My question is: what type of impact does something like this have on performance, and are there any alternatives that don't require me to reinvent the wheel? I have googled it to death and haven't found anything, which is surprising, as I would assume you would only show progress on long tasks where performance is important. But I suppose I am using clock cycles, and you could also use it to measure disk I/O or network data transfers, where using a few cycles for a progress indication wouldn't matter. Any ideas? P.S. I have also considered using the echo -ne trick to create my own, but to make it feasible I would have to use the % operator on every loop and only take action every 100th or so loop, and that is A LOT of wasted calculations...
Pipe Viewer - Progress monitor performance consequence
The system interprets /dev/zero as literally just an endless stream of zeroes, and I believe this is the fastest way to obtain useless information. In all likelihood, you're going to be bottlenecked by your physical disk speed, and so this should be as fast as you'd ever need even if there are any faster methods. Also, when testing, I was surprised to find that cat was much faster than dd for this!
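The cat-versus-dd difference is most likely just dd's small default block size (512 bytes); a quick sketch you can run to check on your own machine (the counts are arbitrary examples):

dd if=/dev/zero bs=512 count=2000000 2>/dev/null | pv > /dev/null   # dd's default block size
dd if=/dev/zero bs=1M count=1000 2>/dev/null | pv > /dev/null       # bigger blocks, usually much faster
cat /dev/zero | pv > /dev/null                                      # interrupt with Ctrl-C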
Which command produces more data per second? This could be useful to quickly fill a file with garbage data or to test data transfer rates. So far, I found that "/dev/zero" is the quickest one.

$ cat /dev/urandom | pv > /dev/null
3,04GO 0:08:22 [5,83MB/s] [ <=> ]

$ yes | pv > /dev/null
38GO 0:11:56 [40,2MB/s] [ <=> ]

$ cat /dev/zero | pv > /dev/null
754GO 0:08:52 [ 1,4GB/s] [ <=> ]

Would you suggest another possible faster command?
Which command produces more data per second?
Sending SIGSTOP/SIGCONT directly to the zfs send or receive process causes errors. The only way these signals work is when you apply them to pv: you can stop and continue pv, but while pv is stopped zfs is still trying to send, and I don't yet know the consequences, i.e. whether it causes any problems with CPU or I/O usage on the host. I stopped it for a few hours and then sent SIGCONT, and I did not see any problem.
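To illustrate, a sketch of pausing only the pv process (it assumes you saved pv's PID somewhere, e.g. with pv -P /tmp/pv.pid):

kill -STOP "$(cat /tmp/pv.pid)"   # pause: pv stops reading, so zfs send blocks once the pipe buffer fills
# ... later ...
kill -CONT "$(cat /tmp/pv.pid)"   # resume where it left off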
I'm using pv for my ZFS send-recv replication. I use the ZFS resume token too, but I want to pause and resume like SIGSTOP/SIGCONT, because using the resume token means sending the same data again. So how do you manage pause and resume with pv? BTW: "pv - monitor the progress of data through a pipe"
What is the best way to pause and resume "zfs send via pv"?
If you do not have backups, your data wasn't important. It's gone. There is no undo. Especially not with encryption involved. Something that produces output > /dev/somedisk overwrites data on the device. Whatever is overwritten cannot be restored, so your only chance would be if you noticed and cancelled it right away. Then probably only the first few hundred megs would be missing and you might have a chance at recovery, especially if the partitions you want to recover started somewhere further out. In this case it's a matter of restoring the partition table, from memory or using testdisk, gpart or whatever. If you did not cancel, it depends on how much output was produced, i.e. in your case whether /dev/sdb was smaller than /dev/sdc, in which case only part of /dev/sdc was overwritten. However, you say it was dm-crypt'ed. That usually means LUKS. And LUKS has a header at the start. If you lose that header and the LUKS container is not still open, there is no way to get anything back. If it's still open, you want to save the output of dmsetup table --showkeys. Some people use LUKS without partitioning the drive, and then have some silly mistake in a partitioner or installer that does nothing but create a small partition table. That overwrites less than 512 bytes at the start of the disk but it's still enough to damage the LUKS header and the data is irrecoverably lost.
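For the future, it is worth keeping a copy of the LUKS header and, while a mapping is open, of the device-mapper table; a sketch (device and file names are examples, and both outputs must be stored securely):

cryptsetup luksHeaderBackup /dev/sdb --header-backup-file sdb-luks-header.img
dmsetup table --showkeys > dm-tables.txt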
I've just written over the wrong hard drive using the command:

sudo sh -c 'pv /dev/sdb >/dev/sdc'

How do I go about undoing this? I was creating the first-ever backup of the drive, and I backed up over the wrong drive... The drive which got written over also has no backups; I was going to back up that drive next. Both drives were dm-crypt'ed.
Backed up over wrong hard drive
Continue with dd:

dd if=original.data of=copy.data ibs=512 obs=512 seek=NNN skip=NNN status=progress

You have to get the byte count of copy.data, then replace NNN with that byte count divided by 512 (the value set for ibs and obs).
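A sketch of computing NNN automatically (assumes GNU stat; conv=notrunc is added as a precaution so dd does not truncate the partial copy):

blocks=$(( $(stat -c %s copy.data) / 512 ))
dd if=original.data of=copy.data ibs=512 obs=512 seek="$blocks" skip="$blocks" conv=notrunc status=progress

The integer division rounds down, so any trailing partial block is simply copied again.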
I was copying a very big file and I accidentally stopped it. Can I resume copying without needing to delete the partial copy and start again? The command I used: pv original.data > copy.data
Continue copying file
What am I doing wrong?

LVM Logical Volumes are not created with fdisk. You need to use lvcreate instead.

I did chose type 8e when creating these partition lvm

Setting the partition type using fdisk lets you hint that a partition may contain an LVM Physical Volume. Like setting any other partition type, this doesn't actually format the partition. To format a partition as an LVM Physical Volume, you need to use pvcreate. You do the pvcreate first. Then assign it to an LVM Volume Group, for example creating a new VG using vgcreate myvg /dev/sda2. Then you can create logical volumes. You could go ahead and do this from the man pages, you shouldn't need to set any non-default option here, but it's probably easier to look for a nice tutorial which satisfies these criteria :-P.

So what did you do? Well, you effectively treated the partition /dev/sda2 as a disk itself. You formatted it with a partition table, and created partitions inside it. Apparently fdisk is happy to let you do this without considering it a problem :). However this isn't generally useful or something that people do. BSD installs on PCs do something a bit like this, however Linux installers do not. I tried creating something like sda2p1 myself. My conclusion was the Linux kernel itself does not support nesting partition tables like this, although userspace commands can let you access them if you understand what's going on. In my own testing, partprobe /dev/sda8 failed. It seemed confused, thinking that partitions were already being used, and reported errors on more partitions than existed anywhere on my system. Instead, using kpartx -av /dev/sda8 worked, in my case to detect and map "sda8p1". However it appears the Linux kernel did not support nested partitions like this.[1] The kernel was not aware of the block device sda8p1. (It did not appear in /sys/class/block under that name). Instead, the result of kpartx was to create a "device mapper" block device called dm-0. It was created such that cat /sys/block/dm-0/dm/name showed sda8p1. Even after the kpartx command, my system did not create a device node at /dev/sda8p1. Instead, the device node was accessible as /dev/mapper/sda8p1. (Or directly as /dev/dm-0. ls -l /dev/mapper shows that the file(s) there are symbolic links to /dev/dm-*).

[1] Bonus fact: device nodes for sda1 etc. have pre-allocated device numbers. There is no pre-allocated number for sda2p1 etc.
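For reference, a minimal sketch of the intended LVM workflow (volume group name and sizes are just examples):

pvcreate /dev/sda2              # make the partition an LVM Physical Volume
vgcreate myvg /dev/sda2         # put it into a Volume Group
lvcreate -L 10G -n mylv myvg    # carve out a Logical Volume
mkfs.ext4 /dev/myvg/mylv        # then put a filesystem on it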
So when I do (as root)

fdisk -l

I see /dev/sda1 and /dev/sda2. Now I am practicing creating logical volumes. When I tried partitioning /dev/sda2, I got two new partitions, /dev/sda2p1 and /dev/sda2p2, and then I ran partprobe. But then when I try creating a PV on /dev/sda2p1 and /dev/sda2p2, it says these devices are not found, even though when I run fdisk -l /dev/sda2 I do see them listed there (and I did choose type 8e when creating these partitions, i.e. Linux LVM). What am I doing wrong?
lvm and a partitioning question
If for some reason you must read the image file using a block size of 16K:

dd if=/tmp/nfs/image.dd bs=16k | pv -L <rate> > /dev/sda

where <rate> is the maximum allowed number of bytes per second to be transferred, or the maximum allowed number of kibibytes, mebibytes, gibibytes, [...] per second if K, M, G, [...] is specified. However, if you don't really have to read the file using a block size of 16K, just use pv on its own, which can read the file (or a block device) directly:

pv -L <rate> /tmp/nfs/image.dd > /dev/sda
This is my dd command which I need to modify:

dd if=/tmp/nfs/image.dd of=/dev/sda bs=16k

Now I would like to use pv to limit the speed of copying from the NFS server. How can I achieve that? I know that --rate-limit does the job, but I am not sure how to construct the pipeline.
How to redirect dd to pv? [duplicate]
The following command is not doing what you intend:

gzip -d /mnt/mydrive/img.gz > /dev/sda

It decompresses the file /mnt/mydrive/img.gz in place, creating /mnt/mydrive/img as the ungzipped copy of img.gz. The > /dev/sda does nothing useful, because nothing is sent to /dev/sda via stdout. What you need to do is send the decompressed data to stdout (using -c):

gunzip -c /mnt/mydrive/img.gz > /dev/sda

or

gunzip -c /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M
I created an image of a 256GB HDD using the following command:

dd if=/dev/sda bs=4M | pv -s 256G | gzip > /mnt/mydrive/img.gz

Later I tried to restore the image to another, 512GB HDD on another computer using the following command:

gzip -d /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M

The 2nd command shows zero bytes of progress for a very long time (just counting seconds, but nothing happens) and after a while it fails with an error telling me there is no space left on the device. The problem is in the gzip command: when I unpack the image file to a raw 256GB file xxx.img and restore it without using gzip, it works:

dd if=/mnt/mydrive/xxx.img bs=4M | pv -s 256G | dd of=/dev/sda bs=4M

Clearly the problem is in the gzip command (I tried gunzip as well, no luck). As a workaround I can restore images using a huge temporary external drive, which is annoying. The zipped image is about 10% of the size of the raw image. Do you have an idea why the gzip step is failing? Side note: the problem is not in pv or dd; the following command fails with the same error message:

gzip -d /mnt/mydrive/img.gz > /dev/sda
restoring hdd image using gzip throws error no space left on device
pv doesn't know about the system power states. All it sees is that the clock changed by a very large amount at some point. My guess is that pv doesn't care if the amount of time between two clock readouts suddenly gets large and just calculates the throughput based on the time interval. Since the interval is very large, it appears that the throughput is very low. The throughput calculation is averaged over a number of clock reads (about 5min in your observations). As long as the interval considered includes the time spent in suspension, the calculated throughput value will be very low. Once the interval again consists only of awake time, the throughput will be back to what is expected. For example, suppose that you suspended for 5 minutes. Then just after resuming, pv calculates that 500kB were transferred in the last 5min, meaning a throughput of only about 1.7kB/s. That's way below the 500kB threshold so pv transfers more data for a while to compensate. Eventually the throughput calculation will stabilize again. Suspending the system is not like suspending the pv process. Suspending the system is transparent to programs. A suspended process, on the other hand, receives a SIGCONT signal when it wakes up. pv has a signal handler for SIGCONT which causes it to more or less subtract the time it spent suspended (I haven't checked what exactly it does if it was suspended with an uncatchable SIGSTOP signal, but it shouldn't cause too big a perturbation, unlike system suspension).
I have this perl script and I discovered the pv command and decided to use it to get some feedback into what is going on with the randomness in terms of throughput. After a few tests [1] I decided to throttle the command, like so:

perl_commands < /dev/urandom | pv -L 512k | tr -cd SET
5.5MiB 0:00:11 [ 529kiB/s] [ <=> ]

I suspend to RAM using systemctl suspend (Archbang). When I resume, the command still runs and includes the elapsed time since suspend in its dialog, but it looks as if the limit I set is no longer enforced: throughput is 2-3MiB/s and CPU is higher, like without a limit. After some time, this subsides and I can see that the limit is still enforced. For example, if I run the command for only a few seconds, it'll take seconds for the throughput to come back to its set limit. On the other hand, after generating 815Mb of data during an hour, then suspending for 30 mins, it takes about 5 mins for the command to return to the limit I had set - and CPU usage is like with no throttling during that time. So it is not that the limit isn't enforced; it's rather that coming out of suspend to RAM seems to impact the throughput in this context. Why, and can this behavior be changed?

[1] The command uses one CPU core when not throttled. With a limit of 512KiB/s, CPU usage is about 10-15% or less. It takes about 2gb of randomness (and some time) to fill my 80x40 terminal window (depending on SET).
Why does it temporarily look like the pv command transfer limit is no longer enforced when I come out of suspend to ram?
Inserting pv in your receive-side pipeline should allow you to observe progress:

nc 127.0.0.1 8888 | pv >device_image.dd

If you had pv available on the sending side, you could also use it there:

dd if=/dev/block/mmcblk0 | pv | busybox nc -l -p 8888

But pv probably won't be available on your Android device unless you installed it there.
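If you know the partition's size, pv can also show a percentage and an ETA; a sketch, assuming you can read the size on the device (e.g. via blockdev or /proc/partitions) and that the number below is just an example value:

busybox blockdev --getsize64 /dev/block/mmcblk0           # run on the Android side: size in bytes
nc 127.0.0.1 8888 | pv -s 15758000128 > device_image.dd   # on the Linux side, pass that size to pv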
How can I monitor a netcat transfer from Android to my Linux machine? I used this command on the Android device (the sender) to make a full dump of my device:

dd if=/dev/block/mmcblk0 | busybox nc -l -p 8888

On the receiver side I use this command:

nc 127.0.0.1 8888 > device_image.dd

I need to watch the progress with pv. How can I do it? Thanks.
watch netcat transfer dump from android to pc
The nice thing about Linux is that you have access to the sources, so it is pretty much always possible to change something to do what you would like it to do, if you make the effort. In this case, it is not too difficult to download the sources, and look through them to see if it is obvious what to change. Then just rebuild your own pv binary. If you are using an rpm based system try the following (as an ordinary user):

$ yumdownloader --source pv

(This should work even if you have dnf instead of yum). You should end up with a file with suffix .src.rpm. The rest of the name will vary depending on your release. Install and compile it:

$ rpm -i pv-1.6.0-1.fc22.src.rpm
$ rpmbuild -bc ~/rpmbuild/SPECS/pv.spec

You don't need to be root to install the sources as they are put in ~/rpmbuild. You may however need to install the rpmbuild and other packages to do the compilation. You should get the normal final binary pv in:

$ file ~/rpmbuild/BUILD/pv-1.6.0/pv

Grep through the sources, for say MiB, to find a likely change. I found ~/rpmbuild/BUILD/pv-1.6.0/src/pv/display.c had a routine pv__si_prefix() that took a parameter is_bytes that determined whether to divide by 1000 or 1024. I simply edited this routine to force it to 0 by adding

is_bytes = 0;

just after the declarations (before if (is_bytes) {). Then do make to get the binary recompiled as follows:

$ cd ~/rpmbuild/BUILD/pv-1.6.0/
$ make

The new pv file should do what you want.

On a deb packaging system you have similar steps to do:

$ sudo apt-get install dpkg-dev debhelper
$ apt-get source pv
$ cd pv-1.6.0/
$ dpkg-buildpackage -b -nc
... edit src/pv/display.c
$ make
In pv, the rate meter is displayed as

47.5MiB 0:00:00 [ 165MiB/s] [================================>] 100%

where the unit used for the transfer stats is MiB (based on multiples of 1024). Is it possible to change this unit to MB (based on multiples of 1000)?
Can the unit displayed in the transfer rate meter in pv be changed?
See What are the shell's control and redirection operators? and Order of redirections for background.

tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)

tells the shell to run tar -czf - ./Downloads/ with standard output redirected to (pv -p --timer --rate --bytes > backup.tgz).

(pv -p --timer --rate --bytes > backup.tgz)

tells the shell to run pv -p --timer --rate --bytes with standard input connected to the pipe from tar, and standard output redirected to backup.tgz. The overall effect, since tar is told to create a compressed archive, outputting to its standard output (f -), is to build a compressed archive, pipe it through pv, and then have pv write it to backup.tgz. pv displays a progress bar, updated as data flows through it. The parentheses aren’t required. Your second command changes the second half of the first pipe:

(pv -n > backup.tgz) 2>&1

again writes to backup.tgz, but also redirects the subshell’s (introduced by the parentheses) standard error to standard output, and feeds that into dialog which produces its own progress display. The parentheses are required here if the redirections are set up in the order given in your command: pv -n > backup.tgz 2>&1 would redirect both pv’s standard output and standard error to backup.tgz, which isn’t what you want. The desired effect can also be achieved by redirecting standard error first: pv -n 2>&1 > backup.tgz.
I am trying to understand exactly how the redirection works in this command:

# tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)

What is the English translation? Is all the data from tar redirected as input to pv, and then pv redirects it to backup.tgz? Then why are the brackets around pv required? And how does this redirection make sense?

tar -czf - ./Documents/ | (pv -n > backup.tgz) 2>&1 | dialog --gauge "Progress" 10 70

What is the meaning of the 2>&1 after pv?
How does redirection to pv actually work?
Pipe Viewer has an option for this job: you can save the PID to a file with the -P option.

-P FILE, --pidfile FILE
    Save the process ID of pv in FILE. The file will be truncated if it already exists, and will be removed when pv exits. While pv is running, it will contain a single number - the process ID of pv - followed by a newline.
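A sketch of how that could look in your script (the pidfile path is an example; the pipeline is backgrounded here so the script can read the pidfile while it runs):

zfs send -Rc tank/test@snap | pv -fs datasize -P /tmp/pv.pid -F "%p***%t***%e***%r***%b" | mbuffer -q -s 128k -m 1G -O ip:port &
sleep 1                        # give pv a moment to create the pidfile
pv_pid=$(cat /tmp/pv.pid)
echo "pv is running as PID $pv_pid"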
I'm running bash scripts for zfs send jobs, and this is an example from my script:

zfs send -Rc tank/test@snap | pv -fs datasize -F "%p***%t***%e***%r***%b" | mbuffer -q -s 128k -m 1G -O ip:port

When I start the script I want to know pv's PID. I couldn't figure out how to get it.
How to get the pv PID when starting a bash script
“Do not cross filesystem boundaries” means “do not look inside mount points”. A boundary between filesystems is a mount point. Effectively, this means “only act on the specified partition”, except that not all filesystems are on a partition. See What mount points exist on a typical Linux system? When you make a backup, you should avoid a number of filesystems, in particular:

- remote filesystems (NFS, Samba, SSHFS, …);
- filesystems whose content is generated on the fly from runtime data (/proc, /sys, …);
- filesystems which are a view of another part of the directory tree (bindfs, avfs, …);
- in-memory filesystems containing data that is only valid until the next reboot (/tmp, /dev/shm, …);
- removable media which may or may not be present at any given time and aren't part of the system proper.

The filesystems to back up are only the ones that correspond to on-disk storage. Many systems have only a single such filesystem, mounted on the root directory /. You can tell which filesystems correspond to on-disk storage because their source (the first column, titled “Filesystem”, of the df output; the field before “on” in the mount output) is a disk volume (a PC-style partition such as /dev/sda1, an LVM logical volume such as /dev/mapper/mygroup-mylogicalvolume, …). There are a few subtleties that can make determining which filesystems to back up harder than just looking at the source to see if it's on-disk:

- Removable volumes shouldn't get backed up, even though they are on-disk.
- Linux allows the same filesystem (or parts thereof) to be mounted at multiple locations, with mount --bind; only one of them should be backed up.
- It can be difficult to enumerate all on-disk volumes: there are encrypted volumes, distributed storage volumes, etc.

On your system, the filesystems to back up are /dev/sda7 mounted on /, /dev/sda6 mounted on /boot and /dev/sda8 mounted on /home. So you should tell rsnapshot to back up those three directories. You should almost always use the -x option with rsnapshot.
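A quick sketch of listing only the on-disk filesystems by type (the type list is an example; adjust it to what your system actually uses):

df -t ext3 -t ext4
findmnt -t ext3,ext4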
I'm planning to use rsnapshot to back up my whole Linux system, though I'm confused by the -x option (the same as one_fs in rsnapshot.conf). The man page says:

-x    one filesystem, don't cross partitions within each backup point

I understand it's not a specific rsnapshot option, since rsync, cp, tar and other commands provide this feature as well. Does "file system boundaries" refer to different partitions? Different mount points? And what does it mean not to "cross" them? Back to my case: I read many people suggesting to use -x with rsnapshot, but I'm wondering whether doing so is going to compromise the completeness of my backup. I want to back up everything under /, including /boot and /home, which reside on dedicated partitions of the same disk, while at the same time I don't want to back up files and directories not strictly belonging to my system, like /mnt, /media, etc. Executing the mount command on my system gives the following output. Practically, using rsnapshot -x, what will be included and what will be left out?

/dev/sda7 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /tmp type tmpfs (rw,noexec,nosuid)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda6 on /boot type ext3 (rw)
/dev/sda8 on /home type ext4 (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
gvfs-fuse-daemon on /home/myuser/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=myuser)
Meaning of Crossing filesystem boundaries, --one-file-system, etc
There is a fourth field for the backup line, which can be used for such tasks. So your line should look as follows:

backup  tom@laptop:/home/tom/  laptop/  exclude=/home/tom/music

You can add more per-backup options by separating them with a comma. For further reading, consult the rsnapshot man page.
I've looked around a bit without finding an exact answer to my question, which is how to specify a directory to be excluded from only the remote filesystem backup. Say I've got two machines: a desktop (server) and a laptop. My home directory on each of them is /home/tom. rsnapshot lives on the desktop (localhost) with its config file(s). The backup commands, therefore, are:

backup  /home/tom/              localhost/
backup  tom@laptop:/home/tom/   laptop/

All well and good. But say I've got /home/tom/music on the laptop. It's stuff copied over from the desktop, and large. But when I go to exclude /home/tom/music/ from only the backup of tom@laptop:

exclude tom@laptop:/home/tom/music/

But this doesn't result in music/ being excluded, and causes my herpes to flare up. Doing this:

exclude /home/tom/music/

of course causes music/ to be excluded from both the localhost and laptop backups, and causes my PID to flare up. My solution for now is to simply have separate rsnapshot config files for each host, and execute rsnapshot once for each host. But this shouldn't be necessary. So how would I exclude a directory from only the remote (laptop) backup?
Specifying remote directories to be excluded from rsnapshot backup
First off, you should read up a little on rsync's include/exclude syntax. I get the feeling that what you want to do is better done using ** globs than * globs. (** expands to any number of entries, whereas * expands only to a single entry possibly matching multiple directory entries. The details are in man rsync under Include/Exclude Pattern Rules.) That said, if you want to be able to restore the system to a known working state from the backup with a minimum of hassle, you should be careful with excluding files or directories. I use rsnapshot myself and have actually taken the opposite approach: include everything except for a few carefully selected directories. So my rsnapshot.conf actually states (with tabs to make rsnapshot's configuration file parser happy):

interval  backup  NNN   # pick your poison
one_fs    0
exclude   /backup/**
exclude   /dev/**
exclude   /proc/**
exclude   /run/**
exclude   /sys/**
exclude   /tmp/**
backup    /       ./

and very little else. Yes, it means I might copy a bit more than what is strictly needed, but it ensures that anything not intended as ephemeral is copied. Because of rsnapshot using rsync's hardlink-to-deduplicate behavior, the only real cost to this is during the first run; after that, assuming you have a reasonably sized (compared to your total data set size) backup target location, it takes very little extra in either time or disk space. I exclude the contents of /backup because that's where I mount the backup target file system; not excluding it would lead to the situation of copying the backup into itself. However, for simplicity if I ever need to restore onto bare metal, I want to keep the mount point! In my case I also cannot reasonably use one_fs 1; I run ZFS with currently ~40 file systems. Listing all of those explicitly would be a maintenance nightmare and make working with ZFS file systems a lot more involved than it needs to be. Pretty much anything you want to exclude above and beyond the above is going to depend on the distribution, anyway, so it's virtually impossible to give a generic answer. That said, you're likely to find some candidates under /var.
I'm planning a backup strategy based on rsnapshot. I want to do a full system backup excluding files and directories that would be useless for the restore to have a working system again. I already excluded:

# System:
exclude /dev/*
exclude /proc/*
exclude /sys/*
exclude /tmp/*
exclude /run/*
exclude /mnt/*
exclude /media/*
exclude /lost+found

# Application:
exclude /*.pyc
exclude /*.pyo

I wonder which other entries I can add to the exclude list without compromising the restored system. Talking about a "generic" Linux system, can you suggest further glob extensions, temporary directories, caches, etc. that I can exclude safely?
Entries I can safely exclude doing backups
You will be running as root on server A, which runs rsnapshot, and ssh-ing to your dedicated user backupmaker on B. Normally, you will want this user to be able to sudo rsync, so that you can read all the files to send back to A. Assume, for example, you have a user on A who can sudo, and another user on B who can sudo. On B create user backupmaker and give it a password. On B create a sudoers entry for it to run rsync without a password, eg:

sudo tee /etc/sudoers.d/backupmaker <<<'backupmaker ALL = (root) NOPASSWD: /usr/bin/rsync'

(Beware when editing sudoers files. Always ensure you have a root login somewhere for recovery). On A, from your user account copy root's ssh keys to this new user:

sudo ssh-copy-id backupmaker@B

(If you don't have root keys set up yet, use sudo ssh-keygen -q -N '' on A to create them). On A, test that root can ssh to B without a password and sudo to rsync:

sudo ssh backupmaker@B sudo rsync --version

On A configure /etc/rsnapshot.conf, removing existing backup lines and adding at the end, for example:

verbose          3
cmd_ssh          /usr/bin/ssh
rsync_long_args  --rsync-path="sudo rsync" --delete --numeric-ids --relative --delete-excluded
backup           backupmaker@B:/home/    mybackupofB/

Beware, the 2 columns are separated by tabs, not spaces. The last line is an example saying we will ssh to backupmaker@B and copy /home back to A's /.snapshots/hourly.0/mybackupofB/. Note the rsync_long_args has an option --rsync-path="sudo rsync" which means the command run on B will not be rsync but sudo rsync. To start with, use a small directory to back up rather than all of /home. You may also want to change the default placement of backups on A from /.snapshots. You can now try a first snapshot on A:

sudo rsnapshot -vvv hourly

This will run on A the commands:

/usr/bin/ssh -l backupmaker B sudo rsync --server --sender -logDtprRe.iLsfx --numeric-ids . /home
/usr/bin/rsync -a --rsync-path='sudo rsync' --delete --numeric-ids --relative --delete-excluded --rsh=/usr/bin/ssh backupmaker@B:/home /.snapshots/hourly.0/mybackupofB/

and on B:

sh -c sudo rsync --server --sender -logDtprRe.iLsfx --numeric-ids . /home

Look in /var/log/rsnapshot for logs.
I have two Debian 8 servers:

Server A: at home, lots of storage
Server B: VPS at a commercial host, running web and mail services

Both are pet projects, not business stuff. Server B runs rsnapshot, which works fine. Servers A and B can SSH to each other passwordlessly with certificates; that also works fine. They do not allow root to SSH in directly, but they have regular user accounts that can sudo su to become root. For non-automated SSH sessions I use password-protected certs. The last couple of days I have been trying to set up rsnapshot on server B to create backups from server B to server A, or on server A to pull backups from server B to server A, which seems the proper way according to the rsnapshot documentation. My problem is that a lot of documentation mentions servers relatively, for example they say 'do x on your server' or 'copy y to ~/somepath'. Seldom did I find documentation that explicitly lays out which server has which functions and which user's home directory it is that you need to copy y to. So:

The production data that needs to be backed up is on server B.
The backups need to be saved to server A.
Server A is going to run rsnapshot.

Questions: In the rsnapshot config, which user account do I need to say is going to log in via SSH on server B? Either root (which gets either complicated or unsafe) or a dedicated regular user account, for example a user called 'backupmaker' (Debian has a system user called 'backup' which is eligible but I don't want to mess with). I have read both and I understand the Linux mantra that more ways can be fine, but I am really looking for some practical advice from someone who has this set up in a production environment, preferably with relevant lines from /etc/rsnapshot.conf and /home//.ssh/authorized_keys (do you really use from="a.b.c.d",command="/home/remoteuser/cron/validate-rsync", and is that 'validate-rsync' script mandatory or can it be any command, e.g. /home/serverAuser/myrsnapscript.sh?). Do you use the root account for creating backups on server B, a regular user account, a custom dedicated account or the built-in 'backup' account? I am not looking for sshfs or other alternatives; I want to do this right, maybe expand it to a hub backup system later on. Any insights and advice are welcome!
Proper way to set up rsnapshot over ssh
I don't have an rsnapshot setup to test this on. Be careful. Personally, I think that the best thing to do is to carefully evaluate the output of rsnapshot -t interval. However, if you want to actually move files, one way to do it might be to create an alternate config file that is identical to your real config file but with a different snapshot_root, such as:

snapshot_root /test/backup/path

And then you can run your test using

rsnapshot -c rsnapshot.test.conf interval0

where interval0 is your lowest-order interval.
I occasionally make changes to my rsnapshot.conf and I'm wondering if there's any way I can do a test run that is synced to a location other than the normal flow... something that's not an interval. Is this possible? How?
Can I do a "test run" with rsnapshot?
.gvfs directories are mount points (sometimes). You may want to use the one_fs option in your rsnapshot configuration (so that it passes --one-file-system to rsync).

Gvfs is a library-level filesystem implementation, implemented in libraries written by the Gnome project (in particular libgvfscommon). Applications linked with this library can use a filesystem API to access ftp, sftp, webdav, samba, etc. Gvfs is like FUSE in that it allows filesystems to be implemented in userland code. FUSE requires the one-time cooperation of the kernel (so it's only available on supported versions of supported OSes), but then can be used by any application since it plugs into the normal filesystem API. Gvfs can only be used through Gnome libraries, but doesn't need any special collaboration from the kernel so works on more operating systems. A quick experiment on Ubuntu 10.04 shows that while an application is accessing a Gvfs filesystem, ~/.gvfs is a mount point for a gvfs-fuse-daemon filesystem. This filesystem allows any application to access Gvfs filesystems, without needing to link to Gnome libraries. It is a FUSE filesystem whose implementation redirects the ordinary filesystem calls to Gvfs calls. The gvfs-fuse-daemon filesystem does not allow any access to the root user, only to the user running the application (it's up to each individual filesystem to manage the root user's permissions; a classic case where root doesn't have every power is NFS, where accesses from root are typically mapped to nobody).
I was running rsnapshot as root and I got the following error. Why would this happen? What is .gvfs?

rsnapshot weekly slave-iv
rsync: readlink_stat("/home/griff/.gvfs") failed: Permission denied (13)
IO error encountered -- skipping file deletion
rsync: readlink_stat("/home/xenoterracide/.gvfs") failed: Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1042) [sender=3.0.7]
root user denied access to .gvfs in rsnapshot?
When the COW table fills up, the snapshot becomes invalid and you start getting I/O errors on operations that use it. LVM2 allows you to check the size and usage of the COW, and to resize it if necessary.
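A sketch of that checking and resizing with the LVM tools (volume names are examples):

lvs vg0                          # the Data% column shows how full each snapshot's COW area is
lvextend -L +1G vg0/mysnapshot   # grow the snapshot's COW space before it fills up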
Consider the following scenario:

- I use Linux device mapper to create a snapshot of an ext4 file system.
- The snapshot is mounted as read-only; the source volume is mounted as read-write.
- I read the snapshot, and simultaneously write (too much) to the source volume.
- Eventually, the copy-on-write table fills up.

Now exactly what happens in practice from the user's perspective? What kind of messages should I expect to see in dmesg? How do the applications that read the snapshot behave? Has someone actually tried this to see what would happen?
Linux device-mapper & ext4: what happens when the COW table fills up?
If it is intended as a backup (I'm looking at the tag), not as a remote copy of a working directory, you should consider using tools like dar or good old tar. If some important file gets deleted and you don't notice it, you will have no chance to recover it after the weekly sync. A second advantage is that using tar/dar lets you preserve ownership of the files. And a third: you will save bandwidth, because you can compress the content.
Drive A is 2TB in a closet at home. Drive B is 2TB in my office at work. I'd like drive A to be the one I use regularly and to have rsync mirror A to B nightly/weekly. The problem I have with this is that multiple users have stuff on A. I have root run rsync -avz from A to $MYNAME:B. Root can certainly read everything on A, but doesn't have permission to write non-$MYNAME stuff on B. How am I supposed to be doing this? Should I have a passwordless private key on A that logs into root on B? That seems super dangerous. Also, I'd prefer to use rsnapshot, but it looks like they demand that I draw from B to A using the passwordless private key to root's account that I'm so frightened by.
How do I backup via rsync to a remote machine, preserving permissions and ownership?
Rsnapshot takes a snapshot every day, and every seven days the oldest daily snapshot becomes the new weekly snapshot. The other dailies are discarded. That's the basic idea: store a relatively low number of snapshots, but with high granularity for the recent days and decreasing granularity for older data. If I understand you correctly, you want to keep the state of every day without discarding any data. Then the solution is not to use yearlies, monthlies and weeklies, but to use e.g.

retain  daily  730

This stores the backups for two years, without discarding any data that is not older than 730 days.
I have a local rsnapshot server that takes snapshots of folders on miscellaneous computers within the local LAN. There are daily, weekly, monthly and yearly snapshots. Suppose somebody puts a file into one of those folders being monitored by rsnapshot, and some hours later the rsnapshot server takes its daily snapshot. After that the user deletes this file. Then the next day the system takes another snapshot. It seems like this file will get permanently deleted from the backup seven days (since I keep 7 dailies) after the last snapshot of it was taken. Are there any ways to control how long rsnapshot keeps files after they have been deleted? How have others dealt with this issue?
Control how long Rsnapshot keeps a file after being deleted
The default /etc/rsnapshot.conf configuration file contains the following:

# Specify the path to a script (and any optional arguments) to run right
# after rsnapshot syncs files
# cmd_postexec /path/to/postexec/script

You can use cmd_postexec to run a chgrp command on the resulting files whose group ownership needs changing.
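A sketch of a matching setup (paths and the group name follow the question, but treat them as examples). In rsnapshot.conf (fields separated by tabs):

cmd_postexec    /usr/local/bin/fix-backup-group.sh

and the script itself:

#!/bin/sh
# /usr/local/bin/fix-backup-group.sh - run by rsnapshot after each sync
chgrp -R backups /backups/daily.0/mysql
chmod -R g+rX /backups/daily.0/mysql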
I am using rsnapshot to make daily backups of a MySQL database on a server. Everything works perfectly except that the ownership of the directory is root:root. I would like it to be root:backups to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has sudo permissions but I don't want to have to type in the password every time I make a local copy of the backups. This user is part of the backups group.) In /etc/rsnapshot.conf I have this line:

backup_script   /usr/local/bin/backup_mysql.sh   mysql/

And in the file /usr/local/bin/backup_mysql.sh I have:

umask 0077
# backup the database
date=`date +"%y%m%d-%h%m%s"`
destination=$date'-data.sql.gz'
/usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction --quick --lock-tables=false --routines data | gzip -c > $destination
/bin/chmod 660 $destination
/bin/chown root:backups $destination

The file structure that results is:

/backups/
├── [drwxrwx---]  daily.0
│   └── [drwxrwx---]  mysql [error opening dir]
├── [drwxrwx---]  daily.1
│   └── [drwxrwx---]  mysql [error opening dir]

The ownership of the backup data file itself is correct, as root:backups, but I cannot access that file because the folder it is in, mysql, belongs to root:root.
Rsnapshot: folder ownership permissions to 'backups' group instead of root
The time it takes to encrypt is proportional to the size of the data, plus some constant overhead. You can't save time for the whole operation by splitting the data, except by taking advantage of multiple cores so that it takes the same CPU time overall (or very slightly more) but less wall-clock time. Splitting can of course be advantageous if you later want to access part of the data. GnuPG compresses data before encrypting it. If the data is already compressed, this won't do anything useful and may slow the process down a little. I recommend duplicity to make encrypted backups. It takes care of both collecting files and calling GPG and it knows how to do incremental backups. It splits the data into multiple volumes, so it can save wall-clock time by encrypting one volume while it's collecting files for the next one. The first time you back up 50GB is going to be slow regardless. If you have AES acceleration on your hardware, it helps (as long as you make sure that GPG is using AES — GnuPG used CAST-5 by default before version 2.1, but it uses your public key's preferences and that should default to AES even in GPG 1.4 or 2.0).
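For an archive that is already compressed, a sketch of skipping GnuPG's own compression and forcing AES (the recipient address is an example):

gpg --encrypt --recipient you@example.org --cipher-algo AES256 --compress-algo none backup.tar.gz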
I have the following problem. I currently need to store my backup on a cloud solution like Dropbox, since my local NAS is broken. That's why I have to encrypt my backup. I'm using rsnapshot to generate it. On the NAS I didn't encrypt it, so I'm not experienced with this. What I've done is zip the latest backup and simply encrypt it via gpg. However, it's still encrypting. My backup is around 50GB. I've never encrypted such a big file. Is there a way to encrypt such big files more efficiently, or what am I doing wrong?
how to efficiently encrypt backup via gpg
After the snapshot, you can use rsnapshot diff, which calls rsnapshot-diff to note the differences between two snapshots. It just compares inode numbers, so it is fairly efficient. Alternatively, before each backup create a file outside the backup tree to note the time (touch timestamp). Then before a new backup, create a new timestamp (touch timestamp.new) and test whether any files or directories have a newer time than the old timestamp:

find tree -newer timestamp

If not, do not do the backup. In either case, mv timestamp.new timestamp for the next time. This assumes you don't have applications that manipulate the file and directory timestamps.
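A sketch of the timestamp approach wrapped in a script (paths are examples; -quit assumes GNU find, and the timestamp file must be created by hand before the first run):

#!/bin/sh
stamp=/var/lib/backup/timestamp
touch "$stamp.new"
if [ -n "$(find /path/to/tree -newer "$stamp" -print -quit)" ]; then
    rsnapshot hourly
fi
mv "$stamp.new" "$stamp"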
I use rsnapshot to make regular backups of my systems' filesystems to a remote server. (For those familiar with rsync but less used to rsnapshot here is a brief introduction to its workings. A backup is a file-by-file copy of a source file-system tree, much like cp -a would produce. The "current" backup is always hourly.0, and the previous one is hourly.1. These names are rotated each time a backup begins. Under the covers rsnapshot uses rsync --link-dest to hardlink unchanged files in hourly.0 to the corresponding entries in the previous backup tree, hourly.1.) If a backup fails, the previous backup is copied (linked) using cp -al to the current backup, so that a backup always appears to have been made. What I would like is to avoid making a backup if there have been no changes since the previous backup. This could include a backup failure or simply that the source filesystem was not modified since the last backup. ("Making a backup" can be rephrased to "deleting an unnecessary backup" if you would prefer.) I've considered looking in the hourly.0 tree for files that are not hard-linked elsewhere, and if there is none then simply deleting the backup tree. This does not handle a file that is validly linked elsewhere within its backup, and it also fails to consider changes to directories. I have also considered using rsync --dry-run to compare the two backup trees and looking at its output but this feels somewhat ugly. Is there a better solution?
Backup with rsnapshot only if there are changes
Before I do the external backup, I create a definition file /root/folders_to_backup_external in each VM, and a cronjob in each VM that creates a hidden file .backupped_folder containing the current date in all the folders that are defined in rsnapshot, with:

# create hidden files with date to check on the external server
19 2 * * * root for f in $(cat /root/folders_to_backup_external); do date +"%m-%d-%y %T">"$f".backupped_folder; done

In the end I can check every day on the external server whether all those folders are up to date with:

for f in $(locate .backupped_folder); do echo -n "$f - "; cat "$f"; done
On my Xen host I first create an up-to-date snapshot of all VMs, and then I use rsnapshot to back up all my important folders daily. Secondly, I back up the same folders on an external server via rsync. How can I ensure all those folders are successfully backed up on the external server?
Ensure all rsnapshot folders from all VMs in a Xen host are successfully backed up via rsync
It could be hard work trying to rename remote daily.0 directories to keep in sync with the renaming done locally by rsnapshot. This might be needed to avoid an rsync of the entire snapshot directory from local to remote having to do a lot of work. It would be much simpler to have separate snapshots independently generated, locally and remotely. You will even gain some resilience if you separate them in time, so effectively doubling the snapshots. You do not need to copy the local files to the remote before doing a remote snapshot, as rsnapshot on the remote can fetch the files over the network, i.e. you can locally back up files that are remote. rsync is optimised to reduce network bandwidth by only transferring the minimal amount of data needed when a file changes, by calculating checksums of parts of the file locally and remotely.
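For example, a pull-style backup line in the remote NAS's rsnapshot.conf might look like this (fields are tab-separated; the host and paths are examples):

backup  backupuser@local-nas:/volume1/data/    local-nas/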
I am in the process of setting up a (proper) backup system which is built upon my NAS and rsnapshot. The NAS has two drives in case one dies, which in itself is not a backup, but I am also taking daily, weekly and monthly rsnapshots of the NAS, which are stored on an external HDD. I want to have an external off-site copy of this data, and am trying to work out the best way to do this. I know you can't do remote rsnapshot, but is it better to sync the rsnapshot directory, which contains the daily / weekly / monthly snapshots, or would it be better to do an rsync of all of the files I would back up on my local NAS to a remote NAS, and then rsnapshot the remote NAS directory? Apologies if this is convoluted; I am just trying to work out which would save more bandwidth: to rsnapshot the local NAS, save it on the external HDD and then rsync the external HDD to the remote NAS, or to rsync everything on the local NAS to the remote NAS, and then rsnapshot the remote NAS? My worry is that, given everything would likely change in the local rsnapshot directory, e.g. daily.0 becoming daily.1, would this mean the entire backup needs to be synced to the remote NAS, in which case rsyncing the original files would be better? Many thanks!
Remote Syncing RSnapshots
After some extended discussion it appears that the filesystem may be corrupted. As an example, rm -rf fails - as root - on a normal tree of files. After unmounting the filesystem, fsck identified it as NTFS. Frustratingly I have seen NTFS fail on other Linux-based platforms under the heavy loads incurred from rsnapshot. There's nothing sufficiently repeatable with which a bug can be filed, but a week's worth of rsnapshots can usually corrupt the filesystem. My recommendation is to replace the NTFS filesystem with something native to a Linux-based system, such as ext4. As an aside, if the backups must be accessed from a Windows platform, I have had good use from the Ext2FSD utility and driver for extN filesystems (also at sourceforge).
CRONTABS I'm using rsnapshot with cron. Here's what sudo crontab -l shows me. 0 */4 * * * /usr/bin/rsnapshot hourly 30 3 * * * /usr/bin/rsnapshot daily 0 3 * * 1 /usr/bin/rsnapshot weeklyOUTPUT I went to check on the backup folder to see if everything is working correctly, but here is the time sorted output: elijah@degas:~$ ls -lt /media/backup/ total 0 drwxrwxrwx 1 root root 0 May 30 04:00 hourly.1 drwxrwxrwx 1 root root 0 May 23 04:00 hourly.2 drwxrwxrwx 1 root root 0 May 17 04:00 hourly.3 drwxrwxrwx 1 root root 0 May 14 04:00 hourly.4 drwxrwxrwx 1 root root 0 May 13 04:00 hourly.5 drwxrwxrwx 1 root root 0 May 12 04:00 daily.0 drwxrwxrwx 1 root root 0 May 10 04:00 daily.1 drwxrwxrwx 1 root root 0 May 7 04:00 daily.2 drwxrwxrwx 1 root root 0 May 4 04:00 daily.3 drwxrwxrwx 1 root root 0 Apr 29 16:00 daily.4 drwxrwxrwx 1 root root 0 Apr 28 20:00 daily.5 drwxrwxrwx 1 root root 0 Apr 28 16:04 hourly.0 drwxrwxrwx 1 root root 0 Apr 28 12:21 daily.6 drwxrwxrwx 1 root root 0 Apr 27 10:09 weekly.1 drwxrwxrwx 1 root root 0 Apr 25 07:23 weekly.3The output seems almost random! Why could this be happening? I have what I thought was an identical configuration on a different machine, and it seems to be working fine. SYSLOG elijah@degas:~$ cat /var/log/syslog.1 | grep cron Jun 20 07:40:21 degas anacron[2795]: Job `cron.daily' terminated Jun 20 07:40:21 degas anacron[2795]: Normal exit (1 job run) Jun 20 08:17:01 degas CRON[3144]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 09:17:01 degas CRON[3228]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 10:17:01 degas CRON[4893]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 11:17:01 degas CRON[8737]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 12:17:01 degas CRON[10192]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 13:17:01 degas CRON[11870]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 14:17:01 degas CRON[12829]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 15:17:01 degas CRON[13614]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 15:54:28 degas crontab[14446]: (root) BEGIN EDIT (root) Jun 20 15:55:27 degas crontab[14446]: (root) END EDIT (root) Jun 20 15:55:29 degas crontab[14460]: (root) LIST (root) Jun 20 16:17:01 degas CRON[14770]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 16:44:04 degas crontab[14911]: (root) DELETE (root) Jun 20 16:44:07 degas crontab[14913]: (root) LIST (root) Jun 20 17:17:01 degas CRON[15713]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 18:17:01 degas CRON[15842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 19:17:01 degas CRON[15928]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 20:17:01 degas CRON[16023]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 21:17:01 degas CRON[16110]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 22:17:01 degas CRON[16212]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 20 23:17:01 degas CRON[16300]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 00:00:01 degas CRON[16372]: (root) CMD (invoke-rc.d atop _cron) Jun 21 00:17:01 degas CRON[16437]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 01:17:01 degas CRON[16525]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 02:17:01 degas CRON[16612]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 03:17:01 
degas CRON[16701]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 04:17:01 degas CRON[16798]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 05:17:01 degas CRON[16886]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 06:17:01 degas CRON[16974]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 06:25:01 degas CRON[16988]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )) Jun 21 07:17:01 degas CRON[17061]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 21 07:30:01 degas CRON[17083]: (root) CMD (start -q anacron || :) Jun 21 07:30:01 degas anacron[17086]: Anacron 2.3 started on 2016-06-21 Jun 21 07:30:01 degas anacron[17086]: Will run job `cron.daily' in 5 min. Jun 21 07:30:01 degas anacron[17086]: Jobs will be executed sequentially Jun 21 07:35:01 degas anacron[17086]: Job `cron.daily' started Jun 21 07:35:01 degas anacron[17099]: Updated timestamp for job `cron.daily' to 2016-06-21RSNAPSHOT TEST elijah@degas:~$ /usr/bin/rsnapshot -t hourly echo 23633 > /var/run/rsnapshot.pid /bin/rm -rf /media/backup/hourly.5/ mv /media/backup/hourly.4/ /media/backup/hourly.5/ mv /media/backup/hourly.3/ /media/backup/hourly.4/ mv /media/backup/hourly.2/ /media/backup/hourly.3/ mv /media/backup/hourly.1/ /media/backup/hourly.2/ /bin/cp -al /media/backup/hourly.0 /media/backup/hourly.1 /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \ --exclude=/var/ --exclude=/space/ --exclude=/nfs/ --exclude=/media/ \ --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ --exclude=/tmp/ \ --exclude=/cdrom/ --exclude=media/backup /. \ /media/backup/hourly.0/Backup touch /media/backup/hourly.0/
What the heck is going on with my cron scheduler? (rsnapshot)
Unless you set rsync_short_args or rsync_long_args in the configuration file for rsnapshot, you'll use rsync -a to perform the copy. The default action for rsync -a is to copy a symlink as a symlink, and to ignore the target of the symlink. (Obviously, if the target of the symlink is within your source file tree it will get copied, but that's because of its position, not because it's the target of an included symlink.) If you want to exclude the symlink itself, you just need to reference it in an exclude line:

exclude /foo/bar

I've not included the leading /home because that's the root of your source tree, and filters are always based on the root of the source.
I am using rsnapshot to manage my backups. The file /home/foo/bar is a symbolic link to a folder, and I want to exclude it. The --exclude option does not work, because if the pattern ends with a / then it will only match a directory, not a symlink. How could I do this? The rsync man page says:

if the pattern ends with a / then it will only match a directory, not a regular file, symlink, or device.

So what I'm asking is: how can I exclude from rsnapshot a symbolic link to a folder? If the link is /home/foo/bar, which of the following config lines is the right one?

exclude /home/foo/bar/
exclude /home/foo/bar
exclude /home/foo/bar**
exclude /home/foo/bar***

This is the config file that I'm using now:

config_version  1.2
snapshot_root   /media/satellite/rsnapshot/

cmd_cp          /bin/cp
cmd_rm          /bin/rm
cmd_rsync       /usr/bin/rsync
cmd_logger      /usr/bin/logger

retain  hourly  6
retain  daily   7
retain  weekly  4

verbose         2
loglevel        3

lockfile        /var/run/rsnapshot.pid

exclude /home/roberto/media/
exclude /home/roberto/media
exclude /home/roberto/media**
exclude /home/roberto/media***

backup  /home/          localhost/
backup  /etc/           localhost/
backup  /usr/local/     localhost/
backup  /var/www/       localhost/

where /home/roberto/media is a symlink to /media, which is a directory.
How to exclude symbolic link in rsync/rsnapshot
A perhaps not so simple way is to connect as root but to limit the key used to connect so that it can only run specific invocations of rsync; this requires a /root/.ssh/authorized_keys entry along the lines of

from="192.0.2.*",command="/root/limit-rsnap" ssh-rsa AAAAB3N...

which limits both where the backup is expected to originate from (this may not be ideal for all setups) and, more importantly, what gets run: every connection is forced through the /root/limit-rsnap script on the system being connected to, where only specific calls to rsync are allowed:

#!/bin/bash

shopt -s extglob

test -n "$SSH_ORIGINAL_COMMAND" || exit 1

case "$SSH_ORIGINAL_COMMAND" in
    'rsync --server --sender -'+([vnlHogDtprRxe.isfLS])' --numeric-ids . '*)
        RSYNCPATH="${SSH_ORIGINAL_COMMAND#rsync --server --sender -+([vnlHogDtprRxe.isfLS]) --numeric-ids . }"
        test -e "$RSYNCPATH" && exec $SSH_ORIGINAL_COMMAND || exit 1
        ;;
    *)
        exit 1
        ;;
esac
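For completeness, a rough sketch of the client side that would pair with this, with an illustrative key path and host: the rsnapshot host generates a dedicated key whose public half goes into the restricted authorized_keys entry above,

ssh-keygen -t ed25519 -f /root/.ssh/rsnapshot_key -N ''

and rsnapshot.conf then points at the remote root account (tab-separated fields):

ssh_args    -i /root/.ssh/rsnapshot_key
backup      root@192.0.2.10:/    remotehost/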
I am trying to setup ssh-key based login for a remote server to rsnapshot daily. The key I am using is a normal user key, but it obviously doesn't have root access, so when rsnapshot connects to the server with the user key, /root for example won't be backed up. What is the best way to setup rsnapshot in the simplest possible way? Should I just create a normal backup user, and add it to the wheel group and be done with it?
rsnapshot a remote server - best practice for permissions
Are you sure you have an abc_backups directory at the root of your filesystem? I really doubt it (and even if you did, it would not be good practice). Also, backup takes 2 arguments, not one as in your example: first the source (what you back up) and then the destination. Based on your description, change your backup line like this:

backup [emailprotected]:/var/www/abc.com/html/ website/

which will then back up the website on server 1.2.3.4 under /abc_backups/website/. If in doubt, you can always run rsnapshot with the -t flag to see what commands it would execute (without actually executing them).
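For example, assuming a retain/interval level named daily exists in your configuration (the name depends on your own retain lines), a dry run looks like this:

rsnapshot configtest
rsnapshot -t daily

configtest checks the syntax of /etc/rsnapshot.conf, and -t prints the mkdir/rsync commands a daily run would execute without running them.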
I'm trying to set up remote backup for my website. In /etc/rsnapshot.conf, I've set the following, but it's still not working:

snapshot_root /abc_backups/
backup [emailprotected]:/var/www/abc.com/html/

Can anyone help me out on how to set this? My website server is at 1.2.3.4, the source is /var/www/abc.com/html/ and the destination is /abc_backups/.
rsnapshot settings confusion
Take a look at this topic in the CentOS wiki, titled: rsnapshot Backups. It has examples that show how to backup using rsnapshot: excerpt from that page # crontab -e#MAILTO="" ##Supresses output MAILTO=me ################################################################### #minute (0-59), # #| hour (0-23), # #| | day of the month (1-31), # #| | | month of the year (1-12), # #| | | | day of the week (0-6 with 0=Sunday)# #| | | | | commands # ################################################################### 15 02 * * * /usr/bin/rsnapshot -c /etc/rsnapshot/laptop.rsnapshot.conf daily 15 03 * * Sun /usr/bin/rsnapshot -c /etc/rsnapshot/laptop.rsnapshot.conf weekly 30 03 1 * * /usr/bin/rsnapshot -c /etc/rsnapshot/laptop.rsnapshot.conf monthlyI don't think you can do this as non-root, especially if you're interacting with the LVM. I found numerous tickets regarding the lack of access to LVM tools for non-root users.https://bugzilla.redhat.com/show_bug.cgi?id=620571Given this the crontab entry will have to be one that's run by root.
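As a minimal sketch of a root-owned schedule (the times and interval names here are only an example, not taken from your setup), you could put the jobs in root's crontab:

# sudo crontab -e
30 2 * * *   /usr/bin/rsnapshot daily
30 3 * * 0   /usr/bin/rsnapshot weekly

Because the crontab belongs to root, the LVM snapshot step can open /dev/mapper/control without the permission error you saw.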
I'd like to cron a local backup to a locally mounted USB drive. I'm using rsnapshot and want it to back up an LVM snapshot onto the USB drive. But, unless I run the cron as root it complains that I can't make an LVM snapshot because I don't have permission to look at /dev/mapper/control. Am I missing something? This is on CentOS 6.4.
rsnapshot LVM without root
You've tested the configuration (-t) but you haven't yet run it. Here's what the man page (see man rsnapshot) says:

-t    test, show shell commands that would be executed

Use this to run the rsnapshot backup, optionally with -v to see what's going on:

rsnapshot alpha

Don't mix retain and interval; they mean the same thing and it can get confusing. Similarly, make sure they're in order top to bottom in a group, with the most frequent first.
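As a sketch, the levels block of the config from the question could then be reduced to just the retain lines, most frequent first (tab-separated), dropping the trailing interval hourly 6 line:

retain  alpha   6
retain  beta    7
retain  gamma   4

after which rsnapshot alpha (run by hand or from cron) should populate /var/backupsFromRsnapshot/alpha.0/.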
my goal is to backup a remote server. However, I first want to get just a local backup working, running on Ubuntu 20. For this, my /etc/rsnapshot.conf file is the following: config_version 1.2snapshot_root /var/backupsFromRsnapshot/cmd_rsync /usr/bin/rsync# The retain arguments define the number of snapshots to retain at different le> # I'm going to run cron job beta daily (so below will keep 7 daily snapshots), > retain alpha 6 retain beta 7 retain gamma 4# Below defines what folders I want included in the snapshots. backup /home/ localhost/ backup /etc/ localhost/ backup /var/ localhost/ backup /usr/local/ localhost/interval hourly 6If I run "rsnapshot configtest", I get the following result: SYNTAX OKThen I test the backup with the following command: rsnapshot -t alpha The result is as follows: mkdir -m 0700 -p /var/backupsFromRsnapshot/ mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/ /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \ /home/ /var/backupsFromRsnapshot/alpha.0/localhost/ mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/ /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc/ \ /var/backupsFromRsnapshot/alpha.0/localhost/ mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/ /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \ --filter=-/_/var/backupsFromRsnapshot /var/ \ /var/backupsFromRsnapshot/alpha.0/localhost/ mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/ /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \ /usr/local/ /var/backupsFromRsnapshot/alpha.0/localhost/ touch /var/backupsFromRsnapshot/alpha.0/ However, if I check my /var/ directory, there is no backupsFromRsnapshot folder, yet any backup file. Is my config correct? Is my test expression correct? Where is the fault? Thanks!
Problems getting Rsnapshot to work, even just for a local backup
Here's your relevant backup line

backup [emailprotected]:/ popbackup/

You're running the source backup as gisbi rather than as root, so it cannot open the problematic files listed as errors. I'd be inclined to run the source sender as root, and use --fake-super on the receiving side with a non-root account. This would go into the rsync_long_args value. Here are some of my typical working settings:

# Remember: command {TAB} arguments
#
rsync_short_args    -azHS
rsync_long_args     --delete --delete-excluded --numeric-ids --fake-super
…
backup    root@remoteHost:/    root/

When backing up with root as the remote user, you should use ssh public/private key authentication. (By default the ssh service disallows root logins by password. You can change this but it's really not recommended.) Check out ssh-keygen -t ed25519 and other references here on Unix&Linux, as well as the paper, Comparing SSH Keys - RSA, DSA, ECDSA, or EdDSA? by Kontsevoy.
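A rough sketch of the key setup this implies, with an illustrative key name (the public key ends up in /root/.ssh/authorized_keys on the machine being backed up, and that machine's sshd must allow key-only root logins, e.g. PermitRootLogin prohibit-password):

# on the backup server, as the user that runs rsnapshot
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rsnapshot -N ''
ssh-copy-id -i ~/.ssh/id_ed25519_rsnapshot.pub root@remoteHost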
EDIT: the solution to this problem is the marked solution underneath + enabling PermitRootLogin without-password in /etc/ssh/sshd_config I'm trying to backup my entire system to my local server, but I even though I'm running rsnapshot as sudo, I get permission errors in /var/, /etc/ and /usr/. Is there a way to fix this? If there isn't, what's my best option to backup my system to my local server? This is my rsnapshot.conf config_version 1.2########################### # SNAPSHOT ROOT DIRECTORY # ###########################snapshot_root /home/gisbi/backup/cmd_cp /bin/cpcmd_rm /bin/rmcmd_rsync /usr/bin/rsynccmd_ssh /usr/bin/sshcmd_logger /usr/bin/loggercmd_du /usr/bin/du######################################### # BACKUP LEVELS / INTERVALS # # Must be unique and in ascending order # # e.g. alpha, beta, gamma, etc. # ##########################################retain hourly 24 retain daily 7 retain weekly 4 retain monthly 12#logsverbose 5loglevel 4logfile /var/log/rsnapshot.loglockfile /var/run/rsnapshot.pidssh_args -p 22#exclusionsexclude /dev/* exclude /proc/* exclude /sys/* exclude /run/* exclude /var/tmp/* exclude /var/run/* exclude /tmp/* exclude /run/* exclude /mnt/* exclude /usr/portage/distfiles/* exclude /lost+found exclude /home/gisbi/Storage exclude /home/gisbi/.local/share/Trash/*#locationbackup [emailprotected]:/ popbackup/EDIT: errors look like this rsync: [sender] send_files failed to open "/usr/lib/cups/backend/cups-brf": Permission denied (13) rsync: [sender] send_files failed to open "/usr/lib/cups/backend/implicitclass": Permission denied (13)
Permission errors backing up entire system using rsnapshot over local server
There are a couple of problems here slowing the backup solution down.

1. You're using rsync to copy between two "local" filesystems. Just because one of them happens to be SMB is irrelevant to rsync. If the filesystem is mounted as part of the local system then rsync has to treat it as local. This means that any changed file has to be copied from the SMB network share in its entirety, not just the changed parts. If your fileserver can run rsync directly, modify the backup process so that it starts a remote rsync process and gains the benefit of incremental copies (see the sketch below).

2. You're writing to your backup disks via fuseblk. I assume this is because the disks have NTFS filesystems on them. If you can reformat them to use a native Linux filesystem such as ext4 you will see a significant increase in file IO speed, including the rm -rf that is taking so long. If you're writing to VFAT then you've also got the problem of reduced-quality timestamps, and you'll need to warn rsync accordingly so that it doesn't keep copying otherwise-identical files to your backup media.

I understand from comments that you are indeed using NTFS and you want to continue using that so the disks can be read under Windows. An alternative is to install an ext4 disk driver into Windows. I use ext2fs, which I find pretty solid.
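For the first point, a sketch of what a pull-over-ssh backup point might look like in rsnapshot.conf (user, host and path are made up for illustration; fields are tab-separated):

backup  backupuser@fileserver:/srv/share/Fld01/    fileserver/

With that, rsync runs on the file server and only the changed parts of files cross the network, instead of rsnapshot re-reading whole files from the SMB mount at /mnt/Backup/Fld01.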
For backing up data in my office I use a Raspberry Pi model B (I had a spare one) running rsnapshot. Basically, every night it copies data from a bunch of smb mounted folders to a couple of external hard drives (fuseblk). I gradually added data to back up and recently the whole process became really slow: it takes something like 15 hrs to perform the entire operation. This is the log of a copy (only on one disk): [07/Nov/2018:21:16:05] /usr/bin/rsnapshot -c /etc/rsnapshot.conf Daily: started [07/Nov/2018:21:16:05] echo 28378 > /var/run/rsnapshot.pid [07/Nov/2018:21:16:08] /bin/rm -rf /mnt/Disk1/Backup/Daily.4/ [07/Nov/2018:23:31:33] mv /mnt/Disk1/Backup/Daily.3/ /mnt/Disk1/Backup/Daily.4/ [07/Nov/2018:23:31:33] mv /mnt/Disk1/Backup/Daily.2/ /mnt/Disk1/Backup/Daily.3/ [07/Nov/2018:23:31:33] mv /mnt/Disk1/Backup/Daily.1/ /mnt/Disk1/Backup/Daily.2/ [07/Nov/2018:23:31:33] /bin/cp -al /mnt/Disk1/Backup/Daily.0 /mnt/Disk1/Backup/Daily.1 [08/Nov/2018:02:17:45] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld01 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:43:28] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld02 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:46:29] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld03 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:54:05] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld04 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:54:48] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld05 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:54:49] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld06 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:02:54:49] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld07 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:00:10] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld08 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:25:57] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld09 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:25:57] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld10 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:28:42] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld11 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:53:39] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld12 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:03:58:05] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld13 /mnt/Disk1/Backup/Daily.0/./ [08/Nov/2018:04:00:24] touch /mnt/Disk1/Backup/Daily.0/ [08/Nov/2018:04:00:24] rm -f /var/run/rsnapshot.pid [08/Nov/2018:04:00:24] /usr/bin/rsnapshot -c /etc/rsnapshot.conf Daily: completed successfullyNow, I know that the RPi is not fast, nor are the external drives. Still, the problems seem to be here [07/Nov/2018:21:16:08] /bin/rm -rf /mnt/Disk1/Backup/Dayly.4/and especially here [07/Nov/2018:23:31:33] /bin/cp -al /mnt/Disk1/Backup/Dayly.0 /mnt/Disk1/Backup/Daily.1Keep in mind that I have probably tens of thousands of files (I'm counting them as I write but I don't know how long it will take). (EDIT: there are 250k files in ~30 GB of space) Any idea on what could be the problem and if/how I could solve it? While I'm here, I have no clue on the --relative [...] --no-relative option on the rsync command. I honestly don't remember how I came to it, it's been some time since I configured it. 
Given that I need to save the tree, should I just use relative? Or is it ok this way, since it works? -=* UPDATE *=- I did as I was suggested and formatted as ext4 the usb drives. This is the log after the operation: [16/Nov/2018:21:16:04] /usr/bin/rsnapshot -c /etc/rsnapshot.conf Daily: started [16/Nov/2018:21:16:04] echo 19966 > /var/run/rsnapshot.pid [16/Nov/2018:21:16:04] /bin/rm -rf /mnt/Disk1/Backup/Daily.4/ [16/Nov/2018:21:18:52] mv /mnt/Disk1/Backup/Daily.3/ /mnt/Disk1/Backup/Daily.4/ [16/Nov/2018:21:18:52] mv /mnt/Disk1/Backup/Daily.2/ /mnt/Disk1/Backup/Daily.3/ [16/Nov/2018:21:18:52] mv /mnt/Disk1/Backup/Daily.1/ /mnt/Disk1/Backup/Daily.2/ [16/Nov/2018:21:18:52] /bin/cp -al /mnt/Disk1/Backup/Daily.0 /mnt/Disk1/Backup/Daily.1 [16/Nov/2018:21:22:25] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld01 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:19] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld02 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:27] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld03 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:41] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld04 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:44] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld05 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:44] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld06 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:24:45] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld07 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:25:04] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld08 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:26:04] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld09 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:26:04] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld10 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:26:20] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld11 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:26:58] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld12 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:28:54] /usr/bin/rsync -a --stats --relative --delete --no-relative /mnt/Backup/Fld13 /mnt/Disk1/Backup/Daily.0/./ [16/Nov/2018:21:30:03] touch /mnt/Disk1/Backup/Daily.0/ [16/Nov/2018:21:30:03] rm -f /var/run/rsnapshot.pid [16/Nov/2018:21:30:03] /usr/bin/rsnapshot -c /etc/rsnapshot.conf Daily: completed successfullyAs you can see the overall time was drastically reduced: 15 mins vs. ~7hrs. Thank you all, I'm honestly impressed. The only doubt I have left is about what was discussed in the comments: I believe that rsync does an incremental copy, even if it sees the smb source folders as local. Some of these folders contain 10k+ files (probably even more, I can't check in this very moment) and there's just no way that all of these are copied in just, say, 2 minutes.
rsnapshot very slow
Instead of trying to solve this through Samba, I've reset the Samba configuration to the default that the QNAP created (i.e. un-commented the commented-out lines; this also seems safer in the long run, since the Web GUI can potentially overwrite the tuned smb.conf file if new shares etc. are created by myself or other admins). I then changed the file system permissions to add the extended ACL for the MYDOM\Domain Users group with read r+x for the directories:

/share
/share/CACHEDEV1_DATA
/share/CACHEDEV1_DATA/homes

This way, when the files are backed up, the domain users can navigate all the way to the homes directory. However, since there is no default ACL inherited from the snapshots directory (/share/CACHEDEV1_DATA/Local Backups) and no changes to the users' home directories, only the original users can access their own home directories.

RSnapshot Changes

I thought that the extended ACLs were preserved. They were not; it only looked right because the home directories' standard ACLs were set up with a domain user and group. So the standard ACLs were preserved, but not the extended ones. To fix this, I edited the rsnapshot script and added the -A flag to rsync by changing:

my $default_rsync_short_args = '-a';

to

my $default_rsync_short_args = '-aA';

To fix access to the snapshot directories (i.e. hourly.0, etc.), I also added a permission change to the create_backup_point_dir function by adding, right at the bottom of the function:

system("setfacl -m g:MYDOM\\\\Domain\\ Users:rx \"$destpath\"");

It now works as expected and users can recover their own private files from backups. :) I'll try and roll this into a patch for rsnapshot once I've done some more testing.
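For the -aA half, the same effect can probably be had without patching the script, by overriding the default in rsnapshot.conf itself (tab-separated):

rsync_short_args    -aA

The setfacl on the freshly created hourly.N directories is the part that still needs the code change (or a wrapper/cmd_postexec script run after each snapshot, if your rsnapshot version supports it), since stock rsnapshot doesn't expose a hook at that exact point.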
The basic problems is that I have a Domain Connected QNAP and want to publish the RSnapshot snapshots via Samba so users can recover their own files from backups. (As per the original RSnapshot HowTo: http://rsnapshot.org/rsnapshot/docs/docbook/rest.html#restoring-backups) However unless I set a Default ACL (setfacl -m g:MYDOM\Domain\ Users:rx) that the new snapshots will inherit, I simply can't browse the content of the shared snapshots. RSnapshot Overview It creates hourly / daily / weekly / monthly snapshots and are preserving the standard and extended Linux ACLs correctly. The snapshots are stored in the following directory: /share/CACHEDEV1_DATA/Local BackupsTo prevent changes in permissions from occurring, I have cleared the default ACLs of that directory and simply set default permissions. The permissions are: # ls -al drwxrwxrwx 4 admin administ 4096 Nov 22 17:00 Local Backups/# getfacl Local\ Backups/ # file: Local Backups/ # owner: admin # group: administrators user::rwx user:admin:rwx user:guest:--- group::rwx group:MYDOM\domain\040users:r-x mask::rwx other::rwx default:user::rwx default:group::rwx default:mask::rwx default:other::rwxThis means that the default permissions of the snapshot sub-directories (hourly.0, hourly.1 etc) looks like: # cd hourly.0# ls -al drwxrwxrwx 3 admin administ 4096 Nov 22 16:02 ./# getfacl . # file: . # owner: admin # group: administrators user::rwx group::rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:mask::rwx default:other::rwxAt this point RSnapshot is fully tested and working as expected. (The permissions are pretty liberal to work out if the FS permissions or Samba is the problem.) Samba Overview I've created a Share through the WebGUI called LocalBackups, and reviewing the smb.conf file I would expect it to work without modifications. Though I can access the LocalBackups directory fine, every time I try to access on of the backups, i.e. hour.0, hourly.1 etc, I get the error message "You do not have permissions to access \192.168.1.20\LocalBackups\hourly.0. From the smb.conf, the [global] section is: [global] # Add this, apparently Windows 7 Bug. 
# acl allow execute always = yes log level = 3 passdb backend = smbpasswd workgroup = MYDOM security = ADS server string = encrypt passwords = Yes username level = 0 #map to guest = Bad User null passwords = yes max log size = 10 socket options = TCP_NODELAY SO_KEEPALIVE os level = 20 preferred master = no dns proxy = No smb passwd file=/etc/config/smbpasswd username map = /etc/config/smbusers guest account = guest directory mask = 0777 create mask = 0777 oplocks = yes locking = yes disable spoolss = no load printers = yes veto files = /.AppleDB/.AppleDouble/.AppleDesktop/:2eDS_Store/Network Trash Folder/Temporary Items/TheVolumeSettingsFolder/.@__thumb/.@__desc/:2e*/.@__qini/.Qsync/.@upload_cache/.qsync/.qsync_sn/.@qsys/.streams/.digest/ delete veto files = yes map archive = no map system = no map hidden = no map read only = no deadtime = 10 server role = auto use sendfile = yes unix extensions = no store dos attributes = yes client ntlmv2 auth = yes dos filetime resolution = no wide links = yes #force unknown acl user = yes force unknown acl user = yes template homedir = /share/homes/DOMAIN=%D/%U inherit acls = yes domain logons = no min receivefile size = 256 case sensitive = auto domain master = auto local master = no enhance acl v1 = yes remove everyone = yes conn log = no kernel oplocks = no max protocol = SMB2_10 smb2 leases = yes durable handles = yes kernel share modes = no posix locking = no lock directory = /share/CACHEDEV1_DATA/.samba/lock state directory = /share/CACHEDEV1_DATA/.samba/state cache directory = /share/CACHEDEV1_DATA/.samba/cache printcap cache time = 0 acl allow execute always = yes server signing = disabled aio read size = 1 aio write size = 0 streams_depot:delete_lost = yes streams_depot:check_valid = no fruit:nfs_aces = no fruit:veto_appledouble = no winbind expand groups = 1 pid directory = /var/lock printcap name = /etc/printcap printing = cups show add printer wizard = no realm = mydom.local ldap timeout = 5 password server = mydc001.mydom.local pam password change = yes winbind enum users = Yes winbind enum groups = Yes winbind cache time = 3600 idmap config * : backend = tdb idmap config * : range = 400001-500000 idmap config MYDOM : backend = rid idmap config MYDOM : range = 10000001-20000000 host msdfs = yes vfs objects = shadow_copy2 acl_xattr catia fruit qnap_macea streams_depot aio_pthreadThe [LocalBackups] section is: [LocalBackups] comment = path = /share/CACHEDEV1_DATA/Local Backups browsable = yes oplocks = yes ftp write only = no recycle bin = no recycle bin administrators only = no qbox = no public = yes #invalid users = "guest" #read list = @"MYDOM\Domain Users" #write list = "admin" #valid users = "root","admin",@"MYDOM\Domain Users" guest ok = yes read only = yes inherit permissions = no shadow:snapdir = /share/CACHEDEV1_DATA/_.share/LocalBackups/.snapshot shadow:basedir = /share/CACHEDEV1_DATA/Local Backups shadow:sort = desc shadow:format = @GMT-%Y.%m.%d-%H:%M:%S smb encrypt = disabled strict allocate = yes streams_depot:check_valid = yes mangled names = yes admin users = admin only = "admin" #nt acl support = noUsing this configuration, I can enter the LocalBackupds directory, but I can't enter any of the snapshot sub-directories, i.e. hourly.0, hourly.1 etc. The commented out lines is things I have tried to see if it makes a difference, but the behavior has been consistent with or without the commented out lines. If I change the ACL on one of the snapshot directories (i.e. 
hourly.0) to include the MYDOM\Domain Users, I am allowed to enter that directory (i.e. hourly.0) via Samba. The permissions of the directory is then: # cd hourly.0# ls -al drwxrwxrwx 3 admin administ 4096 Nov 22 18:00 ./# getfacl . # file: . # owner: admin # group: administrators user::rwx group::rwx group:MYDOM\domain\040users:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:mask::rwx default:other::rwxAt this point I have not been able to work out how to enable proper logging on the QNAP. From the basic WebUI logging information I can see the SMB connection request passing with my user name etc. I'm leaning towards the Samba configuration being more strict than the FS Permissions, but I'm guessing. At this stage I'm not sure if my knowlege of ACLs, Samba or both are failing me. Any ideas?
Unable to explore sub-directory in Samba share with Linux ACLs
If you have put this command in your cmd_ssh line, like this:

cmd_ssh /usr/bin/ssh -p 22 -i /home/thelemur/.ssh/id_rsa_n900

then you have unfortunately tickled an interesting almost-bug in rsnapshot. The problem is that the cmd_ssh parameter takes the entire value - including spaces - as the ssh alternative to run, whereas what you (and previously I) would want in this scenario is shell parsing of the option. What you need to do is either create a little script that contains the necessary ssh invocation and call that, or set up the ssh client configuration in $HOME/.ssh/config. The former is easier; just put the following into a script such as /home/thelemur/.ssh/ssh_with_id_rsa_n900.sh:

#!/bin/sh
exec ssh -p 22 -i /home/thelemur/.ssh/id_rsa_n900 "$@"

Then make it executable with chmod u+x /home/thelemur/.ssh/ssh_with_id_rsa_n900.sh, and finally use that in the rsnapshot configuration:

cmd_ssh /home/thelemur/.ssh/ssh_with_id_rsa_n900.sh
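The ssh-configuration route, for comparison, would look roughly like this in /home/thelemur/.ssh/config (the n900 alias and address are illustrative):

Host n900
    HostName 192.168.1.50    # replace with the phone's real address
    Port 22
    IdentityFile /home/thelemur/.ssh/id_rsa_n900

after which cmd_ssh can stay a plain /usr/bin/ssh and the rsnapshot backup line refers to the host as n900.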
** edit 8/6/15 * So the crux of my problem turned out not to be some quirkiness with the config file. In the end it turned out I simply had multiple ssh directories in 2 different places, and was using the wrong one. It's an embarrassing mistake to make, but live and learn, right? I'm trying to do a backup of my Nokia N900 (a linux box smart phone) with rsnapshot. For reasons I can't understand, rsnapshot throws up the following error: rsync: Failed to exec /usr/bin/ssh -p 22 -i /home/thelemur/.ssh/id_rsa_n900: \ No such file or directory (2)What's strange about this to me is that I can run the very same ssh command line from a bash terminal, and have no problem. I've tried playing with the backslashes, entering the rsnapshot command from root, and even placing a sudo directly in the rsnapshot config file. I've also checked my tab placement in the config file. Does anyone know what I've been doing wrong?
Having trouble with rsnapshot via ssh (from debian laptop) of Nokia n900
In smartctl -a <device> look for Self-test execution status. Example when no test is running: Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.Example while a test is running: Self-test execution status: ( 249) Self-test routine in progress... 90% of test remaining.When running selective self-test (-t select) there will also be a progress shown here: SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 125045423 Self_test_in_progress [90% left] (2881512-2947047)
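If you want to keep an eye on it from a shell, something along these lines (device name assumed) re-reads that section every minute:

watch -n 60 'smartctl -a /dev/sda | grep -A 1 "Self-test execution status"'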
I am testing a hard disk with SmartMonTools. Hard disk status prior to the testings (only one short test performed days ago): $ sudo smartctl -l selftest /dev/sda smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 5167 -So I start the long test: $ sudo smartctl -t long /dev/sda smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION === Sending command: "Execute SMART Extended self-test routine immediately in off-line mode". Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful. Testing has begun. Please wait 130 minutes for test to complete. Test will complete after Sat May 9 16:05:27 2015Use smartctl -X to abort test.The test is supposed to be running, then, but if I try to see its progress: $ sudo smartctl -l selftest /dev/sda smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 5167 -... all I get is the same results, like if there were no running/performing tests right now. The '-H' parameter gives no more info: $ sudo smartctl -H /dev/sda smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSEDAnd, as long as there is no process running (this test is performed by the hard disk controller alone), some ps -e style search should neither help. How can I know if there is some SMART self test running right now?
SmartMonTools: How can I know if there is any smartctl test running on my hard disk?
All these "poke the sector" answers are, quite frankly, insane. They risk (possibly hidden) filesystem corruption. If the data were already gone, because that disk stored the only copy, it'd be reasonable. But there is a perfectly good copy on the mirror. You just need to have mdraid scrub the mirror. It'll notice the bad sector, and rewrite it automatically. # echo 'check' > /sys/block/mdX/md/sync_action # use 'repair' instead for older kernelsYou need to put the right device in there (e.g., md0 instead of mdX). This will take a while, as it does the entire array by default. On a new enough kernel, you can write sector numbers to sync_min/sync_max first, to limit it to only a portion of the array. This is a safe operation. You can do it on all of your mdraid devices. In fact, you should do it on all your mdraid devices, regularly. Your distro likely ships with a cronjob to handle this, maybe you need to do something to enable it?Script for all RAID devices on the system A while back, I wrote this script to "repair" all RAID devices on the system. This was written for older kernel versions where only 'repair' would fix the bad sector; now just doing check is sufficient (repair still works fine on newer kernels, but it also re-copies/rebuilds parity, which isn't always what you want, especially on flash drives) #!/bin/bashsave="$(tput sc)"; clear="$(tput rc)$(tput el)"; for sync in /sys/block/md*/md/sync_action; do md="$(echo "$sync" | cut -d/ -f4)" cmpl="/sys/block/$md/md/sync_completed" # check current state and get it repairing. read current < "$sync" case "$current" in idle) echo 'repair' > "$sync" true ;; repair) echo "WARNING: $md already repairing" ;; check) echo "WARNING: $md checking, aborting check and starting repair" echo 'idle' > "$sync" echo 'repair' > "$sync" ;; *) echo "ERROR: $md in unknown state $current. ABORT." exit 1 ;; esac echo -n "Repair $md...$save" >&2 read current < "$sync" while [ "$current" != "idle" ]; do read stat < "$cmpl" echo -n "$clear $stat" >&2 sleep 1 read current < "$sync" done echo "$clear done." >&2; donefor dev in /dev/sd?; do echo "Starting offline data collection for $dev." smartctl -t offline "$dev" doneIf you want to do check instead of repair, then this (untested) first block should work: case "$current" in idle) echo 'check' > "$sync" true ;; repair|check) echo "NOTE: $md $current already in progress." ;; *) echo "ERROR: $md in unknown state $current. ABORT." exit 1 ;; esac
The tl;dr: how would I go about fixing a bad block on 1 disk in a RAID1 array? But please read this whole thing for what I've tried already and possible errors in my methods. I've tried to be as detailed as possible, and I'm really hoping for some feedback This is my situation: I have two 2TB disks (same model) set up in a RAID1 array managed by mdadm. About 6 months ago I noticed the first bad block when SMART reported it. Today I noticed more, and am now trying to fix it. This HOWTO page seems to be the one article everyone links to to fix bad blocks that SMART is reporting. It's a great page, full of info, however it is fairly outdated and doesn't address my particular setup. Here is how my config is different:Instead of one disk, I'm using two disks in a RAID1 array. One disk is reporting errors while the other is fine. The HOWTO is written with only one disk in mind, which bring up various questions such as 'do I use this command on the disk device or the RAID device'? I'm using GPT, which fdisk does not support. I've been using gdisk instead, and I'm hoping that it is giving me the same info that I needSo, lets get down to it. This is what I have done, however it doesn't seem to be working. Please feel free to double check my calculations and method for errors. The disk reporting errors is /dev/sda: # smartctl -l selftest /dev/sda smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.4.4-2-ARCH] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net=== START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed: read failure 90% 12169 3212761936With this, we gather that the error resides on LBA 3212761936. Following the HOWTO, I use gdisk to find the start sector to be used later in determining the block number (as I cannot use fdisk since it does not support GPT): # gdisk -l /dev/sda GPT fdisk (gdisk) version 0.8.5Partition table scan: MBR: protective BSD: not present APM: not present GPT: presentFound valid GPT with protective MBR; using GPT. Disk /dev/sda: 3907029168 sectors, 1.8 TiB Logical sector size: 512 bytes Disk identifier (GUID): CFB87C67-1993-4517-8301-76E16BBEA901 Partition table holds up to 128 entries First usable sector is 34, last usable sector is 3907029134 Partitions will be aligned on 2048-sector boundaries Total free space is 2014 sectors (1007.0 KiB)Number Start (sector) End (sector) Size Code Name 1 2048 3907029134 1.8 TiB FD00 Linux RAIDUsing tunefs I find the blocksize to be 4096. Using this info and the calculuation from the HOWTO, I conclude that the block in question is ((3212761936 - 2048) * 512) / 4096 = 401594986. The HOWTO then directs me to debugfs to see if the block is in use (I use the RAID device as it needs an EXT filesystem, this was one of the commands that confused me as I did not, at first, know if I should use /dev/sda or /dev/md0): # debugfs debugfs 1.42.4 (12-June-2012) debugfs: open /dev/md0 debugfs: testb 401594986 Block 401594986 not in useSo block 401594986 is empty space, I should be able to write over it without problems. Before writing to it, though, I try to make sure that it, indeed, cannot be read: # dd if=/dev/sda1 of=/dev/null bs=4096 count=1 seek=401594986 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000198887 s, 20.6 MB/sIf the block could not be read, I wouldn't expect this to work. However, it does. 
I repeat using /dev/sda, /dev/sda1, /dev/sdb, /dev/sdb1, /dev/md0, and +-5 to the block number to search around the bad block. It all works. I shrug my shoulders and go ahead and commit the write and sync (I use /dev/md0 because I figured modifying one disk and not the other might cause issues, this way both disks overwrite the bad block): # dd if=/dev/zero of=/dev/md0 bs=4096 count=1 seek=401594986 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000142366 s, 28.8 MB/s # sync I would expect that writing to the bad block would have the disks reassign the block to a good one, however running another SMART test shows differently: # 1 Short offline Completed: read failure 90% 12170 3212761936Back to square 1. So basically, how would I fix a bad block on 1 disk in a RAID1 array? I'm sure I've not done something correctly... Thanks for your time and patience.EDIT 1: I've tried to run an long SMART test, with the same LBA returning as bad (the only difference is it reports 30% remaining rather than 90%): SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed: read failure 30% 12180 3212761936 # 2 Short offline Completed: read failure 90% 12170 3212761936I've also used badblocks with the following output. The output is strange and seems to be miss-formatted, but I tried to test the numbers outputed as blocks but debugfs gives an error # badblocks -sv /dev/sda Checking blocks 0 to 1953514583 Checking for bad blocks (read-only test): 1606380968ne, 3:57:08 elapsed. (0/0/0 errors) 1606380969ne, 3:57:39 elapsed. (1/0/0 errors) 1606380970ne, 3:58:11 elapsed. (2/0/0 errors) 1606380971ne, 3:58:43 elapsed. (3/0/0 errors) done Pass completed, 4 bad blocks found. (4/0/0 errors) # debugfs debugfs 1.42.4 (12-June-2012) debugfs: open /dev/md0 debugfs: testb 1606380968 Illegal block number passed to ext2fs_test_block_bitmap #1606380968 for block bitmap for /dev/md0 Block 1606380968 not in useNot sure where to go from here. badblocks definitely found something, but I'm not sure what to do with the information presented...EDIT 2 More commands and info. I feel like an idiot forgetting to include this originally. This is SMART values for /dev/sda. I have 1 Current_Pending_Sector, and 0 Offline_Uncorrectable. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 166 2 Throughput_Performance 0x0026 055 055 000 Old_age Always - 18345 3 Spin_Up_Time 0x0023 084 068 025 Pre-fail Always - 5078 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 75 5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0 8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 12224 10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 75 181 Program_Fail_Cnt_Total 0x0022 100 100 000 Old_age Always - 1646911 191 G-Sense_Error_Rate 0x0022 100 100 000 Old_age Always - 12 192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0 194 Temperature_Celsius 0x0002 064 059 000 Old_age Always - 36 (Min/Max 22/41) 195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0 196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 1 198 Offline_Uncorrectable 0x0030 252 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 30 223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0 225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 77# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Thu May 5 06:30:21 2011 Raid Level : raid1 Array Size : 1953512383 (1863.01 GiB 2000.40 GB) Used Dev Size : 1953512383 (1863.01 GiB 2000.40 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Tue Jul 3 22:15:51 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : server:0 (local to host server) UUID : e7ebaefd:e05c9d6e:3b558391:9b131afb Events : 67889 Number Major Minor RaidDevice State 2 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1As per one of the answers: it would seem I did switch seek and skip for dd. I was using seek as that's what is used with the HOWTO. Using this command causes dd to hang: # dd if=/dev/sda1 of=/dev/null bs=4096 count=1 skip=401594986 Using blocks around that one (..84, ..85, ..87, ..88) seems to work just fine, and using /dev/sdb1 with block 401594986 reads just fine as well (as expected as that disk passed SMART testing). Now, the question that I have is: When writing over this area to reassign the blocks, do I use /dev/sda1 or /dev/md0? I don't want to cause any issues with the RAID array by writing directly to one disk and not having the other disk update. EDIT 3 Writing to the block directly produced filesystem errors. I've chosen an answer that solved the problem quickly: # 1 Short offline Completed without error 00% 14211 - # 2 Extended offline Completed: read failure 30% 12244 3212761936Thanks to everyone who helped. =)
Linux - Repairing bad blocks on a RAID1 array with GPT
First, keep in mind that SMART saying that your drive is healthy doesn't necessarily mean that the drive is healthy. SMART reports are an aid, not an absolute truth. If all you are interested in is what to do, rather than why, then feel free to scroll down to the last few paragraphs; however, the interim text will tell you why I think what I propose is the correct course of action, and how to derive that from what you posted. With that said, let's look at what one of those errors are telling us. [ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen [ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error [ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547846] ata3.00: status: { DRDY } [ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED(I hope I got the parts that should go together, but you were getting a bundle of those so it should be okay either way.) The Linux ata Wiki has a page explaining how to read these errors. Particularly,A status value of DRDY means "Device ready. Normally 1, when all is OK." Seeing a status value of DRDY is perfectly normal and expected. SError has multiple component values, of which you are seeing (in this particular snippet):UnrecovData "Data integrity error occurred, interface did not recover" HostInt "Host bus adapter internal error" 10B8B "10b to 8b decoding error occurred" BadCRC "Link layer CRC error occurred"10b8b coding, which encodes 8 bits as 10 bits to aid with both signal synchronization and error detection, is used on the physical cabling, not necessarily on the drive itself. The drive most likely uses other forms of FEC or ECC coding, and an error there would normally show up as some form of I/O error, likely with an error value of UNC ("uncorrectable error - often due to bad sectors on the disk"), likely with "media error" ("software detected a media error") in parenthesis at the end of the res line. This latter is not what you are seeing, so while we can't completely rule it out, it seems unlikely. The "link layer" is the physical cables and circuit board traces between the drive's own controller, and the disk drive interface chip (likely part of the southbridge on your computer's motherboard, but could be located at an offboard HBA). A host bus adapter, also known as a HBA, is the circuitry that connects to storage equipment. Also colloquially known as a "disk controller", a term which is a bit of a misnomer with modern systems. The most visible part of the HBA is generally the connection ports, most often these days either SATA or some SAS form factor. The UnrecovData and HostInt flags basically tell us that "something just went horribly wrong, and there was no way to recover or no attempt at recovery was made". The opposite would likely be RecovData, which indicates that a "data integrity error occurred, but the interface recovered". (As an aside, I probably would have used HBAInt instead of HostInt, as the "host" refers to the HBA, not the whole system.) The combination of 10B8B and BadCRC, which both point to the physical link layer, makes me suspect a cabling issue. 
This suspicion is also supported by the fact that the SMART self-tests, which are completely internal to the drive except for status reporting, are finding no errors that the manufacturer feels are serious enough to warrant reporting in the results. If the drive was having problems storing or reading data, the long SMART self-test in particular should have reported that. TL;DR: The first thing I would do is thus simply to unplug and re-plug the SATA cable at both ends; it may be slightly loose, causing it to lose electrical contact intermittently. See if that resolves the problem. It might even be worth doing this to all SATA cabling in your computer, not just the affected disk. If you are using an off-board HBA, I would also remove and re-seat that card, mainly because it's an easy thing to try while you are already messing around with the cabling. Failing that, try throwing away and replacing the SATA cable, preferably with a high-quality cable. A high-quality cable will be slightly more expensive, but I find that it's usually well worth the small extra expense if it helps avoid headaches like this. Nobody likes seeing their storage spewing errors!
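One cheap way to confirm (or rule out) the cabling theory after re-seating or replacing things is to watch the link-level CRC counter, SMART attribute 199 (UDMA_CRC_Error_Count), which most SATA drives report (device name assumed):

smartctl -A /dev/sda | grep -i crc

If that raw value stops climbing once the cable has been dealt with, the link layer was indeed the culprit; if it keeps rising, look harder at the HBA or the drive's connector.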
Sometimes I have strange troubles booting my computer (which runs Debian). So I issued "dmesg" command. In its output I saw a lot of errors. However, when I run extended SMART test on hard disks (using "smartctl -t long /dev/sda" command), the result is that my disks are not broken. What can be the reason of those errors? Here are the errors: (...) [ 505.918537] ata3.00: exception Emask 0x50 SAct 0x400 SErr 0x280900 action 0x6 frozen [ 505.918549] ata3.00: irq_stat 0x08000000, interface fatal error [ 505.918558] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 505.918566] ata3.00: failed command: READ FPDMA QUEUED [ 505.918579] ata3.00: cmd 60/40:50:20:5b:60/00:00:0b:00:00/40 tag 10 ncq 32768 in res 40/00:54:20:5b:60/00:00:0b:00:00/40 Emask 0x50 (ATA bus error) [ 505.918586] ata3.00: status: { DRDY } [ 505.918595] ata3: hard resetting link [ 506.410055] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 506.422648] ata3.00: configured for UDMA/133 [ 506.422679] ata3: EH complete [ 1633.123880] md: bind<sdb3> [ 1633.187966] RAID1 conf printout: [ 1633.187977] --- wd:1 rd:2 [ 1633.187984] disk 0, wo:0, o:1, dev:sda3 [ 1633.187989] disk 1, wo:1, o:1, dev:sdb3 [ 1633.188866] md: recovery of RAID array md0 [ 1633.188871] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 1633.188875] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery. [ 1633.188890] md: using 128k window, over a total of 1943618560k. [ 1634.167341] ata3.00: exception Emask 0x50 SAct 0x7f80 SErr 0x280900 action 0x6 frozen [ 1634.167353] ata3.00: irq_stat 0x08000000, interface fatal error [ 1634.167361] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1634.167369] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167382] ata3.00: cmd 60/00:38:00:00:6f/02:00:01:00:00/40 tag 7 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167389] ata3.00: status: { DRDY } [ 1634.167395] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167407] ata3.00: cmd 60/00:40:00:02:6f/02:00:01:00:00/40 tag 8 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167413] ata3.00: status: { DRDY } [ 1634.167418] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167429] ata3.00: cmd 60/00:48:00:04:6f/02:00:01:00:00/40 tag 9 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167435] ata3.00: status: { DRDY } [ 1634.167439] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167451] ata3.00: cmd 60/00:50:00:06:6f/02:00:01:00:00/40 tag 10 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167457] ata3.00: status: { DRDY } [ 1634.167462] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167473] ata3.00: cmd 60/00:58:00:08:6f/02:00:01:00:00/40 tag 11 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167479] ata3.00: status: { DRDY } [ 1634.167484] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167495] ata3.00: cmd 60/00:60:00:0a:6f/02:00:01:00:00/40 tag 12 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167500] ata3.00: status: { DRDY } [ 1634.167505] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167516] ata3.00: cmd 60/80:68:00:0c:6f/00:00:01:00:00/40 tag 13 ncq 65536 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167522] ata3.00: status: { DRDY } [ 1634.167527] ata3.00: failed command: READ FPDMA QUEUED [ 1634.167538] ata3.00: cmd 
60/00:70:80:0c:6f/02:00:01:00:00/40 tag 14 ncq 262144 in res 40/00:6c:00:0c:6f/00:00:01:00:00/40 Emask 0x50 (ATA bus error) [ 1634.167544] ata3.00: status: { DRDY } [ 1634.167553] ata3: hard resetting link [ 1634.658816] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1634.672645] ata3.00: configured for UDMA/133 [ 1634.672696] ata3: EH complete [ 1637.687898] ata3.00: exception Emask 0x50 SAct 0x3ff000 SErr 0x280900 action 0x6 frozen [ 1637.687910] ata3.00: irq_stat 0x08000000, interface fatal error [ 1637.687918] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1637.687926] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687940] ata3.00: cmd 60/00:60:80:a7:af/02:00:02:00:00/40 tag 12 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687947] ata3.00: status: { DRDY } [ 1637.687953] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687965] ata3.00: cmd 60/00:68:80:a9:af/02:00:02:00:00/40 tag 13 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687971] ata3.00: status: { DRDY } [ 1637.687976] ata3.00: failed command: READ FPDMA QUEUED [ 1637.687987] ata3.00: cmd 60/80:70:80:ab:af/01:00:02:00:00/40 tag 14 ncq 196608 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.687993] ata3.00: status: { DRDY } [ 1637.687998] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688009] ata3.00: cmd 60/00:78:00:ad:af/02:00:02:00:00/40 tag 15 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688015] ata3.00: status: { DRDY } [ 1637.688020] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688031] ata3.00: cmd 60/80:80:00:af:af/00:00:02:00:00/40 tag 16 ncq 65536 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688037] ata3.00: status: { DRDY } [ 1637.688042] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688053] ata3.00: cmd 60/00:88:80:af:af/01:00:02:00:00/40 tag 17 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688059] ata3.00: status: { DRDY } [ 1637.688064] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688075] ata3.00: cmd 60/80:90:80:b0:af/00:00:02:00:00/40 tag 18 ncq 65536 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688081] ata3.00: status: { DRDY } [ 1637.688085] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688096] ata3.00: cmd 60/00:98:00:b1:af/02:00:02:00:00/40 tag 19 ncq 262144 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688102] ata3.00: status: { DRDY } [ 1637.688107] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688118] ata3.00: cmd 60/00:a0:00:b3:af/01:00:02:00:00/40 tag 20 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688124] ata3.00: status: { DRDY } [ 1637.688129] ata3.00: failed command: READ FPDMA QUEUED [ 1637.688140] ata3.00: cmd 60/00:a8:00:b4:af/01:00:02:00:00/40 tag 21 ncq 131072 in res 40/00:ac:00:b4:af/00:00:02:00:00/40 Emask 0x50 (ATA bus error) [ 1637.688146] ata3.00: status: { DRDY } [ 1637.688154] ata3: hard resetting link [ 1638.179398] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1638.192977] ata3.00: configured for UDMA/133 [ 1638.193029] ata3: EH complete [ 1640.259492] md: export_rdev(sdb1) [ 1640.326109] md: bind<sdb1> [ 1640.346712] RAID1 conf printout: [ 1640.346724] --- wd:1 rd:2 [ 1640.346731] disk 0, wo:0, o:1, dev:sda1 [ 1640.346736] disk 1, wo:1, o:1, dev:sdb1 [ 1640.346893] md: delaying recovery of md1 until md0 
has finished (they share one or more physical units) [ 1657.987964] ata3.00: exception Emask 0x50 SAct 0x40000 SErr 0x280900 action 0x6 frozen [ 1657.987975] ata3.00: irq_stat 0x08000000, interface fatal error [ 1657.987984] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1657.987992] ata3.00: failed command: READ FPDMA QUEUED [ 1657.988006] ata3.00: cmd 60/00:90:00:30:2e/03:00:09:00:00/40 tag 18 ncq 393216 in res 40/00:94:00:30:2e/00:00:09:00:00/40 Emask 0x50 (ATA bus error) [ 1657.988013] ata3.00: status: { DRDY } [ 1657.988022] ata3: hard resetting link [ 1658.479548] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1658.493107] ata3.00: configured for UDMA/133 [ 1658.493147] ata3: EH complete [ 1670.547791] ata3: limiting SATA link speed to 1.5 Gbps [ 1670.547805] ata3.00: exception Emask 0x50 SAct 0x7f SErr 0x280900 action 0x6 frozen [ 1670.547812] ata3.00: irq_stat 0x08000000, interface fatal error [ 1670.547820] ata3: SError: { UnrecovData HostInt 10B8B BadCRC } [ 1670.547826] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547839] ata3.00: cmd 60/80:00:00:1f:2e/01:00:0c:00:00/40 tag 0 ncq 196608 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547846] ata3.00: status: { DRDY } [ 1670.547852] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547863] ata3.00: cmd 60/80:08:80:20:2e/00:00:0c:00:00/40 tag 1 ncq 65536 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547869] ata3.00: status: { DRDY } [ 1670.547875] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547886] ata3.00: cmd 60/00:10:00:21:2e/02:00:0c:00:00/40 tag 2 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547892] ata3.00: status: { DRDY } [ 1670.547896] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547907] ata3.00: cmd 60/00:18:00:23:2e/02:00:0c:00:00/40 tag 3 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547913] ata3.00: status: { DRDY } [ 1670.547918] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547929] ata3.00: cmd 60/00:20:00:25:2e/01:00:0c:00:00/40 tag 4 ncq 131072 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547935] ata3.00: status: { DRDY } [ 1670.547940] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547951] ata3.00: cmd 60/00:28:00:26:2e/02:00:0c:00:00/40 tag 5 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547957] ata3.00: status: { DRDY } [ 1670.547961] ata3.00: failed command: READ FPDMA QUEUED [ 1670.547972] ata3.00: cmd 60/00:30:00:28:2e/02:00:0c:00:00/40 tag 6 ncq 262144 in res 40/00:2c:00:26:2e/00:00:0c:00:00/40 Emask 0x50 (ATA bus error) [ 1670.547978] ata3.00: status: { DRDY } [ 1670.547987] ata3: hard resetting link [ 1671.039264] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310) [ 1671.053386] ata3.00: configured for UDMA/133 [ 1671.053444] ata3: EH complete [ 2422.512002] md: md0: recovery done. [ 2422.547344] md: recovery of RAID array md1 [ 2422.547355] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 2422.547360] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery. [ 2422.547378] md: using 128k window, over a total of 4877312k. [ 2422.668465] RAID1 conf printout: [ 2422.668474] --- wd:2 rd:2 [ 2422.668480] disk 0, wo:0, o:1, dev:sda3 [ 2422.668486] disk 1, wo:0, o:1, dev:sdb3 [ 2469.990451] md: md1: recovery done. 
[ 2470.049986] RAID1 conf printout: [ 2470.049997] --- wd:2 rd:2 [ 2470.050003] disk 0, wo:0, o:1, dev:sda1 [ 2470.050009] disk 1, wo:0, o:1, dev:sdb1 [ 3304.445149] PM: Hibernation mode set to 'platform' [ 3304.782375] PM: Syncing filesystems ... done. [ 3307.028591] Freezing user space processes ... (elapsed 0.001 seconds) done. (...)
according to SMART hard disk is not broken, but I have errors in dmesg
You need to comment out the DEVICESCAN line, and put in lines for individual devices. Mine, for example, looks like this:

/dev/sda -d removable -n standby,8 -S on -o on -a \
   -m root -M exec /usr/share/smartmontools/smartd-runner \
   -r 194 -R 5 -R 183 -R 187 -s L/../../6/01
/dev/sdb -d removable -n standby,8 -S on -o on -a \
   -m root -M exec /usr/share/smartmontools/smartd-runner \
   -r 194 -R 5 -R 183 -R 187 -s L/../../6/06
/dev/sdc -d removable -n standby,8 -S on -o on -a \
   -m root -M exec /usr/share/smartmontools/smartd-runner \
   -r 194 -R 5 -R 183 -R 187 -s L/../../7/01
/dev/sdd -d removable -n standby,8 -S on -o on -a \
   -m root -M exec /usr/share/smartmontools/smartd-runner \
   -r 194 -R 5 -R 183 -R 187 -s L/../../7/06
/dev/sde -d removable -n standby,8 -S on -o on -a \
   -m root -M exec /usr/share/smartmontools/smartd-runner \
   -r 194 -R 5 -R 183 -R 187 -s L/../../6/01

You can refer to individual devices in any convenient way; for example, instead of /dev/sda I could use /dev/disk/by-id/wwn-0x5000c5001fc90b93, which will track that same disk no matter how it's connected.
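If you need stable names, listing the persistent symlinks is usually all it takes (the output is machine-specific, of course):

ls -l /dev/disk/by-id/

Any of the ata-* or wwn-* entries shown there can be used in place of /dev/sda in the lines above, and will follow the physical disk across reconnections.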
I have an external HDD which does not report SMART information properly (it gives nonsense results). As such, the smartd daemon (part of smartmontools) keeps giving false alarms on how the device might be failing. In /etc/smartmontools/smartd.conf (I'm using the default, here) I see a bunch of options but none that relate to my need (ignoring the alarms for a specific hard drive - I would like to be able to refer to it e.g. by USB ID, since the entry in /dev will vary if I have more devices connected). I could edit /usr/libexec/smartmontools/smartdnotify (the script that smartd calls when an event happens) and manually force it to shut up about that specific device, but I'd like to know if there's a less ugly way to do that. How to get smartd to not report any warnings for a specific HDD? I would not like to disable the daemon; I would like it to just not care about this specific HDD.
How to get smartd to ignore an HDD?
# DEVICESCAN For all disks with SMART capabilities. # # -o off Turn off automatic running of offline tests. An offline test # is a test which may degrade performance. # # -n standby Do not spin up the disk for the periodic 30 minute (default) # SMART status polling, instead wait until the disk is active # again and poll it then. # # -W 2 Report temperature changes of at least 2 degrees celsius since # the last reading. Also report if a new min/max temperature is # detected. # # -S on Auto save attributes such as how long the disk has been powered # on, min and max disk temperature. # # -s (L/../.[02468]/1/04|S/../.[13579]/1/04) # '-------a--------' '--------b-------' # # a: Long test on even monday mornings at 04:00 # b: Short test on uneven monday mornings at 04:00DEVICESCAN -o off -n standby -W 2 -S on -s (L/../.[02468]/1/04|S/../.[13579]/1/04)
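If you edit that file, it is worth verifying that smartd actually accepts the syntax before relying on the daemon; a minimal sketch:
# Parse /etc/smartd.conf, register the devices, run one check cycle in the foreground, then exit
smartd -q onecheck
# Or stay in the foreground with debug output to watch what the daemon does with each device
smartd -d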
I have a server with three identical SATA/600 3TB drives: /dev/sda, /dev/sdb, /dev/sdc. The drives are partitioned, using GPT with three partitions each:1 MB: Reserved partition for boot loader 1 GB: RAID1 /dev/md0 ( ext2 ( /boot ) ) 3 TB: RAID1 /dev/md1 ( encrypted volume ( LVM ( volume group ( Swap, /, /etc, /home ... ) ) ) )One of the three drives is a hot spare and the other two are active in the RAID sets. It works fine and I am able to boot after disconnecting any single HDD. I want to use smartd (part of smartmontools) to monitor the health of the drives and report errors to syslog (which I monitor using logcheck). This server should have as high availability as possible, but it is acceptable that performance is lowered during tests. Here is the output of smartctl -a /dev/sda: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net=== START OF INFORMATION SECTION === Device Model: WDC WD30EZRX-00MMMB0 Serial Number: WD-WMAWZ0412093 LU WWN Device Id: 5 0014ee 2b19fbdcd Firmware Version: 80.00A80 User Capacity: 3,000,592,982,016 bytes [3.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Fri Sep 27 15:37:25 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled=== START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSEDGeneral SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: (50280) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 255) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3035) SCT Status supported. SCT Feature Control supported. 
SCT Data Table supported.SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 148 148 021 Pre-fail Always - 9575 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 95 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 820 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 93 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 65 193 Load_Cycle_Count 0x0032 196 196 000 Old_age Always - 12824 194 Temperature_Celsius 0x0022 119 116 000 Old_age Always - 33 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0SMART Error Log Version: 1 No Errors LoggedSMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 787 - # 2 Extended offline Completed without error 00% 727 -SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.It seems offline testing is supported. When I issue smartctl -o on, smartctl -c shows that Offline data collection status has been set to (0x82). If I issue smartctl -o off the same value becomes (0x02). I have set up smartd to start up with the server by setting start_smartd=yes in /etc/default/smartmontools. How would you recommend that I configure smartd by editing /etc/smartd.conf for this server? Please describe each parameter you use and why you use it the way you do. I will add my current set up as an answer. Feel free to use it as a base and improve it in your own answer. A better description using the same set up would be an improvement too!
Monitor disk health using smartd (in smartmontools) on a high availability software RAID 1 server
There are already tools which can do this, often as part of a more general monitoring tool. One I find useful is Munin, which has a SMART plugin to trace the available attributes:Munin is available in many distributions. smartmontools itself contains a tool which can log attributes periodically, smartd. You might find that that’s all you need.
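If Munin feels too heavy and you just want numbers you can graph or import later, a cron-driven sketch along these lines would do (the file paths and the chosen attribute IDs are only examples):
#!/bin/sh
# /etc/cron.daily/smart-history (example path): append one CSV line per disk per day
for dev in /dev/sd?; do
    serial=$(smartctl -i "$dev" | awk -F': *' '/Serial Number/ {print $2}')
    temp=$(smartctl -A "$dev" | awk '$1 == 194 {print $10}')      # Temperature_Celsius raw value
    realloc=$(smartctl -A "$dev" | awk '$1 == 5 {print $10}')     # Reallocated_Sector_Ct raw value
    echo "$(date +%F),$dev,$serial,$temp,$realloc" >> /var/log/smart-history.csv
done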
I'd like to start storing the SMART data over time and see any trends based on disk ID/serial number: something that would let me, for example, just get the SMART information from disks once a day and put it in a database. Is there already a tool for this in Linux, or do I have to roll my own?
Are there any tools available to store SMART data over time?
Have you checked this question on askubuntu? https://askubuntu.com/questions/207573/how-to-enable-smart If this fails, it could be that your USB enclosure doesn't support SMART; I experienced this with one enclosure of mine. In that case you would need to connect the drive directly via SATA or use a different enclosure to retrieve SMART data from the device.
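Before giving up on the enclosure, it is also worth telling smartctl which USB bridge protocol to try, since many enclosures only answer SCSI-to-ATA Translation (SAT) pass-through rather than plain SCSI; a sketch, with /dev/sdb standing in for the external drive:
# Try the SAT pass-through first
smartctl -d sat -a /dev/sdb
# Some bridges only accept the 12-byte SAT variant
smartctl -d sat,12 -a /dev/sdb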
I use my recently bought 1T Seagate Backup Plus Slim external hard disk ID 0bc2:ab24 Seagate RSS LLC (NTFS filesystem) as a backup tool. I want to run the Smartmontools software on this disk, but when I tried to enable it using smartctl -s on -d scsi /dev/sdb (as a root)I got the following response: smartctl 6.6 2016-05-31 r4324 [i686-linux-4.15.0-23-generic] (local b$ Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontoo$=== START OF ENABLE/DISABLE COMMANDS SECTION === Informational Exceptions (SMART) disabled Temperature warning disabledIndeed when I try to run for example smartctl -all -d scsi /dev/sdbthe output is: smartctl 6.6 2016-05-31 r4324 [i686-linux-4.15.0-23-generic] (local build) Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION === Vendor: Seagate Product: BUP Slim WH Revision: 0304 Compliance: SPC-4 User Capacity: 1.000.204.885.504 bytes [1,00 TB] Logical block size: 512 bytes Logical Unit id: 0x5000000000000001 Serial number: NA9DTQ90 Device type: disk Local Time is: Wed Jun 20 20:25:13 2018 CEST SMART support is: Available - device has SMART capability. SMART support is: Disabled Temperature Warning: Disabled or Not Supported=== START OF READ SMART DATA SECTION === SMART Health Status: OK Current Drive Temperature: 0 C Drive Trip Temperature: 0 CError Counter logging not supportedDevice does not support Self Test loggingwhich confirms that the SMART support is still disabled, but that is available. Does anyone have an idea if and (if so) how to enable it? FYI: The drive is connected to an old 32-bit laptop that runs Lubuntu 18.04.
Unable to enable SMART support for external hard drive
I haven't seen this kind of warning myself yet, but apparently it means that smartctl only evaluated the attribute table (see below) because SMART returned no explicit health verdict, which is normally part of the ATA protocol. The author of smartmontools considers the overall response unreliable in this case. Drives attached directly to a SATA controller work better with SMART from what I've seen so far. As for the attribute table: when you look at a SMART attribute output with smartctl -A /dev/XXX, you'll see three columns VALUE, WORST and THRESH. Here is part of such an output:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   189   182   021    Pre-fail  Always       -       5508
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       18
The VALUE column tells you the current value of the attribute. The WORST column tells you the worst (typically lowest) value SMART has ever seen. The THRESH column is the lowest value the vendor still considers healthy. If the WORST column shows a value below THRESH in the same row, the drive is considered unhealthy; it also implies that VALUE has been seen below THRESH, of course. You can also see that only the attributes of type Pre-fail matter when evaluating health; the other thresholds are simply set to 0 and their attributes cannot fail. This table is all that smartctl used to assess the drive's health here, and it is not really the most reliable way to do it.
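If you want a quick way to spot rows where WORST has already dropped to or below the vendor threshold, here is a rough sketch against the column layout shown above (the device name is an example):
# Print Pre-fail attributes whose WORST value is at or below THRESH (ignoring THRESH of 0)
smartctl -A /dev/sdb | awk '$7 == "Pre-fail" && $6+0 > 0 && $5+0 <= $6+0'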
I have an external USB-drive which is giving me the following output on running the command $ smartctl /dev/sdb -Hon it: SMART Status not supported: Incomplete response, ATA output registers missing SMART overall-health self-assessment test result: PASSED Warning: This result is based on an Attribute check. Could you elaborate if this is something to worry about or if it is just a wrong setting? Generally, what is the meaning of the health status in simplified form? Maybe as a relevant aside: The short and long tests finish without issues.
SMART health-test and status
Yes, there’s GSmartControl, which provides a GUI showing the SMART information from all the drives attached to the system it’s run on. In Mint it’s packaged as gsmartcontrol.
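If all you need is the lifetime figure, smartctl can also print it directly without a GUI (assuming the disk is /dev/sda):
# Power_On_Hours (attribute 9) holds the total powered-on lifetime in hours
sudo smartctl -A /dev/sda | grep -i power_on_hours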
In 2013 I used a program that analyzes the HDD and gives detailed, in-depth information about the hard disk. However, that program, CrystalDiskInfo, only works on Windows. Is there a GUI similar to CrystalDiskInfo which displays the information based on the S.M.A.R.T. attributes? I am looking specifically for the lifetime hours, to estimate how many hours the laptop has worked. I am running Linux Mint 18.3 Sylvia 64-bit - MATE 1.18 on my DELL Inspiron 1546.
Is there a way to get detailed information about an HDD in Linux?
OK, I found 2 alternatives.
Getting a precompiled binary that works on CentOS 7
Even though their packages page only offers Smartmontools 6.2 for CentOS 7, their SVN builds page offers binaries that do work on CentOS. The proper archive has a .linux suffix; for example I chose:
smartmontools-6.6-0-20170503-r4430.linux-x86_64.tar.gz
This archive contains a smartctl binary that works like a charm.
Using the nvme command-line tool
CentOS 7 ships with an nvme command (the yum package is named nvme-cli). It can list the NVMe drives:
# nvme list
And can read SMART info:
# nvme smart-log /dev/nvme0
And additional SMART info (not sure why it's split):
# nvme smart-log-add /dev/nvme0
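Whichever route you take, it is worth confirming that the smartctl you end up running really is a 6.5+ build with NVMe support before trusting its output; a quick sanity check (the paths are examples for the unpacked binary):
# Print the version and build information of the binary you unpacked
./smartctl --version
# Then query the drive explicitly as an NVMe device
./smartctl -a -d nvme /dev/nvme0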
I've just set up CentOS 7 on a server with NVMe drives, and was suprised not to be able to run smartctl on them: # smartctl -a /dev/nvme0 /dev/nvme0: Unable to detect device type Please specify device type with the -d option.# smartctl -a /dev/nvme0 -d nvme /dev/nvme0: Unknown device type 'nvme'Then I noticed that CentOS ships with Smartmontools version 6.2, whereas Smartmontools supports NVMe starting from version 6.5. How can I upgrade Smartmontools to version 6.5 on CentOS 7? Their download page only offers Smartmontools 6.2 for CentOS 7. Ideally, I don't want to compile from source, I would prefer a RPM, or better, a third-party repo that would include the latest Smartmontools, to get regular updates. Alternative I'm also open to suggestions if you know another tool, preferably included in CentOS 7, that could allow me to get SMART info from an NVMe drive.
Smartmontools with NVMe support on CentOS 7
Backup Immediately
Go buy an additional external HDD/SSD and make a full CloneZilla Live backup right now! The dead giveaway that your drive is in imminent danger of failing is the following parameter:
184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4
Especially as you've been having this issue for a month now: HDDs are known to not die immediately, but give you ample warning like clicking sounds, random errors, ... whereas SSDs die suddenly without warning unless you measure their SMART status regularly. The rule of thumb for drives is:
HDDs die a slow, painful death like cancer
SSDs die a sudden death like a heart attack
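If you can't get a CloneZilla Live stick together right away, an alternative first step is a raw image with GNU ddrescue onto the new drive, ideally run from a live USB so the failing disk is not mounted (a sketch; the device name and target path are placeholders, and the target must have room for a full 500 GB image):
# Image the failing disk, keeping a mapfile so an interrupted run can be resumed later
ddrescue /dev/sda /mnt/external/laptop-disk.img /mnt/external/laptop-disk.map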
I'm having a wired problem in my laptop. It works fine but almost every hour the screen freezes. When I force the shutdown and start it again, I see problems similar to this:The only solution I found is turning over the laptop for few seconds before starting it again. This help me see my Ubuntu work normally without these FS problems. Update: This is the smartctl output: smartctl 6.5 2016-01-24 r4214 [i686-linux-4.15.0-32-generic] (local build) Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION === Model Family: Seagate Laptop SSHD Device Model: ST500LM000-SSHD-8GB Serial Number: W761F5WC LU WWN Device Id: 5 000c50 07c440eb8 Firmware Version: LIV5 User Capacity: 500 107 862 016 bytes [500 GB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 5400 rpm Form Factor: 2.5 inches Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS, ACS-3 T13/2161-D revision 3b SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Fri Aug 17 14:37:51 2018 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled=== START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes.General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 128) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 96) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. 
SCT capabilities: (0x1081) SCT Status supported.SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 114 099 034 Pre-fail Always - 81759080 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 097 097 020 Old_age Always - 3865 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 072 060 030 Pre-fail Always - 163745195646 9 Power_On_Hours 0x0032 080 080 000 Old_age Always - 17649 (115 151 0) 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 096 096 020 Old_age Always - 4175 184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4 187 Reported_Uncorrect 0x0032 098 098 000 Old_age Always - 2 188 Command_Timeout 0x0032 100 094 000 Old_age Always - 25770197149 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 058 036 045 Old_age Always In_the_past 42 (Min/Max 42/46 #389) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 1213 193 Load_Cycle_Count 0x0032 098 098 000 Old_age Always - 4834 194 Temperature_Celsius 0x0022 042 064 000 Old_age Always - 42 (0 12 0 0 0) 196 Reallocated_Event_Count 0x000f 080 080 030 Pre-fail Always - 17711 (44104 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0SMART Error Log Version: 1 ATA Error Count: 7 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days.Error 7 occurred at disk power-on lifetime: 16948 hours (706 days + 4 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 60 00 08 ff ff ff 4f 00 00:00:25.320 READ FPDMA QUEUED 60 00 20 ff ff ff 4f 00 00:00:25.319 READ FPDMA QUEUED 60 00 60 88 da 7f 41 00 00:00:25.309 READ FPDMA QUEUED 60 00 20 ff ff ff 4f 00 00:00:25.288 READ FPDMA QUEUED 60 00 08 ff ff ff 4f 00 00:00:25.284 READ FPDMA QUEUEDError 6 occurred at disk power-on lifetime: 966 hours (40 days + 6 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 10 c5 a5 00 Error: UNC at LBA = 0x00a5c510 = 10863888 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 60 00 08 88 79 42 43 00 00:00:18.239 READ FPDMA QUEUED 60 00 08 80 79 42 43 00 00:00:18.239 READ FPDMA QUEUED 60 00 a8 10 c8 84 40 00 00:00:18.237 READ FPDMA QUEUED 60 00 08 78 79 42 43 00 00:00:18.237 READ FPDMA QUEUED 60 00 00 e0 c6 84 40 00 00:00:18.237 READ FPDMA QUEUEDError 5 occurred at disk power-on lifetime: 966 hours (40 days + 6 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 10 c5 a5 00 Error: UNC at LBA = 0x00a5c510 = 10863888 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 60 00 28 00 48 f8 40 00 00:00:13.615 READ FPDMA QUEUED 60 00 08 08 0e 44 40 00 00:00:13.609 READ FPDMA QUEUED 60 00 18 60 d0 e6 40 00 00:00:13.608 READ FPDMA QUEUED 60 00 08 b8 a8 e6 40 00 00:00:13.608 READ FPDMA QUEUED 60 00 28 10 9b e6 40 00 00:00:13.607 READ FPDMA QUEUEDError 4 occurred at disk power-on lifetime: 32 hours (1 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 76 9d b6 01 Error: UNC at LBA = 0x01b69d76 = 28745078 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 60 00 80 60 9d b6 41 00 00:01:10.856 READ FPDMA QUEUED 61 00 08 68 89 59 40 00 00:01:10.747 WRITE FPDMA QUEUED 61 00 08 88 f6 3c 40 00 00:01:10.747 WRITE FPDMA QUEUED 2f 00 01 10 00 00 20 00 00:01:10.494 READ LOG EXT 60 00 40 c8 25 4c 41 00 00:01:10.441 READ FPDMA QUEUEDError 3 occurred at disk power-on lifetime: 32 hours (1 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 76 9d b6 01 Error: UNC at LBA = 0x01b69d76 = 28745078 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 60 00 80 60 9d b6 41 00 00:00:53.242 READ FPDMA QUEUED 61 00 80 40 ba 44 41 00 00:00:53.241 WRITE FPDMA QUEUED 61 00 10 10 f7 86 40 00 00:00:53.241 WRITE FPDMA QUEUED 60 00 40 98 b7 1e 42 00 00:00:53.216 READ FPDMA QUEUED 60 00 08 60 9d b6 41 00 00:00:53.169 READ FPDMA QUEUEDSMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Vendor (0x50) Completed without error 00% 1 -SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.I made a FSCK at the startup to fix the problem but it didn't help. Do you have any solution ?
Random EXT4 FS errors
Most of the standard fields for SMART data were defined with only rotational, magnetic hard drives in mind. None of these really appear appropriate for your CF card. Vendors are able to define their own attributes as well, and those are not standardized. smartmontools is distributed with a database (it's stored at /var/lib/smartmontools/drivedb/drivedb.h on my Debian machine) that defines custom/special attribute overrides for different drive models. You'll probably have to add details for your CF card to such a database. If you look at the atpinc.com website, you'll see that you can email their sales team to request a copy of the specifications. The specifications document should list which SMART attributes the device supports, what they represent, and how to interpret them. Also, you'll get more SMART information if you use -a instead of -A. You can force an offline selftest with smartctl -t offline /dev/XXX, and the device may support automatic, periodic offline testing with smartctl -o on /dev/XXX. You can run an offline selftest (any of the selftests, actually) while using the drive. Performance may be impacted, but you won't break anything. Email ATP and ask them for the docs. Good luck.
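Putting those commands together, and then checking whether the device actually started collecting anything (the device name is just an example):
# Enable automatic offline data collection, request an immediate offline test,
# then look at the "Offline data collection status" field to see how it went
smartctl -o on /dev/sda
smartctl -t offline /dev/sda
smartctl -c /dev/sda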
I'm testing SMART support on some Compact Flash cards. After running smartctl -A on my card I'm getting the output below (also available here: http://pastebin.com/BX8GcLCX). The UPDATED column says offline, does anyone know exactly what that means? UPDATE - it means the data is only collected offline. Also all the values seem to be at their defaults of 100 (except powercycle count). Does anyone know how to get the card to report it's values? The card I'm testing is an ATP AF1GCFI. Additionally if I try and run an offline test with "smartctl --test=short /dev/sda" I get back "Warning: device does not support Self-Test functions." Given the fact that the parameters can only be reported offline, does this mean I can't get any SMART data at all? === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x0000 100 100 000 Old_age Offline - 0 2 Throughput_Performance 0x0000 100 100 000 Old_age Offline - 0 5 Reallocated_Sector_Ct 0x0000 100 100 000 Old_age Offline - 0 7 Seek_Error_Rate 0x0000 100 100 000 Old_age Offline - 0 8 Seek_Time_Performance 0x0000 100 100 000 Old_age Offline - 0 12 Power_Cycle_Count 0x0000 100 100 000 Old_age Offline - 358 195 Hardware_ECC_Recovered 0x0000 100 100 000 Old_age Offline - 0 196 Reallocated_Event_Count 0x0000 100 100 000 Old_age Offline - 0 197 Current_Pending_Sector 0x0000 100 100 000 Old_age Offline - 0 198 Offline_Uncorrectable 0x0000 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0000 100 100 000 Old_age Offline - 0 200 Multi_Zone_Error_Rate 0x0000 100 100 000 Old_age Offline - 0
Understanding smartctl output for a CF card
Can confirm, same lack of support here (exact same output as OP when attempting to GET SMART stats off a device through Marvel chipset). :00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)Linux fermmy 5.13.0-39-generic #44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux *-sata description: SATA controller product: 88SE9230 PCIe SATA 6Gb/s Controller vendor: Marvell Technology Group Ltd. physical id: 0 bus info: pci@0000:07:00.0 version: 11 width: 32 bits clock: 33MHz capabilities: sata pm msi pciexpress ahci_1.0 bus_master cap_list rom configuration: driver=ahci latency=0 resources: irq:43 ioport:d050(size=8) ioport:d040(size=4) ioport:d030(size=8) ioport:d020(size=4) ioport:d000(size=32) memory:fc710000-fc7107ff memory:fc700000-fc70ffffThere are no viable options directly from Marvell that I see; https://www.marvell.com/support/downloads.html -- BUT LOOK HERE! https://support.lenovo.com/ca/en/downloads/ds539334-marvell-storage-utility-for-linux-for-linux In theory ... this should work right? Let's try. I was on Ubuntu, and didn't feel like making this work on CentOS derivative ; luckily someone did all the heavy lifting already: CREDIT: https://github.com/stegm/marvell_msu_docker Some minor things were stale and I fixed/improved in this fork: https://github.com/fermulator/marvell_msu_docker Follow the README instructions :) - and then we can see: ~/projects/marvell_msu_docker$ docker-compose run --rm msu cli SG driver version 3.5.36. CLI Version: 4.1.10.42 RaidAPI Version: 2.3.10.1088 Welcome to RAID Command Line Interface.> info -o vdVirtual Disk Information ------------------------- id: 0 name: RAID1_SSD status: functional Stripe size: 64 RAID mode: RAID1 Cache mode: Not Support size: 488306 M BGA status: not running Block ids: 4 0 # of PDs: 2 PD RAID setup: 3 2 Running OS: noTotal # of VD: 1BONUS: even the web UI actually works!
I use Marvell 88SE9230 controller on my home Linux server. HP does have utility to setup raid and get some stats. But I'm wondering how to get any status from a Linux system. Quick googling shows only Linux drivers for accessing array itself on previous versions of kernel, but I want to know SMART status of drives. Smartctl doesn't work: root@iris:~# smartctl -a -d marvell -T verypermissive /dev/sda smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-96-generic] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.orgRead Device Identity failed: Unknown error=== START OF INFORMATION SECTION === Device Model: [No Information Found] Serial Number: [No Information Found] Firmware Version: [No Information Found] Device is: Not in smartctl database [for details use: -P showall] ATA Version is: [No Information Found] Local Time is: Thu Jan 27 19:11:54 2022 MSK SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported. SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 85-87 don't show if SMART is enabled. Checking to be sure by trying SMART RETURN STATUS command. SMART support is: Unknown - Try option -s with argument 'on' to enable it. Read SMART Data failed: Success=== START OF READ SMART DATA SECTION === SMART Status command failed: Success SMART overall-health self-assessment test result: UNKNOWN! SMART Status, Attributes and Thresholds cannot be read.Read SMART Error Log failed: SuccessRead SMART Self-test Log failed: SuccessSelective Self-tests/Logging not supportedHow can I get at least some stats from controller?
Linux on Marvell 88SE9230. How to get stats?
According to the SMART readings, the disk seems fine at the moment. The important ones for disk sectors are these:
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
A reallocated sector is one that failed a write and was remapped elsewhere on the disk. A small number of these is acceptable. Zero is excellent. The current pending sector value is the number of sectors that are waiting to be reallocated elsewhere. (The read failed but the disk is waiting for a write request, which is the point at which the sector gets remapped.) This may become non-zero for a while, and as the sectors get overwritten this number will decrease and the reallocated sector count will increase. The count of offline uncorrectable sectors is the number of sectors that failed and could not be remapped. A non-zero value is bad news because it means you are losing data. Your zero value is just fine. This next group shows the duration of use of your disk drive:
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 770
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12325
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 730
You've had the device running for 12325 hours (if that's continuous time it's about 18 months) and during that time it has powered up and down 730 times. If you power it off daily then you've had the disk running for about 16 hours/day over two years. Finally, it would be worth scheduling a full test every week. You can do this with a command such as smartctl -t long /dev/sda. Errors in the tests can become cause for concern.
# 1 Extended offline Completed without error 00% 12320 -
# 2 Short offline Completed without error 00% 12311 -
If you are using this in a NAS I would recommend a NAS grade disk. Personally I find the WD Red very good in this respect. The cost is a little higher but the warranty is longer.
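If you'd rather not remember to kick the weekly test off by hand, smartd can schedule it from /etc/smartd.conf with a -s directive (a sketch; the regexp below means an extended self-test every Saturday at 02:00, adjust to taste):
# monitor /dev/sda with the default checks and run a long self-test Saturdays at 02:00
/dev/sda -a -s L/../../6/02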
Ubuntu 17.04; ext4 filesystem on 4TB WD green SATA [WDC WD40EZRX-22SPEB0] Mount (on startup, from fstab) failed with bad superblock. fsck reported / inode damaged, but repaired it. 99% of files restored (the few that are lost are available in backup). Repaired volume mounts and operates normally. Looking at the SMART data, I think the disk is okay. The "extended" smartctl test passed. The data is already backed up (and it's not mission critical). I already have a replacement drive. It's tempting to take a "zero tolerance" policy and replace the disk now, but as it's a £100 item, and I don't want to be chucking a wobbly and binning every disk that ever writes a bad block once. Here's the smartctl dump. Is the disk actually dying, or did it just have a one-time mishap? ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 61 3 Spin_Up_Time 0x0027 195 176 021 Pre-fail Always - 7225 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 770 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12325 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 730 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 40 193 Load_Cycle_Count 0x0032 194 194 000 Old_age Always - 18613 194 Temperature_Celsius 0x0022 121 106 000 Old_age Always - 31 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 21SMART Error Log Version: 1 No Errors LoggedSMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 12320 - # 2 Short offline Completed without error 00% 12311 -
ext4 : bad block fixed, but is this disk dying?
In the end, it's your data, so you would be the one to say whether the drive should be replaced or not. After all, it's just spinning rust. Though, I should point out that it appears you've created a concatenated/RAID0 pool, so if a drive fails, you'll lose everything. And without a mirror, ZFS is unable to repair any failed files -- only report them. If you're seeing the error messages sent to syslog while the scrub is running, perhaps it's from the drives being taxed while they verify the ZFS checksums. And since not all data is accessed in normal use, the scrub could be hitting a block the drive deems needs to be reallocated. Or noise on the line. And I'm not referring to Brendan Gregg yelling at disks. ;o) You did note a cable issue; perhaps a controller or port issue is also in the mix? You also noted a Western Digital forum. I've seen many "complaints" about consumer drives not playing well with software or hardware RAID. If your data is important, you may want to consider using a mirror, and possibly even a 3-way mirror, since disks aren't that expensive and something else could fail during a rebuild/resilver. As far as "smart data" goes, the jury is still out on how "smart" or useful it is. I've seen drives pass the vendor's tests, yet be useless.
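For reference, a mirrored layout is just a matter of listing the disks behind the mirror keyword when the pool is created; this is not an in-place conversion (you would have to destroy the pool and restore from backup), and the pool name and device names below are placeholders:
# three-way mirror: the pool survives the loss of up to two of the three disks
zpool create tank mirror \
  ata-WDC_WD30EFRX-68EUZN0_SERIAL1 ata-WDC_WD30EFRX-68EUZN0_SERIAL2 ata-WDC_WD30EFRX-68EUZN0_SERIAL3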
I have a zpool (3x 3TB Western Digital Red) that I scrub weekly for errors that comes up OK, but I have a recurring error in my syslog: Jul 23 14:00:41 server kernel: [1199443.374677] ata2.00: exception Emask 0x0 SAct 0xe000000 SErr 0x0 action 0x0 Jul 23 14:00:41 server kernel: [1199443.374738] ata2.00: irq_stat 0x40000008 Jul 23 14:00:41 server kernel: [1199443.374773] ata2.00: failed command: READ FPDMA QUEUED Jul 23 14:00:41 server kernel: [1199443.374820] ata2.00: cmd 60/02:c8:26:fc:43/00:00:f9:00:00/40 tag 25 ncq 1024 in Jul 23 14:00:41 server kernel: [1199443.374820] res 41/40:00:26:fc:43/00:00:f9:00:00/40 Emask 0x409 (media error) <F> Jul 23 14:00:41 server kernel: [1199443.374946] ata2.00: status: { DRDY ERR } Jul 23 14:00:41 server kernel: [1199443.374979] ata2.00: error: { UNC } Jul 23 14:00:41 server kernel: [1199443.376100] ata2.00: configured for UDMA/133 Jul 23 14:00:41 server kernel: [1199443.376112] sd 1:0:0:0: [sda] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 23 14:00:41 server kernel: [1199443.376115] sd 1:0:0:0: [sda] tag#25 Sense Key : Medium Error [current] [descriptor] Jul 23 14:00:41 server kernel: [1199443.376118] sd 1:0:0:0: [sda] tag#25 Add. Sense: Unrecovered read error - auto reallocate failed Jul 23 14:00:41 server kernel: [1199443.376121] sd 1:0:0:0: [sda] tag#25 CDB: Read(16) 88 00 00 00 00 00 f9 43 fc 26 00 00 00 02 00 00 Jul 23 14:00:41 server kernel: [1199443.376123] blk_update_request: I/O error, dev sda, sector 4181982246 Jul 23 14:00:41 server kernel: [1199443.376194] ata2: EH completeA while back I had a faulty SATA cable that caused some read/write errors (that were later corrected by zpool scrubs and restoring from snapshots) and originally thought this error was a result of this. However it keeps randomly recurring, this time while I was in the middle of a scrub. So far ZFS says that there are no errors, but it also says it's "repairing" that disk: pool: sdb state: ONLINE scan: scrub in progress since Sun Jul 23 00:00:01 2017 5.41T scanned out of 7.02T at 98.9M/s, 4h44m to go 16.5K repaired, 77.06% done config: NAME STATE READ WRITE CKSUM sdb ONLINE 0 0 0 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1366685 ONLINE 0 0 0 (repairing) ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0K3PFPS ONLINE 0 0 0 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0M94AKN ONLINE 0 0 0 cache sde ONLINE 0 0 0errors: No known data errorsSMART data seems to tell me that everything is OK after running a short test, I'm in the middle of running the long self-test now to see if that comes up with anything. The only thing that jumps out is the UDMA_CRC_Error_Count, but after I fixed that SATA cable it hasn't increased at all. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 195 175 021 Pre-fail Always - 5233 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 625 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 100 253 000 Old_age Always - 0 9 Power_On_Hours 0x0032 069 069 000 Old_age Always - 22931 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 625 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 581 193 Load_Cycle_Count 0x0032 106 106 000 Old_age Always - 283773 194 Temperature_Celsius 0x0022 118 109 000 Old_age Always - 32 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 133 000 Old_age Always - 1801 200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 22931 -In addition to that, I'm also getting notifications about ZFS I/O errors, even though according to this it's just a bug related to drive idling/spin up time. eid: 71 class: io host: server time: 2017-07-23 15:57:49-0500 vtype: disk vpath: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1366685-part1 vguid: 0x979A2C1464C41735 cksum: 0 read: 0 write: 0 pool: sdbMy main question is how concerned should I be about that drive? I'm inclined to go replace it to be safe, but waned to know how soon I need to. Here are the possibilities that I'm thinking might explain discrepancy between SMART data and ZFS/kernel:ZFS io error bug makes the kernel think that there's bad sectors, but according to SMART there aren't any. ZFS keeps repairing that drive (related to previous errors with faulty cable), which also might point to drive failure, despite SMART data. The error is a false alarm and related this unfixed bug in UbuntuEDIT: Now I just realized that the good drives are on firmware version 82.00A82, while the one that's getting the errors is 80.00A80. According to the Western Digital forum, there's no way to update this particular model's firmware. I'm sure that's not helping either. EDIT 2: Forgot to update this a long time ago but this did end up being a hardware issue. After swapping multiple SATA cables, I finally realized that the issue the whole time was a failing power cable. The power flakiness was killing the drive, I but managed to get better drives and save the pool.
ZFS - "Add. Sense: Unrecovered read error - auto reallocate failed" in syslog, but SMART data looks OK
The firmware of your drive mistakenly "thought" a certain sector's electrical/mechanical parameters were out of the normal range, but subsequent accesses made it "think" otherwise, so the error disappeared. I've seen it many times. As the units of data become physically smaller and smaller, it's bound to happen more often than not. To be extra sure about your disk's health you may run an extended SMART test using smartctl -t long /dev/device, or use the badblocks utility - but the latter only if the drive in question is not in use or mounted. Running both tests (even smartctl -t long) may lead to data loss or hardware failure, so always have fresh, verified backups.
A little bit off-topic: I run smartctl -t short weekly and smartctl -t long monthly just to be on the safe side. To be honest, SSDs have a habit of dying out of the blue regardless, but at least with mechanical rotating disks this has saved me from impending disasters. Wikipedia has a list of SMART attributes to keep an eye on: https://en.wikipedia.org/wiki/S.M.A.R.T.
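After kicking off the extended test, the drive runs it internally and you can check on it and read the verdict later (the device name is an example):
smartctl -t long /dev/sda      # start the extended self-test inside the drive
smartctl -c /dev/sda           # "Self-test execution status" shows the percentage remaining
smartctl -l selftest /dev/sda  # the self-test log shows the result once it has finished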
For a long time, the SMART data (as reported by my disk utility) told me the disk had one bad sector. Now, it reports "Disk OK". So my question is: what happened to that bad sector? How did it "go away", seemingly on its own?
How did my disk change from "One bad sector" to "Disk OK"?
Do you know why that might be? I thought ZFS would know of any errors as soon as anyone... Do I need to run a scrub in order for it to recheck the status of all disks? Can I have S.M.A.R.T. automatically report to ZFS somehow?
No, it does not check all blocks all the time; it just makes sure that each written block can be accounted for (and restored, if redundancy is available) as soon as it is needed/accessed. Empty space is not checked at all (because you don't have valuable data there, so it would be a waste of time), and normal data is only checked when it is read (as write is append-only). As mmusante correctly said, you will only get error messages if the error is critical and cannot be recovered from automatically (otherwise, you just see a notice and error counts in zpool status). Yes. It may be easier to just regularly (via cronjob) scrub the pool. Common recommended intervals are about once a month for enterprise-quality disks and once a week for consumer-level disks. Otherwise you could start a manual scrub with a script from smartmontools:
Most of the time, you only need to place a script in /etc/smartmontools/run.d/. Whenever smartd wants to send a report, it will execute smart-runner and the latter will run your script. You have several variables available to your script (again, see the smartd manpage). These come from a test run:
SMARTD_MAILER=/usr/share/smartmontools/smartd-runner
SMARTD_SUBJECT=SMART error (EmailTest) detected on host: XXXXX
SMARTD_ADDRESS=root
SMARTD_TFIRSTEPOCH=1267409738
SMARTD_FAILTYPE=EmailTest
SMARTD_TFIRST=Sun Feb 28 21:45:38 2010 VET
SMARTD_DEVICE=/dev/sda
SMARTD_DEVICETYPE=sat
SMARTD_DEVICESTRING=/dev/sda
SMARTD_FULLMESSAGE=This email was generated by the smartd daemon running on:
SMARTD_MESSAGE=TEST EMAIL from smartd for device: /dev/sda
Your script also has a temporary copy of the report available as "$1". It will be deleted after you finish, but the same content is written to /var/log/syslog.
You then just need to map from the device name to your pool (you can parse zpool status).
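As a concrete illustration of such a run.d hook, here is a minimal sketch; the pool name is a placeholder, and the device-to-pool mapping is reduced to the hard-coded assumption that the reported device belongs to that one pool:
#!/bin/sh
# /etc/smartmontools/run.d/50zfs-scrub (example path): executed by smartd-runner on every warning
POOL=tank   # placeholder: the pool that contains $SMARTD_DEVICE
logger -t smartd-zfs "smartd warning for $SMARTD_DEVICE: $SMARTD_MESSAGE"
# Kick off a scrub so ZFS re-reads and verifies everything on that pool
zpool scrub "$POOL"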
S.M.A.R.T. has found an unrecoverable read error on one of my disks, but zpool status lists all disks as ONLINE (i.e. not DEGRADED).
Do you know why that might be? I thought ZFS would know of any errors as soon as anyone... Do I need to run a scrub in order for it to recheck the status of all disks? Can I have S.M.A.R.T. automatically report to ZFS somehow?
Why does ZFS not report disk as degraded?
Try installing the nvme-cli package with apt-get install nvme-cli and then retrieve the errors using nvme error-log /dev/nvme0
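Since the smartctl output in the question shows "16 of 64 entries", it can also be worth asking nvme for the full log rather than the default page size; a sketch, assuming your nvme-cli version supports the entry-count option:
# Dump up to 64 entries from the error log instead of the default
nvme error-log /dev/nvme0 --log-entries=64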
My daily driver (Debian Bookworm RC3 + KDE Plasma) is configured to send me emails containing error notifications. Today, I received the following email: This message was generated by the smartd daemon running on: host name: desk DNS domain: local.lanThe following warning/error was logged by the smartd daemon:Device: /dev/nvme0, number of Error Log entries increased from 1754 to 1758Device info: KBG30ZMV256G TOSHIBA, S/N:X8OPD1PGP12P, FW:ADHA0101For details see host's SYSLOG.You can also use the smartctl utility for further investigation. The original message about this issue was sent at Wed May 17 16:09:04 2023 EDT Another message will be sent in 24 hours if the problem persists.This is what sudo journalctl -t smart shows: May 20 15:19:47 desk smartd[550]: smartd 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-9-amd64] (local build) May 20 15:19:47 desk smartd[550]: Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org May 20 15:19:47 desk smartd[550]: Opened configuration file /etc/smartd.conf May 20 15:19:47 desk smartd[550]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf May 20 15:19:47 desk smartd[550]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices May 20 15:19:47 desk smartd[550]: Device: /dev/sda, type changed from 'scsi' to 'sat' May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], opened May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], CT4000MX500SSD1, S/N:2304E6A3D318, WWN:5-00a075-1e6a3d318, FW:M3CR045, 4.00 TB May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], not found in smartd database 7.3/5319. May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list. May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A3D318.ata.state May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, opened May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, KBG30ZMV256G TOSHIBA, S/N:X8OPD1PGP12P, FW:ADHA0101 May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list. May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.KBG30ZMV256G_TOSHIBA-X8OPD1PGP12P.nvme.state May 20 15:19:47 desk smartd[550]: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices May 20 15:19:48 desk smartd[550]: Device: /dev/nvme0, number of Error Log entries increased from 1754 to 1758 May 20 15:19:48 desk smartd[550]: Sending warning via /usr/share/smartmontools/smartd-runner to root ... 
May 20 15:19:48 desk smartd[550]: Warning via /usr/share/smartmontools/smartd-runner to root: successful May 20 15:19:48 desk smartd[550]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A3D318.ata.state May 20 15:19:48 desk smartd[550]: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.KBG30ZMV256G_TOSHIBA-X8OPD1PGP12P.nvme.state May 20 15:49:48 desk smartd[550]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 73 to 74 May 20 22:49:48 desk smartd[550]: Device: /dev/nvme0, number of Error Log entries increased from 1758 to 1760When I run sudo smartctl -i -a /dev/nvme0, it shows me the error count, but I can't figure out how to see the log message associated to the increase count: smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-9-amd64] (local build) Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION === Model Number: KBG30ZMV256G TOSHIBA Serial Number: X8OPD1PGP12P Firmware Version: ADHA0101 PCI Vendor/Subsystem ID: 0x1179 IEEE OUI Identifier: 0x00080d Controller ID: 0 NVMe Version: 1.2.1 Number of Namespaces: 1 Namespace 1 Size/Capacity: 256,060,514,304 [256 GB] Namespace 1 Formatted LBA Size: 512 Namespace 1 IEEE EUI-64: 00080d 04004ad9aa Local Time is: Sat May 20 23:09:32 2023 EDT Firmware Updates (0x12): 1 Slot, no Reset required Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test Optional NVM Commands (0x0017): Comp Wr_Unc DS_Mngmt Sav/Sel_Feat Log Page Attributes (0x02): Cmd_Eff_Lg Maximum Data Transfer Size: 512 Pages Warning Comp. Temp. Threshold: 82 Celsius Critical Comp. Temp. Threshold: 85 CelsiusSupported Power States St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat 0 + 3.30W - - 0 0 0 0 0 0 1 + 2.70W - - 1 1 1 1 0 0 2 + 2.30W - - 2 2 2 2 0 0 3 - 0.0500W - - 4 4 4 4 8000 32000 4 - 0.0050W - - 4 4 4 4 8000 40000Supported LBA Sizes (NSID 0x1) Id Fmt Data Metadt Rel_Perf 0 - 4096 0 0 1 + 512 0 3=== START OF SMART DATA SECTION === SMART overall-health self-assessment test result: PASSEDSMART/Health Information (NVMe Log 0x02) Critical Warning: 0x00 Temperature: 32 Celsius Available Spare: 100% Available Spare Threshold: 10% Percentage Used: 30% Data Units Read: 23,188,612 [11.8 TB] Data Units Written: 39,727,036 [20.3 TB] Host Read Commands: 222,771,983 Host Write Commands: 498,052,687 Controller Busy Time: 7,440 Power Cycles: 291 Power On Hours: 20,378 Unsafe Shutdowns: 615 Media and Data Integrity Errors: 0 Error Information Log Entries: 1,760 Warning Comp. Temperature Time: 0 Critical Comp. Temperature Time: 0 Temperature Sensor 1: 32 CelsiusError Information (NVMe Log 0x01, 16 of 64 entries) Num ErrCount SQId CmdId Status PELoc LBA NSID VS 0 1760 0 0x501a 0xc005 0x028 - 1 - 1 1759 0 0xb012 0xc005 0x028 - 1 - 2 1758 0 0x5010 0xc005 0x028 - 0 -How can I figure out what the errors are?
How can I view the SMART logs for an NVMe disk in Linux when smartctl shows there are errors?
You can replace smartctl -t long self-tests with badblocks (no parameters). It performs a simple read-only test. You can run it while filesystems are mounted. (Do NOT use the so-called non-destructive write test.)
# badblocks -v /dev/loop0
Checking blocks 0 to 1048575
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)
Note you should only use this if you don't already suspect there are bad sectors; if you already know it's going bad, use ddrescue instead (badblocks throws away all data it reads, ddrescue makes a copy that may come in useful later).
Other than that, you can do things that SMART doesn't do: use a checksumming filesystem, or a dm-integrity layer, or backups & compare, to actually verify contents. Lacking those, just run regular filesystem checks.
MicroSD cards also have failure modes that are hard to detect. Some cards may eventually discard writes and keep returning old data on reads. Even simple checksums might not be enough here - if the card happens to return both older data and older checksums, it might still match even if it's the wrong data... Then there are fake-capacity cards that just lose data once you've written too much. Neither failure mode returns any read or write errors, and neither can be detected with badblocks, not even in its destructive write mode (since the patterns it writes are repetitive). For this you need a test that uses non-repetitive patterns, e.g. by putting an encryption layer on it (badblocks write on LUKS detects fake-capacity cards when badblocks write on the raw device does not).
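For completeness, a sketch of that encryption-layer trick; it is DESTRUCTIVE (it wipes the card), /dev/sdX is a placeholder for the card, and the idea is simply that data written through LUKS is non-repetitive on the raw medium, so a card that silently drops or fakes writes fails the read-back:
cryptsetup luksFormat /dev/sdX           # set up an encryption layer on the card (destroys its contents)
cryptsetup open /dev/sdX testcard        # map it as /dev/mapper/testcard
badblocks -w -s -v /dev/mapper/testcard  # destructive write-and-verify test through the mapping
cryptsetup close testcard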
I have a Raspberry Pi (running Raspbian) that is booting from a microSD card. Since it's acting as a home server, naturally I want to monitor the microSD card for errors. Unfortunately though, microSD cards don't support SMART like other disks I have, so I am unsure how to monitor the disk for errors. How can I monitor / check disks that do not support SMART for errors when they are still in use / have partitions mounted?
How to test a disk that does not support SMART for errors?
smartctl -a will show you the relevant information, including in particular the drive’s age (in power-on hours) and the times at which the last self-tests ran; this will give you some idea of how long ago they ran. For example, ... 9 Power_On_Hours 0x0032 080 080 000 Old_age Always - 14910 ... SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 14898 - # 2 Short offline Completed without error 00% 14874 - # 3 Short offline Completed without error 00% 14850 - # 4 Extended offline Completed without error 00% 14837 - ...tells me that this particular drive ran a short test twelve hours ago, and an extended offline test 73 hours ago. (The drive runs 24/7.) smartctl -c can show whether a test is ongoing, but see man smartctl for details and caveats.
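If you want the arithmetic done for you, the gap between the drive's current Power_On_Hours and the LifeTime(hours) column of the newest self-test entry is roughly how many power-on hours ago that test ran; a rough sketch that assumes the usual ATA output layout shown above:
dev=/dev/sda
now=$(smartctl -A "$dev" | awk '$2 == "Power_On_Hours" {print $10; exit}')
last=$(smartctl -l selftest "$dev" | awk '$1 == "#" && $2 == "1" {print $(NF-1); exit}')
echo "newest self-test finished about $((now - last)) power-on hours ago"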
I use smartctl -t long to execute a full surface test on a drive; the command returns immediately and the test runs in the background. Then I use smartctl -H to view the result, but it doesn't say how long ago the reported test was done, or whether one is running at the moment. Is there any way to know?
When was smartctl last run?
Unreadable sectors are a major sign that the drive is on its way out. Drives can die without showing bad sectors beforehand, but if a drive starts showing this kind of error it's almost guaranteed that it's not long for this world. A 'short' SMART test doesn't actually verify the entire disk, so it can miss things that a 'long' test would find. You can try the long test to be sure, but I wouldn't trust it with any data going forward; it's better to just replace it.
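While you arrange the replacement, you can run the long test and keep an eye on whether the counts are still climbing (a quick sketch using the device from the question):
smartctl -t long /dev/sdb                                            # full-surface read test inside the drive
smartctl -A /dev/sdb | grep -E 'Pending|Offline_Uncorrect|Realloc'   # watch these raw values over time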
I keep receiving mails from smartctl related to unreadable and uncorrectable sectors (these are the two errors that I get):
Device: /dev/sdb [SAT], 209 Currently unreadable (pending) sectors
Device: /dev/sdb [SAT], 200 Offline uncorrectable sectors
Is there a way to fix those errors? I also did a conveyance SMART test on the HDD (a 3TB WD Green), which failed; the short test passed, but I haven't done a long test yet. The first mails started at 8 uncorrectable/unreadable sectors. Should I assume the drive will probably die soon?
Smartctl utility giving uncorrectable and unreadable sectors error on HDD