output | input | instruction
---|---|---|
dhclient does not change the local configuration directly; it calls a script once it gets the lease (by default /sbin/dhclient-script on Debian).
You can specify your own script with -sf and use the $new_ip_address variable to find out the leased IP. There is a dedicated manpage, dhclient-script(8), for this type of script.
dhclient keeps running once it gets the lease, so you need to stop it. By default the pid is stored in /var/run/dhclient.pid, but you can change that with -pf.
An example script:
#!/bin/sh
case $reason in
    BOUND|RENEW|REBIND|REBOOT)
        echo "MY IP IS " $new_ip_address
        kill $(cat /var/run/dhclient.pid)
        ;;
    *)
        ;;
esac

Then, if you run:

dhclient -sf /path/to/your_script -d interface 2>&1 | grep "MY IP"

you'll get the value.
Be sure to avoid interaction with other DHCP client processes (dhclient, NetworkManager, ...) since in that case the results could be different.
|
Before launching a VM from my script, I need to figure out which IP address it will get.
So I did:
dhclient <interface>

And this works, because dhclient uses the MAC address from the macvtap interface specified, and returns me the IP address from the DHCP server.
This is not a foolproof solution, because there may be some people who have a router at home that does not always return the same IP for the same MAC. But every router I ever owned did, so if it works for 99 percent of the cases it's good enough for me.
But the problem is that dhclient also makes changes to the local configuration because it thinks I want to actually use that address on the host. There is a -n flag that should prevent this, but it is not supported by Debian or most other distributions.
So what is the best way to just ask a DHCP server which IP it is planning to serve to a certain MAC address, without actually modifying any settings on the host?
|
How to "predict" a DHCP IP address?
|
The answer is yes! The following configuration worked for me.
[Match]
Name=<NETWORKD DEVICE NAME>

[Network]
DHCP=yes # Required
DHCPServer=true # Required
IPForward=yes # Required if to pass traffic from client
IPMasquerade=both # Required if to pass traffic from client
Address=192.168.1.100/24 # Required

#[DHCPv4] # Was not necessary
#ClientIdentifier=mac # Was not necessary

[DHCPServer]
#PoolOffset=100 # Not necessary
#PoolSize=1 # Not necessary
BootServerAddress=192.168.1.100 # Required
# Hostname= # Forgot to test, but should be the same as code 12
SendOption=12:string:clientname # Optional
# 17 "Root Path"
SendOption=17:string:/srv/tftp # Required
# I do not know how to select between EFI and i386 depending on the client.
# Comment out the one you don't need. EFI worked for me.
# EFI
BootFilename=/srv/tftp/grub/x86_64-efi/core.efi # Required; providing full path, but a relative path should work? Forgot to test.
#SendOption=67:string:grub/x86_64-efi/core.efi # Should be the same as BootFilename
# i386
#SendOption=67:string:grub/i386-pc/core.0
#BootFilename=grub/i386-pc/core.0

[DHCPServerStaticLease]
MACAddress=a1:b2:c3:d4:e5:f6
Address=192.168.1.101

Gotchas:
Don't do this when tired :)
networkctl reload after changing configuration.
Disable any other non-systemd DHCP server.
networkd conf files: don't put a comment on the same line as an assigned value, for example Address=192.168.1.100/24 # My Server
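A quick way to sanity-check the result after editing (a sketch; the interface name eth0 is an assumption, and the tcpdump invocation mirrors the one used in the question below):

networkctl reload                        # re-read the changed .network files
networkctl status eth0                   # confirm the settings were applied to the interface
tcpdump -i eth0 -nn -s0 -v udp port 67   # watch the DHCP/BOOTP exchange with the client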
|
If I understand this "issue" (systemd-networkd DHCP Server ignores SendOptions #15780) correctly, systemd can be configured to handle network booting. However, I am unable to find more information about that functionality.
I am currently using a dhcpd server with minimal configuration, which is why it would be nice if it could be moved to systemd-networkd, which handles all the other network functionality in my environment.
# /etc/dhcpd.conf
allow booting; # How is this defined in systemd-networkd?
allow bootp; # How is this defined in systemd-networkd?

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative; # How is this defined in systemd-networkd?

option architecture code 93 = unsigned integer 16; # I think this corresponds to SendOption=93:uint16:architecture

host client_computer {
  hardware ethernet a1:b2:c3:d4:e5:f6; # This should be captured with [Match] MACAddress=a1:b2:c3:d4:e5:f6
  fixed-address 192.168.1.101; # I think this corresponds to something like SendOption=???:ipv4address:192.168.1.101
  next-server 192.168.1.100; # Should this be defined as [Network] Address?
  option host-name "clientname"; # I think this corresponds to something like SendOption=12:string:clientname
  option root-path "/srv/tftp"; # I think this corresponds to something like SendOption=17:string:/srv/tftp
  if option architecture = 00:07 {
    filename "grub/x86_64-efi/core.efi"; # I think this corresponds to SendOption=67:string:grub/x86_64-efi/core.efi
  }
  else {
    filename "grub/i386-pc/core.0"; # I think this corresponds to SendOption=67:string:grub/i386-pc/core.0
  }
}

It seems that I need the "option" codes, but where can I find them? Is there a specification? -- Found them :) Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters, and the systemd-networkd documentation.
What I have so far:

#allow booting; = ? # Not necessary?
#allow bootp; = ? # Not necessary?
#authoritative; = ? # Not necessary?

[Match]
MACAddress=a1:b2:c3:d4:e5:f6

[Network]
DHCP=no
DHCPServer=true
Address=192.168.1.100/24 # DHCP server IP

[DHCPv4]
ClientIdentifier=mac

[DHCPServer]
PoolOffset=3
PoolSize=7
BootServerAddress=192.168.1.100/24
#SendOption=93:uint16:architecture # Failed to parse DHCP uint16 data, ignoring assignment: architecture # Not necessary?
#SendOption=???:ipv4address:192.168.1.101
SendOption=12:string:clientname # 12 "Hostname"
SendOption=17:string:/srv/tftp # 17 "Root Path"
#BootFilename=grub/i386-pc/core.0 # Same as code 67
#SendOption=67:string:grub/x86_64-efi/core.efi
SendOption=67:string:grub/i386-pc/core.0

[DHCPServerStaticLease]
MACAddress=a1:b2:c3:d4:e5:f6
Address=192.168.1.101

Update #1
Using tcpdump -i <INTERFACE> -nn -s0 -v -A udp port 67 I can see that the systemd-networkd DHCP server is interacting with the client!
However, the system does not boot, and the problem seems to be that the expected static IP address is not assigned to the client.
The [DHCPServerStaticLease] section does not seem to have an effect. I found something about a bug in systemd versions <= 253 and added the workaround ClientIdentifier=mac. However, it should not be necessary, as I am running version 255. (See: Static IP address not being assigned by DHCP server to host on a certain interface of a systemd-networkd bridge.)
Oh, and I added the Pool* parameters, but the DHCP server still assigns the same IP (192.168.1.242).
|
Can systemd networkd be configured for netboot, PXE boot, if yes, how?
|
The problem is that the DHCP server doesn't support the use of a DUID as the ClientID, so the solution is to disable the duid option.
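In /etc/dhcpcd.conf terms, that means something like the following (a sketch based on the config shown in the question; the clientid option is documented in dhcpcd.conf(5) as the hardware-address-based alternative to a DUID):

# /etc/dhcpcd.conf
#duid       # disabled: this DHCP server cannot handle a DUID as ClientID
clientid    # send a MAC-based client ID instead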
|
After installing dhcpcd, the machine is unable to get an IPv4 address from DHCP, but it generates an IPv6 address as expected and gets the DHCPv6 options (like DNS, domain, etc.). The config in /etc/network/interfaces is:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

In /etc/dhcpcd.conf I have:

hostname
duid
persistent
vendorclassid
option interface_mtu
option host_name
option ntp_servers
option rapid_commit
require dhcp_server_identifier
slaac private

The version of Alpine is 3.18 and the version of dhcpcd is 10.0.1.
|
Unable to get IPV4 with dhcpcd on Alpine in a dual stack setup
|
Hmm... according to whois, 211.136.17.107 is part of network segment 211.136.16.0/21, which belongs to China Mobile.
If you use a distribution that is based on Debian or Ubuntu, you could install the resolvconf package and modify the /etc/resolvconf/interface-order file to say you specifically want the records for eth0 first, then the records for eth2.
If you don't have resolvconf available, you might want to examine the hook script(s) used by dhcpcd and modify them to order the DNS server addresses as you wish. The hook script is configured with the script keyword in dhcpcd.conf; if not specified, the default is usually something like /usr/lib/dhcpcd/dhcpcd-run-hooks (check your man dhcpcd.conf for possible distribution-specific modifications).
Alternatively, you could work around the problem by adding a custom route for the nameservers that don't automatically get routed the right way. Since nameserver 192.168.8.1 is also your default gateway, it must be in your local network segment, and so the correct interface is automatically preferred for it. But for the 211.136.* nameservers, adding a route like

ip route add 211.136.16.0/21 via 10.67.145.17 dev eth2

should stop attempts to reach them through the wrong interface. Note that the IP address after the via keyword is the gateway address specified by DHCP on eth2, so you'll probably want to create/modify a dhcpcd hook script to create that route when configuring eth2, using the gateway address specified by the DHCP service instead of hardcoding it.
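A minimal sketch of such a hook, assuming dhcpcd sources /etc/dhcpcd.exit-hook as described in dhcpcd-run-hooks(8), which also provides the $interface, $reason, and $new_routers variables:

# /etc/dhcpcd.exit-hook
if [ "$interface" = eth2 ]; then
    case $reason in
    BOUND|RENEW|REBIND|REBOOT)
        # route the 4G nameservers via the gateway DHCP just handed us
        ip route replace 211.136.16.0/21 via "${new_routers%% *}" dev eth2
        ;;
    esac
fi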
|
My board has two network interfaces. Both of them use DHCP to get an IP address and DNS. The default route is ordered by metric, but the DNS server order received by dhcpcd is the inverse. My default route table:

Destination     Gateway         Genmask   Flags Metric Ref Use Iface
0.0.0.0         192.168.8.1     0.0.0.0   UG    5      0   0   eth0
0.0.0.0         10.67.145.17    0.0.0.0   UG    10     0   0   eth2

resolv.conf:

domain lan
nameserver 211.136.17.107 ---- (eth2)
nameserver 211.136.20.203 ---- (eth2)
nameserver 192.168.8.1 ---- (eth0)

eth0 is Ethernet, eth2 is 4G.
I want to have the default route and the DNS servers in the same order; is there any option to configure this?
|
How to set the order of DNS server with two interfaces?
|
dhcpcd expects you to add routes. So setting static interface addresses doesn't stop it from being prepared for actual DHCP work.
Explicitly rejecting BOOTP (through 'require dhcp_message_type') also doesn't stop it from binding to 68...
According to the source code, binding to 68 is necessary "so that the kernel never sends an ICMP port unreachable message back to the DHCP server."
|
I'm using dhcpcd to statically set the interface addresses. However, dhcpcd always listens on port 68, even with the interfaces set to static addresses. It's probably strange to have a DHCP client that doesn't listen for BOOTP, but how do I stop dhcpcd from binding to an interface?
|
dhcpcd listening on port 68
|
In most cases, the RandR extension is used to configure display settings, so I will focus on it in this answer. This answer may not apply if you're using Wayland, the proprietary NVIDIA drivers without DRM (Direct Rendering Manager) kernel mode setting enabled, or if you have disabled the RandR extension. In those cases, calling xrandr should result in an error instead of printing the current display configuration.
While xrandr itself does not change the display configuration when you unplug a monitor, your desktop environment does. Technically, the desktop environment implements an xrandr client which handles the XRRScreenChangeNotify event and updates your display configuration when a monitor is disconnected.
Depending on the desktop environment you use, you can disable this behavior:
Gnome till 3.1.3:

gsettings set org.gnome.settings-daemon.plugins.xrandr active false

This option has been removed in newer versions. Seems to be handled by Mutter now.
Cinnamon:
Copy /etc/xdg/autostart/cinnamon-settings-daemon-xrandr.desktop to $HOME/.config/autostart. Then append the line Hidden=true to the copied file.
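In shell form, that override looks like this (a sketch; assumes the usual per-user autostart directory at $HOME/.config/autostart):

cp /etc/xdg/autostart/cinnamon-settings-daemon-xrandr.desktop ~/.config/autostart/
echo "Hidden=true" >> ~/.config/autostart/cinnamon-settings-daemon-xrandr.desktop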
Cinnamon before 3.4:

gsettings set org.cinnamon.settings-daemon.plugins.xrandr active false

Mate Desktop:

gsettings set org.mate.settings-daemon.plugins.xrandr active false

KDE:

kscreen manages the display settings in the KDE Plasma 5 desktop. There seems to be no way to disable the auto-plug behavior with a configuration setting. However, you can kill the daemon which is responsible for it, /usr/lib/kf5/kscreen_backend_launcher, to prevent any further changes to your display configuration. Note: the daemon will be restarted when you launch the KDE display settings.
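For example (a sketch; the binary path is the one named above and may differ between distributions):

pkill -f kscreen_backend_launcher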
|
I have Linux Mint Cinnamon 18.3 running with the proprietary drivers from Nvidia. The automatic display detection is great for setting up (way easier than the old days!) but as DisplayPort disconnects monitors when they're turned off, it's moving my windows around on my 3 display setup.
Is there any way to take a snapshot of the currently detected configuration and lock it in xorg to stop things changing and moving my windows around?
Alternatively, is there any way to tell the proprietary driver to ignore power states for the monitor? I'm unaware of the specifics of the DP protocol so not sure if this is an issue.
I had to disable DPMS/sleep mode for my monitor as it kept crashing on resume, so the only other option I have is to leave my monitors on all the time, which will waste a lot of power.
Happy to share any configs etc but as it's all auto-detected I wasn't sure that would be much help.
|
Stop display settings changing when turning off DP monitors on Linux Mint
|
[EDIT: I append at the end of this answer a very brief update, one year after I gave the answer here. If this update should be a second, separate answer, please lmk. Apart from this update at the end, the answer is unchanged.]
Your questions are very timely, even though you asked them 7 months ago. And you asked two questions, so you get two answers.

"Is it worth trying to get this to work or should I just send it back directly?"

A set of kernel patches to support DisplayPort over USB-C have just been published to the Linux kernel archive here. So for the moment, you need to apply patches and roll your own kernel for it to be possibly worthwhile. (This is less scary than it might seem at first, so I hope you'll consider this encouragement and not the opposite.)
A second constraint is that, according to that post in the Linux kernel archive, the patches are good for hardware platforms that use FUSB controllers. He will soon also publish support for UCSI controllers -- and I think (but am not positive) that both Intel and ASMedia controllers are of this type. To quote him: "I've tested these with a platform that has fusb302, and also with UCSI platforms. The UCSI driver will need separate support for alternate modes that I'm not including in this series. I'm still working on it." In other words, "soon."

"What is the status of DP via USB-C in Linux?"

I learned about the above from an article in Phoronix, and the article states that the hope is to merge these patches into the 4.19 kernel.
Finally, it's worth noting that for the particular case of DisplayPort over USB-C, the cable is entirely passive and there is a rather mature standard, so you can be close to certain that your cable WILL work once there is OS support for it. This is also true of Thunderbolt over USB-C, but not true of HDMI, for example: a USB-C to HDMI cable is likely to be a DP-to-HDMI adapter on the inside, with the DP side simply using the standard USB-C connector.
If you are not going to deal with kernel patches, I would guess that your cable will 'just work' sometime between 3 months to one year from now.
EDIT/UPDATE: My day-to-day machine is a Dell 7577 Inspiron laptop, running stock Arch Linux. It has a USB-C port and an HDMI port, and I run X/openbox on it with THREE side-by-side monitors: one of them is connected with a stock/standard HDMI cable, and the other with a stock/standard USB-C-to-DisplayPort cable. "Three Monitors with Arch Linux and this particular Dell laptop: It just works". It seems that the prediction I made in the last sentence of the original answer has proved to be accurate.
That being said, there are two important little caveats/nits that I would certainly consider if I were buying a machine today and wanted this configuration of monitors:
I find the whole "hybrid/mixed/dual discrete and integrated GPU" architecture to be a pain to understand and manage. It's a pain, but it is possible (barely). On Dell systems this architecture is called "Optimus", and how you set things up will have an enormous impact on the kind of video function and performance you get. I realize that I'm being very generic, but there isn't any one thing that's true for all setups. Basically: if you are looking at a machine that has BOTH an integrated GPU AND a discrete GPU, do some research to make sure the OS you intend to install can support the configuration you wish to use.
In particular, it seems that many (most? all?) modern laptops hard-wire each monitor output port to exactly ONE of the two GPUs. So, for example, if the laptop's built-in LCD display is hard-wired to the integrated GPU, then any time you use the discrete NVIDIA or Radeon GPU with an application, each frame will be copied over to the integrated GPU at the end in order to actually get displayed on the screen. It may well be that the performance gain from the discrete GPU is so enormous that this extra copy is a negligible price to pay. But it might not be; and even if it is, intensive users of discrete GPU power are often the type of person who doesn't like to pay even the most negligible of prices. I am no true expert, but I think that that's where Linux support for three monitors is today (if by "three monitors" one means using the built-in LCD screen and the two external monitor ports on the laptop simultaneously).
|
I already posted this over at reddit, but got no response until now.
I bought this cable just to find out my system doesn't do anything with it. Both lsusb and tail -f /var/log/kern.log show no difference when plugging the cable in and out. Is it worth trying to get this to work or should I just send it back directly? What is the status of DP via USB-C in Linux? (I found a lot of rather confusing questions and answers out there.)
$ lspci -d ::0c03 -k
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
Subsystem: CLEVO/KAPOK Computer Sunrise Point-LP USB 3.0 xHCI Controller
Kernel driver in use: xhci_hcd
01:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
Subsystem: CLEVO/KAPOK Computer ASM1142 USB 3.1 Host Controller
Kernel driver in use: xhci_hcd

OS: elementary OS 0.4.1 Loki
Kernel: 4.9.18-040918-generic
Hardware: Dual-Core Intel® Core™ i5-7200U CPU @ 2.50GHz
Intel Corporation Device 5916 (rev 02)
|
USB C → DisplayPort Adapter support
|
You have a laptop with two GPUs, using Nvidia's "Optimus" technology.
The low-power CPU-integrated Intel iGPU is physically wired to output to the laptop's internal display, while the HDMI output is wired to the more powerful Nvidia discrete GPU. The device ID 10de:1f91 indicates the Nvidia GPU is GeForce GTX 1650 Mobile / Max-Q. The Nvidia codename for that GPU is TU117M.
The laptop may or may not have the capability of switching the outputs between GPUs; if such a capability exists, vga_switcheroo is the name of the kernel feature that can control it. You would then need to have a driver for the Nvidia GPU installed (either the free nouveau or Nvidia's proprietary driver; since the Nvidia GPU model is pretty new, the support for it in nouveau is still very much work-in-progress), then trigger the switch to Nvidia before starting up the X server.
If there is no output switching capability (known as "muxless Optimus"), then you would need to pass the rendered image from the active GPU to the other one in order to use all the outputs. With the drivers (and any required firmware) for both GPUs installed, xrandr --listproviders should list two providers instead of one, and then you could use xrandr --setprovideroutputsource <other GPU> <active GPU> to make the outputs of the other GPU available to the active GPU.
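For example (a sketch; the provider names or indices must be taken from your own --listproviders output, so the indices below are assumptions):

xrandr --listproviders                 # note the two provider names/indices
xrandr --setprovideroutputsource 1 0   # make provider 1's outputs usable from provider 0
xrandr --auto                          # let X enable the newly visible outputs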
Unfortunately, the Nvidia proprietary driver seems to be able to participate in this sharing only in the role of the active GPU, so when using that driver, you might want to keep two different X server configurations to be used as appropriate.
One configuration would be for use with external displays (and probably with the power adapter plugged in, too), with the Nvidia GPU as the active one, feeding data through the iGPU for the laptop's internal display.
The other configuration would be appropriate when you are using battery power and don't need maximum GPU performance: in this configuration, you would use the Intel iGPU as the active one, and might want to entirely shut down the Nvidia GPU to save power (achievable with the bumblebee package). If you want some select programs to have more GPU performance, you could use the primus package to render graphics on the Nvidia GPU with no physical screen attached, and then pass the results to the Intel iGPU for display.
With Kubuntu, you probably were asked about using proprietary drivers on installation and answered "yes", so it probably set up one of the configurations described above for you. But Debian tends to be more strict about principles of Open Source software, so using proprietary drivers is not quite so seamless.
Generally, the combination of the stable release of Debian (currently Buster) and the latest-and-greatest Nvidia GPU tends not to be the easy way to happy results, because the Debian-packaged versions of Nvidia's proprietary drivers tend to lag behind Nvidia's own releases: currently the driver version in the non-free section of Debian 10 is 418.116, and the minimum version required to support GeForce GTX 1650 Mobile seems to be 430.
However, the buster-backports repository has version 440 available. To use it, you'll need to add the backports repository to your APT configuration. In short, add this line to the /etc/apt/sources.list file:

deb http://deb.debian.org/debian buster-backports non-free

Then run apt-get update as root. Now your regular package management tools should have the backports repository available, and you could use

apt-get -t buster-backports install nvidia-driver

to install a new enough version of the Nvidia proprietary driver to support your GPU.
|
OS: GNOME 3.30.2 on Debian GNU/Linux 10 (64-bit)
My laptop has no output from the HDMI port. The monitor shows "NO INPUT DETECTED". Previously I had Kubuntu installed, and before that I had Windows 10. Both worked fine, which means this is not a hardware issue.
I have tried:

Using the package "ARandR" to scan for new displays.
Plugging in different monitors and HDMI cords.
Booting the machine with the display plugged in.

SPECS:
LAPTOP: Acer Nitro 7 (AN715-51)
GPU: GeForce GTX 1650
CPU: Intel Core i7-9750H

Output of xrandr:
Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1920x1080 60.01*+ 60.01 59.97 59.96 59.93
1680x1050 59.95 59.88
1600x1024 60.17
1400x1050 59.98
1600x900 59.99 59.94 59.95 59.82
1280x1024 60.02
1440x900 59.89
1400x900 59.96 59.88
1280x960 60.00
1440x810 60.00 59.97
1368x768 59.88 59.85
1360x768 59.80 59.96
1280x800 59.99 59.97 59.81 59.91
1152x864 60.00
1280x720 60.00 59.99 59.86 59.74
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
1024x576 59.95 59.96 59.90 59.82
960x600 59.93 60.00
960x540 59.96 59.99 59.63 59.82
800x600 60.00 60.32 56.25
840x525 60.01 59.88
864x486 59.92 59.57
800x512 60.17
700x525 59.98
800x450 59.95 59.82
640x512 60.02
720x450 59.89
700x450 59.96 59.88
640x480 60.00 59.94
720x405 59.51 58.99
684x384 59.88 59.85
680x384 59.80 59.96
640x400 59.88 59.98
576x432 60.06
640x360 59.86 59.83 59.84 59.32
512x384 60.00
512x288 60.00 59.92
480x270 59.63 59.82
400x300 60.32 56.34
432x243 59.92 59.57
320x240 60.05
360x202 59.51 59.13
320x180 59.84 59.32

Output of xrandr --listproviders:

Providers: number : 1
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 0 name:modesetting

Output of lspci -nn | grep VGA:
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b]
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f91] (rev a1)

Output of aplay -l:
card 0: PCH [HDA Intel PCH], device 0: ALC255 Analog
[ALC255 Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0

Output of lshw -c video:
*-display
description: VGA compatible controller
product: NVIDIA Corporation
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:154 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:5000(size=128) memory:a4000000-a407ffff
*-display
description: VGA compatible controller
product: Intel Corporation
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 00
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:128 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:6000(size=64) memory:c0000-dffff
|
Debian 10 [Buster]: HDMI Input Not detected
|
Well, after much trial and error, I only found one configuration that consistently works (mostly), and that's using KDE 5.8, which is brand-spankin'-new.
In addition to the distros mentioned in my question, I tried the following since then:

Elementary - major probs =(
Mint (Cinnamon) - major probs =(
Ubuntu (Unity) - major probs =(
OpenSuse 42.2 beta 3 - worked! (mostly) =D
KDE neon - worked! (mostly) =D

I tried OpenSuse just on a whim, and it's the only non-Debian-based OS I even tried, I believe. At that point, I thought the others were broken because of a quirk in the Ubuntu/Debian base, until I stumbled across this blog post and realized that OpenSuse was using KDE v5.8!
At that point, I went on a mission to find an Ubuntu- or Debian-based distro that shipped with KDE 5.8, and that's when I found KDE Neon. So far I love it and it's quickly becoming my new fav (at least for a flashy/slick-desktop edition of Linux).
That blog post mentions "5.7 and beyond" as having this overhauled multi-monitor implementation, but I was having major issues with 5.7. 5.8 will occasionally give me hassles when docking, but it's never gotten to the point where I can't somehow finagle it into working without a reboot (and even better, without corrupting my whole WM/DM and reinstalling Linux!).
But with all that being said, I think it's clear that Linux has some catching up to do in this arena. Not a single distro worked as well as Windows 8, 8.1, 10, etc. do. Even for the ones that do work (kinda), I can't say they worked without adding "(kinda)". They're tolerable, and usable, but hardly ideal.
|
I have a laptop + docking station with 1x VGA and 2x Display Port (1 converted to DVI, 1 native DP) outputs on it. When docked I want it to use the 3 external displays and have the onboard display deactivated - and when undocked only use the onboard display.
I've tried:

Kubuntu 16.10 Beta 2 (updated, so I think it was more of a "nightly", but hopefully pretty stable since it's due for a stable release next week)
Mint 18 KDE
Debian Jessie (kinda - having other trouble getting it to work for various reasons)

I've tried scripting out the configurations with an xrandr shell script, and sometimes it will help, but other times it won't.
I've also tried some kernel options, and I thought they were helping but then it stopped working again so I'm not sure (I think it was "video=DP-2:d" and a couple other things).
The weird issues I've experienced are:

A monitor just won't work and there's no way I can seem to get it active without rebooting (always DP).
The image onscreen being offset so only a small corner of it is actually visible to me.
Black screen, but the mouse cursor seems to change to the I-beam or resize arrow, so I think there are windows under it.
Sometimes the screen is black and the mouse can only travel along the vertical axis (super weird).
At some point, the auto-boot to X-Windows always gets corrupted and starts booting to blank screens (and AFAIK, reinstalling the distro is the only fix). However, running startx from TTY1 does work; there are just two instances of sddm running (one of which is broken).
When the moon turns blue and the stars align just right, I can get all my displays to be on and work the way I want, but then if I undock or reboot it's all haywire again.

These issues never occur if I'm only using my VGA monitor on the docking station.
I believe the problem lies in a combination of things related to not-so-great support for DP in Linux and the docking station (dynamically switching display configs).
Kubuntu / Mint KDE are obviously very similar so I'm not sure if it was a worthwhile test. Both were KDE, both were using SDDM, and both were based on Ubuntu / Debian.
The laptop is an HP EliteBook 840 G2. It has an Intel HD Graphics 5500. I verified the xserver_xf86_video_intel package was installed.
One last (semi-)requirement: I really want it to be eye candy. That's why I was kinda sticking with KDE, so it had a nice, modern, crisp look. It will be easier to sell a switch from Win10 to Linux to the folks at work, who are currently all on Win10.
tl;dr
What I need/want specifically:

A configuration that handles a docking station with monitors connected via DisplayPort - laptop friendly
The ability to remove the laptop from the dock without needing to reboot (or reinstall Linux)
Pleasant to look at (KDE would be great)

My questions are:

Is there a laptop-friendly distro that works well with docking stations?
Are there any suggestions for improving my experience with DisplayPort and/or docking stations (configurations, utils, etc.)?
Are there any (preferably modern-looking) desktop environments that can handle this better than others?

Thanks!
|
Desktop Linux distro that is laptop friendly (w/ docking station with 1x VGA and 2x Display Port)
|
For most systems, handling which screen device to output to is dependent on the GPU or some other video display controller. All interfacing with the video device(s) on the system is handled by the Direct Rendering Manager (DRM) and the closely related Kernel Mode Setting (KMS) kernel subsystems.
From the Wikipedia page on the topic:

"In computing, the Direct Rendering Manager (DRM), a subsystem of the Linux kernel, interfaces with the GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU, and to perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel space component of the X Server's Direct Rendering Infrastructure, but since then it has been used by other graphic stack alternatives such as Wayland. User-space programs can use the DRM API to command the GPU to do hardware-accelerated 3D rendering and video decoding as well as GPGPU computing."

The official Linux docs can be found in the source repository under Documentation/gpu. Here is the GitHub link, for your convenience.
Additionally, the Wikipedia article seems quite extensive. Depending on your goals, this resource alone might be sufficient, and it is certainly easier and less technical reading than the official documentation is.
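You can also poke at the DRM subsystem from a shell before diving into the docs; each connector the kernel has detected shows up under /sys/class/drm (a sketch; connector names vary by driver and board):

for c in /sys/class/drm/card*-*; do
    printf '%s: %s\n' "${c##*/}" "$(cat "$c/status")"   # e.g. card0-HDMI-A-1: connected
done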
|
I'd like to understand how Linux detects which display devices are available (video output) and how it decides what to display on each one.
For example: if I have an embedded device with a serial line and an HDMI port, how do I make the console appear on the HDMI display instead of the serial console?
Also, if I want to use a simple OpenGL application that's linked against video drivers, what interface would OpenGL use to draw on the HDMI port?
Pointers to the proper documentation would be awesome.
|
Where do I start to understand the display controller management?
|
What does the lsusb command say about it?
If the output line for the dock includes ID 17e9:600a, then it is this one: a DisplayLink dock.
DisplayLink docks essentially provide an extra USB-connected almost-GPU that needs its own evdi driver module. The driver package also includes firmware that is needed for the USB-GPU to work, a libevdi library, and a closed-source DisplayLink Manager application.
You could get the firmware and the application by extracting the driver package and then build the driver and library from sources available on GitHub.
The ArchWiki also seems to have advice on using DisplayLink devices on Arch. As far as I've understood, the procedure should be essentially the same as with the USB-3.0 DisplayLink devices, although your dock uses the newer USB-C connection.
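Once the pieces are installed, a quick sanity check might look like this (a sketch):

lsmod | grep evdi   # is the DisplayLink kernel module loaded?
modprobe evdi       # if not, try loading it manually (as root)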
|
I've connected the monitors to the dock, and the monitors detect when they are being connected and disconnected, so there doesn't seem to be any issue with the signal as such. All the other plugs on the dock are also working perfectly (power, Ethernet, USB to keyboard and mouse, USB-C to laptop). Basically everything is working fine, but Linux is not detecting the monitors connected to the dock.
sudo dmesg --follow does not show anything when disconnecting and reconnecting a monitor.
Should this be solvable? I'm running XWayland on GNOME on an up-to-date Arch Linux 5.10.47-1-lts.
|
DisplayPort monitors via HP USB-C Universal Dock not detected by HP EliteBook 840 G7
|
First, xrandr --listmonitors shows the displays visible to your X server.
For example, you will see something like this (I have a single display; you will have multiple):

Monitors: 1
 0: +*DVI-0 1920/598x1080/336+0+0 DVI-0

Now, if I wanted to power off my DVI-0 display, I would issue:

xrandr --output DVI-0 --off

You can get a more detailed list of your display configuration with xrandr -q.
The problem is that this is a command-line tool. Doing this automatically on display connect/disconnect is possible, but in the case of GNOME + SUSE, I don't know how.
Maybe you will get a more detailed answer for that - if your question isn't closed before then.
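One generic, desktop-agnostic approach is a udev rule that runs a script on DRM hotplug events (a sketch; the script path is hypothetical, and the script itself must export DISPLAY and XAUTHORITY before it can call xrandr):

# /etc/udev/rules.d/95-monitor-hotplug.rules (hypothetical path)
ACTION=="change", SUBSYSTEM=="drm", RUN+="/usr/local/bin/monitor-hotplug.sh"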
|
My laptop has a 4k display. When I plug in my Thunderbolt 3 to DisplayPort adapter, which connects to 3 1920x1200 displays over MST, it fails to properly connect because it exceeds the maximum resolution permitted by my GPU.
A (hopeful) solution to this is disabling the built-in display and then connecting to the external displays. However, I can't seem to pull it off properly through the display manager, and it usually ends in me crashing things.
Summary:
Laptop -> Thunderbolt 3 to display port adapter -> 3 1920x1200 displays over MST/Daisy Chaining
I am seeking to disable the built-in display when external displays are detected and reenable it when external displays are unplugged.
|
Disable built in display when external is provided in Gnome
|
My findings

That HP G5 dock (spec) does only USB-C. The video signal will be transmitted with that famous USB-C Alternate Mode for DisplayPort; the Thunderbolt 3 functionality of the laptop (same port, different mode of operation) is not used.

The triple 1680x1050 resolution is a strong hint that the laptop is limited to DisplayPort 1.2 over USB-C and uses only 2 lanes. That allows a bandwidth of 8.64Gb/s, for which a triple 1680x1050 is a tight fit, but 60Hz is possible. What makes you so sure that CVT-RBv2 timings would not work? The error xrandr: Configure crtc 0 failed indicates that the GPU could not be configured to do what you want. If the monitor were actually not able to handle the mode, it could only indicate that on its own screen, flicker strangely, or go dark, etc. For generating useful modelines, cvt is useless for this case (and many others). Further, its -r switch generates “v1” timings only.

The hotplug issue seems to be an unrelated bug. If you look at the ports without the dock, you can see the odd DP-1-3 port. I’m not sure what it represents, especially without DP-1-1 and DP-1-2 around. I believe that this odd entry is the same as the DP-1-6 entry in the coldplug case (the number always being +3 of the last DP-1-x). In the hotplug event, the enumeration over DP-1 clashes after DP-1-1 and DP-1-2 with that spurious DP-1-3, fails to register with that name, and does not attempt any other name, it seems, making it impossible to see/handle the last video port of the dock.

How to proceed
60Hz refresh
(The following text shows ways to get things going. Read and understand and adapt to your environment. Copy&paste might not work.)
Let’s get all monitors to 60Hz. First, make sure you’re coldplugged and that all 3 external monitors work in some resolution. To avoid bottleneck issues or hitting other limits of the GPU (we want to test the monitors!), work on the internal screen and switch all others off:

for no in 1 2 3; do xrandr --output DP-1-$no --off; done

Now let’s add “my” modelines:¹
xrandr --newmode h160 119 1680 1728 1760 1840 1050 1053 1059 1080 +HSync -VSync
xrandr --newmode h80 114.048 1680 1688 1720 1760 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h60 112.752 1680 1688 1710 1740 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h50 112.104 1680 1688 1705 1730 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h40 111.456 1680 1688 1702 1720 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h30 110.808 1680 1682 1690 1710 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h25 110.484 1680 1682 1690 1705 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h20 110.16 1680 1682 1690 1700 1050 1066 1074 1080 +HSync -VSync
xrandr --newmode h12 109.642 1680 1684 1688 1692 1050 1066 1074 1080 +HSync -VSync

Add them to your preferred monitor now (and to the others eventually):

for name in h160 h80 h60 h50 h40 h30 h25 h20 h12; do xrandr --addmode DP-1-1 $name; done

Now test them!

xrandr --output DP-1-1 --mode h160

h160 should work. It’s the boring CVT-RB timing from cvt -r 1680 1050 60. Then test h80. This is CVT-RBv2. I have four mediocre old monitors; they can all do that. Further:

on h60 my crappy Medion brand Full-HD monitor gets an unsteady image
h30 is the last modeline for this BenQ brand Full-HD monitor
h20 HP Full-HD monitor (ok)
h12 Dell office 1680x1050 monitor (working!)

Then try enabling all three on the last modeline you have working.
60Hz alternative solution
As mentioned, the bandwidth limit is also dictated by the USB-C Displayport Alternate Mode version and setup. For more bandwidth use DP 1.3+ and/or 4 lanes (maximum) instead of 2 of the USB-C wiring. The former can not be changed on the same hardware, the latter sometimes can in the firmware setup utility (“BIOS”), look for “USB-C High Resolution Mode” or something... In the 4-lane-mode no USB 3.x is available next to the DisplayPort video signal, only USB 2.0.
hotplugging the dock
For the same 3 states (no dock, hotplugged, coldplugged), please send the output of
(cd /sys/class/drm; ls */edid)

I have

card0-DP-1/edid     card0-DVI-I-2/edid
card0-DVI-I-1/edid  card0-HDMI-A-1/edid

E.g. totally different names compared to xrandr | egrep '^[^ ]' | grep -v 'Screen 0:' | sed -e 's/ (.*//':

DisplayPort-0 connected primary 1680x1050+0+1080
HDMI-0 connected 1680x1050+1920+1080
DVI-0 connected 1680x1050+3840+1080
DVI-1 connected 1680x1050+1920+0

Maybe it sheds some light on what is what. The Razer Blade 15" Base (2018) is supposed to have the internal display, 1×Thunderbolt3/USB-C, 1×HDMI 2.0 and 1×Mini DisplayPort 1.4.
That explains eDP-1, HDMI-1-1 (albeit not its naming), and DP-1 is related to USB-C, as we know. That leaves DP-2 and DP-1-3. I guess DP-2 is the Mini DisplayPort. And maybe DP-1-3 is the Thunderbolt side of that dual-use Thunderbolt/USB port. If there is an interdependency, let’s confirm it by enabling them first individually, then together. Disconnect the dock and switch --off all outputs. Then:

xrandr --addmode DP-1 0x74     # some 800x600 standard modeline on my system
xrandr --output DP-1 --mode 0x74   # does this work?
xrandr --output DP-1 --off     # afterwards

This way you can force the GPU to output something there. Try DP-1-3 next. Does it work? Can you enable both at the same time? Also on different (low-res) resolutions (e.g. no mirroring)? Could you borrow a Thunderbolt to DP converter? Check if this adapter uses DP-1-3.
Anyway... if this all turns out to be true, it seems that the X server made a bad call in handing out conflicting names for some kind of internal output splitter. It would be interesting to see if you could just restart the X server after hotplugging the dock, without a full reboot. If that turns out to work, then the kernel GPU driver does not seem to provoke the problem, but indeed the enumeration in X11.

¹ Pixel clock calculated with Tom's video timing calculator and hand-molded into a modeline.
|
I have three identical 1680x1050 monitors and am trying to use them via the “HP G5 USB-C dock” on my Razer Blade 15" 2018 laptop and its Thunderbolt 3 port. With xrandr I can get all displays working simultaneously sometimes, but never in the resolutions I want to.
If I build modelines myself, I can run all three at 1680x1050 with 49Hz. Is 60Hz not possible? Maybe it is a pixel clock issue.

xrandr | egrep '^[^ ]' | grep -v 'Screen 0:' | sed -e 's/ (.*//'

prints this without the dock connected:
eDP-1 connected primary 1920x1080+0+0
DP-1 disconnected
DP-2 disconnected
HDMI-1-1 disconnected
DP-1-3 disconnected

after hotplugging the dock I see:
eDP-1 connected primary 1920x1080+0+0
DP-1 disconnected
DP-2 disconnected
DP-1-1 connected
DP-1-2 connected
DP-1-3 disconnected
HDMI-1-1 disconnected

booting with the dock shows:
eDP-1 connected primary 1920x1080+0+0
DP-1 disconnected
DP-2 disconnected
DP-1-1 connected
DP-1-2 connected
DP-1-3 connected
HDMI-1-1 disconnected
DP-1-6 disconnected

With the dock hotplugged I can’t configure xrandr --output DP-1-3 --auto. I get xrandr: cannot find preferred mode. Booting with it works, though.
Trying to enable the third monitor with 1680x1050@60 produces xrandr: Configure crtc 0 failed. I even used a modeline generated with cvt -r. I think the monitors are too old for CVT-RBv2.
[asking for a friend @bain]
|
How to get xrandr to output three 1680x1050 video signals over this USB-C dock at 60Hz?
|
Linux supports DisplayPort just as well as any other digital display output. As long as you have your graphics drivers properly installed, it should behave just as it would with DVI-D. No special procedures should be required, unless the monitor you buy just happens to be screwy. Buying a screwy monitor can usually be avoided by taking into account user reviews when looking for a monitor.
Regarding the adapter suggestion, VGA can only support up to 1080p resolutions. Therefore, you won't be able to use a monitor larger than that with a DisplayPort-to-VGA adapter. Besides that, a DisplayPort to VGA adapter will by necessity be an active digital-to-analog converter, meaning it's likely to be expensive, and if you don't get a good one you may experience visual artifacts or signal distortion.
As such, I'd advise against using a converter, because of the price disadvantage (and the increase in system complexity, leaving more points of failure).
|
I have a laptop that only has a Mini DisplayPort (no VGA or HDMI out and no docking station) for output. I'd like to get an external monitor for it, but I've never used DisplayPort before. I'm just going to use the external monitor for basic office work, no gaming or videos. When I'm using the monitor I'll probably use just the monitor and turn off the laptop display. The monitor will stay in one room but the laptop gets moved between that room and another room a couple times a day.
How well do DisplayPort monitors work with Linux? Will things work the way I expect (the monitor will turn on when I plug it in and off when I unplug it, etc.)?
It seems like most monitors don't support DisplayPort; if I get a monitor that doesn't support DisplayPort, and use a DisplayPort-to-VGA adapter, what kinds of features (if any) will I lose?
Is there anything else I should be aware of?
I'm running KDE on Debian Stable 64-bit. I'm okay installing non-free drivers but prefer not to.
|
display port to VGA adapter
|
The solution was to install Laptop Mode Tools.
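For reference, on Arch (the asker's distribution) that amounts to roughly the following (a sketch; laptop-mode-tools is typically built from the AUR, so the helper used here is an assumption):

yay -S laptop-mode-tools
systemctl enable --now laptop-mode.service   # the unit shipped by the package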
|
My laptop is connected to an external monitor via DisplayPort (in a port on the docking station). When I boot my laptop while it is connected to the docking station, everything works fine, but if I detach the laptop from the docking station and reattach it, the external monitor does not wake up. It does recognize that something happened, as it displays a power-saving-mode message, but it remains black. The laptop itself thinks it's connected, as it switches to multi-monitor mode.
I'm running Arch Linux on a Dell Latitude E7440 with kernel version 3.16.1-1-ARCH and KDE 4.14.0. I have the xf86-video-intel driver installed. I'm using kscreen for monitor management, but even if I go to the system settings to set it up manually, I can't wake up the screen.
When I connected the same screen via DVI it worked fine.
|
DisplayPort won't wake up screen when laptop placed in docking station
|
I just tried this solution again, but this time I changed /etc/bumblebee/xorg.conf.nvidia before running optirun intel-virtual-output, and it worked. The ArchWiki didn't mention that this must be done beforehand, so I had only changed the configuration files afterwards, and re-running optirun intel-virtual-output probably didn't work because there was already an instance of it running.
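So the working sequence was, in effect (a sketch; the concrete xorg.conf.nvidia edits are the ones the ArchWiki prescribes and are not reproduced here):

$EDITOR /etc/bumblebee/xorg.conf.nvidia   # apply the configuration changes first
pkill -f intel-virtual-output             # make sure no stale instance is still running
optirun intel-virtual-output              # then start it fresh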
|
I have a Dell Latitude e6420 laptop with a discrete Nvidia NVS 4200m graphics card in addition to an Intel HD 3000.
When at home, I use a docking station which is connected to another monitor. I used to have my monitor connected to the docking station via DVI, which worked perfectly fine. But as I recently got a new monitor (Dell P2418D) with a resolution that is too high for DVI, I attempted to connect it to the docking station via DisplayPort.
When using the new monitor (connected to the docking station via DisplayPort) under Windows 10 (I have a Manjaro/Windows dual-boot system), it works fine. However, if I try to use it under Linux, it is recognized by xrandr, but the monitor doesn't detect any input signal:
Screen 0: minimum 320 x 200, current 2560 x 1440, maximum 8192 x 8192
LVDS-1 connected primary 1600x900+0+0 (normal left inverted right x axis y axis) 310mm x 174mm
1600x900 60.06*+ 59.99 59.94 59.95 59.82 40.32
1400x900 59.96 59.88
1440x810 60.00 59.97
1368x768 59.88 59.85
1280x800 59.99 59.97 59.81 59.91
1280x720 60.00 59.99 59.86 59.74
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
1024x576 59.95 59.96 59.90 59.82
960x600 59.93 60.00
960x540 59.96 59.99 59.63 59.82
800x600 60.00 60.32 56.25
840x525 60.01 59.88
864x486 59.92 59.57
700x525 59.98
800x450 59.95 59.82
640x512 60.02
700x450 59.96 59.88
640x480 60.00 59.94
720x405 59.51 58.99
684x384 59.88 59.85
640x400 59.88 59.98
640x360 59.86 59.83 59.84 59.32
512x384 60.00
512x288 60.00 59.92
480x270 59.63 59.82
400x300 60.32 56.34
432x243 59.92 59.57
320x240 60.05
360x202 59.51 59.13
320x180 59.84 59.32
VGA-1 disconnected (normal left inverted right x axis y axis)
LVDS-1-2 disconnected (normal left inverted right x axis y axis)
VGA-1-2 disconnected (normal left inverted right x axis y axis)
HDMI-1-1 disconnected (normal left inverted right x axis y axis)
DP-1-1 connected 2560x1440+0+0 (normal left inverted right x axis y axis) 526mm x 296mm
2560x1440 59.95*+
1920x1440 60.00
1856x1392 60.01
1792x1344 75.00 60.01
2048x1152 59.90 59.91
1920x1200 59.88 59.95
2048x1080 60.00 24.00
1920x1080 59.97 59.96 60.00 50.00 59.94 59.93 24.00 23.98
1920x1080i 60.00 50.00 59.94
1600x1200 75.00 70.00 65.00 60.00
1680x1050 59.95 59.88
1400x1050 74.76 59.98
1600x900 59.99 59.94 59.95 59.82
1280x1024 75.02 60.02
1400x900 59.96 59.88
1280x960 60.00
1440x810 60.00 59.97
1368x768 59.88 59.85
1280x800 59.99 59.97 59.81 59.91
1152x864 75.00
1280x720 60.00 59.99 59.86 60.00 50.00 59.94 59.74
1024x768 75.05 60.04 75.03 70.07 60.00
960x720 75.00 60.00
928x696 75.00 60.05
896x672 75.05 60.01
1024x576 59.95 59.96 59.90 59.82
960x600 59.93 60.00
832x624 74.55
960x540 59.96 59.99 59.63 59.82
800x600 75.00 70.00 65.00 60.00 72.19 75.00 60.32 56.25
840x525 60.01 59.88
864x486 59.92 59.57
720x576 50.00
720x576i 50.00
700x525 74.76 59.98
800x450 59.95 59.82
720x480 60.00 59.94
720x480i 60.00 59.94
640x512 75.02 60.02
700x450 59.96 59.88
640x480 60.00 75.00 72.81 75.00 60.00 59.94
720x405 59.51 58.99
720x400 70.08
684x384 59.88 59.85
640x400 59.88 59.98
576x432 75.00
640x360 59.86 59.83 59.84 59.32
512x384 75.03 70.07 60.00
512x288 60.00 59.92
416x312 74.66
480x270 59.63 59.82
400x300 72.19 75.12 60.32 56.34
432x243 59.92 59.57
320x240 72.81 75.00 60.05
360x202 59.51 59.13
320x180 59.84 59.32
DP-1-2 disconnected (normal left inverted right x axis y axis)
1600x900 (0x45) 246.000MHz -HSync +VSync DoubleScan
h: width 1600 start 1728 end 1900 total 2200 skew 0 clock 111.82KHz
v: height 900 start 901 end 904 total 932 clock 59.99Hz
1600x900 (0x46) 186.500MHz +HSync -VSync DoubleScan
h: width 1600 start 1624 end 1640 total 1680 skew 0 clock 111.01KHz
v: height 900 start 901 end 904 total 926 clock 59.94Hz
1600x900 (0x47) 118.250MHz -HSync +VSync
h: width 1600 start 1696 end 1856 total 2112 skew 0 clock 55.99KHz
v: height 900 start 903 end 908 total 934 clock 59.95Hz
1600x900 (0x48) 97.500MHz +HSync -VSync
h: width 1600 start 1648 end 1680 total 1760 skew 0 clock 55.40KHz
v: height 900 start 903 end 908 total 926 clock 59.82Hz
1400x900 (0x4a) 103.500MHz -HSync +VSync
h: width 1400 start 1480 end 1624 total 1848 skew 0 clock 56.01KHz
v: height 900 start 903 end 913 total 934 clock 59.96Hz
1400x900 (0x4b) 86.500MHz +HSync -VSync
h: width 1400 start 1448 end 1480 total 1560 skew 0 clock 55.45KHz
v: height 900 start 903 end 913 total 926 clock 59.88Hz
1440x810 (0x4c) 198.125MHz -HSync +VSync DoubleScan
h: width 1440 start 1548 end 1704 total 1968 skew 0 clock 100.67KHz
v: height 810 start 811 end 814 total 839 clock 60.00Hz
1440x810 (0x4d) 151.875MHz +HSync -VSync DoubleScan
h: width 1440 start 1464 end 1480 total 1520 skew 0 clock 99.92KHz
v: height 810 start 811 end 814 total 833 clock 59.97Hz
1368x768 (0x4e) 85.250MHz -HSync +VSync
h: width 1368 start 1440 end 1576 total 1784 skew 0 clock 47.79KHz
v: height 768 start 771 end 781 total 798 clock 59.88Hz
1368x768 (0x4f) 72.250MHz +HSync -VSync
h: width 1368 start 1416 end 1448 total 1528 skew 0 clock 47.28KHz
v: height 768 start 771 end 781 total 790 clock 59.85Hz
1280x800 (0x50) 174.250MHz -HSync +VSync DoubleScan
h: width 1280 start 1380 end 1516 total 1752 skew 0 clock 99.46KHz
v: height 800 start 801 end 804 total 829 clock 59.99Hz
1280x800 (0x51) 134.250MHz +HSync -VSync DoubleScan
h: width 1280 start 1304 end 1320 total 1360 skew 0 clock 98.71KHz
v: height 800 start 801 end 804 total 823 clock 59.97Hz
1280x800 (0x52) 83.500MHz -HSync +VSync
h: width 1280 start 1352 end 1480 total 1680 skew 0 clock 49.70KHz
v: height 800 start 803 end 809 total 831 clock 59.81Hz
1280x800 (0x53) 71.000MHz +HSync -VSync
h: width 1280 start 1328 end 1360 total 1440 skew 0 clock 49.31KHz
v: height 800 start 803 end 809 total 823 clock 59.91Hz
1280x720 (0x54) 156.125MHz -HSync +VSync DoubleScan
h: width 1280 start 1376 end 1512 total 1744 skew 0 clock 89.52KHz
v: height 720 start 721 end 724 total 746 clock 60.00Hz
1280x720 (0x55) 120.750MHz +HSync -VSync DoubleScan
h: width 1280 start 1304 end 1320 total 1360 skew 0 clock 88.79KHz
v: height 720 start 721 end 724 total 740 clock 59.99Hz
1280x720 (0x56) 74.500MHz -HSync +VSync
h: width 1280 start 1344 end 1472 total 1664 skew 0 clock 44.77KHz
v: height 720 start 723 end 728 total 748 clock 59.86Hz
1280x720 (0x57) 63.750MHz +HSync -VSync
h: width 1280 start 1328 end 1360 total 1440 skew 0 clock 44.27KHz
v: height 720 start 723 end 728 total 741 clock 59.74Hz
1024x768 (0x58) 133.475MHz -HSync +VSync DoubleScan
h: width 1024 start 1100 end 1212 total 1400 skew 0 clock 95.34KHz
v: height 768 start 768 end 770 total 794 clock 60.04Hz
1024x768 (0x59) 65.000MHz -HSync -VSync
h: width 1024 start 1048 end 1184 total 1344 skew 0 clock 48.36KHz
v: height 768 start 771 end 777 total 806 clock 60.00Hz
960x720 (0x5a) 117.000MHz -HSync +VSync DoubleScan
h: width 960 start 1024 end 1128 total 1300 skew 0 clock 90.00KHz
v: height 720 start 720 end 722 total 750 clock 60.00Hz
928x696 (0x5b) 109.150MHz -HSync +VSync DoubleScan
h: width 928 start 976 end 1088 total 1264 skew 0 clock 86.35KHz
v: height 696 start 696 end 698 total 719 clock 60.05Hz
896x672 (0x5c) 102.400MHz -HSync +VSync DoubleScan
h: width 896 start 960 end 1060 total 1224 skew 0 clock 83.66KHz
v: height 672 start 672 end 674 total 697 clock 60.01Hz
1024x576 (0x5d) 98.500MHz -HSync +VSync DoubleScan
h: width 1024 start 1092 end 1200 total 1376 skew 0 clock 71.58KHz
v: height 576 start 577 end 580 total 597 clock 59.95Hz
1024x576 (0x5e) 78.375MHz +HSync -VSync DoubleScan
h: width 1024 start 1048 end 1064 total 1104 skew 0 clock 70.99KHz
v: height 576 start 577 end 580 total 592 clock 59.96Hz
1024x576 (0x5f) 46.500MHz -HSync +VSync
h: width 1024 start 1064 end 1160 total 1296 skew 0 clock 35.88KHz
v: height 576 start 579 end 584 total 599 clock 59.90Hz
1024x576 (0x60) 42.000MHz +HSync -VSync
h: width 1024 start 1072 end 1104 total 1184 skew 0 clock 35.47KHz
v: height 576 start 579 end 584 total 593 clock 59.82Hz
960x600 (0x61) 96.625MHz -HSync +VSync DoubleScan
h: width 960 start 1028 end 1128 total 1296 skew 0 clock 74.56KHz
v: height 600 start 601 end 604 total 622 clock 59.93Hz
960x600 (0x62) 77.000MHz +HSync -VSync DoubleScan
h: width 960 start 984 end 1000 total 1040 skew 0 clock 74.04KHz
v: height 600 start 601 end 604 total 617 clock 60.00Hz
960x540 (0x63) 86.500MHz -HSync +VSync DoubleScan
h: width 960 start 1024 end 1124 total 1288 skew 0 clock 67.16KHz
v: height 540 start 541 end 544 total 560 clock 59.96Hz
960x540 (0x64) 69.250MHz +HSync -VSync DoubleScan
h: width 960 start 984 end 1000 total 1040 skew 0 clock 66.59KHz
v: height 540 start 541 end 544 total 555 clock 59.99Hz
960x540 (0x65) 40.750MHz -HSync +VSync
h: width 960 start 992 end 1088 total 1216 skew 0 clock 33.51KHz
v: height 540 start 543 end 548 total 562 clock 59.63Hz
960x540 (0x66) 37.250MHz +HSync -VSync
h: width 960 start 1008 end 1040 total 1120 skew 0 clock 33.26KHz
v: height 540 start 543 end 548 total 556 clock 59.82Hz
800x600 (0x67) 81.000MHz +HSync +VSync DoubleScan
h: width 800 start 832 end 928 total 1080 skew 0 clock 75.00KHz
v: height 600 start 600 end 602 total 625 clock 60.00Hz
800x600 (0x68) 40.000MHz +HSync +VSync
h: width 800 start 840 end 968 total 1056 skew 0 clock 37.88KHz
v: height 600 start 601 end 605 total 628 clock 60.32Hz
800x600 (0x69) 36.000MHz +HSync +VSync
h: width 800 start 824 end 896 total 1024 skew 0 clock 35.16KHz
v: height 600 start 601 end 603 total 625 clock 56.25Hz
840x525 (0x6a) 73.125MHz -HSync +VSync DoubleScan
h: width 840 start 892 end 980 total 1120 skew 0 clock 65.29KHz
v: height 525 start 526 end 529 total 544 clock 60.01Hz
840x525 (0x6b) 59.500MHz +HSync -VSync DoubleScan
h: width 840 start 864 end 880 total 920 skew 0 clock 64.67KHz
v: height 525 start 526 end 529 total 540 clock 59.88Hz
864x486 (0x6c) 32.500MHz -HSync +VSync
h: width 864 start 888 end 968 total 1072 skew 0 clock 30.32KHz
v: height 486 start 489 end 494 total 506 clock 59.92Hz
864x486 (0x6d) 30.500MHz +HSync -VSync
h: width 864 start 912 end 944 total 1024 skew 0 clock 29.79KHz
v: height 486 start 489 end 494 total 500 clock 59.57Hz
700x525 (0x6e) 61.000MHz +HSync +VSync DoubleScan
h: width 700 start 744 end 820 total 940 skew 0 clock 64.89KHz
v: height 525 start 526 end 532 total 541 clock 59.98Hz
800x450 (0x6f) 59.125MHz -HSync +VSync DoubleScan
h: width 800 start 848 end 928 total 1056 skew 0 clock 55.99KHz
v: height 450 start 451 end 454 total 467 clock 59.95Hz
800x450 (0x70) 48.750MHz +HSync -VSync DoubleScan
h: width 800 start 824 end 840 total 880 skew 0 clock 55.40KHz
v: height 450 start 451 end 454 total 463 clock 59.82Hz
640x512 (0x71) 54.000MHz +HSync +VSync DoubleScan
h: width 640 start 664 end 720 total 844 skew 0 clock 63.98KHz
v: height 512 start 512 end 514 total 533 clock 60.02Hz
700x450 (0x72) 51.750MHz -HSync +VSync DoubleScan
h: width 700 start 740 end 812 total 924 skew 0 clock 56.01KHz
v: height 450 start 451 end 456 total 467 clock 59.96Hz
700x450 (0x73) 43.250MHz +HSync -VSync DoubleScan
h: width 700 start 724 end 740 total 780 skew 0 clock 55.45KHz
v: height 450 start 451 end 456 total 463 clock 59.88Hz
640x480 (0x74) 54.000MHz +HSync +VSync DoubleScan
h: width 640 start 688 end 744 total 900 skew 0 clock 60.00KHz
v: height 480 start 480 end 482 total 500 clock 60.00Hz
640x480 (0x75) 25.175MHz -HSync -VSync
h: width 640 start 656 end 752 total 800 skew 0 clock 31.47KHz
v: height 480 start 490 end 492 total 525 clock 59.94Hz
720x405 (0x76) 22.500MHz -HSync +VSync
h: width 720 start 744 end 808 total 896 skew 0 clock 25.11KHz
v: height 405 start 408 end 413 total 422 clock 59.51Hz
720x405 (0x77) 21.750MHz +HSync -VSync
h: width 720 start 768 end 800 total 880 skew 0 clock 24.72KHz
v: height 405 start 408 end 413 total 419 clock 58.99Hz
684x384 (0x78) 42.625MHz -HSync +VSync DoubleScan
h: width 684 start 720 end 788 total 892 skew 0 clock 47.79KHz
v: height 384 start 385 end 390 total 399 clock 59.88Hz
684x384 (0x79) 36.125MHz +HSync -VSync DoubleScan
h: width 684 start 708 end 724 total 764 skew 0 clock 47.28KHz
v: height 384 start 385 end 390 total 395 clock 59.85Hz
640x400 (0x7a) 41.750MHz -HSync +VSync DoubleScan
h: width 640 start 676 end 740 total 840 skew 0 clock 49.70KHz
v: height 400 start 401 end 404 total 415 clock 59.88Hz
640x400 (0x7b) 35.500MHz +HSync -VSync DoubleScan
h: width 640 start 664 end 680 total 720 skew 0 clock 49.31KHz
v: height 400 start 401 end 404 total 411 clock 59.98Hz
640x360 (0x7c) 37.250MHz -HSync +VSync DoubleScan
h: width 640 start 672 end 736 total 832 skew 0 clock 44.77KHz
v: height 360 start 361 end 364 total 374 clock 59.86Hz
640x360 (0x7d) 31.875MHz +HSync -VSync DoubleScan
h: width 640 start 664 end 680 total 720 skew 0 clock 44.27KHz
v: height 360 start 361 end 364 total 370 clock 59.83Hz
640x360 (0x7e) 18.000MHz -HSync +VSync
h: width 640 start 664 end 720 total 800 skew 0 clock 22.50KHz
v: height 360 start 363 end 368 total 376 clock 59.84Hz
640x360 (0x7f) 17.750MHz +HSync -VSync
h: width 640 start 688 end 720 total 800 skew 0 clock 22.19KHz
v: height 360 start 363 end 368 total 374 clock 59.32Hz
512x384 (0x80) 32.500MHz -HSync -VSync DoubleScan
h: width 512 start 524 end 592 total 672 skew 0 clock 48.36KHz
v: height 384 start 385 end 388 total 403 clock 60.00Hz
512x288 (0x81) 23.250MHz -HSync +VSync DoubleScan
h: width 512 start 532 end 580 total 648 skew 0 clock 35.88KHz
v: height 288 start 289 end 292 total 299 clock 60.00Hz
512x288 (0x82) 21.000MHz +HSync -VSync DoubleScan
h: width 512 start 536 end 552 total 592 skew 0 clock 35.47KHz
v: height 288 start 289 end 292 total 296 clock 59.92Hz
480x270 (0x83) 20.375MHz -HSync +VSync DoubleScan
h: width 480 start 496 end 544 total 608 skew 0 clock 33.51KHz
v: height 270 start 271 end 274 total 281 clock 59.63Hz
480x270 (0x84) 18.625MHz +HSync -VSync DoubleScan
h: width 480 start 504 end 520 total 560 skew 0 clock 33.26KHz
v: height 270 start 271 end 274 total 278 clock 59.82Hz
400x300 (0x85) 20.000MHz +HSync +VSync DoubleScan
h: width 400 start 420 end 484 total 528 skew 0 clock 37.88KHz
v: height 300 start 300 end 302 total 314 clock 60.32Hz
400x300 (0x86) 18.000MHz +HSync +VSync DoubleScan
h: width 400 start 412 end 448 total 512 skew 0 clock 35.16KHz
v: height 300 start 300 end 301 total 312 clock 56.34Hz
432x243 (0x87) 16.250MHz -HSync +VSync DoubleScan
h: width 432 start 444 end 484 total 536 skew 0 clock 30.32KHz
v: height 243 start 244 end 247 total 253 clock 59.92Hz
432x243 (0x88) 15.250MHz +HSync -VSync DoubleScan
h: width 432 start 456 end 472 total 512 skew 0 clock 29.79KHz
v: height 243 start 244 end 247 total 250 clock 59.57Hz
320x240 (0x89) 12.587MHz -HSync -VSync DoubleScan
h: width 320 start 328 end 376 total 400 skew 0 clock 31.47KHz
v: height 240 start 245 end 246 total 262 clock 60.05Hz
360x202 (0x8a) 11.250MHz -HSync +VSync DoubleScan
h: width 360 start 372 end 404 total 448 skew 0 clock 25.11KHz
v: height 202 start 204 end 206 total 211 clock 59.51Hz
360x202 (0x8b) 10.875MHz +HSync -VSync DoubleScan
h: width 360 start 384 end 400 total 440 skew 0 clock 24.72KHz
v: height 202 start 204 end 206 total 209 clock 59.13Hz
320x180 (0x8c) 9.000MHz -HSync +VSync DoubleScan
h: width 320 start 332 end 360 total 400 skew 0 clock 22.50KHz
v: height 180 start 181 end 184 total 188 clock 59.84Hz
320x180 (0x8d) 8.875MHz +HSync -VSync DoubleScan
h: width 320 start 344 end 360 total 400 skew 0 clock 22.19KHz
v: height 180 start 181 end 184 total 187 clock 59.32Hz

(The external monitor is DP-1-1.)
As I suspected the graphics driver to be the problem, I used mhwd to switch from "video-linux" (which is basically nouveau) to the proprietary driver "video-hybrid-intel-nvidia-390xx-bumblebee".
However, using this driver, the external monitor (along with several other outputs) was no longer shown. The output of xrandr:
Screen 0: minimum 8 x 8, current 1600 x 900, maximum 32767 x 32767
LVDS1 connected primary 1600x900+0+0 (normal left inverted right x axis y axis) 310mm x 170mm
1600x900 60.06*+ 40.32
1400x900 59.88
1368x768 60.00 59.88 59.85
1280x800 59.81 59.91
1280x720 59.86 60.00 59.74
1024x768 60.00
1024x576 60.00 59.90 59.82
960x540 60.00 59.63 59.82
800x600 60.32 56.25
864x486 60.00 59.92 59.57
800x450 60.00
640x480 59.94
720x405 59.51 60.00 58.99
640x360 59.84 59.32 60.00
VGA1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)

Using another driver (video-nvidia-390xx), the external monitor worked fine, but the laptop's own monitor (which was LVDS-1/LVDS1 in the previous xrandr outputs) was no longer shown:
Screen 0: minimum 8 x 8, current 4608 x 1440, maximum 16384 x 16384
VGA-0 disconnected (normal left inverted right x axis y axis)
LVDS-0 disconnected (normal left inverted right x axis y axis)
DP-0 disconnected (normal left inverted right x axis y axis)
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-2 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 526mm x 296mm panning 4608x1440+0+0
2560x1440 59.95*+
2048x1080 60.00 24.00
1920x1200 59.88
1920x1080 60.00 59.94 50.00 23.98
1680x1050 59.95
1600x1200 60.00
1280x1024 75.02 60.02
1280x800 59.81
1280x720 59.94 50.00
1152x864 75.00
1024x768 75.03 60.00
800x600 75.00 60.32
720x576 50.00
720x480 59.94
640x480 75.00 59.94 59.93
DP-3 disconnected (normal left inverted right x axis y axis)

(The external monitor is DP-2.)
This confirmed my assumption that I was dealing with a driver issue, but didn't solve my problem (as I need to use the laptop's monitor for obvious reasons).
Furthermore, this solution using intel-virtual-output (w/ the "video-hybrid-intel-nvidia-390xx-bumblebee" driver) didn't work either.
Another solution I tried was using bbswitch (w/ the "video-hybrid-intel-nvidia-390xx-bumblebee" driver) to turn the discrete graphics card on and then running sudo systemctl restart display-manager but the external monitor still didn't show up in xrandr.
How can I get both monitors (external and laptop monitor) to work at the same time?
Why is the monitor recognized using the "video-linux" driver but I can't get it to receive any input?
Why does the monitor work with the proprietary "video-nvidia-390xx" driver but not with the (also proprietary) "video-hybrid-intel-nvidia-390xx-bumblebee" driver?
The output of inxi -Fxxxz with the "video-linux" driver
inxi -G w/ "video-hybrid-intel-nvidia-390xx-bumblebee"
inxi -G w/ "video-nvidia-390xx"
|
Unable to get Displayport output working with a Nvidia Optimus Laptop
|
Solved by buying a DisplayPort cable (not mini).
Maybe the interface on the monitor was damaged.
|
Suddenly one of three monitors (DELL U2414H) stopped working. In nvidia-settings I can see 4 lanes instead of 2.
It is the only visible difference between working and not working monitor configuration.
Working: [nvidia-settings screenshot]

Not working: [nvidia-settings screenshot]

Sometimes, I have a similar issue with the wrong number of lanes, but 1 lane instead of 2. It happens after turning the monitor off and on, and is OS- and driver-independent. The only solution I found is re-plugging the DP several times. With some probability it works.
It does not help now, though (when I see 4 lanes instead of 2).
What can I do to solve this? Is it a problem with signal coming from the video card?
Drivers version: Linux x64 375.82
There is also a projector connected via HDMI as the 4th monitor.
Full screen map: [screenshot]
|
Wrong number of lanes via DisplayPort, probably Nvidia GTX 1080 driver issue [closed]
|
It is the lack of support for the GPU in the kernel (and likely also in the X.Org video driver) which you need to somehow solve. Proper support for Sky Lake based GPUs in the i915 kernel driver should be available from kernel 4.4 on. Then again, I myself still couldn't get an Intel GPU with device code 1912 working in Debian Jessie under 4.4.5, due to something with possibly the X.Org version in Jessie (I haven't tried any later kernel yet, though). So it'll be either a major upgrade of the system, or a dedicated GPU.
Getting a used GPU from a common, well-known brand which your system has support for could be the easiest way out, but I'm not sure whether you could find one that specifically has DisplayPort available.
If you don't want to upgrade the system, you could try just taking a recent kernel and compiling that manually with all the required options to support the GPU. The possible problem with this approach is that it might be hard to get the system to boot with the new kernel, as there might be some conflicts between the kernel and the base software of the system, udev being one possible issue. You'd also need to remember to include much of the deprecated stuff to be compatible with the older software which interfaces the kernel.
Intel does even provide sources for their graphics driver, so if you are willing to try every possible thing, you could try also compiling that.
In addition to compiling either the Linux kernel or just the Intel graphics driver, you'd still also need to get recent enough X.Org Intel video driver which also supports Skylake based GPUs, so you'd probably also end up needing to compile that (possibly the whole of X.Org), too. This might prove to be impossible without upgrading large parts of the rest of the system due to conflicting version requirements for many other components. After all, there is a reason why most people rely on prebuilt distributions instead of trying to get things going from the scratch :)
|
I am working with (one of) my workstation(s) running under Scientific Linux 6, so, basically a quite old version of Red Hat Enterprise Linux. I would need to use 2 screens, but only have 2 DisplayPort ports and one VGA as outputs from my Intel IGP. I am unable to make the DisplayPort ports work, I guess because the driver and kernel used are too old.
Anyone would have an idea (besides using a dedicated GPU) ?
lspci
00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
00:16.3 Serial controller: Intel Corporation Sunrise Point-H KT Redirection (rev 31)
00:17.0 SATA controller: Intel Corporation Device a102 (rev 31)
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)

uname -a
Linux pcbe13615 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 17:13:03 CET 2016 x86_64 x86_64 x86_64 GNU/Linux
|
How to use the proper video driver on Scientific Linux 6 for Display Port screens?
|
Using audible-activator and AAXtoMP3 worked. With a few tweaks, AAXtoMP3 converts to FLAC as well.
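For reference, AAXtoMP3 is essentially a wrapper around ffmpeg, so once audible-activator has recovered the account's activation bytes, ffmpeg can also be called directly (a sketch; 1CEB00DA stands in for your own activation bytes, and -c copy keeps the original AAC stream instead of re-encoding):

ffmpeg -activation_bytes 1CEB00DA -i book.aax -c copy book.m4a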
|
I have bought some audio books at Audible. The default .aa files play fine in VLC, but the quality is pretty bad - there's a constant background hum during any speech. Their enhanced quality audio files open in VLC, which displays the frontispiece, chapter number, and progress indicator, but there is no sound. Is it possible to fix this (Linux and open source software being the two parameters I care about)?
The latest VLC (2.2.4) displays the following codec information for an .aax file:

Stream 0
Type: Audio
Codec: MPEG AAC Audio (aavd)
Language: English
Sample rate: 44100 Hz
Bits per sample: 16
Bitrate: 1411 kb/s

Stream 1
Type: Subtitle
Codec: tx3g
Language: English
|
How to play AAX audio books from Audible?
|
As of 2017, qemu doesn't provide text-mode-only graphics card emulation for x86-64 that would force a guest to stay in text mode.
Current distributions like Fedora 25 come with the bochs_drm kernel module that enables a frame buffer (e.g. 1024x768 graphics mode), by default. In contrast to that, e.g. Debian 8 (stable) doesn't provide this module and thus it stays in old-school text-mode during the complete boot.
Thus, when running qemu from a terminal (e.g. with -display curses) it makes sense to enable a serial console as fail safe:
console=tty0 console=ttyS0

or

console=tty0 console=ttyS0,115200

(These are kernel parameters for the guest; the default speed is 9600, and both settings work with qemu. Make the settings persistent in Fedora by assigning them to GRUB_CMDLINE_LINUX in /etc/sysconfig/grub and executing grub2-mkconfig -o /etc/grub2.cfg or grub2-mkconfig -o /etc/grub2-efi.cfg.)

In case nothing else works, one can then switch to the serial console inside qemu via Alt+3.
A second measure is to disable the framebuffer via a bochs_drm module parameter - i.e. via setting it on the guest kernel command line:
bochs_drm.fbdev=off

Blacklist Alternative
Alternatively, the bochs_drm module can be blacklisted - i.e. via creating a config under /etc/modprobe.d - say - bochs.conf:
blacklist bochs_drm

Since the initramfs mustn't load the bochs_drm module either, one has to make sure that this config is included in the initramfs. On Fedora-like distributions this is achieved via:

# dracut -f

UEFI Boot

When booting qemu with a UEFI firmware (e.g. -bios /usr/share/edk2/ovmf/OVMF_CODE.fd), disabling the bochs fbdev isn't enough: the Fedora boot then hangs while trying to switch to the bochs framebuffer. Blacklisting bochs_drm fixes the hang, but it still isn't sufficient - one just gets a 640 x 480 graphics mode that isn't reset to text mode by the kernel. Thus, for UEFI guests one has to take the serial console route.
Serial Console
Using the serial console in combination with -display curses yields a suboptimal user experience, as the curses display interferes with the vt100/vt220 terminal emulation. Thus, it only suffices for emergencies.
A better solution is to completely switch the display off and use the combined serial/monitor Qemu mode:
-display none -serial mon:stdio -echr 2

(where Ctrl+b h displays a help and Ctrl+b c switches between the modes)
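Putting it together, a complete invocation could look like this (a sketch; the disk image path and memory size are placeholders):

qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.qcow2,if=virtio \
    -display none -serial mon:stdio -echr 2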
With Fedora 27, Grub2 is configured with serial console support by default. Thus, it can be controlled via the serial terminal as well.

Calling resize after login updates the terminal geometry; the resulting terminal then behaves as well as a local one.
Multi-User Target
In case the guest image has a graphical login manager installed, it makes sense to disable it:

# systemctl set-default multi-user.target

Otherwise, one has to switch to the first virtual console after each boot (e.g. Alt+2 or Alt+3 when using the curses display).
|
The QEMU options -display curses and -nographic -device sga (the serial graphics adapter) are very convenient for running QEMU outside of a graphical environment.
(think: remote ssh connection, rescue system etc.)
Both modes fail to work with framebuffer text mode, though. The new default with some Linux distributions (e.g. Fedora 25) seems to be that at some point during boot a framebuffer text mode is activated, such that with -display curses QEMU just displays '1024x768 Graphic mode'. With SGA just nothing is printed.
Thus my question: how can I force the kernel (and the rest of startup) to just use the old-school initial text mode?
Addendum
Adding the nomodeset kernel parameter (and removing the rhgb one) doesn't make a difference.
Most convenient would be some QEMU configuration that forces the kernel to just detect the most basic text mode - since the guest wouldn't have to be modified.
Setting up a serial console (via e.g. adding the console=ttyS0 kernel parameter to the guest) works in my environment, but I observed some escape sequence issues with the Gnome terminal. Also this doesn't help with boot loaders that already use the framebuffer (e.g. the one on the Fedora 25 server ISO) - and needs a modification of the guest.
Fedora Guest Example
With Fedora 25 as guest, the switch to the framebuffer happens during initrd runtime; some log messages (from the serial console):
[ 1.485115] Console: switching to colour frame buffer device 128x48
[ 1.493184] bochs-drm 0000:00:02.0: fb0: bochsdrmfb frame buffer device
[ 1.502492] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0

These messages also show up with the nofb and vga=normal (guest) kernel parameters.
|
Disable framebuffer in QEMU guests
|
"Graphics driver" can mean any number of things.
The way X (the graphical windowing system) works is that there is a central X server, which can load modules ("X drivers") for different hardware. Like vesa, fbdev, nvidia, nouveau, amdgpu.
Some of these drivers can work on their own (vesa). Some need Linux kernel drivers. Many of these kernel drivers follow the "direct rendering manager" API, and therefore they are called "DRM drivers". Others, like the proprietary nvidia driver (which needs both an X driver and a kernel driver), don't.

It gets more complicated: the hardware consists of parts that read out the framebuffer and display it at different resolutions etc. This is called "modesetting". Modern graphics cards also have a GPU, which is used to accelerate 3D drawing (OpenGL). "DRM kernel drivers" provide an interface for both.

"Mesa" is a software library that understands OpenGL, but does the rendering either on the CPU or on some (but not all) GPUs (see here for a list). So the Mesa library can offer this functionality for graphics cards that do not (or do not sufficiently) have hardware for this, or it can serve as the OpenGL library for a few GPUs.
You could probably make a case to call anything in this complex picture a "graphics driver".
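One way to peek at some of these layers on a running system (a sketch; the module names differ per GPU, and glxinfo ships in the mesa-utils/mesa-demos package):

lsmod | grep -E 'i915|amdgpu|nouveau'   # which DRM kernel driver is loaded
glxinfo | grep 'OpenGL renderer'        # which OpenGL implementation is answering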
|
I'm trying to understand what the difference is between DRM (Direct Rendering Manager) and a graphics driver, such as AMD or Nvidia GPU drivers.
Reading the DRM wiki[1], it seems to me like DRM is basically a graphics hardware driver, however this doesn't explain the existence of proprietary or FOSS graphics drivers for discrete GPUs.
What then, is the difference, or use case, for DRM over mesa or Nvidia drivers? What happens with DRM when AMD drivers are installed? Are they used for different tasks? Are proprietary drivers built around DRM?
[1]https://en.wikipedia.org/wiki/Direct_Rendering_Manager
|
What's the difference between DRM and a graphics driver?
|
Dissecting the acronym, I get that MCH stands for 'Memory Controller Hub', which is an older name for the northbridge. This chip is part of your I/O controller hub.

As for SSKPD, there is not much information I can find other than what is in various Intel manuals. Here is a snippet from one of them:

SSKPD — Sticky Scratchpad Data Register
This register holds 64 writable bits with no functionality behind them. It is for the convenience of BIOS and graphics drivers.

Unfortunately this doesn't give much information on what it is. According to Wikipedia, a scratchpad is "special high-speed memory circuit used to hold small items of data for rapid retrieval."

Another piece of information is the log from the commit that added the warning:

drm/i915: detect wrong MCH watermark values
Some early bios versions seem to ship with the wrong tuning values for
the MCH, possible resulting in pipe underruns under load. Especially
on DP outputs this can lead to black screen, since DP really doesn't
like an occasional whack from an underrun.
Unfortunately the registers seem to be locked after boot, so the only
thing we can do is politely point out issues and suggest a BIOS
upgrade.
Arthur Runyan pointed us at this issue while discussion DP bugs - thus
far no confirmation from a bug report yet that it helps. But at least
some of my machines here have wrong values, so this might be useful in
understanding bug reports.
v2: After a bit more discussion with Art and Ben we've decided to only
the check the watermark values, since the OREF ones could be be a
notch more aggressive on certain machines.

So seemingly the value of the register has some meaning on some processors. There isn't anything I can find on the internet at this time which explains exactly what could go wrong by it having the wrong value, but I think this gives a good overall idea.
If you really want to dig further, you could email one of the guys who wrote or reviewed the commit.
|
While I was reading the dmesg log just to check that everything is fine, I met
[ 18.956187] [drm] Wrong MCH_SSKPD value: 0x16040307
[ 18.956190] [drm] This can cause pipe underruns and display issues.
[ 18.956192] [drm] Please upgrade your BIOS to fix this.

It looks like it does not cause problems on my laptop, but what does this message stand for? What can it cause? Where can I read more about MCH_SSKPD?
|
What is MCH_SSKPD warning in dmesg?
|
The driver is a linux kernel module.
Download the source of the linux kernel, have a look at the code of the existing framebuffer drivers in drivers/video/fbdev (github here) and the documentation in Documentation/fb (github). Google for tutorials how to write kernel modules, practice with a simple module first.
Just mapping memory won't be enough, you'll have to implement a few ioctls.
Writing kernel drivers is not easy. If you have to ask this kind of question (and you have asked a lot in the past few days), you probably won't be able to do it.
X is a server for the X protocol. It can use hardware via the DRM kernel modules, and it can also use hardware via framebuffer drivers (with the fbdev X driver). Details about that are easy to find online, google. /dev/fb0 is a framebuffer device, so you don't need to concern yourself with X or DRM.
|
I want to write a linux driver which maps my specified memory address space to /dev/fb0.
Which part of Linux should the driver belong to - DRM, framebuffer, the X server, or something else? Which properties should I have in my driver?
|
mapping linux /dev/fb0 to DDR for displaying
|
Edit: there is an interface for kernel debugging purposes only. It is only accessible by root and is not stable. It might be rewritten, renamed, and/or misleading if you are not a kernel developer. (It might even be buggy, for all I know). But if you have a problem, it might be useful to know it's there.
My i915 driver gives me the information here:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects'
643 objects, 205852672 bytes
75 unbound objects, 7811072 bytes
568 bound objects, 198041600 bytes
16 purgeable objects, 5750784 bytes
16 mapped objects, 606208 bytes
13 huge-paged objects (2M, 4K) 123764736 bytes
13 display objects (globally pinned), 14954496 bytes
4294967296 [0x0000000010000000] gtt total
Supported page sizes: 2M, 4K

[k]contexts: 16 objects, 548864 bytes (0 active, 548864 inactive, 548864 global, 0 shared, 0 unbound)
systemd-logind: 324 objects, 97374208 bytes (0 active, 115798016 inactive, 23941120 global, 5246976 shared, 3858432 unbound)
Xwayland: 24 objects, 6995968 bytes (0 active, 12169216 inactive, 5283840 global, 5246976 shared, 110592 unbound)
gnome-shell: 246 objects, 89739264 bytes (26517504 active, 120852480 inactive, 63016960 global, 5242880 shared, 3629056 unbound)
Xwayland: 25 objects, 17309696 bytes (0 active, 22503424 inactive, 5304320 global, 5242880 shared, 90112 unbound)

Again, exercise caution. I notice mapped objects only shows 600KB. I guess mapped here means something different than I was expecting. For comparison, running the python script below to show the i915 objects mapped in user process' address spaces, I see a total of 70MB.
The line for systemd-logind in my output is representing a second gnome-shell instance, running on a different virtual console. If I switch over to a virtual console which has a text login running on it instead, then this file shows two systemd-logind lines and no gnome-shell lines :-).

Otherwise, the best you can do is find some of the shmem files by looking through all open files, in /proc/*/fd/ and /proc/*/map_files/ (or /proc/*/maps).
With the right hacks, it appears possible to reliably identify which files belong to the hidden shmem filesystem(s).
Each shared memory object is a file with a name. And the names can be used to identify which kernel subsystem created the file.

SYSV00000000
i915 (i.e. intel gpu)
memfd:gdk-wayland
dev/zero (for any "anonymous" shared mapping)
...

The problem is this does not show all DRM / GEM allocations. DRM buffers can exist without being mapped, simply as a numeric handle. These are tied to the open DRM file they were created on. When the program crashes or is killed, the DRM file will be closed, and all its DRM handles will be cleaned up automatically. (Unless some other software keeps a copy of the file descriptor open, like this old bug.)
https://www.systutorials.com/docs/linux/man/7-drm-gem/
You can find open DRM files in /proc/*/fd/, but they show as a zero-size file with zero blocks allocated.
For example, the output below shows a system where I cannot account for over 50% / 300MB of the Shmem.
$ grep Shmem: /proc/meminfo
Shmem: 612732 kB

$ df -h -t tmpfs
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 59M 3.8G 2% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 9.0M 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001

$ sudo ipcs -mu

------ Shared Memory Status --------
segments allocated 20
pages allocated 4226
pages resident 3990
pages swapped 0
Swap performance: 0 attempts 0 successes

All open files on hidden shmem filesystem(s):
$ sudo python3 ~/shm -s
15960 /SYSV*
79140 /i915
7912 /memfd:gdk-wayland
1164 /memfd:pulseaudio
104176

Here is a "before and after", logging out one of my two logged-in GNOME users. It might be explained if gnome-shell had over 100MB of unmapped DRM buffers.
$ grep Shmem: /proc/meminfo
Shmem: 478780 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 276K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001
$ sudo ./shm -s
80 /SYSV*
114716 /i915
1692 /memfd:gdk-wayland
1156 /memfd:pulseaudio
117644

$ grep Shmem: /proc/meminfo
Shmem: 313008 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 204K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 6.8M 780M 1% /run/user/1000
$ sudo ./shm -s
40 /SYSV*
88496 /i915
1692 /memfd:gdk-wayland
624 /memfd:pulseaudio
90852

Python script to generate the above output:
#!/bin/python3
# Reads Linux /proc. No str, all bytes.

import sys
import os
import stat
import glob
import collections
import math

# File.
# 'name' is first name encountered, we don't track hardlinks.
Inode = collections.namedtuple('Inode', ['name', 'bytes', 'pids'])

# inode number -> Inode object
inodes = dict()
# pid -> program name
pids = dict()
# filename -> list() of Inodes
filenames = dict()

def add_file(pid, proclink):
    try:
        vfs = os.statvfs(proclink)

        # The tmpfs which reports 0 blocks is an internal shm mount
        # python doesn't admit f_fsid ...
        if vfs.f_blocks != 0:
            return
        filename = os.readlink(proclink)
        # ... but all the shm files are deleted (hack :)
        if not filename.endswith(b' (deleted)'):
            return
        filename = filename[:-10]

        # I tried a consistency check that all our st_dev are the same
        # but actually there can be more than one internal shm mount!
        # i915 added a dedicated "gemfs" so they could control mount options.

        st = os.stat(proclink)

        # hack the second: ignore deleted character devices from devpts
        if stat.S_ISCHR(st.st_mode):
            return

        # Read process name successfully,
        # before we record file owned by process.
        if pid not in pids:
            pids[pid] = open(b'/proc/' + pid + b'/comm', 'rb').read()[:-1]

        if st.st_ino not in inodes:
            inode_pids = set()
            inode_pids.add(pid)

            inode = Inode(name=filename,
                          bytes=st.st_blocks * 512,
                          pids=inode_pids)
            inodes[st.st_ino] = inode
        else:
            inode = inodes[st.st_ino]
            inode.pids.add(pid)

        # Group SYSV shared memory objects.
        # There could be many, and the rest of the name is just a numeric ID
        if filename.startswith(b'/SYSV'):
            filename = b'/SYSV*'

        filename_inodes = filenames.setdefault(filename, set())
        filename_inodes.add(st.st_ino)

    except FileNotFoundError:
        # File disappeared (race condition).
        # Don't bother to distinguish "file closed" from "process exited".
        pass

summary = False
if sys.argv[1:]:
    if sys.argv[1:] == ['-s']:
        summary = True
    else:
        print("Usage: {0} [-s]".format(sys.argv[0]))
        sys.exit(2)

os.chdir(b'/proc')
for pid in glob.iglob(b'[0-9]*'):
    for f in glob.iglob(pid + b'/fd/*'):
        add_file(pid, f)
    for f in glob.iglob(pid + b'/map_files/*'):
        add_file(pid, f)

def pid_name(pid):
    return pid + b'/' + pids[pid]

def kB(b):
    return str(math.ceil(b / 1024)).encode('US-ASCII')

out = sys.stdout.buffer

total = 0
for (filename, filename_inodes) in sorted(filenames.items(), key=lambda p: p[0]):
    filename_bytes = 0
    for ino in filename_inodes:
        inode = inodes[ino]
        filename_bytes += inode.bytes
        if not summary:
            out.write(kB(inode.bytes))
            out.write(b'\t')
            #out.write(str(ino).encode('US-ASCII'))
            #out.write(b'\t')
            out.write(inode.name)
            out.write(b'\t')
            out.write(b' '.join(map(pid_name, inode.pids)))
            out.write(b'\n')
    total += filename_bytes
    out.write(kB(filename_bytes))
    out.write(b'\t')
    out.write(filename)
    out.write(b'\n')

out.write(kB(total))
out.write(b'\n')
|
My /proc/meminfo shows about 500 MB is allocated as Shmem. I want to get more specific figures. I found an explanation here:
https://lists.kernelnewbies.org/pipermail/kernelnewbies/2013-July/008628.html

It includes tmpfs memory, SysV shared memory (from ipc/shm.c), POSIX shared memory (under /dev/shm [which is a tmpfs]), and shared anonymous mappings (from mmap of /dev/zero with MAP_SHARED: see call to shmem_zero_setup() from drivers/char/mem.c): whatever allocates pages through mm/shmem.c.

2-> as per the developer comments NR_SHMEM included tmpfs and GEM pages. what is GEM pages?

Ah yes, and the Graphics Execution Manager uses shmem for objects shared with the GPU: see use of shmem_read_mapping_page*() in drivers/gpu/drm/.

I have about 50MB in user-visible tmpfs, found with df -h -t tmpfs. 40MB (10,000 pages of 4096 bytes) in sysvipc shared memory, found with ipcs -mu.

I would like to get some more positive accounting for what uses the 500MB! Is there a way to show total GEM allocations? (Or any other likely contributor.)
I expect I have some GEM allocations, since I am running a graphical desktop on intel graphics hardware. My kernel version is 4.18.16-200.fc28.x86_64 (Fedora Workstation 28).
|
Can I see the amount of memory which is allocated as GEM buffers?
|
I can't really give you much more information other than a confirmation of the same issue on half a dozen different Linux builds I tried today. I've been using CBS AA since 2017 and it's always worked for me; I last tried to use it a couple of months ago with no problem. Today, same problem as you. It doesn't appear to be a User-Agent block, which is sometimes the case with these, nor a Chrome/Widevine or Firefox version block (I tried the same versions on a Windows VM without issue). I don't have a proper solution, other than: if you have Amazon Prime, you can sign up for CBS All Access through Amazon Prime Video with a trial at the same price, and it uses Amazon's video streamer - which works without problems in Linux browsers. There's a Reddit thread stating this issue goes back at least a couple of weeks, FWIW.
|
I'm porting this over from the Ubuntu SE, since I'm leaning more toward this being a general linux issue.
I saw another post on Ubuntu that had been removed by the moderator on this very same issue. This is a problem, and it seems oriented around linux.
Most of what I've found via google is very sparse and unhelpful.
Are there known issues with streaming CBS All Access with linux/ubuntu?
All I know is: when I used Chrome dev tools to try to figure out what was going on, Widevine kept spitting out 400 Bad Request responses from their servers. Nothing seemed amiss in what was being sent out. All of the proper DRM variables were being sent - yet CBS's servers kept kicking back a 400. So ... I come to you guys.
Aside from the nothing that is out there on the web; is anybody aware of known issues with CBS AA and Linux DRM streaming?
Thank you to anybody who has any information
|
Known Issues With CBS All Access Streaming?
|
This error is an ENOMEM (out-of-memory error), because the CMA size needs to be bigger than one raw frame at the resolution the display will use.

1920x1080 32bpp needs about 8MB, and the default is 16MB, so that was working; but 3840x2160 32bpp needs a bit more than 32MB.
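Those numbers follow directly from 4 bytes per pixel at 32bpp; a quick check in a shell:

echo $(( 1920 * 1080 * 4 ))   # 8294400 bytes, about 8 MB
echo $(( 3840 * 2160 * 4 ))   # 33177600 bytes, a bit more than 32 MB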
Armbian changes the default size to 128M in the kernel configuration, using CONFIG_CMA_SIZE_MBYTES=128.

But setting the CMA size to 64M with the bootarg cma=64M made it work without any patch or change in configuration.
|
I made a small Linux distro to use on my projects involving an Orange Pi One (H3), but the HDMI output never works.

To find out whether the device is supported by the Linux kernel, I tested another distro (Armbian), which worked fine. With that in mind, I tried to change my kernel config based on theirs, adding every relevant feature, but my version was still not working.

I decided to take a look at dmesg after every try, and found that there's one error that I can't get rid of:
[ 0.827899] sun4i-drm display-engine: bound 1100000.mixer (ops 0xc0851c2c)
[ 0.835081] sun4i-drm display-engine: bound 1c0c000.lcd-controller (ops 0xc084e2dc)
[ 0.842821] sun8i-dw-hdmi 1ee0000.hdmi: supply hvcc not found, using dummy regulator
[ 0.851453] sun8i-dw-hdmi 1ee0000.hdmi: Detected HDMI TX controller v1.32a with HDCP (sun8i_dw_hdmi_phy)
[ 0.861330] sun8i-dw-hdmi 1ee0000.hdmi: registered DesignWare HDMI I2C bus driver
[ 0.869108] sun4i-drm display-engine: bound 1ee0000.hdmi (ops 0xc0851228)
[ 0.875927] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.882941] [drm] Initialized sun4i-drm 1.0.0 20150629 for display-engine on minor 0
[ 0.995934] random: fast init done
[ 1.001697] sun4i-drm display-engine: [drm] *ERROR* fbdev: Failed to setup generic emulation (ret=-12)
[ 1.013330] lima 1c40000.gpu: gp - mali400 version major 1 minor 1

I couldn't find anything useful about this specific error on the internet, and I couldn't find the explanation for the return code in the kernel source. What could I do to try to fix this problem?
I'm using
- Linux version 5.8.13 (arm-linux-musleabihf-gcc (GCC) 10.2.0, GNU ld (GNU Binutils) 2.35)
- No modules, no initrd/initramfs
- Machine model: Xunlong Orange Pi One
- U-boot (orangepi_one_defconfig)
|
Why the hdmi output doesn't work on orange pi one?
|
The privileged DRI interfaces came first, and the fixed major device number 226 was initially allocated for them exclusively.

As ARM devices and the development of GPGPU compute devices proved that display mode-setting and rendering need to be accessible as separate devices, render nodes were developed. It looks like the minor device numbers from 128 upwards were assigned to them, although I could not quickly find any documentation stating that explicitly.
The code that assigns the device numbers begins in the drm_dev_init() function in drivers/gpu/drm/drm_drv.c in the kernel source code. Starting at line 631 it calls the drm_minor_alloc function in the same file:
if (drm_core_check_feature(dev, DRIVER_RENDER)) {
ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
if (ret)
goto err;
}

ret = drm_minor_alloc(dev, DRM_MINOR_PRIMARY);
if (ret)
goto err;

It calls the idr_alloc() in lib/idr.c:
r = idr_alloc(&drm_minors_idr,
NULL,
64 * type,
64 * (type + 1),
GFP_NOWAIT);

The parameters for idr_alloc() are:
* idr_alloc() - Allocate an ID.
* @idr: IDR handle.
* @ptr: Pointer to be associated with the new ID.
* @start: The minimum ID (inclusive).
* @end: The maximum ID (exclusive).
* @gfp: Memory allocation flags.

If the driver supports the DRIVER_RENDER feature and thus render nodes, idr_alloc gets called twice, to first allocate the render node and then the primary node. Otherwise only the primary node is allocated.
The DRM_MINOR_* constants are defined in include/drm/drm_file.h:
/* Note that the order of this enum is ABI (it determines
* /dev/dri/renderD* numbers).
*/
enum drm_minor_type {
DRM_MINOR_PRIMARY,
DRM_MINOR_CONTROL,
DRM_MINOR_RENDER,
};

Since this is an enum, DRM_MINOR_PRIMARY gets a value of 0, DRM_MINOR_CONTROL a value of 1, and DRM_MINOR_RENDER 2, respectively.
This works out to allocating minor device numbers in range 0 <= x < 64 to primary interface nodes, 64 <= x < 128 to DRM_MINOR_CONTROL nodes (whatever they might be... seems currently unused in 6.0.x kernels?) and 128 <= x < 192 to DRM_MINOR_RENDER nodes.
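As a quick cross-check on a running system, the allocated numbers can be read straight off the device nodes; with GNU stat, %t and %T print the major and minor numbers in hex (a sketch):

stat -c '%t:%T %n' /dev/dri/card* /dev/dri/renderD*

On a single-GPU machine this typically prints e2:0 for card0 and e2:80 for renderD128 - 0xe2 being the fixed DRM major 226 and 0x80 being minor 128, the start of the DRM_MINOR_RENDER range.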
The sysfs name for the device is then assigned in drivers/gpu/drm/drm_sysfs.c in function drm_sysfs_minor_alloc(), simply using the minor device number as part of the name:
if (minor->type == DRM_MINOR_RENDER)
minor_str = "renderD%d";
else
minor_str = "card%d";[...]r = dev_set_name(kdev, minor_str, minor->index);And unless udev rules are used to modify the default device name, the sysfs name becomes the name of the device node too.
So basically, it's because nobody has so far bothered to add a condition clause that would subtract 128 from the minor device node number when assigning the default device name if it is a render node.
Also, note that this DRM development blog article from 2013 says:

It's also important to know that render-nodes are not bound to a specific card. While internally it's created by the same driver as the legacy node, user-space should never assume any connection between a render-node and a legacy/mode-setting node. Instead, if user-space requires hardware-acceleration, it should open any node and use it.

[...]

Questions like "how do I find the render-node for a given card?" don't make any sense. Yes, driver-specific user-space can figure out whether and which render-node was created by which driver, but driver-unspecific user-space should never do that!

Given that intent, the choice of deliberately starting the numbering of render nodes from 128 (to break the potentially false assumption that card0 and renderD0 would always refer to the same card) might make sense.
|
Why do DRM render nodes in /dev/dri/renderD<X> start their numbering from 128 while the privileged interaces in /dev/dri/card<X> start at zero?
$ ls -al /dev/dri/
total 0
drwxr-xr-x 3 root root 100 Nov 21 07:46 .
drwxr-xr-x 23 root root 6040 Nov 22 11:09 ..
drwxr-xr-x 2 root root 80 Nov 21 07:46 by-path
crw-rw----+ 1 root video 226, 0 Nov 21 07:46 card0
crw-rw----+ 1 root render 226, 128 Nov 21 07:46 renderD128
|
DRM render node numbering
|
I think I have finally figured it out. At least autorandr now finally detects the plugging in of an external screen, which is the main reason I wanted to enable drm.
I followed the following article
https://vfbsilva.medium.com/howto-set-up-prime-with-nvidia-proprietary-driver-c647e3597447
Step 1: uninstall bumblebee
Step 2: change /etc/X11/xorg.conf.d/90-mhwd.conf to:
Section "Module"
Load "modesetting"
EndSectionSection "Device"
Identifier "nvidia"
Driver "nvidia"
BusID "PCI:1:0:0"
Option "AllowEmptyInitialConfiguration"
EndSectionStep 3 change /etc/modprobe.d/mhwd-gpu.conf to
##
## Generated by mhwd - Manjaro Hardware Detection
##
blacklist nouveau
#blacklist ttm
#blacklist drm_kms_helper
#blacklist drm
options nvidia "NVreg_DynamicPowerManagement=0x02"blacklist nvidiafb
blacklist rivafb
options nvidia_drm modeset=1Step 4
create this file (/usr/local/bin/optimus.sh)
#!/bin/shxrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --automake it executable
chmod a+rx /usr/local/bin/optimus.shEdit /etc/lightdm/lightdm.conf and set (if you are using another dm check the article)
display-setup-script=/usr/local/bin/optimus
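After a reboot, one way to check that the providers are wired up as intended is (a sketch; provider names vary with driver versions):

xrandr --listproviders

Two providers should be listed, one NVIDIA-0 and one modesetting; if only one shows up, the nvidia_drm modeset option or the Xorg configuration did not take effect.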
|
I have tried to enable the DRM by setting this kernel parameter in Manjaro i3:
nvidia_drm.modeset=1

but afterwards the system refused to start LightDM because of the following Xorg error:

failed to create screen resources

After some digging I found the following message in the kernel logs, which mentions a failing nouveau driver, even though I haven't installed the nouveau driver. Can someone make sense of this?
Mai 10 14:47:34 user1-victus kernel: nouveau 0000:01:00.0: timer: stalled at ffffffffffffffff
Mai 10 14:47:34 user1-victus kernel: WARNING: CPU: 14 PID: 746 at drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c:43 tu102_vmm_flush+0x165/0x170 [nouveau]
Mai 10 14:47:34 user1-victus kernel: Modules linked in: ntfs3 uas usb_storage ccm cmac algif_hash algif_skcipher af_alg snd_ctl_led snd_soc_skl_hda_dsp snd_soc_intel_hda_dsp_common snd_soc_hdac_hdmi snd_sof_probes snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio qrtr squas>
Mai 10 14:47:34 user1-victus kernel: polyval_generic ecdh_generic gf128mul videobuf2_vmalloc snd_hda_intel mac80211 fat ghash_clmulni_intel i915 nouveau libarc4 snd_intel_dspcfg videobuf2_memops sha512_ssse3 snd_intel_sdw_acpi videobuf2_v4l2 aesni_intel snd_hda_codec videobuf2_com>
Mai 10 14:47:34 user1-victus kernel: CPU: 14 PID: 746 Comm: Xorg Tainted: G W 6.1.1-1-MANJARO #1 58eeef856bad441bca33a8abb39f91301fd24d8d
Mai 10 14:47:34 user1-victus kernel: Hardware name: HP Victus by HP Laptop 16-d0xxx/88F8, BIOS F.22 11/28/2022
Mai 10 14:47:34 user1-victus kernel: RIP: 0010:tu102_vmm_flush+0x165/0x170 [nouveau]
Mai 10 14:47:34 user1-victus kernel: Code: 8b 40 10 48 8b 78 10 48 8b 5f 50 48 85 db 75 03 48 8b 1f e8 6d 09 a9 c8 48 89 da 48 c7 c7 74 be 47 c1 48 89 c6 e8 c6 64 e8 c8 <0f> 0b eb a5 e8 c2 28 ee c8 66 90 f3 0f 1e fa 0f 1f 44 00 00 ff 74
Mai 10 14:47:34 user1-victus kernel: RSP: 0018:ffffb64ac179b778 EFLAGS: 00010282
Mai 10 14:47:34 user1-victus kernel: WARNING: CPU: 14 PID: 746 at drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c:43 tu102_vmm_flush+0x165/0x170 [nouveau]
Mai 10 14:47:34 user1-victus kernel: Modules linked in: ntfs3 uas usb_storage ccm cmac algif_hash algif_skcipher af_alg snd_ctl_led snd_soc_skl_hda_dsp snd_soc_intel_hda_dsp_common snd_soc_hdac_hdmi snd_sof_probes snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio qrtr squas>
Mai 10 14:47:34 user1-victus kernel: polyval_generic ecdh_generic gf128mul videobuf2_vmalloc snd_hda_intel mac80211 fat ghash_clmulni_intel i915 nouveau libarc4 snd_intel_dspcfg videobuf2_memops sha512_ssse3 snd_intel_sdw_acpi videobuf2_v4l2 aesni_intel snd_hda_codec videobuf2_com>
Mai 10 14:47:34 user1-victus kernel: CPU: 14 PID: 746 Comm: Xorg Tainted: G W 6.1.1-1-MANJARO #1 58eeef856bad441bca33a8abb39f91301fd24d8d
Mai 10 14:47:34 user1-victus kernel: Hardware name: HP Victus by HP Laptop 16-d0xxx/88F8, BIOS F.22 11/28/2022
Mai 10 14:47:34 user1-victus kernel: RIP: 0010:tu102_vmm_flush+0x165/0x170 [nouveau]
Mai 10 14:47:34 user1-victus kernel: Code: 8b 40 10 48 8b 78 10 48 8b 5f 50 48 85 db 75 03 48 8b 1f e8 6d 09 a9 c8 48 89 da 48 c7 c7 74 be 47 c1 48 89 c6 e8 c6 64 e8 c8 <0f> 0b eb a5 e8 c2 28 ee c8 66 90 f3 0f 1e fa 0f 1f 44 00 00 ff 74
Mai 10 14:47:34 user1-victus kernel: RSP: 0018:ffffb64ac179b778 EFLAGS: 00010282
Mai 10 14:47:34 user1-victus kernel: RAX: 0000000000000000 RBX: ffff98e542c9d160 RCX: 0000000000000027
Mai 10 14:47:34 user1-victus kernel: RDX: ffff98eccfba1668 RSI: 0000000000000001 RDI: ffff98eccfba1660
Mai 10 14:47:34 user1-victus kernel: RBP: ffff98e560866400 R08: 0000000000000000 R09: ffffb64ac179b600
Mai 10 14:47:34 user1-victus kernel: R10: 0000000000000003 R11: ffffffff8b4b7110 R12: 0000000005000001
Mai 10 14:47:34 user1-victus kernel: R13: ffff98e560866400 R14: ffff98e5634136c0 R15: 0000000000000000
Mai 10 14:47:34 user1-victus kernel: FS: 00007f6a3d5a2980(0000) GS:ffff98eccfb80000(0000) knlGS:0000000000000000
Mai 10 14:47:34 user1-victus kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mai 10 14:47:34 user1-victus kernel: CR2: 00007f6a34044060 CR3: 000000012143a004 CR4: 0000000000f70ee0
Mai 10 14:47:34 user1-victus kernel: PKRU: 55555554
Mai 10 14:47:34 user1-victus kernel: Call Trace:
Mai 10 14:47:34 user1-victus kernel: <TASK>
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_unref_pdes+0xeb/0x1f0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_unref_pdes+0x182/0x1f0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_unref_pdes+0x182/0x1f0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_unref_pdes+0x182/0x1f0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_unref_ptes+0x18c/0x250 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_iter.constprop.0+0x2a5/0x890 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: ? nvkm_vmm_iter.constprop.0+0x2a5/0x890 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: ? nvkm_vmm_ptes_sparse+0x1e0/0x1e0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_vmm_put_locked+0x109/0x280 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: ? nvkm_vmm_ptes_sparse+0x1e0/0x1e0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_uvmm_mthd+0x686/0x6b0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvkm_ioctl+0xd9/0x180 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvif_object_mthd+0xcc/0x200 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nvif_vmm_put+0x64/0x80 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nouveau_vma_del+0x80/0xd0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: nouveau_gem_object_close+0x1eb/0x220 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: drm_gem_handle_delete+0x6a/0xd0
Mai 10 14:47:34 user1-victus kernel: ? drm_mode_destroy_dumb+0x40/0x40
Mai 10 14:47:34 user1-victus kernel: drm_ioctl_kernel+0xca/0x170
Mai 10 14:47:34 user1-victus kernel: drm_ioctl+0x1eb/0x450
Mai 10 14:47:34 user1-victus kernel: ? drm_mode_destroy_dumb+0x40/0x40
Mai 10 14:47:34 user1-victus kernel: nouveau_drm_ioctl+0x5a/0xb0 [nouveau b94536815bcbee6c07bf0305391ef14c5a1db60e]
Mai 10 14:47:34 user1-victus kernel: __x64_sys_ioctl+0x91/0xd0
Mai 10 14:47:34 user1-victus kernel: do_syscall_64+0x5c/0x90
Mai 10 14:47:34 user1-victus kernel: ? syscall_exit_to_user_mode+0x1b/0x40
Mai 10 14:47:34 user1-victus kernel: ? do_syscall_64+0x6b/0x90
Mai 10 14:47:34 user1-victus kernel: ? exit_to_user_mode_prepare+0x145/0x1d0
Mai 10 14:47:34 user1-victus kernel: ? syscall_exit_to_user_mode+0x1b/0x40
Mai 10 14:47:34 user1-victus kernel: ? do_syscall_64+0x6b/0x90
Mai 10 14:47:34 user1-victus kernel: ? exc_page_fault+0x74/0x170
Mai 10 14:47:34 user1-victus kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd
Mai 10 14:47:34 user1-victus kernel: RIP: 0033:0x7f6a3df23c0f
Mai 10 14:47:34 user1-victus kernel: Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00
Mai 10 14:47:34 user1-victus kernel: RSP: 002b:00007ffcd9c547d0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Mai 10 14:47:34 user1-victus kernel: RAX: ffffffffffffffda RBX: 0000561a976c48d0 RCX: 00007f6a3df23c0f
Mai 10 14:47:34 user1-victus kernel: RDX: 00007ffcd9c54864 RSI: 00000000c00464b4 RDI: 0000000000000015
Mai 10 14:47:34 user1-victus kernel: RBP: 00007ffcd9c54864 R08: 0000561a976da840 R09: 00007f6a3e0857a0
Mai 10 14:47:34 user1-victus kernel: R10: 0000000000000050 R11: 0000000000000246 R12: 00000000c00464b4
Mai 10 14:47:34 user1-victus kernel: R13: 0000000000000015 R14: 0000561a96eef7a0 R15: 00007f6a3d4c8a60
Mai 10 14:47:34 user1-victus kernel: </TASK>
|
Direct Rendering Manager not working with RTX3060 on propriatery NVIDIA Driver
|
If the two hard-disks are of the same size (or the new one is bigger), why didn’t you just copy the old disk to the new disk? I.e.
dd if=/dev/sda of=/dev/sdb

Now, if the new hard-disk is bigger, change the partition sizes with parted or gparted. All this is done while booted from a live CD/USB stick.
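With GNU dd, a larger block size and a progress report make this much more pleasant (a sketch; status=progress needs a reasonably recent coreutils):

dd if=/dev/sda of=/dev/sdb bs=4M status=progress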
|
I have a CentOS 6 server with two hard drives in it. My old 3TB drive has been giving me some issues so I'm moving things over to a new drive. Because my / and /home partition are managed by a LVM it was easy to migrate those to the new drive. Now I want to move over my /boot partition and the MBR that makes it all start up.
I loaded up a live CD and rsynced over my /boot partition to the same size partition on my new drive. I also tried to copy over my MBR with the following commands:
dd if=/dev/sda of=mbrbackup bs=512 count=1
dd if=mbrbackup of=/dev/sdb bs=446 count=1

After doing this I rebooted, told my BIOS not to look at the old hard drive during the boot cycle and only look at the new drive, but all I ended up with was a blinking cursor.
Did I miss a step here? Or is there something else I need to do to make things boot so I can completely remove my old drive?
EDIT: I'm starting to think rsync was not the way to copy the /boot partition from one drive to another. Based on this guide, I tried using the dump command instead. In this command I copied my old, unmounted boot partition to my new, empty, mounted boot partition.
dump -0f - /dev/sdaX | (cd /mnt/boot; restore -rf -)

I'm getting a grub error 15 on boot, which is better than a blinking cursor, but I don't know if that is any closer to a solution.
|
Moving /boot and MBR to a new drive
|
Unix, of which Linux is just one flavour, has a long and rich history. It has not been developed by a single company or group, nor following a master plan, and has evolved by adaption to many niches. You can find many examples where multiple tools cover similar or the same functionality. They have been implemented by different people at different times for similar purposes; check their manpages for hints.
Thanks to the rise of Open Source in general, and the possibilities of the information age, we can enjoy the benefit of many of these tools being generally available for our use. The attempt to merge them into one will result in one more being available.

Enjoy; these are amazing times!
A selection for further reading:https://en.wikipedia.org/wiki/History_of_Unix
http://www.catb.org/esr/writings/taoup/html/historychapter.html
http://www.catb.org/esr/writings/cathedral-bazaar/
https://www.levenez.com/unix/
|
What is the difference between the od, hd, hexdump and xxd commands?

They are all commands for dumping files, and they can all dump them in various formats such as hexadecimal, octal or binary. Why create different programs?
|
What is the difference between the od, hd, hexdump and xxd commands?
|
First, store the password in a file called .my.cnf in the user's home directory with the following format:
[mysqldump]
password=secret

Then, you have to use mysqldump without the -p flag to dump a mysql database (it now uses the password from the file):

mysqldump -u root database | 7z a -si backup.sql.7z

The a flag of 7z adds to the archive;
-si means to read from the standard input (from the anonymous pipe).
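The restore direction works with the same pipe trick; -so tells 7z to extract to standard output (a sketch, assuming the target database already exists):

7z x -so backup.sql.7z | mysql -u root database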
|
I've been attempting to compress my mysqldump output via 7z using the pipe operator (I have seen this question, but its answer uses xz not 7z). This is what I have tried so far:
mysqldump -u root -p Linux_Wiki | 7z > backup.sql.7z

and:

mysqldump -u root -p Linux_Wiki | 7za > backup.sql.7z

and:

mysqldump -u root -p Linux_Wiki | '7za a' > backup.sql.7z

and:

mysqldump -u root -p Linux_Wiki | `7za a` > backup.sql.7z

All four failed, but I am sure I have p7zip installed; after all, the last of these attempts gave this output:
Enter password: bash: 7-Zip: command not found
mysqldump: Got error: 1045: Access denied for user 'root'@'localhost' (using password: NO) when trying to connect
|
How to compress a mysql dump using 7z via a pipe?
|
Depends on how much time you have. If you know C, the safest way is to attach gdb to the ssh-agent process (must be root) and print the key data. Identity keys are stored in an array called idtable which contains a linked list of identities. So, you can print the BIGNUM data (as defined in (1)) like:

(gdb) call BN_bn2hex(idtable[2]->idlist->tqh_first->key->rsa->n)

where the number 2 is the version (you probably need 2) and the last element is one of the BIGNUMs (the rest are engine, e, d, p, q, dmp1, dmq1, iqmp).

Now, to use this data you need to write a small utility program where you define an RSA struct (defined as in (1)) and populate it. You could probably write another utility program to do this automatically, but then you need more time; you can just print the data manually. Then you call the PEM_write_RSAPrivateKey (2) function with the above RSA data and you have a new unencrypted rsa file.
Sorry for not having more details but if you have time it could be a starting point.
(1) /usr/include/openssl/rsa.h
(2) see man page for pem(3)
|
I'm on a Mac (OSX).
I've accidentally deleted my ssh keys, but I haven't restarted my computer yet so I'm still able to access servers with my key. I guess the ssh-agent has some form of it in memory?
Is there any way to retrieve the key from the ssh-agent?
I still remember the password etc.
|
Deleted my ssh keys
|
It's the ONLCR .c_oflag termios setting which is causing the newline (\n) to be turned into carriage-return/newline (\r\n) by the pseudo-terminal allocated by ssh on the remote machine (because of ssh's -t option).
Turn it off with stty -onlcr:
ssh -t me@there 'stty -onlcr; ...' > output
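Alternatively, if the remote sudo configuration does not insist on a terminal (no requiretty), dropping -t avoids allocating a pseudo-terminal in the first place, so no output translation can happen at all (a sketch based on the command from the question):

ssh me@there 'echo MYPASSWORD | sudo -S dump -y -f - /boot 2>/dev/null' > boot.dump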
|
I have inherited a Ubuntu 14.04 production server which needs to be upgraded to 20.04, and I would like a sandboxed version to experiment with first, hence I want to dump and restore the filesystems over the network from either a MacOS or another 14.04 virtualbox instance. An earlier version of this question is at https://askubuntu.com/q/1314747/963.
The server cannot "see" my machines so I cannot easily run dump and push the result remotely to my machine, but need to invoke ssh from my machine to run dump.
ssh -t me@there "echo MYPASSWORD | sudo -S dump -y -f - /boot 2>/dev/null " > boot.dump

The problem is that I've found that running this command inserts a lot of \r characters in front of \n characters, which ruins the dump file so restore cannot use it. I understand that this is probably due to a driver translating linefeeds to the characters needed for printing, but I do not see where this is triggered.
How should I do this to get the correct binary dump file?
|
What causes \r's to be inserted before \n's when retrieving a binary file over ssh, and how do I circumvent it?
|
Linux doesn't let you do a plain read(dir_name, buffer, sizeof(buffer)) - it always returns -1 and puts EISDIR in errno. This is probably rational, as not all filesystems have directories as files. The commonly-used reiserfs does not, for example.
You can use the open() system call from a C program to get a file descriptor of a directory, but things like readdir(3) (from libc) call getdents(2) to actually retrieve directory entries. There's probably code in each filesystem implementation to create struct linux_dirent from whatever (a file, a database, an on-disk B-tree) that filesystem uses to store directory entries.
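This is easy to demonstrate from a shell with a tool that open()s its input and then read()s from it; on a GNU system the failing read() surfaces as EISDIR (a sketch; exact message wording may vary):

$ dd if=. of=/dev/null count=1
dd: error reading '.': Is a directory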
|
I was wondering why od(1) used to work in UNIX but doesn't work in GNU/Linux. There is a nice answer on serverfault. The next question is, are you aware of any tools that can emulate od behavior to support dumping directory data in GNU/Linux?
|
od emulation for directories
|
There's a few options.

od should be available on POSIX systems, so most Unix and Linux variants will have it. That command has a slew of options to control the output format.
hexdump (from util-linux on my distro) is my favorite for a quick inspection (hexdump -C), but it's not available everywhere.
xxd (installed with Vim) is great too - especially since it allows you to convert to a human-readable hex format, and to convert back to binary. So that gives you a pretty simple hex editor.
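For example, the xxd round trip mentioned above, which effectively gives you a simple hex editor (a sketch):

xxd somefile > somefile.hex      # binary -> editable hex listing
vi somefile.hex                  # change bytes by hand
xxd -r somefile.hex > somefile2  # hex listing -> binary again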
|
In OpenVMS the DUMP command:

Displays the contents of a file, a directory, a disk volume, a magnetic tape volume, or a CD-ROM volume in decimal, hexadecimal, octal format, ASCII, or formatted data structures.

This is frequently used when a file is not a simple text file where the content has a mixture of data types such as strings and integers.
What is the Unix/Linux command for this?
|
What is Unix for the OpenVMS DUMP command?
|
sysdumpstart -p

It took about 22 minutes to do this 4 GByte sized one. It automatically reboots after the dump! After the reboot, save the dump from the dump LV to a file:
smitty dump
Copy a system dump from a dump device to a file

Now trying to get a developer who can analyze the dump file :) - opening a software call.
How to force a system dump:
https://www-01.ibm.com/support/docview.wss?uid=isg3T1019210
|
There is a server having an interesting problem (a few other servers had the problem too). We think that SAP takes up almost all the paging space, but we cannot say that with 100% certainty, because when this problem occurs even a "ps -ef" won't run on the system - the command just hangs!
During the problem (before a reboot, because a reboot fixes it), how can I create a dump that the developers can analyze later?
So far I read that if a:
sysdumpstart -p

is executed, AIX will do a dump and reboot after it:
-p Initiates a system dump and writes the results to the primary dump device.

Question: but is this enough (just the "sysdumpstart -p" command)? Will it create a dump that stores the SAP-related info too, for later debugging?
12:root@SERVER:/root # sysdumpdev -l
primary /dev/lg_dumplv
...
12:root@SERVER:/root # sysdumpdev -e
Estimated dump size in bytes: 4660710604
12:root@SERVER:/root #

The lg_dumplv is 12288 MB in size, so it should be enough.
After the reboot, will I find the dump files in "/var/adm/ras/vmcore.x"? Or is there an additional command to copy the dump from the dump LV to the filesystem?
|
AIX: how to do a dump that contains the application related infos too?
|
The dump or fs_freq column in /etc/fstab is the dump frequency in days. It is used by dump's -w and -W options to inform the operator which filesystems need to be dumped. To my knowledge, a 0 in that field never prevented dump from running; the filesystem just wouldn't show up in dump -w output.
One use case is that the dump operator would run dump -w to see what needed to be done that day, then would load the appropriate tapes into the tape drives and run dump to do a full or incremental dump for each appropriate filesystem. In practice, most installations I'm familiar with dumped every filesystem every day, so dump -w was used just to check if a filesystem was falling through the cracks.

But I have not understood yet that in what condition the command will run? And where will be the location of the dump.

It doesn't run by default. You have to set something up yourself. You can have it output to a tape if you have one, or to a file. Many people use a higher-level backup system such as amanda to manage their backups.
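A minimal sketch of that workflow, with an assumed fstab entry and an assumed output path:

# /etc/fstab: 5th field (fs_freq) = 1 marks /home for dumping
# /dev/sda2  /home  ext4  defaults  1  2
dump -w                               # list filesystems that are due for a dump
dump -0u -f /backup/home.dump /home   # level 0 dump to a file; -u updates /etc/dumpdates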
|
What I have understood so far about the 5th column of /etc/fstab is that it controls whether the dump command runs on that filesystem: if the entry is 0, dump will not run, and if the entry is 1, dump will run.
But I have not yet understood under what conditions the command will run, and where the dump will be located. How can we check that dump ran on that filesystem?
|
/etc/fstab 5th column
|
Take a look at the restore command. It has a switch, -C which looks like what you're looking for.
excerpt from restore man page
-C This mode allows comparison of files from a dump. Restore reads the
backup and compares its contents with files present on the disk. It
first changes its working directory to the root of the filesystem that
was dumped and compares the tape with the files in its new current
directory. See also the -L flag described below.
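For example (the dump file path is an assumption):

restore -C -f /backup/home.dump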
|
Is there a way to compare the current state of the filesystem to that stored in a backup created by dump?
I recently had some major corruption in the filesystem and my most recent backup is from several months ago. I want to compare the two in order to get an idea of how much was corrupted and hopefully see how much has changed since the backup.
|
Compare a filesystem to a dump
|
The easiest way to get the data out over the network is to pipe it through a TCP connection using nc. Depending on how exact a clone you want, "the data" here may mean either of the following:

The entire block device (a complete block-level image of the filesystem): cat /dev/sda | or cat /dev/mtdblocksomething | (Yes, this is a useless use of cat, used here just for consistency with the other option. Feel free to replace it with < /dev/sda.)
Just the files/directories/links/etc.: tar -c / | (possibly with --one-file-system)

What comes after the pipe depends on whether you can make TCP connections from the device to your machine or vice versa. For example:

nc -l -p someport > deviceimage.tar on your machine
tar -c / | nc yourmachine someport on the device

If you can only make connections to the device but not from it, just swap the nc -l and nc around (see the sketch below).
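Spelled out, that reversed variant would look like this (address and port are placeholders):

tar -c / | nc -l -p someport                   # on the device: wait for the pull
nc device-address someport > deviceimage.tar   # on your machine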
Note that cloning a running system like this without first quiescing the filesystem may get you an inconsistent snapshot if any writes occur while your clone is underway. This may be worse when cloning the whole block device (the inconsistency may corrupt the filesystem in your clone). If the device is somewhat busy, consider stopping (kill -STOP) whatever you can.
|
I've managed to root a device and I'd like to dump its entire filesystem in order to analyze and reverse engineer it.
This device claims to be Linux 2.6.31 mips GNU/Linux. The way I can access the shell interface is via network, by simply telneting to a port.
How can I dump its entire filesystem from outside the device?
Things I tried

DD: Kind of out of the question; df -h says that the filesystem is 48% used and a dd image would take that to 98%, potentially making it run out of space and bricking it?
Rsync: This one seems the best option, but AFAIK rsync uses ssh internally, and there's no ssh to this device; you just open a port to it and it drops you to a shell. Parameters like [emailprotected]:23:/ simply seem to ignore the port (ssh: connect to host 192.168.3.10 port 22: Connection refused). The rsync binary is not present on the device.

Things to consider

The filesystem should remain exactly the same, meaning that even symlinks should still point to where they point right now.
|
Dumping a live filesystem
|
In your comment to @tink's answer you say you want separate files inside the .gz:

mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > "/var/tmp/$dbhost1.$dbname1.sql" ; \
mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > "/var/tmp/$dbhost1.$dbname2.sql" ; \
mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > "/var/tmp/$dbhost2.$dbname3.sql" ; \
mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > "/var/tmp/$dbhost2.$dbname4.sql" ; \
cd /var/tmp ; tar cvzf backupfile.sql.gz "$dbhost1".*.sql "$dbhost2".*.sql

(Use double quotes around the output paths; with single quotes the shell variables would not expand and the files would literally be named $dbhost1.$dbname1.sql.)
As an alternative output filename I would use backupfile.sql.tgz, so it is clearer to experienced users that this is a compressed tar file.
You can append rm "$dbhost1".*.sql "$dbhost2".*.sql to get rid of the intermediate files.
You could use zip as alternative to compressed tar.
I am not sure why you want this as a one-liner. If you just want to issue one command, you should put the lines in a script and execute that.
With the 'normal' tools used for something like this (tar, zip), I am not aware of a way to avoid the intermediate files.

Addendum

If you really do not want intermediate files (and assuming the output fits in memory) you could try something like the following Python program. You could write this as a one-liner (python -c "from subprocess import check_output; from cStr....), but I really do not recommend that.
from subprocess import check_output
from cStringIO import StringIO
import tarfile

outputdata = [
('$dbhost1.$dbname1.sql', '$dbname1'),
('$dbhost1.$dbname2.sql', '$dbname2'),
('$dbhost1.$dbname3.sql', '$dbname3'),
('$dbhost1.$dbname4.sql', '$dbname4'),
]

with tarfile.open('/var/tmp/backupfile.sql.tgz', 'w:gz') as tgz:
for outname, db in outputdata:
cmd = ['mysqldump', '--opt', '--databases']
cmd.append(db)
cmd.extend(['--host=$dbhost1', '--user=$dbuser1', '--password=$dbpass1'])
out = check_output(cmd)
buf = StringIO(out)
buf.seek(0)
tarinfo = tarfile.TarInfo(name=outname)
tarinfo.size = len(out)
tgz.addfile(tarinfo=tarinfo, fileobj=buf)Depending on how regular your database and 'output' names are you can further improve on that.
|
Let's say I have these series of commands
mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2
mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2

How do I put all their outputs (assuming that the output name is $dbhost.$dbname.sql) inside one file named backupfile.sql.gz using only one line of code?
Edit: From comments to answers below, @arvinsim actually wants a compressed archive file containing the SQL dumps in separate files, not one compressed SQL file.
|
Chaining mysqldumps commands to output a single gzipped file
|
You can compare the backup with the current contents of the system using restore:
restore -C -f backupwhere backup is the file containing your backup.
You can also list the contents of a backup:
restore -t -f backup
|
When dumping my FS with dump, e.g.:
$ dump -0f /path/to/usb/nonexistant-file-name /

I get a binary file, extension-free. That's normal.
I regularly store such backups, if ever I need to restore one. As usual.
But...
How can I check the reliability of such a file produced by dump? How can I check if it's globally correct, to be sure it's restorable?
I cannot test IRL by recovering my own working system, so...
|
How to check if a dump-generated backup is OK?
|
The stat command lets you get specific data, restrict the output to the file attributes you want, and apply user-defined formatting. For example, to get the times in full resolution:
$ stat -c $'%n:\n%x\n%y\n%z' file1 file2
file1:
2015-04-27 08:25:37.199806691 +0200
2015-04-27 08:25:37.199938422 +0200
2015-04-27 08:25:37.199938422 +0200
file2:
2015-04-27 22:05:54.739008929 +0200
2015-04-27 22:05:54.739091897 +0200
2015-04-27 22:05:54.748412643 +0200

Or more compact (<Tab>-separated), with the time information cropped to seconds:
$ stat -c $'%n:\t%.19x\t%.19y\t%.19z' file1 file2
file1: 2015-04-27 08:25:37 2015-04-27 08:25:37 2015-04-27 08:25:37
file2: 2015-04-27 22:05:54 2015-04-27 22:05:54 2015-04-27 22:05:54
|
On Linux, du offers displaying one timestamp: atime OR ctime OR mtime.
Question: Is there an easy way to get all three of them displayed at the same time (one file, all three timestamps)?
I guess I know how to solve this with diff (and possibly cut), but I'm rather looking for a single command to accomplish this task.
|
du: combine both timestamps
|
Inserting pv in your receive-side pipeline should allow you to observe progress:
nc 127.0.0.1 8888 | pv > device_image.dd

If you had pv available on the sending side, you could also use it there:
dd if=/dev/block/mmcblk0 | pv | busybox nc -l -p 8888

But pv probably won't be available on your Android device unless you installed it there.
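If you know roughly how big the partition is, you can also pass that size to pv on the receiving side so it can show a percentage and an ETA (the 16G figure is an assumption):

nc 127.0.0.1 8888 | pv -s 16G > device_image.dd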
|
How can I monitor a netcat transfer from Android to my Linux machine?
I used this command on the Android device (the sender) to make a full dump of my device:

dd if=/dev/block/mmcblk0 | busybox nc -l -p 8888

On the receiver side I use this command:

nc 127.0.0.1 8888 > device_image.dd

I need to watch the progress with pv. How can I do it?
Thanks
|
watch netcat transfer dump from android to pc
|
1. Exporting
Unfortunately, the mysql shell can't dump database contents the way mysqldump does, so it's impossible to execute SQL queries and dump the database in one call to mysql or mysqldump. However you can:
a) Grant user test access to the ${domain} database:
mysql -u root -p <<-MYSQL
...
GRANT ALL PRIVILEGES ON ${domain}.* TO 'test'@'localhost';
GRANT GRANT OPTION ON ${domain}.* TO 'test'@'localhost';
FLUSH PRIVILEGES;
MYSQL

and subsequently call:

mysqldump -u test -p"${psw}" "${domain}" >domain.sql

and finally call:
mysql -u test -p"${psw}" <<-MYSQL
REVOKE ALL PRIVILEGES ON ${domain}.* FROM 'test'@'localhost';
REVOKE GRANT OPTION ON ${domain}.* FROM 'test'@'localhost';
MYSQL

No need to enter the password again as it's passed on the command line. However, passing the password on the command line is insecure, so you may consider using expect or creating a my.cnf with user/password settings and referring to it with --defaults-extra-file= as motivast suggested.
b) You can read the root password in the beginning of your script and then use it in subsequent mysql calls (this described in motivast comment):
read -s -p 'Enter password: ' root_psw
echo
my_cnf=`tempfile -s .cnf -m 400`
echo "[mysql] >${my_cnf}
echo "user=root" >>${my_cnf}
echo "password=${root_psw}" >>${my_cnf}# Delete the password file after this script finish (even in case of error)
cleanup_my_cnf { rm "${my_cnf}"; }
trap cleanup_my_cnf INT TERM EXITmysql --defaults-extra-file="${my_cnf}" <<-MYSQL
...
MYSQL

mysqldump --defaults-extra-file="${my_cnf}" "${domain}" >domain.sql

c) If you only need to dump the table structure and you know the table names, you could use the SHOW CREATE TABLE SQL:
mysql -u root -p <<-MYSQL
...
use ${domain};
tee domain.dump;
SHOW CREATE TABLE table1;
SHOW CREATE TABLE table2;
MYSQL

But this is too exotic, and domain.dump would need a bit of editing afterwards.
2. Importing
That's pretty easy with the source command (same as in bash):
mysql -u root -p <<-MYSQL
...
use test;
source ${domain}.sql;
MYSQL
|
The following code drops a DB user and a DB instance by the name of test, if both exist, then creates an authorized, all-privileged DB user and a DB instance with the same name (also test).
mysql -u root -p <<-MYSQL
DROP user IF EXISTS 'test'@'localhost';
SELECT user FROM mysql.user;
DROP database IF EXISTS test;
show databases;
CREATE user 'test'@'localhost' IDENTIFIED BY '${psw}';
CREATE database test;
GRANT ALL PRIVILEGES ON test.* TO test@localhost;
MYSQL

I lack two things in this code:

Exporting ${domain} into a ${domain}.sql mysqldump.
Importing the ${domain}.sql mysqldump into the test DB instance.

How could I add these two actions inside the heredocument? I don't want them as separate actions outside the heredocument (which would require entering the username and password again and again); rather, I need them as regular mysql queries inside the heredocument, coming right after the last GRANT query.
|
Exporting and importing a mysqldump from within a mysql CLI heredocument
|
The source code has this:
for (cp = fp->fname; *cp; cp++)
if (!vflag && (*cp < ' ' || *cp >= 0177))
*cp = '?';

So it looks like it will substitute '?' for non-printable-ASCII characters unless you give restore the -v option or, in interactive mode, type the verbose command.
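So, as a sketch, adding -v to the interactive invocation should leave the filenames untouched:

sudo restore -ivf dumpfile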
|
When using the interactive mode of the restore utility to restore backups made with dump, UTF-8-encoded filenames don't display correctly; see the example below. The ??s should be ös ...
$ sudo restore -if dumpfile
Dump tape is compressed.
restore > ls
./somedir:
lagerl??fSelma_k??rkarlen.txt

restore >
The dump and the restore are performed on the same machine, that has the locale set as follows:
$ locale
LANG=sv_SE.UTF-8
LANGUAGE=sv:en_US:en
|
Why doesn't `restore` display my UTF-8 encoded filenames correctly in interactive mode?
|
There's at least one known problem with mksnap_ffs in 9.0 on UFS filesystems; see this bug fix notice. Unless you want to run the bleeding edge stuff, I think you should dump without -L until 9.1.
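In other words, keep the original invocation but drop the L flag for now:

dump -h 0 -0auf /backup/ada0p2.dump /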
|
dump -h 0 -0Lauf /backup/ada0p2.dump / - causes total system freeze.
server# cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/ada0p2 / ufs rw 1 1
/dev/ada0p3 none swap sw 0 0
/dev/ada1p1 /backup ufs rw 2 2

nodump flags are set on /backup and /usr/home.
The following options are set to make sure there are no errors on the disks:
fsck_y_enable="YES"
background_fsck="NO"FreeBSD 9.0-RELEASE amd64
Please help.
EDIT: This is the last stats I can get before system freezes
server# top
...
42 processes: 2 running, 40 sleeping
CPU: 0.0% user, 0.0% nice, 47.6% system, 3.6% interrupt, 48.8% idle
Mem: 71M Active, 39M Inact, 139M Wired, 420K Cache, 124M Buf, 1718M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
1723 root 1 97 0 10056K 1188K CPU0 0 0:18 85.06% mksnap_ffs
1711 root 1 20 0 10056K 1360K wait 1 0:00 0.00% dump
|
dump freezes system
|
The command you give,
dump -0f /dev/sdc1 /dev/sda2will back up the contents of /dev/sda2, overwriting all the first partition of your USB key (which won't mean much to standard disk tools then).
If you want to generate a backup file on your USB key instead, you need to mount the key and tell dump to dump to a file on the key:
dump -0f /media/usb/backup /dev/sda2will create a file named backup on the key (you need to replace /media/usb in the command with the real mountpoint for the key).
You can specify a mountpoint instead of a device to back up:
dump -0f /media/usb/backup /will back up the root filesystem.
|
I am dumping my live partition.
All works ok in terminal. Until the 100%.
I dump from live mint partition to an USB key.
$ dump -0f /dev/sdc1 /dev/sda2

Once done, I look at my USB key.
Nothing on it, full free space according to disk tools.
Am I going wrong somewhere?
|
Linux dump(8) outputs nothing
|
Firstly, you should ensure that your module exists in the module directories. For example:
find /lib/modules/$(uname -r) -name 'ubi.ko'If your module doesn't exist you need to build it.
Secondly, the output modprobe: module ubi not found in modules.dep tells you that there is no info about the module in modules.dep (man 5 modules.dep).
You need to call depmod (man 8 depmod) to complete the module dependency info.
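A minimal sketch of that sequence (assuming the module file is actually present):

find /lib/modules/$(uname -r) -name 'ubi.ko'   # confirm the module exists
depmod -a                                      # rebuild modules.dep
modprobe ubi mtd=0                             # retry loading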
|
I am trying to do something like this - link.
None of the examples on the web work at all.
modprobe ubi mtd=0
modprobe: module ubi not found in modules.dep

modprobe ubi
modprobe: module ubi not found in modules.dep

modprobe ubi mtd=/dev/mtd0
modprobe: module ubi not found in modules.dep
|
How to use ubifs image with modprobe? To extract ubifs image
|
Nothing is excluded. It's a level 0 dump, it dumps everything on that file system.
A level 1 dump would dump everything that was changed since the last level 0 dump.
A level 2 dump would dump everything changed since the last level 1 dump (if there was a level 1 dump, otherwise since the level 0 dump).
Hence you could do incremental backups by doing a level 0, then 1, then 2 etc, until you do your next level 0 and start over again. If you need to restore, you'd have to restore your level 0, then 1 on top of that, then 2 and so on.
You could do differential backups by first doing a level 0, then doing level 1 backups, until your next level 0. This has the benefit of if you have to restore a file, or the entire file system, there's only 2 places to look, in either the latest level 1 or the level 0. You could save each of the intervening level 1 backups to be able to undelete a file that was newer than in the level 0 but older than the most recent level 1.
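As a sketch, a weekly cycle might look like this; the paths are assumptions, and -u updates /etc/dumpdates so later levels know what has changed:

dump -0uf /backup/sun.dump /   # Sunday: full (level 0) dump
dump -1uf /backup/mon.dump /   # Monday: everything changed since the level 0
dump -2uf /backup/tue.dump /   # Tuesday: everything changed since the level 1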
Note: If /tmp is on a different file system, then it is not dumped. Dump works on a per-file-system basis. Hence if you have 3 file systems mounted, you will need to dump them 3 times. Furthermore, dump dumps the entire file system; you can't dump a directory. Use tar for that!
|
When using dump, level 0, to save the whole concerned partition :
$ dump -0f /path/to/target/drive /

I'm told some folders are excluded (I would guess at least the tmp folder).
I have not found more detail about it.
What is the default list of excluded folders, please?
|
Linux dump, which folders/files are excluded from first backup?
|
Making it an answer as suggested: locate only searches a database built periodically by updatedb, so it won't see a file created a few minutes ago, while find searches the live filesystem. Since you ran dump from / with a relative filename, the dump was most likely written to /home_fs_dump:

find / -name "home_fs_dump"
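Alternatively, as a hedged sketch, refresh the locate database first and locate should then find it too:

sudo updatedb
locate home_fs_dump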
|
I dumped a file system and chose the name for the dump ("home_fs_dump"); the dump was declared successful, and yet I couldn't locate this name anywhere in the system. (And at the end I tested whether locate gives me any files at all, or whether I am using it incorrectly.)
Is a dump findable by some command other than locate? If yes, which one?
Or did I make a mistake of some other sort somewhere?
(I need to know this to restore the dump.)
[root@12345 /]# dump -0uf home_fs_dump /dev/mapper/fedora_12345-home
DUMP: Date of this level 0 dump: Sat Apr 25 21:08:02 2015
DUMP: Dumping /dev/mapper/fedora_12345-home (/home) to home_fs_dump
DUMP: Label: none
DUMP: Writing 10 Kilobyte records
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 22551 blocks.
DUMP: Volume 1 started with block 1 at: Sat Apr 25 21:08:02 2015
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: Closing home_fs_dump
DUMP: Volume 1 completed at: Sat Apr 25 21:08:04 2015
DUMP: Volume 1 23010 blocks (22.47MB)
DUMP: Volume 1 took 0:00:02
DUMP: Volume 1 transfer rate: 11505 kB/s
DUMP: 23010 blocks (22.47MB) on 1 volume(s)
DUMP: finished in 2 seconds, throughput 11505 kBytes/sec
DUMP: Date of this level 0 dump: Sat Apr 25 21:08:02 2015
DUMP: Date this dump completed: Sat Apr 25 21:08:04 2015
DUMP: Average transfer rate: 11505 kB/s
DUMP: DUMP IS DONE
[root@12345 /]# locate home_fs_*
[root@12345 /]# locate *fs_dump
[root@12345 /]# locate *_fs_*
[root@12345 /]# locate home_*
[root@12345 /]# locate home*
/home
/etc/selinux/targeted/contexts/files/file_contexts.homedirs
/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin
/etc/selinux/targeted/modules/active/file_contexts.homedirs
/etc/selinux/targeted/modules/active/homedir_template
/etc/selinux/targeted/modules/tmp/file_contexts.homedirs
/etc/selinux/targeted/modules/tmp/homedir_template
(...)
|
Can't find the location of DUMPed file system
|
Well...
After tons of retries, I solved this root filesystem restore problem by using the installation DVD's rescue mode.
I discovered that restoring the root filesystem from a dump always conflicts with the running OS, so using rescue mode works around that.
I'll make another attempt later to see whether it's possible to restore the root filesystem while the OS is running.
|
I'm currently testing a backup/restore of the RHEL 6.4 OS via "dump" and "restore" in a testing environment, and I do know that RHEL 6.4 seems outdated nowadays. But some enterprises are still using this version of RHEL to run their services.
Here's the scenario: back up the system and critical programs in case of a host crash/failure event.

The test RHEL 6.4 host used for the backup runs as a Windows Hyper-V VM, and the OS root is installed on an LVM logical volume.
In order to backup the system, I placed system into single user mode and used command to backup the root filesystem
dump -0uf /<path_to_a_second_storage_to_store_dump>/mybackup.dump /

The dump showed "DUMP IS DONE" on screen and the dump file was created with a size of about 2.2 GB, therefore I believed that the backup was successful.

In order to simulate a host crash event, I reinstalled the RHEL 6.4 system using an LVM logical volume and booted the system into single user mode before the restoration.
However, after restoring root filesystem using
restore -rf /<path_to_a_second_storage_to_store_dump>/mybackup.dump

The screen showed a kernel panic and some other errors, and eventually hung.
I retried several times but always failed.
Can anyone give me some hints why the restoration can't be completed?
|
failed to restore root filesystem from dump backup
|
Solution found
First rewind tape
mt rewind

then pass the arguments like this:

dump 0udsbf 54000 6000 126 /dev/rmt12 /dev/ra0g

for a 2GB 4mm tape:
#2g tape
54000 #density
6000 #size
126 #block factor
|
On old 43bsd i want to dump /usr
This command works
dump 0uf /dev/rmt12 /dev/ra0a

dump is the command; 0u means a full (level 0) dump and update /etc/dumpdates; f means use /dev/rmt12,
and /dev/ra0a is the root partition.
The problem is when I want to dump /usr, which is big; the tape is seen as a small tape, but it is big enough to contain /usr.
The question is: how to pass the size option?
I have tried
dump 0ufs56000 /dev/rmt12 /dev/ra0g
dump 0us5600f/dev/rmt12 /dev/ra0g
dump 0us5600f /de/rmt12/dev/ra0g

And they all fail.
I want to pass the size 5600 to dump; how do I do that?
|
dump with old 43BSD,question about tape size
|
I ran into this issue today on a 4.8.0 kernel.
According to this forum post it can be circumvented by
$ echo options usb-storage quirks=357d:7788:u | sudo tee /etc/modprobe.d/blacklist_uas_357d.conf
$ sudo update-initramfs -u

and rebooting.
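After the reboot you can check, as a hedged sanity test, whether the quirk was picked up (this sysfs path is exposed for the usb-storage module parameter on typical kernels):

cat /sys/module/usb_storage/parameters/quirks   # should include 357d:7788:u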
|
On Ubuntu 15.10, when I want to format an external 4 TB disk connected via USB3 (on a StarTech USB/eSATA hard disk dock) with the NTFS filesystem, I get a lot of I/O errors, and the format fails.
I tried GParted v 0.19, and GParted on the latest live CD gparted-live-0.23.0-1-i586.iso, with the same problem.
After that, and using GParted on Ubuntu 15.10 and the same USB3 connection, I tried to format as ext4, without problems. It's really strange.
Because I don't know whether the mkfs.ext4 tool used by GParted to format the disk tests the disk the way mkntfs does, I first supposed that the problem was linked to the new disk. Perhaps this new disk is causing problems. So I tried running e2fsck -c on this HDD. On Ubuntu 15.10, e2fsck -c freezes at 0.45%, and I don't know why.
So, using another version of Ubuntu (15.04) on the same PC, I tried connecting the same 4 TB disk using the eSATA connection of the StarTech HDD dock. This time, e2fsck -c ran correctly.
After some hours, you can see the result:
sudo e2fsck -c /dev/sdc1
e2fsck 1.42.12 (29-Aug-2014)
ColdCase: recovering journal
Checking for bad blocks (read-only test): done
ColdCase: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

ColdCase: ***** FILE SYSTEM WAS MODIFIED *****
ColdCase: 11/244195328 files (0.0% non-contiguous), 15377150/976754176 blocks

I'm not an expert in badblocks output, but it seems there is no bad block at all on this disk?
So, if the problem is not the hard drive, maybe the problem can be linked to mkntfs used over USB3? What other tests can I try?
Some information about the USB dock:
➜ ~ lsusb
...
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
...➜ ~ sudo lsusb -v -d 174c:55aa
[sudo] password for reyman:

Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 3.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 9
idVendor 0x174c ASMedia Technology Inc.
idProduct 0x55aa ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
bcdDevice 1.00
iManufacturer 2 asmedia
iProduct 3 ASM1053E
iSerial 1 123456789012
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 121
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 36mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 80 Bulk-Only
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 1
bNumEndpoints 4
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 98
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Data-in pipe (0x03)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Data-out pipe (0x04)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 15
MaxStreams 16
Status pipe (0x02)
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x04 EP 4 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0400 1x 1024 bytes
bInterval 0
bMaxBurst 0
Command pipe (0x01)
Binary Object Store Descriptor:
bLength 5
bDescriptorType 15
wTotalLength 22
bNumDeviceCaps 2
USB 2.0 Extension Device Capability:
bLength 7
bDescriptorType 16
bDevCapabilityType 2
bmAttributes 0x00000002
Link Power Management (LPM) Supported
SuperSpeed USB Device Capability:
bLength 10
bDescriptorType 16
bDevCapabilityType 3
bmAttributes 0x00
wSpeedsSupported 0x000e
Device can operate at Full Speed (12Mbps)
Device can operate at High Speed (480Mbps)
Device can operate at SuperSpeed (5Gbps)
bFunctionalitySupport 1
Lowest fully-functional device speed is Full Speed (12Mbps)
bU1DevExitLat 10 micro seconds
bU2DevExitLat 2047 micro seconds
Device Status: 0x0001
Self Powered

Information about the disk in question: /dev/sdd
➜ ~ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: ACD5760B-2898-435E-82C6-CB3119758C9B

Device     Start        End    Sectors Size Type
/dev/sdd1   2048 7814035455 7814033408 3.7T Linux filesystem

dmesg returns a lot of errors about the disk; see this extract:
[ 68.856381] scsi host6: uas_eh_bus_reset_handler start
[ 68.968376] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd
[ 68.989825] scsi host6: uas_eh_bus_reset_handler success
[ 99.881608] sd 6:0:0:0: [sdd] tag#12 uas_eh_abort_handler 0 uas-tag 13 inflight: CMD OUT
[ 99.881618] sd 6:0:0:0: [sdd] tag#12 CDB: Write(16) 8a 00 00 00 00 00 e8 c4 08 00 00 00 00 08 00 00
[ 99.881856] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT
[ 99.881861] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 f0 00 00 00 10 00 00
[ 99.881967] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT
[ 99.881972] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 00 00 00 00 f0 00 00
[ 99.882085] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT
[ 99.882090] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 01 07 10 00 00 00 f0 00 00
[ 99.882171] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT
[ 99.882175] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 01 06 20 00 00 00 f0 00 00
[ 99.882255] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD OUT
[ 99.882258] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 01 05 30 00 00 00 f0 00 00
[ 99.882338] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT
[ 99.882342] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 cd 01 04 40 00 00 00 f0 00 00
[ 99.882419] sd 6:0:0:0: [sdd] tag#11 uas_eh_abort_handler 0 uas-tag 12 inflight: CMD OUT
[ 99.882423] sd 6:0:0:0: [sdd] tag#11 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 00 00 00 00 f0 00 00
[ 99.882480] sd 6:0:0:0: [sdd] tag#10 uas_eh_abort_handler 0 uas-tag 11 inflight: CMD OUT
[ 99.882483] sd 6:0:0:0: [sdd] tag#10 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 f0 00 00 00 f0 00 00
[ 99.882530] sd 6:0:0:0: [sdd] tag#9 uas_eh_abort_handler 0 uas-tag 10 inflight: CMD OUT
[ 99.882532] sd 6:0:0:0: [sdd] tag#9 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fa e0 00 00 00 f0 00 00
[ 99.882593] sd 6:0:0:0: [sdd] tag#8 uas_eh_abort_handler 0 uas-tag 9 inflight: CMD
[ 99.882596] sd 6:0:0:0: [sdd] tag#8 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fb d0 00 00 00 f0 00 00
[ 99.882667] scsi host6: uas_eh_bus_reset_handler start
[ 99.994700] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd
[ 100.015613] scsi host6: uas_eh_bus_reset_handler success
[ 135.962175] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT
[ 135.962185] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 f0 00 00 00 10 00 00
[ 135.962428] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT
[ 135.962436] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 00 00 00 00 f0 00 00
[ 135.962567] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT
[ 135.962576] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 40 77 10 00 00 00 f0 00 00
[ 135.962682] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT
[ 135.962690] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 40 76 20 00 00 00 f0 00 00
[ 135.962822] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD
[ 135.962830] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 40 75 30 00 00 00 f0 00 00
[ 160.904916] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT
[ 160.904926] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 00 00 29 08 00 00 00 08 00 00
[  160.905068] scsi host6: uas_eh_bus_reset_handler start

I found on this forum post that there is some problem with UAS and new Linux kernels. The problem seems to be known in many places on the internet; USB3 + Linux seems problematic in many cases -- but for old kernels. Any ideas to resolve this problem on a more recent kernel?
My kernel is:
➜ ~ uname -r
4.2.0-16-generic

Hmm, it seems UAS is broken for various USB3 chips from ASMedia Technology Inc.; you can see more information here.
I suppose this is my problem, but how can I resolve it now, and which chip is used for the USB3 implementation in the StarTech dock?
|
Connection problem with USB3 external storage on Linux (UAS driver problem)
|
Multiply-claimed blocks are blocks which are used by multiple files, when they shouldn’t be. One consequence of that is that changes to one of those files, in one of the affected blocks, will also appear as changes to the files which share the blocks, which isn’t what you want. (Hard links are a different scenario, which doesn’t show up here.)
If there is data loss here, it has already occurred, and it won’t easily be reversible; but it could be made worse...
If you answer “no” to the fsck question, the file system will remain in an inconsistent state. If you answer “yes”, then fsck will copy the shared blocks so that they can be re-allocated to a single file — with the 84 files involved here, each block would be copied 83 times. This will avoid future data loss, since changes to files will be limited to each individual file, as you’d expect. However cloning the blocks could involve overwriting data in other blocks, which currently appear to be unused, but might contain data you want to keep.
So the traditional data-recovery advice applies: if you think you need to recover data from the file system, do not touch it; make a copy of it on another disk and work on that to recover the data. The scenario here where this might be desirable is as follows. Files A and B used to be separate, but following some corruption somewhere, file B now shares blocks with file A. If nothing has overwritten file B’s old blocks, the data is still there, but it is no longer accessible. As long as nothing overwrites those blocks, they can be recovered (with a fair amount of effort perhaps). But once they’re overwritten, they’re gone; and here, cloning the shared blocks from file A could overwrite the old data...
In summary, if you have backups, or you know that the data can be recovered easily, answer “yes”. Otherwise, stop fsck, copy the file system somewhere else, and if you need the system back up and running, run fsck again and answer “yes” (and recover the data from the copy). If the data is important and needs to be recovered, copy the file system somewhere else, but leave the original alone — if you need the system back up and running, make another copy and run the system off of that, after running fsck on it.
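A minimal sketch of that copy step (device and destination paths are assumptions; ddrescue copes better than plain dd if the disk is also marginal):

sudo ddrescue /dev/mapper/xxx /mnt/other-disk/fs.img /mnt/other-disk/fs.map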
|
When running
e2fsck -cck /dev/mapper/xxxI am prompted with
has 487 multiply-claimed block(s), shared with 84 file(s):
... (inode #221446306, mod time Tue Feb 20 19:48:38 2018)
... (inode #221446305, mod time Tue Feb 20 19:48:32 2018)
... (inode #221446304, mod time Tue Feb 20 19:48:38 2018)
... (inode #221446303, mod time Tue Feb 20 19:48:12 2018)
... (inode #221446302, mod time Tue Feb 20 19:59:04 2018)
... (inode #221446300, mod time Tue Feb 20 19:47:52 2018)
Clone multiply-claimed blocks<y>?

What will be the possible consequence of continuing with "yes"? Will there be complete data loss? What is the result if I continue with "no"?
|
Should I answer yes to "Clone multiply-claimed blocks<y>?" when running e2fsck?
|
The “recovering journal” message is output by e2fsck_run_ext3_journal, which is only called if ext2fs_has_feature_journal_needs_recovery indicates that the journal needs recovery. This “feature” is a flag which is set by the kernel whenever a journalled Ext3/4 file system is mounted, and cleared when the file system is unmounted, when recovery is completed (when mounting an unclean file system, or remounting a file system read-only), and when freezing the file system (before taking a snapshot).
Ignoring snapshots, this means that e2fsck only prints the message when it encounters a file system which hasn’t been cleanly unmounted, so its presence is proof of an unclean unmount (and perhaps shutdown, assuming the unmount was supposed to take place during shutdown).
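You can inspect those superblock fields yourself, for example with dumpe2fs (the device name is taken from the logs below):

sudo dumpe2fs -h /dev/mapper/alan_dell_2016-fedora | grep -i 'features\|state'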
|
Can we confirm the log message "recovering journal" from fsck should be interpreted as indicating the filesystem was not cleanly unmounted / shut down the last time? Or, are there other possible reasons to be aware of?
May 03 11:52:34 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: recovering journal
May 03 11:52:42 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: clean, 365666/2621440 files, 7297878/10485760 blocks

May 03 11:52:42 alan-laptop systemd[1]: Mounting /sysroot...
May 03 11:52:42 alan-laptop kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
May 03 11:52:42 alan-laptop systemd[1]: Mounted /sysroot.Compare fsck of /home from the same boot, which shows no such message:
(ignore the -1 hour jump, it's due to "RTC time in the local time zone")
May 03 10:52:57 alan-laptop systemd[1]: Starting File System Check on /dev/mapper/alan_dell_2016-home...
May 03 10:52:57 alan-laptop systemd-fsck[743]: /dev/mapper/alan_dell_2016-home: clean, 1469608/19857408 files, 70150487/79429632 blocks
May 03 10:52:57 alan-laptop systemd[1]: Started File System Check on /dev/mapper/alan_dell_2016-home.
May 03 10:52:57 alan-laptop audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsc>
May 03 10:52:57 alan-laptop systemd[1]: Mounting /home...
May 03 10:52:57 alan-laptop systemd[1]: Mounted /boot/efi.
May 03 10:52:57 alan-laptop kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
May 03 10:52:57 alan-laptop systemd[1]: Mounted /home.
May 03 10:52:57 alan-laptop systemd[1]: Reached target Local File Systems.

Version
$ rpm -q --whatprovides $(which fsck.ext4)
e2fsprogs-1.43.8-2.fc28.x86_64

Motivation
This happened immediately after an offline update; it was most likely triggered by a PackageKit bug:
Bug 1564462 - offline update performed unclean shutdown
where it effectively uses systemctl reboot --force. I'm concerned that there's a bug in Fedora here, because systemd forced shutdown is still supposed to kill all processes and then unmount the filesystems cleanly where possible.
The above messages are from Fedora 28, systemd-238-7.fc28.1.x86_64. Fedora 27 was using a buggy version of systemd which could have failed to unmount filesystems:
systemd-shutdown[1]: Failed to parse /proc/self/mountinfo #6796
however the fix should be included in systemd 235 and above. So I'm concerned there's yet another bug lurking somewhere.
The filesystem is on LVM.
I seem to remember that shutdown is associated with a few screenfuls of repeated messages in a few seconds immediately before the screen goes black. I think they are from inside the shutdown initrd. I don't know if this represents a problem or not.
|
Does "recovering journal" prove an unclean shutdown/unmount?
|
According to man fstab:

The sixth field (fs_passno). This field is used by the fsck(8) program to determine the order in which filesystem checks are done at reboot time. The root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2. Filesystems within a drive will be checked sequentially, but filesystems on different drives will be checked at the same time to utilize parallelism available in the hardware. If the sixth field is not present or zero, a value of zero is returned and fsck will assume that the filesystem does not need to be checked.

So 3 is meaningless. Moreover, fstab only influences the check at boot time, not every time a device is mounted. To have the check happen during boot, change the 6th field to 2. If you want a check on every mount, you can do it with a simple script or even an alias, for example:
alias bk_mount='fsck -a UUID=c870ccb3-e472-4a3e-8e82-65f4fdb73b38 && \
mount /media/backup_disk_1'
|
I have a backup script that mounts and unmounts a USB drive.
I just noticed that its warning me:
EXT3-fs warning: maximal mount count reached, running e2fsck is recommendedMy question:
How can I get it to run e2fsck automatically when the mount command is run?
This is how it looks in /etc/fstab:
UUID=c870ccb3-e472-4a3e-8e82-65f4fdb73b38 /media/backup_disk_1 auto defaults,rw,noauto 0 3

So <pass> is 3, and I was expecting fsck to be run when required.
EDIT
This is how I ended up doing it, based on the given answer:
(In a Bash script)
function fsck_disk {
UUID=$1
echo "Checking if we need to fsck $UUID"
MCOUNT=`tune2fs -l "UUID=$UUID" 2> /dev/null | sed -n '/Mount count:\s\+/s///p'`
if [ "$MCOUNT" -eq "$MCOUNT" ] 2> /dev/null
then
echo "Mount count = $MCOUNT"
if (( $MCOUNT > 30 ))
then
echo "Time to fsck"
fsck -a UUID=$UUID \
1>> output.log \
2>> error.log
else
echo "Not yet time to fsck"
fi
fi
}

fsck_disk a60b1234-c123-123e-b4d1-a4a111ab2222
|
Run fsck automatically when calling mount from command line
|
The fsck already takes place within the initrd/initramfs (after an unclean shutdown this stage takes several seconds longer, with a lot of disk activity, as the journal is replayed), and thus, by the time the normal, more verbose filesystem checks are run from the main system, the filesystem is already clean.
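If you want to confirm this from the running system, the superblock records the state and the time of the last check; a hedged sketch, with the device name an assumption:

sudo tune2fs -l /dev/sda1 | grep -i 'state\|last checked'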
|
My root-partition is formatted as ext4-filesystem.
I notice that, whenever my machine crashes and I have to hard-reset it, when booting up again and the root filesystem is checked this step takes a bit (like one to two seconds) longer than when booting from a cleanly shut down system, but it is reported as "clean" (and nothing like /dev/<rootpartition> was not cleanly unmounted, check forced). The filesystem is 92% full (352 GiB).
My question: I wonder if this is the normal and a safe behaviour of ext4 or some bug in the startup-scripts. I know that ext4 has much faster fsck than ext3, but I am worried about that it is reported as "clean" after a system crash.
When I run e2fsck -f manually on that partition, the check takes about as long as on an ext2/ext3 filesystem. So I am worried, and being so, I tuned my filesystem to be checked at every boot (tune2fs -c 1), which results in a full check taking as long as e2fsck -f on every boot.
Edit, just to clarify:
After a non-clean reset, usually, on /var, which is reiserfs, fsck replays journal entries; on /boot, which is ext2, fsck runs, displays progress bar, and reports "clean" after running. Only on the root filesystem no "check forced" and no fsck-progress appears, which do appear for the other file systems even if they turn out to be clean. That is the worrying difference!
|
ext4 reported as clean by fsck after hard reset: Is that normal?
|
As stated very early by Theodore Ts'o himself, there can be two immediate reasons for “multiply-claimed blocks” to be reported by fsck:

One is that one or more blocks in the inode table get written to the wrong place, overwriting another block(s) in the inode table.

This is most often triggered by some kernel bug. (Ts'o describes an easily recognizable pattern in such cases, unlike the random damage that external corruption would generate.)
This has occasionally happened in the early days of new features for the ext family of filesystems, mainly because of rare race conditions:

with bigalloc,
with delayed allocation,
and, more recently, as pointed out by frostschutz in the OP's comments, with the fast_commit feature.

The second case is one where the block allocation bitmap gets
corrupted, such that some blocks which are in use are marked as free,
and then the file system is remounted and files are written to the
file system, such that the blocks are reallocated for new files.

These appear much more at random, following some corruption whose root cause is not likely to be a kernel bug.
This includes unclean shutdowns, badly written applications, mount options that make no sense for the hardware environment, and miscellaneous memory and other hardware faults.
Of course, one should not forget the possible responsibility of fsck itself, producing erroneous reports or even being the root cause of the problem when badly trying to fix some other filesystem inconsistency; there have actually been such cases.
How can you avoid them? Well, from the above, you can only expect to limit the probability of their occurrence:

Stay low-tech ;-) Avoid enabling brand-new features as soon as they become available,
Use ECC memory and reliable storage devices,
Fine-tune your filesystem options (offered at mkfs time) and select mount options wisely (in coherence with the environment),
Run all untrusted software sandboxed.
Ultimately, do as Ts'o does in case of a crash and run e2croncheck:

What I'm actually doing right now is after every crash, I'm rebooting,
logging in, and running e2croncheck right after I log in. This
allows me to notice any potential file system corruptions before it
gets nasty … E2croncheck is much more convenient, since I can be doing
other things while the e2fsck is running in one terminal window.
|
“Multiply-claimed blocks” is an error reported by fsck when blocks appear to belong to more than one file. This causes data corruption, since both files change when one of the files is written.
But what can be the original causes of multiply claimed blocks? How are they created and how can I avoid them?
|
What can cause “multiply claimed blocks” on an ext4 drive?
|
That's a broken disk. SMART values of interest (last column):

  1 Raw_Read_Error_Rate     ... Pre-fail  Always  FAILING_NOW  48799
5 Reallocated_Sector_Ct ... Pre-fail Always - 30664
9 Power_On_Hours ... Old_age Always - 3812
194 Temperature_Celsius ... Old_age Always - 52
196 Reallocated_Event_Count ... Old_age Always - 1543
197 Current_Pending_Sector ... Old_age Always - 93
198 Offline_Uncorrectable   ... Old_age   Offline  -            0

The only other comment about this obviously failed disk is that the temperature seems a little higher than might be normal. Is there enough airflow around the disk?
Since it's only six months old I would try to get it replaced with a new one by the seller as "faulty when purchased". However, failing that you will get a warranty replacement from Western Digital - see https://support-en.wd.com/app/warrantystatusweb and enter your serial number starting WX.
There is little point doing anything more with it except:

Trying to rescue any data still on the disk for which you don't have a backup (use ddrescue at this point - nothing else will touch it; see the sketch below)
Registering that warranty claim

Good luck.
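A minimal ddrescue sketch for that rescue step (source device and destination paths are assumptions; write the image to a different, healthy disk):

sudo ddrescue -d /dev/sdc /mnt/good-disk/wd5tb.img /mnt/good-disk/wd5tb.map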
|
I have this 6 months old 5TB external hard drive. I started to experience a lot of I/O errors recently, so I backed up my data then used gparted to create a new partition table, a new main partition.
Then I ran the following code trying to see if it's a hardware problem :
sudo e2fsck -fcky /dev/sdc1
So far I have the following results:

Checking for bad blocks (read-only test): 9.24% done, 11:44:51 elapsed. (8333/0/0 errors)

and I'm honestly kind of shocked; I didn't expect it to be this bad.
So I guess my question really is :
Is it possible to get false results from e2fsck based on wrong usage of the tool ? or are these results always proof that the hard disk is genuinely very damaged ?
Update : Here's the smartctl results :
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-47-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: WDC WD50NMZW-59BCBS0
Serial Number: WD-WXC2DA1269FR
LU WWN Device Id: 5 0014ee 2bf66ed54
Firmware Version: 01.01A01
User Capacity: 5.000.947.523.584 bytes [5,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 4800 rpm
Form Factor: 2.5 inches
TRIM Command: Available, deterministic
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Sep 11 12:26:06 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 73) The previous self-test completed having
a test element that failed and the test
element that failed is not known.
Total time to complete Offline
data collection: (12480) seconds.
Offline data collection
capabilities: (0x1b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 530) minutes.
SCT capabilities: (0x30b5) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 001 001 051 Pre-fail Always FAILING_NOW 48799
3 Spin_Up_Time 0x0027 253 253 021 Pre-fail Always - 2875
4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 1290
5 Reallocated_Sector_Ct 0x0033 185 185 140 Pre-fail Always - 30664
7 Seek_Error_Rate 0x002e 198 195 000 Old_age Always - 899
9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 3812
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 32
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 27
193 Load_Cycle_Count 0x0032 194 194 000 Old_age Always - 20670
194 Temperature_Celsius 0x0022 100 094 000 Old_age Always - 52
196 Reallocated_Event_Count 0x0032 001 001 000 Old_age Always - 1543
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 93
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: unknown failure 90% 3792 -
# 2 Short offline Completed: unknown failure 90% 3792 -
# 3 Short offline       Completed: unknown failure    90%      3792         -

Selective Self-tests/Logging not supported
|
Is it possible to run e2fsck in a way that'll give false results?
|
I resolved this problem
$ dmesg | grep bsd
[    3.467958]  sda1:

Then:
Then:
$ sudo mount -t ufs -r -o ufstype=ufs2 /dev/sdb1 ~/freebsdOf course, for another version of linux line ubuntu we need to know:
Possible common types are:
old old format of ufs
default value, supported as read-only
44bsd used in FreeBSD, NetBSD, OpenBSD
ufs2 used in FreeBSD 5.x
5xbsd synonym for ufs2
sun used in SunOS (Solaris)
sunx86 used in SunOS for Intel (Solarisx86)
hp used in HP-UX
nextstep used in NextStep
nextstep-cd used for NextStep CDROMs (block_size == 2048)
openstep used in OpenStepand we have to use this command for ubuntu and like that
$ sudo mount -t ufs -r -o ufstype=44bsd /dev/sdb1 /DATA
|
I have a problem like this question
How disk became suddenly write protected in spite configuration is read/write?
And I used these commands to resolve that:

umount /dev/sdb1
e2fsck /dev/sdb1
mount /dev/sdb1

but
~# e2fsck /dev/sdb1
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>

/dev/sdb1 contains a ufs file system

Additional commands to help you see more details:
~# nano /etc/fstab
UUID=###951671### /DATA ufs defaults 1 2

~# mkdir /DATA
~# mount /DATA

~# ls -lat | grep DATA
drwxr-xr-x 5 root root 1024 May 26 11:37 DATA

~# df -h | grep sd
/dev/sda1       276G  8.7G  254G   4% /
/dev/sdb1       197G  102G   80G  57% /DATA

~# lsblk -f | grep sd
sda
├─sda1 ext4 ###-c0fb-42ce-9c78-### 253.2G 3% /
├─sda2
└─sda5 swap ###-27b4-485b-98b3-### [SWAP]
sdb
└─sdb1 ufs  ###951671###                            79.3G    52% /DATA

~:/DATA# ls
ls: reading directory '.': Input/output error

~:/DATA# mount -o rw,remount /dev/sdb1
mount: /DATA: mount point not mounted or bad option.

~# umount /DATA
~# e2fsck /DATA
e2fsck 1.44.5 (15-Dec-2018)
e2fsck: Is a directory while trying to open /DATA

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>

~# mount /DATA
mount: /DATA: WARNING: device write-protected, mounted read-only.

In the end, I would like to access this drive (/dev/sdb1) via the /DATA folder.
How can I resolve this problem?
|
How to resolve e2fsck Superblock problem?
|
It could be that the filesystem itself is corrupted, and an fsck is needed. Unfortunately, fsck on Linux (which I assume you're using - correct me if I'm wrong) is probably just a link to the NTFS tool ntfsfix, which is not a greatly useful tool.
In that case, to check, I would recommend using your copy of Windows (which again is an assumption, but there aren't many other reasons for using NTFS) and running chkdsk on it.
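If booting Windows right away isn't an option, ntfsfix can at least clear the dirty flag and schedule a chkdsk for the next Windows boot; a limited stopgap, with the device name an assumption:

sudo ntfsfix /dev/sdb1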
|
I was moving a file with mv but the operation got interrupted. Now I am left with a file I cannot delete on an external NTFS drive. I was moving it from an ext4.
rm file.to.delete
rm: cannot remove ‘file.to.delete’: No such file or directory

ls
total 234M
234M file.to.delete

I got the inum from
ls -i
then
find . -inum 12533 -delete
find: cannot delete `file.to.delete': No such file or directory

What should I do next in order to get rid of this file or this file's entry in the file system?
Thanks
Update: I connected my external NTFS drive to my Windows computer and was able to delete the file. I reconnected the external NTFS to my raspberry pi but am currently having trouble mounting it.
FINAL UPDATE: I reconnected my external NTFS drive to my Windows computer and checked for errors. It found errors and then automatically repaired them. I then reconnected my external NTFS drive to my raspberry pi and mount -a and it mounted no problems. FIXED! :D.
|
I have a file I cannot delete after a file mv operation was interrupted
|
You mentioned that this filesystem is used with very old machines. If the filesystem was originally created with a very old mke2fs tool that did not support the resize_inode filesystem feature to reserve some metadata space for on-line extension of the filesystem, it might be possible that your second run with e2fsck version 1.41.1 just automatically added it.
If I recall correctly, the allocation is completely benign for old systems that don't understand it, but it ensures that some critical metadata structure can extend without a major re-organization, if the filesystem is ever extended.
You can confirm this by running tune2fs -l on the filesystem of your USB drive and on one of the ext2 filesystems of your old machines, and comparing the results. You can do that even if the filesystems are mounted. If the output for your USB drive includes the keyword resize_inode on the Filesystem features: line, and the local ext2 filesystems on your old machines don't have that keyword, then the most likely explanation is that your e2fsck -pfv just took the opportunity to make that tiny allocation in the hope that it might help avoid downtime in the future.
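A minimal sketch of that comparison, assuming /dev/sdb1 is the USB drive and /dev/sda1 holds a local ext2 filesystem on one of the old machines:
# both commands work even on mounted filesystems; look for resize_inode in the feature lists
tune2fs -l /dev/sdb1 | grep 'Filesystem features'
tune2fs -l /dev/sda1 | grep 'Filesystem features'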
|
Abstract: E2fsck found no error with the -n option but did with -p (preen). It corrected the error but did not give any error message. The error is only reflected via the exit code. How should I interpret this?
I am using a USB hard drive with an Ext2 filesystem to store backups of several machines. Recently I had a huge data throughput on that drive, which is why I decided to do an extra filesystem check. In total, I did four e2fsck runs with different options. Here are the commands I used (as root) together with their outputs, which also contain the exit status of e2fsck. Unfortunately some phrases are localized to German but the (presumably) important ones are in English:
1st run, read-only:
# e2fsck -nv /dev/sdb1; echo $?
e2fsck 1.41.1 (01-Sep-2008)
WD-Elements: sauber, 709312/61046784 Dateien, 96258851/244182016 Blöcke
0

2nd run, read-only forced:
# e2fsck -nfv /dev/sdb1; echo $?
e2fsck 1.41.1 (01-Sep-2008)
Durchgang 1: Prüfe Inodes, Blocks, und Größen
Durchgang 2: Prüfe Verzeichnis Struktur
Durchgang 3: Prüfe Verzeichnis Verknüpfungen
Durchgang 4: Überprüfe die Referenzzähler
Durchgang 5: Überprüfe Gruppe Zusammenfassung

709312 inodes used (1.16%)
95492 non-contiguous inodes (13.5%)
# von Inodes mit ind/dind/tind Blöcken: 109958/2429/7
96258851 blocks used (39.42%)
0 bad blocks
8 large files

564029 regular files
121351 directories
0 character device files
0 block device files
11 fifos
506224 links
23073 symbolic links (19397 fast symbolic links)
839 sockets
--------
1215527 files
0

3rd run, preening:
# e2fsck -pv /dev/sdb1; echo $?
WD-Elements: sauber, 709312/61046784 Dateien, 96258851/244182016 Blöcke
0

4th run, preening forced:
# e2fsck -pfv /dev/sdb1; echo $?

709312 inodes used (1.16%)
95492 non-contiguous inodes (13.5%)
# von Inodes mit ind/dind/tind Blöcken: 109958/2429/7
96258853 blocks used (39.42%)
0 bad blocks
8 large files

564029 regular files
121351 directories
0 character device files
0 block device files
11 fifos
506224 links
23073 symbolic links (19397 fast symbolic links)
839 sockets
--------
1215527 files
1

The commands were issued directly one after the other without touching anything else in between.
Please note the differences:

In the first two runs, the filesystem was opened read-only (-n option), while the last two were preening runs (-p option).
The first and the third run were not forced, the second and the last run were (-f).
All runs reported coinciding filesystem data with one exception: The last run (-pfv) reported a different number of "blocks used".
All but the last run exited with status 0, the last one (-pfv) with status 1.

Obviously the last, forced preening run (-pfv) has found (and corrected) a filesystem error that the other runs were not able to find. Unfortunately it does not give any hint about that error in its output.
Now for my questions:

What error was found and corrected there? Was it as simple as an incorrect count of used blocks?
What could have caused that error? The filesystem was always cleanly unmounted.
The filesystem error was finally corrected by e2fsck. But can I trust the data stored therein? Couldn't it be that whatever caused that filesystem error in the first place, also corrupted the data on the disk? This would render all data on the disk worthless. Or is this too paranoid? Why?

The last question distinguishes between filesystem and data. In this respect, Mikel's answer to "Do journaling filesystems guarantee against corruption after a power failure?" is of high relevance. Unfortunately it focuses on journaling filesystems, so it does not apply to Ext2.
Also Gilles' answer to "How to test file system correction done by fsck" is a good read: According to that, fsck only guarantees a consistent state of the filesystem, not necessarily the latest one.
Update 1
In his comment, Luciano Andress Martini pointed out that the observed and apparently puzzling behavior of e2fsck could have been caused by RAM errors in the executing machine. While being a highly relevant aspect in comparable situations, it does not seem to apply here: I checked the RAM with "memtest86+" overnight and it completed 16 passes without errors. In addition, I performed e2fsck -nfv, e2fsck -pfv, and e2fsck -fv runs on the drive under test using another machine (different hardware, kernel, and version of e2fsck). These did not find any filesystem errors and confirmed the filesystem data that was reported by the last e2fsck command shown above, in particular the number of used blocks. Also the total number of blocks (244182016) that was reported by the unforced checks was confirmed.
Update 2
telcoM's answer suggests, that the observed behavior of e2fsck might be explained with changes of the filesystem feature settings that e2fsck does when dealing with very old filesystems. Unfortunately this very consistent explanation does not apply here: The filesystem was actually created with a newer version of mke2fs (1.42.8) which enabled the features ext_attr, resize_inode, dir_index, filetype, sparse_super, large_file. This was not changed by the e2fsck runs described above.
Update 3
Meanwhile the USB drive successfully passed a non-destructive read-write badblocks test (it took 3 days, and yes: the specified block size (-b) and number of blocks (-c) matter a lot) and several offline S.M.A.R.T. tests.
|
What is it that e2fsck does not say?
|
I found the below answer, from here.
The filesystem check on boot is usually read-only until it finds a problem, then it will prompt you before making any changes, so it is probably safe to interrupt.
But it is quite possible (and not uncommon for servers that need to come back up after a power-out) for it to be set to auto-fix, so unless you know for sure that your system is not configured this way, let it run to completion for safety.
Most fsck programs are written in such a way that any changes they make are as atomic as possible, and they will clean up (completing or rolling back any current change) before responding to a TERM or INT signal (SIGINT is what is sent to the active process when ctrl+c is pressed), so even an actively writing fsck should be safe to interrupt. But I would not recommend taking the risk - better safe than sorry!
|
I had some drive errors so I ran e2fsck -cckty to find bad blocks on a 2TB drive. It found some bad blocks at the beginning, but hasn't found any in a day and a half. e2fsck has been running for 40 hours and is 53% done. If I Ctrl-C it, will it update the bad blocks information in the filesystem to reflect the bad blocks it found at the beginning?
|
Stopping e2fsck early
|
It’s OK to let fsck fix this; it refers to a deleted inode. The data has already been deleted, so nothing more will be deleted.
|
When I try to resize the disk, I get this:
resize2fs /dev/sdb
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/sdb' first.

So when I try to run e2fsck, I get the following:
e2fsck -f /dev/sdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 142682 has zero dtime.  Fix<y>?

Is it OK to continue by entering the yes option, or is this something that can delete the data on the disk?
|
rhel + e2fsck + Deleted inode xxxxx has zero dtime
|
The best way to determine whether this particular fsck operation corrected any errors would have been to check its exit code: e2fsck sets bit 1 of its exit code if it corrected errors, and bit 2 if it corrected errors requiring a reboot (i.e. on a mounted file system).
You can also determine that e2fsck didn’t make any change here, because the output doesn’t mention
***** FILE SYSTEM WAS MODIFIED *****

which e2fsck outputs if it made any changes (unless the -p option was specified).
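A small shell sketch of such a check (the device name is a placeholder; run it on an unmounted filesystem):
e2fsck -f /dev/sdX1
rc=$?
# per e2fsck(8): bit 1 = errors were corrected, bit 2 = errors corrected and a reboot is needed
[ $((rc & 1)) -ne 0 ] && echo "filesystem errors were corrected"
[ $((rc & 2)) -ne 0 ] && echo "system should be rebooted"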
|
After doing an fsck on a filesystem, someone asked me if the fsck resolved any problems. I'm not sure how to interpret the following results. Do you see anything important to notice?
root@server1> fsck -fyv /donnees
fsck 1.35 (28-Feb-2004)
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

1468099 inodes used (0%)
114532 non-contiguous inodes (7.8%)
# of inodes with ind/dind/tind blocks: 456970/35761/8
249447788 blocks used (77%)
0 bad blocks
19 large files

1176399 regular files
291142 directories
0 character device files
0 block device files
140 fifos
2 links
407 symbolic links (403 fast symbolic links)
2 sockets
--------
1468092 files
|
How can I see if this fsck operation corrected any filesystem errors?
|
When the partition is in a clean state, there is no actual fsck run, which is why the date isn't updated.
If you want to force it, the -f option does just that: sudo fsck -f /dev/sda1.
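For example, with the device from the question (run the forced check on an unmounted filesystem, or from a live/recovery system):
sudo fsck -f /dev/sda1
# the 'Last checked' timestamp should now reflect this run
sudo dumpe2fs -h /dev/sda1 | grep 'Last checked'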
|
I know of various ways in which to check when the last fsck occurred on a file system. e.g.
$ sudo dumpe2fs -h /dev/sda1 | grep 'Mount count' -A3
dumpe2fs 1.42.12 (29-Aug-2014)
Mount count: 74
Maximum mount count: -1
Last checked: Thu Dec 11 21:37:56 2014
Check interval: 0 (<none>)

This updates for automatic, fstab-initiated fscks. However, it doesn't seem to take into account manual fscks.
$ sudo fsck /dev/sda1
fsck from util-linux 2.25.2
e2fsck 1.42.12 (29-Aug-2014)
<VOLUME_NAME>: clean, 1066411/183140352 files, 572576302/732557824 blocks
$ sudo dumpe2fs -h /dev/sda1 | grep 'Mount count' -A3
dumpe2fs 1.42.12 (29-Aug-2014)
Mount count: 74
Maximum mount count: -1
Last checked: Thu Dec 11 21:37:56 2014
Check interval: 0 (<none>)Is there a way to either update this value, or to find the real last time fsck was run? This is an ext4 volume.
|
How can I tell when my file system was last fsck-ed at all?
|
As long as the system was not doing a major disk-intensive job when things went wrong, and the drive settings were not purposely set to cache data before write, you can be reasonably sure that if all the checks pass, the data is trustworthy. However, depending on the age of the drive and the use case, I would clone the drive to a newer one and use the new drive.
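For the cloning step, GNU ddrescue is one reasonable choice, since it keeps a map file and retries around read errors; a minimal sketch (device names are placeholders, verify them carefully before running):
# clone the suspect drive onto the new one; the map file lets interrupted runs resume
ddrescue -f /dev/sdOLD /dev/sdNEW /root/clone.map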
|
Topic
If a filesystem was successfully repaired by e2fsck, it is guaranteed that it is in a consistent (clean) state. However, it is not easy to assess the reliability of the files themselves after the repair.
This question aims at criteria to judge the integrity of the data stored in ext2 and ext4 filesystems that were repaired after being damaged in a specific failure scenario.

Background
I use an ext2 filesystem in an external USB HDD (i.e. platter based, no flash) to back up several Linux machines. For that, I mount the drive manually with the options rw, relatime (in total), so no sync option is used.
Just recently, after doing a large backup (several 100 GB) from an openSUSE 13.1 system (Linux kernel 3.11.6-4) and after all write activities to the USB HDD were finished, I was not able to unmount that drive: The umount command blocked and did not return. The same applied to a subsequently issued sync command, which entered an uninterruptible sleep (ps state D).
This was when I unplugged the USB HDD, which did not release the blocked commands.
An attempt to power off the machine thereafter by standard means (pm-utils) also got stuck. To bring the machine down, I used the SysRq salute r, e, i, s, u, b. But even there, the requests s (sync) and u (remount read-only) did not succeed: According to the kernel documentation for sysrq.c (sysrq.txt), these requests are not completed before they explicitly announce that they are, which none of them did in this case. So none of the mounted filesystems was confirmed to be cleanly unmounted when the SysRq b (reboot) hit, which finally initiated a complete reboot.
Checking all involved filesystems (ext4 on root partition and ext2 on USB HDD) with e2fsck, I luckily found the root filesystem clean, and the filesystem on the USB HDD only showed wrong counts of free blocks and free inodes, which could be repaired by e2fsck.
The Systemd journal of the machine that was used here did not show any entry related to the blocking of the umount and the syncs. In particular there were no entries related to IO problems. The USB unplug event and the rest of my measures apart from the SysRqs were properly logged.
S.M.A.R.T. and badblocks tests that were performed on the USB HDD after that incident did not reveal any anomalies. The drive, which is about 5 months old, seems to work normally now.

Variations
I encountered the same scenario several times in the last years with different USB HDDs (none of them older than 16 months) and on different Linux machines running different kernel versions. The only deviation in my treatment was that I sometimes used the power button instead of SysRq to bring the machine down.
At each of these incidents, I checked all possibly affected filesystems (all ext2 and ext4) with e2fsck, finding each of them in one of the following error states:

1. Clean filesystem.
2. Unclean filesystem which e2fsck could repair by just replaying the journal (ext4).
3. Filesystem showing wrong counts of free blocks and free inodes which could be corrected by e2fsck.
4. Filesystem containing orphaned inodes which e2fsck connected to lost+found.
5. Filesystem containing multiply-claimed inodes (claimed by several files) which were cloned by e2fsck.

The actual question
An ext2 or ext4 filesystem that was affected by the scenario described above and was thereafter successfully repaired by e2fsck is surely in a consistent (clean) state.
But what about contents and metadata of the files within that filesystem?
Is there a unique correlation between the filesystem damages found by e2fsck and data corruption? For example:

    If no other damages than wrong counts were found in the filesystem, the actual file data are okay.

Or:

    If the filesystem contains multiply-claimed inodes, the contents of at least one file are corrupted.

Or is it the opposite: Filesystem and file data are independent in so far as one cannot conclude from damages of the one to those of the other, at least without exact knowledge of what caused the damage on the device communication level?
In the latter case, the described scenario could have corrupted the file contents even if the filesystem was later found to be clean. Right?
Are there any experience values or reasoned criteria that can be taken to assess the integrity of the files depending on the filesystem errors that were found by e2fsck?
In this context, the answer of Gilles to "How to test file system correction done by fsck" is a good read.
The distinction between filesystem and data integrity is also addressed in the section "Data Mode" in the kernel documentation of the ext4 filesystem. To the latter, I was pointed by the excellent answer of Mikel to "Do journaling filesystems guarantee against corruption after a power failure?", which is also very relevant to this topic.

Own guess and impact
Systemd offers the service unit (template) systemd-fsck@.service which by default "preens" filesystems selected by passno in /etc/fstab at boot time. According to the description of the -p option in man page e2fsck(8), preening "automatically fix[es] any filesystem problems that can be safely fixed without human intervention." Unfortunately the description does not specify whether "safely" refers to the filesystem consistency alone or whether it also includes the contents and metadata of the files.
However, since this Systemd service initiates the preening in a way that is totally transparent to the user, there are at least some experts who sufficiently trust in the results of corresponding filesystem repairs.
So, based on a vague feeling (!), I would say that for clean filesystems (error state 1 described above) and such that could be repaired by just replaying the journal (error state 2) it is safe to assume that the files themselves are not corrupted, even after such an incident.
For filesystems that were in error state 5, on the other hand, I would refer to a backup.
So, why all that fuss? Agreed: In case of a standard home or root filesystem, I would just compare its contents against the latest backup. But in this case, these backups are on the affected USB HDD themselves. If there are some doubts about their integrity, several machines need to be instantly backed up again. In addition, this renders older backups, which were accumulated during a revolving backup strategy on that drive and which otherwise could have been used as snapshots of the corresponding data, meaningless.
So it would be quite useful to have some reasoned and reliable criteria on how far we can trust the data on an ext2 or ext4 filesystem that was repaired after being affected by the described scenario.

Further findings
Trying to solve that problem on my own, I found this excellent chapter about fsck in Oracle's System Administration Guide for Sun. Albeit it describes the UFS version of fsck, the general ideas apply to e2fsck as well. But also this very detailed document focuses on the usage of fsck and the filesystem itself rather than considering the latter's payload.
In this answer to "What does fsck -p (preen) do on ext4?", Noah posted a list of filesystem errors that can be handled automatically by fsck preening an ext4 filesystem and those that can not be. It would be great to have such a list of filesystem errors that indicates which ones of them imply in addition a corruption of file data and which ones do not—of course only if such a correlation exists...
In his answer, Michael Prokopec mentioned the importance of write caches to this question. In this respect, I found in the answer of Tall Jeff to "SATA Disks that handle write caching properly?" that at least most SATA drives have write caching enabled by default. However, according to the same post, drives try to flush these caches as fast as they can. But of course there are no guarantees...
|
Can we trust the files in a filesystem that was repaired by e2fsck?
|
    May the intermediate step in which I resized the partition to a too small value have corrupted the fs?

It's unlikely in your case, especially since you were kind enough to stop that fs(c)killer, but you can't rule out the possibility entirely.
For example, corruption happens when it's a logical partition inside the extended partition of a msdos partition table. Logical partitions are linked lists, so between logical partitions there is a sector used to point to the next partition in the list. If you shrink/resize such a logical partition there is a sector (partially) overwritten somewhere in the middle of the disk.
Also some partitioner programs might enjoy zeroing things out. This is also the case with LVM, on each lvcreate it zeroes out like the first 4K of the created LV, and besides there is no guarantee that reversing a botched lvresize will give you the same extents back that were used before. If unlucky the LV might be located physically elsewhere, which is why you can only undo such accidents by vgcfgrestore something from /etc/lvm/{backup,archive}/ that was created before the lvresize.
With SSDs there's this TRIM fad that causes all sorts of programs to issue unwarranted TRIM commands to the SSD. LVM does this if issue_discards=1 in lvm.conf (always set it to 0), here's to hoping that the various partitioning programs will never adopt this behaviour.

    Is the successful run of e2fsck enough to be sure that data has not been damaged?

Most filesystems are not able to detect data corruption outside of their own metadata. Which is usually not a problem since you're not supposed to pull stunts like these. If you have a backup you could compare file timestamps / checksums with what you have in your backups.

    I haven't mounted the filesystem in the whole process (and not mounted it yet even).

You can mount it read-only like so:
mount -o loop,ro /dev/sdn1 /mnt/somewhere

and then check out the files.
The loop,ro tells mount to create a read-only loop device and mount that. Surprisingly, ro by itself does not guarantee readonlyness for some filesystems including ext4. (And for multiple-device filesystems like btrfs, the loop,ro doesn't either because it affects only one device, not all of them).
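For the timestamp/checksum comparison against a backup mentioned above, a dry run of rsync can itemize differences without changing anything; a sketch with example paths:
# -n: dry run, -a: archive mode, -c: compare contents by checksum, -i: itemize differences
rsync -naci /mnt/somewhere/ /path/to/backup/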
|
I shrank an ext4 filesystem with resize2fs:
resize2fs -p /dev/sdn1 3500G
(The FS holds 2.3 TB of data.)
Then I resized the partition with parted and left a 0.3% margin (~10 GB) when setting the new end:
(parted) resizepart 1 3681027097kb
Eventually, this turned out to be too tight:
# e2fsck -f /dev/sdn1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 917504000 blocks
The physical size of the device is 898688000 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes

Then I resized the partition again, this time with 3% margin:
(parted) resizepart 1 3681027097kb

After this, filesystem checks pass:
# e2fsck -f /dev/sdn1
e2fsck 1.42.9 (4-Feb-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdn1: 278040/114688000 files (12.4% non-contiguous), 608536948/917504000 blocks

I have run partprobe /dev/sdn after the two resizepart commands.
I haven't mounted the filesystem in the whole process (and not mounted it yet even).
May the intermediate step in which I resized the partition to a too small value have corrupted the fs?
Is the successful run of e2fsck enough to be sure that data has not been damaged?
|
Resized partition to too small value after shrinking filesystem
|
If the filesystem is really on that device, running mkfs.ext4 with the same arguments plus a -n will give you a list of superblocks that you can use as alternates. Eg:
# mkfs.ext4 -n /dev/vg1/lvol2
...
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208Then you can run e2fsck -b 32768 /dev/vg1/lvol2 or other backup superblock to see if it will fix it. PS: 32768 is a typical backup block while the other locations depend on the size of the partition.
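If you don't know which backup superblock is intact, you can simply try them in order; a sketch using the block numbers printed above:
for sb in 32768 98304 163840 229376 294912 819200 884736 1605632 2654208; do
    e2fsck -b "$sb" /dev/vg1/lvol2
    rc=$?
    # exit codes 0 and 1 mean e2fsck could open the filesystem (1 = errors were corrected)
    [ "$rc" -le 1 ] && break
done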
|
I am playing with LVM and was doing an lvreduce. Now I get this error:
[root@localhost raja]# e2fsck -f /dev/vg1/lvol2
e2fsck 1.41.12 (17-May-2010)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/vg1/lvol2

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>How can I fix this?
|
e2fsck giving some error
|
I have found a solution from LinuxTechi. When my boot attempt failed I had to do a hard shutdown and this non-clean shutdown has caused a problem with LVM.
The solution:
# lvchange -an /dev/ubuntu-vg/root
# lvchange -an /dev/ubuntu-vg/swap_1
# vgchange -an ubuntu-vg
# vgchange -ay ubuntu-vg
# lvchange -ay /dev/ubuntu-vg/swap_1
# lvchange -ay /dev/ubuntu-vg/root

now it mounts just fine and I can read all the files
|
I have a 1TB WD Blue SSD. It has two partitions,
Disk /dev/sdd: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: 2115
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: 8FF3A4A2-ACE0-4E7A-A9E4-29056B4BAD11

Device       Start        End    Sectors  Size Type
/dev/sdd1     2048    1050623    1048576  512M EFI System
/dev/sdd2  1050624 1953523711 1952473088  931G Linux LVM

/dev/sdd2 has two logical volumes, a swap and an ext4 partition.
I tried to boot this SSD from USB and something went wrong. Ever since then I can display the VG and LVs on /dev/sdd2 but I cannot mount the ext4 LV, nor access any data on either the swap or ext4 LV, with cat for example.
When I try to mount I get
can't read superblock on /dev/mapper/ubuntu--vg-root

I have run
mke2fs -n /dev/ubuntu-vg/root
to get other superblocks and tried
e2fsck -b
but with no success.
When I run
cat /dev/sdd2
I get characters, but if I run
cat /dev/ubuntu-vg/swap_1
cat /dev/ubuntu-vg/root
I get Input/output error.
Similarly, with dd or ddrescue I can recover data from /dev/sdd2, but not from /dev/ubuntu-vg/swap_1 nor /dev/ubuntu-vg/root.
I have run smartctl tests and my return values are always 0 which indicates no problems.
How can I diagnose the I/O error and/or fix these file systems?
|
diagnose I/O error on WD Blue SSD
|
Filesystem badblock lists are obsolete (ignoring flash filesystems, because you're talking about ext4); bad blocks are remapped by the drive. Look for errors - there should be a permanent log of these in SMART counters. If you see one or more errors / "bad blocks" / "bad sectors" you should consider the disk untrustworthy.
If your valued data is saved redundantly (RAID, backups), some people develop methods to re-establish trust in the drive over a testing period.[*] You aren't using RAID to start with, so I'm not able to recommend this.
Those are the facts of life. The behaviour of mkfs vs. fsck is unfortunate. A read-write test is still potentially useful to stress-test a newly-acquired drive. It should take more than one hour, because disk IO speed is around 100MB/s and you want to both write and read the whole disk. (The relative performance of modern disks also affects the viability of certain RAID modes.) I also notice that badblocks -w runs several passes with different patterns, which would explain why it takes so long. Since badblock lists are obsolete, you can run badblocks directly and just look for any error.
However given how long this would take & that you could not use the disk for this period, you might prefer to use the longest available SMART test, or simply dd if=/dev/sdX bs=10M of=/dev/null and see if you get any read errors.
SMART features are available in GNOME Disks. (It also has a benchmark feature.) The error counters are measured in sectors; you can just look at all the counters that say "sectors" and check that they're all zero. It sounds like you might have some under "reallocated sectors".

[*] Writing new data to a bad sector will clear the error. This works by writing the logical sector to a different physical sector in a "spare area", and the drive will make sure to remap future reads of the logical sector.
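A minimal sketch of that quick assessment (the device name is a placeholder):
# error-related sector counters; anything non-zero deserves suspicion
smartctl -A /dev/sdX | grep -Ei 'realloc|pending|uncorrect'
# read the whole disk once; any unreadable sector will surface as an I/O error
dd if=/dev/sdX of=/dev/null bs=10M status=progress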
|
Gentlemen,
I need some fatherly advice about e2fsck: I have a disk that has been getting cranky, and "e2fsck -ccv" was indeed showing bad blocks. However, I repartitioned the disk, and now the same command reports that the disk is in perfect health! What happened to my bad blocks? Of course the partitions are now all empty, but surely a bad block is still a bad block? Has the disk's internal housekeeping somehow flagged those blocks off to the point that even e2fsck doesn't get a look at them? Or does e2fsck not work on empty partitions? Or has a repair somehow been made? How can I find out?
And: what are the practicalities of using '-c' vs. '-cc', that is, when and where do I want a read-write test vs. a read-only test?
And: after repartitioning, I tried this: "mkfs.ext4 -vcc ..." in the hopes of checking the disk at the same time as creating the FS, but it took hours and hours. In contrast: "e2fsck -ccvy ..." after the FS was created was much faster, less than an hour for a 500GB disk with 12 partitions. Why? One needs to know the facts of life before one starts fscking.
|
e2fsck: bad blocks disappearing!
|
Your ext4 filesystem is (much) larger than your block device (54TB filesystem on a 12TB block device). e2fsck and resize2fs can be quite uncooperative in this situation. Filesystems hate it when huge chunks are missing.
For a quick data recovery, you can try your luck with debugfs in catastrophic mode:
# debugfs -c /dev/md127
debugfs 1.47.0 (5-Feb-2023)
debugfs: ls -l
| (this should list some files)
| (damaged files usually show with 0 bytes and 1-Jan-1970 timestamp)
debugfs: rdump / /some/recovery/dir/

This should copy out files (use an unrelated HDD for recovery storage) but some files might result in errors such as "Attempt to read block from filesystem resulted in short read" or similar.

In order to actually fix the filesystem, it's usually best to restore the original device size, and then go from there. Sometimes, shrinking a block device is reversible. But in your case, it's not reversible.
You could grow the RAID back to 11 devices but even with the correct drive order, it would not give back any of the missing data and even overwrite any that might have been left on the leftover disks. mdadm shifts offsets in every grow operation, so the layout would be all wrong.
So anything beyond the cutoff point is lost.
Furthermore it would take ages to reshape all this data (again) and the result won't be any better than just tacking on some virtual drive capacity (all zeroes with loop devices and dm-linear, or LVM thin volumes, or similar).

At best you could reverse it partially, by re-creating (using mdadm --create on copy-on-write overlays) your original 11 drive RAID 6 with 4 drives missing (as drives fully zeroed out).
But at most this would give you disconnected chunks of data with many gaps in between them, since this is beyond what RAID 6 can recover from. It's even more complicated since you no longer have the metadata (need to know the original offset, which was already changed on your current raid, as well as the drive order).
If you could manage to do it, you could stitch your current RAID (0-12TB) and restored raid (12TB-54TB) together with dm-linear (all on top of copy-on-write overlays) and see what can be found.
But this process is complicated and probability of success is low. For any data that was stored outside those 12TB that were kept by your shrink operation, some smaller than chunk/stripe files could have survived, while larger files would all be damaged.
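For the copy-on-write overlays mentioned above, one common recipe uses a sparse file plus a device-mapper snapshot target, so the experiments never write to the original drive; a sketch with example names and sizes:
# sparse file that absorbs all writes made during the experiments
truncate -s 50G /mnt/recovery/sdX-overlay.img
loopdev=$(losetup -f --show /mnt/recovery/sdX-overlay.img)
# reads come from /dev/sdX, writes go to the overlay; "N 8" = non-persistent snapshot, 8-sector chunks
dmsetup create sdX-cow --table "0 $(blockdev --getsz /dev/sdX) snapshot /dev/sdX $loopdev N 8"
# run the mdadm --create / fsck experiments against /dev/mapper/sdX-cow instead of /dev/sdX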
|
On my server, I had an SSD as the boot drive with 11 6TB HDDs in a RAID6 setup as additional storage. However, after running into some issues with the motherboard, I switched the motherboard to one with only 4 SATA ports, so I reduced the size of the RAID6 setup from 11 to 4 drives. With <6TB of actual data being stored on the array, the data should be able to fit in the reduced storage space.
I believe I used the instructions on the following pages to shrink the array. Since it was quite a while ago, I don't actually remember if these were the pages or instructions used, nor do I remember many of the fine details:

https://superuser.com/questions/834100/shrink-raid-by-removing-a-disk
https://delightlylinux.wordpress.com/2020/12/22/how-to-remove-a-drive-from-a-raid-array/

On the 7 unused drives, I believe I zeroed the superblocks: sudo mdadm --zero-superblock.
With the 4 drives I want to use, I am unable to mount the array. I do not believe I used any partitions on the array.
sudo mount /dev/md127 /mnt/md127
mount: /mnt/md127: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.

From /var/log/syslog:
kernel: [ 1894.040670] EXT4-fs (md127): bad geometry: block count 13185878400 exceeds size of device (2930195200 blocks)

Since 13185878400 / 2930195200 = 4.5 = 9 / 2, I assume there is a problem with shrinking the file system or something similar. Since the RAID6 has 2 spare drives, going from 11 (9 active, 2 spare) to 11 (2 active, 9 spare)? to 4 (2 active, 2 spare) would explain why the block count is much higher than the size of the device by an exact multiple of 4.5.
Other information from the devices:
sudo mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Wed Nov 24 22:28:38 2021
Raid Level : raid6
Array Size : 11720780800 (10.92 TiB 12.00 TB)
Used Dev Size : 5860390400 (5.46 TiB 6.00 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Apr 9 04:57:29 2023
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : bitmap

Name : nao0:0 (local to host nao0)
UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
Events : 199564

Number Major Minor RaidDevice State
9 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
2 8 32 2 active sync /dev/sdc
10 8 0 3 active sync /dev/sda

sudo mdadm --examine /dev/sd[a-d]
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
Name : nao0:0 (local to host nao0)
Creation Time : Wed Nov 24 22:28:38 2021
Raid Level : raid6
Raid Devices : 4

Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 07f76b7f:f4818c5a:3f0d761d:b2d0ba79

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Apr 9 04:57:29 2023
Bad Block Log : 512 entries available at offset 32 sectors
Checksum : 914741c4 - correct
Events : 199564

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
Name : nao0:0 (local to host nao0)
Creation Time : Wed Nov 24 22:28:38 2021
Raid Level : raid6
Raid Devices : 4

Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 3b51a0c9:b9f4f844:68d267ed:03892b0d

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Apr 9 04:57:29 2023
Bad Block Log : 512 entries available at offset 32 sectors
Checksum : 294a8c37 - correct
Events : 199564

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
Name : nao0:0 (local to host nao0)
Creation Time : Wed Nov 24 22:28:38 2021
Raid Level : raid6
Raid Devices : 4

Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 0fcca5ee:605740dc:1726070d:0cef3b39

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Apr 9 04:57:29 2023
Bad Block Log : 512 entries available at offset 32 sectors
Checksum : 31472363 - correct
Events : 199564

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
Name : nao0:0 (local to host nao0)
Creation Time : Wed Nov 24 22:28:38 2021
Raid Level : raid6
Raid Devices : 4

Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : e1912abb:ba98a568:8effaa66:c1440bd8

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Apr 9 04:57:29 2023
Bad Block Log : 512 entries available at offset 32 sectors
Checksum : 82a459ba - correct
Events : 199564

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

After looking online, I tried to use fsck, e2fsck, and resize2fs to try to resolve the issue. However, I did not make any progress by trying this, and I may have made the problem worse by accidentally changing the data on the disk.
With resize2fs,
sudo resize2fs /dev/md127
resize2fs 1.46.5 (30-Dec-2021)
Please run 'e2fsck -f /dev/md127' first.

Since I could not use resize2fs to actually do anything, I used e2fsck, which ran into many errors. Since there were thousands of errors, I quit before the program was able to finish.
sudo e2fsck -f /dev/md127
e2fsck 1.46.5 (30-Dec-2021)
The filesystem size (according to the superblock) is 13185878400 blocks
The physical size of the device is 2930195200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
Pass 1: Checking inodes, blocks, and sizes
Error reading block 3401580576 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580577 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580578 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580579 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580580 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580581 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580582 (Invalid argument) while getting next inode from scan. Ignore error<y>? yes
Force rewrite<y>?

My hypothesis is that there is probably some inconsistency in the reported size of the drives. I do not believe I had any partitions on the RAID nor any LVM volumes.
sudo fdisk -l
...Disk /dev/sda: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdb: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md127: 10.92 TiB, 12002079539200 bytes, 23441561600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

The data on the 4 drives currently in use may or may not have been altered by fsck / e2fsck, but the data should also be on the other 7 unused drives with the zeroed superblocks. It is not important to me which drives I recover the data from, so working solutions to recover from any grouping of the drives would be highly appreciated!
If any additional information is needed, I would be more than happy to provide it.
|
RAID6 unable to mount EXT4-fs: bad geometry: block count exceeds size of device
|
Quoting this SuperUser post:

    fsck is just the original name. When they came out with new file systems they would need a specific tool for each one, efsck for ext, e2fsck for ext2, dosfsck, fsckvfat. So they made fsck the front end that just calls whichever is the appropriate tool.

fsck.xfs is probably what you are after.

XFS-related update:
xfs_check and xfs_repair should help you evaluate the damage and repair it if possible.
Please see the manual pages for specific usage information.
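A sketch of that approach with a read-only pass first (device and mount point as in the question):
umount /data
# -n = "no modify" mode: report what would be fixed without changing anything
xfs_repair -n /dev/sdb
# if the report looks sane, run the actual repair, then remount
xfs_repair /dev/sdb
mount /data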
|
We need to fix filesystem corruption on sdb on a Red Hat 6 system.
sdb is an XFS filesystem:
df -h | egrep "Filesystem|/data"
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        8.2T  7.0T  1.0T  86% /data

Because the data on sdb is huge, we want to know what is the best option, 1 or 2, or another idea to do the filesystem fixing?

option 1
umount /data
fsck -y /dev/sdb
mount /data

option 2
umount /data
e2fsck -y /dev/sdb
mount /data

option 3
umount /data
xfs_repair /dev/sdb
mount /data

Second: what are the risks when doing fsck on huge data?
|
what is the best approach to fix file-system corruption on huge data
|
Unless you have a fresh backup to compare with, there's nothing you can do.
In rare cases e2fsck truncates files to zero - you might look for them.
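A sketch of such a search, assuming the repaired filesystem is mounted at /home (GNU find):
# list empty regular files, newest first, for manual review
find /home -type f -size 0 -printf '%T@ %p\n' | sort -rn | head -n 20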
|
I had filesystem errors on /home, an ext4 partition. I was able to reboot into recovery mode and run e2fsck, which found and fixed a long list of errors. (Later I found that periodic checking was disabled.) After that, I was able to reboot to the desktop, and everything appears to be fine.
Now I'm wondering: How do I assess the damage (if any)? I checked in /home/lost+found and it is empty; that's encouraging. But is there anything else I could or should do to find out whether data was lost?
In case it matters, I'm on Debian stable 10.7.
|
e2fsck aftercare
|
I don't know what you've been doing with this disk, but those are crazy numbers! Looking at that output, that SSD has:

been on for 1470 hours (61 days)
performed 4312400063 (2.0GiB) block erases
done 163210068006 (76TiB) media writes

That's a constant 16MiB a second of writes over 61 days.
I imagine you've got internal NAND failure. You might not be able to get your data back.
I suggest your best solution here going forwards is to use a raid mirror of some form to buffer the errors between multiple disks.
Ideally, it would be two disks of different ages and/or different production batches to attempt to spread out the distribution of errors and failures between multiple disks.
Just to clarify, I consider that an abnormally high amount of writes over a very short period. You're going to need to factor that into the storage setup you go with.
|
(Just noticed I was using SDD for SSD. Corrected.)
I need help interpreting this situation. /dev/sda is a data disk, backed up and with reproducible data, so this is not system critical, but I'd like to avoid the effort of restoring/reconstructing the data, some of which would be quite time-consuming.
Is recovery / repair possible?
If so, how? If I wipe the disk for re-use, what is its reliability?
Summary (detailed reports below):

will not mount: bad superblock
badblocks finds no bad blocks
smartctl reports no errors
fsck cannot set superblock flags
fdisk shows clean partition
dmesg shows write errors
parted shows 792 GB free of 1 TB drive

Mounting the SSD fails as so:
[stephen@meer ~]$ sudo mount /dev/sda1 /mnt/sda
mount: /mnt/sda: can't read superblock on /dev/sda1.
dmesg(1) may have more information after failed mount system call.
[stephen@meer ~]$
but badblocks finds no bad blocks
[root@meer stephen]# badblocks -v /dev/sda1
Checking blocks 0 to 976760831
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)

But smartctl finds no errors:
[root@meer stephen]# smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.17.9-arch1-1] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: WD Blue / Red / Green SSDs
Device Model: WDC WDS100T2B0A-00SM50
Serial Number: 213159800516
LU WWN Device Id: 5 001b44 8bc4fdc6e
Firmware Version: 415020WD
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available, deterministic, zeroed
Device is: In smartctl database 7.3/5319
ATA Version is: ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 1.5 Gb/s)
Local Time is: Tue May 24 16:06:23 2022 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 0) seconds.
Offline data collection
capabilities: (0x11) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 10) minutes.
SMART Attributes Data Structure revision number: 4
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0032 100 100 --- Old_age Always - 124
9 Power_On_Hours 0x0032 100 100 --- Old_age Always - 1470
12 Power_Cycle_Count 0x0032 100 100 --- Old_age Always - 134
165 Block_Erase_Count 0x0032 100 100 --- Old_age Always - 4312400063
166 Minimum_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 1
167 Max_Bad_Blocks_per_Die 0x0032 100 100 --- Old_age Always - 65
168 Maximum_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 14
169 Total_Bad_Blocks 0x0032 100 100 --- Old_age Always - 630
170 Grown_Bad_Blocks 0x0032 100 100 --- Old_age Always - 124
171 Program_Fail_Count 0x0032 100 100 --- Old_age Always - 128
172 Erase_Fail_Count 0x0032 100 100 --- Old_age Always - 0
173 Average_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 2
174 Unexpected_Power_Loss 0x0032 100 100 --- Old_age Always - 90
184 End-to-End_Error 0x0032 100 100 --- Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 --- Old_age Always - 0
188 Command_Timeout 0x0032 100 100 --- Old_age Always - 64
194 Temperature_Celsius 0x0022 070 053 --- Old_age Always - 30 (Min/Max 18/53)
199 UDMA_CRC_Error_Count 0x0032 100 100 --- Old_age Always - 0
230 Media_Wearout_Indicator 0x0032 001 001 --- Old_age Always - 0x002600140026
232 Available_Reservd_Space 0x0033 097 097 004 Pre-fail Always - 97
233 NAND_GB_Written_TLC 0x0032 100 100 --- Old_age Always - 2703
234 NAND_GB_Written_SLC 0x0032 100 100 --- Old_age Always - 2842
241 Host_Writes_GiB 0x0030 253 253 --- Old_age Offline - 466
242 Host_Reads_GiB 0x0030 253 253 --- Old_age Offline - 622
244 Temp_Throttle_Status 0x0032 000 100 --- Old_age Always - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 1470 -
Selective Self-tests/Logging not supported
and fsck fails as so:
[root@meer ~]# e2fsck -cfpv /dev/sda1
/dev/sda1: recovering journal
e2fsck: Input/output error while recovering journal of /dev/sda1
e2fsck: unable to set superblock flags on /dev/sda1
/dev/sda1: ********** WARNING: Filesystem still has errors **********
May 24 15:38:29 meer kernel: I/O error, dev sda, sector 121899008 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 CDB: Write(10) 2a 00 07 44 08 00 00 00 08 00
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 Add. Sense: Unaligned write command
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 Sense Key : Illegal Request [current]
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
May 24 15:38:29 meer kernel: ata3.00: configured for UDMA/33
May 24 15:38:29 meer kernel: ata3.00: error: { ABRT }
May 24 15:38:29 meer kernel: ata3.00: status: { DRDY ERR }
May 24 15:38:29 meer kernel: ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 31 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
May 24 15:38:29 meer kernel: ata3.00: failed command: WRITE DMA
May 24 15:38:29 meer kernel: ata3.00: irq_stat 0x40000001
May 24 15:38:29 meer kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
May 24 15:38:29 meer kernel: ata3: EH complete
May 24 15:38:29 meer kernel: ata3.00: configured for UDMA/33
May 24 15:38:29 meer kernel: ata3.00: error: { ABRT }
May 24 15:38:29 meer kernel: ata3.00: status: { DRDY ERR }
May 24 15:38:29 meer kernel: ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 6 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
May 24 15:38:29 meer kernel: ata3.00: failed command: WRITE DMA
May 24 15:38:29 meer kernel: ata3.00: irq_stat 0x40000001
May 24 15:38:29 meer kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Partitioning as seen by fdisk.
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T2B0A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3F701164-2CF8-6D48-A94E-478634C140BE
Device Start End Sectors Size Type
/dev/sda1   2048 1953523711 1953521664 931.5G Linux filesystem

From dmesg:
[ 5292.895300] ata3.00: configured for UDMA/33
[ 5292.895315] ata3: EH complete
[ 5293.021851] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[ 5293.021859] ata3.00: irq_stat 0x40000001
[ 5293.021864] ata3.00: failed command: WRITE DMA
[ 5293.021866] ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 18 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
[ 5293.021874] ata3.00: status: { DRDY ERR }
[ 5293.021877] ata3.00: error: { ABRT }

parted:
[root@meer stephen]# parted /dev/sda
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Model: ATA WDC WDS100T2B0A (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 1000GB 1000GB ext4
1000GB 1000GB 729kB Free Space
|
ssd won't mount: bad superblock but no bad blocks: write errors
|
It is not enough that an LV exists on the PV; it must also be active to be used, i.e. the device mapper device (/dev/mapper/fedora-root) must be created:
lvchange -ay fedora/rootor
vgchange -ay fedora
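To verify the activation worked, you can check the LV attributes and the device nodes; for example:
# the fifth character of lv_attr should be 'a' (active) after activation
lvs -o lv_name,vg_name,lv_attr fedora
ls -l /dev/fedora/root /dev/mapper/fedora-root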
|
I extended an lvm from the terminal in system rescue live CD using the commands:
# pvcreate /dev/sda7
# vgextend fedora /dev/sda7
# lvextend -l +100%FREE /dev/fedora/root

The above worked, but when I try to check the LV file system or resize it I get the following errors:
# e2fsck -f /dev/fedora/root
e2fsck: No such file or directory while trying to open /dev/fedora/root
Possibly non-existent device?

# resize2fs /dev/fedora/root
open: No such file or directory while opening /dev/fedora/root

Do I have to activate or mount the volume before I run those commands? I didn't change the name of the volume group.

UPDATE
Resolved by simply running the command provided by Hauke Laging before resize2fs or e2fsck.
|
LVM not able to be resized or checked with resize2fs and e2fsck
|
    Are there any secure UNIX tools to recover data, that was removed with rm, from a USB flash drive?

Yes and, by the way, recovery of photos is one of the most common scenarios.
The conditions you described are actually optimal because:

you directly deleted the files
the file system is not damaged
you did not use the drive anymore

These conditions lead to two available options.
If you care about the file names (or have fragmented files)
When you write a lot of pictures sequentially on a drive, the risk of fragmentation is actually very low, but still. To recover files and file names you need a tool which is file-system aware.
Enter TestDisk:
sudo testdisk /dev/sdb

It will show you a step-by-step procedure through a TUI (textual user interface). The essential steps are:

scanning the drive
selecting the partition
pressing P to show the files
copying the deleted (red) files with C

If you actually just want the photos back
For pictures, you might as well not care about the names. Moreover, the file system might be damaged (not your case) and TestDisk would not help.
PhotoRec (from the same developer) comes to the rescue:
sudo photorec /dev/sdb

Here you just need to specify the output directory. You can also disable detection for some file types which you don't care about.
|
This event actually took place a few years ago, but I still have the unchanged USB flash drive in my possession. I may be out of luck, but I thought I would ask all you smart people here for your suggestions.
Short Story:
A few years back, my wife wanted to store all of her photos from her iPhone onto a USB flash drive because she was running out of storage. We picked up a brand new USB flash drive from the store, so I assume it had a FAT32 file system. We plugged the flash drive into a Mac OS X machine and were able to back up all of her photos. We realized after the backup had completed that almost every photo had a duplicate file: photo.jpg had a duplicate file called photo\ 1.jpg. All of the duplicate files ended with the \ 1.jpg suffix.
Just having started UNIX, I knew that I could use the shell's simple regex to remove all of the duplicate files, but I ended up not putting my command in quotes... And I ended up executing the following: rm * 1.jpg. As you can see, I told the system to remove every single file and then remove 1.jpg. Instead of telling the system to remove every file that ended in 1.jpg. After this occurred, with my furious wife (at the time girlfriend) next to me, I unplugged the flash drive and stored it in a drawer.
Question:
Are there any secure UNIX tools to recover data, that was removed with rm, from a USB flash drive? Or am I out of luck? As I stated above, I have not touched the flash drive since the event occurred.
If this question is far too broad, feel free to move it to meta or wherever it best fits.
|
How can I safely recover deleted data from a USB flash drive?
|
mount uses libblkid to guess the filesystem on the device you're trying to mount, and you can see that it works from the error message it gives:

mount: unknown filesystem type 'vfat'

But the weird thing here is that if the required filesystem is in a module that isn't yet loaded, mount tries to auto-load the module using modprobe.
So my only guess so far is that something is wrong with your kernel modules:
/lib/modules/3.2.0-4-686-pae/kernel/fs/fat/vfat.ko
/lib/modules/3.2.0-4-686-pae/kernel/fs/fat/fat.ko

edit
Or for some reason mount fails to execute modprobe.
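A few diagnostic steps along those lines (paths match the kernel version from the question):
# do the module files modprobe is looking for actually exist?
ls /lib/modules/$(uname -r)/kernel/fs/fat/
# rebuild the module dependency list, then retry verbosely
depmod -a
modprobe -v vfat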
|
I want to mount my USB drive (a Kindle, vfat32). When I do

mount -t auto /dev/sdf1 /mnt/usb

I get

mount: unknown filesystem type 'vfat'

I checked if the drive is recognized with sudo fdisk -l, and the recognized filesystem is W95 FAT32.
my kernel is 3.2.0-4-686-pae.
I checked the recognized filesystem with cat /proc/filesystems and vfat is not there.
dosfstools is installed.

What should I do? I am using the kernel released in the minimal netbook installation of Debian.
If I run modprobe vfat as root I get the following:
libkmod: ERROR ../libkmod/libkmod-module.c:174 kmod_module_parse_depline: ctx=0xb8556008 path=/lib/modules/3.2.0-4-686-pae/kernel/fs/fat/fat.ko error=No such file or directory
libkmod: ERROR ../libkmod/libkmod-module.c:174 kmod_module_parse_depline: ctx=0xb8556008 path=/lib/modules/3.2.0-4-686-pae/kernel/fs/fat/fat.ko error=No such file or directory
ERROR: could not insert 'vfat': Unknown symbol in module, or unknown parameter (see dmesg)

When I cat /proc/modules I get:
dm_mod 57362 0 - Live 0xfcb45000
ip6table_filter 12492 1 - Live 0xf847a000
ip6_tables 17185 1 ip6table_filter, Live 0xf8564000
snd_hda_codec_realtek 142274 1 - Live 0xf86a3000
ppdev 12651 0 - Live 0xf8408000
binfmt_misc 12813 1 - Live 0xf8454000
lp 12797 0 - Live 0xf846d000
nfsd 173890 0 - Live 0xf8711000
nfs 265921 0 - Live 0xf86cf000
nfs_acl 12463 2 nfsd,nfs, Live 0xf8437000
auth_rpcgss 32143 2 nfsd,nfs, Live 0xf8501000
fscache 31978 1 nfs, Live 0xf8494000
lockd 57277 2 nfsd,nfs, Live 0xf850a000
sunrpc 143904 6 nfsd,nfs,nfs_acl,auth_rpcgss,lockd, Live 0xf852e000
iptable_filter 12488 1 - Live 0xf8403000
ip_tables 17079 1 iptable_filter, Live 0xf8421000
x_tables 18158 4 ip6table_filter,ip6_tables,iptable_filter,ip_tables, Live 0xf8414000
usbhid 31554 0 - Live 0xf84cf000
hid 60152 1 usbhid, Live 0xf84e5000
nouveau 526808 3 - Live 0xf856d000
mxm_wmi 12467 1 nouveau, Live 0xf8385000
video 17459 1 nouveau, Live 0xf841b000
ttm 47786 1 nouveau, Live 0xf842a000
drm_kms_helper 22738 1 nouveau, Live 0xf840d000
drm 146387 5 nouveau,ttm,drm_kms_helper, Live 0xf84aa000
snd_hda_intel 21786 6 - Live 0xf8473000
snd_hda_codec 63477 2 snd_hda_codec_realtek,snd_hda_intel, Live 0xf8443000
snd_hwdep 12943 1 snd_hda_codec, Live 0xf835b000
snd_pcm 53461 3 snd_hda_intel,snd_hda_codec, Live 0xf8459000
snd_page_alloc 12867 2 snd_hda_intel,snd_pcm, Live 0xf843e000
snd_seq 39512 0 - Live 0xf8394000
snd_seq_device 13016 1 snd_seq, Live 0xf82e3000
snd_timer 22356 2 snd_pcm,snd_seq, Live 0xf8363000
power_supply 13283 1 nouveau, Live 0xf8356000
snd 42722 19 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_seq,snd_seq_device,snd_timer, Live 0xf834a000
i2c_nforce2 12520 0 - Live 0xf8345000
i2c_algo_bit 12713 1 nouveau, Live 0xf83fe000
i2c_core 19116 5 nouveau,drm_kms_helper,drm,i2c_nforce2,i2c_algo_bit, Live 0xf8378000
soundcore 12921 1 snd, Live 0xf82d0000
evdev 17225 8 - Live 0xf82f7000
acpi_cpufreq 12807 0 - Live 0xf82e8000
mperf 12421 1 acpi_cpufreq, Live 0xf82de000
processor 27565 1 acpi_cpufreq, Live 0xf84a2000
thermal_sys 17752 2 video,processor, Live 0xf8372000
parport_pc 22036 0 - Live 0xf83a9000
parport 31254 3 ppdev,lp,parport_pc, Live 0xf83a0000
coretemp 12770 0 - Live 0xf838f000
container 12525 0 - Live 0xf838a000
button 12817 1 nouveau, Live 0xf8380000
wmi 13051 2 nouveau,mxm_wmi, Live 0xf836d000
pcspkr 12515 0 - Live 0xf82fe000
loop 17810 0 - Live 0xf82d8000
autofs4 22784 2 - Live 0xf82f0000
ext4 306996 3 - Live 0xf83b2000
crc16 12327 1 ext4, Live 0xf82be000
jbd2 52330 1 ext4, Live 0xf82b0000
mbcache 12938 1 ext4, Live 0xf8239000
usb_storage 35142 0 - Live 0xf82c6000
sg 21476 0 - Live 0xf8264000
sr_mod 17468 0 - Live 0xf82aa000
sd_mod 35425 5 - Live 0xf8295000
cdrom 34813 1 sr_mod, Live 0xf82a0000
crc_t10dif 12332 1 sd_mod, Live 0xf8234000
ata_generic 12439 0 - Live 0xf822f000
ahci 24917 4 - Live 0xf828d000
libahci 18308 1 ahci, Live 0xf8242000
ohci_hcd 22059 0 - Live 0xf826c000
r8169 41802 0 - Live 0xf8281000
mii 12595 1 r8169, Live 0xf821f000
libata 125014 3 ata_generic,ahci,libahci, Live 0xf8325000
ehci_hcd 35509 0 - Live 0xf8273000
scsi_mod 135037 5 usb_storage,sg,sr_mod,sd_mod,libata, Live 0xf8303000
usbcore 104555 5 usbhid,usb_storage,ohci_hcd,ehci_hcd, Live 0xf8249000
usb_common 12338 1 usbcore, Live 0xf8215000
|
vfat not recognized in debian
|
These characters ? and : are not valid on a FAT32 filesystem, so if that is where you need to copy your files you will need to rename them.
From the command line you can use tools such as rename (sometimes known as prename) to replace these characters with _ or even to remove them:

rename 's/[?<>\\:*|\"]/_/g' *    # Change invalid characters to _
rename 's/[?<>\\:*|\"]//g' *     # Remove invalid characters

I am not familiar with thunar so I do not know if there is a way to perform this substitution/replacement operation directly.
I have just found Linux copy to fat32 filesystem: invalid argument which suggests adding this into the pax command (another tool to copy files), so that you can keep your full filenames on your local disk but convert the filenames during the copy to your USB device:

pax -rw -s '/[?<>\\:*|\"]/_/gp' *.mp3 /media/usb_device

If the complete filenames are really important to you, I would suggest that you reformat the USB stick to use a Linux-native filesystem such as ext4. (There are Windows drivers available for the extN family of filesystems if that's necessary.)
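If the files to fix all sit in one directory, a plain bash loop can do the same substitution without extra tools; a minimal sketch (the replacement character _ is a choice, and it is wise to test on copies first):

for f in *; do
    new=${f//[\"\\<>?:*|]/_}            # replace each FAT-invalid character with _
    [ "$f" != "$new" ] && mv -- "$f" "$new"
done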
|
I just tried to move a directory containing music files with thunar 4.10
It complained that a file name was invalid.
It turned out that one file name (song title) contained a question mark.
I suspected that this was a problem, removed the question mark and could indeed copy the file.
Adding the "?" back in was not possible. I also tried it with rename on the command line but that didn't work either. (not sure what thunar uses under the hood, so this test might be moot)
Now if a question mark makes the file name invalid, how could this file be created in the first place? I created the files with SoundJuicer from a newly obtained CD. I was able to play the file (with "?" in the name) in various players.
What's going on here? Can I have the "?" in the name or not? Why is the file manager unable to handle such files while other applications seem to be ok with it?
Update:
Next song has a ":" in it. Same problem as with the "?".

A comment pointed out: "These are not invalid characters to Unix; typically only the NUL character and the / character are invalid in filenames (the / being the directory separator)." This was what my intuition told me as well, because I never had any issues with file names in Linux and could throw pretty much everything sensible at it and it worked ok. This is what motivated the question here. I never encountered invalid file names before.

Another comment asked: "Were you trying to move the files to a USB stick? If so, is that stick formatted as FAT32 or as a native Linux filesystem?" The target is indeed a USB stick that I bought today. I opened gparted and it is formatted as FAT32.
I'm not exactly sure but that's a Windows thing, right? And Windows has a bunch of characters that it doesn't support, apparently including ? and :. Am I right?
|
How to deal with characters like ":" or "?" that make invalid filenames?
|
What I do is store tarballs on the USB drive (formatted as VFAT). I'm wary of reformatting USB drives: they are built/optimized for VFAT to level wear, and I'm afraid the drive will die much sooner with other filesystems. Besides, formatting another way will make it useless for ThatOtherSystem...
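A minimal sketch of that workflow (the paths are placeholders): symlinks, ownership and permissions survive inside the tar archive even though VFAT itself cannot store them:

tar -cf /media/usb/backup.tar myproject/    # pack; metadata is preserved in the archive
tar -xf /media/usb/backup.tar -C /target    # unpack on the other machine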
|
I often need to move files between two Linux computers via USB. I use gparted to format the USB drives. When I formatted the USB drive to use FAT32, symlinks could not be copied to it, so I had to recreate the symlinks on the other computer after copying the files. When I formatted the USB drive to use EXT3, a lost+found directory was created on it, and I was prevented from copying files to the drive unless I became root.

Is there a preferred file system to use when transferring files between two Linux computers?
How can I copy files without running into the problems presented by the FAT32 and EXT3 filesystems?
|
What filesystem should be used when transferring files between Linux systems?
|
OK, I tried it.
First two problems from the beginning: NO support for hard and symbolic links. It means that I had to copy each file, duplicating it and wasting space.
Second problem: no special file support at all. This means things like /dev/console are unavailable at boot time to init before even /dev is remounted as tmpfs.
Third problem: you will lose permission enforcement.
But apart from this, there were no issues. My own system booted successfully on a vfat volume.

Normally I would not do that either.
|
Out of curiosity, is this possible nowadays? I remember some old Slackware versions did support FAT root partition but I am not sure if this is possible with modern kernels and if there are any distros offering such an option. I am interested in pure DOS FAT (without long names support), VFAT 16/32 and exFAT.
PS: Don't tell me I shouldn't, I am not going to use this in production unless necessary :-)
|
Can I install GNU/Linux on a FAT drive?
|
A FAT32 filesystem has a minimum size: it should contain at least 65525 clusters*. The cluster size is a multiple of the sector size. In your case the sector size is 4096 and mkfs.vfat has used a default multiple of 8 for the number of sectors per cluster. Use -s 1 to specify one sector per cluster:
mkfs.fat -v -F 32 -S 4096 -s 1 /dev/sde1

This results in a cluster size of 4096, which should be small enough to fit more than the minimum of 65525 clusters in your 264 MiB partition.

* From the Windows documentation on UEFI/GPT-based hard drive partitions:

For Advanced Format 4K Native drives (4-KB-per-sector) drives, the minimum size is 260 MB, due to a limitation of the FAT32 file format. The minimum partition size of FAT32 drives is calculated as sector size (4KB) x 65527 = 256 MB.
Advanced Format 512e drives are not affected by this limitation, because their emulated sector size is 512 bytes. 512 bytes x 65527 = 32 MB
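As a rough sanity check of the suggestion above (deliberately ignoring the reserved sectors and the FATs themselves), the numbers work out:

# 264 MiB partition, 4096-byte sectors, 1 sector per cluster:
echo $(( 264 * 1024 * 1024 / 4096 ))    # 67584 clusters, comfortably above 65525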
|
When formatting my EFI partition I get this error:
Not enough clusters for a 32 bit FAT!

My disk uses a 4096-byte sector size.

#mkfs.fat -v -F 32 -S 4096 /dev/sde1

mkfs.fat 4.1 (2017-01-24)
WARNING: Not enough clusters for a 32 bit FAT!
/dev/sde1 has 255 heads and 63 sectors per track,
hidden sectors 0x4000;
logical sector size is 4096,
using 0xf8 media descriptor, with 67584 sectors;
drive number 0x80;
filesystem has 2 32-bit FATs and 8 sectors per cluster.
FAT size is 16 sectors, and provides 8440 clusters.
There are 32 reserved sectors.
Volume ID is 05deb9f7, no volume label.

My disk partition:

gdisk -l /dev/sde
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sde: 244190646 sectors, 931.5 GiB
Logical sector size: 4096 bytes
Disk identifier (GUID): D0BA102E-86C5-4379-B314-9534F873C377
Partition table holds up to 128 entries
First usable sector is 6, last usable sector is 244190640
Partitions will be aligned on 256-sector boundaries
Total free space is 244123051 sectors (931.3 GiB)

Number Start (sector) End (sector) Size Code Name
1 2048 69631 264.0 MiB 0700 EFI_FAT32

fsck.fat gives the following:
#fsck.fat -v /dev/sde1
fsck.fat 4.1 (2017-01-24)
Checking we can access the last sector of the filesystem
Warning: Filesystem is FAT32 according to fat_length and fat32_length fields,
but has only 8440 clusters, less than the required minimum of 65525.
This may lead to problems on some systems.
Boot sector contents:
System ID "mkfs.fat"
Media byte 0xf8 (hard disk)
4096 bytes per logical sector
32768 bytes per cluster
32 reserved sectors
First FAT starts at byte 131072 (sector 32)
2 FATs, 32 bit entries
65536 bytes per FAT (= 16 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 262144 (sector 64)
8440 data clusters (276561920 bytes)
63 sectors/track, 255 heads
16384 hidden sectors
67584 sectors total
Checking for unused clusters.
Checking free cluster summary.
/dev/sde1: 1 files, 1/8440 clusters
|
Cannot format my EFI partition (FAT32)
|
My question is, if I purchase a standard Windows external hard drive with a USB connection, will I be able to copy the files from the Linux cluster's files server to the external drive?

Yes, there is no technical problem to this. However: the hardware is not a "standard Windows hard drive with USB connection". Please scrap the Windows part from that sentence. An external USB HDD will work equally well with or without Windows as the OS.

I am assuming that the Linux cluster has a USB port, but this is something that I will need to verify.

For a large amount of data (and 1TB is a lot) connecting the drive locally is probably a lot faster. However with USB2 you are still limited to 35-ish MB/sec. That means that copying 1TB over USB2 takes about 8-9 hours.*
You can speed that up a lot if the drive is locally mounted (via plain SATA), if the cluster and your drive have eSATA, if both have USB3 or if both have firewire.
Alternatively you can connect the drive to your own desktop and copy the files. In this case the network might be the speed limit. You also risk an angry administrator asking why you are making the network so slow for other users. :-)

It looks like many standard Windows external hard drives are formatted in either NTFS or FAT32, whereas our Ubuntu Linux file server uses NFS.

Uhm, no.
The hard disk does not care which filesystem is used. It may come pre-formatted with NTFS (which is a sensible choice for most people who buy them), but nothing stops you from changing the filesystem and reformatting. That should only take a few minutes.
Also, your file server does not use NFS on its hard disks. It is probably using ext2, ext4 or ZFS. Neither of which you need to worry about. As long as you can read the data you can write it in any format.
(Consider the analogy: You copy the text written in a notebook. Do not worry about the form or the colour of the original notebook. As long as you can read it and have a large enough notebook of your own you can copy the content from one notebook to another.)

*: 8-9 hours estimated based on this:
35 MiB/second
100 MiB per 3 seconds.
1000 MiB per 30 seconds, which is the same as 1GiB per 30 seconds.
1GiB per 30 seconds
1000GiB per 30000 seconds
1TiB per 30000 seconds. 30000/3600=8.3 (3600 seconds per hour)
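For the copy itself, something along these lines is a sensible starting point (a sketch; the source and destination paths are placeholders, and ownership will not be preserved on an NTFS or FAT32 target anyway):

rsync -av --progress /data/simulations/ /media/external/simulations/    # resumable if interrupted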
|
I am a graduate student and a relative Linux novice. My institution has an in-house Linux cluster on which I run many scientific simulations. I have a Windows desktop computer from which I access the Linux cluster via SSH.
I have a large amount (~1 TB) of simulation results data on the Linux cluster's file server. When the project is finished, the research group probably will not have the space to save the simulation results. However, I would like to save the files (with the group's permission, of course) on an external drive that I myself will purchase.
My question is, if I purchase a standard Windows external hard drive with a USB connection, will I be able to copy the files from the Linux cluster's files server to the external drive? (I am assuming that the Linux cluster has a USB port, but this is something that I will need to verify.)
It looks like many standard Windows external hard drives are formatted in either NTFS or FAT32, whereas our Ubuntu Linux file server uses NFS. Here are some examples from Amazon:

- Seagate Backup Plus 4 TB USB 3.0 Desktop External Hard Drive STCA4000100 (NTFS)
- WD My Book 4TB External Hard Drive Storage USB 3.0 File Backup and Storage (NTFS)
- BUFFALO MiniStation Plus 1 TB USB 3.0 Portable Hard Drive - HD-PNT1.0U3BS Silver (FAT32)

Do you think any or all of the above hard drives will be able to be easily reformatted in NFS for use with the Linux cluster?
On the other hand, Amazon does have a section for "Linux platform support" external hard drives, such as:

- LaCie 3 TB Minimus Hard Disk USB 3.0 (302004) (file system not specified, as far as I can tell; perhaps it is unformatted?)

But, even if standard Windows external hard drives are easily reformatted, the problem is that I may subsequently want to copy the files from the external hard drive to a Windows computer, which is NTFS. This part of the question may require a separate question or a question on SuperUser, but is it possible to copy NFS files from an external hard drive to a Windows NTFS computer? Thanks for your time.
|
Copying Linux NFS files to a standard consumer external hard drive
|
You can fix it (each time it happens) with this command:

find local_directory_name -depth -exec sh -c 'dir="$(dirname "$0")"; FILE="$(basename "$0")"; lowfile="$(echo "$FILE" | tr "A-Z" "a-z")"; if [ "$lowfile" != "$FILE" ]; then mv "$0" "$dir/$lowfile"; fi' {} ";"

Type this all as one line (replacing local_directory_name
with the name of the directory to which you copied the files).
You can break it into multiple lines by inserting backslashes.
Or you can put the part after sh -c into a script file.
This enumerates all the files in the directory (including subdirectories, recursively)
and executes the given commands on each one.
-depth makes it work "bottom-up", so it processes all the entries in a directory
before it processes (renames) the directory itself.
Each filename (relative path starting from local_directory_name) is broken down
into a directory portion and a plain filename (just the bottom component).
Then the filename is converted from upper case to lower case.
If this is different from the existing filename,
it renames the file to the lower-case name.
I added this check to prevent the diagnostic messages you would otherwise get
from trying to rename a file to itself, which would happen if you had a file
whose name contained no letters (i.e., was numerals and special characters only).
Or, for that matter, if you had a file whose name contained no capital letters.
Afterthought:
another way to avoid mv 123 123 errors is to add -name "*[A-Z]*" after -depth,
which tells find to process only names that contain at least one capital letter.
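For readability, here is the same logic as a standalone script file, as suggested above; a sketch (note that the path arrives as $1 when run this way, rather than $0):

#!/bin/sh
# tolower.sh -- rename one path to its lower-case equivalent
# usage: find local_directory_name -depth -exec sh tolower.sh {} ";"
dir="$(dirname "$1")"
file="$(basename "$1")"
lowfile="$(echo "$file" | tr "A-Z" "a-z")"
if [ "$lowfile" != "$file" ]; then
    mv "$1" "$dir/$lowfile"
fi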
|
I am currently using OpenBSD 5.5-release.
Whenever I copy files or directories from my USB device to the local HDD, the names of the copied files have all become uppercase.
What causes it?
How do I fix it?
|
Names of copied files from USB device to HDD have all become uppercase. How to fix it?
|
The message
failed to preserve ownership for '/mnt2/iwn-firmwae.tgz': Operation not permitted

is more of a warning than an error. The files copied successfully, but permissions and ownership of the files were not copied.
Most likely this is a DOS filesystem which does not support unix ownership and permissions. For the purposes you describe, permissions and ownership are not important, so you can safely ignore this message as a warning.
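If the warning bothers you, copying instead of moving avoids it, since cp does not attempt to preserve ownership unless asked to (with -p); a sketch using the filename from the question:

cp iwn-firmwae.tgz /mnt2/ && rm iwn-firmwae.tgz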
|
I'm following a tutorial on how to install firmware on OpenBSD. The tutorial has me creating a new msdos file system on the USB stick with newfs_msdos -F 32 /dev/rsd2c, then taking the USB stick to a system with an internet connection and moving the firmware tarball onto it. I have never moved data to an msdos filesystem via the command line before. The tutorial shows the author using Dolphin on a Manjaro install; however, I do not have any systems with GUIs installed.
How can I move the tarball to the USB drive?
I've tried mounting it, then moving the file to the mounted directory, but it does not work.
It states: failed to preserve ownership for '/mnt2/iwn-firmwae.tgz': Operation not permitted
Here's a link to the tutorial: https://www.youtube.com/watch?v=kUrUq2qfWiY
|
Error: "failed to preserve ownership" when trying to move files to a FAT32 partition on OpenBSD
|
My working solution was to copy the contents of ddrescue's output file to a different physical hard drive (of equal or, preferably, larger size):
# ddrescue -f defekt_wd.img /dev/sdb to_harddrive.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 468428 MB, errsize: 0 B, current rate: 4653 kB/s
ipos: 468428 MB, errors: 0, average rate: 34703 kB/s
opos: 468428 MB, run time: 3.74 h, successful read: 0 s ago
Finished

The physical hard drive with the rescued content was able to mount, and I have been lucky enough to retrieve around 80% of the important 50 GB of photos. Since the majority were JPEG photos, I could even crop some of the photos that had been partly damaged.
|
I have a failed harddrive with around 400 GB of data, of which approximately 50 GB need to be recovered. All the data is located in a specific directory (/Fotos2018/).
The hard drive is a WD My Passport Essential WDBAAA5000ABK (500 GB, USB 2.0). It contained a FAT32 partition containing my data, as well as another partition containing some WD software.
I attempted to back up my data to a healthy hard drive, using ddrescue --no-split -r3 /dev/sdb1 defekt_wd.iso defekt_wd.log. It generated tons of errors (I don't have the output), but ended up with the output file. The log file is 1.2 MB if that gives any indication.
During this operation, the hard drive sounded increasingly scratchy, and became rather hot.
I have found various methods to extract the contents, but none of them were successful. Below are my attempts:
First, traditional mounting (however, I cannot recompile the kernel on the current machine due to the warranty terms, but if you believe this would work on a different machine, I can copy the image file)
# mount defekt_wd.img /tmp/defektdisk
mount: Could not find any loop device. Maybe this kernel does not know about the loop device? (If so, recompile or `modprobe loop'.)
# modprobe loop
FATAL: Module loop not found.

Second, using xorriso.
# xorriso -indev defekt_wd.img -ls
xorriso 1.3.2 : RockRidge filesystem manipulator, libburnia project.
xorriso : NOTE : Loading ISO image tree from LBA 0
libisoburn: WARNING : No ISO 9660 image at LBA 0. Creating blank image.
Drive current: -indev 'defekt_wd.img'
Media current: stdio file, overwriteable
Media status : is written , is closed
Media summary: 1 session, 228724832 data blocks, 436g data, 0 free
Volume id : 'ISOIMAGE'
Valid ISO nodes found: 0

I have also tried to extract/list/test the archive using 7-zip, e.g.:
# 7z l defekt_wd.img
7-Zip 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
p7zip Version 9.20 (locale=C,Utf16=off,HugeFiles=on,2 CPUs)
Error: defekt_wd.img: Can not open file as archive
Errors: 1

Here's the output of file:
# file defekt_wd.img
defekt_wd.img: x86 boot sector, code offset 0x58, OEM-ID "BSD 4.4", sectors/cluster 64, Media descriptor 0xf8, heads 255, hidden sectors 2048, sectors 975394816 (volumes > 32 MB) , FAT (32 bit), sectors/FAT 119038, reserved3 0x800000, serial number 0xac2710e2, label: "XYZ "

My current theory is that the image file contains two partitions, but I do not know how to extract the contents of just one of them.
Can you offer any suggestions on what to do next?
|
Get contents of ddrescue image file
|
EFI-based systems boot using an EFI system partition, whose format is defined in the EFI specifications. This format is based on FAT, but is maintained by the Unified Extensible Firmware Interface Forum. What happens to FAT now has no effect on the EFI system partition format itself.
So whether FAT32 is deprecated or not, you’ll still see EFI system partitions with a FAT-based format, for a long time to come.
|
Isn't FAT32 a deprecated filesystem format?
Why is GRUB for EFI booting still required to be installed on a FAT32 partition?
|
Why is grub for EFI still installed on FAT32?
|
REMINDER: commands like this are designed to overwrite filesystem data. You must take extreme care to avoid targeting the wrong disk.
EDIT:
Before formatting the card, you may also want to perform a discard operation.
blkdiscard /dev/mmcblk0

This might improve performance - the same as TRIM on a SATA SSD. Resetting the block remapping layer might also theoretically help resolve corruption at or around that layer, although this method is not as good as a dedicated full-device erase command (SATA secure erase). This may not be supported by all card readers. On my Dell Latitude laptop, it reset the card to all-zeros in one second. This implies that on this card it only affected the block remapping layer; it cannot have performed an immediate erase of the entire 16GB of flash.

MicroSD cards contain one or more flash chips and a small microprocessor that acts as an interface between the SD card specification and the flash chip(s). Cards are typically formatted from the factory for near-optimal performance. However, most operating systems' default partitioning and formatting utilities treat the cards like traditional hard drives. What works for traditional hard drives results in degraded performance and lifetime for flash-based cards.

http://3gfp.com/wp/2014/07/formatting-sd-cards-for-speed-and-lifetime/
A script is available for cards up to 32GiB. I have modified it to work with current versions of sfdisk. Running file -s on the resulting partition returned the same numbers as before, except for the number of heads/sectors per track. Those are not used by current operating systems, although apparently some embedded bootloaders will require specific values.
#! /bin/sh
# fdisk portion of script based on mkcard.sh v0.4
# (c) Copyright 2009 Graeme Gregory <[emailprotected]>
# Additional functionality by Steve Sakoman
# (c) Copyright 2010-2011 Steve Sakoman <[emailprotected]>
# Updated by Alan Jenkins (2016)
# Licensed under terms of GPLv2
#
# Parts of the procedure are based on the work of Denys Dmytriyenko
# http://wiki.omap.com/index.php/MMC_Boot_Format

# exit if any command fails
set -e

export LC_ALL=C

format_whole_disk_fat32() {
    if ! id | grep -q root; then
        echo "This utility must be run prefixed with sudo or as root"
        return 1
    fi

    local DRIVE=$1

    # Make sure drive isn't mounted
    # so hopefully this will fail e.g. if we're about to blow away the root filesystem
    for mounted in $(findmnt -o source | grep "^$DRIVE") ; do
        umount "$mounted"
    done

    # Make sure current partition table is deleted
    wipefs --all $DRIVE

    # Get disk size in bytes
    local SIZE=$(fdisk -l $DRIVE | grep Disk | grep bytes | awk '{print $5}')
    echo DISK SIZE – $SIZE bytes

    # Note: I'm changing our default cluster size to 32KiB since all of
    # our 8GiB cards are arriving with 32KiB clusters.  The manufacturers
    # may know something that we do not *or* they're trading speed for
    # more space.
    local CLUSTER_SIZE_KB=32
    local CLUSTER_SIZE_IN_SECTORS=$(( $CLUSTER_SIZE_KB * 2 ))

    # This won't work for drives bigger than 32GiB because
    # 32GiB / 64kiB clusters = 524288 FAT entries
    # 524288 FAT entries * 4 bytes / FAT = 2097152 bytes
    # 2097152 bytes / 512 bytes = 4096 sectors for FAT size
    # 4096 * 2 = 8192 sectors for both FAT tables which leaves no
    # room for the BPB sector
    if [ $SIZE -ge $(( ($CLUSTER_SIZE_KB / 2) * 1024 * 1024 * 1024 )) ]; then
        echo -n "This drive is too large, >= $(($CLUSTER_SIZE_KB / 2))GiB, for this "
        echo "formatting routine."
        return 1
    fi

    # Align partitions for SD card performance/wear optimization
    # Summary: start 1st partition at sector 8192 (4MiB) and align FAT32
    # data to start at 8MiB (4MiB logical)
    # There's a document that explains why, but its too long to
    # reproduce here.
    {
        echo 8192,,0x0C,*
    } | sfdisk -uS -q $DRIVE

    sleep 1

    if [ -b ${DRIVE}1 ]; then
        PART1=${DRIVE}1
    elif [ -b ${DRIVE}p1 ]; then
        PART1=${DRIVE}p1
    else
        echo "Improper partitioning on $DRIVE"
        return 1
    fi

    # Delete any old filesystem visible in new partition
    wipefs --all $PART1

    # Format FAT32 with 64kiB clusters (128 * 512)
    # Format once to get the calculated FAT size
    local FAT_SIZE=$(mkdosfs -F 32 -s $CLUSTER_SIZE_IN_SECTORS -v ${PART1} | \
        sed -n -r -e '/^FAT size is/ s,FAT size is ([0-9]+) sectors.*$,\1,p')

    # Calculate the number of reserved sectors to pad in order to align
    # the FAT32 data area to 4MiB
    local RESERVED_SECTORS=$(( 8192 - 2 * $FAT_SIZE ))

    # Format again with padding
    mkdosfs -F 32 -s $CLUSTER_SIZE_IN_SECTORS -v -R $RESERVED_SECTORS ${PART1}

    # Uncomment to label filesystem
    #fatlabel ${PART1} BOOT
}

#set -x

format_whole_disk_fat32 "$@"
|
I need to reformat an SD card back to factory status.
SD card filesystem used for media has become corrupted. Accessing a certain directory causes the filesystem to be remounted readonly, and it cannot be deleted. fsck.vfat says that it does not have a repair method for the specific type of corruption.
|
Reformat SD card
|
Use fatresize. Be sure to tell it the right partition size, beware of rounding and of different units (SI vs 1024-based). Run grep sdb1 /proc/partitions to get the size of the partition in units of 1024 bytes, and run fatresize -s NNNki /dev/sdb1 (change sdb1 to the actual name of the partition of course).
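Putting the two steps together, something like the following should work; a sketch assuming the partition is sdb1 and the usual /proc/partitions layout (major, minor, #blocks, name):

SIZE_KB=$(awk '$4 == "sdb1" {print $3}' /proc/partitions)    # size in 1024-byte blocks
sudo fatresize -s "${SIZE_KB}ki" /dev/sdb1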
|
I have an SD card with 3 partitions: FAT32, EXT4 and swap. I shrank and moved them recently, but due to a bug in GParted (segfault while resizing FAT32) it is left like this:
Size: 5.87 GiB
Used: 623 MiB
Unused: 4.37 GiB
Unallocated: 915 MiB
GParted suggests that I repair the partition with Partition -> Check, but there's that bug. Are there any other tools which can do the same thing -- expand the FAT32 filesystem to the same size as recorded in the partition table? I tried dosfsck/fsck.vfat and MS chkdsk; none of them helped.
|
FAT32 - Unallocated space within partition
|
As we discussed, the issue was that fdisk does not create a filesystem; it only creates partitions.
To create a FAT32 filesystem on Raspbian you need to install dosfstools and then use mkfs.vfat as follows:

mkfs.vfat -F 32 <device>

In this specific case:

mkfs.vfat -F 32 /dev/mmcblk0p3

After this the device is mountable.

Note: FAT32 has no uid/gid or access permissions written to the files on the filesystem. Therefore you may want to use the -o option of mount to use the files as a normal user. For example:
mount -o uid=myuser /dev/mmcblk0p3 /home/myuser/mymountpoint
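To make the mount persistent across reboots, an /etc/fstab entry along these lines may help (a sketch; the mount point and the uid/gid values are assumptions to adapt):

/dev/mmcblk0p3  /mnt/shared  vfat  defaults,uid=1000,gid=1000  0  0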
|
On a Raspberry Pi with the Raspbian distro I need to make an extra partition that can be read from both Windows and Linux.
So I use fdisk on /dev/mmcblk0 (the SD card) to create a new partition, which is a FAT32 partition, like so:
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 16 125055 125040 61.1M b W95 FAT32
/dev/mmcblk0p2 125056 2000000 1874945 915.5M 83 Linux
/dev/mmcblk0p3 * 2000001 15523839 13523839 6.5G c W95 FAT32 (LBA)

After I have written the above and rebooted the device, there is no extra drive or anything listed like the above partition, which I determine by using the df -h command:
Filesystem Size Used Avail Use% Mounted on
/dev/root 885M 442M 384M 54% /
devtmpfs 483M 0 483M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 6.5M 481M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 487M 0 487M 0% /sys/fs/cgroup
tmpfs 487M 0 487M 0% /tmp
/dev/mmcblk0p1 61M 35M 27M 57% /boot
tmpfs 98M 0 98M 0% /run/user/0

The partition needs to be visible inside the Linux terminal (mounted) and also visible if I pull out the card and plug it into a Windows PC.
|
Partitioning an SD Card with both Linux and FAT32 partitions
|
OK, so in Computer Science, I'm not overly fond of saying "you can't get there from here", but in this case, you're trying to fit a square peg into a round hole.
The sector size is usually set by the DEVICE. The 2048B sector size reported is normal for a CD/DVD drive, whereas 512B is typical for hard drives (or 520B -- which is why I said USUALLY -- some hard drives can actually switch from 512 to 520 and back).
When you ran fdisk, it clearly showed that the media sector size is 2048B. You can't easily change that, and in all likelihood, you can't change that period. You could try contacting the manufacturer of the USB drive to see if there is a tool available to reset the sector size on that device... or you could drive to the store (Walmart? Target? Staples? you name it!) and spend the $5 to $10 to buy a new USB stick.
|
I am trying hard to format a 1GB USB stick so that I can use it to install a new Linux OS, because the Disk utility has failed me when creating the file system. I tried to do it manually using fdisk by going through the following steps to create the master boot record and a 1GB partition:

# fdisk /dev/sdc

Command (m for help): p
Disk /dev/sdc: 994.5 MiB, 1042808832 bytes, 509184 sectors
Units: sectors of 1 * 2048 = 2048 bytes
Sector size (logical/physical): 2048 bytes / 2048 bytes
I/O size (minimum/optimal): 2048 bytes / 2048 bytes
Disklabel type: dos
Disk identifier: 0x967a68db

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 509183 1018366 b W95 FAT32

Command (m for help): o

Created a new DOS disklabel with disk identifier 0x727b4976.

Command (m for help): n

Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (512-509183, default 512):
Last sector, +sectors or +size{K,M,G,T,P} (512-509183, default 509183):

Created a new partition 1 of type 'Linux' and of size 993.5 MiB.

Command (m for help): v
Partition 1: cylinder 253 greater than maximum 252
Partition 1: previous sectors 509183 disagrees with total 507835
Remaining 511 unallocated 2048-byte sectors.

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Then I tried to format it to a FAT32 file system with a 512-byte sector size, but it says the minimum allowed is 2048 bytes.
# mkfs.fat -v -F 32 -S 512 /dev/sdc1
mkfs.fat 3.0.26 (2014-03-07)
Warning: sector size was set to 2048 (minimal for this device)
WARNING: Not enough clusters for a 32 bit FAT!
/dev/sdc1 has 33 heads and 61 sectors per track,
hidden sectors 0x0800;
logical sector size is 2048,
using 0xf8 media descriptor, with 508672 sectors;
drive number 0x80;
filesystem has 2 32-bit FATs and 8 sectors per cluster.
FAT size is 125 sectors, and provides 63548 clusters.
There are 32 reserved sectors.
Volume ID is 1ab3abc1, no volume label.I need 512 bytes sector as syslinux does not support larger sector size.
|
How to format a 1GB USB stick to FAT32 with 512 bytes sector?
|
The fact that the card behaves erratically and unpredictably, with the same errors surfacing again and again, is not a good sign, and is actually a sure symptom of damaged media. It has nothing to do with FAT problems.
I would discard the card, as it cannot be trusted. Unfortunately, SD cards only last so long, and cards with extensive write activity usually have a shorter life.
Android also supports ext2 filesystems. If you use the card exclusively in Linux, it might be an interesting alternative.
Be aware that whilst other Linux filesystems might be supported, it is not a good idea to use journaling filesystems such as ext3 on SD cards. The journaling writes on the filesystem will increase the wear and tear.
|
I'm trying to repair a SD card with FAT, but fsck doesn't write changes — even the magic -w option doesn't help
$ sudo fsck.fat -aw /dev/sda1
fsck.fat 3.0.26 (2014-03-07)
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
Automatically removing dirty bit.
Free cluster summary wrong (240886 vs. really 241296)
Auto-correcting.
Performing changes.
/dev/sda1: 3471 files, 240319/481615 clusters

Looks like it was repaired ↑. But on every restart of fsck, it reports the same problems, and pretends to fix them with the same text.
Here's the verbose variant
$ sudo fsck.fat -awv /dev/sda1
fsck.fat 3.0.26 (2014-03-07)
fsck.fat 3.0.26 (2014-03-07)
Checking we can access the last sector of the filesystem
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
Automatically removing dirty bit.
Boot sector contents:
System ID "mkfs.fat"
Media byte 0xf8 (hard disk)
512 bytes per logical sector
4096 bytes per cluster
32 reserved sectors
First FAT starts at byte 16384 (sector 32)
2 FATs, 32 bit entries
1926656 bytes per FAT (= 3763 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 3869696 (sector 7558)
481615 data clusters (1972695040 bytes)
62 sectors/track, 61 heads
2048 hidden sectors
3860480 sectors total
Reclaiming unconnected clusters.
Checking free cluster summary.
Free cluster summary wrong (240886 vs. really 241296)
Auto-correcting.
Performing changes.
/dev/sda1: 3471 files, 240319/481615 clusters
|
fsck doesn't write changes
|
The EFI System Partition (ESP) is a partition on a data storage device (usually an HDD or SSD) that is used by computers adhering to the Unified Extensible Firmware Interface (UEFI). The firmware uses it as a staging step before handing control to the operating system -- in your case, before Windows on its own partition is started. It's a small partition, but without it your computer wouldn't know how to boot Windows, so don't delete it.
The EFI System Partition is a dedicated partition on GPT. It's usually a small one (100-500 MB) formatted as FAT located at the beginning of the disk, and its partition record is at the beginning of the GPT (GUID Partition Table).
|
I want to increase the size of my Linux partition (/dev/sda5) using the 52.41GB of unallocated space on my SSD but from what I understand the /dev/sda3 partition is in the way of using the unallocated 52GB.
What is the sda3 partition likely to be? Can it be safely deleted or is there a way around this?
Here is an image of GParted
|
What is this FAT32 partition on GParted?
|
Best not to use FAT32 for larger partitions. Use NTFS.
FAT32 has a file size limit of 4GB and you cannot then copy large files to it. It also does not have a journal so chkdsk can take longer or not be able to repair it.
You cannot change permissions or ownership on Windows-formatted partitions; whatever is set at mount time is what you get. Root is often the owner by default, with permissions opened up to make the partition usable, but you can make user 1000 the owner when mounting.
Are you mounting manually or using fstab?
https://askubuntu.com/questions/46588/how-to-automount-ntfs-partitions
https://askubuntu.com/questions/22215/why-have-both-mnt-and-media
An example of mount parameters for NTFS:

nodev,permissions,windows_names,nosuid,noatime,async,big_writes,timeout=2,uid=1000,gid=1000

windows_names,big_writes

big_writes helps speed, and windows_names prevents the use of characters that are invalid on Windows but valid in Linux. Use noatime for an SSD or relatime for an HDD.
My ESP - efi system partition is mounted this way, but it is a smaller partition.
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)

You may also have issues with hibernation. Fast start-up uses the hibernation flag, and the Linux tools will not normally mount hibernated partitions, to prevent damage.
http://askubuntu.com/questions/843153/ubuntu-16-showing-windows-10-partitions &
https://askubuntu.com/questions/145902/unable-to-mount-windows-ntfs-filesystem-due-to-hibernation
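Applied to the FAT32 partition from the question, an fstab line along these lines would hand ownership to your user at mount time (a sketch; the UUID is the one from the question, and uid/gid 1000 is an assumption about the first user account):

UUID=3F02-4BFD  /mnt/sda4  vfat  defaults,uid=1000,gid=1000,umask=022  0  2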
|
I created a new partition in Windows 10 formatted as Fat32, so that I could work with files located in one place despite being logged into my MX Linux installation or Windows 10.
While logged into Windows 10, I can move files in and out of the partition no problem.
While logged into MX Linux, the drive wasn't mounted, so I modified /etc/fstab by adding this line:
UUID=3F02-4BFD /mnt/sda4 vfat defaults 0 2

Then I rebooted, only to find I couldn't mkdir inside /mnt/sda4. So I looked up the permissions and found that every owner and group was root.
So I logged into root and attempted to run:
chown foo:users sda4/

and got the error:

chown: changing ownership of 'sda4/': Operation not permitted

Note that I got this both with sudo and while actually logged in as the root user.
I did some research and apparently there might be some immutability properties so I ran:
lsattr sda4/

And got this on all of the directories:

lsattr: Inappropriate ioctl for device While reading flags on sda4/foo

Currently stuck at this point.
|
Sharing a Partition Between Windows and Linux Throws Permission Errors
|
Yes, NTFS and FAT32 are supported read-write in Linux Mint XFCE.

But it is suggested to switch from NTFS to exFAT, because exFAT is supported by more devices than NTFS (Xbox, etc.).
If you want to connect to a Mac: NTFS opens as read-only on a Mac, but exFAT opens in read-write mode.

FILE SYSTEMS SUPPORTED BY LINUX
ext2, ext3, ext4, NTFS, VFAT, FAT32, exFAT and many more.

BUT you would notice that when you connect exFAT drives, Linux gives error messages. To enable exFAT support, read below.

Enabling exFAT support in Linux
Open a terminal, type the line below and hit ENTER:

sudo apt-get install exfat-fuse exfat-utils

Now when you connect an exFAT USB stick or disk, it will open easily.

However, Windows only supports NTFS, FAT32 and exFAT (the listed file systems are the majorly used ones).
|
Will my FAT32 and NTFS USB sticks and hard disks work in Linux?
I mean, can USB sticks and hard disks with Windows-compatible file systems work in Linux?
[EDIT]
By work I mean: I have an NTFS hard disk, so can I access its files and cut, copy, paste, rename and create new files on the hard disk in Linux?
|
Regarding file system Support in Linux Mint xfce
|
The MS-DOS filesystem variants do not support file permissions or owners (stored on disk). So instead, the kernel defaults them to the mounting user — in this case, root.
You can override this by passing the uid= and gid= options. E.g., sudo mount -o loop,uid=1000,gid=1000 -t msdos "$DISK" "$MOUNTPOINT". (I added quoting there, which is a good habit to get in to). You can check what your uid/gid is with id; it may well be something other than 1000, or alternatively do the following:
uid=$(id -u)    # lower-case names: UID is a read-only variable in bash
gid=$(id -g)
sudo mount -o loop,uid=$uid,gid=$gid -t msdos "$DISK" "$MOUNTPOINT"

These options are documented in man 8 mount, at least.
PS: There are several options for mounting w/o sudo mentioned in that question; e.g., udisks.
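As a sketch of the udisks route, which avoids sudo for both the loop setup and the mount (device names in the output will vary):

udisksctl loop-setup -f "$DISK"    # prints the allocated loop device, e.g. /dev/loop0
udisksctl mount -b /dev/loop0      # mounts it under /media/$USER/...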
|
I'm trying to look into the innards of FAT32, and towards that I'm trying to create a FAT32 image, mount it and do a few file operations on the command line. Per the question here, I know there's no way around using sudo to mount the image. I'm still wondering though why I end up needing sudo in order to do file operations within the mountpoint. A small bash script follows which demonstrates what works and what doesn't. Could someone show me how to do this without root?
DISK=/tmp/disk1.raw
MOUNTPOINT=/tmp/mount1
SIZE=512M
rm -f $DISK
rm -rf $MOUNTPOINT

# create disk image
qemu-img create -f raw $DISK $SIZE
# format disk image
mkfs.fat -v -F 32 $DISK
# make mountpoint
mkdir -p $MOUNTPOINT

# can't be helped
sudo mount -o loop -t msdos $DISK $MOUNTPOINT
# should work but doesn't
mkdir -p $MOUNTPOINT/tmp/
# actually works
sudo mkdir -p $MOUNTPOINT/tmp/
# should work but doesn't
dd of=$MOUNTPOINT/tmp/ticket2.txt if=/dev/zero bs=512 count=9
# actually works
sudo dd of=$MOUNTPOINT/tmp/ticket2.txt if=/dev/zero bs=512 count=9

ls -lR $MOUNTPOINT
sudo umount $MOUNTPOINT
|
Mounting without needing sudo *afterwards*
|
If I'm moving data on the SAME drive but a different partition, shouldn't it be fast? I assumed the move would be a fat table change...

No, because a FAT is part of a file system, and each partition contains one filesystem. So if you move data to a different filesystem, the operating system cannot simply rearrange things in a FAT table -- there are two to consider, and they do not map to each other arbitrarily. The destination must allocate some of its own space, and the source (in a move) frees some.

If it were just a matter of rearranging the tables, you would run into inconsistencies such as:

- I have a 100 GB partition and a 2 GB partition. If moving one to the other just involved rearranging tables, I should be able to move a 20 GB file from the former to the latter.
- I move files to a partition on a USB stick, then I move the stick: if moving files just involved rearranging tables, where are the files going to be when I stick this in another computer?

I realize the second case is not part of the context you are referring to, but the reason they amount to the same thing is because otherwise you would require another abstraction layer stored on the device. It cannot be something simply invented and juggled by the operating system, because you may move the device and/or use it under a different OS: now where is the information?
Devices may contain meta data indicating the size, type, and offset of their partitions. Fortunately, they do not contain information about the content of these partitions. I say fortunately because this is bound to create more problems than it solves.
Filesystems are intended to be top level, discrete entities, not things that are part of a larger system of storage (although they may be that in some contexts).
Some devices such as SSDs may implement an optimizing feature akin to what you imply on a hardware level, however. In other words, if you move something from one partition to another on an SSD, it may only rearrange some references, in so far as that hardware is doing accounting for itself as a whole irrespective of how it has been broken into different partitions on a higher level of abstraction. This would be totally opaque to the operating system and everything else, but you may notice it as an extremely fast move. It requires that the device run some kind of firmware which presents a virtual set of block addresses to the operating system, then maps them to the physical itself, which traditional drives do not do: they present the actual physical addresses to the operating system so that it may make whatever optimal use of this that it can. Hence, file system implementations (FAT, etc.) must assume they are organizing an actual physical region of a device and there is no layer above the filesystem to try to further organize the contents of the entire device (beyond breaking it into partitions).
|
Moving data from one drive to another is slow.
Copying data on a drive to itself is slow.
Moving data within a single drive is fast.
If I'm moving data on the SAME drive but to a different partition, shouldn't it be fast? I assumed the move would be a FAT table change and not an actual move (copy/delete) of the data on the disk. How can I make sure this is what happens?
FYI I'm on Mac OS X and I'm dealing with two FAT32 partitions on the same external drive.
|
Move Data from Partition to Partition on Same Drive
|
It's telling you the reason:
** Reading file would overwrite reserved memory **

Based on the first line of the error message, reading the file into memory using the start address you specified would cause some reserved memory area to be overwritten.
You should either use a different start address (and perhaps rebuild your file(s) to match the changed start address), or perhaps change U-Boot (and rebuild it) to place itself into a different location if U-Boot is the one reserving the memory you are trying to use.
You will have to understand the boot-time memory map of the system you're trying to boot. Without knowing the actual hardware you're using, it's kind of difficult to help you there, but the bdinfo command of U-Boot could be a good starting point.
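For instance, loading to an address higher up in DRAM often stays clear of the region U-Boot reserves for itself; 0x80000000 below is only a guess that must be validated against your board's memory map (e.g. with bdinfo):

=> fatload mmc 1:1 0x80000000 file1.bin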
|
For some reason, my U-Boot does not seem to be able to load files from my FAT32 partition:
=> mmc part

Partition Map for MMC device 1 -- Partition Type: DOS

Part Start Sector Num Sectors UUID Type
1 2048 62519296 a1d1165e-01 0b
=> fatls mmc 1:1
52560 file1.bin
1984 file2.bin
456 file3.bin
64 file4.bin
=> fatload mmc 1:1 0x0001FF80 file1.bin
** Reading file would overwrite reserved memory **
Failed to load 'file1.bin'

Why do I get Failed to load and how can I get around it?
|
Why am I not able to load files from a partition with U-Boot?
|
According to https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system :
For FAT32 file systems, the reserved sectors include a File System Information Sector at logical sector 1 and a Backup Boot Sector at logical sector 6
Which means that you can fix the issue by invoking these two commands (replace sdXX with your partition, e.g. sdb1):
sudo dd if=/dev/sdXX of=bootrec.dat bs=512 count=1
sudo dd if=bootrec.dat of=/dev/sdXX bs=512 seek=6

In case you're working with a disk image file, you must add the conv=notrunc,nocreat parameters at the end of the second command, or otherwise you will truncate and destroy the image.
I've tested the commands and they result in fsck.vfat being totally happy.
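Applied to the image file from the question, that looks something like this (a sketch; the conv flags on the second command follow the note above, so the image is not truncated):

dd if=test_image of=bootrec.dat bs=512 count=1
dd if=bootrec.dat of=test_image bs=512 seek=6 conv=notrunc,nocreat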
|
How do you get syslinux to install to FAT32 and have it write the backup boot sector? It only writes the main boot sector, and then fsck.fat complains. You can get fsck.fat to fix it, but this requires running it in interactive mode, and hence is not possible from a script.
/tmp # fallocate -l 50m test_image
/tmp # mkfs.fat -F32 test_image
mkfs.fat 4.1 (2017-01-24)
/tmp # syslinux --directory syslinux --install test_image

/tmp # fsck.vfat test -a
fsck.fat 4.1 (2017-01-24)
There are differences between boot sector and its backup.
This is mostly harmless. Differences: (offset:original/backup)
3:53/6d, 4:59/6b, 5:53/66, 6:4c/73, 7:49/2e, 8:4e/66, 9:55/61, 10:58/74
, 90:fa/0e, 91:fc/1f, 92:31/be, 93:c9/77, 94:8e/7c, 95:d1/ac, 96:bc/22
, 97:76/c0, 98:7b/74, 99:52/0b, 100:06/56, 101:57/b4, 102:1e/0e, 103:56/bb
, 104:8e/07, 105:c1/00, 106:b1/cd, 107:26/10, 108:bf/5e, 109:78/eb
------------ SNIP ---------------------------------------------------------
, 484:0d/00, 485:0a/00, 504:fe/00, 505:02/00, 506:b2/00, 507:3e/00
, 508:18/00, 509:37/00
Not automatically fixing this.
test: 2 files, 353/100792 clusters
|
How to get syslinux to install to fat32 backup boot sector
|
Use mdir (from mtools):
$ mdir -i boot.img ::
...
g2ldr mbr 8192 2020-05-04 19:14
WIN32-~1 INI 178 2020-05-04 19:14 win32-loader.ini
43 files 76 373 022 bytes
921 333 760 bytes free

As you can see, none of the numbers you have match the remaining free space.
|
Given the current Debian installer hd-media boot image files, how do I find out how much free space is remaining within the contained FAT32-formatted partition?
Here's what I have so far:
$ curl -fsSLO https://deb.debian.org/debian/dists/stable/main/installer-amd64/current/images/hd-media/boot.img.gz

$ gzip -fdk boot.img.gz

$ stat boot.img
File: boot.img
Size: 999997440 Blocks: 1953120 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 7998443 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ neil) Gid: ( 1000/ neil)
Access: 2020-07-23 16:42:25.173516535 +0000
Modify: 2020-07-23 16:41:58.025469623 +0000
Change: 2020-07-23 16:42:35.437534306 +0000
Birth: -

$ file boot.img

boot.img: DOS/MBR boot sector, code offset 0x58+2, OEM-ID "SYSLINUX", sectors/cluster 8, Media descriptor 0xf8, sectors/track 63, heads 255, sectors 1953120 (volumes > 32 MB), FAT (32 bit), sectors/FAT 1904, serial number 0xdeb00001, label: "Debian Inst"

$ fdisk -l boot.img
Disk boot.img: 953.7 MiB, 999997440 bytes, 1953120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x20ac7dda

Device Boot Start End Sectors Size Id Type
boot.img1 3224498923 3657370039 432871117 206.4G 7 HPFS/NTFS/exFAT
boot.img2 3272020941 5225480974 1953460034 931.5G 16 Hidden FAT16
boot.img3 0 0 0 0B 6f unknown
boot.img4 50200576 974536369 924335794 440.8G 0 Empty

Partition table entries are not in disk order.

$ fatresize -i boot.img
fatresize 1.0.2 (10/15/17)
FAT: fat32
Size: 999997440
Min size: 536870912
Max size: 999997440

Is any of the aforementioned numbers the one I want?
|
How do I discover the remaining space on disk image FAT32 partition?
|