Dataset columns: output (string, 9 to 26.3k characters), input (string, 26 to 29.8k characters), instruction (string, 14 to 159 characters).
Your writes to stdout are being buffered by Python and only written out when the buffer is full. There are two simple fixes:

Add the -u option to your python command to ask for unbuffered output:

    python -u /script.py

Alternatively, flush the output after each write. In your example, after the line

    sys.stdout.write('Input: ' + kbInput)

add the line:

    sys.stdout.flush()
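As a quick illustration of the first fix in the context of the question below (the listener command and the /script.py path are the question's own; -u is the only change):

    # listening side: run the script unbuffered so replies reach the connected client immediately
    netcat -l -p 1234 -e 'python -u /script.py'

    # connecting side, in another shell
    netcat localhost 1234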
I'm using netcat to create a backdoor running a python script with the following command:

    netcat -l -p 1234 -e 'python /script.py'

then I'm connecting to the backdoor with another shell using:

    netcat localhost 1234

script.py is a simple loop that reads input, saves it to a file, and then prints it back. Now whatever I write in the second shell goes to the script and is successfully saved to a file. However, I don't see the output of the script on the second shell. I tried both the print and sys.stdout.write methods of python and both seem to fail. I don't know why the output is relayed back to the second shell if I use this:

    netcat localhost 1234 /bin/bash

But not with my script. I'm obviously missing something important. Here is my script:

    import sys
    while 1:
        kbInput = sys.stdin.readline()
        sys.stdout.write('Input: ' + kbInput)
        f = open("output.txt", "w")
        f.write(kbInput)
        f.close()
        print
'netcat -e' not relaying stdout
This below is the entry point to a multi-input script:

    #!/bin/bash
    [ $# -ge 1 -a -f "$1" ] && input="$1" || input="-"
    # your script's payload here

The #! line is self-explanatory, I hope. On the second line, $# -ge 1 tests for at least one command line argument, -a is the boolean AND operator, and -f "$1" tests whether the first argument is a file; && is followed by the directive to be executed if the previous condition holds true, and || is followed by what happens if the test condition is not true. You can then feed it either way:

    nc -k -l 127.0.0.1 4444 > filename.out
    my_processing_script filename.out

-or-

    nc -k -l 127.0.0.1 4444 | my_processing_script

So, if I have an argument and it is a file, my input is this file; if not, my input is coming from the pipe, i.e. "-", and then you can run your thing as you wish either way. I tested with a payload of

    awk '{print $2}' ${input}

and my input was coming from the netstat -rn command; it worked either way. I hope this is what you are asking about.
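Putting those pieces together, a minimal sketch of such a script; the name my_processing_script and the awk payload are just the illustrative examples used above, not a fixed convention:

    #!/bin/bash
    # my_processing_script: read from a file argument if one is given, otherwise from stdin ("-")
    [ $# -ge 1 -a -f "$1" ] && input="$1" || input="-"
    # payload: print the second column of whatever comes in
    awk '{print $2}' "$input"

Used either as my_processing_script filename.out or as nc -k -l 127.0.0.1 4444 | my_processing_script.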
I have a simple bash nc script:

    #!/bin/bash
    nc -k -l 127.0.0.1 4444 > filename.out

which listens on port 4444 for TCP connections. Instead of redirecting received data to filename.out I would like, if possible, to pass each chunk of data (single lines of text) to script.sh as an argument. How do I do that? Thanks in advance.

EDIT: I've also tried with:

    #!/bin/bash
    nc -k -l 127.0.0.1 4444 | /path/to/script.sh

but that doesn't work
How to pass received data from netcat to another script as argument?
Consider this Python3 example.

Server A:

    #!/usr/bin/env python3
    # coding=utf8

    from subprocess import check_call
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.server import SimpleXMLRPCRequestHandler

    # Restrict to a particular path
    class RequestHandler(SimpleXMLRPCRequestHandler):
        rpc_paths = ('/JRK75WAS5GMOHA9WV8GA48CJ3SG7CHXL',)

    # Create server
    server = SimpleXMLRPCServer(
        ('127.0.0.1', 8888),
        requestHandler=RequestHandler)

    # Register your function
    server.register_function(check_call, 'call')

    # Run the server's main loop
    server.serve_forever()

Server B:

    #!/usr/bin/env python3
    # coding=utf8

    import xmlrpc.client

    host = '127.0.0.1'
    port = 8888
    path = 'JRK75WAS5GMOHA9WV8GA48CJ3SG7CHXL'

    # Create client
    s = xmlrpc.client.ServerProxy('http://{}:{}/{}'.format(host, port, path))

    # Call your function on the remote server
    s.call(['alarm'])
I want one of my machines to have a remote control alarm running that can be triggered by any remote machine. More precisely:

Machine A is running the service in the background.
Any remote machine B can send a packet to machine A to trigger the alarm (a command called alarm).

How would you suggest to do it? I would use nc.

Service on machine A:

    nc -l 1111; alarm

Machine B triggers the alarm with:

    nc <IP of machine A> 1111

I can also write some python to open a socket...
Remote control alarm
What you are attempting to do doesn't make any sense. You are trying to create two TCP sockets with the same 5-tuple { SRC-IP, SRC-PORT, DST-IP, DST-PORT, PROTO } therefore the two sockets would be indistinguishable from each other. Think of it this way: if this were allowed, then, when a TCP packet arrives sourced from 127.0.0.1:80 and destined to 127.0.0.1:80, which socket receives it? Both?
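For contrast, a sketch where only the source port differs, so the resulting 5-tuple would be unique; port 12345 is an arbitrary example, and the flags assume the same ncat build as in the question below:

    # terminal 1: server on 127.0.0.1:80
    nc -4 --listen 127.0.0.1 80

    # terminal 2: same source address, but a different source port, which can bind and connect
    nc -4 --source-port 12345 --source 127.0.0.1 127.0.0.1 80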
Here is a simple server listening on port 80 of localhost:

    nc -4 --listen 127.0.0.1 80

Here is the client, connecting to the server on localhost using a source port that is the same as the destination port of the server:

    nc -4 --source-port 80 --source 127.0.0.1 127.0.0.1 80

I get the error:

    libnsock mksock_bind_addr(): Bind to 127.0.0.1:80 failed (IOD #1): Address already in use (98)

According to the rule that states that { SRC-IP, SRC-PORT, DST-IP, DST-PORT, PROTO } must be unique, the creation of this connection should be allowed. There was no such tuple before attempting to create the connection for the first time. Why is this not allowed? I'm running Fedora 23 with kernel 4.4.6.
Connecting to server on localhost with same source and destination port
I know a way with socat:

    socat TCP-LISTEN:3000,fork SYSTEM:'sleep 2; cat httprespose',pty,echo=0

Roughly based on another answer of mine.
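A quick way to try it out; the httprespose file name comes from the question below, and the sample response body here is only an illustrative placeholder:

    # create a minimal response to serve
    printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok' > httprespose

    # listen, delaying 2 seconds before replying
    socat TCP-LISTEN:3000,fork SYSTEM:'sleep 2; cat httprespose',pty,echo=0 &

    # from another shell: the response should arrive after roughly 2 seconds
    curl -s http://localhost:3000/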
Scenario: Whenever the netcat server receives a connection, I want it to sleep for 2s before returning an HTTP response. I know we can turn netcat into a simple HTTP server with something like:

    nc -lp 3000 < httprespose

Question: How do I simulate the 2s delay?
How to set up a simple netcat server which sleeps before it returns an HTTP response
Your code is most of the way there; all you need to do now is to treat $cmd as a queue of newline-terminated strings. In naive Python-ish pseudo-code, because I haven't written PHP for a long time:

    input = ''
    while True:
        try:
            first_command, remaining_input = input.split('\n', 1)
            execute(first_command)
            input = remaining_input
        except ValueError:
            # No newline yet; we have to wait for more input
            input += socket.receive()

This way you can handle both streams of input and single, interactive commands, which will probably not be the way your application will be used.
I've written a small UDP server in PHP which listens on a socket, accepts a command, and performs an action. It's really quite basic. I can connect to it manually like this:

    % nc -u host port

(where nc = Ncat: Version 7.50 ( https://nmap.org/ncat )) As I enter commands, I see the resulting response. It works exactly the way I want it to work from the command line. However, if I simply "cat" a file in like this:

    cat FILE | nc -u host port

or send it data like this:

    echo "command1\ncommand2\n" | nc -u host port

... then my PHP app reads everything, including the end of line characters, all at once. I only want to read up to the end of the line. Sure, I could wrap around the contents of the file and send each line to nc:

    for x in `cat <file>`; do
        echo $x | nc -u host port
    done

... but that's a complete waste. I want 1 connection to nc, not many. The EOL characters make it into the string, because when I print the output in the PHP app, I see:

    command1
    command2

... but it's all in one string. Why is the interactive mode behaving differently than the non-interactive mode? I've been experimenting all afternoon, and I can't seem to make this work. I'm sure there's an explanation though. Thanks for any information you can provide. PS: The basics of the PHP code is:

    $sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
    socket_bind($sock, '0.0.0.0', 2000);
    for (;;) {
        socket_recvfrom($sock, $cmd, 1024, 0, $ip, $port);
        echo "command is $cmd";
        $reply = process($cmd);
        echo "reply is $reply";
        socket_sendto($sock, $reply, strlen($reply), 0, $ip, $port);
    }

While entering text with nc interactively, you can hit CTRL-D to separate packets. If I do the same thing in a shell script:

    printf 'command1\0014commandd2\0014' | nc -u host port

... the commands all appear as one command. If I reduce the packet size in the PHP code to, say, 5, then send data that is 20 bytes long, the data is truncated, and not separated. I've also read online this could be a buffering issue. I tried using:

    stdbuf -oL -eL cat FILE | nc -u host port

... but that surprisingly didn't make a difference either. Finally, I have discovered that if I do:

    for x in command1 command2; do
        echo $x
        sleep 1
    done | nc -u host port

... everything goes as planned. The server receives the first command, then the second command. What isn't clear really is why the sleep 1 makes a difference. Take it out, and it fails. The above is certainly better than sending each echo to nc.
problem piping data into nc
This is because the kernel doesn't really distinguish between inbound and outbound traffic beyond the source and destination IP addresses. The packets don't get "double-counted" because the kernel looks at the source IP, sees that it's local, classifies it as outbound, and doesn't bother classifying the packet any further (e.g. as inbound).
I was curious to know how much traffic the Linux kernel could handle on the loopback network, so I decided to benchmark it. In one terminal, I ran:

    % nc -l -p 5235 127.0.0.1 > /dev/null

And in another I ran:

    % nc 127.0.0.1 5235 < /dev/zero

Then to actually measure traffic I ran sudo nethogs lo. This shows an entry for the second nc showing that it sends about 570,000 KB/second (on average). The first nc seems to send about 1,300 KB/second on average, which I assume is TCP control packets. However, both nc processes show 0 KB/second received. Why is this? It seems like each process should report a received value equal to the other's sent value. Version information:

    % nethogs -V
    version 0.8.1
    % uname -a
    Linux file-not-regular.strugee.net 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
    % nc -h |& head -1
    [v1.10-41]
Why does NetHogs report 0 KB/second while benchmarking lo?
Notes, too long for a comment.

server.txt could better be named server-list, or servers.txt if you wish.
Use lower case variable names; avoid SERVER and the like. It could better be named server_ip anyway, because it is unclear whether you use host names or IPs.
Double quote all non-integer-number variables, like "$server_ip".
Use a direct if statement instead of the variable $OPEN, with a redirection to the black hole (/dev/null).

Rewritten based on the above:

    #!/bin/sh

    while read -r server_ip; do
        if nc -z -v -w5 "$server_ip" 22 >/dev/null 2>&1; then
            echo "Found SSH port open on $server_ip."
        else
            echo "Did not find open SSH port on $server_ip." >&2
        fi
    done < server-list
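A hypothetical usage example; the file name check-ssh.sh for the rewritten script is a placeholder, and the two addresses are taken from the question's own output:

    # one address or host name per line
    printf '%s\n' 10.10.51.55 10.10.51.65 > server-list

    # run the check; misses are reported on stderr
    sh check-ssh.sh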
I'm trying to create a rather basic script to run through a list of servers and check if the SSH port is open using nc. I've tried a few different ways, but can't seem to get this to work. I am definitely not great at any type of scripting. Here is the script. I just want it to perform an action if it sees "succeeded" in the response from the nc command in OPEN.

    while read SERVER
    do
        OPEN=$(nc -z -v -w5 $SERVER 22)
        echo $SERVER
        if [[ $OPEN = *"suc"* ]]; then
            echo "Found SSH open on $SERVER"
        else
            echo "No open ports on $SERVER!"
        fi
    done < server.txt

The list of servers is in the server.txt file that is referenced at the end of the script. Here is the response that I get:

    nc: connect to 10.10.51.55 port 22 (tcp) failed: No route to host
    10.10.51.55
    No open ports on test1!
    Connection to 10.10.51.65 22 port [tcp/ssh] succeeded!
    10.10.50.65
    No open ports in test2!

It gives me "No open ports on $SERVER" no matter what. I thank you for any guidance.
Failing IF in a WHILE loop in a BASH script that checks for open port 22
The target port is blocked by the target firewall (at least for your and my IP addresses). If you run tcpdump -i any -n icmp then you see a host unreachable - admin prohibited ICMP packet.
I want to check if I can connect to Rspamd's Fuzzy port and have a very strange problem - I can ping the host and get an answer (0% packet loss). But when I try to telnet to it, I get "No route to host":

    # telnet 88.99.142.95 11335
    Trying 88.99.142.95...
    telnet: Unable to connect to remote host: No route to host

And the same with nc:

    nc -vz 88.99.142.95 11335
    mail.highsecure.ru [88.99.142.95] 11335 (?) : No route to host

Sure, at first glance this looks like a firewall problem, but refer to the following output (the firewall is completely off - better said, there are no blocking rules):

    # iptables -S
    -P INPUT ACCEPT
    -P FORWARD ACCEPT
    -P OUTPUT ACCEPT
    # iptables -v -n -L
    Chain INPUT (policy ACCEPT 98 packets, 6560 bytes)
     pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain OUTPUT (policy ACCEPT 64 packets, 5736 bytes)
     pkts bytes target prot opt in out source destination

I'm on a server virtual machine with Debian 10. Does anyone have an idea where the problem might be?
Ping works, but I get 'No route to host' even though my firewall is off
@muru's comment is very valid, but for the sake of academic purposes:

Computer A:

    cat source | ssh user@ComputerB 'cat > destination'

(Assumes password-less authentication by public key.)
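Since the question below also asks about directories, a sketch under the same assumption (password-less public-key authentication); scp ships with OpenSSH, so it stays within "just ssh", and the destination paths here are placeholders:

    # copy a single file (film.mp4 is the file named in the question)
    scp film.mp4 user@ComputerB:/path/to/destination/

    # copy a whole directory recursively
    scp -r some_directory user@ComputerB:/path/to/destination/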
Every time I want to send something from computer A to computer B with nc, I use these commands.

Computer A:

    tar cfp - film.mp4 | nc -w 3 192.168.xxx.xx 1234

Computer B:

    nc -l -p 1234 | tar xvfp -

My question: I have OpenSSH; how can I send my videos, files, directories, ... with ssh and nc, without using other programs like rsync, tar, sftp, ...? Many thanks for your recommendations!
netcat with ssh
I do not know which version of netcat you are using, but mine does not have a -c parameter. However, this works for me:

    tail -F /var/log/changes.log | nc 127.0.0.1 1234
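For completeness, a sketch of both ends, assuming a traditional netcat where -l -p opens the listening side (option spellings differ between netcat variants):

    # receiver: listen on TCP port 1234 and print whatever arrives
    nc -l -p 1234

    # sender: follow the log file and stream each new line to the receiver
    tail -F /var/log/changes.log | nc 127.0.0.1 1234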
Wait for file changes and send new lines to a TCP port server. I've tried:

    nc 127.0.0.1 1234 -c "tail -F /var/log/changes.log" &

But I get a broken pipe.
How to send new lines from file to a tcp port?
I'm using netcat and not gnu-netcat; I'm not sure what version you're using, but if it's gnu-netcat the options might be different. I have a -q option:

    -q seconds    after EOF is detected, wait the specified number of seconds then quit

So, if I do:

    $ nc -l localhost -p 7000 -q 0 < /etc/passwd

Followed by:

    $ nc localhost 7000

Then I get the content of /etc/passwd on the second terminal, and both instances of nc terminate.
I expected this:

    nc -l localhost 7000 </dev/null &
    nc localhost 7000 </etc/profile

and this:

    nc -l localhost 7000 </etc/profile &
    nc localhost 7000 </dev/null

to finish after printing my /etc/profile, but both command groups end up getting stuck (both processes in the first case; in the second case, the server finishes but the client does not). Why don't the commands finish? Is this a bug in my nc/Linux (4.15)? I tried it on MacOS and Cygwin and only the Linux commands aren't finishing.
nc getting stuck unexpectedly
A rdr is only a redirect; one may also need a pass (or appropriate clicking around in the System Preferences) to permit access to that port. This works for me (though I've almost totally disabled the Apple firewall rules on my Mac OS X 10.11 laptop) in /etc/pf.conf:

    set skip on { lo0, vboxnet0 }
    rdr pass inet proto tcp from any to any port 445 -> 127.0.0.1 port 5441
    block in
    pass in inet proto tcp from any to any port 445

And then a sudo pfctl -f /etc/pf.conf to load that. Testing with nc -l 127.0.0.1 5441 and connections to port 445 from a remote machine then shows access. Hmm! However, locally on the Mac OS X system telnet 127.0.0.1 445 fails, probably on account of the skip lo0. This can be rectified by not using skip if localhost access to the redirect is necessary:

    set skip on vboxnet0
    rdr pass inet proto tcp from any to any port 445 -> 127.0.0.1 port 5441
    block in
    pass on lo0
    pass in inet proto tcp from any to any port 445

Also note that localhost may mean either 127.0.0.1 or ::1, so you may also need to set up inet6 related rules, or ensure that the connections are always done with IPv4 so that IPv6 either works or is not used.
I've set up port-forwarding on my mac like this:

    sudo sysctl net.inet.ip.forwarding=1
    echo "rdr pass inet proto tcp from any to any port 445 -> 127.0.0.1 port 5441" | sudo pfctl -ef -

With this setup, I am running a server using nc like this:

    $ nc -l 5441

When I try to connect to this server via telnet, the attempt fails with the following error:

    $ sudo telnet 127.0.0.1 445
    Trying 127.0.0.1...
    telnet: connect to address 127.0.0.1: Connection refused
    telnet: Unable to connect to remote host

Running tcpdump on port 445 doesn't capture any packets. I am not sure what's going on and would appreciate all the help.
Port forwarding isn't working on mac osx
You have set up a listening datagram socket with socat's UNIX-RECV: and are attempting to talk to it via a stream socket with nc. The second scenario works because in that case you added the missing -u flag to nc, so that both it and socat were employing a datagram socket. It wasn't anything to do with there being a proxy.

Further reading:
https://unix.stackexchange.com/a/294221/5132
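A sketch of the direct, proxy-free test this implies, with datagram sockets on both ends; the -uU combination assumes an OpenBSD-style netcat that supports Unix-domain datagram sockets:

    # listener: receive datagrams on the Unix socket and hex-dump them to stdout
    socat -x -u UNIX-RECV:/tmp/dd.sock STDOUT

    # sender: -U selects a Unix-domain socket, -u makes it a datagram socket
    echo "hello" | nc -uU -w1 /tmp/dd.sock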
I'm trying to debug why data isn't being sent over a Unix Domain Socket. I have 2 applications which should be communicating over a UDS but aren't. To test, I've done the following. Using socat, I listen on a socket like this:

    socat -x -u UNIX-RECV:/tmp/dd.sock STDOUT

and use netcat to send data like this:

    echo "hello" | nc -U -w1 /tmp/dd.sock

Nothing happens. But if I also set up socat as a proxy, to listen on a UDP port and write that to the socket like this:

    socat -s -u UDP-RECV:9988 UNIX-SENDTO:/tmp/dd.sock

then sending via netcat to the UDP port works:

    echo "Hello" | nc -u localhost 9988

I've also been able to get my client application to write UDP to the proxy, and it's successful where it wasn't when writing to the unix socket. I would like to understand why socat doesn't receive data written to it by nc, but does if I proxy over UDP. Using Amazon Linux 4.14.101-75.76.amzn1.x86_64.
Sending data to Unix socket failing unless proxied with socat via UDP
Hi, if you want to check the connection between servers with socat, try the command forms below (this is the synopsis from the socat man page):

    socat [options] <address> <address>
    socat -V
    socat -h[h[h]] | -?[?[?]]
    filan
    procan

Refer to the socat documentation for a better understanding; there are also other methods to check the connectivity.
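A sketch of one common way to get the "nc -z" behaviour with socat (connect, send nothing, and inspect the exit status); treat the exact options as an assumption to verify against your socat version, and replace host and 22 with your own target:

    # exit status 0 if the TCP connection succeeds, non-zero otherwise
    socat /dev/null TCP4:host:22,connect-timeout=2
    echo $?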
I want to get the behavior of "nc -z host port ; echo $?" with socat, since my network admins have disabled netcat. The purpose is just to test that a TCP connection is open between two servers. How would I go about doing this?
How to "nc -z <address>" with socat?
The && only applies the second part if the first part is true. So what you have is this:

nc succeeds, so run the if: it succeeded (which it did), so echo 1.
nc fails, so don't go past the &&.

I think what you want is this:

    if nc -z 10.102.10.22 10003 > /dev/null; then echo 1; else echo 0; fi
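The question below also asks how to fall back to 0 after about 10 seconds when the host never answers; a hedged sketch, assuming an nc that supports -w (timeout in seconds), with the coreutils timeout wrapper as an alternative:

    # give up after 10 seconds using nc's own timeout option
    if nc -z -w 10 10.102.10.22 10003 > /dev/null; then echo 1; else echo 0; fi

    # or wrap the call with timeout if your nc lacks -w
    if timeout 10 nc -z 10.102.10.22 10003 > /dev/null; then echo 1; else echo 0; fi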
I have the following command line, which should return the value 1 when, by means of nc, communication with the IP and port in question is verified:

    /bin/nc -z 10.102.10.22 10003 > /dev/null && if [ $? -eq 0 ]; then echo 1; else echo 0; fi

The result is satisfactory and I get the value 1. But when there is no communication with the port or IP, the command is left "waiting", without returning a value of any kind. What would be the correct way to return the value 0 (the else statement) after a specific time has passed (e.g. 10 seconds)? The results of this command are monitored every short while to draw a communications graph, so it is interesting to know when it is 0.
Conditional `if` with command that doesn't respond in else
Pipe the whole loop into nc:

    while true; do
        echo 'lol'
        sleep 1
    done | nc -l 9000

This will start a single instance of nc, listening for connections on port 9000, and send "lol" to it once per second. Note that the "lol"s will accumulate until the connection is opened, so you might see a number of "lol"s sent immediately on connection. You could add a delay at the start:

    (sleep 5
    while true; do
        echo 'lol'
        sleep 1
    done) | nc -l 9000
I'm learning Apache Flink. Here is the Hello World of Flink: https://ci.apache.org/projects/flink/flink-docs-stable/getting-started/tutorials/local_setup.html This example is a program which counts the words received every 5 seconds. If we want to run this sample, we need to do the following steps:

Execute nc -l 9000 on one terminal (A);
Execute ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000 on another terminal (B);
Go to terminal A and type some words.

If we Ctrl-C on terminal A, this sample will be terminated. I want to know if it is possible to type the words programmatically at terminal A. For example, I want to type the word lol every second at terminal A. What should I do? The code below won't work:

    #!/bin/bash

    while true; do
        echo 'lol' | nc -l 9000
        sleep 1
    done

Of course, I may try to modify SocketWindowWordCount.java to do so, but for now, for some reason, I can not change the Java code.
How to use nc to send message consecutively per second
There are a couple of things to note when using NFSv4 id mapping on mounts which use the default AUTH_SYS authentication (sec=sys mount option) instead of Kerberos.

NOTE: With AUTH_SYS, idmapping only translates the user/group names. Permissions are still checked against local UID/GID values. The only way to get permissions working with usernames is with Kerberos.

On recent kernels, only the server uses rpc.idmapd (documented in man rpc.idmapd). When using idmap, the user names are transmitted in user@domain format. Unless a domain name is configured in /etc/idmapd.conf, idmapd uses the system's DNS domain name. For idmap to map the users correctly, the domain name needs to be the same on the client and on the server.

Secondly, the kernel disables id mapping for NFSv4 sec=sys mounts by default. Setting the nfs4_disable_idmapping parameter to false enables id mapping for sec=sys mounts. On the server:

    echo "N" > /sys/module/nfsd/parameters/nfs4_disable_idmapping

and on client(s):

    echo "N" > /sys/module/nfs/parameters/nfs4_disable_idmapping

You need to clear the idmap cache with nfsidmap -c on clients for the changes to be visible on mounted NFSv4 file systems. To make these changes permanent, create configuration files in /etc/modprobe.d/. On the server (modprobe.d/nfsd.conf):

    options nfsd nfs4_disable_idmapping=N

On client(s) (modprobe.d/nfs.conf):

    options nfs nfs4_disable_idmapping=N
I have a Server (Debian) that is serving some folders through NFS and a Client (Debian) that connects to the NFS Server (with NFSv4) and mounts the exported folder. So far everything is fine; I can connect and modify the content of the folders. But the users are completely messed up. From what I understand this is due to NFS using the UIDs to set the permissions, and as the UIDs of the users on the Client and the Server differ, this happens, which is still expected. But from what I understood, by enabling NFSv4, IDMAPD should kick in and use the username instead of the UIDs. The users do exist on the Server and Client side, they just have different UIDs. But for whatever reason IDMAPD doesn't work or doesn't seem to do anything. So here is what I've done so far.

On the Server side:

installed nfs-kernel-server
populated /etc/exports with the proper export settings --> /rfolder ip/24(rw,sync,no_subtree_check,no_root_squash)
changed /etc/default/nfs-common to have NEED_IDMAPD=yes

On the Client side:

installed nfs-common
changed /etc/default/nfs-common to have NEED_IDMAPD=yes
mounted the folder with "mount -t nfs4 ip:/rfolder /media/lfolder"

Rebooted and restarted both several times, but still nothing. When I create a folder from the Server with user A, on the Client I see that the folder owner is some user X. When I create a file from the Client with user A, on the Server side it says it's from some user Y. I checked with HTOP that the rpc.idmap process is running on the Server, and it is indeed, although on the Client it doesn't appear to be running. By trying to manually start the service on the Client I just got an error message stating that IDMAP requires the nfs-kernel-server dependency to run. So I installed it on the Client side, and now I have the rpc.idmap process running on both Client and Server. Restarted both, and the issue still persists. Any idea what is wrong here? Or how to configure this properly?
How to get NFSv4 idmap working with sec=sys?
The man page for exports says this about the fsid parameter:

    NFS needs to be able to identify each filesystem that it exports. Normally it will use a UUID for the filesystem (if the filesystem has such a thing) or the device number of the device holding the filesystem (if the filesystem is stored on the device). As not all filesystems are stored on devices, and not all filesystems have UUIDs, it is sometimes necessary to explicitly tell NFS how to identify a filesystem. This is done with the fsid= option.

Now that we know why the fsid parameter is sometimes required, let's look at what the man page says about NFSv4 (this part differs significantly from older versions):

    For NFSv4, there is a distinguished filesystem which is the root of all exported filesystem. This is specified with fsid=root or fsid=0 both of which mean exactly the same thing.

So, in other words, with NFSv4 the first line in your example is defining a base point under which all directories the server is exporting to a certain client are located. At first sight this might sound like a step back compared to the flexibility of NFSv3, which allows you to export any directory. However, you can easily bring directories which have been mounted elsewhere into this directory. Just add a line like the following one to /etc/fstab

    /path/to/music/dir /srv/nfs/music none rbind 0 0

and run mount -a to mount the new filesystem. The new approach essentially acts as a pseudo root file system, preventing clients from getting access to data outside of this directory and allowing the NFS server to easily check whether a client should be granted access to a file, reducing complexity while at the same time improving security.
In tutorials on NFSv4 it is common to see the recommendation to export a shared root directory to the entire subnet, similar to this from the Arch wiki:

/etc/exports:

    /srv/nfs        192.168.1.0/24(rw,sync,crossmnt,fsid=0,no_subtree_check)
    /srv/nfs/music  192.168.1.0/24(rw,sync,no_subtree_check)
    /srv/nfs/public 192.168.1.0/24(ro,all_squash,insecure,no_subtree_check) desktop(rw,sync,all_squash,anonuid=99,anongid=99,no_subtree_check)

Would it be more secure to not have the first line at all? (Throughout my question, when I mention "first line" I'm referring to the first export line in the snippet above.) With NFSv4, fsid=0 is optional, and it seems to be mostly a convenience to shorten the path (by translating /srv/nfs to /). In my preliminary testing, the exports do work correctly without that first line as long as one uses /srv/nfs. What, if any, important functions does that first line provide?

I also want to consider the example when we know that there is at least one more bind mount such as this one: /srv/nfs/projects (Assume the contents of "projects" are sensitive, unlike the music bind mount.) The first export line, as it is written above, seems to give every client on the LAN access to the entire exported file system, especially with crossmnt and no_subtree_check. I'm not suggesting that owner and group permissions are ignored, but I am thinking that the exports would work just as well without the first line and that the other bind mounts under /srv/nfs (such as /srv/nfs/projects) would not be unnecessarily opened up to the entire subnet, exposing any potential oversights around owner and group permissions. Of course, one can add an export similar to this one to properly share "projects":

/etc/exports:

    /srv/nfs/projects 192.168.1.123(rw,sync,root_squash,subtree_check)

Is this new line impacted in any way by the first line? I fail to see how the first export line has served any important purpose. If having that line is helpful or recommended, would it be wise to do something like this?

    /srv/nfs 192.168.1.0/24(ro,sync,fsid=0,subtree_check,all_squash)

Is it correct that the more specific exports that follow will override this one? For example, would the following line properly grant rw access to "music" even when the first line is changed to ro and made more restrictive as in the example below?

    /srv/nfs       192.168.1.0/24(ro,sync,fsid=0,subtree_check,all_squash)
    /srv/nfs/music 192.168.1.2(rw,sync,no_subtree_check)

Is order important in listing the exports? What determines precedence? Do more specific addresses (e.g., 192.168.1.2) override more general ones (192.168.1.0/24)?

NOTE: there is a single focused theme to this question, and it could probably use a better title to make that clear, but I'm not coming up with the right title for it so far. I welcome edits.
implications of using NFSv4 fsid=0 and exporting the NFS root to entire LAN (or not)
It seems to me you're trying to use rsync wrong. Rsync's protocol is designed for the exact scenario of comparing / synchronising large file systems on two separate servers. It does as much as it can locally on both the local and remote machine before comparing in the middle. Its protocol is designed such that an rsync agent on one machine talks to an rsync agent on another, and the protocol is designed to massively reduce the number of round trips (and total data) required to complete the task. That is, rsync is designed to work like this:

                [fast]        [slow SSH]        [fast]
    File system <----> rsync <----------> rsync <----> File system

Rsync is optimised for network performance between the two agents, but it has no way to control the protocol used to access the disk. So when you mount a remote NFS file system you change the profile of network access:

                [fast]        [fast]       [slow NFS]
    File system <----> rsync <------> rsync <---------> File system

Rsync can't do anything about this because it has absolutely no control over the NFS protocol.

One concrete difference here is that over NFS, every file must be individually requested. To explore a file tree containing /foo/bar/baz you have to request / [wait], then request /foo [wait], then request /foo/bar [wait], then finally request /foo/bar/baz. At 110ms latency per request that's 330ms of latency and you only got one file. Rsync's protocol between agents doesn't have this limitation. The agent running on the remote machine eagerly compiles a list of every file and directory in the remote file system being synchronised and sends over everything. There's only one request for the entire file tree! See how rsync works.
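A sketch of how to see the difference yourself, reusing the question's placeholder paths and host; --dry-run is added here as an assumption so that neither command changes anything:

    # comparison through the NFS mount (one NFS round trip per file)
    time rsync -r --size-only --dry-run /<local_path>/ /<remote_path>/

    # agent-to-agent comparison over ssh (one exchange for the whole file list)
    time rsync -r --size-only --dry-run /<local_path>/ <user>@<west_cost_NFS>:/<remote_path>/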
We are using rsync to synchronize data between two NFS servers. One NFS server is on the east coast, the other is on the west coast. RTT is about 110ms. On the east coast NFS server I mount the west coast NFS server's mount point:

    <server>:/home/backups on /mnt/backups type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=x.x.x.x,local_lock=none,addr=y.y.y.y)

The data is ALREADY on both servers, and this is just to do a validation of the data (e.g. sync folders when nothing needs to be changed). The following is how long it takes to validate that the east coast server is the same as the west coast server for a 7GB folder. This takes about 8 minutes to complete over 7GB of data:

    rsync -r -vvvv --info=progress2 --size-only /<local_path>/ /<remote_path>/

The following (which avoids using the NFS mount) takes about 15 seconds to complete over the same 7GB of data:

    rsync -r -vvvv --info=progress2 --size-only /<local_path>/ <user>@<west_cost_NFS>:/<remote_path>/

Again, the above is NOT moving any data as the folders are already synchronized; it's just validating that the data is the same (based on the size of files). I've tried using -o async on the client and async in /etc/exports on the server, but the client won't ever show async when I run "mount" on the client. I assume async is the default. I've also tried changing rsize and wsize to larger values, but performance doesn't get much better. Am I just SOL on getting any better performance out of NFS?
NFS performance over high latency is poor, rsync over ssh is about 100x faster
From the kernel documentation for kernel parameters nfs.nfs4_disable_idmapping=[NFSv4] When set to the default of '1', this option ensures that both the RPC level authentication scheme and the NFS level operations agree to use numeric uids/gids if the mount is using the 'sec=sys' security flavour. In effect it is disabling idmapping, which can make migration from legacy NFSv2/v3 systems to NFSv4 easier. Servers that do not support this mode of operation will be autodetected by the client, and it will fall back to using the idmapper. To turn off this behaviour, set the value to '0'.nfsd.nfs4_disable_idmapping=[NFSv4] When set to the default of '1', the NFSv4 server will return only numeric uids and gids to clients using auth_sys, and will accept numeric uids and gids from such clients. This is intended to ease migration from NFSv2/v3.nfs.nfs4_disable_idmapping=1 and nfsd.nfs4_disable_idmapping=1 Disabling the id mapper nfsd.nfs4_disable_idmapping=1 and nfs.nfs4_disable_idmapping=1 on the SERVER and CLIENT resulted in systemd starting up to the user login prompt, with only 1 error:Failed to start Remount Root and Kernel File System, which was however resolved by adding modconf to the mkinitcpio hooks; together with block keyboard in an attempt to deal with the other apparent problem: the laptop keyboard does not work... The rpc.idmapd -fvvv did not output any messages. I am able to login as root using an external USB keyboard, read and create files. I have not done any extensive testing so there could still be problems with this solution. nfs.nfs4_disable_idmapping=0 and nfsd.nfs4_disable_idmapping=0 It seems that echo "options nfs nfs4_disable_idmapping=0" >> /etc/modprobe.d/nfs.conf (or cat /sys/module/nfsd/parameters/nfs4_disable_idmapping -> N) on the CLIENT did not have any effect. The CLIENT id mapper was disabled until I explicitly passed the parameter nfs.nfs4_disable_idmapping=0 to the kernel during boot (GRUB). The rpc.idmapd -fvvv did not output any complaints. On the other hand, it did not print out anything else after establishing the first rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"... The Wireshark log however no longer records a NFS4ERR_BADOWNER. Nonetheless, all the systemd startup failures persist...Failed to mount POSIX Message Queue File System. Failed to start Remount Root and Kernel File System. Failed to mount Huge Pages File System. Failed to mount Kernel Debug File System. Failed to mount Kernel Configuration File System. Then ends with Not tainted 4.13.12-1-ARCH #1...Conclusion nfs.nfs4_disable_idmapping=0 and nfsd.nfs4_disable_idmapping=0 Save for setting up Kerberos and troubleshooting, I am not sure what to try next. The rpc.idmapd still seems to be unable to map correct permissions, but rpc.idmapd -fvvv no longer outputs any errors...? What to do? The boot errors could perhaps be caused by something else... I dunno... nfs.nfs4_disable_idmapping=1 and nfsd.nfs4_disable_idmapping=1 Although it works, the approach seems wrong; I am not migrating, and I should be able to set up the system using rpc.idmapd. For now it will have to do; it will probably come back and bite me in the future...
updates: 5 (20171209) updates: 5 (20171210)mount -t nfs4 [SERVER IP]:/archlinux /mnt works. ss -ntp | grep 2049 the client establishes a connection to the server before systemd begins. NSF4 id mapper can only be used with Kerberos?the problem I am attempting to set up a diskless node/workstation/system. The OS (4.13.12-1-ARCH) is installed on the SERVER /srv/archlinux. After a successful netboot from GRUB to NFSv4, systemd begins but fails at multiple stages, for example:Failed to mount Kernel Configuration File System. Failed to mount Kernel Debug File System. Failed to mount Huge Pages File System Failed to start Load/Save Random Seed. Failed to mount /tmp. Failed to start Rebuild Journal Catalog. Then ends with Not tainted 4.13.12-1-ARCH #1...Or,Failed to mount POSIX Message Queue File System. Failed to start Remount Root and Kernel File System. Failed to mount Huge Pages File System. Failed to mount Kernel Debug File System. Failed to mount Kernel Configuration File System. Then ends with Not tainted 4.13.12-1-ARCH #1...I suspect the failures are caused by an incorrect configuration of NFSv4 or the local network. rpc.idmapd /etc/idmapd.conf [General] Verbosity = 7 Pipefs-Directory = /var/lib/nfs/rpc_pipefs Domain = localdomain [Mapping] Nobody-User = nobody Nobody-Group = nobody [Translation] Method = nnswitch/etc/exports (printed using # exportfs -v) /srv <world>(rw,sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,no_root_squash,no_all_squash) /srv/archlinux <world>(rw,sync,wdelay,hide,no_subtree_check,sec=sys,no_root_squash,no_all_squash)(Exposed to "world" for debugging purposes)Running rpc.idmapd -fvvv on a separate tty during bootup logs the following: rpc.idmapd: libnfsidmap: using domain: localdomain rpc.idmapd: libnfsidmap: Realms list: 'LOCALDOMAIN' rpc.idmapd: libnfsidmap: processing 'Method' list rpc.idmapd: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/nsswitch.so for method nsswitch rpc.idmapd: Expiration time is 600 seconds. 
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel rpc.idmapd: nfsdcb: authbuf=* authtype=user rpc.idmapd: nfs4_uid_to_name: calling nsswitch->uid_to_name rpc.idmapd: nfs4_uid_to_name: nsswitch->uid_to_name returned 0 rpc.idmapd: nfs4_uid_to_name: final return value is 0 rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"If exportfs sec=sys, it continues like: rpc.idmapd: nfsdch: authbuf=* authtype=user rpc.idmapd: nfs4_name_to_uid: calling nsswitch->name_to_uid rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)' rpc.idmapd: nss_getpwnam: name '0' does not map into domain 'localdomain' rpc.idmapd: nfs4_name_to_uid: nsswitch->name_to_uid returned -22 rpc.idmapd: nfs4_name_to_uid: final return value is -22 rpc.idmapd: Server : (user) name "0" -> id "99" (stops here)+(20171209) After making sure that the /etc/hostname for the CLIENT was set to client2 (duh), if exportfs sec=none or sec=sys, it continues like: rpc.idmapd: nfsdch: authbuf=* authtype=group rpc.idmapd: nfs4_gid_to_name: calling nsswitch->gid_to_name rpc.idmapd: nfs4_gid_to_name: nsswitch->gid_to_name returned 0 rpc.idmapd: nfs4_gid_to_name: final return value is 0 rpc.idmapd: Server : (group) id "190" -> name "systemd-journal@localdomain" rpc.idmapd: nfsdch: authbuf=* authtype=user rpc.idmapd: nfs4_name_to_uid: calling nsswitch->name_to_uid rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)' rpc.idmapd: nss_getpwnam: name '0' does not map into domain 'localdomain' rpc.idmapd: nfs4_name_to_uid: nsswitch->name_to_uid returned -22 rpc.idmapd: nfs4_name_to_uid: final return value is -22 rpc.idmapd: Server : (user) name "0" -> id "99" (stops here)If I instead change method from nsswitch to static (UID mapping in NFS) /etc/idmapd.conf ... [Translation] Method = static [Static] root@localdomain = rootThe rpc.idmapd -fvvv on a separate tty during bootup logs the following: rpc.idmapd: libnfsidmap: using domain: localdomain rpc.idmapd: libnfsidmap: Realms list: 'LOCALDOMAIN' rpc.idmapd: libnfsidmap: processing 'Method' list rpc.idmapd: static_getpwnam: name 'root@localdomain' mapped to 'root' rpc.idmapd: static_getpwnam: group 'root@localdomain' mapped to ' root' rpc.idmapd: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/static.so for method static rpc.idmapd: Expiration time is 600 seconds. 
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel rpc.idmapd: nfsdcb: authbuf=* authtype=user rpc.idmapd: nfs4_uid_to_name: calling static->uid_to_name rpc.idmapd: nfs4_uid_to_name: static->uid_to_name returned 0 rpc.idmapd: nfs4_uid_to_name: final return value is 0 rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"If exportfs sec=sys, it continues like: rpc.idmapd: nfsdch: authbuf=* authtype=user rpc.idmapd: nfs4_name_to_uid: calling static->name_to_uid rpc.idmapd: nfs4_name_to_uid: static->name_to_uid returned -2 rpc.idmapd: nfs4_name_to_uid: final return value is -2 rpc.idmapd: Server : (user) name "0" -> id "99" (stops here)If exportfs sec=none, it continues like: rpc.idmapd: nfsdch: authbuf=* authtype=group rpc.idmapd: nfs4_gid_to_name: calling static->gid_to_name rpc.idmapd: nfs4_gid_to_name: static->gid_to_name returned -2 rpc.idmapd: nfs4_gid_to_name: final return value is -2 rpc.idmapd: Server : (group) id "190" -> name "nobody" rpc.idmapd: nfsdch: authbuf=* authtype=user rpc.idmapd: nfs4_name_to_uid: calling static->name_to_uid rpc.idmapd: nfs4_name_to_uid: static->name_to_uid returned -2 rpc.idmapd: nfs4_name_to_uid: final return value is -2 rpc.idmapd: Server : (user) name "0" -> id "99" (stops here)Similar problems with the user ID mapping:NFSv4 User Mapping NFS user mapping Mapping UID and GID of local user to the mounted NFS share And many many more... Often related to a switch from NFSv3 to NFSv4, and rarely about netboot.troubleshootingNo firewall No Kerberos, LDAP, etc. No SELinux The user root exists on both SERVER and CLIENT, with the same password.SERVER All other relevant configuration files for NFSv4 I could identify on the SERVER. /etc/nsswitch.conf passwd: compat mymachines systemd group: compat mymachines systemd shadow: compat publickey: files hosts: files mymachines resolve [!UNAVAIL=return] dns myhostname networks: files protocols: files services: files ethers: files rpc: files netgroup: files/etc/nfs.conf (all settings commented out) /etc/conf.d/nfs-common.conf (all settings commented out)network configurationHow to set the domain name on GNU/Linux? Archlinux Wiki Network configuration: Set the hostname Archlinux Wiki Network configuration: Local network hostname resolutionThe SERVER hostname is server and has 3 network devices (nd[1-3]). The Gateway default via 192.168.0.1 nd1. /etc/hosts 127.0.0.1 localhost.localdomain localhost ::1 ip6.localhost localhost 192.168.0.101 nd1.localdomain server servernd1 192.168.1.101 nd2.localdomain server servernd2 192.168.2.101 nd3.localdomain server servernd2 192.168.1.102 client1.localdomain client1 192.168.2.102 client2.localdomain client2/etc/resolveconf.conf name_servers=192.168.0.1# hostname -f # nd1.localdomain# hostname -i 192.168.0.101 192.168.1.101 192.168.2.101# getent hosts IP -> the corresponding line in /etc/hosts # getent ahosts HOSTNAME -> the corresponding line in /etc/hosts# ping -c 3 server.localdomain -> 0% packet loss# id -u root -> 0 # id -un 0 -> rootDisplay the system's effective NFSv4 domain name on stdout. # nfsidmap -d -> localdomainDisplay on stdout all keys currently in the keyring used to cache ID mapping results. These keys are visible only to the superuser. 
# nfsidmap -l -> nfsidmap: '.id_resolver' keyring was not found.CLIENT /etc/hostname +(20171209) client2 /etc/hosts (exactly the same as the hosts file on the server) /etc/resolveconf.conf name_servers=192.168.0.1 /etc/idmapd.conf (exactly the same as the idmapd.conf file on the server) /etc/fstab # sys=sec or sys=none to correspond to server export settings. /dev/nfs / nfs rw,hard,rsize=9151,sec=sys,clientaddr=192.168.2.102 0 0 devtmpfs /dev devtmpfs defaults proc /proc proc defaults none /run tmpfs defaults sys /sys sysfs defaults run /run tmpfs defaults tmp /tmp tmpfs defaultsThe fstab was defined by comparing the mounted directories on the server using findmnt -A. net_nfs4+(20171210) NFS version on SERVER and CLIENT cat /proc/fs/nfsd/versions -> -2 +3 +4 +4.1 +4.2 On SERVER and CLIENT cat /sys/module/nfsd/parameters/nfs4_disable_idmapping -> N. On SERVER echo "options nfsd nfs4_disable_idmapping=0" > /etc/modprobe.d/nfsd.conf. On CLIENT the /sys/module/nfs/parameters/nfs4_disable_idmapping does not exist, and not sure how to manually create it as the /sys is read only. +(20171210) On CLIENT echo "options nfs nfs4_disable_idmapping=0" > /etc/modprobe.d/nfs.conf.The CLIENT IP is 192.168.2.102/24. The CLIENT network device is connected to SERVER nd2 192.168.2.101/24 (hostname: servernd2). The network information during boot: :: running early hook [udev] starting version 235 :: running hook [udev] :: Triggering uevents... :: running hook [net_nfs4] IP-Config: eth0 hardware address [CLIENT NETWORK DEVICE MAC] mtu 1500 DHCP hostname client2 IP-Config: eth0 guessed broadcast address 192.168.2.255 IP-Config: eth0 complete (from 192.168.0.101): address: 192.168.2.102 broadcast: 192.168.2.255 netmask: 255.255.255.0 gateway: 192.168.2.101 dns0 : 192.168.0.1 dns1 : 0.0.0.0 host : client2 domain : localdomain rootserver: 192.168.0.101 rootpath: /srv/archlinux filename : /netboot/grub/i386-pc/core.0 NFS-Mount: 192.168.2.101:/archlinux Waiting 10 seconds for device /dev/nfs ... (systemd takes over from here)Why the NSFv4 errors occur? Server : (group) id "190" -> name "nobody"With NFSv4, things change: users are mapped by username, and the mapping between user names and user IDs is handled by a process called "ID map daemon" (idmapd). In particular, NFSv4 clients and server should use the same domain for the mapping to work properly, otherwise requests will be mapped to the anonymous user/group. -- Trying out NFSv4 (on Linux and Solaris) -- March 15th, 2012 - 13:03 / brontoIn an ideal world, the user and group of the requesting client would determine the permissions of the data returned. We don't live in an ideal world. Two real-world problems intervene:You might not trust the root user of a client with root access to the server's files. The same username on client and server might have different numerical ID'sProblem 1 is conceptually simple. John Q. Programmer is given a test machine for which he has root access. In no way does that mean that John Q. Programmer should be able to alter root owned files on the server. Therefore NFS offers root squashing, a feature that maps uid 0 (root) to the anonymous (nfsnobody) uid, which defaults to -2 (65534 on 16 bit numbers). -- NFS: Overview and Gotchas -- Copyright (C) 2003 by Steve Litt+(20171209) rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)' According to Steve Dickson in a comment (2011-08-12 16:01:55 EDT) to a Red Hat Bugzilla – Bug 715430 reportThe [error] statement explains the problem. 
DNS on the local machine was not set up (or returning NULL) and the Domain= variable in /etc/idmapd.conf was not set.nss_getpwnam: name '0' does not map into domain On the Debian Mailing Lists, in an e-mail correspondence between Jonas Meurer and Christian Seiler (20150722) concerning "Kerberos-secured NFSv4" the error is explained in detail. My summary of the discussion: When the NFS client sends nss_getpwnam: name '8' domain 'freesources.org': resulting localname '(null)'The NFS client sends just the uid converted to a string in some cases instead of the properly translated NFS username, which the server then rejects.The client should send nss_getpwnam: name '[emailprotected]' domain 'freesources.org': resulting localname 'mail'Here you can see that the owner name that was transmitted by the NFS client was '[emailprotected]' (and not simply '8'), so that does contain an @; nss_getpwname can see that the domain name matches and just strips it, resulting in a user name 'mail', which it looks up in /etc/passwd, returns the user id (in this case, 8, because it's the same on client and server) and the server is perfectly happy.So why does the client send the wrong username? ... every once in a while, idmapping will fail, so the kernel will just send a number. But that number will cause the chown command to fail, since the server won't translate it back. Short answer: I have no idea. Longer answer: ...If I understand the longer answer correctly, the problem could occur because the NFS client relies on the "kernel's key cache". For the NFS server this should never be a problem because the "kernel's key cache" is never used. Nonetheless,Since you are using just regular nsswitch via /etc/passwd, nss_getpwnam should never fail in your case, unless you do some weird stuff with /etc/passwd at the same time.The answer also refers to an alternative method to idmapd; nfsidmap, although reading the man I cannot quite understand how it would replace idmapd. +(20171209) nss_getpwnam: name '[emailprotected]' does not map into domain 'localdomain' This error message does not seem to occur for me, I am however including the answer from SUSE's support knowledgebase -- 10-DEC-13 Modified Date: 12-OCT-17 -- because of the description of cause, and the proposed remedy which stands in contrast to the other found discussions.NFSv4 handles user identities differently than NFSv3. In v3, an nfs client would simply pass a UID number in chown (and other requests) and the nfs server would accept that (even if the nfs server did not know of an account with that UID number). However, v4 was designed to pass identities in the form of @. To function correctly, that normally requires idmapd (id mapping daemon) to be active at client and server, and for each to consider themselves part of the same id mapping domain. Chown failures or idmapd errors like the ones documented above are typically a result of either:The username is known to the client but not known to the server, or The idmapd domain name is set differently on the client than it is on the server.Therefore, this issue can be fixed by insuring that the nfs server and client are configured with the same idmapd domain name (/etc/idmapd.conf) and both have knowledge of the usernames / accounts in question. However, it is often not convenient to insure that both sides have the same user account knowledge, especially if the nfs server is a filer. 
The NFS community has recognized that this idmapd feature of NFSv4 is often more troublesome that it is worth, so there are steps and modifications coming into effect to allow the NFSv3 behavior to work even under NFSv4.The proposed remedy is to disable idmapd. nfs.nfs4_disable_idmapping=1+(20171209) Wireshark Analyzing the Wireshark log, it is quite extensive but begins with something like: [IP CLIENT] -> [IP SERVER] NFS 226 V4 Call ACCESS FH: [HEX VALUE], [Check: RD LU MD XT DL] [IP SERVER] -> [IP CLIENT] NFS 238 V4 Reply (Call In 34) ACCESS, [Allowed: RD LU MD XT DL] [IP CLIENT] -> [IP SERVER] NFS 246 V4 Call LOOKUP DH: [HEX VALUE]/archlinuxwhere a similar pattern [A HEX VALUE]/[PATH] can be discerned for /sbin, /usr, /bin, /init, /lib, /systemd, /dev, /proc, /sys, /run, /, /lib64. When the CLIENT requests /Id-linux-x86-64.so.2 the first errors start to appear: [IP CLIENT] -> [IP SERVER] NFS 342 V4 Call OPEN DH: [HEX VALUE]/Id-linux-x86-64.so.2 [SERVER IP] -> [CLIENT IP] NFS 166 V4 Reply (Call In 124) OPEN Status: NFS4ERR_SYMLINKThe pattern more or less repeats itself with more frequent errors, for example, LOOKUP Status; and OPEN Status: reporting NFS4ERR_NOENT. Interestingly, it is at the very end of the log where to first and only reference to user permission is made, [SERVER IP] -> [CLIENT IP] NFS 182 V4 Reply (Call In 9562) SETATTR Status: NFS4ERR_BADOWNERRFC According toRFC7530 (Network File System (NFS) Version 4 Protocol, 201503, PROPOSED STANDARD) -- Updated by RFC7931 RFC5661 (Network File System (NFS) Version 4 Minor Version 1 Protocol, 201001, PROPOSED STANDARD) -- Updated by RFC8178 RFC7862 (Network File System (NFS) Version 4 Minor Version 2 Protocol, 201001, PROPOSED STANDARD) -- Updated by RFC8178 -- which refers back to [RFC5661].NFS4ERR_BADOWNER (Error Code 10039)This error is returned when an owner or owner_group attribute value or the who field of an ACE within an ACL attribute value cannot be translated to a local representation.The specifications discuss in Section 5.9. Interpreting owner and owner_group, I am not sure what to cite as relevant however. NFS4ERR_SYMLINK (Error Code 10029)The current filehandle designates a symbolic link when the current operation does not allow a symbolic link as the target.NFS4ERR_NOENT (Error Code 2)This indicates no such file or directory. The file system object referenced by the name specified does not exist.The error could however be expected ...The current filehandle is assumed to refer to a regular directory a named attribute directory. LOOKUPP assigns the filehandle for its parent directory to be the current filehandle. If there is no parent directory, an NFS4ERR_NOENT error must be returned. Therefore, NFS4ERR_NOENT will be returned by the server when the current filehandle is at the root or top of the server's file tree.+(20171210) mount -t nfs4 [SERVER IP]:/archlinux /mnt On the client computer, using the Archlinux "LiveUSB" I was able to mount the network drive, download the latest kernel (4.14-4-1-ARCH) via the SERVER internet connection, and install archlinux on the [SERVER IP]/archlinux. During install rpc.idmapd -fvvv indicated a successful mapping of usernames, for example, rpc.idmapd: Server : (user) id "0" -> name "root@localdomain" rpc.idmapd: Server : (group) id "99" -> name "nobody@localdomain" ... -> name "tty@localdomain" ... -> name "systemd-journal-upload@localdomain" ... -> name rpc@localdomain ... -> name systemd-journal@localdomain ... 
-> name utmp@localdomainThe result of genfstab was also different: [SERVER IP]:/archlinux / nfs4 rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,times=600,retrans=2,sec=sys,clientaddr=[CLIENT IP],local_lock=none,addr=[SERVER IP] 0 0Nevertheless, after reboot systemd failed again with the same failures as described at the beginning of the post. +(20171210) Is the remote directory on the server mounted to /new_root? The mkinitcpio script uses the variable mount_handler to carry an assigned "mounting function", in this case nfs_mount_handler(), to which the "root path" is passed $1 at a later stage; /new_root. I am trying to verify that the client has mounted the [SERVER IP]:/archlinux to the /new_root. On the server, I can only observe that the client has established a connection but not if the directory is mounted and to where? showmount -a server -> All mount points on server: (empty)ss -ntp | grep 2049 -> ESTAB 0 0 192.168.2.101:2049 192.168.2.102:809 (random port)+(20171210) NFS4, sec=sys and id mapper are incompatible?Reading the doco, it looks like sec=sys and the id mapper can be used to correctly map uid/gid to name where the client and server have different mappings in /etc/passwd and /etc/group. This simply isn't true. That's because with sec=sys the id mapper doesn't come into play in the authentication part of the nfs protocol, only the file attributes part. With sec=sys authentication, nfs just passes the client uid/gid which is used directly by the server. So permissions checks will be screwed if client and server uid and gid don't align. To confuse things further, when the client creates a new file it is the authentication credentials that are used, so the file gets created at the server with the client's uid/gid. After that nfs uses idmap to get the file attributes, so the uid/gid (which originally came from the client) gets mapped at the server, and you end up seeing the server's name for a client uid/gid. Borkage! On the other hand, if the file was originally created at the server, you will see the correct name at the client, even if the uid/gid differs. But permissions checking will still be broken. -- kimmie -- Posted: Wed Feb 20, 2013 3:14 am Post subject: -- Emphasis in original
archlinux netboot diskless node/system, systemd on NFS (v4) fails, rpc.idmapd
I found the issue. Basically, I've created a Debian unprivileged container in Proxmox. That means NFS is unavailable. Until now, I was unaware of that restriction while using Proxmox containers. To be able to access the NFS share within that container, I followed some suggestions from the Proxmox forum. First, I mounted the NFS share in the Proxmox host (no issues there). Then, in Proxmox, I created a "bind mount" to bind that NFS partition to my container:

    # pct set 903 -mp0 /mnt/host_dir,mp=/mnt/guest_dir

I'm not sure this is the best approach, but now I can access that NFS share from within the container. Another possibility is to recreate the container with privilege and NFS enabled.
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted". The NFS server has the following share:

    /mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check)

The share seems to be active for both clients:

    # exportfs -s
    /mnt/share_dir  192.168.7.101(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
    /mnt/share_dir  192.168.7.11(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)

The client 192.168.7.101 can see the share:

    $ sudo showmount -e 192.168.7.10
    Export list for 192.168.7.10:
    /mnt/share_dir 192.168.7.101

192.168.7.101's mount destination:

    # ls -lah /mnt/share_dir/
    total 8.0K
    drwxr-xr-x 2 me   me   4.0K Aug 28 19:21 .
    drwxr-xr-x 3 root root 4.0K Aug 28 19:21 ..

When I try to mount the share, the client says "operation not permitted" with either nfs or nfs4 type:

    $ sudo mount -vvv -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
    mount.nfs: timeout set for Sun Aug 28 21:56:03 2022
    mount.nfs: trying text-based options 'vers=4.2,addr=192.168.7.10,clientaddr=192.168.7.101'
    mount.nfs: mount(2): Operation not permitted
    mount.nfs: trying text-based options 'addr=192.168.7.10'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: trying 192.168.7.10 prog 100003 vers 3 prot TCP port 2049
    mount.nfs: prog 100005, trying vers=3, prot=17
    mount.nfs: trying 192.168.7.10 prog 100005 vers 3 prot UDP port 46169
    mount.nfs: mount(2): Operation not permitted
    mount.nfs: Operation not permitted

I've set fsid=0 and insecure in the export options, but it didn't work. RPC info from the client's side:

    # rpcinfo -p 192.168.7.10
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100005    1   udp  59675  mountd
        100005    1   tcp  37269  mountd
        100005    2   udp  41354  mountd
        100005    2   tcp  38377  mountd
        100005    3   udp  46169  mountd
        100005    3   tcp  39211  mountd
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100227    3   tcp   2049
        100003    3   udp   2049  nfs
        100227    3   udp   2049
        100021    1   udp  46745  nlockmgr
        100021    3   udp  46745  nlockmgr
        100021    4   udp  46745  nlockmgr
        100021    1   tcp  42571  nlockmgr
        100021    3   tcp  42571  nlockmgr
        100021    4   tcp  42571  nlockmgr

Using another client, 192.168.7.11, I was able to mount that share with no issues. I cannot see any issue or misconfiguration, and could not find a fix anywhere. There's no firewall in the way and both server and client are using Debian 11. Any idea what's going on?
Mount NFS - "operation not permitted" in Proxmox container
My problem turned out to be a failure to understand how POSIX ACLs work; in particular, I assumed that default ACLs set permissions on the directory they've been applied to, when in fact they only apply to subdirectories and files created after the default ACLs have been applied to the parent object, and don't apply to the parent object itself at all. What tipped me off was an appeal to the NFS developers list regarding this issue. One of the developers pointed out (and this is important to know) that NFS is entirely agnostic about this and simply defers to the permissions set on the server's underlying filesystem. That pointed me in the direction of looking at the POSIX ACLs on the server filesystem and -- after some experimentation (this is poorly documented) -- I worked out the behaviour described above and recreated the POSIX ACLs to actually do what I wanted them to do. The issue went away after that.
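To illustrate the distinction, here is a sketch (the path and user come from the question; these are not necessarily the exact commands that were used): a default ACL only covers children created later, while a plain access ACL is what grants rights on the existing directory tree itself.
setfacl -R -m u:cryosparc_user:rX /EM/EMtifs/pgoetz     # access ACL: existing directories and files
setfacl -R -d -m u:cryosparc_user:rX /EM/EMtifs/pgoetz  # default ACL: anything created from now on
getfacl /EM/EMtifs/pgoetz                               # should now list both kinds of entries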
I have an XFS filesystem in which some folders (using mode setting) have no public accessiblity, with group owner having read-only privileges. There is a program (which runs as user cryosparc_user) which needs read access to all the files, so I added a default POSIX ACL granting cryosparc_user read access. Unfortunately most of the processing is done on workstations which NFSv4 mount this filesystem, and for some reason the POSIX ACLs aren't being translated or honored on the workstation (well, they are, but not in a usable way apparently), and I can't figure out why. Both server and workstation are running Ubuntu 18.04, and I can't simply add cryosparc_user to the group, as the group is an Active Directory security group (we're doing authentication through AD), and cryosparc_user is a local user which can't be set up in AD. Here are the permissions on the fileserver: root@kraken:/EM/EMtifs# getfacl pgoetz # file: pgoetz # owner: pgoetz # group: cns-cnsitlabusers user::rwx group::r-x other::--- default:user::rwx default:user:cryosparc_user:r-x default:group::r-x default:mask::r-x default:other::---root@kraken:/EM/EMtifs# id cryosparc_user uid=1017(cryosparc_user) gid=1017(cryosparc_user) groups=1017(cryosparc_user),10002(mclellan),10003(taylorlab)Here is what they look like on the workstation with the NFSv4 mount: root@javelina:/EM/EMtifs# getfacl pgoetz # file: pgoetz # owner: pgoetz # group: cns-cnsitlabusers user::rwx group::r-x other::---root@javelina:/EM/EMtifs# id cryosparc_user uid=1017(cryosparc_user) gid=1017(cryosparc_user) groups=1017(cryosparc_user)root@javelina:/EM/EMtifs# nfs4_getfacl pgoetz A::OWNER@:rwaDxtTcCy A::GROUP@:rxtcy A::EVERYONE@:tcy A:fdi:OWNER@:rwaDxtTcCy A:fdi:1017:rxtcy A:fdi:GROUP@:rxtcy A:fdi:EVERYONE@:tcyNotice the 3rd line from the bottom in the NFS4 ACL query. The cryosparc_user user seems to be afforded read acces (local UID is 1017 on both systems), however cryosparc_user@javelina:/EM/EMtifs$ whoami cryosparc_user cryosparc_user@javelina:/EM/EMtifs$ ls pgoetz ls: cannot open directory 'pgoetz': Permission deniedFrom everything I've read there are no mount flags to set or anything like this; this should just work automatically and I can't figure out why it's not working. My fallback plan is to revert to using local groups on these folders (so that cryosparc_user can be added to the local group), but this would require duplicating AD authentication structure on each system, which will be a maintenance headache. Another idea was to also do a read-only SMB mount of this filesystem using cryosparc_user user credentials for the mount, but I'm not terribly excited about double mounting a 500T filesystem, either. I'd rather authentication just work in a rational way.
Why is NFSv4 not translating POSIX ACL's in a usable way?
From looking at a packet capture in Wireshark, it appears this is a bug in the Linux server, not the FreeBSD client. I believe this is a bug in Linux kernel commit e377a3e698fb, first included in version 5.18. That commit adds support for reporting TIME_CREATE aka birth time. After that commit, the code that writes out file timestamps looks like this:
if (bmval1 & FATTR4_WORD1_TIME_ACCESS) {
    p = xdr_reserve_space(xdr, 12);
    if (!p)
        goto out_resource;
    p = xdr_encode_hyper(p, (s64)stat.atime.tv_sec);
    *p++ = cpu_to_be32(stat.atime.tv_nsec);
}
if (bmval1 & FATTR4_WORD1_TIME_DELTA) { ... }
if (bmval1 & FATTR4_WORD1_TIME_METADATA) { ... }
if (bmval1 & FATTR4_WORD1_TIME_MODIFY) { ... }
if (bmval1 & FATTR4_WORD1_TIME_CREATE) { ... }
I believe the way the protocol works is that the times are written in an order that's supposed to match the bmval1 bitmask. In this case, the order would be [ACCESS, METADATA, MODIFY, CREATE]. However, let's look at the actual bit flag values:
#define FATTR4_WORD1_TIME_ACCESS (1UL << 15)
#define FATTR4_WORD1_TIME_ACCESS_SET (1UL << 16)
#define FATTR4_WORD1_TIME_BACKUP (1UL << 17)
#define FATTR4_WORD1_TIME_CREATE (1UL << 18)
#define FATTR4_WORD1_TIME_DELTA (1UL << 19)
#define FATTR4_WORD1_TIME_METADATA (1UL << 20)
#define FATTR4_WORD1_TIME_MODIFY (1UL << 21)
#define FATTR4_WORD1_TIME_MODIFY_SET (1UL << 22)
So the order they should be written in is [ACCESS, CREATE, METADATA, MODIFY]. This matches what I'm seeing:
            Linux                           FreeBSD
ACCESS      1991-12-14 00:00:00.000000000   Dec 14 00:00:00 1991
CREATE      2023-06-23 15:19:21.718006131   Jun 23 15:19:24 2023
METADATA    2023-06-23 15:19:24.718067075   Dec 15 00:00:00 1991
MODIFY      1991-12-15 00:00:00.000000000   Jun 23 15:19:21 2023
I have an NFS mount served from a Linux server to a FreeBSD client. If I use touch to set the atime and mtime of a file on the FreeBSD client, tavianator@muon $ touch -at "199112140000" ./foo tavianator@muon $ touch -mt "199112150000" ./foothen print the stat times, tavianator@muon $ stat -f $'Access: %Sa\nModify: %Sm\nChange: %Sc\n Birth: %SB' ./foo Access: Dec 14 00:00:00 1991 Modify: Jun 22 16:44:08 2023 Change: Dec 15 00:00:00 1991 Birth: Jun 22 16:45:56 2023the ctime and mtime are switched! However, the view from the Linux server is correct: tavianator@tachyon $ stat /srv/nfs/freebsd/usr/home/tavianator/foo File: /srv/nfs/freebsd/usr/home/tavianator/foo Size: 0 Blocks: 0 IO Block: 4096 regular empty file Device: 0,33 Inode: 54691999 Links: 1 Access: (0640/-rw-r-----) Uid: ( 1000/tavianator) Gid: ( 1000/tavianator) Access: 1991-12-14 00:00:00.000000000 -0500 Modify: 1991-12-15 00:00:00.000000000 -0500 Change: 2023-06-22 16:45:56.731038486 -0400 Birth: 2023-06-22 16:44:08.075496568 -0400Any idea what could cause this, or how to fix it? Some more information that might be helpful: root@tachyon ~ # uname -a Linux tachyon 6.3.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Wed, 14 Jun 2023 20:10:31 +0000 x86_64 GNU/Linux root@tachyon ~ # findmnt -T /srv/nfs/freebsd/usr/home/tavianator TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/cryptslash1[/@] btrfs rw,relatime,ssd,discard=async,space_cache=v2,subvolid=261,subvol=/@ root@tachyon ~ # exportfs -v /srv/nfs 100.101.179.2/32(sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,rw,secure,no_root_squash,no_all_squash) /srv/nfs 100.114.24.115/32(sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,rw,secure,no_root_squash,no_all_squash) /srv/nfs/freebsd 100.101.179.2/32(sync,wdelay,nohide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash) /srv/nfs/freebsd 100.114.24.115/32(sync,wdelay,nohide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)tavianator@muon $ uname -a FreeBSD muon 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 tavianator@muon $ mount | grep nfs, 100.107.249.85:/freebsd/usr/home/tavianator on /usr/home/tavianator (nfs, nfsv4acls)
NFS mount mixing up ctime and mtime
In older NFS versions (v2 and v3) there are two distinct RPC services handled by separate software: the "MOUNT" protocol that's used only to obtain the initial filesystem handles from rpc.mountd, and the "NFS" protocol that's used for everything else. So the mountproto= option defines the transport used to access rpc.mountd whenever you mount an NFSv3 filesystem. It has no effect on performance, only compatibility (older mountd did not support TCP). NFSv4 doesn't have mountproto= anymore because the mount-related operations have been integrated into the core protocol. Meanwhile proto= defines the protocol used to transfer file data (the actual NFS operations.) NFSv3 supports UDP, so yes, proto=udp,vers=3 is possible, but keep in mind the caution message in the manual page – NFS via UDP over a Gigabit or faster connection risks data corruption, and the faster your connection is, the higher risk of corruption. NFSv4 supports TCP and RDMA only – it doesn't support UDP anymore.
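As a sketch (the server name, export path and mount point here are placeholders, not taken from a real setup), the difference only shows up when mounting an NFSv3 filesystem:
# NFSv3: file data over TCP, but rpc.mountd contacted over UDP
mount -t nfs -o vers=3,proto=tcp,mountproto=udp server:/export /mnt/data
# NFSv3 entirely over UDP (the risky combination on gigabit or faster links)
mount -t nfs -o vers=3,proto=udp server:/export /mnt/data
# NFSv4.x: no separate MOUNT protocol, so there is no mountproto option at all
mount -t nfs -o vers=4.2,proto=tcp server:/export /mnt/data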
in RHEL 8.8 when I am trying to test NFS tcp versus udp and versions 3 versus 4.0, 4.1 and 4.2, I observe on my nfs client a mountproto= in addition to proto= when typing mount. What is the significance of this and what does it mean? Should I be able to, in RHEL 8.8, have NFS operating showing specifically vers=3 and proto=udp on the nfs client? What does it mean, on the nfs client, when I see proto=tcp and mountproto=udp ?
Difference between NFS proto and mountproto
The primary cause is probably the fact that the NFS share is mounted with the sync option. This theoretically improves data safety in scenarios where the server might suddenly disappear or the client might unexpectedly disconnect, but it also hurts performance. Using the sync mount option is equivalent to the application calling fsync() on the file it is writing to after every call to write(). IOW, every time the client submits an IO request, it has to wait for that to finish before it can submit another. This has a nontrivial impact on how fast data can be written even when used with local filesystems, but network filesystems make it much worse (because at least a full network round-trip is required after each IO request before the next one can be issued). If you have the time, you can actually see this type of effect yourself by trying to copy a file that is a few hundred MB in size using TFTP as compared to SCP. TFTP bakes this type of synchronization into the protocol at a basic level, and does so in a way that each individual packet has to be acknowledged before the next can be sent, so it will likely get even less performance than you’re seeing from NFS. Provided you are using responsible software that atomically replaces files and handles copies sanely, you can probably safely switch to async mode for NFS to avoid this issue.
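As a hedged sketch (the option names are standard nfs(5) options, but test before relying on it), the change is just the client-side mount option; async is in fact the client-side default, so it is enough to stop forcing sync:
# /etc/fstab on the RHEL client
backupserver:/bkup  /bkup  nfs4  rw,vers=4.1,async  0 0
# or switch an existing mount without unmounting it
mount -o remount,async backupserver:/bkup /bkup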
RHEL 7.9 x86-64 high end dell servers with Xeon cpu's, 512gb ram, intel nic card I am the only user on the server(s), and there is no other work load on them cisco 1gbps wired LAN data.tar is ~ 50 gb /bkup is NFS mounted as vers=4.1 and sync a scp data.tar backupserver:/bkup/ runs at 112 MB/sec consistently; I've seen this for 5+ years and believe this to be correct a rsync -P data.tar /bkup runs at 55 MB/sec consistently; this one is copying over NFS running both the copies at the same time, scp drops from 112 to 55, and rsync over NFS drops from 55 to 35when one finishes, the other copy speed resumes to the original ratewhy? and how can I improve speed over NFS?
why is NFS copy speed half that of SSH scp?
For automatically mounting NFS when present, autofs can be used (autofs).
As mentioned in man fstab(5):
nofail    do not report errors for this device if it does not exist.
AFAIK nobootwait was only for ubuntu-based distros (which is not a valid option anymore). You can use x-systemd.device-timeout= (more info: systemd.mount):
x-systemd.device-timeout=    Configure how long systemd should wait for a device to show up before giving up on an entry from /etc/fstab. Specify a time in seconds or explicitly append a unit such as "s", "min", "h", "ms". Note that this option can only be used in /etc/fstab, and will be ignored when part of the Options= setting in a unit file.
The default device timeout is 90 seconds, so a disconnected external device with only nofail will make your boot take 90 seconds longer, unless you reconfigure the timeout as shown. Make sure not to set the timeout to 0, as this translates to infinite timeout.
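A sketch of what the fstab line from the question could look like with those options (the 10s value is arbitrary); the second variant additionally defers mounting until the path is first accessed, which covers the case of the server being booted later:
10.0.0.2:/mnt/md0  /mnt/md0  nfs4  _netdev,nofail,x-systemd.device-timeout=10s  0 0
10.0.0.2:/mnt/md0  /mnt/md0  nfs4  _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=10s  0 0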
I have a Debian 10 machine which has a nfs mountpoint specified in fstab. This is the line 10.0.0.2:/mnt/md0 /mnt/md0 nfs4 _netdev,auto,nofail 0 0I thought nofail would prevent my boot sequence hanging for (precicely) 1:32 while a time out takes place while the system is looking for the nfs drive. However this doesn't appear to be the correct opion, as it is not mentioned in my systems man pages. A search suggested nobootwait might be an alternative but again this is not mentioned in the man pages. There doesn't appear to be any relevant option, unless I am looking in the wrong document? Is there any way to specify that the drive should be automatically mounted, when it is present, and only when it is present. Both at boot time, and additionally, if the drive is "somehow seen" later on. eg; If I boot my workstation, and the drive is not present (server not booted) it should not wait an additional minute and a half to boot. then; If I boot the server at a later time, is there any way to automatically detect/mount the nfs drive? I guess this could be done with some kind of cron script which pings the network address 10.0.0.2? (My server IP.)
Debian 10 fstab - Is there an option to prevent boot sequence hanging when device does not exist?
Regarding Samba not working with an NFS-mounted folder: check getsebool -a | grep samba. By default, samba_share_nfs is off; running setsebool -P samba_share_nfs on fixes that. The -P makes the change persistent, otherwise it'll be off again after reboot.
in RHEL 7.9, samba-4.10.16-19, nfs-utils-1.3.0, server A and server B both have SELINUX as enforcing from server A, the folder /data is NFS exported. on server B, mount A:/data /data successfully mounts as NFS v4.1. on server B if I samba share out /data which is nfs mounted, it cannot be accessed until I do setenforce 0 on server B. on server A, /data has the samba_share_t label and will samba share out from server A. on server B where /data is nfs mounted, it has the nfs_t label and I am unable to get the samba_share_t label to be applied with an semanage fcontext -a -t samba_share_t "/data(/.*)?"What are the do's and dont's regarding samba sharing folders, NFS, and SELINUX as I have described it? I really want to be able to samba share out the nfs mounted folder from server B because that's where all the user accounts and permissions information resides, everyone has a user account on server B that creates data under /data but they do not have an account on server A where /data physically resides and I do not want to have to create user accounts on server A.
samba sharing nfs mounted folder and SELINUX
GParted is often worth using because it helps avoid several nasty mistakes. I guess the main advantage of command-line tools here is to have more visibility of details. This can be useful in unexpectedly fragile situations (at least once it's broken, the details might help you realize why). However I wouldn't recommend using them to others unless they want to be able to learn from mistakes up to "my disk is now full of zeros and I need to start from scratch". Also a desktop Linux install process should provide a user-friendly tool for resizing the Windows partition. (Or official documentation). It's the common case. This would be my first recommendation in general. All of these options will recommend making backups in case of any error. Confusingly you should not use the parted command-line tool. It used to be a convenient option, but the developers no longer support resizing filesystems with it.
Otherwise, you use ntfsresize, then delete and re-create the partition (fdisk) with the same details except for the size. BEWARE UNITS - SOME TOOLS USE MB; OTHERS MAY SAY MB BUT MEAN MiB. fdisk uses MiB and ntfsresize uses MB. The lazy way is to ntfsresize to much smaller than you need (e.g. 2x), then after recreating the partition you run ntfsresize a second time with no explicit size. For the hard way, to convert units, you can run numeric expressions in bash. E.g. to see 10GiB in bytes: echo $((10 * 1024 * 1024 * 1024)). You can use those expressions as arguments to command-line tools like ntfsresize. The partition name for ntfsresize will look like /dev/sda1. lsblk -f will list all partitions (including your boot disc) with their size, and tell you about the filesystem. fdisk will want the name of the disk, like /dev/sda. For MBR, the partition details to recreate are: partition type and "active"/bootable flag, as well as starting offset.[1] fdisk should show the partition offset in sectors by default. (If not, there may be fractions which are not shown - possibly indicated by a + on the end, but there might be a trap there - you should be sure to always use fdisk in sectors mode). To avoid typing errors inside fdisk, I sometimes select numbers + paste them with the middle mouse button. That requires either X Windows, or in text mode you need gpm. I think it's less common to provide gpm on the console by default now, but it's there when I use Clonezilla Live. It's convenient, but you could probably lose the number first. So you should probably write the original partition offset down before you delete it.
[1] GPT uses a different format for the type, adds some more flags and a partition UUID. I don't think they'd usually be important; flags wouldn't apply to the main Windows partition and the partition UUID isn't used by much yet.
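A compressed sketch of the command-line route (the device name and target size are placeholders, not taken from a real system); always do the dry run first:
ntfsresize --no-action --size 200G /dev/sda1   # dry run: check the shrink is possible
ntfsresize --size 200G /dev/sda1               # shrink the NTFS filesystem itself
fdisk /dev/sda                                 # delete sda1, recreate it with the same start
                                               # sector, same type and boot flag, smaller size
ntfsresize /dev/sda1                           # optional: grow the FS to exactly fill the new partition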
Suppose I have a native (i.e. coming from manufacturers) windows 7 installation on a laptop (with an SSD device, BIOS/MBR partition table if that matters). The partition on the device is completely allocated and dedicated to windows. I now want to install a linux system alongside windows, and to do that I need to first shrink the windows partition. While I can find ways to do that from within windows or using gparted, how can I do this using only command line programs, like parted or fdisk?
How do I resize a windows partition without using gparted?
From https://github.com/tuxera/ntfs-3g/wiki/Using-Extended-Attributes#file-times,An NTFS file is qualified by a set of four time stamps β€œrepresenting the number of 100-nanosecond intervals since January 1, 1601 (UTC)”, though UTC has not been defined for years before 1961 because of unknown variations of the earth rotation.You'll find even more information in there including: Newer versions of ntfs-3g expose a ntfs.ntfs_crtime and ntfs.ntfs_crtime_be attribute. So: getfattr --only-values -n system.ntfs_crtime_be /some/file | perl -MPOSIX -0777 -ne '$t = unpack("Q>"); print ctime $t/10000000-11644473600'See also: ntfsinfo -F /file/in/ntfs /dev/fs-deviceWith older ntfs-3g, this should work: getfattr --only-values -n system.ntfs_times /some/file | perl -MPOSIX -0777 -ne 'print ctime unpack(Q)/10000000-11644473600'Or with GNU tools and sub-second precision: date '+%F %T.%N' -d "@$({ echo 7k getfattr --only-values -n system.ntfs_times /some/file | od -A n -N 8 -vt u8; echo '10000000/ 11644473600-p'; } |dc)"
I created an NTFS logical volume on my Linux system for Windows file storage because I want to retain the creation date of my files (I would probably zip them into an archive and then unzip them, though I have no idea if that would work). Does NTFS-3G save the creation date of files on Linux? If so, how do I access it? Reading this thread, the OP links documentation on NTFS that provides a shell script for finding the creation date. I modified it in an attempt to get the seconds from the hex value, but I believe that I am doing something wrong: #!/bin/sh CRTIME=`getfattr -h -e hex -n system.ntfs_times $1 | \ grep '=' | sed -e 's/^.*=\(0x................\).*$/\1/'` SECONDS=$(($CRTIME / 10000000)) echo `date --date=$SECONDS`
How do I get the creation date of a file on an NTFS logical volume?
You can use ntfs-3g, but make sure you place the mappings file in the right place. Once you do that you should see file ownerships in ../User/name match the unix user. However, if you just want to use it as backup you should probably just save a big tarball onto the ntfs location. If you also want random access you can place an ext2 image file and loop mount it. That will save you from a lot of these headaches. Ok, assuming you will mount NTFS under /ntfs run ntfs-3g.usermap /dev/sdb1 (or whatever your ntfs partition is). Answer the questions. Then mkdir /ntfs/.NTFS-3G. Then cp UserMapping /ntfs/.NTFS-3G/UserMapping. Now put an entry in /etc/fstab: /dev/sdb1 /ntfs ntfs-3g defaults 0 0 Then mount /ntfs. The command ls -l /ntfs/Users/Carl should show your Linux user as the owner of files there.
I'm having some doubts about how to install and allow Linux to correctly read/write to a NTFS formatted harddrive used as backup of various machines (windows included, that's how I need NTFS). For now, I've read some pages and I have the feeling I need someone else's guidance from who already did this step-by-step, to not ruin things here. What I need is to be able to save a Linux file, with its chown and chmod settings, to a NTFS filesystem, and be able to retrieve this information back. What I have today is a NTFS that saves all files with the owner:group of who mounted the volume, and permissions rwxrwxrwx for all. I read this article but it is too much information and I could not understand some things when trying to actually implement:Is it stable in the current version? Does Ubuntu 10.04 have all things needed already? Or do I need to install anything? What is the relation of POSIX ACL to this? Do I need install anything regarding this or just ntfs-3g will do? Where are Ubuntu packages to run with apt-get? If I map the users (with usermap) can bring the harddrive to another computer with different users, will I be able to read them? (Under Linux/Windows)?For one thing I noticed, usermap was not ready to use. So I downloaded and compiled (but not installed because I was afraid to mess up things here), the latest version of ntfs-3g. In the README file it says: > TESTING WITHOUT INSTALLING > > Newer versions of ntfs-3g can be > tested without installing anything and > without disturbing an existing > installation. Just configure and make > as shown previously. This will create > the scripts ntfs-3g and lowntfs-3g in > the src directory, which you may > activate for testing : > > ./configure > make > > then, as root : > src/ntfs-3g [-o mount-options] /dev/sda1 /mnt/windows > > And, to end the test, unmount the > usual way : > umount /dev/sda1But it tells nothing about the mount-options that I need to use to have full backups (full == backing up / restoring files, owners, groups and permissions). This faq says:Why have chmod and chown no effect? By default files on NTFS are owned by root with full access to everyone. To get standard per-file protection you should mount with the "permissions" option. Moreover, if you want the permissions to be interoperable with a specific Windows configuration, you have to map the users.Also, I did used the ntfs-3g.usermap /dev/sdb2 tools to create the map file and got this result: # Generated by usermap for Linux, v 1.1.4 :carl:S-1-5-21-889330461-3416208041-4118870141-511 :default:S-1-5-21-2592120051-4195220491-4132615201-511 carl:carl:S-1-5-21-889330462-3416208046-4118870148-1000Now this default was mapped because I wrote "default" to one file that was under the default user during the inquiring. I'm not sure if I did that right. I don't care for any users but carl (and root for that matter), and for any other groups but users. I saw the FAQ telling me to answer the group with the username. Isn't it the case to tell the group as "users"? And how can I check, booting Windows, if this mapping is correct? Summary:I need rsync to save Linux files and Windows files from various computers, to a NTFS external USB HD, without losing file permissions. I don't know how to install and run the driver ntfs-3g to allow chown, chmod and anything else that is needed to make that possible. What options, and where? All computers have carl username, but that doesn't guarantee that their SID, UID or GID are the same. 
The environment is composed of 18 "documents" folders, 6 of them Linux, 6 of them Win7, 6 of them virtualbox Win XP. All of them will be a single "documents" folder into the NTFS external hard drive.Reference:I also read this forum, and maybe it is useful to someone trying to help me here. Also thought of these other three solutions, making the filesystem ext. But the external HD may be used in Windows boxes; I could not install or have write to install drivers, so it needs to be readable easily by any Windows and NTFS is the standard. All my Google searches was too much technical to follow.
Is NTFS under linux able to save a linux file, with its chown and chmod settings?
It is an ntfs-3g bug. Downgrade ntfs-3g and it will work. I had the same problem with the 1:2014 version, and no problem with the 1:2012 version (which is in the "stable" repository).
For the past 3 days (after an update) my Debian Jessie refuses to mount NTFS disks. I reinstalled libfuse2 and ntfs-3g, yet I get the same Input/output error I tried the same disks under Windows 7 and OSX Mavericks (using ntfs-3g) and they work fine. I purged ntfs-3g and reinstalled, and still the same problem. The disks will sometimes mount and sometimes won't mount. If they do mount, I am sometimes able to go into the mount directory, whereas some other times, I get a bash error Input/output error for the mount directory. The times I am able to go into the mount directory, when I try an ls -l, I see tons of question marks, instead of file/dir attributes. I have tried ntfsfix and chkdisk under windows, and they both reported no problems, it is only under this Jessie install that all of a sudden I can't mount them properly. dmesg has no usefull info other than the external disk being attached: [12816.210969] scsi 20:0:0:0: Direct-Access Seagate External SG16 PQ: 0 ANSI: 4 [12816.211825] sd 20:0:0:0: Attached scsi generic sg7 type 0 [12816.212542] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB) [12816.213591] sd 20:0:0:0: [sdg] Write Protect is off [12816.213595] sd 20:0:0:0: [sdg] Mode Sense: bf 00 00 00 [12816.214782] sd 20:0:0:0: [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [12816.215561] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB) [12816.242055] sdg: sdg1 sdg2 [12816.243244] sd 20:0:0:0: [sdg] 732566642 4096-byte logical blocks: (3.00 TB/2.72 TiB) [12816.246031] sd 20:0:0:0: [sdg] Attached SCSI diskparted /dev/sdg 'print' Model: Seagate External (scsi) Disk /dev/sdg: 3001GB Sector size (logical/physical): 4096B/4096B Partition Table: msdosNumber Start End Size Type File system Flags 1 258kB 1038GB 1038GB primary 2 1038GB 3001GB 1962GB primaryfdisk -l /dev/sdg Note: sector size is 4096 (not 512)Disk /dev/sdg: 3000.6 GB, 3000592965632 bytes 255 heads, 63 sectors/track, 45600 cylinders, total 732566642 sectors Units = sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00090a06 Device Boot Start End Blocks Id System /dev/sdg1 63 253473569 1013894028 7 HPFS/NTFS/exFAT /dev/sdg2 253473792 732566527 1916370944 83 Linuxmount -t ntfs-3g /dev/sdg1 /media/Downloads ntfs-3g-mount: failed to access mountpoint /media/Downloads: Input/output errorIf I manage to mount it via mount -t ntfs-3g /dev/sdg1 /media/DownloadsOnce I cd into it: cd media/Downloads root@athena:/media/Downloads# ls -l ls: reading directory .: Input/output error total 0 root@athena:/media/Downloads#mount however, says: /dev/sdf1 on /media/Downloads type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)What did I brake? EDIT ntfsinfo -m /dev/sdg1Volume is scheduled for check. Please boot into Windows TWICE, or use the 'force' option. NOTE: If you had not scheduled check and last time accessed this volume using ntfsmount and shutdown system properly, then init scripts in your distribution are broken. Please report to your distribution developers (NOT to us!) that init scripts kill ntfsmount or mount.ntfs-fuse during shutdown instead of proper umount. Failed to open '/dev/sdg1'.EDIT#2 ntfsinfo -fm /dev/sdg1 WARNING: Dirty volume mount was forced by the 'force' mount option. 
Volume Information Name of device: /dev/sdg1 Device state: 11 Volume Name: Volume State: 91 Volume Flags: 0x0001 DIRTY Volume Version: 3.1 Sector Size: 4096 Cluster Size: 4096 Index Block Size: 4096 Volume Size in Clusters: 253473506 MFT Information MFT Record Size: 4096 MFT Zone Multiplier: 0 MFT Data Position: 24 MFT Zone Start: 0 MFT Zone End: 31684192 MFT Zone Position: 4 Current Position in First Data Zone: 31684192 Current Position in Second Data Zone: 0 Allocated clusters 145403 (0.1%) LCN of Data Attribute for FILE_MFT: 4 FILE_MFTMirr Size: 4 LCN of Data Attribute for File_MFTMirr: 126736753 Size of Attribute Definition Table: 2560 Number of Attached Extent Inodes: 0 FILE_Bitmap Information FILE_Bitmap MFT Record Number: 6 State of FILE_Bitmap Inode: 80 Length of Attribute List: 0 Number of Attached Extent Inodes: 0 FILE_Bitmap Data Attribute Information Decompressed Runlist: not done yet Base Inode: 6 Attribute Types: not done yet Attribute Name Length: 0 Attribute State: 3 Attribute Allocated Size: 31686656 Attribute Data Size: 31684192 Attribute Initialized Size: 31684192 Attribute Compressed Size: 0 Compression Block Size: 0 Compression Block Size Bits: 0 Compression Block Clusters: 0 Free Clusters: 199331046 (78.6%)I will try mounting it under windows in a few hours (I'm running a check on another disk I don't want to interrupt). EDIT#3 I went back into windows, and scanned the disks. Windows indeed found problems with one of them, but both were fixed, mountable and browsable. Yet, under Debian, I still cannot do anything. I opened Gparted, and interestingly enough, it complains: Unable to read the contents of this file system! Because of this some operations may be unavailable. The cause might be a missing software package. The following list of software packages is required for ntfs file system support: ntfsprogs / ntfs-3g.However, apt-cache policy ntfs-3g ntfs-3g: Installed: 1:2014.2.15AR.2-1 Candidate: 1:2014.2.15AR.2-1 Version table: *** 1:2014.2.15AR.2-1 0!!! So, have I run into some kind of ntfs-3g bug, or is my system now broken???
ntfs-3g: Input/output error
There IS a way to recognize Windows permissions on a ntfs-3g mount. You have to create a user-mapping file. See here. This can be done from within Linux too, with the ntfs-3g.usermap utility. See the manual pages for mount.ntfs-3g and ntfs-3g.usermap. (I use Fedora 14.) EDIT: I don't know what effect enabling this will have on Nautilus' mount feature. Me, I like to mount the partitions in /etc/fstab and leave it at that.
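A minimal sketch of setting that up (the device and mount point are placeholders; ntfs-3g.usermap asks interactively which Windows account corresponds to which Linux user, as described in its manual page):
ntfs-3g.usermap /dev/sdXn                     # writes a UserMapping file in the current directory
mkdir -p /mnt/windows/.NTFS-3G
cp UserMapping /mnt/windows/.NTFS-3G/UserMapping
umount /mnt/windows
mount -t ntfs-3g /dev/sdXn /mnt/windows       # remount so the mapping is picked up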
After installing ntfs-3g I have an option in nautilus to mount a Windows directory but I need to give root password. While I have no objection to giving root password I would prefer to be restricted to permission of corresponding Windows user (i.e. disallowing modification of system files). Is is easily achievable or do I need to post feature request?
Allowing user to read only parts of NTFS filesystem
The kernel driver is still essentially read-only; it has no full write support yet, only very limited writing with many restrictions, which is why ntfs-3g is still needed for reliable read/write access.
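For example (the device and mount point are placeholders), you can check whether the in-kernel driver is even registered, and explicitly mount with the user-space driver when you need to write:
grep -i ntfs /proc/filesystems            # shows the in-kernel driver, if built in or loaded
modinfo ntfs 2>/dev/null | head -n 3      # module details, if it was built as a module
mount -t ntfs-3g /dev/sdXn /mnt/windows   # full read/write via the FUSE-based driver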
When configuring the kernel I see an option to add read-write support for NTFS. Then when mounting my NTFS partition I still have to install ntfs-3g and pass ntfs-3g as the type. I thought if I add NTFS support in the kernel then I wouldn't have to install a library for it. Why is it so?
Why do I need ntfs-3g when I have already enabled NTFS support in the kernel?
The ntfs-3g binaries must be set uid root in order for user mounting to work. And you need permission to the block device & mount point.
sudo chmod 4755 /sbin/mount.ntfs-3g /usr/bin/ntfs-3g
sudo chmod 666 /dev/sda2
sudo chmod 777 /media/Windows
(Note: the 4 in 4755 is what actually sets the setuid bit. Also, these are the Debian locations, they may differ for Suse, so you will want to check that they are actually in those locations.) You also need to have ntfs-3g version 1.2506 or later. See here for more info: http://www.tuxera.com/community/ntfs-3g-faq/#useroption
I'm trying to mount a Windows ntfs partition on openSuse 11.4. When I mount it using the root account (either directly or via sudo) it mounts without problems. But when I try mounting it without any root privileges, it gives me the following error: Error opening '/dev/sda2': Permission denied Failed to mount '/dev/sda2': Permission denied Please check '/dev/sda2' and the ntfs-3g binary permissions, and the mounting user ID. More explanation is provided at http://ntfs-3g.org/support.html#unprivilegedMy fstab entry for the concerned device is: /dev/sda2 /media/Windows ntfs defaults,noauto,user 1 2I've searched Google for possible solutions, but I don't seem to be getting anywhere. Edit 1: As suggested, I tried to set the UID/GID bits on the ntfs-3g binary. All the files (/sbin/mount.ntfs, /sbin/mount.ntfs-3g) point to /usr/bin/ntfs-3g, so I changed the permissions on that. The permissions now are: -rwsr-sr-x 1 root root 51512 Feb 18 22:18 ntfs-3gBut the result is still the same and I get the same permission denied error. Edit 2: After setting the correct permissions on all the files: -rwsr-xr-x 1 root root 51512 Feb 18 22:18 ntfs-3g brw-rw-rw- 1 root disk 8, 2 Aug 6 21:53 sda2 drwxrwxrwx 1 asad users 8192 Jul 30 13:09 WindowsI was able to mount with out a privileged user account. However, now when I try to unmount using the same account, I get: asad@jb-laptop:~> umount /dev/sda2 umount: only root can unmount /dev/sda2 from /media/WindowsEdit 3: I finally found the problem. I needed to add users instead of user in fstab for some reason, although I can't understand why. Now I have a new problem :) Whenever I unmount the device /dev/sda2, somehow the file permission ends up going back to default (0660). I tried to create a rule in udev but it doesn't seem to be working.
Unable to mount NTFS partition from user account
It means that there was no proper read/write NTFS support before NTFS-3G. Initially, on a dual-boot system, it was possible to write a file on an NTFS partition, but on rebooting, under Windows NT/XP, you would have to do a filesystem check to get the (meta)data on disc corrected. It was therefore common to have a VFAT partition for data exchange between Windows NT/XP and Linux, as the driver for that filesystem type did not have these limitations/problems. Since the introduction of NTFS-3G (2006) this is no longer necessary and you can write new files and update existing ones, reboot under Windows and use those files without doing a filesystem check. (By that time I had largely dispensed with rebooting and was using Windows in virtual machines instead.) NTFS-3G runs in user space, which means that it doesn't have direct access to kernel data and routines, but has to go through system calls like any normal program (in contrast to a kernel-space (device) driver). As for df -T, that operates through FUSE, and FUSE (correctly) identifies the filesystem type as fuseblk. FUSE doesn't know anything about NTFS, so it doesn't provide any deeper probing. Neither does df -T probe the disc: it just asks the filesystem driver what type it is handling (if it could probe, you would not have to mount a filesystem for it to show up in df -T; it could just probe the device blocks directly and make a guess).
I am reading a text (R. W. Smith LPIC_1 study Guide) that says:Linux can reliably read NTFS and can overwrite existing files, but the Linux kernel can’t write new files to an NTFS partition.What does it mean that the "kernel" cannot write new files to an NFTS partition? In another place it says:NTFS-3G is a read/write NTFS driver that resides in user space rather than in kernel space. It’s used as the default NTFS driver by some Linux distributions.How is kernel space different from user space? Also, as we have access to Windows drives in dual-boot systems, why can't we see the Windows filesystem's type with commands like df -T?
How does Linux kernel deal with Windows NTFS filesystem?
This is how the system is designed. Since the filesystem is being mounted by root and it's not listed in /etc/fstab with the user option, only root can unmount it. You can't change this behavior. What you can do is to modify your script to mount it in a location you own as your user. You'll also need to make the block device readable/writable by you. That would be to change this: ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"To this: ACTION=="add", RUN+="/bin/chown invert:invert /dev/%k", RUN+="sudo -u invert /bin/mkdir -p /home/invert/media/%E{dir_name}", RUN+="sudo -u invert /bin/mount -o $env{mount_options} /dev/%k /home/invert/media/%E{dir_name}"Giving users direct read/write access to block devices is not something to be encouraged, but if this is only your workstation the reduction in security is probably negligible.
I have ntfs-3g installed, and use this udev rule to auto mount external drives automatically. When I try umount it as non-root, it says: umount: /media/umm is not in the fstab (and you are not root)The device mounts as: /dev/sdc1 fuseblk 150G 143G 6.6G 96% /media/ummand is part of the users group. I did a chkdsk on a Windows machine to ensure no file system errors. Any ideas? (personally I'd prefer not to use ntfs, but I need it for sharing with all the non-UNIX systems and allow for files > 4GB).
umount an external ntfs drive as non-root
The system.ntfs_times extended attribute contains 32 bytes that consist of the btime, mtime, atime, ctime as 64bit integers. You can list them with for instance: getfattr --only-values -n system.ntfs_times -- "$file" | perl -MPOSIX -0777 -ne 'print ctime $_/10000000-11644473600 for unpack("Q4",$_)'So you can just copy the second integer to the first with something like: getfattr -n system.ntfs_times -e hex -- "$file" | sed '2s/0x.\{16\}\(.\{16\}\)/0x\1\1/' | setfattr --restore=-
In my first question: How do I get the creation date of a file on an NTFS logical volume, I asked how to get the "Date created" field in NTFS-3G. Now, that I know I can get the "Date created", I have started adding files onto my NTFS-3G partition and would like to set the "Date created" of each file to its "Date modified" value. Since this needs to be done on a whole repository of files, I would like to recursively apply it to a single directory on down. If I know how to do this for a single file, I could probably do the recursion myself, but if you want to add that in I would be more than happy.
How do I recursively set the date created attribute to the date modified attribute on NTFS-3G?
ntfs-3g is the successor of the first NTFS driver, created back in 1995 by Martin von Löwis. The driver has been mostly reverse engineered, which means it was built by observing and analyzing the on-disk data structures and finding a way to handle them correctly. According to the original project site, the method was roughly:
1. Look at the volume with a hex editor
2. Perform some operation, e.g. create a file
3. Use the hex editor to look for changes
4. Classify and document the changes
5. Repeat steps 1-4 forever
After long development and laborious work, a fork was created from Linux-NTFS, according to the first release note of ntfs-3g back in 2006: Hello, As part of the Linux-NTFS project, I'm happy to announce my contribution to ntfsmount and libntfs which resulted ntfs-3g, a read-write ntfs driver, capable for unlimited file creation and deletion. I hope this partial answer helps you see how this driver was born and how it continues to live. It's important to note that today this driver is maintained by Tuxera and is no longer an amateur product.
Since NTFS is a proprietary file system created by Microsoft, how did the ntfs-3g developers manage to create an open source version of the NTFS drivers without referring to the NTFS source code? Or is there some kind of agreement with Microsoft regarding this??
How was ntfs-3g created?
As far as I know, there is currently no Linux tool for actually repairing NTFS partitions. ntfsfix is only a trick: it simply marks the partition as "clean", but it doesn't actually clean it. Writing to a corrupted filesystem endangers the data on it, and since we generally don't fully trust NTFS, we try to avoid further data corruption; that is why the driver chooses to reject the deletion rather than risk it. You need to use a different operating system (in practice, Windows) to fix the partition. Ideally, to avoid rebooting your Linux system, you can use some virtualization technology for that, with direct partition access.
I have an NTFS partition(/dev/sda3) mounted via ntfs-3g on arch linux. This partition contains a file called cee431d2730eb5e1697bd57b3bb529 which I want to delete. ls -la returns the following output ls: cannot access 'data/cee431d2730eb5e1697bd57b3bb529': Input/output error total 16611578 #Some other files... d????????? ? ? ? ? ? cee431d2730eb5e1697bd57b3bb529Similarly file cee431d2730eb5e1697bd57b3bb529 returns cee431d2730eb5e1697bd57b3bb529: cannot open 'cee431d2730eb5e1697bd57b3bb529' (Input/output error) ls -i also returns ? cee431d2730eb5e1697bd57b3bb529(it can't find the inode) I tried deleting it with rm -f which also fails with an input/output error(both as root and normal user). Running ntfsfix /dev/sda3 also didn't fix the problem.
How to delete a corrupted file on an NTFS partition?
The original NTFS code in Linux could change an NTFS partition, but required you to do a disk check after rebooting into Windows NT. I am not sure when this was; it might have been back in the last millennium, with SuSE 4, and not working from a live CD but from a dual-boot machine. That changed with NTFS3G, where this is no longer necessary (praise the coders), hence the explicit mention of safe handling of NTFS file systems. I am not sure, but I don't think live CDs were common before NTFS3G became mainstream, so I don't think you will find any that would corrupt NTFS to the point of requiring a disk check. Any live CD from 2008 onwards should probably be OK. (The question is why not use a recent live CD to work with.)
I was reading this article - how do I Access or mount windows NTFS partition in Linux that mentions:NTFS3G is an open source cross-platform, stable, GPL licensed, POSIX, NTFS R/W driver used in Linux. It provides safe handling of Windows NTFS file systems viz create, remove, rename, move files, directories, hard links, etc.So, is it compulsory to have NTFS3G on a Live Linux CD so that when I am moving my files from one NTFS partitions to another NTFS partitions of a disk will ensure that it will not corrupt the files in the NTFS partitions? Or in another words, does a Live Linux CD or DVD in general (without NTFS3G) provide safe handling of Windows NTFS file operations (such as moving files)? Also does it apply on a certain version of NTFS too?
Does Live Linux CD in general have safe handling of Windows NTFS files
I do not know the difference between ntfs and ntfs-3g. Regarding the umask option, it specifies a bit mask such that the bits set in the umask are cleared in the file access permissions. These permission bits are RWXRWXRWX, where R is read access, W is write access, and X is execute access, with some higher bits used in special cases. The high order RWX is for the owner of the file being accessed, the next RWX group gives access for the group of the file, and the last is for everybody. Because these permissions come three bits at a time, they are traditionally in octal. The leading 0 can indicate either octal, or 0 for some of the special case bits since it is traditionally represented in octal anyway, depending on the context. So a umask of 222 or 0222, which are the same since the number is traditionally octal, is 010010010 in binary. This means the W bit is set for the user, the group, and everybody else. Setting this bit in umask clears the W bit in the file access permissions. This is not to avoid error messages. By specifying a umask of 222, it makes files non-writable by anybody, when otherwise they might have been writable.
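As a small worked example (the device name is a placeholder): with umask=0222, the write bits are subtracted from the full-access default of 0777, leaving 0555 (r-xr-xr-x) on everything.
printf '%o\n' $(( 0777 & ~0222 ))                 # prints 555
mount -t ntfs -o umask=0222 /dev/sdXn /mnt/windows
ls -ld /mnt/windows                               # dr-xr-xr-x: write is denied for everyone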
I see numerous how-to examples for mounting an ntfs partition with either a mount command or an entry in fstab. In all cases, specifying ntfs as the filesystem is associated with also specifying umask=0222, and specifying ntsf-3g never has a umask parameter. Trying to research umask, I came across numerous explanations like this one. I can't get from those explanations to understanding "0222", which among other things, has one more digit than the specification seems to describe. I understand that it supposedly reduces permissions from the default definition. That's not much help, either. I'm guessing that it relates to writing, since in Linux, ntfs-3g supports it and at least as of a few years ago, ntfs did not. What are the default permissions (I assume they relate to the directories and files and are independent of the filesystem), and what does "0222" do to that? Why is it needed? Is it just to avoid an error message trying to write to a partition when Linux doesn't support it?
mount command permissions: ntfs vs. ntfs-3g
I assume the client machine is running Linux. Linux has the ability to create multiple views of all or part of the same filesystem. You can use this to make only part of a filesystem accessible to a user (subject to further permission checks).
/dev/sda3 /home/userA/data ntfs-3g defaults,rw,nouser,uid=userA,umask=077,exec
/home/userA/data/subdir /home/userB/subdir bind
The command mount --bind /home/userA/data/subdir /home/userB/subdir sets up that second view. If /home/userA is not accessible to user B then user B will not be able to access the NTFS partition through that view. However user B will be able to access the subdir directory through the view at /home/userB/subdir. Permissions still apply: some files under subdir may not be accessible if their permissions exclude userB. If you want to tweak permissions as well (to allow userB to access all files, or to grant read-only access only, etc.), you can use bindfs. See read only access to all files in a specific sub-folder and Allow a user to read some other users' home directories for example.
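For a quick, fstab-free variant using the movies folder from the question (the target path under userB's home is an assumption), and to make the extra view read-only, something like this should work:
mkdir -p /home/userB/movies
mount --bind /home/userA/data/movies /home/userB/movies
mount -o remount,bind,ro /home/userB/movies   # userB gets a read-only view of just this folder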
I have two users: userA and userB. I have also NTFS formatted parition. Whole parition is only accessible to userA thanks to this in /etc/fstab: /dev/sda3 /home/userA/data ntfs-3g defaults,rw,nouser,uid=userA,umask=077,exec 0 0. I want to allow ONE folder (for example /home/userA/data/movies) to be accessible for userB, but not whole drive. How can I do this? If I allow all users in fstab, both users have access to whole drive, regardless if it is mounted in /home/userA/ folder. userB can simply do ls /home/userA/dataeven if he can't do ls /home/userAIf I leave fstab as I have it set now and I use symlink, symlink respects permissions to folder it's linked to and userB won't be allowed to use this symlink. I also tried to use remount option, but only thing it can change is ro/rw option, it can't change uid, guid or similar for ntfs partitions. I guess policy below (from man mount) applies to ntfs too:The -o remount may not be able to change mount parameters (all ext2fs-specific parameters, except sb, are changeable with a remount, for example, but you can't change gid or umask for the fatfs).
How to allow access to only one NTFS folder of already mounted partition for specific user?
If it's just the dump of the partition, there's no partition table. The partition is the file, so you just need to shrink the file:
truncate -s 27000832000 datapartition
(27000832000 is 26999992832 rounded up to the next MiB, just to be on the safe side, in case you would like, for instance, to convert it to the qcow2 format or any other mountable compressed format.)
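A cautious sketch of the whole operation (same file name as in the question); checking the filesystem size inside the image before and after cutting costs nothing:
ntfsresize --info --force datapartition   # confirm "Current volume size: 26999992832 bytes"
truncate -s 27000832000 datapartition     # cut the image just past that size
ntfsresize --info --force datapartition   # should still open and report the same volume size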
I'm trying to reduce the size of a backup drive image. Original disk had these partitions: Model: ST916082 1A (scsi) Disk /dev/sde: 160GB Sector size (logical/physical): 512B/512B Partition Table: msdosNumber Start End Size Type File system Flags 1 32.3kB 65.7GB 65.7GB primary ntfs boot 2 65.7GB 160GB 94.4GB extended lba 5 65.7GB 160GB 94.4GB logical ntfsImage was created from the logical partition using the command > sudo ddrescue /dev/sde5 datapartition logfilePress Ctrl-C to interrupt Initial status (read from logfile) rescued: 0 B, errsize: 0 B, errors: 0 Current status rescued: 94368 MB, errsize: 0 B, current rate: 23068 kB/s ipos: 94368 MB, errors: 0, average rate: 28839 kB/s opos: 94368 MB, time from last successful read: 0 s Finishedntfsresize -i -f datapartition says: ntfsresize v2012.1.15AR.5 (libntfs-3g) Device name : datapartition NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 26999992832 bytes (27000 MB) Current device size: 94368605184 bytes (94369 MB) Checking filesystem consistency ... 100.00 percent completed Accounting clusters ... Space in use : 26107 MB (96.7%) Collecting resizing constraints ... You might resize at 26106810368 bytes or 26107 MB (freeing 893 MB). Please make a test run using both the -n and -s options before real resizing!So it looks like I already resized the filesystem to fit the data, but did not resize the device? (This was 2 years ago, I forget.) And I need to resize the device using fdisk, right? But fdisk doesn't recognize the partition: > fdisk -lu datapartition Disk datapartition: 94.4 GB, 94368605184 bytes 255 heads, 63 sectors/track, 11472 cylinders, total 184313682 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x69205244This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System datapartition1 ? 218129509 1920119918 850995205 72 Unknown datapartition2 ? 729050177 1273024900 271987362 74 Unknown datapartition3 ? 168653938 168653938 0 65 Novell Netware 386 datapartition4 2692939776 2692991410 25817+ 0 EmptyPartition table entries are not in disk ordernor does cfdisk: > cfdisk datapartition FATAL ERROR: Bad primary partition 1: Partition begins after end-of-disk Press any key to exit cfdiskI can mount the partition and copy files off of it, though. How do I resize the device?
How do I resize a disk image device?
You're using NTFS-3g, a user-space NTFS filesystem driver. Between the kernel and any such user-space filesystem drivers, there is an interface layer called FUSE (short of Filesystem in USErspace). Note that the filesystem type is listed as fuseblk, not as ntfs or ntfs-3g. When you see type fuseblk (some options), then the options within parentheses are FUSE options, not actual filesystem options. See man 8 fuse if you want to know more details. Specifically, the user_id=0 means "this FUSE filesystem was mounted by root" and nothing else. The actual mount options are handed to the filesystem driver process, which can do whatever it wants with them. (FUSE allows only the user that mounted the filesystem to access it, unless the FUSE option allow_other is specified.) Unfortunately the FUSE interface layer does not allow showing the actual mount options of the FUSE-based filesystem in the mount command output the same way as classic kernel-based filesystems show them. Instead, if you run pgrep -a ntfs-3g, you will see the ntfs-3g filesystem driver processes and their command-line options, which will include the mount options you specified. For example, on my system, I have these lines in /etc/fstab: UUID="A268B5B668B599AD" /win/c ntfs-3g defaults,windows_names,inherit,nofail 0 0 UUID="56A31D4569A3B7B7" /win/d ntfs-3g defaults,windows_names,inherit,nofail 0 0And so, I'll see these processes: $ pgrep -a ntfs-3g 775 /sbin/mount.ntfs-3g /dev/nvme0n1p3 /win/c -o rw,windows_names,inherit 1008 /sbin/mount.ntfs-3g /dev/sdb2 /win/d -o rw,windows_names,inherit
Why I cannot change the ownership on mounting ntfs drive? I give uid=1000,gid=1000, etc in my /etc/fstab file, but found it is not working. So I'm testing it out on command line: root@host:~# mount | grep /mnt/tmp1 | wc 0 0 0root@host:~# mount -o uid=1000 /dev/nvme0n1p4 /mnt/tmp1/root@host:~# mount | grep /mnt/tmp1 /dev/nvme0n1p4 on /mnt/tmp1 type fuseblk (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)root@host:~# umount /mnt/tmp1root@host:~# mount -o user_id=1000 /dev/nvme0n1p4 /mnt/tmp1/root@host:~# mount | grep /mnt/tmp1 /dev/nvme0n1p4 on /mnt/tmp1 type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.10 Release: 21.10 Codename: impish$ apt-cache policy mount mount: Installed: 2.36.1-8ubuntu1 Candidate: 2.36.1-8ubuntu2 Version table: 2.36.1-8ubuntu2 500 500 http://archive.ubuntu.com/ubuntu impish-updates/main amd64 Packages *** 2.36.1-8ubuntu1 500 500 http://archive.ubuntu.com/ubuntu impish/main amd64 Packages 100 /var/lib/dpkg/statusAm I missing something? Why I cannot change the ownership on mounting ntfs drive?
Cannot change the ownership when mounting an NTFS drive
You might look at duplicity and its gui deja-dup. It does incremental backups using tar files, optionally encrypted, optionally to a remote server. It uses librsync and its rolling-checksum algorithm so that each incremental archive holds only the changed parts of files. The home page says it handles Unix permissions, symbolic links, fifos, and device files, but does not preserve hard links. If you have many large hard-linked files it may be sub-optimal in the archive, but more importantly, you may also want to note separately which files are interlinked so that if you need to restore them you can put back the link. If possible, converting to symbolic links would solve this problem. You can look for hard links with something like find /home/me -links +1 -type f -printf '%n %i %D %p\n' | sort -n where the format string shows %n the number of links, %i the inode number, %D the device the file is on, and %p the pathname. Lines with the same inode number and device are hard links. The device is only useful if you have mount points within the directory tree (as the same inode on a different device is not the same file). Of course, hard links to files outside the tree cannot be handled, even by rsync.
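If you go that route, a duplicity invocation might look roughly like this (the paths reuse the ones from the question; --no-encryption, the backup subdirectory name and the exact flags are illustrative, not a recommendation):
duplicity --no-encryption --exclude /home/me/.cache /home/me file:///run/media/me/Samsung_T5/backup
duplicity restore --no-encryption file:///run/media/me/Samsung_T5/backup /home/me/restored   # later, to restore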
I want to backup my home directory to an external SSD drive using rsync. I'm on Arch Linux. My home is ext4 (251G), the SSD is NTFS-3G mounted as fuseblk (512G). The exact rsync invocation is: rsync -aSh --info=progress2 --delete --exclude=/me/.cache /home/me /run/media/me/Samsung_T5/Eventually, it fails with this being its last words: 218.76G 99% 25.08MB/s 2:18:36 (xfr#2093188, ir-chk=1368/2286507) rsync: write failed on "/run/media/me/Samsung_T5/me/a_file": No space left on device (28)So, rsync allegedly copied around 218G of data and couldn't go furhter due to my SSD being full. When I ask du how much data is there on my SSD rsync destination, it says 466G. $ du -hs /run/media/me/Samsung_T5/me 466G /run/media/me/Samsung_T5/meThis is weird. rsync tried to copy 281G, but it copied 218G and failed because it actually copied 466G. What am I getting wrong here? I do know that NTFS and ext4 are different. But are they different enough to make my files more than 2x larger? Am I copying more than I actually have in my home? What would be the correct rsync procedure to back up my ~280G home to my SSD as something comparable in size with my home? UPDATE [Thanks to the comments below]: I have a large number of small files in my source directory and a certain amount of sparse files. For example, there is a file 4K big in the source and 128K big in the destination. There is also a sparse file that is 12K in the source and 128K in the destination. Also, I do have 244 hard links to different executables (e.g., shared libraries). Some of those hard links point to some relatively large files. For example, a version of binutils linker (ld) is around 7M and I have 4 hard links to it.
rsync doubles the size when copied from ext4 to NTFS-3G
To answer your first question, 40MB/s sounds like a bottleneck with USB 2.0. The Pi 4 supports USB 3, but ensure your HDD and cable are USB 3. Updated with information from the comments: Also note that your rsync involves 2 sides: external HDD and wherever your home is. To remove the second part from the equation try dd for just-HDD benchmarking:Writing involving filesystem: sudo dd if=/dev/zero of=/mnt/usb_hdd/blob bs=16M count=100 status=progress oflag=direct Reading from a filesystem: sudo dd if=/mnt/usb_hdd/blob of=/dev/null bs=16M count=100 status=progress Reading straight from a disk: sudo dd if=/dev/sda of=/dev/null bs=16M count=100 status=progress iflag=direct
Disclaimer: I know there's a Raspberry Pi community here, but I don't think it's Pi-specific, more like Raspbian (or Debian?) vs USB HDD vs NTFS, etc. TL;DR: So, I've got a Raspberry Pi 4 with an external USB HDD. Read/write speeds there are quite low, and the most surprising thing for me is that read is actually slower than write! So how could it be and where's the culprit? In detail:
OS
pi@raspberrypi:~ $ uname -a Linux raspberrypi 5.10.17-v7l+ #1403 SMP Mon Feb 22 11:33:35 GMT 2021 armv7l GNU/Linux
fstab:
UUID=1276F80376F7E57F /mnt/usb_hdd ntfs-3g defaults,big_writes,noatime 0 0
hdparm test
pi@raspberrypi:~ $ sudo hdparm -tT /dev/sda
/dev/sda: Timing cached reads: 1496 MB in 2.00 seconds = 747.81 MB/sec Timing buffered disk reads: 258 MB in 3.01 seconds = 85.60 MB/sec
rsync read
pi@raspberrypi:~ $ rsync --progress -hv /mnt/usb_hdd/Share/Downloads/Games/Civ5.iso ~/Civ5.iso Civ5.iso 2.37G 100% 18.90MB/s 0:01:59 (xfr#1, to-chk=0/1)
sent 2.37G bytes received 35 bytes 19.19M bytes/sec total size is 2.37G speedup is 1.00
The strange thing is, it starts with ~70MB/s, but almost immediately drops to ~20 and then deviates between 8 and 25. Very unstable.
rsync write
pi@raspberrypi:~ $ rsync --progress -hv ~/Civ5.iso /mnt/usb_hdd/Share/Downloads/Civ5.iso Civ5.iso 2.37G 100% 39.15MB/s 0:00:57 (xfr#1, to-chk=0/1)
sent 2.37G bytes received 35 bytes 40.52M bytes/sec total size is 2.37G speedup is 1.00
This one also starts with ~65MB/s, but gradually slows to ~35. At least not as random as the read. Questions
Aren't both read and write slow in general? Even considering Pi and NTFS - 40MB/s is kind of slow, isn't it? If so - where is the problem? Why is read slower than write?? And why is read speed so unstable over time?
CPU is at ~30% during transfer and there's plenty of memory... Is it "just NTFS"? Anyway I'll appreciate any help here.
Why is USB HDD slow (reading slower than writing)?
Okay. I do not have a solution, but I think I found the cause. I can still BROWSE my HDD via Windows, but i cannot access any files, because they are reported with 0 bytes on disk. So either my file-system is corrupt, or I just invented the most efficient compression... (I do not think it is the latter ;) ) I do not know if it was caused by ntfs-3g or not, but this is definitely why I get those "errors". (Technically correct description of what is going on.) Screenshot of a Windows properties window:Thanks for your comments anyways!
I just connected my NTFS HDD to my homeserver and I have an odd problem. EVERY file on the HDD is displayed as an "unsupported reparse point" in ls -lsa. I tried mounting with different permissions and even did an ntfsfix, but nothing worked... When I plug the drive into my Windows machine everything works flawlessly, so it cannot be a corrupt filesystem. I cannot access any files because of this :/ Any ideas? The output looks like this: xendo@CloudKicker:/mnt/4t$ ls -lsa total 2126 40 drwxrwxrwx 1 root root 40960 May 15 14:20 . 4 drwxr-xr-x 3 root root 4096 May 15 15:38 .. 0 lrwxrwxrwx 2 root root 26 Jan 8 2014 file.txt -> unsupported reparse point ... My fstab entry is: /dev/disk/by-partuuid/1b7eed2e-32c0-4221-82dd-e0f0a16b910f /mnt/4t ntfs-3g defaults,auto,umask=000,allow_others 0 0 ntfs-3g version is: ntfs-3g 2014.2.15AR.3 integrated FUSE 28
NTFS-3G: All files are an "unsupported reparse point"
As suggested by @jigglynaga, you can get part of what you want using a different mount option. According to the manual page, these are the relevant options:umask=value Set the bitmask of the file and directory permissions that are not present. The value is given in octal. The default value is 0 which means full access to everybody. fmask=value Set the bitmask of the file permissions that are not present. The value is given in octal. The default value is 0 which means full access to everybody. dmask=value Set the bitmask of the directory permissions that are not present. The value is given in octal. The default value is 0 which means full access to everybody.You were using umask, which applies to both files and directories. But since you need executable permissions on directories, and disallowed this, the driver did not cooperate. Changing that to fmask affects only files. Just in case, you might want to review the dmask setting as well (full access to everybody may not be what you want). As for ls (and dircolors). No: the ls program checks for EXEC before checking any pattern, so you could not make a special case with a pattern such as *.exe That is not well documented; you can read the source code to seeattribute-checks and laterpattern-checks (if no attribute was applicable).ntfs-3g - Third Generation Read/Write NTFS Driver
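As a concrete sketch (reusing the device, mount point and ownership from your fstab line; the mask values are just one sensible choice), an entry that keeps directories traversable but strips the executable bit from files would be:
/dev/sdb5 /mnt/win10_E ntfs-3g rw,uid=1000,gid=1000,dmask=0022,fmask=0133 0 0
With fmask=0133 files come out as 644 (rw-r--r--), so dircolors no longer matches them against EXEC.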
I am using dircolor-solarized to render my ls output. It works well in my linux partition. However, in a NTFS partition mounted by ntfs-3g, all the files were colored green because /etc/fstab grants the executable permission to the partition: /dev/sdb5 /mnt/win10_E ntfs-3g rw,uid=1000,gid=1000,dmask=0022,fmask=0033 0 0and in my dircolors.256dark there is: EXEC 00;38;5;64I have tried umask=0022 but the output keeps the same. Actually I don't think things will change if executable permission is granted to any of the users. But when I tried 'umask=0111', the partition just failed to be mounted. So I am here to ask for a help: 1) Is there any way to mount a ntfs partation writable and readable, while executable permission is absent? 2) If 1) is not possible in ntfs-3g, is there a way to lower the priority of EXEC rendering? For example, let dircolor firstly match the extension names, and then EXEC if no match found in the list. 3) Any other workaround? My distribution: $ uname -a Linux debian-Z620 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/LinuxThanks!
dircolor mistakes caused by the permission of ntfs-3g
You have your options right in /etc/fstab, but the order matters; exec has to come after user because user imposes noexec (among others). So your /etc/fstab entry should look like this: UUID=6F537BB96F6E0CBC /home/technomage/Migration ntfs-3g rw,umask=000,uid=1000,gid=1000,user,exec 0 0After the change to /etc/fstab, unmount the drive then sudo mount -a and try again. Also, make sure your uid and gid are correct (by executing the command id when logged in with your user).
I'm trying to execute a script located on an NTFS partition that I own. I own the mount point, which is ~/Migration. ls -l in the directory where the mount point is contained shows me drwxrwxrwx 1 technomage technomage 4096 Sep 30 18:04 MigrationDespite being the owner of the entire structure, from the mount point and onwards, and having rwx privileges, it prevents me from executing this script, startup.sh. Bash gives me the following error: bash: ./startup.sh: Permission deniedIn the directory that contains the script, ls-la shows me: drwxrwxrwx 1 technomage technomage 4.0K Oct 1 12:51 . drwxrwxrwx 1 technomage technomage 4.0K Oct 1 12:51 .. -rwxrwxrwx 1 technomage technomage 1.9K Oct 1 12:51 startup.shStill I cannot execute startup.sh. I know that permissions on NTFS partitions in linux can be somewhat finnicky, so I went into the /etc/fstab and set the privileges, owners and masks as best as I could: UUID=6F537BB96F6E0CBC /home/technomage/Migration ntfs-3g rw,exec,user,umask=000,uid=1000,gid=1000 0 0I then proceeded to sudo umount Migration, followed by reloading the fstab file configuration with sudo mount -a. The remounting is successful. Despite all of this, I still cannot execute the script despite even using root. The mount | grep sda6 command shows me the following, which tells me somehow, that the partition isn't mounting properly or using the configurations I gave it: /dev/sda6 on /home/technomage/Migration type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,user)I'm running Debian Jessie, and even went into stretch's repository to get the latest version of the ntfs-3g driver, thinking it was some kind of bug.. no dice. I'm not quite sure what's wrong. Please show me how I misconfigured how I mount my NTFS partition? I need total access to it.
NTFS Partition Not Mounting Properly, Cannot Execute Despite Ownership
Edit the udisks2 mount options with sudo nano /etc/udisks2/mount_options.conf and add:
[defaults]
ntfs_defaults=uid=$UID,gid=$GID,windows_names
ntfs_allow=uid=$UID,gid=$GID,umask,dmask,fmask,locale,norecover,ignore_case,windows_names,compression,nocompression,big_writes
If that still doesn't work, run sudo nano /etc/udev/rules.d/90-usb-disks.rules and add this line:
ENV{ID_FS_TYPE}=="ntfs", ENV{ID_FS_TYPE}="ntfs-3g"
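After changing either file the new settings may not take effect immediately — re-plugging the drive usually does it, or (as a sketch; service and rule handling can vary by distro) reload things by hand:
sudo udevadm control --reload-rules && sudo udevadm trigger
sudo systemctl restart udisks2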
I need to read and write to a USB NTFS pendrive through the www-data group (which has gid 33), so I have added UUID=34A0456D004536A0 /home/mypath ntfs-3g rw,defaults,uid=1000,gid=33,dmode=770,fmode=660,dmask=007,fmask=117,auto 0 0 The disk is mounted, but with the generic permissions applied to all USB drives, ignoring everything I placed in fstab except the mount path, which is correct. I have also used sudo ntfsusermap to generate a mapping file to place in the .NTFS-3G folder on the drive. What could be the reason? How can I solve this problem?
Permissions and groups in fstab ignored
In any Linux OS you can check the version of the NTFS file system on a drive using sudo ntfsinfo -m /dev/sdbX | grep Version. Omit grep Version for full details. Replace /dev/sdbX with the correct device file. Unmount the device before running the above command. In Windows, open cmd as an administrator and use the following to check the NTFS file system version and other details of your disk: fsutil fsinfo ntfsinfo X: Replace X: with the proper drive letter. For reference, I formatted a 4 TB HDD in Windows 7 and the NTFS version is 3.1.
This is somewhat related to Looking for 4 TB cross-platform filesystem for standalone disk Currently I am posessing a 2 TB external hdd and sometimes the internal ntfs index gets corrupted hence I have to resort to this - > sudo ntfsfix -d /dev/sdb1 [95%] Mounting volume... $MFTMirr does not match $MFT (record 28). FAILED Attempting to correct errors... Processing $MFT and $MFTMirr... Reading $MFT... OK Reading $MFTMirr... OK Comparing $MFTMirr to $MFT... FAILED Correcting differences in $MFTMirr record 28...OK Correcting differences in $MFTMirr record 29...OK Correcting differences in $MFTMirr record 30...OK Correcting differences in $MFTMirr record 31...OK Correcting differences in $MFTMirr record 32...OK Correcting differences in $MFTMirr record 33...OK Correcting differences in $MFTMirr record 34...OK Correcting differences in $MFTMirr record 35...OK Correcting differences in $MFTMirr record 36...OK Correcting differences in $MFTMirr record 37...OK Correcting differences in $MFTMirr record 38...OK Correcting differences in $MFTMirr record 39...OK Processing of $MFT and $MFTMirr completed successfully. Setting required flags on partition... OK Going to empty the journal ($LogFile)... OK Checking the alternate boot sector... OK NTFS volume version is 3.1. NTFS partition /dev/sdb1 was processed successfully.Now I'm not sure whether this hdd was partitioned during Windows XP or Windows 7 as officially even under Windows 7 it is known as NTFS 3.1 (although unofficially known as NTFS 5.0 according to wikipedia.) I could use smartctl interface to get some more details about when likely the drive was manufactured, it's remaining life etc. I don't think there is anyway to know as they are pre-formatted hdd ( A seagate slim backup) Has anybody formatted an external hdd in Windows 7 and can share if there is any metadata whhich shows the difference between external hdds created during windows x0 and windows 7 which might give me more idea about the disk and particularly the nfs version it uses.
what is the version of ntfs when an external disk is partitioned under MS-Windows 7
Generic FUSE options 1) I noticed allow_other wasn't set on the ntfs-3g filesystem mount. The default for FUSE is not to allow access by other users. mhddfs is a FUSE filesystem and so is ntfs-3g (but see next section). 2) When you use allow_other, you also want to consider permissions checking. The default for FUSE is not to check permissions. So just adding allow_other to a filesystem can make it accessible by all users. This is probably undesirable; separate user IDs are often used to contain services, like the CUPS printer daemon, in case they are compromised by network attack. To enable user/group/mode permissions checks on generic FUSE filesystems, the option is called default_permissions. NTFS-3G specific behaviour 1 -> According to its man page, ntfs-3g will enable allow_other by default. (FUSE defaults will only allow the root user to do that. Not a problem here though, as you're using mount which runs as root). 2 -> It sounds like the ntfs-3g option permissions enabled permission checking for you. Otherwise, you wouldn't have noticed any permission errors. (SELinux might do, but you're not using SELinux, because you're on Ubuntu. Ubuntu AppArmor is described as being path-based, so from what you've described I think it's unlikely to be causing a problem). Thesis I believe your ntfs-3g mount is set up to perform permission checks, and FUSE is not separately blocking access by other users. This sounds sensible for a mount in fstab which is used to provide system directories like /var/mail. However your mhddfs mount is not performing permission checks itself, because it does not have default_permissions set. That would explain why the mhddfs setup was able to work (despite options for uid,gid,umask which only allow access to your user-id 1000). You don't show the underlying filesystems, so I don't know whether they're checking permissions, but I suspect that mhddfs is simply running as root and avoiding the permissions checks that way. Here's a test you could run on the mhddfs mount. It should show if the permission bits are being checked or not. mkdir dir chmod a-w dir # make directory read-only touch dir/t # attempt writing to directoryTo solve your permission errors, you need to determine which user(s) should have what access to the files in question, and set the correct permissions accordingly. You've never said what user (or even what software) is failing the permission checks so it's hard to be any more specific.
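If you want the mhddfs side to enforce the same checks instead of bypassing them, default_permissions is a generic FUSE option, so — as an untested sketch based on your old fstab line — it could be added like this:
mhddfs#/mount3,/mount4 /mount1 fuse defaults,allow_other,default_permissions,nobootwait,nonempty,uid=1000,gid=1000,umask=007 0 0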
I have two mounts /mount1 and /mount2. I ran the command: rsync -azrt /mount1/* /mount2/to clone everything from /mount1 to /mount2. I then altered the /etc/fstab (see below) to remove /mount1 and mount /mount2 to /mount1 but things (including my email servers local user folders) are not working properly for permission reasons anymore, even though when comparing the permissions with the mounts before and after they are identical?!/etc/fstab before (working): UUID="3999A4F22570EAC4" /mount2 ntfs-3g nobootwait,permissions,locale=en_US.utf8 0 2 mhddfs#/mount3,/mount4 /mount1 fuse defaults,allow_other,nobootwait,nonempty,uid=1000,gid=1000,umask=007 0 0/etc/fstab after (not working): UUID="3999A4F22570EAC4" /mount1 ntfs-3g nobootwait,permissions,locale=en_US.utf8 0 2Where UUID="3999A4F22570EAC4" is /mount2 that has the content of the previous /mount1
Migrating mounts with identical permissions not working
"Input/output error" points to a low-level problem that likely has little to do with the filesystem. It should show up in dmesg and the output of smartctl -x /dev/sdX may also provide clues. You can also try to strace -f -s200 ntfs-3g [args] 2>&1 | less to see which syscall hits the i/o error. The root cause is probably one of the following:defective SATA cable in Debian box; problem with power supply or SATA power cable in Debian box; failing disk; bug in ntfs-3g causing it to try accessing beyond the end of the device (perhaps coupled with some weirdness in the specific NTFS volume you have that is somehow not affecting the other implementations); defective RAM in Debian box.If you post the output of the above commands, it may be possible to say which.
This drive has been working fine for a while, but I recall having had some slight trouble getting it mounted in the past. Anyway, it was disconnected from the machine for some time and when I reconnected it and tried to mount it again with ntfs-3g, I got the following error: Failed to mount '/dev/sdb1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.So I plugged the drive into a Windows machine and ran chkdsk. While I don't have the output of chkdsk readily available, there were no obvious warning/error messages and I understood the output to have been indicative of a successful run. I could also mount, read, and write to the disk from within Windows Explorer. I dismounted the drive, then plugged it back into the Debian box. Attempting to mount it had the same effect as the first time. I plugged the disk into an OSX machine, which was able to read from (but obviously not write to) the drive. Plugging it back into the Windows machine seemed to indicate that all was well. After a few minutes however, the drive (mounted in Windows) became unresponsive and Windows Explorer gave me alternating error messages along the lines of "invalid parameters" and "access denied" (with no further detail!). So I'm a little bit lost at this point. I can still read from the disk from several machines and write to it from Windows, but Debian still won't mount it. Any suggestions?
NTFS drive not mounting in Debian
Here's my configuration and it works: [public] comment = Public Storage path = /media/hddusb create mask = 0660 directory mask = 0771 read only = no guest ok = yes browseable = yes
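Whenever you change smb.conf it is worth validating the syntax and reloading the daemon before testing from Windows again (a generic sketch; on Raspbian the service is usually called smbd):
testparm
sudo systemctl restart smbd
(or sudo service smbd restart on older releases)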
I have a Raspberry Pi (though could be any Debian Linux box) connected to an external hard disk formatted as NTFS. My disk mount in fstab is: /dev/sda1 /media/disk ntfs-3g defaults,uid=1000,gid-1000,dmask=007,fmask=007 0 0where user 1000 is the 'pi' user /media/disk/shared is my Samba root folder. Must be accessible from Windows and Mac Now, I can see the share in Windows, but I get permission denied. If I try mapping a drive to it, and attempt login using \machinename\pi the login fails. Any ideas? Edit-- smb.conf below. I've removed all comment lines (I assume lines beginning # or ; are comments) [global] workgroup = WORKGROUP server string = %h server dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user usershare allow guests = yes[shared] comment = Ali and Greg Shared Folders writeable = yes public = yes browseable = yes path = /media/disk/shared guest only = yes guest ok = yes read only = no create mask = 0777 directory mask = 0777[printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700[print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no[hdd] comment = Samba server's HDD read only = no locking = no path = /media/disk/shared guest ok = yes
Login fails from Windows on NTFS-3G Samba Share
I managed to make it work. Not sure this did the trick, but here were the steps, diskinfo -c da1mount_ntfs-3g -o ro /dev/da1s1 /media/multi-media/ fuse: failed to open fuse device: No such file or directoryvi /etc/rc.conf # add the following line to the end # fusefs_enable="YES" kldload fuse.komount_ntfs-3g -o windows_names,inherit /dev/da1s1 /usr/jails/sharedfs/media/ # finally works without errorsWonder if using diskinfo got the rest of BSD to recognise that the hard disk has a non standard sector size?
I have an NTFS formatted USB hard disk that works fine (mounts and unmounts cleanly) on my windows desktop. I can't however seem to mount it on my freebsd box at all. Stripping this back to the basics, I can confirm the box sees the USB device, pfSense log/ root^> dmesg ugen1.5: <Seagate> at usbus1 umass1: <Seagate Expansion Desk, class 0/0, rev 2.10/1.00, addr 5> on usbus1 da1 at umass-sim1 bus 1 scbus2 target 0 lun 0 da1: <Seagate Expansion Desk 0604> Fixed Direct Access SCSI-6 device da1: Serial Number NA4KXT5F da1: 40.000MB/s transfers da1: 3815447MB (976754645 4096 byte sectors: 255H 63S/T 60800C) da1: quirks=0x2<NO_6_BYTE>The USB device also shows up under camcontrol and usbconfig pfSense log/ root^> usbconfig ugen0.1: <XHCI root HUB 0x8086> at usbus0, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA) ugen1.1: <EHCI root HUB Intel> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA) ugen1.2: <product 0x8001 vendor 0x8087> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA) ugen1.3: <USB2.0 Hub vendor 0x05e3> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (100mA) ugen1.4: <USB Storage Generic> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA) ugen1.5: <Expansion Desk Seagate> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (0mA)pfSense log/ root^> camcontrol devlist <C400-MTFDDAK256MAM 070H> at scbus0 target 0 lun 0 (ada0,pass0) <Generic STORAGE DEVICE 9451> at scbus1 target 0 lun 0 (pass1,da0) <Seagate Expansion Desk 0604> at scbus2 target 0 lun 0 (da1,pass2)But running a simply command like fdisk -p gets me nowhere, pfSense log/ root^> fdisk -p /dev/da1 fdisk: could not detect sector sizeAny pointers on where I am going wrong would be very helpful. PS, in case anyone picked up on it from the hostname, this is a box running pfsense with Finch for various jails. The ntfs-3g and all the troubleshooting is under finch. Many thanks
fdisk: could not detect sector size of USB hard disk on FreeBSD
Note This post was made before OP gave the additional info that he's using a windows filesystem (NTFS) on a linux machine. I was under the impression he's using a native linux filesystem.You need to set the read, write and executable flag for the owner, and the read, executable flag for the group for mydirectory. The executable flag is needed to enter the folder. Without it you get a "permission denied" when trying to cd myfolder as a user belonging to the group or other. chmod 755 myfolder is giving access for the group and others, or chmod 750 myfolder just giving access for the group and lock others out. Set the ownership to root and the group to users: sudo chown root:users myfolderNow, only root can create new files in myfolder ie. sudo touch mytest the new file gets the ownership root and the group root. To force new files getting the group users, you need to set the SGID bit to myfolder. this can be done in two ways, which results are equal sudo chmod +s myfolder (adding the sgid bit) or sudo chmod 2755 myfolder (same + user, group, others) doing a ls -l should show something like this: drwsr-sr-x myfolder # last x optional depending on your others settingif you now sudo touch mytest2 in myfolder, mytest2 belongs to root, and the group users with the permission 644 Existing Files in myfolder would be treated like this: cd myfolder sudo chown root:users * sudo chmod 644 *1 = execute 2 = write 4 = readread + write = 4 + 2 = 6 P.S.: You can replace root with any user, users with any groupUpdate as requested by @Rastapopoulos a further explaination Let's assume myfolder belongs to tom When doing a chmod -R 444 myfolder/ you set the folder for user (tom), group, others to read only and all files within it, too So no nobody would be able to enter the folder, even tom (except root) because it's lacking the executable flag. When doing a chmod 644 myfolder tom still can't enter the folder. The correct way would be to set the read, write, executable flag for tom, and the read executable flag for the group/others. (executable flag = 1)ie. chmod 755 myfolder (only setting permission for myfolder, not files) To change only the permission for files in myfolder but not the permission for myfolder you'd do a: chmod 444 myfolder/* But you might probably still want to edit/write your files as owner/tom so you'd rather do a chmod 644 myfolder/* (or 640)
I am trying to make a folder and its files read-only so I do not accidentally delete them. I have run chmod -R 444 myfolder/ but when I then right click on the folder and go Properties>Permissions, it is still showing as read and write. I also tested by modifying a file, and the modification succeeds. In addition, when I try to change the permission in the file manager GUI to read only, it immediately flips back to read and write. I am under the impression that 4 means read-only access. Is this correct? EDIT: I think my issue has to do with how the drive is mounted. Here is the fstab entry. UUID=6F7C5E910607D747 /media/storage1 ntfs-3g uid=1000,gid=1000,umask=0022,auto,rw 0 0
Setting an NTFS file to be read only from Linux
As ChrisDavies said in the comments section, the answer is in man ntfs-3g, in the Access Handling and Security section, to be precise. I could do ntfs-3g -o uid=1000,gid=1000,umask=700 /dev/nvme0n1p6 /mnt/Contenido/ to get my regular user as the owner. I don't know if it is worth mentioning, but it is important to rebuild ntfs-3g with integrated FUSE support so the regular user can mount the file system, as explained in the Arch Wiki.
I have a dual boot system (Windows 10/Arch Linux) and I have created an NTFS partition which is mounted at startup via /etc/fstab so I can access it from both OSes. The fstab file shows that the partition is mounted with read and write permissions (rw) with user_id=0 and group_id=0, both values related to the root user, followed by the option allow_other, which lets my regular user access the mounted file system. When files or folders are created by the regular user (non-root) in the mounted partition, they are created as if they were owned/created by root, as shown by the ls -l command. Even if I try to use chmod, the permissions are unaffected and no errors are shown. I've also tried changing both user_id and group_id to 1000 in /etc/fstab, corresponding to the non-root user, and reloading the entries with sudo mount -av. After that, I created a file in the mounted partition but it keeps showing root as the owner. I suspect the issue could be the fstab configuration, but I'm not sure. Next I'll share some info related to the configuration of the partition inside the fstab file and the aforementioned commands and their outputs:
/etc/fstab:
UUID=B23A2CB93A2C7C8B /mnt/Contenido ntfs-3g rw,nosuid,nodev,user_id=0,group_id=0,allow_other,blksize=4096 0 0
$ cd /mnt/Contenido
$ whoami
> joao
$ touch random_file
$ ls -l
> -rwxrwxrwx 1 root root 0 Feb 19 16:45 random_file
$ sudo chmod -v 700 random_file
> mode of 'random_file' changed from 0777 (rwxrwxrwx) to 0700 (rwx------)
$ ls -l
> -rwxrwxrwx 1 root root 0 Feb 19 16:45 random_file
Files created by user in a mounted partition show root as owner
FUSE is a filesystem in userland – due to context switching overhead, it's not ever going to be as fast as an in-kernel file system, and my guess is this hurts even more when you have to do very file-system intense things like a find on it. So:
Either use the NTFS3 in-kernel driver (as available in Linux 5.15 and on, if I remember correctly),
or move all the data to a different file system once (and synchronize it to NTFS if you ever need that again),
or run a paravirtualized Windows Server VM to serve that file system via NFS.
I'd personally strongly tend towards the second option. What sense does it make to permanently access something from a definitely-not-made-for-that file system? We're talking about not even 40GB of actual data – that's really nothing. I mean, you have an NFS Team. There's people employed to make your data accessible via NFS. Why they even support directly exporting NTFS is a bit beyond me.
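For the first option, on a kernel of 5.15 or newer built with the ntfs3 module, the mount command from your setup would only change in the filesystem type (a sketch, untested on your volumes):
mount -t ntfs3 -o ro,noatime $devPath $mountPath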
I have two Linux systems: NFSServer1 (RHEL) and NFSClient1 (Ubuntu). On NFSServer1, ntfs-3g driver and ldmtool is installed. The NTFS device partitions are mounted by executing the command: mount -t ntfs-3g -o ro,noatime $devPath $mountPath Note: The two partitions /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 and /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 are Windows Dynamic disk partitions derived using ldmtool [root@ROADQAScaleNFS2 ~]# df -Th Filesystem Type Size Used Avail Use% Mounted on /dev/sdc4 fuseblk 127G 11G 117G 9% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 /dev/sdc2 fuseblk 450M 13M 438M 3% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 fuseblk 10G 5.8G 4.3G 58% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 fuseblk 10G 6.0G 4.1G 60% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2[root@ROADQAScaleNFS2 ~]# mount /dev/sdc4 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096) /dev/sdc2 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096) /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096) /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096)Totally all these partitions have about 10 million files. These mounted partitions are accessed from NFSClient1 as NFS shares: [root@NFSClient ~]# mount 10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5) 10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5) 10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5) 10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5)The number of NFS daemon threads on NFS server is set to 64. 
Next, on the NFS client, when we issue a stat a fuseblk partition using a find command: find -H /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 -printf '%p|' | xargs -d '|' stat --printf="%F, %i:\t%n\t%.19x\t%.19y\t%.19z\t%.19w\t%s\t%u\t%g\n" \ |$SED -e "s|/monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2/||g" -e "s|directory,|d/d|g" -e "s|symbolic link,|l/l|g" -e "s|regular file,|r/r|g" -e "s|socket,|h/h|g" \ -e "s|regular empty file,|r/r|g" -e "s|fifo,|p/p|g" Its execution is extremely slow. It takes a break of 5-6 minutes and then resumes for a few seconds. The same is true for all other mount points. The execution does not finish even after 12 hours. This sluggish behavior is not observed for ext4 and xfs devices types. As a test, I tried executing the same find command on the NFSServer1, it was quite fast. The whole execution finished in ~40 minutes. I don't have access to the NFS server though. I have asked the NFS server team to try different mount options as mentioned in the ntfs-3g man page, but it didn't help. If there is any way I could improve the read performance of fuseblk partitions over NFS, I would grateful to you guys. Many thanks!
Slow reading of millions of files in Fuseblk partitions shared over NFS
After a bit of research I noticed the kernel was updated to 5.3 from 5.0, it was certainly due to system updates. After downgrading the kernel to 5.0 all things came back to normal. I dont know whats wrong with version 5.3, but it resulted in very high cpu usage specially mount.ntfs process which is around 60 to 70 percent. The whole kde desktop seems to freeze when coping of large files is going on. Even on fat 32 system the problem was there. I also tried the kernel 5.4, same issue was there.
I regularly update my system running KDE Neon, but this time after the update something broke in the "file copy" process. The system slows down during copying to external hdd or pendrive so much so that the system becomes unusable, CPU usage runs too high. Initially after reading some online forums I thought it was some taskbar animation issue, but after I tried to copy big files using terminal and tty, the results are the same in both cases, so the problem is not with the animations. Any ideas on what's causing the issue? My system specs:CPU: Intel i5-7200U RAM: 8GB HDD: 1TB
Copying files slows down the system, making it unusable (KDE Neon)
I have followed this solution for Ubuntu: The error you are seeing indicates the filesystem is not clean and needs to be checked by Windows chkdsk. There are components of the NTFS filesystem ($MFT and $MFTMirr respectively in this case) which say what is where on the disk. These files no longer match each other, which suggests there may be some type of filesystem corruption. On a Windows machine I have used chkdsk: Checks the file system and file system metadata of a volume for logical and physical errors. If used without parameters, chkdsk displays only the status of the volume and does not fix any errors. If used with the /f, /r, /x, or /b parameters, it fixes errors on the volume. First, on a Windows machine I ran chkdsk <volume> /f, which takes around 2-3 hours for a 2 TB external hard disk. Then, on macOS, the error I was facing was fixed, and Microsoft NTFS for Mac by Paragon Software was able to recognize the external hard disk as well.
I have upgraded my macOS from High Sierra to Catalina, which, I think, causes for me to have following error. Is there any way to fix this error? $ mkdir /Volumes/FOLDER $ sudo /usr/local/bin/ntfs-3g /dev/disk2s1 /Volumes/FOLDER -olocal -oallow_other -o auto_xattr $MFTMirr does not match $MFT (record 3). Failed to mount '/dev/disk2s1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.Please note that I have followed the following NTFS-3G guide.
NTFS-3G cannot mount on macOS Catalina: $MFTMirr does not match $MFT (record 3)
It could be useful to recover from filesystem corruption. When you format c:, or delete files by accident, the metadata is what you lose first and without metadata, it's a very difficult process to restore / undelete / recover individual files or entire filesystems. However, it would be easy if you had a metadata clone. It instantly gives you an intact mountable filesystem. As for the contents, they would still be there, if they were not overwritten/discarded in the incident and not relocated since the metadata backup was made. Of course, it's still much better to have a real backup (metadata+contents) that survives drive failures and the like.
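For reference, saving such a metadata-only image is a one-liner (a sketch; the device name is an example):
ntfsclone --metadata --output ntfsmeta.img /dev/sda1
If I read the man page correctly, ntfsmeta.img has the apparent size of the whole partition, but only the metadata actually occupies disk space, so it stays small and cheap to keep around.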
I've been reading the ntfsclone manual but I still can't understand the point of doing a metadata-only clone. What is the point of this, and when can it be valuable? I assume "metadata" is just information about the filesystem objects (e.g. folders and files) but not the objects themselves.
ntfsclone, metadata-only cloning....why?
man ntfs-3g Windows hibernation and fast restarting On computers which can be dual-booted into Windows or Linux, Windows has to be fully shut down before booting into Linux, otherwise the NTFS file systems on internal disks may be left in an inconsistent state and changes made by Linux may be ignored by Windows. So, Windows may not be left in hibernation when starting Linux, in or‐ der to avoid inconsistencies. Moreover, the fast restart feature avail‐ able on recent Windows systems has to be disabled. This can be achieved by issuing as an Administrator the Windows command which disables both hibernation and fast restarting : powercfg /h off If either Windows is hibernated or its fast restart is enabled, parti‐ tions on internal disks are forced to be mounted in read-only mode.
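In practice that means: run powercfg /h off from an administrator prompt in Windows, shut Windows down completely (not a restart out of hibernation), and then remount the volumes on the Linux side — as a sketch, for one of the mount points from your fstab:
sudo umount /media/ntfs/Anime
sudo mount /media/ntfs/Anime
or simply reboot and let fstab mount everything read-write again.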
I use for a half and a year Linux with 6 HDD with NTFS format and modify my fstab to can read and write, also I have ntfs-3g installed. UUID=480a3f32-e304-49b0-b322-4964349fd941 / ext4 rw,relatime,data=ordered 0 1 UUID=BAF0D1D3F0D195CB /media/ntfs/Anime ntfs-3g rw,uid=1000,umask=022 0 0 UUID=561CAEE01CAEB9FF /media/ntfs/Anime2.0 ntfs-3g rw,uid=1000,umask=022 0 0 UUID=68B283CAB2839AE8 /media/ntfs/Anime3.0 ntfs-3g rw,uid=1000,umask=022 0 0 UUID=E094004194001CA2 /media/ntfs/Anime4.0 ntfs-3g rw,uid=1000,umask=022 0 0 UUID=CAE8F43AE8F425FB /media/ntfs/Anime5.0 ntfs-3g rw,uid=1000,gid=users,umask=022 0 0 UUID=8A34984034983165 /media/ntfs/Anime6.0 ntfs-3g rw,uid=1000,gid=users,umask=022 0 0 UUID=64E6CDCBE6CD9E24 /media/ntfs/Win ntfs-3g rw,uid=1000,umask=022 0 0 UUID=AADEEA03DEE9C7A1 /media/ntfs/KK ntfs-3g rw,uid=1000,umask=022 0 0Today I can't write or modify the HDD on the fstab with ntfs-3g, the mount command return the following info: proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) dev on /dev type devtmpfs (rw,nosuid,relatime,size=8173120k,nr_inodes=2043280,mode=755) run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755) /dev/sdh1 on / type ext4 (rw,relatime,data=ordered) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M) tmpfs on /tmp type tmpfs (rw,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) configfs on /sys/kernel/config type configfs (rw,relatime) fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime) /dev/sdh3 on /media/ntfs/Win type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sdb2 on /media/ntfs/Anime2.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sdc2 on /media/ntfs/Anime6.0 type fuseblk 
(ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sde2 on /media/ntfs/Anime5.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sda1 on /media/ntfs/Anime3.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sdd2 on /media/ntfs/KK type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sdg2 on /media/ntfs/Anime type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/sdf2 on /media/ntfs/Anime4.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1642420k,mode=700,uid=1000,gid=985) gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=985)As you can see the ntfs-3g partitons are mounted as ro instead rw.
Unable to mount partitions read and write
Yes, found out ! To activate VIRTUAL output of the intel driver, you need to create a 20-intel.conf file in the Xorg configuration directory (/usr/share/X11/xorg.conf.d under Debian stretch, found out by reading /var/log/Xorg.0.log) Section "Device" Identifier "intelgpu0" Driver "intel" Option "VirtualHeads" "2" EndSectionMy /etc/bumblebee/xorg.conf.nvidia is as follows: Section "ServerLayout" Identifier "Layout0" Option "AutoAddDevices" "true" Option "AutoAddGPU" "false" EndSectionSection "Device" Identifier "DiscreteNvidia" Driver "nvidia" VendorName "NVIDIA Corporation" Option "ProbeAllGpus" "false" Option "NoLogo" "true" Option "AllowEmptyInitialConfiguration" EndSectionSection "Screen" Identifier "Screen0" Device "DiscreteNVidia" EndSectionSome explanations: it needs a "Screen" section, else it tries to use the Intel device declared in 20-intel.conf (that we just added before, oh my...). It also needs "AllowEmptyInitialConfiguration" to remain able to start with optirun when no external monitor is attached. With this configuration and starting intel-virtual-output, I was able to access my HDMI port. Yeehaa !!! Troubleshooting: if optirun or intel-virtual-output do not work, take a look at /var/log/Xorg.8.log (bumblebee creates an X server with display :8 used internally). Notes I read at several places that KeepUnusedXServer should be set to true and PMMethod to none in /etc/bumblebee/bumblebee.conf, I did not do that and it works fine. If I do that, it works, but then the discrete GPU remains on even after exiting an optirun-ed application or killing intel-virtual-output, which I did not want. More notes Something else that made me bang my head on the wall was deactivating Nouveau and starting the Intel X server: it needs to be done by flags passed to the kernel, specified in GRUB parameters. In /etc/defaults/grub, I have the following line: GRUB_CMDLINE_LINUX_DEFAULT="quiet blacklist.nouveau=1 i915.modeset=1 gfxpayload=640x480 acpi_backlight=vendor acpi_osi=! acpi_osi=\"Windows 2009\""(beware the quotes and escaped quotes). Some explainations: it avoids loading nouveau (that is incompatible with the Nvidia X server), and tells the Intel driver to go to graphics mode right at boot time. If you do not do that, then the Intel X server cannot start, and it falls back to a plain old VESA server with CPU-side 3D rendering. The acpi_xxx flags are required on this specific machine to overcome a BIOS bug that makes it crashing when going in graphics mode with the discrete GPU off. Note that it is specific to this particular notebook (HP ZBook portable workstation), it may be unnecessary or differ for other laptops. Update (Dec 6 2017) With the latest Debian distro (Buster), "915.modeset=1 gfxpayload=640x480" is unnecessary. To remove nouveau, I needed also to create a nouveau.conf file in /etc/modprobe.d with "blacklist nouveau" in it, then recreate the ramdisk with "update-initramfs -u". Reboot and make sure "nouveau" is not loaded anymore with "lsmod |grep nouveau". Update (Dec 17 2016) With the latest xorg-server (1.19), there seems to be a problem in a RandR function that manages Gamma when used with intel-virtual-output. Here is the procedure to patch the Xserver and get it to work: sudo apt-get build-dep xserver-xorg-core apt-get source xorg-serveredit hw/xfree86/modes/xg86RandR12.c Line 1260, insert "return" (so that the function xf86RandR12CrtcComputeGamma() does nothing) dpkg-buildpackage -rfakeroot -us -uc cd .. 
sudo dpkg -i xserver-xorg-core_n.nn.n-n_amd64.deb(replace the n.nn.n-n with the correct version), reboot and Yehaa !! works again ! (but it's a quick and dirty fix) Update filed a bug report (was already known, and was just fixed): https://bugs.freedesktop.org/show_bug.cgi?id=99129 How I figured out: Installed xserver-xorg-core-dbg and did gdb /usr/lib/xorg/Xorg <xorg pid> from another machine through ssh. Update (Jan 11 17) Seems that the bug is now fixed in the latest Debian packages. Update (Jan 24 18) When you want to plug a beamer for doing a presentation and need to configure everything right before starting (intel-virtual-output + xrandr), it can be stressful. Here is a little script that does the job (disclaimer: a lot of room for improvement, regarding style etc...): # beamer.sh: sets Linux display for doing a presentation, # for bumblebee configured on a laptop that has the HDMI # plugged on the NVidia board. # # Bruno Levy, Wed Jan 24 08:45:45 CET 2018 # # Usage: # beamer.sh widthxheight # (default is 1024x768)# Note: output1 and output2 are hardcoded below, # change according to your configuration. output1=eDP1 output2=VIRTUAL1# Note: I think that the following command should have done # the job, but it does not work. # xrandr --output eDP1 --size 1024x768 --output VIRTUAL1 --size 1024x768 --same-as eDP1 # My guess: --size is not implemented with VIRTUAL devices. # Thus I try to find a --mode that fits my needs in the list of supported modes.wxh=$1if [ -z "$wxh" ]; then wxh=1024x768 fi# Test whether intel-virtual-output is running and start it. ivo_process=`ps axu |grep 'intel-virtual-output' |egrep -v 'grep'` if [ -z "$ivo_process" ]; then intel-virtual-output sleep 3 fi# Mode names on the primary output are simply wxh (at least on # my configuration...) output1_mode=$wxhecho Using mode for $output1: $output1_mode# Mode names on the virtual output are like: VIRTUAL1.ID-wxh # Try to find one in the list that matches what we want. output2_mode=`xrandr |grep $output2\\\. |grep $wxh |awk '{print $1}'` # There can be several modes, take the first one. output2_mode=`echo $output2_mode |awk '{print $1}'` echo Using mode for $output2: $output2_mode# Showtime ! xrandr --output $output1 --mode $output1_mode --output $output2 --mode $output2_mode --same-as $output1update (10/07/2019) A "fix" for the new crash: write what follows in a script (call it bumblebee-startx.sh for instance): optirun ls # to load kernel driver /usr/lib/xorg/Xorg :8 -config /etc/bumblebee/xorg.conf.nvidia \ -configdir /etc/bumblebee/xorg.conf.d -sharevts \ -nolisten -verbose 3 -isolateDevice PCI:01:00:0 \ -modulepath /usr/lib/nvidia/nvidia,/usr/lib/xorg/modules/(replace PCI:nn:nn:n with the address of your NVidia card, obtained with lspci) Run this script from a terminal window as root (sudo bumblebee-startx.sh), keep the terminal open, then optirun and intel-virtual-output work as expected (note: sometimes I need to run xrandr in addition to make the screen/videoprojector detected). Now I do not understand why the very same command started from bumblebee crashes, so many mysteries here ... (but at least it gives a temporary fix). 
How I figured it out: I wrote a 'wrapper' script to start the xserver, declared it as XorgBinary in bumblebee.conf, made it save the command line ($*) to a file, tried some stuff involving LD_PRELOADing a patch to the XServer to fix the crash in osLookupColor (did not work), but when I tried to launch the same command line by hand, it worked, and it continued working without my patch (but I still do not understand why). Update 11/15/2019 After updating, I experienced a lot of flickering, making the system unusable. Fixed by adding the kernel parameter i915.enable_psr=0 (in /etc/defaults/grub, then sudo update-grub). If you want to know, PSR means 'panel self refresh', a power-saving feature of Intel GPUs (that can cause screen flickering).
I am trying to use the HDMI output on a PC (HP ZBook) with Debian (stretch). I have configured Bumblebee, it works well (glxinfo and optirun glxinfo report the expected information, and I tested complicated GLSL shaders that also work as expected). Now I would like to be able to plug a videoprojector on the HDMI. I have read here [1] that intel-virtual-output can be used to configure it when the HDMI is connected on the NVidia board (using a VIRTUAL output that can be manipulated by xrandr). However, intel-virtual-output says: no VIRTUAL outputs on ":0"When I do xrandr -q, there is no VIRTUAL output listed, I only have: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192 eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 345mm x 194mm 1920x1080 60.02*+ 59.93 1680x1050 59.95 59.88 1600x1024 60.17 ... other video modes ... 400x300 60.32 56.34 320x240 60.05 DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) DP-2 disconnected (normal left inverted right x axis y axis) HDMI-2 disconnected (normal left inverted right x axis y axis)My installed version of xserver-xorg-video-intel is: xserver-xorg-video-intel_2.99.917+git20160706-1_amd64.deb Update (Sat. Dec. 09 2016) I have updated Debian, and now X crashes when second monitor is active when I starting some applications (for instance xemacs). Sat. Dec. 17 2016: Yes, found out ! (updated the answer). Update (Wed Sep 27 2017) The method works in 99% of the cases, but last week I tried a beamer that only accepts 50Hz modes, and could not get anything else than 60Hz (so it did not work). Anybody knows how to force 50Hz modes ? Update (Tue 01 Oct 2019) Argh! Broken again: After updating X and the NVidia driver, optirun now crashes (/var/log/Xorg.8.log says crash in Xorg, OsLookupColor+0x139). Update (07 Oct 2019) Found a temporary fix (updated answer). [1] https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-monitor-setup
Do not manage to activate HDMI on a laptop (that has optimus / bumblebee)
I finally found a solution to my problem by continuing my research. Solution: do not use Optimus to switch between GPU The Primus and Optimus programs are made to be used with Nvidia proprietary drivers. It is therefore not recommended to use them with Nouveau drivers. The Linux kernel has tools that allow you to switch GPUs without installing additional programs. The tool in question is VGA Switcheroo. Note that this tool only works with open source drivers. The tool may not be active by default on your system, some manipulations are then necessary. To check if the tool is enabled, look for the switch file with # cat /sys/kernel/debug/vgaswitcheroo/switchIn my case, the tool was not activated, I just had to uninstall Bumblebee to fix the problem. If the problem persists after uninstalling Bumblebee, follow the instructions in this article. Now that vga_switcheroo is enabled, you can switch off the active GPU with # echo OFF > /sys/kernel/debug/vgaswitcheroo/switchand activate the dedicated card with # echo DIS > /sys/kernel/debug/vgaswitcheroo/switchor activate the integrated card with # echo IGD > /sys/kernel/debug/vgaswitcheroo/switchReferencesHybridGraphics - Communauty Help Wiki by Ubuntu VGA Switcheroo - Linux Kernel Documentation VGA_Switcheroo by Chibi-nah
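As a related aside (not part of vga_switcheroo itself): with the open drivers you can also run a single application on the discrete GPU through PRIME render offload, which is closer to what optirun does, e.g.:
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxgears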
I'm trying to get the optirun command to work with the FOSS Nouveau drivers on my computer that has an embeddded graphics unit and a discrete graphics processing unit. Here's my setup provided by the lspci | egrep -i 'vga|3d'command: 00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07) 01:00.0 3D controller: NVIDIA Corporation GK208BM [GeForce 920M] (rev a1)According to the Nouveau CodeNames website page, my GPU is supported by the NV108 (GK208) Nouveau driver. So there's no reason why I can't make it work with the optirun command, right? However, after having followed the classic installation procedureuninstall proprietary drivers install bumblebee & mesa-utils packages install VirtualGLI can't get the optirun command to work. As an example, optirun glxgearsgives the error [ERROR]Cannot access secondary GPU - error: [XORG] (EE) [ERROR]Aborting because fallback start is disabledThe problem seems to be with the Nouveau module in the kernel: $ optirun -vv glxgears ---------------------- [DEBUG]Reading file: /etc/bumblebee/bumblebee.conf [DEBUG]optirun version 3.2.1 starting... [DEBUG]Active configuration: [DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf [DEBUG] X display: :8 [DEBUG] LD_LIBRARY_PATH: [DEBUG] Socket path: /var/run/bumblebee.socket [DEBUG] Accel/display bridge: auto [DEBUG] VGL Compression: proxy [DEBUG] VGLrun extra options: [DEBUG] Primus LD Path: /usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus:/usr/lib/primus:/usr/lib32/primus [DEBUG]Using auto-detected bridge virtualgl [INFO]Response: No - error: [XORG] (EE) [ERROR]Cannot access secondary GPU - error: [XORG] (EE) [DEBUG]Socket closed. [ERROR]Aborting because fallback start is disabled. [DEBUG]Killing all remaining processes.What I tried I tried to force Optimus to use the Nouveau drivers in the /etc/bumblebee/bumblebee.conf by setting Driver=nouveau. It makes no difference.What I fixed Initially I had another error while executing the command: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) [ERROR]Failed to load module "mouse" (module does not exist, 0)I fixed it by installing the missing package xserver-xorg-input-mouse.
Nvidia Optimus with Nouveau drivers
I switched to Windows 10 and the GPU was still not detected, so I took my laptop to the ASUS store. It turned out to be a hardware problem: my motherboard was broken. Now it works.
#Solved It was hardware problem, my motherboard was broken. Fixed now. #Problem I can’t figure out how to install Nvidia driver on my laptop. (I’m being linux user for only 4-5 days, but I think I try hard enough.) paraduxos@ASUSDOGE:/$ neofetch _,met$$$$$gg. paraduxos@ASUSDOGE ,g$$$$$$$$$$$$$$$P. ------------------ ,g$$P" """Y$$.". OS: Debian GNU/Linux 10 (buster) x86_64 ,$$P' `$$$. Host: ROG Strix G531GT_G531GT 1.0 ',$$P ,ggs. `$$b: Kernel: 4.19.0-8-amd64 `d$$' ,$P"' . $$$ Uptime: 1 hour, 42 mins $$P d$' , $$P Packages: 2256 (dpkg) $$: $$. - ,d$$' Shell: bash 5.0.3 $$; Y$b._ _,d$P' Resolution: 1920x1080 Y$$. `.`"Y$$$$P"' DE: Xfce `$$b "-.__ WM: Xfwm4 `Y$$ WM Theme: Default `Y$$. Theme: Xfce [GTK2], Adwaita [GTK3] `$$b. Icons: Tango [GTK2], Adwaita [GTK3] `Y$$b. Terminal: xfce4-terminal `"Y$b._ Terminal Font: Monospace 12 `""" CPU: Intel i7-9750H (12) @ 4.500GHz GPU: Intel UHD Graphics 630 Memory: 1434MiB / 7828MiB I'm using laptop: ASUS ROG STRIX G 531GT (GPU: NVIDIA GeForce GTX 1650, Intel on-board) paraduxos@ASUSDOGE:/$ lspci 00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07) 00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Mobile) 00:04.0 Signal processing controller: Intel Corporation Skylake Processor Thermal Subsystem (rev 07) 00:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model 00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10) 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10) 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10) 00:14.3 Network controller: Intel Corporation Wireless-AC 9560 [Jefferson Peak] (rev 10) 00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller (rev 10) 00:15.1 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller (rev 10) 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10) 00:17.0 SATA controller: Intel Corporation Cannon Lake Mobile PCH SATA AHCI Controller (rev 10) 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0) 00:1d.6 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0) 00:1f.0 ISA bridge: Intel Corporation Device a30d (rev 10) 00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10) 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10) 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10) 01:00.0 Non-Volatile memory controller: Intel Corporation Device f1a8 (rev 03) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)First, my laptop can't find NVIDIA GPU paraduxos@ASUSDOGE:/$ nvidia-detect No NVIDIA GPU detected.I also try with lspci (as shown above), lshw (also with sudo), no NVIDIA found.After I do some research (aka google.com)using lspci with grep something -> still not found installing Nvidia-driver -> still not found (and has some problem) some said bumblebee need (linuxquestions.org) some said it's BIOS problem (forums.developer.nvidia.com): I try go to BIOS set up (F2), no NVIDIA as well (I can capture, please tell me if you need.)I don't know how to configure BIOS so I go with nvidia-driver and bumblebee choice. 
From Debian wiki, I found 3 wikis that might be related to my problems: https://wiki.debian.org/NvidiaGraphicsDrivers:The NVIDIA graphics processing unit (GPU) series/codename of an installed video card can usually be identified using the lspci command. Note: if this lspci command returns more than one line of output, you have an Optimus (hybrid) graphics chipset, and the instructions on this page do not apply to you. Check the NVIDIA Optimus page instead.Well, I got 0 output. But I decide to go with Optimus and discontinue this wiki. (I think I'm right, maybe?) (I actually come back to this later, and install Version 440.59 (via buster-backports) and after reboot, nothing happen.) In Configuration part I haven't tried, since it state thatHowever, the configuration described below should not be applied to Nvidia Optimus systems;So I came to the second wiki https://wiki.debian.org/NvidiaGraphicsDrivers/Optimus $ lspci | grep 3D (No output)This wiki said that there are 2 ways.First: Dynamic Graphics Disabled - xrandr and Display Manager ScriptsThis method require BusID from lspci. so I can't go with this method.Second: Dynamic Graphics with Bumblebeeparaduxos@ASUSDOGE:/$ glxinfo | grep OpenGL OpenGL vendor string: Intel Open Source Technology Center OpenGL renderer string: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2) OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.3.6 OpenGL core profile shading language version string: 4.50 OpenGL core profile context flags: (none) OpenGL core profile profile mask: core profile OpenGL core profile extensions: OpenGL version string: 3.0 Mesa 18.3.6 OpenGL shading language version string: 1.30 OpenGL context flags: (none) OpenGL extensions: OpenGL ES profile version string: OpenGL ES 3.2 Mesa 18.3.6 OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20 OpenGL ES profile extensions:No hybrid GPU??. I'm not quite understand the output so I continue to install Bumblebee. https://wiki.debian.org/Bumblebee Since I'm using Debian 10 (Buster) I following the wiki but found problem. paraduxos@ASUSDOGE:/$ sudo apt install bumblebee-nvidia primus libgl1-nvidia-tesla-glx Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libgl1-nvidia-tesla-glxI tried google this but none seems answer my question. So I tried paraduxos@ASUSDOGE:/$ sudo dpkg --add-architecture i386 && sudo apt update && sudo apt install bumblebee-nvidia primus libgl1-nvidia-glx primus-libs:i386 libgl1-nvidia-glx:i386 Hit:1 http://security.debian.org/debian-security buster/updates InRelease Hit:2 http://deb.debian.org/debian buster InRelease Hit:3 http://deb.debian.org/debian buster-updates InRelease Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. Reading package lists... Done Building dependency tree Reading state information... Done primus-libs:i386 is already the newest version (0~20150328-7). Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. 
The following information may help to resolve the situation:The following packages have unmet dependencies: libgl1-nvidia-glx : Depends: libnvidia-glcore (= 418.74-1) but it is not going to be installed Recommends: nvidia-driver-libs-nonglvnd (= 418.74-1) but it is not going to be installed Recommends: nvidia-kernel-dkms (= 418.74-1) but it is not going to be installed or nvidia-kernel-418.74 E: Unable to correct problems, you have held broken packages.I don't know what to do next. Please help.UPDATE:Since sudo apt install bumblebee-nvidia primus libgl1-nvidia-tesla-glx return E: Unable to locate package libgl1-nvidia-tesla-glx so I remove that and run sudo apt install bumblebee-nvidia primus I do the same with sudo dpkg --add-architecture i386 && sudo apt update && sudo apt install bumblebee-nvidia primus libgl1-nvidia-glx primus-libs:i386 change to sudo dpkg --add-architecture i386 && sudo apt update && sudo apt install bumblebee-nvidia primus primus-libs:i386After I run bumblebee, it returns paraduxos@ASUSDOGE:~$ optirun glxgears -info [ 1097.543100] [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect. [ 1097.543133] [ERROR]Could not connect to bumblebee daemon - is it running?This is my second attempt after re-install Debian 10 (Live install non-free (XFCE) Debian non-free) This is my sources.list # See https://wiki.debian.org/SourcesList for more information. deb http://deb.debian.org/debian buster main contrib non-free deb-src http://deb.debian.org/debian buster main contrib non-freedeb http://deb.debian.org/debian buster-updates main contrib non-free deb-src http://deb.debian.org/debian buster-updates main contrib non-freedeb http://security.debian.org/debian-security/ buster/updates main contrib non-free deb-src http://security.debian.org/debian-security/ buster/updates main contrib non-free# buster-backports # deb http://deb.debian.org/debian buster-backports main contrib non-free # deb-src http://deb.debian.org/debian buster-backports main contrib non-freeI tried switch the comment of backport section (and run sudo apt update) but still got same result. I haven't do anything with my Xorg, .xinit, or anything. (I also read related question but I think I better ask here.)https://superuser.com/questions/1521457/debian-10-on-hp-desktop-with-geforce-gtx-1650-stuck-on-black-screen-and-cursor https://superuser.com/questions/1484109/debian-10-hybrid-graphics-how-to-use-nvidia-drivers-instead-of-nouveau
NVIDIA GTX 1650 not detected on Debian 10
I think the reason you're having problems is that your video card requires the proprietary nvidia 352 driver, and the only driver available in the jessie, jessie-backports, and sid repositories is the version 340 driver. You should check the drivers page on the Nvidia website to verify the version your card requires. The proprietary version 352 driver is currently available only in the Debian experimental repository. I've pulled it down and built the package on jessie. It's a noodle soup of dependency problems and will be a major task to install it in jessie (at least with my knowledge of the situation). That leaves two options: install the free drivers, or let the nvidia installer loose on your system. Both Debian and Arch (I haven't checked others) strongly recommend against installing the proprietary drivers outside of the respective package management system, and so do I. If this were my system, I would install the free drivers and wait for the packages in experimental to make it to sid or jessie-backports before trying again. This is from the Official Debian Wiki NvidiaGraphicsDrivers page:

As of jessie, the need for the proprietary drivers is pretty much over - nouveau now works quite well and works with dual-headed displays by simple and easy configuring from within your desktop. The proprietary drivers don't provide normal logging and can be a hidden source of problems. If you are doing a distribution upgrade - you should at the very least remove all the nvidia packages from wheezy - get your desktop working with nouveau - then reinstall the nvidia packages if there is a pressing reason.

I would follow the directions on the Official Debian Wiki Bumblebee page and make sure to install the bumblebee package, not bumblebee-nvidia.
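For reference, the Bumblebee route from that wiki boils down to roughly the following (package names as of jessie; treat this as a sketch rather than exact instructions):

sudo apt-get install bumblebee primus
sudo adduser $USER bumblebee   # then log out and back in so the group change takes effect

After a reboot you run programs on the discrete card with optirun <program>.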
I am on Debian Jessie. I just wanted to install Nvidia drivers. But I found nvidia-detect does not detect my dedicated chip. Although it is listed in lshw. lshw -c video before any installation # lshw -c video *-display description: 3D controller product: GK107M [GeForce GT 750M] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:51 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:f7000000-f707ffff *-display description: VGA compatible controller product: 4th Gen Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 06 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:49 memory:f7400000-f77fffff memory:d0000000-dfffffff ioport:f000(size=64)lshw -c video after installation and uninstallation of Nvidia drivers and blacklisting nouveau -> upon request $ lspci | grep VGA $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)$ lspci | grep NVIDIA $ lspci | grep NVIDIA 01:00.0 3D controller: NVIDIA Corporation GK107M [GeForce GT 750M] (rev a1)$ lspci -vnnn | egrep 'VGA|NVIDIA' $ lspci -vnnn | egrep 'VGA|NVIDIA' 00:02.0 VGA compatible controller [0300]: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller [8086:0416] (rev 06) (prog-if 00 [VGA controller]) 01:00.0 3D controller [0302]: NVIDIA Corporation GK107M [GeForce GT 750M] [10de:0fe4] (rev a1)Notice, the Nvidia is not listed under VGA. So, the HW is detected. But I installed Nvidia drivers according to this http://linuxconfig.org/nvidia-geforce-driver-installation-on-debian-jessie-linux-8-64bit and rebooted (into console) and ran nvidia-detect AND IT SAYS nvidia-detect # nvidia-detect No NVIDIA GPU detected.I am unable to startx, it ended up with an error. In the log I saw No screens detected or similar. So, what is wrong? Is the dedicated chip dead?
Is my Nvidia dead?
Finally I've found the solution. Something was completely wrong with all those multiple libGL.so.1 library files around the system. The solution is to run the following commands as root:

apt-get purge bumblebee bumblebee-nvidia primus primus-libs primus-libs:i386
apt-get purge glx-diversions
apt-get purge libgl1-mesa-glx:i386
apt-get autoremove

Wait until everything is deleted (this might also remove Skype and/or some other 32-bit programs if they depend on the 32-bit libgl1-mesa-glx; don't worry, they are easy to restore afterwards. In my case it was only Skype.)

apt-get update
apt-get install bumblebee-nvidia primus primus-libs primus-libs:i386 libgl1-mesa-glx:i386

All the symlinks are recreated, all library versions are correct, and everything works as intended.
Whenever i start a 32-bit program aka. 386 with primusrun on debian jessie (be it steam or any of it's 32-bit games), i get a following error: wv@localhost:~$ primusrun steam Running Steam on debian 8 64-bit STEAM_RUNTIME is enabled automatically Installing breakpad exception handler for appid(steam)/version(1437790054) libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrastI have bumblebee-nvidia, primus, primus-libs and primus-libs:i386 installed. Swrast driver is present in /usr/lib/i386-linux-gnu/dri/ directory. Both 32 and 64 bit libGL.so.1* are present in the system. What could be wrong here? Has anybody met and successfully resolved a similar problem? 64 bit games run through steam work fine (using launch options primusrun %command%). This occures to 32 bit games only.
Debian jessie, primus and 32-bit applications
To anyone interested: I have solved this problem by ditching Optimus altogether and just running on the proprietary nVidia drivers (before that, I enabled only the nVidia card in the BIOS).
I need to run CUDA enabled applications, but (because of other things) I'd rather use Optimus technology rather than running on Nvidia card only. Note that for CUDA I need to use proprietary binary nvidia drivers. I followed: https://wiki.debian.org/Bumblebee this tutorial (and setup is mostly working). After reboot I can successfully run optirun something command. But after some time I get: optirun deviceQuery [ 4574.136296] [ERROR]Cannot access secondary GPU - error: Could not enable discrete graphics card [ 4574.136358] [ERROR]Aborting because fallback start is disabled.I use Quadro K2000M: lspci -k 01:00.0 VGA compatible controller: NVIDIA Corporation GK107GLM [Quadro K2000M] (rev ff) Kernel driver in use: nvidiaHere are my questions: Does anyone have pointers how to solve this problem on debian. Is there any guide on manually restarting nvidia card using bbswitch? How can I disable bbswitch at all (losing power management)
Optirun command stops working on Optimus system after some time
Added

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

to my ~/.xinitrc file, and that let me boot with my NVIDIA GPU enabled in the BIOS and without SDDM. The reason those lines weren't there already is that when using SDDM I had to add them to /usr/share/sddm/scripts/Xsetup instead of to ~/.xinitrc. I completely forgot that I hadn't added them to my ~/.xinitrc file like it says here.
I can't boot my pc because the Xserver doesn't start. What's weird is that when I use SDDM it boots just fine. I also tried to use Lightdm a few months ago but it didn't want to boot, I'm guessing because of this problem. I've had this problem for like a year but it never really bothered me because I always used SDDM. I'd like to stop using it so that's why I need this fixed. System Info Neofetch: OS: Arch Linux x86_64 Host: 80WK Lenovo Y520-15IKBN Kernel: 4.18.14-arch1-1-ARCH Uptime: 10 mins Packages: 1554 (pacman) Shell: zsh 5.6.2 Resolution: 1920x1080, 1920x1080 DE: KDE WM: KWin WM Theme: Breezemite Theme: Breeze [KDE], Adwaita [GTK2], X-Arc-Plus [GTK3] Icons: Papirus-Light [KDE], Adwaita [GTK2], Papirus [GTK3] Terminal: konsole Terminal Font: DejaVu Sans Mono 10 CPU: Intel i7-7700HQ (8) @ 3.800GHz GPU: Intel Device 591b Memory: 1758MiB / 7851MiBI'm using the proprietary Nvidia driver. Not bumblebee or nouveau because of their performance hit. uname -a : Linux ArchLinux 4.18.14-arch1-1-ARCH #1 SMP PREEMPT Sat Oct 13 13:42:37 UTC 2018 x86_64 GNU/Linux pacman -Q nvidia : nvidia 410.57-6 I enabled KMS to eliminate screen tearing, but disabling it doesn't help. I don't have an xorg.conf file because when I do, even SDDM won't start. nvidia-xconfig has never worked for me. Logs: /var/log/Xorg.0.log: These are both executed with SDDM disabled, so I just log in to the first tty: when executing startx: https://hastebin.com/zadepawiwo when executing xinit : https://hastebin.com/muredinume With SDDM enabled, so a normal boot: https://hastebin.com/anatocavur (is hastebin the right place or should I upload them elsewhere?)EDIT: this is what I see in the terminal when I execute: sudo startx: (That d-bus thing might be interesting but I have no idea what it means). startx: Here's my .xinitrc: #!/bin/shuserresources=$HOME/.Xresources usermodmap=$HOME/.Xmodmap sysresources=/etc/X11/xinit/.Xresources sysmodmap=/etc/X11/xinit/.Xmodmapxsetroot -cursor_name left_ptr xrandr --output eDP-1 --primary xrandr --output HDMI-1 --above eDP-1 xrandr --dpi 96# merge in defaults and keymapsif [ -f $sysresources ]; then xrdb -merge $sysresources fiif [ -f $sysmodmap ]; then xmodmap $sysmodmap fiif [ -f "$userresources" ]; then xrdb -merge "$userresources" fiif [ -f "$usermodmap" ]; then xmodmap "$usermodmap" fi# start some nice programsif [ -d /etc/X11/xinit/xinitrc.d ] ; then for f in /etc/X11/xinit/xinitrc.d/?*.sh ; do [ -x "$f" ] && . "$f" done unset f fisxhkd & statnot & feh --bg-fill ~/Pictures/DnA7hZgU8AAxfxC.jpg:large.jpg exec bspwmAs for the 'possible duplicate' from here, I tried their solutions and they didn't work. For me, SDDM can start when I have it enabled, but I want to use bspwm without a DM. Disabling the NVIDIA GPU in my BIOS settings made startx work, so that reveals that the problem is with NVIDIA. Are my drivers the problem? Is it something else?
Nvidia Optimus laptop: startX and xinit don't work (Arch)
To answer my own question, I finally found the solution here:

https://github.com/Bumblebee-Project/Bumblebee/issues/764#issuecomment-448327665

So the actual problem is that the X server and lspci freeze the system when they encounter an NVidia GPU which is powered off. I guess setting the kernel option pci=noacpi just accidentally worked around this problem by breaking access to the NVidia GPU completely (the driver can't be loaded). The fix is to modify /etc/bumblebee/bumblebee.conf:

Set PMMethod to none
Set AlwaysUnloadKernelDriver to true

After this I was able to remove the pci=noacpi kernel option and the system boots up correctly, lspci no longer freezes, and I'm able to use the NVidia GPU with optirun.
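For reference, the two values end up in /etc/bumblebee/bumblebee.conf looking roughly like this (in my file they sit in the [driver-nvidia] section; check where your copy keeps them):

[driver-nvidia]
PMMethod=none
AlwaysUnloadKernelDriver=true

Then restart the bumblebeed service (for example sudo systemctl restart bumblebeed) or simply reboot.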
I have a Clevo N871EJ1 (Schenker Media 17) laptop here which gives me quite a headache. I tried to install Ubuntu 18.10, Debian Stretch and Debian Buster (Testing) and all locked up during installation or after installation with "CPU stuck" kernel messages. Was easily reproducible by calling lspci on command line which immediately locked up the machine. I was able to solve this by specifying the pci=noacpi kernel parameter and everything works fine now EXCEPT the NVidia GPU and that's what my question here is about (Just mentioned the initial locking problems in case it is related). The laptop has two GPUs: $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Mobile) 01:00.0 VGA compatible controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] (rev a1)The firmware of the machine (UEFI only, no legacy mode) has pretty much no configuration options so no way to select a dedicated GPU. So I guess this dreaded NVidia Optimus stuff is in use here. The Intel GPU works without problems with video acceleration and 3D acceleration so that's fine. But it would be a shame not using this GTX 1050 in there. So I installed bumblebee and the proprietary nvidia drivers (Debian package nvidia-driver version 390.87-6, Kernel 4.19.12-1), made sure the nouveau driver is properly blacklisted, buuuut it doesn't work: $ optirun glxinfo [29571.477699] [ERROR]Cannot access secondary GPU - error: Could not load GPU driver[29571.477772] [ERROR]Aborting because fallback start is disabled.In the kernel log I see this: [29571.206327] nvidia: module license 'NVIDIA' taints kernel. [29571.206329] Disabling lock debugging due to kernel taint [29571.224868] nvidia-nvlink: Nvlink Core is being initialized, major device number 240 [29571.225080] nvidia 0000:01:00.0: can't find IRQ for PCI INT A; please try using pci=biosirq [29571.225082] NVRM: Can't find an IRQ for your NVIDIA card! [29571.225083] NVRM: Please check your BIOS settings. [29571.225083] NVRM: [Plug & Play OS] should be set to NO [29571.225083] NVRM: [Assign IRQ to VGA] should be set to YES [29571.225085] nvidia: probe of 0000:01:00.0 failed with error -1 [29571.225095] NVRM: The NVIDIA probe routine failed for 1 device(s). [29571.225095] NVRM: None of the NVIDIA graphics adapters were initialized! [29571.266406] nvidia-nvlink: Unregistered the Nvlink Core, major device number 240When I follow the tips in the output and set pci=biosirq then the machine locks up again during boot. There is also no option for "Plug & Play OS" or "Assign IRQ to VGA" in the firmware (UEFI only, no legacy mode). So what else can I try to get the NVidia GPU working?
Using NVidia GPU on Clevo N871EJ1 laptop
After a while, I found out that it is a separate package called nvidia-prime-applet. Install it with

sudo apt-get install nvidia-prime-applet

and enable it in Applets (one of Mint's configuration applications).
Situation: I have upgraded two Nvidia Optimus laptops from Linux Mint 17.3 to version 18. Problem: after the upgrade, and a fix for the Nvidia drivers, I am missing the system tray applet that indicates which graphics card is currently in use.
Missing active graphics card indicator
Here is a step-by-step guide for what I did to make Nvidia Optimus work on Kubuntu 15.10 64-bit. Note that I describe the user friendly way because it's meant for all users to be able to do it.In the Device Manager choose the recommended driver, in my case nvidia-352 If you don't have it already, in Muon Discover find Muon Package Manager and install it Start Muon Package Manager, type nvidia Make sure all of the following packages are installed, as you will probably have to install some of themFrom the menu start Konsole and type sudo kate /etc/bumblebee/bumblebee.conf Change the following linesDriver=to Driver=nvidiaand KernelDriver=nvidia-currentto KernelDriver=nvidia-352and LibraryPath=Sorry I don't remember what was there. to LibraryPath=/usr/lib/nvidia-352:/usr/lib32/nvidia-352and XorgModulePath=Sorry I don't remember what was there. to XorgModulePath=/usr/lib/nvidia-352/xorg,/usr/lib/xorg/modulesReboot From menu start Konsole and type optirun steam if you play games via Steam.
I thought the Nvidia driver would install itself properly, since the system offered to install the Nvidia drivers after a fresh installation. It did not go well. So, how do I make Nvidia Optimus work properly on Kubuntu 15.10?
How to make Nvidia Optimus work on Kubuntu 15.10?
Solved by adding the i915.enable_psr=0 kernel parameter.
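In case it helps someone hitting the same issue: on a GRUB-based setup the parameter can be made permanent roughly like this (standard GRUB paths assumed; adjust for your bootloader):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... i915.enable_psr=0"   # append to whatever is already there

# regenerate the config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg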
I've installed Void Linux on Xiaomi Redmibook Pro 15 2022. I'm experiencing the strange graphic issue: In X11 UI works slow. It's even hard to manipulate mouse cursor: it freezes and gets stuck. But if I start video on youtube or run glxgears everything starts to work normal. Also in TTY if I hold any button screen is not been updated at real time. Letters appear after I release the button. And if I start X, wallpapers and bar appear only after I move cursor or press any button. I assume that the problem is with Intel driver. Because on kernels 5.13 and 5.15 lswh marks iGPU "UNCLAIMED" and I can't start X, but TTY works properly and letters appears immediately. Prerequisites: CPU: i7-12650 dGPU: Nvidia RTX2050 Kernel: 5.18.9 with nvidia-drm.modeset=1 mode setting WM: Qtile(same thing with i3) Display: 3200x2000p, 90Hz No DM. Also I did not configure DPI yet, UI is very small, but I don`t think it could be a problem. My ~/.xinitrc xrandr --setprovideroutputsource modesetting NVIDIA-0 xrandr --auto exec qtile start Output of glmark2 GL_VENDOR: Intel GL_RENDERER: Mesa Intel(R) Graphics (ADL GT2) GL_VERSION: 4.6 (Compatibility Profile) Mesa 22.1.3Output of prime-run glmark2 GL_VENDOR: NVIDIA Corporation GL_RENDERER: NVIDIA GeForce RTX 2050/PCIe/SSE2 GL_VERSION: 4.6.0 NVIDIA 515.48.07I've installed Ubuntu alongside with Void and everything works out of the box with kernel 5.15. If needed I can provide any additional information(maybe some outputs from Ubuntu vs Void) I tried to find any differences in dmesg and lshw outputs between Ubuntu and Void. I did not find something significant, except: 1)Void: setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:16 nr_node_ids:1U buntu: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 2) Void dmesg does not contain these lines: [ 0.140758] DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations [ 0.140875] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [ 0.140999] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations3)In Ubuntu lshw list devices models, but void lists only vendors. I would really appreciate any suggestion! There are the files containing outputs of corresponding programs/OS dmesg ubuntu dmesg void lshw-ubuntu lshw-void
Graphical lags on hybrid graphics laptop
I tried again on a fresh install of Ubuntu 18.04 and installed the Nvidia driver before anything else, and that worked (everything seems to be working now). I believe something else I had previously installed (not sure what) was conflicting with some of the files required by my graphics setup.
I recently got a new laptop (Thinkpad T480) which has Intel integrated "UHD Graphics 620" and an Nvidia MX150, and I installed Ubuntu 18.04. I installed the nvidia driver alright, and I believe I am using the Nvidia card successfully to run my laptop's display/external monitors. However, I have a problem displaying 3D content: when I try to create a 3D plot in Mathematica, the program simply crashes (this does not happen when I switch back to using my Intel card with prime-select). Furthermore, when I try to launch Steam, I get the error "OpenGL GLX extension not supported by display" (and again this does not occur and steam works normally when I use my integrated graphics). Finally, with the nvidia card selected, I am unable to even login to the standard gnome desktop environment (I simply get booted back out to the login screen). Luckily I normally use xmonad, and that seems to work fine. I tried reinstalling xserver-xorg which was suggested somewhere online but that didn't help. I saw other information about installing Bumblebee, but all of that seems to be from many years ago (and the latest release of Bumblebee is over 5 years old so I was a little wary about it). Nevertheless, I tried installing Bumblebee and, after modifying /etc/bumblebee/bumblebee.conf to use the correct directory for the libGL.so.1 driver, I was able to run a game through Steam. I never tried running Steam itself using optirun but I ran Civilization V with optirun through Steam and it seemed to work as intended, and I could see that the Nvidia card was being used with the program NVTOP. Civilization V does involve 3D graphics but I'm not sure if it uses OpenGL. I also tried running Minecraft (which I think does use OpenGL) through optirun and just got a window with a black screen. I tried optirun glxgears and got an error that said X Error of failed request: BadMatch (invalid parameter attributes)I did some more research and found that perhaps Bumblebee was not the way to go (multiple reports of bugs with Ubuntu 18.04)... so now I am back in the situation I described in the first and second paragraphs above. I figured it was time to ask for help. Below are the outputs to some commands I have seen in other questions related to this issue: Here is my output when I try to run glxinfo: name of display: :0 Error: couldn't find RGB GLX visual or fbconfigHere is my output when I try to run glxgears: Error: couldn't get an RGB, Double-buffered visualHere is my output when I run lspci -nnnk | grep "VGA\|'Kern'\|3D\|Display" -A2: 00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 620 [8086:5917] (rev 07) Subsystem: Lenovo UHD Graphics 620 [17aa:225e] Kernel driver in use: i915 -- 01:00.0 3D controller [0302]: NVIDIA Corporation GP108M [GeForce MX150] [10de:1d10] (rev a1) Subsystem: Lenovo GP108M [GeForce MX150] [17aa:225e] Kernel driver in use: nvidia
Problems displaying 3D content with Nvidia graphics card in Ubuntu 18.04
The workdir option is required, and used to prepare files before they are switched to the overlay destination in an atomic action (the workdir needs to be on the same filesystem as the upperdir).Source: http://windsock.io/the-overlay-filesystem/ I would hazard a guess that "the overlay destination" means upperdir. So... certain files (maybe "whiteout" files?) are non-atomically created and configured in workdir and then atomically moved into upperdir.
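A minimal illustration of how the three directories fit together (workdir must be an initially empty directory on the same filesystem as upperdir; sibling directories, as here, satisfy that):

mkdir lower upper work merged
sudo mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
# after mounting, the kernel creates a "work" subdirectory inside workdir and uses it
# as scratch space for copy-up and whiteout files before renaming them into upperdir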
OverlayFS has a workdir option, besides the two other directories lowerdir and upperdir, which needs to be an empty directory. Unfortunately the kernel documentation of overlayfs does not say much about the purpose of this option.

The "workdir" needs to be an empty directory on the same filesystem as upperdir.

For read-only overlays the workdir may be omitted along with the upperdir. This gives me the clue that it has to do with writing the merged files. Please explain what happens in the workdir when files are written or changed in the merged directory. Why is the writable upperdir not enough?
Linux Filesystem Overlay - what is workdir used for? (OverlayFS)
Here are some thoughts - I am still learning this and will update this as I go. How to choose the union filesystem There are two ways to look at this:How do the features of each one compare? For some common use cases, which one should I choose?I'll compare unionfs / unionfs-fuse / overlayfs / aufs / mergerfs, the latter being a replacement for mhddfs. Features of each one Development statusaufs seems to be active unionfs looks mature, but not under active development? unionfs-fuse seems to be active mergerfs seems to be active overlayfs is activeDistribution / Kernel support There are kernel mode and usersystem mode filesystems, the latter run on FUSE. Kernel mode ones have less overhead (there is overhead when code switches between user space and kernel space) but the only one currently supported in the Linux kernel is overlayfs. User mode filesystems are easier for distributions to package.unionfs and aufs need kernel patches unionfs is not distributed by Debian (the rest are) unionfs-fuse and mergerfs are based on FUSE, so don't need to additional modules in the kernel overlayfs has been part of the kernel since 3.18 (Debian Stretch)Copy on write This relates to the Live CD use case below:mergerfs does not have copy on write The others doUse cases Read-only root / The Live CD use case The idea is to have a read-only CD-ROM/partition of a linux system. The union filesystem makes it look to the user like it is a read-write system so they can make changes. There is a read-write filesystem (for example, a tmpfs RAM disk) which stores the "Delta" of any changes made by the user, but not the full snapshot. Here any of the union filesystems except mergerfs would do (lack of cow support). Docker use case I am aware this is a main use case, but don't know the details - can someone provide guidance on this? Merging hard disks For example, you might have two sets of /home directories on different filesystems. Or you might be upgrading your home computer with a second hard disk, and want a single logical volume. This is where you don't actually want copy-on-write, so possibly mergerfs is the best choice. Union filesystem versus LVM for disk pooling I'll list some use cases that can be achieved with union filesystems but not LVM: If you are upgrading an existing system with a second disk, something like mergerfs might be better because LVM would require you to reformat the first hard disk hence destoying the data on it. A union filesystem would avoid this step. LVM might split a file over two physical hard disks (assuming RAID 0), so you would lose it if one hard disk fails. Some users might like, for example, to keep their /home directory on a USB stick that they can take away. In the use case of one virtual partition on two physical disks, with LVM you wouldn't need to worry about whether files get saved on one disk or the other. With mergefs, the system can automatically choose which one for you depending on how much free space is available.
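To make the Live CD use case above concrete, here is a rough overlayfs sketch with a tmpfs holding the delta (all paths are made up for illustration):

mount -t tmpfs tmpfs /run/rw
mkdir -p /run/rw/upper /run/rw/work /run/newroot
mount -t overlay overlay -o lowerdir=/run/readonly,upperdir=/run/rw/upper,workdir=/run/rw/work /run/newroot
# /run/newroot now looks writable; all changes land in the tmpfs and vanish on reboot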
I have randomly been reading about union file system which enables a user to mount multiple filesystems on top of one another simultaneously. However, am finding trouble deciding on which one to use(Unionfs vs Aufs vs Overlayfs vs mhddfs) and why as I have not found concrete information on the subject anywhere. I know for instance that overlayFS has been adopted in the mainstream Linux kernel which means it might get wider adoption. Would appreciate if someone would give me some perspective. Also I can't find any conceiving use-case for Union file system over something like LVM (as recommended by users in separate question) or RAID setup except in the fact that LVM requires formatting all the drives which might not be desirable if you already have valuable data on the drives.
Unionfs vs Aufs vs Overlayfs vs mhddfs, which one do I use
Managed to make /var/log an overlay: it shows the SSD log files plus any changes, and all changes are kept in RAM. Later I'll add syncing, so that changes become permanent every hour by copying the upper layer to the lower one.

#prepare layers
sudo mkdir -p /var/log.tmpfs
sudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512m,mode=0775 tmpfs /var/log.tmpfs
sudo mkdir -p /var/log.tmpfs/upper
sudo mkdir -p /var/log.tmpfs/work
sudo chown -R root:syslog /var/log.tmpfs
sudo chmod -R u=rwX,g=rwX,o=rX /var/log.tmpfs

#prepare overlay
sudo mkdir -p /var/log.overlay
sudo chown root:syslog /var/log.overlay
sudo chmod u=rwX,g=rwX,o=rX /var/log.overlay

#start overlay
sudo mount -t overlay -o rw,lowerdir=/var/log,upperdir=/var/log.tmpfs/upper,workdir=/var/log.tmpfs/work overlay /var/log.overlay
sudo mount --bind /var/log.overlay /var/log

To make the changes persistent, it's necessary to unmount the bind at /var/log, copy the files, then bind again, roughly as in the sketch below.
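A rough sketch of that persist step (it only mirrors the unmount-copy-rebind procedure described above; writing into the layers of a mounted overlay is not something overlayfs formally guarantees, so treat this as illustrative):

#!/bin/sh
# run from root's crontab, e.g. hourly
umount /var/log                              # drop the bind; /var/log is the real SSD directory again
cp -a /var/log.tmpfs/upper/. /var/log/       # copy the RAM-held changes down to the SSD
mount --bind /var/log.overlay /var/log       # re-establish the bind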
Instead of just mounting tmpfs on /var/log I want to use overlayfs. /var/log are writable tmpfs, but containing files were there before tmpfs mount. This old files are not in memory of tmpfs but in lower layer. only changes are stored in tmpfs, while old and unmodified files stored on SSD sometimes it should be possible to write changes to SSD, for example via cron. This should free up tmpfs memorySo, result should be: logs written to RAM, old and new boot logs accesable via same path. Changes are written sometimes to disk, by script. Point is to speed up a little, and safe SSD from many writes. (I saw similar thing in puppy linux, not for logs, but for all changes to root, but without installing it can't do the same, documentation not helps) I will do same for browser cookies/cache based on answer. But persistent write will be done on browser close. Can't turn off browser cache, need at least small cache to have same bugs in my web development as users can have because of cache.
Mount /var/logs as tmpfs, with help of overlayfs to save changes sometimes
/dev/mmcblk0p2 is the root filesystem where your Linux distribution is installed; 2.2GB is in use and 55GB is available. The /var/lib/docker/overlay2 directory is the location where Docker stores its images and containers. Docker uses a copy-on-write filesystem for this storage, which creates a new layer on top of the existing filesystem. This is the Docker overlay filesystem, and it is the location you see with df -h:

overlay 58G 2.2G 55G 4% /var/lib/docker/overlay2/b28da5a318945ac7ae1d17d26a635edb9a662c6116dea37fb4f6c13e1c76d7d2/merged

You have 55GB of storage available on your SD card to store files, install packages, use Docker, and so on. You can change this path to another directory, storage device or disk if you want (see the example below).

Docker storage drivers
Use the OverlayFS storage driver

The legacy overlay driver was used for kernels that did not support the "multiple-lowerdir" feature required for overlay2. All currently supported Linux distributions now provide support for this, and it is therefore deprecated.

Overlay in Docker

Docker uses the overlay filesystem to create images as well as to position the container layer on top of the image layers. When an image is downloaded, its layers are located inside the /var/lib/docker/overlay2 folder.

The merged folders are overlay filesystems; they take no disk space themselves. Instead, df reports the disk usage of the underlying filesystem, which in your case is /.
(quoted from: /var/lib/docker/overlay2/*/merged take too much space than it should be)
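As a side note on moving that path: on current Docker versions the usual way to keep images, containers and overlay layers on a different disk is the data-root setting in /etc/docker/daemon.json (the path below is just an example), followed by a daemon restart:

# /etc/docker/daemon.json
{
  "data-root": "/mnt/external/docker"
}

sudo systemctl restart docker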
I'm new to linux. I installed armbian to an sd card and everything works fine. The sd card is 64GB. Then I installed docker.io, docker-compose and portainer, nothing else. When I check for disk space with lsblk: # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT mmcblk0 179:0 0 59.5G 0 disk β”œβ”€mmcblk0p1 179:1 0 512M 0 part /boot └─mmcblk0p2 179:2 0 58.4G 0 part / mmcblk1 179:32 0 14.6G 0 disk mmcblk1boot0 179:64 0 4M 1 disk mmcblk1boot1 179:96 0 4M 1 disk zram0 254:0 0 50M 0 disk /var/log zram1 254:1 0 929.4M 0 disk [SWAP]Then with df: # df -h Filesystem Size Used Avail Use% Mounted on udev 796M 0 796M 0% /dev tmpfs 186M 8.0M 178M 5% /run /dev/mmcblk0p2 58G 2.2G 55G 4% / tmpfs 930M 0 930M 0% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 930M 0 930M 0% /tmp /dev/mmcblk0p1 511M 59M 453M 12% /boot /dev/zram0 49M 7.0M 38M 16% /var/log overlay 58G 2.2G 55G 4% /var/lib/docker/overlay2/b28da5a318945ac7ae1d17d26a635edb9a662c6116dea37fb4f6c13e1c76d7d2/merged tmpfs 186M 0 186M 0% /run/user/0Why are there 2 remaining 55 GB (/dev/mmcblk0p2 and the overlay filesystem)? Does this mean that I can only use the 55 GB space on the /var/lib/docker/overlay2/.../merged folder? Thank you
Remaining disk space on docker overlay filesystem
I ran into the exact same problem (trying to do the exact same thing: a read-only base system for a PXE boot environment). The behavior I'm seeing exactly matches that in this Ubuntu bug report. I cannot write changes to existing files, but I can delete them (but they're preserved in the lower filesystem... with the upper layer just recording their absence) and then write them back again (at which point the upper layer has a copy which I can edit). From what I can dig up, this is happening because OverlayFS and NFS don't play nice together. The thing that we're expecting to happen, when we try to modify a file from the lower filesystem, is called "copy up", and it breaks when OverlayFS tries to use a NFS filesystem because NFS doesn't support xattrs. Thus far, the only promising avenue I can find is that it's possible to use fuse_xattrs to emulate xattrs on your NFS mount (described here), but it requires that you have two mounts of your NFS share: the "real" one, and the "xattr-enhanced" one that needs the first.
I'm trying to set up a PXEboot environment in which the base system (served over NFS to the PXE clients) is read only, and the root filesystem is an overlayfs filesystem with the read-only NFS base system as the lowerdir and a tmpfs as the upper/work dir's. I edited an AuFS initramfs script to use OverlayFS, and it's working well, except that when you try to edit a file that's in the lowerdir (say, for example, /etc/environment), it is opened as read-only, which is not the case for new files (say, for example, /etc/foobar) or files that have already been copied up to the upper directory. The problem can be mitigated by simply doing a touch before attempting to edit the file, but, it's less than ideal and is likely to break other applications. AuFS didn't have this issue. Any advice? Here's the relevant part of the initramfs script (in /etc/initramfs-tools/scripts/init_bottom/00_overlayfs_init), edited for brevity. mkdir /overlay mkdir /local mkdir /remote# mount the temp file system and move real root out of the way mount -t tmpfs none /local mount --move ${rootmnt} /remotemkdir /local/rw mkdir /local/workmount -t overlayfs -o lowerdir=/remote,upperdir=/local/rw,workdir=/local/work overlay /overlay#test for mount points on overlay file system [ -d /overlay/ro ] || mkdir /overlay/remote [ -d /overlay/rw ] || mkdir /overlay/localmount --move /remote /overlay/remote mount --move /local /overlay/localmount --move /overlay ${rootmnt}edit: more info Trying to edit (with an editor, e.g. Vim) results in vim stating that the file is read only, and on :wq!, E166: Can't open linked file for writing. root@dark-node:~# echo FOO=bar >> /etc/environment -bash: /etc/environment: Permission denied root@dark-node:~# echo FOO=bar > /etc/environment -bash: /etc/environment: Permission denied root@dark-node:~# touch /etc/environment root@dark-node:~# echo FOO=bar >> /etc/environment root@dark-node:~# cat /etc/environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/game::/usr/local/games" FOO=bar root@dark-node:~# uname -a Linux dark-node 4.4.0-57-generic #78-Ubuntu SMP Fri Dec 9 23:50:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
OverlayFS Seamlessly Edit File in Lower Directory
The challenge of mounting root as overlayfs has been solved. Briefly, the 'lower', 'work' and 'upper' directories should be moved to the 'merge' dir. However, you should consider: 1) There is no need to do something if the 'lower' directory is present as a disk image. Just mounting it. If not, create tmpfs mount point on it and copy all needed files over NFS into it. 2) The 'upper' and 'lower' directories must be located in one filesystem. Creating another tmpfs mount point and thus placing 'upper' and 'lower' directories on it will be enough. 3) Ensure that your initrd.img has modules for NFS and Overlayfs. If they do not exist, then add them in /etc/initramfs-tools/modules. 4) Ensure that your initrd.img has the full version of the 'mount' command. It it does not exist, then add it over hooks in /etc/initramfs-tools/hooks. For example (some details have been omitted): /etc/initramfs-tools/hooks/mount_full:#!/bin/sh PREREQ="/bin/mount" prereqs() { echo "$PREREQ" }case $1 in prereqs) prereqs exit 0 ;; esac. /usr/share/initramfs-tools/hook-functions # Begin real processing below this linecopy_exec /bin/mount /bin/mount_fullexit 0 Finally, add the pre-mount script in /etc/initramfs-tools/scripts/init-premount/. For example: /etc/initramfs-tools/scripts/init-premount/ramboot:#!/bin/sh PREREQ="" prereqs() { echo "$PREREQ" }case $1 in prereqs) prereqs exit 0 ;; esac. /scripts/functions # Begin real processing below this line# Preparing work dirs mkdir /overlaytmp mkdir /overlaytmp/lower mkdir /overlaytmp/upper_and_work mkdir /overlaytmp/merge mkdir /ramboottmp# Preparing RAM disks and thus layers mount -t tmpfs -o size=100% none /overlaytmp/lower mount -t tmpfs -o size=100% none /overlaytmp/upper_and_work mkdir /overlaytmp/upper_and_work/upper mkdir /overlaytmp/upper_and_work/work... mount nfs_share /ramboottmp ...# Copy root content over NFS to RAM echo "Copying / to RAM ..." cp -rfa /ramboottmp/* /overlaytmp/lower # Preparing layers mount points mkdir /overlaytmp/lower/mnt/lower mkdir /overlaytmp/lower/mnt/upper_and_work # Lower layer will be read-only mount -o remount,ro /overlaytmp/lower# Mounting overlayfs mount -t overlay -olowerdir=/overlaytmp/lower,upperdir=/overlaytmp/upper_and_work/upper,workdir=/overlaytmp/upper_and_work/work none /overlaytmp/merge# Moving layers to merge layer mount --move /overlaytmp/lower /overlaytmp/merge/mnt/lower mount --move /overlaytmp/upper_and_work /overlaytmp/merge/mnt/upper_and_work# Moving merge layer to finally root mount --move /overlaytmp/merge ${rootmnt}umount /ramboottmp
I've been trying mount root (/) as overlayfs. OS is booting over NFS to RAM. I've added a premount script in initrd, which creates the 'work', 'upper' and 'lower' directories. During the boot process I'm copying the contents of NFS to the 'lower' dir. Overlayfs is being mounted into ${rootmnt} after that. Finally, the init script chroots to ${rootmnt} (next, init from real root etc...) and the OS works fine. Naturally I can't see the 'work' and 'upper' dirs. How can I do this? What must I change in initrd?
Mount root as overlayfs
If it's a case of tail not working at all, then it could be because your liveCD is using the overlayfs filesystem, which has a bug regarding notifications of modified files. You could try to move the log to another filesystem, such as /tmp if the application creating the log has an option to do so. You could also carry out your test in /tmp instead of your homedir.
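Following that suggestion: if /tmp is a separate tmpfs on your live session (it often is), the same experiment from the question should behave normally there:

cd /tmp
touch a
tail -f a &                                   # should now print new lines as they arrive
for i in $(seq 1 10); do echo $i >> a; sleep 1; done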
For some test I start Ubuntu Live from USB. I'm trying to use tail command to show debug log, but it doesn't work. I also test opening two terminals (t1, t2) with this code: t1: touch a t2: tail -f at1: for i in `seq 1 10`; do echo $i >> a; sleep 1; doneNothing in t2! What can be the cause?
tail -f produces no output in Ubuntu live CD
This probably has to do with the fact that for a read-only mount (which is probably the case for a Clonezilla USB key) a workdir is not needed and is left blank in the OverlayFS mount config. Thus the notice is saying that it is missing, but in this case that is nothing to worry about. You can just go ahead and clone your partition!
I would like to clone my partition /dev/sda1 on Ubuntu 14.04. I've downloaded Clonezilla in order to clone this partition and be able to reinstall it later. As recommended on the Clonezilla website, I used Tuxboot to install a Clonezilla ISO on my USB key. I started my PC from the USB key; it worked and I saw a GRUB-like menu containing a list of boot modes. The problem is the following: I can't boot in any mode, I get the following error:

overlayfs: missing 'workdir'

I have no idea what the problem is; it is the first time I have tried to clone a partition.
overlayfs: missing 'workdir' on CloneZilla start
This doesn't seem to be filesystem related but a file cache memory thing. To reproduce this exact case I first cloned the Emacs repo to the /usr/src/emacs. $ docker run --mount type=bind,source=/usr/src/emacs/.git,destination=/mnt/emacs.git,ro -d --rm --name test debian tail -f /dev/null $ docker exec -it test sh -c 'apt update && apt install -y git && git clone /mnt/emacs.git' $ docker stats --no-stream test CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS c468a203b9d2 test 0.00% 46.91MiB / 15.52GiB 0.30% 30.5MB / 667kB 0B / 786MB 1In my case I didn't get ~500MiB memory usage, I'm guessing this has something to do with the filesystem, but the point is the resulting memory usage is about the same as when cloning from the network. Next I deleted the resulting /emacs directory and the memory usage dropped. $ docker exec -it test rm -rf /emacs $ docker stats --no-stream test CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS c468a203b9d2 test 0.00% 15.84MiB / 15.52GiB 0.10% 30.5MB / 667kB 0B / 786MB 1To test whether this was filesystem related I skipped the cloning part and tried mving the emacs directory from another directory located in the same mount. Assume /usr/src/emacs and the overlay upper directory of the container are in the same block device mount. $ docker run -d --rm --name test debian tail -f /dev/null $ TEST_UPPER=$(docker container inspect --format '{{.GraphDriver.Data.UpperDir}}' test) $ mv -v /usr/src/emacs "$TEST_UPPER/emacs" renamed '/usr/src/emacs' -> '/var/lib/docker/overlay2/dc228a74029510c61ce454549248132bec806686c4da9eac8909d89595ff5a32/diff/emacs' $ docker stats --no-stream test CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS e700998acb86 test 0.00% 400KiB / 15.52GiB 0.00% 3.79kB / 866B 0B / 0B 1Again, but docker cp-ing to a container directory in the root: $ docker run -d --rm --name test debian tail -f /dev/null $ docker exec -it test mkdir /emacs $ tar -C /usr/src/emacs -c . | docker cp - test:/emacs $ docker exec -i ls -la /emacs | wc -l 42 $ docker stats --no-stream test CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 4ea520193998 test 0.00% 444KiB / 15.52GiB 0.00% 3.79kB / 866B 0B / 0B 1According to this, it seems that all that memory usage is cache memory used by apt and git. The recommendation is simple, don't modify your container during runtime.
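If you want to convince yourself that the reported usage is page cache charged to the container's cgroup rather than process memory, one crude check (it drops caches globally on the host and hurts performance, so it is not for routine use) is to flush the kernel caches and watch the number fall:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop pagecache, dentries and inodes
docker stats --no-stream test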
I'm trying to determine why a web server running in a Dockerized environment is consuming more memory than I expect it to. While investigating this problem, I discovered the following behavior which doesn't make sense to me:Create a Docker container, docker run -d --name test ubuntu:22.04 tail -f /dev/null Check docker stats, memory usage is reported as 400KiB / 62.68GiB Check docker top test, only one process running (tail -f /dev/null) Now create a large number of files and directories in the running container, for example docker exec -it test sh -c 'apt update && apt install -y git && git clone https://git.savannah.gnu.org/git/emacs.git' Check docker stats again, memory usage is now reported as 494.9MiB / 62.68GiB Check docker top test, verify that still only one process is running (tail -f /dev/null)So what is using this almost 500MB of memory in my Docker container? I was under the impression that memory was consumed by processes, yet there is only one process in the container and it cannot be responsible. Is the memory usage from the filesystem? The host system is running btrfs. I use the default storage driver for Docker, overlay2, and here is information on the overlay filesystem: % docker inspect test | jq '.[].GraphDriver.Data.MergedDir' -r /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/merged% mount | grep overlay overlay on /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/merged type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/JADZ5UYJDWI5EDAAGALCRWUM3B:/var/lib/docker/overlay2/l/JK3H4JAYNUAUSN2W3V2DQ6TB6C,upperdir=/var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/diff,workdir=/var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/work)% sudo ls -lAd /var/lib/docker/overlay2/l/JADZ5UYJDWI5EDAAGALCRWUM3B/emacs ls: cannot access '/var/lib/docker/overlay2/l/JADZ5UYJDWI5EDAAGALCRWUM3B/emacs': No such file or directory% sudo ls -lAd /var/lib/docker/overlay2/l/JK3H4JAYNUAUSN2W3V2DQ6TB6C/emacs ls: cannot access '/var/lib/docker/overlay2/l/JK3H4JAYNUAUSN2W3V2DQ6TB6C/emacs': No such file or directory% sudo ls -lAd /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/diff/emacs drwxr-xr-x 1 root root 602 Feb 25 16:08 /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/diff/emacs% sudo ls -lAd /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/merged/emacs drwxr-xr-x 1 root root 602 Feb 25 16:08 /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/merged/emacsSince my base filesystem is btrfs, I would expect that the cloned Git repository would be residing on disk in btrfs (under /var/lib/docker/overlay2/559d41c89d074c45a1ae89109ebec145e8fd4d929151819f08aaae26f73f7bda/diff), which is not within any special volume mount, and thus would not be consuming memory. If I remove the added files and directories, e.g. with docker exec test rm -rf emacs, then the memory usage drops instantly to 68.16 MiB / 62.68GiB. Questions:What is consuming this memory? What misunderstanding do I have about Docker containers or the overlay2 filesystem that leads me to assume adding files and directories to the working layer should not consume memory? 
Is there a way I can reduce the amount of memory that is consumed in a Docker container from sources other than process RAM, by changing configuration either with the Docker daemon or with the containerized workload?I reviewed existing questions on UNIX Stack Exchange under the [docker] tag and wasn't able to find any that helped my understanding. I also read the official Docker documentation on overlayfs and briefly reviewed the Linux kernel documentation for overlayfs, but didn't find any information that would explain the memory usage. My end goal is to reduce resource consumption for an idle Dockerized workload running in production, since I am billed based on CPU and memory usage. Based on production metrics I observe that my workloads do not release as much memory as they should when they become idle, even though the processes that were using that memory have been terminated. Tests above were performed on Pop!_OS 22.04 with Docker CE 23.0.1 and Linux kernel 6.0.12 using btrfs. System parameters for my btrfs My /etc/fstab showing the mount options for the filesystem - basically, the defaults for everything: UUID=ee43bc7e-1d9d-4300-a46a-5d2d772a1e0f / btrfs defaults,subvol=@ 0 0 UUID=ee43bc7e-1d9d-4300-a46a-5d2d772a1e0f /.snapshots btrfs defaults,subvol=@snapshots 0 0 UUID=127d5704-324b-4a50-97bf-9f5f4646ddd5 /home/raxod502 btrfs defaults,subvol=@ 0 0 UUID=127d5704-324b-4a50-97bf-9f5f4646ddd5 /home/raxod502/.snapshots btrfs defaults,subvol=@snapshots 0 0I take automated snapshots using snapper, about 100 in total kept by my retention policy: % sudo snapper -c system list | wc -l 41 % sudo snapper -c home list | wc -l 44I have never explicitly used reflinks, nor software that to my knowledge creates them automatically. The most advanced btrfs usage I have is snapper, as mentioned above. Here is the output of sudo slaptop -o: https://gist.github.com/raxod502/843a6580dc6fa0f9949d2f4dc03b5c23 Happy to provide any further system configuration details that may be helpful. Comparison for ext4 I ran the same test on ext4 on another Linux system, got roughly the same results. Memory usage before clone was 328KiB / 969.4MiB. Memory usage after clone was 233.5MiB / 969.4MiB. Memory usage after deleting cloned repository was 16.84MiB / 969.4MiB.
Disk space and inodes appear to consume RAM in a Docker container
// Experience based on kubuntu 22 lts livecd after chroot the last step of ramdisk (/cdrom/casper/initrd) is run-init {rootmnt}" "${init}" "$@" which do something like chroot {rootmnt}" "${init}" "$@" after that step it may affect the observation of the original mount point. before chroot Fortunately there are ways to pause at an interactive shell before chrooting (press ctrl+d or exit to continue) kernel boot cmdline args break=top,premount,mount,mountroot,bottom,init may do the trick. //BTW: manjaro 22 only support break=premount (same as break=y) or break=postmount, not support multiple values compound with ,also another cmdline args might help debug or debug=y which turn on detail log during ramdisk running // boot args can be edit at grub menu by press e // those information comes from reading the ramdisk script in ramdisk you already unmkinitramfs and found scripts/casper which handle casper-rw persistence things ./scripts/casper setup_overlay() { image_directory="$1" rootmnt="$2" # Mount up the writable layer, if it is persistent then it may well # tell us what format we should be using. mkdir -p /cow cowdevice="tmpfs" cow_fstype="tmpfs" cow_mountopt="rw,noatime,mode=755" # Looking for "$(root_persistence_label)" device or file if [ -n "${PERSISTENT}" ]; then cowprobe=$(find_cow_device "$(root_persistence_label)") if [ -b "${cowprobe}" ]; then cowdevice=${cowprobe} cow_fstype=$(get_fstype "${cowprobe}") cow_mountopt="rw,noatime" else [ "$quiet" != "y" ] && log_warning_msg "Unable to find the persistent medium" fi fi mount -t ${cow_fstype} -o ${cow_mountopt} ${cowdevice} /cow || panic "Can not mount $cowdevice on /cow"code here determine the persistence partition, and mount it keep /cow after chroot in ramdisk shell, after /cow /root ready mkdir /root/_cow mount -o bind /cow /root/_cowthen after chroot into system, /_cow can access the original outer /cow
I wanted to see what files are added on top of ISO 9660 when LiveUSB Linux is running. When booted with persistence upper and work folders are on USB drive clearly seen. I run mount on Linux booted from LiveUSB "usual way" (w/out persistence) and saw / is mounted via overlayfs and upperdir=/cow/upper. But sudo ls /cow gives no such file or directory. Where is /cow and how to see its contents? Added 1: I was able to extract contents of initrd from liveUSB via unmkinitramfs (see https://unix.stackexchange.com/a/495524/446998) $ find . -type f -exec bash -c 'cat {} | grep "/cow/upper" && ls -l {}' \; if [ ! -d /cow/upper ]; then mkdir -p /cow/upper /cow/lost+found|/cow/upper|/cow/log|/cow/crash|/cow/install-logs-*) continue ;; mv "$cow_content" /cow/upper mount -t overlay -o "upperdir=/cow/upper,lowerdir=$mounts,workdir=/cow/work" "/cow" "$rootmnt" || panic "overlay mount failed" -rw-r--r-- 1 alex alex 33834 Jun 24 2020 ./main/scripts/casper Next step I envision is to understand how /cow is created because is not see in contents of initrd
Where is /cow for Linux booted from LiveUSB?
I think this is because target.work/work on mount belongs to root. Could you try to chown that dir after mount? However I found simpler reproduction case, we just need dir which is in lower, but not in upper: # from user mkdir -p upper lower/test2 target target.work # from root mount -t overlay -o lowerdir=lower,upperdir=upper,workdir=target.work overlay target # from user again unshare -Ur touch target/test2/1If I insert chown user:user target.work/work after mount then all works okay. Not sure if we should think about this as overlayfs bug or feature :)
OverlayFS mount works weird when accessed from within the unprivileged user namespace. Best to be explained in example: ~# uname -a Linux host 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 (2015-08-03) x86_64 GNU/Linux ~# runuser - test -c id uid=2000(test) gid=2000(test) groups=2000(test) ~# cat /etc/subuid /etc/subgid | grep test test:200000:65536 test:200000:65536 ~# cd ~test /home/test# mkdir -p upper/test1 lower/test2 target target.work /home/test# chown -R test:test upper lower target target.work /home/test# mount -t overlay -o lowerdir=lower,upperdir=upper,workdir=target.work overlay target /home/test# mount | grep test overlay on /home/test/target type overlay (rw,relatime,lowerdir=lower,upperdir=upper,workdir=target.work)Overlay is mounted and works as expected: /home/test# runuser - test ~$ cd target ~/target$ ls -l total 8 drwxr-xr-x 2 test test 4096 Sep 15 13:50 test1 drwxr-xr-x 2 test test 4096 Sep 15 13:50 test2 ~/target$ mkdir test3 ~/target$ mkdir test2/test2-3 ~/target$ mkdir test1/test1-3Lets try unprivileged user namespace now ~/target$ ^D /home/test/target# cd .. /home/test# umount target /home/test# rm -rf upper lower target target.work /home/test# mkdir -p upper/test1 lower/test2 target target.work /home/test# chown -R 200000:200000 upper lower target target.work /home/test# mount -t overlay -o lowerdir=lower,upperdir=upper,workdir=target.work overlay target /home/test# mount | grep test overlay on /home/test/target type overlay (rw,relatime,lowerdir=lower,upperdir=upper,workdir=target.work)Make sure that unprivileged namespaces are allowed: /home/test# sysctl -w kernel.unprivileged_userns_clone=1Okay, lets try it: /home/test# runuser - test ~$ lxc-usernsexec -m u:0:200000:65536 -m g:0:200000:65536 -m u:65536:2000:1 -m g:65536:2000:1 -- /bin/bash ~# cd target ~/target# ls -l total 8 drwxr-xr-x 2 root root 4096 Sep 15 13:57 test1 drwxr-xr-x 2 root root 4096 Sep 15 13:57 test2So far so good. ~/target# mkdir test3 ~/target# mkdir test2/test2-3 ~/target# mkdir test1/test1-3 mkdir: cannot create directory 'test1/test1-3': Permission deniedAnd this is where it gets broken. Aufs works fine in the same scenario (except for debian 4.1 kernel doesn't support aufs anymore). Is there any way to make it working?
OverlayFS doesn't work with unprivileged user namespace
What kernel are you using? it seems that a bug was introduced in kernel 4.2: https://github.com/coreos/rkt/issues/1537
I have two directories (a and b), which are NFS shares containing the files foo.txt and bar.txt. I want to merge these two directories into the directory merge (it does not have to be writable). This is possible with the command:

sudo mount -t overlay -olowerdir=a:b overlay merge

At first sight everything is OK:

.
├── a
│   └── foo.txt
├── b
│   └── bar.txt
└── merge
    ├── bar.txt
    └── foo.txt

But I cannot read the content of the files:

$ cat merge/foo.txt
cat: merge/foo.txt: No such device or address

This occurs only on the NFS shares; on a plain FS there is no problem. According to the documentation https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt:

An overlay filesystem combines two filesystems - an 'upper' filesystem and a 'lower' filesystem. A read-only overlay of two read-only filesystems may use any filesystem type.

So I guess NFS is not the problem.
Merge two NFS shares with OverlayFS
https://github.com/torvalds/linux/commit/e54ad7f1ee263ffa5a2de9c609d58dfa27b21cd9

/*
 * procfs isn't actually a stacking filesystem; however, there is
 * too much magic going on inside it to permit stacking things on
 * top of it
 */
s->s_stack_depth = FILESYSTEM_MAX_STACK_DEPTH;

This might not be a very informative answer, but the kernel developers specifically don't support it.
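Since procfs flatly refuses to be stacked on, the usual fallback (also listed as a workaround in the question below) is a read-only bind mount; a minimal sketch, with /tmp/proc as an arbitrary example target:

mkdir -p /tmp/proc
mount --bind /proc /tmp/proc
mount -o remount,bind,ro /tmp/proc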
Run the following commands on Linux (4.4.59 and 4.9.8 are tested) and it will fail:

mkdir -p /tmp/proc
mount -t overlay overlay -o lowerdir=/proc:/tmp/proc /tmp/proc

and there is an error message in dmesg:

overlayfs: maximum fs stacking depth exceeded

Why can't /proc be a layer of an overlay file system? If I replace /proc with /dev or /sys, it mounts without issue, so it seems there is something special about /proc.

P.S. The use case is creating a safer chroot environment; I want to make /dev, /sys and /proc read-only in the chroot. There are two known workarounds:

a read-only bind mount. The limitation is that two commands are required instead of one.
a read-only special mount: mount -t proc -o ro none /tmp/proc. The limitation is that sub-mounts are not mapped automatically.

Anyway, I'm still curious why /dev and /sys play well with overlay but /proc doesn't. The question is migrated from Stack Overflow.
Why can't /proc be a layer of a overlay file system (overlayfs) on linux?
This is because on Debian you do not have a kernel driver for overlayfs: so you'll need to use a userspace filesystem driver for overlayfs. First make sure it's installed, sudo apt install fuse-overlayfsThen add this argument to podman (NOT podman run), --storage-opt mount_program=/usr/bin/fuse-overlayfsIn your case it should look like this podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [...]This option can also be set in ~/.config/containers/storage.conf under mount_program
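For reference, a sketch of what the corresponding ~/.config/containers/storage.conf entry could look like; the exact section names can differ between containers-storage versions, so treat this as an assumption and verify against the storage.conf documentation for your install:

# ~/.config/containers/storage.conf (sketch)
[storage]
driver = "overlay"

[storage.options.overlay]
# use the userspace fuse-overlayfs driver instead of the kernel one
mount_program = "/usr/bin/fuse-overlayfs"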
When I run podman with --storage-opt ignore_chown_errors=true I am getting:

Error: kernel does not support overlay fs: 'overlay' is not supported over extfs at /home/user/.local/share/containers/storage/overlay: backing file system is unsupported for this graph driver
Error: kernel does not support overlay fs: 'overlay' is not supported over extfs
Enable the kernel module with:

modprobe overlay

Alternatively, add overlay to systemd's module loading:

echo "overlay" > /etc/modules-load.d/overlay.conf

and restart.
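A quick way to confirm the module is actually loaded before retrying Docker (just a sketch):

modprobe overlay
lsmod | grep overlay || echo "overlay module is still not loaded"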
I'm trying to learn about Docker and OverlayFS. As suggested by the official website, I installed the latest kernel:

4.8.11-1.el7.elrepo.x86_64

and, as they suggest, to ensure that overlay is good to go I used:

$ lsmod | grep overlay

But no success. Is this because I don't have any FS or directory that is mounted with overlay, or should I do something else, like installing or loading the overlay kernel module? If so, can you suggest a tutorial for doing that?

For information, I'm on:

CentOS Linux release 7.2.1511 (Core)
Docker and OverlayFS
The folders in /mnt/config/.data and /mnt/config/.work contain your changes. You can move them out of the way to create new ones. Unmount the overlay and remount it with a clean upper dir:

umount /etc
mv /mnt/config/.data /mnt/config/.data.old
mv /mnt/config/.work /mnt/config/.work.old
mkdir /mnt/config/.data
mkdir /mnt/config/.work
mount -t overlay etc_overlay -o lowerdir=/etc,upperdir=/mnt/config/.data,workdir=/mnt/config/.work /etc

All of your changes to the old overlay will be found in /mnt/config/.data.old if you need them. /mnt/config/.work.old should be empty aside from the work folder if unmounted properly.
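If you only need to discard some of the changes, it can help to inspect the old upper directory first; overlayfs records deletions as character devices with device number 0/0 (whiteouts). A sketch:

# files that were added or modified relative to the read-only squashfs
find /mnt/config/.data.old -type f
# whiteouts, i.e. files deleted relative to the read-only squashfs
find /mnt/config/.data.old -type c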
I have an embedded device running BusyBox, there are a number of directories mounted with overlayfs with work and data directories mounted on separate UBI partitions using the following style of command. The main root filesystem is a squashfs read only image that's been updated with a newer version. I need to delete the changes that have been made to certain files so that the changes to squashfs take place. How can I do that? mount -t overlay etc_overlay -o lowerdir=/etc,upperdir=/mnt/config/.data,workdir=/mnt/config/.work /etc
How can I delete changes from an overlay fs?
There isn't really a notion of "what occupies the space" in an overlay filesystem. Each branch of the union has its own space occupation. Run du on both branches. If it's getting more full, the read-write branch is the culprit.

Since the overlay mount shadows its branches (/root_ro and /root_rw are hidden by the mount on /), you need to gain access to the branches. You can do that by mounting the block device again (Linux supports this, at least for most block device types):

mkdir -p /mnt/root_ro /mnt/root_rw
mount /dev/sda1 /mnt/root_rw
mount -t ubifs ubi0:rootfs /mnt/root_ro
du /mnt/root_ro /mnt/root_rw
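Once the branches are reachable, a BusyBox-friendly way to see what is filling the read-write branch is plain du piped through sort; this is only a sketch, and the applet options available depend on your BusyBox build:

du -ak /mnt/root_rw | sort -n | tail -n 20   # 20 largest entries, sizes in KiB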
I have a Linux system with a read-only root filesystem and a read-write overlayfs mounted over it:

# mount
overlayfs on / type overlayfs (rw,relatime,lowerdir=/root_ro/,upperdir=/root_rw/)
...

The overlayfs is almost full:

# df
Filesystem     1K-blocks    Used Available Use% Mounted on
overlayfs        4003548 3995012      8536  99% /
...

How can I identify the files consuming the read/write part of the overlayfs? du does not differentiate between space occupied on the ro and rw media. I found the option -fstype type in find, but my system uses BusyBox and its find does not support this option.

EDIT: output from cat /proc/mounts:

rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
none /dev devtmpfs rw,relatime,size=1026976,nr_inodes=256744,mode=755 0 0
/dev/sda1 /root_rw ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
ubi0:rootfs /root_ro ubifs ro,noatime,nodiratime 0 0
overlayfs / overlayfs rw,relatime,lowerdir=/root_ro/,upperdir=/root_rw/ 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620 0 0
What occupies space in overlayfs
This was a bug with the version of the kernel I was using.
I am getting Out of Memory errors while my swap isn't touched. I have 4GB of RAM and 4GB of swap space. I enabled the swap via swapon, and when running free I see the swap listed there. I'm thinking that perhaps there is some issue with overlayfs / tmpfs and swap all working together. I have always had the opposite problem, trying to prevent swap usage, so I can't figure out what changed. Also, I am using a grsecurity-enabled kernel. Is it possible that memory allocation works differently there?

Snapshot of free:

              total        used        free      shared  buff/cache   available
Mem:        3586392      157292       67052      141664     3362048     3236524
Swap:       4194300           0     4194300

After I added the swap configuration to /etc/fstab, these numbers changed; however, I still don't see any swap usage. The only other thing I changed was the tmpfs size for /dev/shm and my overlayfs volume (/rw), both of which were not using much space to begin with, so the change should not have had any impact.

              total        used        free      shared  buff/cache   available
Mem:        3586392      571392     1714036      146096     1300964     2818004
Swap:       4194300           0     4194300

I restarted a bunch of services and they're still running, and the biggest difference I see is that free memory is now showing 1.7GB free versus 67MB before. I'm still confused as to why that had any impact. If I enable swap through swapon, it should behave the same way as if I configure it through /etc/fstab and run swapon -a. Furthermore, it isn't even being used yet anyway.
linux - OOM / swap not being used
Future readers be careful: This question is asking to coalesce two directories containing a lot of sub-directories. This should not be used for normal git operations. For that you should generally try to use the .gitignore file or git submodules.If you want to combine two directories into one place, even where there are overlapping files you can use a simple overlayfs mount. In the context you are asking for this is best done as a read-only overlayfs. If it was going to be read/write it would put all changed files in a separate directory. It's highly unlikely that you're looking for that. To create a readonly overlay you don't specify an upper or workdir: mount -t overlay overlay -o lowerdir=specialRepos1/vendor/:specialRepos2/vendor/ repoDirs1/vendor/Archlinux has a good description of this feature here: https://wiki.archlinux.org/index.php/Overlay_filesystem
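Applied to the layout in the question below, a read-only merge of the two main trees could look like this sketch (the merged mountpoint name is arbitrary; earlier lowerdir entries take precedence over later ones):

mkdir -p merged
sudo mount -t overlay overlay -o lowerdir=repoDirs1:repoDirs2 merged
# the vendor trees from specialRepos1/specialRepos2 can be layered the same way,
# exactly as in the command shown above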
I have downloaded several git repositories arranged in different source trees:

repoDirs1 (with child dirs including vendor)
repoDirs2 (with child dirs including vendor)
specialRepos1
specialRepos2

To get a workable source tree, I have to do the following:

1. cp -rp repoDirs2/* repoDirs1/
2. cp -rp specialRepos1/vendor/* repoDirs1/vendor/
3. cp -rp specialRepos2/vendor/* repoDirs1/vendor/

This works, but it changes the original repoDirs1 source tree and makes it difficult for me to manage the trees with repo commands. I searched and found that mount has an option to do overlays. However, after reading multiple examples I still cannot figure out how to write a correct mount command sequence to solve my problem. I either get "overlayfs unknown" or other bad option errors. Could anyone please give a clear example? Thanks a lot. I am using Ubuntu 20.04.
How to use overlay mount to combine multiple directories with different sub directories?
I was sure, that module is statically compiled in kernel, but I was wrong: CONFIG_OVERLAY_FS=m. After adding the overlay module to initrd everything works fine.
I am trying to use a read-only overlayfs (no workdir and upperdir) inside a custom initrd. This works fine in a completely booted OS:

mkdir /tmp/ovl1 /tmp/ovl2 /tmp/merged
mount -t overlay none -o lowerdir=/tmp/ovl1:/tmp/ovl2 /tmp/merged

This also works if I use busybox sh as the shell, which has a built-in mount command. Inside the initrd shell the directories are successfully created, but the mount command gives this error:

mount: mounting none on /tmp/merged failed: No such device

Here is the output of the mount command inside the initrd:

rootfs on / type rootfs (...
sysfs on /sys type sysfs (...
proc on /proc type proc (...
udev on /dev type devtmpfs (...
devpts on /dev/pts type devpts (...
tmpfs on /run type tmpfs (...

I have no idea how to debug this one :(

P.S. I currently use AUFS and it works fine, but it was rejected from the mainline kernel and it's recommended to switch to overlayfs.
Can not mount overlayfs inside initrd
Backslash will escape it. Since the mount command sends it as-is (as can be seen with strace), it has to be the kernel that uses a backslash to escape it.

mount -t overlay \
    -o 'lowerdir=/tmp/a\,b/lower,upperdir=/tmp/a\,b/upper,workdir=/tmp/a\,b/work' \
    overlay '/tmp/a,b/merged'

I think the kernel's octal escapes seen in /proc/mounts are there to help parsers: a , will always be a separator. It is then up to the parser to finally resolve \134\054 into \, and then , as part of a path or filename.

This is part of the overlayfs option handling in the kernel, in linux/fs/overlayfs/super.c:

static char *ovl_next_opt(char **s)
{
	char *sbegin = *s;
	char *p;

	if (sbegin == NULL)
		return NULL;

	for (p = sbegin; *p; p++) {
		if (*p == '\\') {
			p++;
			if (!*p)
				break;
		} else if (*p == ',') {
			*p = '\0';
			*s = p + 1;
			return sbegin;
		}
	}
	*s = NULL;
	return sbegin;
}

where the backslash can be seen to escape the character next to it (thus avoiding the specific handling of the comma that would otherwise have happened below).
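If the paths come from variables, the escaping can be done programmatically; a small sketch (the helper name ovl_escape is made up here):

# escape backslashes first, then commas, so the kernel's option parser
# sees '\,' instead of a bare separator
ovl_escape() {
    printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/,/\\,/g'
}

lower=$(ovl_escape '/tmp/a,b/lower')
upper=$(ovl_escape '/tmp/a,b/upper')
work=$(ovl_escape '/tmp/a,b/work')

sudo mount -t overlay overlay \
    -o "lowerdir=$lower,upperdir=$upper,workdir=$work" '/tmp/a,b/merged'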
To mount an overlay, lowerdir, upperdir and workdir are given as options to mount(8) or as data to mount(2). What logic is applied in order to escape commas in those paths? I have tried double commas and even quoting, with no success. There are two workarounds I found, but they are not exactly what I want:

Relative paths: as long as the last component doesn't contain commas, the following works:

mkdir /tmp/a,b /tmp/a,b/{upper,lower,work,merged}
cd /tmp/a,b
sudo mount \
  -t overlay \
  -o 'lowerdir=./lower,upperdir=./upper,workdir=./work' \
  overlay \
  '/tmp/a,b/merged'

But I reinforce that it doesn't work if the last component contains commas.

Moving the path after mounting: I believe the kernel keeps track of the inodes, as the option values in /proc/self/mountinfo don't change:

mkdir /tmp/a\ b /tmp/a\ b/{upper,lower,work,merged}
sudo mount \
  -t overlay \
  -o 'lowerdir=/tmp/a b/lower,upperdir=/tmp/a b/upper,workdir=/tmp/a b/work' \
  overlay \
  '/tmp/a b/merged'
mv '/tmp/a b' '/tmp/a,b'
fgrep merged /proc/self/mountinfo
314 86 0:56 / /tmp/a,b/merged rw,relatime shared:217 - overlay overlay rw,lowerdir=/tmp/a\040b/lower,upperdir=/tmp/a\040b/upper,workdir=/tmp/a\040b/work

Note: As the kernel escapes space, tab, newline and backslash characters with three-digit octal escapes, I have also tried to escape the comma with \044, with no success; it seems it then wants to escape the backslash again.
How to escape comma on mount options for overlay
Creating a writable overlay like this inside of Docker can be quite tricky. One option you have is to use a volume or a bind mount into the container. Inside the container these will not be an overlay, so their respective directory can be used as an upper. That of course has a trip hazard if you accidentally share the volume or fail to delete it afterwards.

Another option is to manually create a file and mount it as a loopback filesystem (a sketch follows below):

truncate or fallocate a new file
mkfs.ext4 it
mount it with -o loop
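A rough sketch of that loop-device approach; the size, file name and mount points are arbitrary examples, and it only works if the container is allowed to perform loop mounts (e.g. --privileged or equivalent device/capability grants):

# create a 10 GiB sparse file and put a filesystem on it
truncate -s 10G /scratch.img        # or: fallocate -l 10G /scratch.img
mkfs.ext4 -F /scratch.img

# mount it; this directory is not overlayfs, so it can host upper/work
mkdir -p /scratch
mount -o loop /scratch.img /scratch
mkdir -p /scratch/upper /scratch/work /scratch/merged

# build the writable overlay on top of some lower directory
mount -t overlay overlay \
    -o lowerdir=/some/lower,upperdir=/scratch/upper,workdir=/scratch/work \
    /scratch/merged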
I would like to create a new writable overlay from within a docker container. As the root filesystem in docker is already an overlay, it can't be used as the upperdir of another overlay. This answer suggests using tmpfs for the upperdir, and this works for me. However, I need to write more data than will fit in RAM, and this container has no swap space. How can I create a writable overlay which is not limited by physical RAM? I am far from an expert in Linux/Unix, so feel free to explain the basics.
Create a writeable overlay from inside docker, without using tmpfs?
What you describe cannot be done using bind mounts or links. However, you can use overlayfs. An overlayfs mount shows a merged filesystem containing files and directories from both branches. The upper filesystem takes precedence over the lower filesystem. If a file exists in both, the upper filesystem's version is visible; directories are merged. Writes are made to the upper filesystem (files are copied up from lower to upper if they do not yet exist in the upper filesystem). In your situation, use /home/mvanorder as the lower filesystem and /mnt/data/home/mvanorder as the upper filesystem.

Note the behaviour on deletion: changes are always made in the upper filesystem rather than in the filesystem(s) where the file exists. A whiteout file is created in the upper filesystem when a file is deleted in the overlayfs mount; the whiteout makes the file invisible in the overlayfs mount. When a new directory is created in overlayfs, it is marked opaque. For opaque directories, only the upper filesystem's version is used, even if a directory of the same name exists in the lower filesystem. This means that if you delete a directory in overlayfs and then re-create it, only the upper version is visible. Exact details about overlayfs are explained in the documentation.
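A sketch of what that mount could look like for the paths in the question below; the workdir and mountpoint names are made up, and the workdir must be an empty directory on the same filesystem as the upper directory:

sudo mkdir -p /mnt/data/home/.mvanorder-work /mnt/data/home/mvanorder-merged
sudo mount -t overlay overlay \
    -o lowerdir=/home/mvanorder,upperdir=/mnt/data/home/mvanorder,workdir=/mnt/data/home/.mvanorder-work \
    /mnt/data/home/mvanorder-merged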
I have 2 directories:

/home/mvanorder
/mnt/data/home/mvanorder

I have multiple distros on my computer that I periodically rotate out and replace with new ones. For convenience, all shared files are in /mnt/data/home/mvanorder and symlinks are created in /home/mvanorder to point to them. Does anyone know if it's possible to have the OS look for files in /home/mvanorder and then, if they're not found, look in /mnt/data/home/mvanorder? Similar to a mount --bind, but where it would look in the original directory before looking in the bound directory.
Mount bind or link 2 dirctories into 1
I tried to chroot to the merged directory. The result was just as expected: I had rw rootfs, the only thing I missed was virtual kernel filesystems. So after mounting the overlay I did the following:

TARGETDIR="/tmp/usbstick/merged"
mount -t proc proc $TARGETDIR/proc
mount -t sysfs sysfs $TARGETDIR/sys
mount -t devtmpfs devtmpfs $TARGETDIR/dev
mount -t tmpfs tmpfs $TARGETDIR/dev/shm
mount -t devpts devpts $TARGETDIR/dev/pts

And then linked the mtab:

chroot $TARGETDIR rm /etc/mtab 2> /dev/null
chroot $TARGETDIR ln -s /proc/mounts /etc/mtab
chroot $TARGETDIR
I need to keep one system as intact as possible. Only soldering of HW stuff is allowed :-). I need to install a test software package, and this package must not stay there afterwards. I have the following situation:

mmcblk partition mounted as /, ext4, read-only, kernel v4.6.0
USB stick (only one partition), mounted to /tmp/usbstick, ext4
Created directories on the USB stick: /tmp/usbstick/upperdir, /tmp/usbstick/workdir
Using the following line:

mount -t overlay overlay -o lowerdir=/,upperdir=/tmp/usbstick/upperdir,workdir=/tmp/usbstick/workdir /

After that, / is still read-only. The only partial success I had was when I created /tmp/usbstick/merged and gave it as the "merged" directory to the module, instead of /. Then I saw my whole rootfs in that directory and it was mounted rw, but I can't use it there. What should I do?
OverlayFS over read only rootfs fail
Finally found the answer to my question. 🎉 The solution is to use overlayfs in combination with bindfs, which allows mounting one folder as another folder with different permissions/owner/etc.

# general form:
# sudo bindfs --map=origOwner/newOwner:@origGroup/@newGroup /srcFolder /dstMountpoint

mkdir /data/prod-overlay/dev1/prod   # mountpoint
sudo bindfs --map=prod/dev1:@prod/@dev1 /data/prod-overlay/dev1/overlay /data/prod-overlay/dev1/prod
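For context, a sketch of how the overlay itself could be mounted before the bindfs remap; the upper and work directory names are assumptions loosely based on the folder layout in the question below:

# read-write overlay over the production data; dev1's writes land in the upper dir
sudo mount -t overlay overlay \
    -o lowerdir=/data/prod,upperdir=/data/prod-overlay/dev1/upper,workdir=/data/prod-overlay/dev1/work \
    /data/prod-overlay/dev1/overlay

# then re-expose the merged view with dev1 as the apparent owner
mkdir /data/prod-overlay/dev1/prod   # mountpoint
sudo bindfs --map=prod/dev1:@prod/@dev1 /data/prod-overlay/dev1/overlay /data/prod-overlay/dev1/prod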
Use case: I have a lot of production data, and copying it for dev purposes would be impractical. I was thinking that OverlayFS could be a solution until a problem with permissions arose. Let's assume I have the following folder structure:

/data/prod - production data (files + subfolders) owned by prod:prod with mode 664
/data/prod-overlay/dev1/{overlay,upper,lower} - data for developers (user dev1:dev1 in this case)

Dev users can read prod data but not modify it. My question is: Is it possible to make the files in /data/overlayfs/developer1/overlay writable even when the permissions of the original files do not allow it? Or is there any other (simple) way to achieve such behaviour while keeping the prod data read-only for dev users?

For example: There is a file /data/prod/subfolder/file (prod:prod, 664) and user dev1 wants to remove or change /data/prod-overlay/dev1/overlay/subfolder/file.

Note: dev1 can remove the file /data/prod-overlay/dev1/overlay/file (with rm -f), probably because he is the owner of the overlay folder.
OverlayFS - Is it possible to make overlay layer writable by anyone/specific user (different than original owner)?