Your first question is already answered by the text you quoted:

This is done in the PREROUTING chain, just as the packet comes in; this means that anything else on the Linux box itself (routing, packet filtering) will see the packet going to its 'real' destination.

I.e. routing and packet filtering.
For your second question: you seem to be pinging from the system itself. Those packets therefore originate locally and do not pass through the PREROUTING chain. You will need to originate those packets from outside that system.
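A quick way to see this in action (a sketch; the address is a placeholder, assuming a second machine on the eth0 network):

# From another machine on the eth0 network:
ping -c 3 <IP-of-eth0>

# On the Linux box, the pkts counter of the DNAT rule should now increase:
iptables -t nat -L PREROUTING -v -n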
|
I am trying to experiment with DNAT in PREROUTING. I found a tutorial here. It contains the following sentence:

This is done in the PREROUTING chain, just as the packet comes in; this means that anything else on the Linux box itself (routing, packet filtering) will see the packet going to its 'real' destination.

I want to ask what the author means by the last part, i.e. that anything else on the Linux box itself will see the packet going to its 'real' destination?
I tried a test where I have a virtual device (tap) and I redirected incoming ICMP packets to that tap device (my tap device address is 10.0.4.1/24 and there is a program listening on the tap device, so its state is UP):
# iptables -t nat -A PREROUTING -i eth0 -p icmp -j DNAT --to-destination 10.0.4.2

When I ping an external IP, this rule never gets used (the pkts count in iptables remains 0 for this rule). Is this observation related to what the author is saying?
| Changing Destination IP address using iptables and DNAT |
As user A.B. points out, there is an incompatibility issue between nftables, which Buster uses, and iptables. The best way is to save iptables rules with iptables-save, to be restored with iptables-restore between compatible versions.
Remove the offending line, and restore the rules:
iptables-restore < rules.q

Re-add the rule to your configuration and save:
iptables -A INPUT -p tcp -m multiport --dports 22 -j ACCEPT
iptables-save > rules.q

Now try restoring again:
iptables-restore < rules.q

Use iptables -L to verify all of your rules are in place.
|
I have an /etc/iptables/rule.v4 file that contains many rules; below are the lines where I see the issue:
-A INPUT -p tcp -m multiport --dports 22 -j ACCEPT
-A INPUT -p udp -m multiport --dports 16384:32768 -j ACCEPT

When I tried to do iptables-restore it failed with the below error:
root@rs-dal:/etc/iptables# iptables-restore rules.q
iptables-restore v1.8.2 (nf_tables): multiport needs `-p tcp', `-p udp', `-p udplite', `-p sctp' or `-p dccp'
Error occurred at line: 26
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
root@rs-dal:/etc/iptables#

Why is it failing? The same rules had worked successfully on Debian Jessie.
Also when I changed the rules like below, it worked.
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p udp --dport 16384:32768 -j ACCEPT

I checked with iptables -L and these rules applied successfully, as below:
ACCEPT udp -- anywhere anywhere udp dpts:16384:32768
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh

Is the rule that I currently have valid syntax?
Below are my OS details:
root@rs-dal:/etc/iptables# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux buster/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/" | iptables-restore failed in Debian buster/sid if it has --multiport option in the rules file |
Introduction and simplified reproducer setup
Docker loads the br_netfilter module. Once loaded, it affects all present and future network namespaces. This is for historical and compatibility reasons, as described in my answer for this Q/A.
So when this is done on the host:

service docker start
# When using linux bridges instead of openvswitch, disable iptables on bridges
sysctl net.bridge.bridge-nf-call-iptables=0

this affects only the host network namespace. The future network namespace created for hostR will still get:
# docker exec hostR sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1

Below is a much simpler bug reproducer than OP's. It requires neither Docker nor a VM: it can be run on the current Linux host, needing only the iproute2 package, and it creates a single bridge within the affected hostR named network namespace:
#!/bin/sh

modprobe br_netfilter # as would have done Docker
sysctl net.bridge.bridge-nf-call-iptables=0 # actually it won't matter: netns hostR will still get 1 when created

ip netns add hostA
ip netns add hostB
ip netns add hostR

ip -n hostR link add name br address 02:00:00:00:01:00 up type bridge
ip -n hostR link add name eth1 up master br type veth peer netns hostA name eth1
ip -n hostR link add name eth2 up master br type veth peer netns hostB name eth1

ip -n hostA addr add dev eth1 192.168.10.1/24
ip -n hostA link set eth1 up
ip -n hostB addr add dev eth1 192.168.10.2/24
ip -n hostB link set eth1 up

ip netns exec hostR nft -f - <<'EOF'
table bridge filter # for idempotence
delete table bridge filter # for idempotence

table bridge filter {
    chain forward {
        type filter hook forward priority 0;
        meta nftrace set 1
    }
}
EOF

Note that br_netfilter still has its default settings in the hostR network namespace:
# ip netns exec hostR sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1

Running on one side:
ip netns exec hostR nft monitor trace

And elsewhere:
ip netns exec hostA ping -c 4 192.168.10.2

will trigger the problem: no IPv4 is seen, only ARP (often seen delayed a few seconds later, in typical lazy ARP update fashion). This always triggers for kernels 6.6.x or below, and may or may not trigger for kernels 6.7.x or above (see later).

Effects of br_netfilter
This module creates interactions between the bridge path and the Netfilter hooks for IPv4, which are normally for the routing path but now also apply to the bridge path. Here the hooks for IPv4 are both iptables and nftables in the ip family (likewise this happens for ARP and IPv6; IPv6 is not used here, so we won't mention it again).
That means the frames now reach the Netfilter hooks as described in ebtables/iptables interaction on a Linux-based bridge: 5. Chain traversal for bridged IP packets:

A bridged packet never enters any network code above layer 1 (Link Layer). So, a bridged IP packet/frame will never enter the IP code. Therefore all iptables chains will be traversed while the IP packet is in the bridge code. The chain traversal will look like this:

Figure 5. Chain traversal for bridged IP packets

They are supposed to reach bridge filter forward (blue) first, followed by ip filter forward (green)...
... but not when the original hook priorities are changed, which in turn changes the order of the boxes above. The original hook priorities for the bridge family are described in nft(8):

Table 7. Standard priority names and hook compatibility for the bridge family

Name     Value   Hooks
dstnat   -300    prerouting
filter   -200    all
out       100    output
srcnat    300    postrouting

So the schematic above expects filter forward to hook at priority -200, not 0. If using 0, all bets are off.
Indeed, when the running kernel was compiled with option CONFIG_NETFILTER_NETLINK_HOOK, nft list hooks can be used to query all hooks in use in the current namespace, including br_netfilter's. For kernel 6.6.x or before:
# ip netns exec hostR nft list hooks
family ip {
hook prerouting {
-2147483648 ip_sabotage_in [br_netfilter]
}
hook postrouting {
-0000000225 apparmor_ip_postroute
}
}
family ip6 {
hook prerouting {
-2147483648 ip_sabotage_in [br_netfilter]
}
hook postrouting {
-0000000225 apparmor_ip_postroute
}
}
family bridge {
hook prerouting {
0000000000 br_nf_pre_routing [br_netfilter]
}
hook input {
+2147483647 br_nf_local_in [br_netfilter]
}
hook forward {
-0000000001 br_nf_forward_ip [br_netfilter]
0000000000 chain bridge filter forward [nf_tables]
0000000000 br_nf_forward_arp [br_netfilter]
}
hook postrouting {
+2147483647 br_nf_post_routing [br_netfilter]
}
}

One can see that the kernel module br_netfilter (not deactivated in this network namespace) hooks at -1 for IPv4 and again at 0 for ARP: the expected hook order isn't met and disruption happens for bridge filter forward at OP's priority 0.
On kernel 6.7.x and later, since this commit, the default order after the reproducer is run changes:
# ip netns exec hostR nft list hooks
[...]
family bridge {
hook prerouting {
0000000000 br_nf_pre_routing [br_netfilter]
}
hook input {
+2147483647 br_nf_local_in [br_netfilter]
}
hook forward {
0000000000 chain bridge filter forward [nf_tables]
0000000000 br_nf_forward [br_netfilter]
}
hook postrouting {
+2147483647 br_nf_post_routing [br_netfilter]
}
}

With the simplification, br_netfilter hooks only at priority 0 to handle forwarding, but what matters is that it's now after bridge filter forward: the expected order, which won't cause OP's issue.
As having two hooks at the same priority is to be considered undefined behavior, this is a fragile setup: one can still trigger the problem from here (at least on kernel 6.7.x) simply by running:
rmmod br_netfilter
modprobe br_netfilter

which now changes the order:
[...]
hook forward {
0000000000 br_nf_forward [br_netfilter]
0000000000 chain bridge filter forward [nf_tables]
}
[...]

triggering the problem again, since br_netfilter is now once more before bridge filter forward.
How to avoid this
To work around this in the network namespace (or container), choose one of these:

don't have br_netfilter loaded at all
On host:
rmmod br_netfilter

or disable the effects of br_netfilter in the additional network namespace
As explained, each new network namespace gets this feature enabled again when created. It has to be disabled where it matters: in the hostR network namespace:
ip netns exec hostR sysctl net.bridge.bridge-nf-call-iptables=0

Once done, all br_netfilter hooks disappear in hostR, causing no disruption anymore when the unexpected order happens.
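If the kernel was compiled with CONFIG_NETFILTER_NETLINK_HOOK (as used above), this can be verified: the br_netfilter entries should no longer appear in the listing:

ip netns exec hostR nft list hooks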
There's one caveat. This doesn't work when using only Docker:
# docker exec hostR sysctl net.bridge.bridge-nf-call-iptables=0
sysctl: error setting key 'net.bridge.bridge-nf-call-iptables': Read-only file system
# docker exec --privileged hostR sysctl net.bridge.bridge-nf-call-iptables=0
sysctl: error setting key 'net.bridge.bridge-nf-call-iptables': Read-only file system

because Docker protects some settings to prevent them from being tampered with by the container.
Instead, one has to bind-mount (using ip netns attach ...) the container's network namespace, so it can be used by ip netns exec ... without getting its mount namespace in the way:
ip netns attach hostR $(docker inspect --format '{{.State.Pid}}' hostR)

which now allows running the previous command to affect the container:
ip netns exec hostR sysctl net.bridge.bridge-nf-call-iptables=0

or use a priority that guarantees bridge filter forward to happen first
As seen in the previous table, the standard priority for filter in the bridge family is -200. So use -200, or else at most the value -2, to always happen before br_netfilter whatever the kernel version:
ip netns exec hostR nft delete chain bridge filter forward
ip netns exec hostR nft add chain bridge filter forward '{ type filter hook forward priority -200; }'
ip netns exec hostR nft add rule bridge filter forward meta nftrace set 1

or likewise, if using Docker:
docker exec hostR nft delete chain bridge filter forward
docker exec hostR nft add chain bridge filter forward '{ type filter hook forward priority -200; }'
docker exec hostR nft add rule bridge filter forward meta nftrace set 1

Tested on:

(OP's) alpine 3.19.1
Debian 12.5 with stock Debian kernel 6.1.x
6.6.x with CONFIG_NETFILTER_NETLINK_HOOK
6.7.11 with CONFIG_NETFILTER_NETLINK_HOOK

Not tested with openvswitch bridges.

Final note: avoid Docker and the br_netfilter kernel module as much as possible when doing network experiments. As my reproducer shows, it's quite easy to experiment using ip netns alone when only networking is involved (this might become more difficult if daemons (such as OpenVPN) are needed in an experiment).
|
I am experimenting with netfilter in a Docker container. I have three containers, one a "router", and two "endpoints". They are each connected via pipework, so an external (host) bridge exists for each endpoint<->router connection. Something like this:
containerA (eth1) -- hostbridgeA -- (eth1) containerR
containerB (eth1) -- hostbridgeB -- (eth2) containerR

Then within the "router" container containerR, I have a bridge br0 configured like so:
bridge name bridge id STP enabled interfaces
br0 8000.3a047f7a7006 no eth1
eth2I have net.bridge.bridge-nf-call-iptables=0 on the host as that was interfering with some of my other tests.
containerA has IP 192.168.10.1/24 and containerB has 192.168.10.2/24.
I then have a very simple ruleset that traces forwarded packets:
flush ruleset

table bridge filter {
chain forward {
type filter hook forward priority 0; policy accept;
meta nftrace set 1
}
}

With this, I find that only ARP packets are traced, and not ICMP packets. In other words, if I run nft monitor while containerA is pinging containerB, I can see the ARP packets traced, but not the ICMP packets. This surprises me, because based on my understanding of nftables' bridge filter chain types, the only time a packet wouldn't go through the forward stage is if it's sent via input to the host (in this case containerR). Per the Linux packet flow diagram, I would still expect ICMP packets to take the forward path, just like ARP. I do see the packets if I trace pre- and post-routing. So my question is, what's happening here? Is there a flowtable or other short-circuit I'm not aware of? Is it specific to container networking and/or Docker? I can check with VMs rather than containers, but am interested if others are aware of, or have encountered this, themselves.
Edit: I have since created a similar setup with a set of Alpine Virtual Machines in VirtualBox. ICMP packets do reach the forward chain, so it seems something in the host, or Docker, is interfering with my expectations. I will leave this unanswered until I, or somebody else, can identify the reason, in case it's useful for others to know.
Thanks!
Minimum reproducible example
For this I'm using Alpine Linux 3.19.1 in a VM, with the community repository enabled in /etc/apk/repositories:
# Prerequisites of host
apk add bridge bridge-utils iproute2 docker openrc
service docker start

# When using linux bridges instead of openvswitch, disable iptables on bridges
sysctl net.bridge.bridge-nf-call-iptables=0

# Pipework to let me avoid docker's IPAM
git clone https://github.com/jpetazzo/pipework.git
cp pipework/pipework /usr/local/bin/

# Create two containers each on their own network (bridge)
pipework brA $(docker create -itd --name hostA alpine:3.19) 192.168.10.1/24
pipework brB $(docker create -itd --name hostB alpine:3.19) 192.168.10.2/24

# Create bridge-filtering container then connect it to both of the other networks
R=$(docker create --cap-add NET_ADMIN -itd --name hostR alpine:3.19)
pipework brA -i eth1 $R 0/0
pipework brB -i eth2 $R 0/0
# Note: `hostR` doesn't have/need an IP address on the bridge for this example

# Add bridge tools and netfilter to the bridging container
docker exec hostR apk add bridge bridge-utils nftables
docker exec hostR brctl addbr br
docker exec hostR brctl addif br eth1 eth2
docker exec hostR ip link set dev br up

# hostA should be able to ping hostB
docker exec hostA ping -c 1 192.168.10.2
# 64 bytes from 192.168.10.2...

# Set nftables rules
docker exec hostR nft add table bridge filter
docker exec hostR nft add chain bridge filter forward '{type filter hook forward priority 0;}'
docker exec hostR nft add rule bridge filter forward meta nftrace set 1

# Now ping hostB from hostA while nft monitor is running...
docker exec hostA ping -c 4 192.168.10.2 & docker exec hostR nft monitor

# Ping will succeed, nft monitor will not show any echo-request/-response packets traced, only arps
# Example:
trace id abc bridge filter forward packet: iif "eth2" oif "eth1" ether saddr ... daddr ... arp operation request
trace id abc bridge filter forward rule meta nftrace set 1 (verdict continue)
trace id abc bridge filter forward verdict continue
trace id abc bridge filter forward policy accept
...
trace id def bridge filter forward packet: iif "eth1" oif "eth2" ether saddr ... daddr ... arp operation reply
trace id def bridge filter forward rule meta nftrace set 1 (verdict continue)
trace id def bridge filter forward verdict continue
trace id def bridge filter forward policy accept

# Add tracing in prerouting and the icmp packets are visible:
docker exec hostR nft add chain bridge filter prerouting '{type filter hook prerouting priority 0;}'
docker exec hostR nft add rule bridge filter prerouting meta nftrace set 1

# Run again
docker exec hostA ping -c 4 192.168.10.2 & docker exec hostR nft monitor
# Ping still works (obviously), but we can see its packets in prerouting, which then disappear from the forward chain, but ARP shows up in both.
# Example:
trace id abc bridge filter prerouting packet: iif "eth1" ether saddr ... daddr ... ... icmp type echo-request ...
trace id abc bridge filter prerouting rule meta nftrace set 1 (verdict continue)
trace id abc bridge filter prerouting verdict continue
trace id abc bridge filter prerouting policy accept
...
trace id def bridge filter prerouting packet: iif "eth2" ether saddr ... daddr ... ... icmp type echo-reply ...
trace id def bridge filter prerouting rule meta nftrace set 1 (verdict continue)
trace id def bridge filter prerouting verdict continue
trace id def bridge filter prerouting policy accept
...
trace id 123 bridge filter prerouting packet: iif "eth1" ether saddr ... daddr ... ... arp operation request
trace id 123 bridge filter prerouting rule meta nftrace set 1 (verdict continue)
trace id 123 bridge filter prerouting verdict continue
trace id 123 bridge filter prerouting policy accept
trace id 123 bridge filter forward packet: iif "eth1" oif "eth2" ether saddr ... daddr ... arp operation request
trace id 123 bridge filter forward rule meta nftrace set 1 (verdict continue)
trace id 123 bridge filter forward verdict continue
trace id 123 bridge filter forward policy accept
...
trace id 456 bridge filter prerouting packet: iif "eth2" ether saddr ... daddr ... ... arp operation reply
trace id 456 bridge filter prerouting rule meta nftrace set 1 (verdict continue)
trace id 456 bridge filter prerouting verdict continue
trace id 456 bridge filter prerouting policy accept
trace id 456 bridge filter forward packet: iif "eth2" oif "eth1" ether saddr ... daddr ... arp operation reply
trace id 456 bridge filter forward rule meta nftrace set 1 (verdict continue)
trace id 456 bridge filter forward verdict continue
trace id 456 bridge filter forward policy accept
# Note the trace id matching across prerouting and forward chains

I tried this with openvswitch as well, but for simplicity I went with a Linux bridge example which yields the same result anyway. The only real difference with openvswitch is that net.bridge.bridge-nf-call-iptables=0 isn't needed, IIRC.
| Netfilter and forward chain traces ARP but not other packets |
TL;DR
When doing an experiment where a network namespace receives traffic and does NAT on it, one can see that whatever the priority given to the type nat hook prerouting chain, it doesn't matter with regard to the filter chains priorities: NAT always happen at exactly prerouting hook priority -100 aka NF_IP_PRI_NAT_DST. Priority between NAT chains themselves is preserved.
You looked at the .hook entries in definitions which are for actual actions during packet traversal, but overlooked the .ops_register/.ops_unregister entries defined only for NAT hooks which introduce a different behavior when the chain is registered.
Tests done with kernel 6.5.x and nftables 1.0.9, some links provided on https://elixir.bootlin.com/ with latest LTS kernel at this date without patch revision: 6.1 (not 6.1.x).
To summarize:

- NAT acts at special hook priorities, and only these priorities (rather than the priority given when adding the chain) are relevant when comparing with other hook types such as filter or route: NAT chains register differently than other chains. Still, the given priorities apply internally between different NAT chains hooking at the same place.
- route follows normal priorities just like filter (no special registration).
- don't use exact priorities such as NF_IP_PRI_NAT_DST (or various other NAT-related exact values) elsewhere, because then the precise interaction between how nftables and NAT hook into Netfilter might be undefined (for example: it could change depending on the order of creation, or the behavior could change depending on kernel version) instead of deterministic. For example, use -101 or less to be before DNAT, or -99 or more to be after DNAT, but don't ever use -100, to avoid undefined behavior.
- the same warning applies for other special facilities' priorities, described for example there, such as NF_IP_PRI_CONNTRACK_DEFRAG or NF_IP_PRI_CONNTRACK etc. (and for iptables priorities when also interacting with iptables rules and needing a deterministic outcome).

Experiment
I left aside cases such as family inet: one can just check it will behave the same with an adequate ruleset and test case.
Example ruleset (to be loaded using nft -f ...):
table t # for idempotence
delete table t # for idempotence

table t {
    chain pf1 {
        type filter hook prerouting priority -250; policy accept;
        udp dport 5555 meta nftrace set 1 counter
    }
    chain pf2 {
        type filter hook prerouting priority -101; policy accept;
        udp dport 5555 counter accept
        udp dport 6666 counter accept
    }
    chain pf3 {
        type filter hook prerouting priority -99; policy accept;
        udp dport 5555 counter accept
        udp dport 6666 counter accept
    }
    chain pn1 {
        type nat hook prerouting priority -160; policy accept;
        counter
    }
    chain pn2 {
        type nat hook prerouting priority 180; policy accept;
        udp dport 5555 counter dnat to :6666
    }
    chain pn3 {
        type nat hook prerouting priority -190; policy accept;
        counter
    }
    chain pn4 {
        type nat hook prerouting priority 190; policy accept;
        udp dport 5555 counter dnat to :7777
        udp dport 6666 counter dnat to :7777
    }
}

This ruleset will change a received UDP port 5555 into port 6666 instead, in pn2. pn1, pn3 and pn4 are here just for priority comparison between NAT chains (pn4 is also here to explain that NAT of a given type (DNAT, SNAT...) happens only once). There's a receiving application on UDP port 6666 (so the flow isn't deleted by an ICMP destination port unreachable); I used

socat UDP4-LISTEN:6666,fork EXEC:date

for this test and (interactively) sent two packets from a remote client using

socat UDP4:192.0.2.2:5555 -
One would believe that the NAT chain pn2 with priority 180 performing a DNAT would happen after filter chain pf3 with priority -99. But that's not what happens between type nat and other types: NAT is special. Using nft monitor trace like below:
# nft monitor trace
trace id 4ab9ba62 ip t pf1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf1 rule udp dport 5555 meta nftrace set 1 counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pf1 verdict continue
trace id 4ab9ba62 ip t pf1 policy accept
trace id 4ab9ba62 ip t pf2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf2 rule udp dport 5555 counter packets 0 bytes 0 accept (verdict accept)
trace id 4ab9ba62 ip t pn3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn3 rule counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pn3 verdict continue
trace id 4ab9ba62 ip t pn3 policy accept
trace id 4ab9ba62 ip t pn1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn1 rule counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pn1 verdict continue
trace id 4ab9ba62 ip t pn1 policy accept
trace id 4ab9ba62 ip t pn2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn2 rule udp dport 5555 counter packets 0 bytes 0 dnat to :6666 (verdict accept)
trace id 4ab9ba62 ip t pf3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 6666 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf3 rule udp dport 6666 counter packets 0 bytes 0 accept (verdict accept)

trace id 46ad0497 ip t pf1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf1 rule udp dport 5555 meta nftrace set 1 counter packets 0 bytes 0 (verdict continue)
trace id 46ad0497 ip t pf1 verdict continue
trace id 46ad0497 ip t pf1 policy accept
trace id 46ad0497 ip t pf2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf2 rule udp dport 5555 counter packets 0 bytes 0 accept (verdict accept)
trace id 46ad0497 ip t pf3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 6666 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf3 rule udp dport 6666 counter packets 0 bytes 0 accept (verdict accept)
^C

One can see that all prerouting NAT hooks happen between pf2 and pf3, i.e. between priorities -101 and -99: at priority -100, which is NF_IP_PRI_NAT_DST as used in Netfilter's own structures (static const struct nf_hook_ops nf_nat_ipv4_ops[]). Chain ip t pf3 sees port 6666 and not 5555.
If a NAT statement has been applied, the following NAT hooks (in the same hook location) are skipped by Netfilter, so pn4 never gets a chance to be traversed at all in the example above (with only 2 packets of the same flow, initially to port 5555) and never appears. This behavior also differs from type filter, where the next hook is still traversed (e.g. pf3 is still traversed after pf2).
As usual, the next packet in the flow doesn't trigger any NAT chain anymore, since only packets creating a new flow (conntrack state NEW) are sent to NAT chains; so the next packet doesn't even appear traversing the pnX chains anymore. Priorities between the four prerouting NAT chains are honored: the priority order is pn3 (-190), pn1 (-160), pn2 (180) (and then there would be pn4 (190), but it doesn't get the chance).
Note: the fact that the packets/bytes counters don't appear increased in the same run of nft monitor trace looks like a bug or a missing feature to me (they are incremented when checking nft list ruleset).
type nat hooks use a different registering function than the default used for other nftables hooks, so they can be handled differently:

.ops_register = nf_nat_ipv4_register_fn,
.ops_unregister = nf_nat_ipv4_unregister_fn,

The chain is thus handled by NAT (which is managed by Netfilter), and in hook NF_INET_PRE_ROUTING (still provided by Netfilter to nftables) this will be done at priority NF_IP_PRI_NAT_DST.
This is not done for type filter (nor route) which will then use a common nftables method rather than the specified one.
|
This is a question specifically about nftables chain types in the Linux kernel.
I don't understand how they're processed. I've been staring at the kernel code for a while, and it looks to me like an nftables "chain" is attached to a netns as a hook entry (in e.g. struct netns_nf.hooks_ipv4 for IPv4).
I don't see anything that discriminates on the "type" of the chain—filter, nat, or route—while creating or processing the chain. It looks like all chain types would simply get stuffed in as hook entries, and only the struct nf_hook_entry.hook function would be type-specific. For example, I think nf_hook_entry.hook would be the function nft_nat_do_chain for a type nat chain.
Looking at this table of which combinations of family, hook, and type exist, let's say I added two chains to the input hook, one with type filter and one with type nat. Let's further say that both chains are created with the same priority.
Questions:

Is my hypothetical scenario even possible, two chains on the same hook, only varying by type? If not, where does the kernel prevent this?
If it is possible, what will determine the order that these two chains run in? Is there something I'm missing that runs e.g. chains of type nat before chains of type filter? Or will it be down to whichever chain was added first vs. second (and maybe kernel version, etc.)?

There is an excellent related answer that's about chains with the same priority, but the specific case there is with two chains of the same type.
I am asking this question with the ultimate intent of understanding why nftables has a concept of "type" at all.
I know, for example, that the handler for type route chains may call ip_route_me_harder (not a joke!) if certain fields of a packet are changed by a chain, and this is unique to chains of type route. I know type nat has a few restrictions on its priority. I have also read that type nat chains are only called for the first packet of a connection, but I haven't been able to locate that exact restriction anywhere in the code (though maybe it's nf_nat_inet_fn in nf_nat_core.c?).
I appreciate any pointers you can give me to help me understand how and where type is handled for nftables chains in the kernel!
Edit: This answer seems to suggest that nftables "types" are nearly a stylistic choice, though it does point out the special behaviors of the route type. Another answer there further muddies my waters by saying that a NAT rule cannot be added to a chain of type filter, which (if true) is very confusing to me. Where is such a restriction implemented? (Only in userspace?)
| nftables: Are chains of multiple types all evaluated for a given hook? |
My understanding is that you are confusing the Ethernet source address that you modify with tc (link layer only) with the inner CHADDR field (client's hardware address) that was embedded by the client inside the DHCPDISCOVER request (application layer, which won't ever be altered by tc).
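One way to see both addresses at once (nothing assumed here beyond standard tcpdump; the interface name is the one from the question) is to capture the DHCP traffic verbosely: the link-layer header will show the rewritten source MAC, while the decoded BOOTP/DHCP payload will still show the original address in its Client-Ethernet-Address field:

tcpdump -vvv -e -n -i tap0 'udp and (port 67 or port 68)'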
|
I am using tc to change the MAC address of incoming packets on a TAP interface (tap0) as follows where mac_org is the MAC address of a guest in a QEMU virtual machine and mac_new is a different MAC address that mac_org should be replaced with.
tc qdisc add dev tap0 ingress handle ffff:
tc filter add dev tap0 protocol ip parent ffff: \
flower src_mac ${mac_org} \
action pedit ex munge eth src set ${mac_new} pipe \
action csum ip pipe \
action xt -j LOG

I also add an iptables rule to log UDP packets on the input hook.
iptables -A INPUT -p udp -j LOG

syslog shows that indeed the DHCP discover packet is changed accordingly. The tc log entry looks as follows:
IN=tap0 OUT= MAC=ff:ff:ff:ff:ff:ff:${mac_new}:08:00 SRC=0.0.0.0 DST=255.255.255.255 LEN=338 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=UDP SPT=68 DPT=67 LEN=318

and the log entry of the netfilter input hook, which follows the tc ingress hook as the locally incoming packet is passed towards the socket, shows the same result slightly differently formatted:
IN=tap0 OUT= MACSRC=${mac_new} MACDST=ff:ff:ff:ff:ff:ff MACPROTO=0800 SRC=0.0.0.0 DST=255.255.255.255 LEN=338 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=UDP SPT=68 DPT=67 LEN=318

Before starting QEMU I run dnsmasq on tap0, which surprisingly shows the output:
DHCPDISCOVER(tap0) ${mac_org}

Running strace -f -x -s 10000 -e trace=network dnsmasq ... shows a recvmsg call that contains ${mac_org} instead of ${mac_new}:
recvmsg(4, {msg_name={sa_family=AF_INET, sin_port=htons(68), sin_addr=inet_addr("0.0.0.0")}, msg_namelen=16, msg_iov=[{iov_base="... ${mac_org} ..." ...

How can that happen? It almost appears as if the packet is altered after the netfilter input hook.
| MAC address rewriting using tc |
There are 3 problems.

no error is displayed
This looks to be a bug in nftables 1.0.6, see following bullets.
Here with the same version and OP's ruleset in /tmp/ruleset.nft:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
[...]
# nft -f /tmp/ruleset.nft
/tmp/ruleset.nft:7:38-45: Error: unknown datatype ip_proto
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
^^^^^^^^
/tmp/ruleset.nft:6:9-15: Error: set definition does not specify key
map dns_nat {
^^^^^^^

Error: unknown datatype ip_proto
The original linked Q/A used the correct type inet_proto. This should not have been replaced with ip_proto, which is an unknown type. So replace:

type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service

with the correct original spelling:
type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service

A list of available types can be found in nft(8) at PAYLOAD EXPRESSION, and more precisely for this case at IPV4 HEADER EXPRESSION:

Keyword    Description             Type
[...]
protocol   Upper layer protocol    inet_proto
[...]

typeof ip protocol <=> type inet_proto (not type ip_proto).
Normally typeof should be preferred to type to avoid having to guess the correct type, but as I wrote in the linked Q/A, some versions of nftables might not cope correctly with this precise case. The replacement would have been:
typeof ip saddr . ip daddr . ip protocol . th dport : ip daddr . th dport

which is almost a cut/paste from the rule using it, but its behavior should be thoroughly tested.

no error is displayed - take 2
Once this previous error is fixed (and the result put in /tmp/ruleset2.nft), then, as OP wrote, trying again the ruleset fails silently:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
# echo $?
1
#

The only clue it failed is the non-zero return code.
While with a newer nftables version:
# nft -V
nftables v1.0.8 (Old Doc Yak #2)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
/tmp/ruleset2.nft:16:9-12: Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
^^^^
#

Now the error is displayed. Whatever the issue was in 1.0.6, it has been fixed at least with version 1.0.8.

Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
Because NAT is done in the inet family (combined IPv4+IPv6) rather than in either the ip (IPv4) or ip6 (IPv6) family, one parameter which is usually optional becomes mandatory: stating the IP version NAT should be applied to (even if one could infer it from the map's layout (IPv4)). The documentation says:

NAT STATEMENTS
snat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
dnat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
masquerade [to :PORT_SPEC] [FLAGS]
redirect [to :PORT_SPEC] [FLAGS]

[...]
When used in the inet family (available with kernel 5.2), the dnat and
snat statements require the use of the ip and ip6 keyword in case an
address is provided, see the examples below.

So:

dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;

should be replaced with:
dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat

The original Q/A didn't state the family, so it can be assumed it was the default ip family, which wouldn't require this.
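Putting both fixes together, a corrected extract of OP's configuration would look like this (just a sketch reusing OP's own defines):

table inet nat {
    map dns_nat {
        type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
        flags interval
        elements = {
            $src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
        }
    }
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat
    }
}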
Of course, this will work with nftables 1.0.6, only the error reporting had a problem. The return code will now be 0. |
I'm working from the answer of this question and man nft in order to create some dnat rules in my nftables config.
The relevant config extract is:
define src_ip = 192.168.1.128/26
define dst_ip = 192.168.1.1
define docker_dns = 172.20.10.5

table inet nat {
map dns_nat {
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
flags interval
elements = {
$src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
}
}
chain prerouting {
type nat hook prerouting priority -100; policy accept;
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
}
}

When I apply this rule with nft -f, I see no command output so I presume it has succeeded. However, when I inspect the ruleset using nft list ruleset, the rules aren't present. When the dnat to ... line is commented out the rules appear to be applied, but when the line is present the rules are not applied.
The collection of rules in the prerouting chain I'm attempting to replace is:
ip saddr $src_ip ip daddr $dst_ip udp dport 53 dnat to $docker_dns:5353;
...

Version information:
# nft -v
nftables v1.0.6 (Lester Gooch #5)
# uname -r
6.1.0-11-amd64

Why might this not be working? Thanks
| nftables dnat map rule failing silently |
It appears they are not packaged on Fedora since Fedora 36:

# drop vendor-provided configs, they are not really useful
rm -f $RPM_BUILD_ROOT/%{_datadir}/nftables/*.nft

Instead, a "more advanced default config" is shipped with the files /etc/nftables/main.nft, router.nft and nat.nft:

# Sample configuration for nftables service.
# Load this by calling 'nft -f /etc/nftables/main.nft'.

Anyway, you should create your own tables, especially considering that having different hook types in the same table (e.g. filter + nat) is how things should be done with nftables, because separating them would hinder functionality (e.g. sharing the same set across chains with a different type requires them to be in the same table). nftables' tables are not an exact equivalent of iptables' tables.
If you need this file to follow some example then yes, the file you found is the one you were looking for. For 1.0.1 this file and other related files are found there instead.
|
I am trying to locate the file /etc/nftables/inet-filter which is referenced in the readme for a project I've inherited. When I installed nftables, the only files that existed in etc/nftables were:
. .. main.nft nat.nft osf router.nft

I found an inet-filter.nft file at git.netfilter.org which consists of:
#!/usr/sbin/nft -f

table inet filter {
chain input { type filter hook input priority 0; }
chain forward { type filter hook forward priority 0; }
chain output { type filter hook output priority 0; }
}

but I'm not sure if this is the file that my project was referencing.
If anyone has actually used the inet-filter.nft file, does this look familiar? Or is inet-filter.nft obsolete for some reason?
Thanks.
Fedora system: Linux fedora 5.18.11-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jul 12 22:52:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Vagrant: Vagrant 2.3.0
nftables: nftables v1.0.1 (Fearless Fosdick #3)
| Where is /etc/nftables/inet-filter? |
Don't take it as RTFM, yet documentation has just the thing for such uses (https://libvirt.org/formatnwfilter.html#usage-of-variables-in-filters). Using two arrays of parameters and a single iterator should suffice, and I quote:
Accessing the same variables using a single iterator, for example by
using the notation $SRCIPADDRESSES[@1] and $DSTPORTS[@1], would result
in parallel access to both lists and result in the following combinations:

Yet I can't tell how to provide such parameters, as I am still hitting my head over passing parameters to filters. Comments on that topic would be appreciated.
Side note: In the same chapter it is shown one can get a matrix of parameters with a separate iterator for each.
Edit:
One needs to provide the array of arguments as streams, as shown at the top of the question. For two arrays, just provide two separate streams:
<filterref filter='no-ip-spoofing'>
<!-- Array of IP values -->
<parameter name='IP' value='10.0.0.1'/>
<parameter name='IP' value='10.0.0.2'/>
<!-- Array of MASK values -->
<parameter name='MASK' value='255.255.255.0'/>
<parameter name='MASK' value='255.255.255.0'/>
</filterref>Now, must change the rule as such to iterate in parallel (single loop):
<rule action='return' direction='out' priority='500'>
<ip srcipaddr='$IP[@1]' srcipmask='$MASK[@1]'/>
</rule>

[Self promotion] I have put up a somewhat related introductory tutorial regarding nwfilters: https://blog.cbugk.com/post/kvm-guest-network-isolation/
|
It's possible to pass multiple parameters to the "filterref" using "parameter" keyword. Like this:
<filterref filter='no-ip-spoofing'>
<parameter name='IP' value='10.0.0.1'/>
<parameter name='IP' value='10.0.0.2'/>
</filterref>

And use them in "no-ip-spoofing" inside a "rule" statement:
<rule action='return' direction='out' priority='500'>
<ip srcipaddr='$IP'/>
</rule>

Each IP (10.0.0.1, 10.0.0.2) inside the "rule" statement will be processed independently.
Q: But is it possible to pass parameters as a complex structure?
For example I want to send to "no-ip-spoofing" not only the IP but also the MASK. Something like that (of course the next list is incorrect xml structure):
<filterref filter='no-ip-spoofing'>
<parameter name='IP' value='10.0.0.1', name='MASK' value='255.255.255.0'/>
<parameter name='IP' value='10.0.0.2', name='MASK' value='255.255.255.0'/>
</filterref>

And process them like this:
<rule action='return' direction='out' priority='500'>
<ip srcipaddr='$IP' srcipmask='$MASK'/>
</rule>

How can I do that?
| libvirt nwfilter, multiple parameters |
The main change between your two systems isn't iptables but the kernel. The older kernel is from 2007.
One notable change that affects routing (which isn't provided by OP in this question but is in OP's other question) when used with marks is src_valid_mark:

net: restore ip source validation
when using policy routing and the skb mark: there are cases where a
back path validation requires us to use a different routing table for
src ip validation than the one used for mapping ingress dst ip. One
such a case is transparent proxying where we pretend to be the
destination system and therefore the local table is used for incoming
packets but possibly a main table would be used on outbound. Make the
default behavior to allow the above and if users need to turn on the
symmetry via sysctl src_valid_mark

Before this patch (Fedora 8), the behavior with Strict Reverse Path Forwarding (handled by rp_filter) assumes symmetric routing; after it (SL 6), asymmetric routing is assumed for some very special setups where replies are sent one way or another through a different route.
This ~11-year-old patch was only documented along with kernel 5.12, in 2021:

src_valid_mark - BOOLEAN

0 - The fwmark of the packet is not included in reverse path route lookup. This allows for asymmetric routing configurations utilizing
the fwmark in only one direction, e.g., transparent proxying.

1 - The fwmark of the packet is included in reverse path route lookup. This permits rp_filter to function when the fwmark is used
for routing traffic in both directions.

This setting also affects the utilization of fwmark when performing
source address selection for ICMP replies, or determining addresses
stored for the IPOPT_TS_TSANDADDR and IPOPT_RR IP options.
The max value from conf/{all,interface}/src_valid_mark is used.
Default value is 0.

So what has to be done to have symmetrical routing working with marks, while keeping Strict Reverse Path Forwarding settings (rp_filter=1), is:
sysctl -w net.ipv4.conf.all.src_valid_mark=1

or add the equivalent in /etc/sysctl.conf:
net.ipv4.conf.all.src_valid_mark = 1

since the highest value among all and any interface value is taken.
|
I use a set of iptables rules that makes use of the mangle table (containing the rows below) on two different versions of iptables: v1.3.8 and v1.4.7.
Iptables v1.3.8 runs on Fedora release 8 kernel 2.6.23.1-42.fc8
Iptables v1.4.7 runs on Scientific Linux (a RHEL clone) 6.10 kernel 2.6.32-573.1
Both PCs are configured in the same way, but in the older version of iptables v1.3.8 the configuration is working, it is not in v1.4.7
The rules are :
iptables -A PREROUTING -t mangle -s 10.200.0.0/16 ! -d 192.168.0.0/16 -j MARK --set-mark 0x1
iptables -A PREROUTING -t mangle -s 192.168.0.0/16 ! -d 192.168.0.0/16 -j MARK --set-mark 0x2
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j CONNMARK --save-mark
iptables -t mangle -I OUTPUT -m connmark ! --mark 0 -j CONNMARK --restore-mark
iptables -A OUTPUT -t mangle -s 172.16.62.100 -j MARK --set-mark 1
iptables -A OUTPUT -t mangle -s 172.16.61.2 -j MARK --set-mark 2
iptables -A OUTPUT -t mangle -s 172.16.61.3 -j MARK --set-mark 2
iptables -A OUTPUT -t mangle -s 172.16.61.4 -j MARK --set-mark 2
iptables -A OUTPUT -t mangle -s 172.16.61.5 -j MARK --set-mark 2
iptables -A OUTPUT -t mangle -s 172.16.61.6 -j MARK --set-mark 2
iptables -A OUTPUT -t mangle -s 172.16.61.7 -j MARK --set-mark 2

The /etc/sysconfig/iptables file present in the version v1.3.8 contains the following lines:
*mangle
-A PREROUTING -s 10.200.0.0/255.255.0.0 -d ! 192.168.0.0/255.255.0.0 -j MARK --set-mark 0x1
-A PREROUTING -s 192.168.0.0/255.255.0.0 -d ! 192.168.0.0/255.255.0.0 -j MARK --set-mark 0x2
-A PREROUTING -m mark ! --mark 0x0 -j CONNMARK --save-mark
-A OUTPUT -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
-A OUTPUT -s 172.16.62.100 -j MARK --set-mark 0x1
-A OUTPUT -s 172.16.61.2 -j MARK --set-mark 0x2
-A OUTPUT -s 172.16.61.3 -j MARK --set-mark 0x2
-A OUTPUT -s 172.16.61.4 -j MARK --set-mark 0x2
-A OUTPUT -s 172.16.61.5 -j MARK --set-mark 0x2
-A OUTPUT -s 172.16.61.6 -j MARK --set-mark 0x2
-A OUTPUT -s 172.16.61.7 -j MARK --set-mark 0x2
COMMIT

The /etc/sysconfig/iptables file present in the version v1.4.7 contains the following lines:
*mangle
-A PREROUTING -s 10.200.0.0/255.255.0.0 ! -d 192.168.0.0/255.255.0.0 -j MARK --set-xmark 0x1/0xffffffff
-A PREROUTING -s 192.168.0.0/255.255.0.0 ! -d 192.168.0.0/255.255.0.0 -j MARK --set-xmark 0x2/0xffffffff
-A PREROUTING -m mark ! --mark 0x0 -j CONNMARK --save-mark --nfmask 0xffffffff --ctmask 0xffffffff
-A OUTPUT -m connmark ! --mark 0x0 -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff
-A OUTPUT -s 172.16.62.100 -j MARK --set-xmark 0x1/0xffffffff
-A OUTPUT -s 172.16.61.2 -j MARK --set-xmark 0x2/0xffffffff
-A OUTPUT -s 172.16.61.3 -j MARK --set-xmark 0x2/0xffffffff
-A OUTPUT -s 172.16.61.4 -j MARK --set-xmark 0x2/0xffffffff
-A OUTPUT -s 172.16.61.5 -j MARK --set-xmark 0x2/0xffffffff
-A OUTPUT -s 172.16.61.6 -j MARK --set-xmark 0x2/0xffffffff
-A OUTPUT -s 172.16.61.7 -j MARK --set-xmark 0x2/0xffffffff
COMMIT

In the new version the set-marks have become set-xmark, and nfmask and ctmask are also present.
Why don't the same rules work in the new version ?
Update :
The problem was not in iptables but in /etc/sysctl.conf:
I have set the following parameters and now it works :
net.ipv4.conf.default.log_martians = 1
net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0 | Same iptables rules for two different iptables versions |
You can print all netfilter rules to check current counter values
nft list ruleset

Edit:
Since firewalld probably does not add counters to nft rules, you will not get traffic statistics using firewalld with nftables.
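If a quick packet/byte count is all that's needed, a counting rule can also be added with nft directly (a minimal sketch; the table and chain names here are arbitrary examples, not something firewalld creates):

nft add table inet stats
nft add chain inet stats input '{ type filter hook input priority 0; policy accept; }'
nft add rule inet stats input counter
nft list chain inet stats input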
|
I am finally switching from the old iptables to the new netfilter (specifically using firewalld) to configure my computers and servers but so far I have failed to find any newer alternative to the good old iptables -vnL for quickly getting current statistics.
What's the appropriate command to use here instead?
| Get netfilter statistics on the command line |
First, some reminders:

The -p argument is to specify transport protocols like TCP, UDP, ICMP ... not higher-level protocols like IMAP.
OUTPUT and INPUT chains are for packets outgoing from the machine and incoming to the machine. If you want to filter packets that are forwarded (when your machine acts as a gateway), you must use the FORWARD chain. To distinguish IN and OUT, use the input or output interfaces and the source and destination IPs.
ESTABILISHED --> typo !!! :) (it's ESTABLISHED)

Now, let's have a look at your problem:

What rules would you set for a mail server accepting connections for ESMTP (port 465) and IMAP (port 993), having a network interface eth1 exposed to the Internet and another network interface eth2 exposed to the corporate network?

The problem is too broad, since it says that:
It has two interfaces
It must accept connections for mail related protocols.

But it isn't said that the connections must be accepted from both networks (internet / corporate). Anyway, let's assume that it is the case.
iptables works with discriminants: -i is one, to match packets incoming on THAT interface.
Since you want the traffic to be accepted on every interface, simply remove -i.
As mentioned previously, -p is to specify the transport protocol. Mail works over TCP, so use -p tcp.
So your first answer would work (minus the typo and some syntax errors; the idea is OK).
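For instance, a corrected sketch (only an illustration, assuming connections must be accepted from both networks, hence no -i):

iptables -A INPUT -p tcp -m multiport --dports 465,993 -m state --state NEW,ESTABLISHED -j ACCEPT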
Your last one won't, because it allows packets coming from the internet (eth1) to pass through your server and go to your corporate network.
|
I had question about IPtables.
Let's start with this example from my book:

What rules would you set for a mail server accepting connections for
ESMTP (port 465) and IMAP (port 993) having a network interface eth1
exposed to the Internet and another network interface eth2 exposed to
the corporate network?

I tried to respond with this:
Iptable -A FORWARD -p EMSTP, IMAP -s all -i eth1 -m multiport 465,993 state –state NEW, ESTABILISHED -j ACCEPT
Iptable -A FORWARD -p EMSTP, IMAP -s all -i eth2 -m multiport 465,993 state –state NEW, ESTABILISHED -j ACCEPT

I thought about FORWARD because it isn't specified whether the traffic is INPUT or OUTPUT... So I used the generic in/out (FORWARD, if I can use it in this mode).
The protocol is specified (so I think I don't have problems there).
I used two rules because I used different interfaces, but I think it can all be done in the same rule, just by adding another -i inside it.
For the network, I think that one is the Internet and the other one is the local network (I really don't know what is meant by "corporate").

My question is whether my response is good and whether it is mandatory to use this type of format.
What changes if I swap the order of the rules?
In this case, for example:

Iptable -A FORWARD -j ACCEPT -i eth1 -p EMSTP, IMAP -s all -m multiport 465,993 state –state NEW, ESTABILISHED

Just swapping the jump and the interface (-j and -i).
Can someone help me understand?
| Iptable order of rules with example |
As far as I know, since it's not possible to have an iptables rule executed after nat/POSTROUTING, which is the last hook provided by iptables, it's not possible to use iptables to capture a packet post-NAT.

But this is possible when using nftables, since the hook priority is user defined. nft's dup statement is a direct replacement for iptables' TEE. It's possible to mix nftables and iptables as long as they're not both doing NAT (the nat resource is special and can't be shared properly between iptables and nftables). Using iptables-over-nftables's version of iptables will also work (care should be taken when flushing rulesets), and of course using only nft for everything would also work.
Here's a ready-made nft ruleset for this on a router with a NATed LAN on eth1 and its WAN side on eth2, sending a copy to 192.168.0.3 on the LAN side, as described in another question from OP. It is to be put in some file named forwireshark.nft and "loaded" using nft -f forwireshark.nft:
table ip forwireshark {
chain postnat {
type filter hook postrouting priority 250; policy accept;
oif eth2 counter dup to 192.168.0.3 device eth1
}
}

What matters here is that the value 250 has been chosen to be higher than iptables' NF_IP_PRI_NAT_SRC (100).
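To load and check it (standard nft commands, nothing extra assumed):

nft -f forwireshark.nft
nft list table ip forwireshark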
Here's what the wireshark host would typically receive when the ping host does ping -c1 8.8.8.8 after some inactivity (note the strange ARP request from the "wrong" IP, which might not be accepted by default on some systems):
root@ns-wireshark:~# tcpdump -e -n -s0 -p -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:06:03.074142 82:01:54:27:4d:d7 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.0.1 tell 192.168.0.2, length 28
21:06:03.074301 9a:80:fb:e6:6a:0a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.0.3 tell 140.82.118.4, length 28
21:06:03.074343 7e:0a:6c:12:00:61 > 9a:80:fb:e6:6a:0a, ethertype ARP (0x0806), length 42: Reply 192.168.0.3 is-at 7e:0a:6c:12:00:61, length 28
21:06:03.074387 9a:80:fb:e6:6a:0a > 7e:0a:6c:12:00:61, ethertype IPv4 (0x0800), length 98: 140.82.118.4 > 8.8.8.8: ICMP echo request, id 1633, seq 1, length 64I don't know the rationale on the order of mangle/POSTROUTING and nat/POSTROUTING. Anyway this is part of iptables' limitations, because in nftables, apart from the equivalent of mangle/OUTPUT which is a special type route hook for rerouting, all other equivalent usages of mangle are part of type filter: there's not really a separate mangle type anymore. Being able to choose the order of priorities allows to do more.
|
The Netfilter extensions man page states that:

MASQUERADE: This target is only valid in the nat table, in the POSTROUTING chain

QUESTION: How to clone the output of the MASQUERADE target with a TEE target?
If you look at the diagram of netfilter/iptables below, you will notice that nat.POSTROUTING is the last chain to be evaluated before the packet is sent to the outbound interface. There isn't a raw.POSTROUTING chain, ...or is there?

Also see this.
P.S.
What is the rationale for processing the mangle and nat tables in the same order at the outbound and inbound interface, when the data flows in opposite directions through these interfaces (egress and ingress) ?
| How to clone the output of the MASQUERADE target with a TEE? |
I use the following script to emulate various network conditions:
#!/bin/bash

intf="dev eth0"
delay="delay 400ms 100ms 50%"
loss="loss random 0%"
corrupt="corrupt 0%"
duplicate="duplicate 0%"
reorder="reorder 0%"
rate="rate 512kbit"tc qdisc del $intf root
tc qdisc add $intf root netem $delay $loss $corrupt $duplicate $reorder $rate

echo "Cancel with:"
echo "tc qdisc del $intf root"In your case, to introduce a 400ms delay and a rate limit of 512kbit/s on outgoing packets on device eth0:
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root netem delay 400ms rate 512kbit
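To confirm the qdisc is in place (standard tc command; the output format varies with the iproute2 version):

tc qdisc show dev eth0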
References:
man tc-netem
Linux Foundation Netem Wiki |
I read that there's another tool for netfilter that allows you to add latency to a ratelimit.
Does anyone have an example of this?
| How does one use tc to add latency to a ratelimit? |
Be careful using the iptables command.
To get access again, you have to delete the rule you added.
So undo the command with the -D option rather than -A:
sudo iptables -t nat -D OUTPUT -p tcp --dport 80 -j DNAT --to-destination 192.168.0.35:80
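If you don't remember the exact rule you added, you can also list the chain with rule numbers and delete by number (a sketch; the number 1 assumes the DNAT rule is the first in the chain):

sudo iptables -t nat -L OUTPUT --line-numbers -n
sudo iptables -t nat -D OUTPUT 1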
|
My local machine IP: 192.168.0.35
What I did: an answer that I tried from here:
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 192.168.0.35:80

The error:
Now I cannot access any IPs from my local machine and I do not know why this happened.
For any IP (123.123.123.123, etc.) the result of any attempt is the default "Apache2 Debian Default Page". Now I can't access my router page to make changes in port forwarding, and I don't know how to undo this iptables command.

What I want:
I was looking for a way to expose my webserver from the router to the Internet over ports 80 and 443, deploy httpd (apache2) on port 1337, and continue dev on 8000 and 8080. From my router to my Raspbian, I'm trying to:

INTERNET IPx:80,443 <===> 80,443 router <===> IP-local-web-httpd (raspbian)
| iptables blocking local traffic |
The key is that tables are grouping things by design intention. All your rules intended for filtering are in this place, all your NAT rules over there. Chains are sequences of rules, and the default chains are traversed at specific points in the path of a packet.
In theory, you could add a rule that does filtering to, say, the NAT table. But the front end prevents you from doing this, with a message likeThe "nat" table is not intended for filtering, the use of DROP is therefore inhibited.The way I think of it is that it's really about chains, and the tables are a bit of an afterthought to help you organize them. It is confusing because it's ad-hoc, historically grown user interface design.
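You can provoke that refusal yourself; on reasonably recent iptables versions, something like the following is rejected up front (the exact wording may vary between versions):

# iptables -t nat -A PREROUTING -j DROP
iptables: The "nat" table is not intended for filtering, the use of DROP is therefore inhibited.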
|
As I understand it, the Linux kernel has five hook points for the IPv4 packet flow, defined in the netfilter_ipv4.h file:
/* IP Hooks */
/* After promisc drops, checksum checks. */
#define NF_IP_PRE_ROUTING 0
/* If the packet is destined for this box. */
#define NF_IP_LOCAL_IN 1
/* If the packet is destined for another interface. */
#define NF_IP_FORWARD 2
/* Packets coming from a local process. */
#define NF_IP_LOCAL_OUT 3
/* Packets about to hit the wire. */
#define NF_IP_POST_ROUTING 4

...and according to netfilter_ipv6.h, the same seems to be true for IPv6:
/* IP6 Hooks */
/* After promisc drops, checksum checks. */
#define NF_IP6_PRE_ROUTING 0
/* If the packet is destined for this box. */
#define NF_IP6_LOCAL_IN 1
/* If the packet is destined for another interface. */
#define NF_IP6_FORWARD 2
/* Packets coming from a local process. */
#define NF_IP6_LOCAL_OUT 3
/* Packets about to hit the wire. */
#define NF_IP6_POST_ROUTING 4

This makes me wonder: is it correct to think of the netfilter/iptables architecture in such a way that chains define the place where operations happen and tables determine which operations can be done? In addition, do tables matter for the kernel as well, or are they simply meant for iptables users to group the types of processing which can occur?
| understand chains and tables in netfilter/iptables |
The correct command is
root@localhost ~ # nft add rule inet filter output ip daddr 8.8.8.8 counter

Notice the inet prefix before the table name (filter). That's the table's family type. It's optional, but if you omit it, nft assumes ip (= IPv4), while I'm using the inet pseudo-family (both IPv4 and IPv6).
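To confirm the rule is matching, you can generate some traffic and list the table again; the packet/byte counter in the rule should increase (output elided here):

root@localhost ~ # ping -c 1 8.8.8.8 > /dev/null
root@localhost ~ # nft list table inet filter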
I learned this thanks to the people in the #netfilter channel on Freenode.
Needless to say that nft error messages are anything but helpful. :-)
|
I'm a bit frustrated by the lack of comprehensive documentation of nftables and currently I'm failing to get even a simple example to work. I'm trying just create a output rule. Here's my only table:
root@localhost ~ # nft list ruleset
table inet filter {
chain output {
type filter hook output priority 0; policy accept;
}
}I wish to count the number of packets sent to 8.8.8.8. So I used the example command from the nftables wiki (https://wiki.nftables.org/wiki-nftables/index.php/Simple_rule_management):
root@localhost ~ # nft add rule filter output ip daddr 8.8.8.8 counter
Error: Could not process rule: No such file or directory
add rule filter output ip daddr 8.8.8.8 counter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

But for some reason, I get a very uninformative error message. What am I doing wrong, and what is the correct way to add an output rule?
root@localhost ~ # uname -a
Linux localhost 4.15.3-2-ARCH #1 SMP PREEMPT Thu Feb 15 00:13:49 UTC 2018 x86_64 GNU/Linux
root@localhost ~ # nft --version
nftables v0.8.2 (Joe Btfsplk)
root@localhost ~ # lsmod|grep '^nf'
nfnetlink_queue 28672 0
nfnetlink_log 20480 0
nf_nat_masquerade_ipv6 16384 1 ip6t_MASQUERADE
nf_nat_ipv6 16384 1 ip6table_nat
nf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE
nf_nat_ipv4 16384 1 iptable_nat
nf_nat 36864 4 nf_nat_masquerade_ipv6,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nft_reject_inet 16384 0
nf_reject_ipv4 16384 1 nft_reject_inet
nf_reject_ipv6 16384 1 nft_reject_inet
nft_reject 16384 1 nft_reject_inet
nft_meta 16384 0
nf_conntrack_ipv6 16384 2
nf_defrag_ipv6 36864 1 nf_conntrack_ipv6
nf_conntrack_ipv4 16384 2
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nft_ct 20480 0
nf_conntrack 155648 10 nft_ct,nf_conntrack_ipv6,nf_conntrack_ipv4,ipt_MASQUERADE,nf_nat_masquerade_ipv6,nf_nat_ipv6,nf_nat_masquerade_ipv4,ip6t_MASQUERADE,nf_nat_ipv4,nf_nat
nft_set_bitmap 16384 0
nft_set_hash 28672 0
nft_set_rbtree 16384 0
nf_tables_inet 16384 2
nf_tables_ipv6 16384 1 nf_tables_inet
nf_tables_ipv4 16384 1 nf_tables_inet
nf_tables 106496 10 nft_ct,nft_set_bitmap,nft_reject,nft_set_hash,nf_tables_ipv6,nf_tables_ipv4,nft_reject_inet,nft_meta,nft_set_rbtree,nf_tables_inet
nfnetlink 16384 3 nfnetlink_log,nfnetlink_queue,nf_tables | nftables, add output rule syntax |
Answering my own question: it doesn't seem that this is possible with ebtables (thank you for the comment, @A.B), as there's no mask or specific keyword for DSCP. I'm trying this in a legacy project that uses ebtables/iptables, but I am going to migrate to nftables, as it has a built-in dscp keyword that should work at the bridge layer.
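As a sketch of what the nftables variant could look like (table/chain names are made up, and this is untested here), matching DSCP 0x2b at the bridge layer:

nft add table bridge filter
nft 'add chain bridge filter forward { type filter hook forward priority 0; }'
nft add rule bridge filter forward ip dscp 0x2b counter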
|
Is it possible to match only the DSCP portion of the IPv4 ToS or IPv6 traffic class byte using ebtables? I see that ebtables has the --ip-tos match option for IPv4 packets and the --ip6-class match option for IPv6 packets. To my understanding, those match the entire ToS or traffic class byte (i.e. the 6 DSCP bits and 2 ECN bits). To match DSCP specifically and ignore ECN bits, whatever they may be set to, I'd think a bitwise & would work, e.g. ToS byte of packet & 0xAC would match DSCP field 0x2B (0xAC being 0x2B << 2), but I don't think bitmasking is possible with the ebtables --ip-tos and --ip6-class options.
Is it possible to match only the DSCP portion with ebtables?
| Matching DSCP portion of ToS or traffic class byte using ebtables |
Okay, I found this video from DevConf 2018 with Jiri Benc talking about the Linux network pipeline. TCP is a transport-layer protocol; for packets that use it, the network stack must first parse the data-link and network-layer headers. Then, dst_input determines whether the packet is for local processing or forwarding. In slide 62, he mentions that if the packet is for local processing, the TCP state machine will handle it after the INPUT hook of netfilter. The detailed implementation can be found in the tcp_v4_rcv function in linux/net/ipv4/tcp_ipv4.c.
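If you want to observe this ordering on a live system, one rough approach (assuming ftrace is available and these symbols are traceable on your kernel) is:

cd /sys/kernel/debug/tracing
echo 'ip_local_deliver tcp_v4_rcv' > set_ftrace_filter
echo function > current_tracer
cat trace_pipe

For each inbound TCP segment, ip_local_deliver (which runs the NF_INET_LOCAL_IN hook) should show up before tcp_v4_rcv.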
|
The Linux netfilter has multiple hooks at different OSI model layers, according to this image. However, transport-layer protocols like TCP require additional processing, such as retransmission or congestion control. Does this happen before or after the netfilter framework, or at a specific hook? And why is it designed that way? An explanation with source code is appreciated.
| Does Linux TCP stack processing happen before or after netfilter? |
The issue seems to be the CentOS 7 default configuration.
When reloading the configuration with sysctl -p --system, it will also reload /usr/lib/sysctl.d/00-system.conf. In /usr/lib/sysctl.d/00-system.conf, we can see the following:
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Once this is done (netfilter disabled on bridges), NAT will not work properly on the docker0 bridge.
Interestingly, once you restart Docker and query the system with sysctl -a, you see the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1I can only assume that this settings are changed by docker itself during start.
Which explains why restarting docker usually helps.
|
We have a small Kubernetes cluster running (CentOS 7, Kubernetes 1.13 + Flannel), and after some tweaking of the TCP configuration (see below), we noticed that DNS was not working properly.
I don't think that our changes are directly responsible for what I have observed, nor that Kubernetes is responsible. I looked at the iptables rules, and AFAIK everything looked good. What I observed was the following:

Pod 10.23.118.10 sends a UDP (port 53) packet to the DNS ClusterIP 10.22.0.10
The destination IP of the packet is then changed from the ClusterIP (10.22.0.10) to the IP of the DNS server pod (10.23.118.2) (DNAT)
The server gets the request, processes it, and then sends the response back to 10.23.118.10
At this point netfilter should replace the source IP 10.23.118.2 with 10.22.0.10 before it forwards the packet, but for some reason it does not
Libc receives the packet and rejects it because it sees that the response came from 10.23.118.2 instead of 10.22.0.10, or we get an ICMP packet saying the port is unreachable
I suppose that we are not the only one seeing this. Did you had similar situation? I am not sure whenever this is a bug in linux's netfilter or docker/kubernetes breaks something when configuring bridge interfaces. Where should I look for more information?
Here is TCP configuration we tried to apply:
net.core.somaxconn = 1000
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
| netfilter fails to properly replace destination IP of UDP response package
I realized it was an issue with gcc while compiling the kernel. I applied the patch below and it fixed the problem.
https://lkml.org/lkml/2015/4/23/605
|
I'm trying to debug netfilter synproxy module with systemtap.
This is probe point I'm trying to add.
# stap -l 'module("ipt_SYNPROXY").statement("*")' | grep send_client_synack
module("ipt_SYNPROXY").statement("synproxy_send_client_synack@net/ipv4/netfilter/ipt_SYNPROXY.c:72")And this is how stap script look like
probe module("ipt_SYNPROXY").statement("synproxy_send_client_synack@net/ipv4/netfilter/ipt_SYNPROXY.c:72"){//some code}I'm getting below error when I try to run it
semantic error: no line records for net/ipv4/netfilter/ipt_SYNPROXY.c:72 [man error::dwarf]

semantic error: resolution failed in DWARF builder

semantic error: while resolving probe point: identifier 'module' at netfilter.stp:915:7
source: probe module("ipt_SYNPROXY").statement("synproxy_send_client_synack@net/ipv4/netfilter/ipt_SYNPROXY.c:72"){
        ^

semantic error: no match

I tried some other probe points and realized that not all probe points give this error. For example, the probe below works fine:
probe module("ipt_SYNPROXY").statement("ipv4_synproxy_hook@net/ipv4/netfilter/ipt_SYNPROXY.c:314"){
//some code
}

My kernel version is 4.14.128, which I compiled myself. I suspect I missed something while compiling it.
| Systemtap can not resolve probe point although it is shown in probe list |
I've always had issues with iptables redirections (probably my fault, I'm pretty sure it's doable). But for a case like yours, it's IMO easier to do it in user-land without iptables.
Basically, you need to have a daemon in your "default" workspace listening on TCP port 8112 and redirecting all traffic to 10.200.200.2 port 8112. So it's a simple TCP proxy.
Here's how to do it with socat:
socat tcp-listen:8112,reuseaddr,fork tcp-connect:10.200.200.2:8112

(The fork option is needed to prevent socat from stopping after the first proxied connection is closed.)
EDIT: added reuseaddr as suggested in the comments.
If you absolutely want to do it with iptables, there's a guide on the Debian Administration site. But I still prefer socat for more advanced stuff -- like proxying IPv4 to IPv6, or stripping SSL to allow old Java programs to connect to secure services...
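For instance, an IPv4-to-IPv6 proxy for the same service would look something like this (the IPv6 address is just a placeholder):

socat tcp4-listen:8112,reuseaddr,fork tcp6-connect:[fd00::2]:8112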
Beware however that all connections in Deluge will be from your server IP instead of the real client IP. If you want to avoid that, you will need to use a real HTTP reverse proxy that adds the original client IP to the proxied request in a HTTP header.
|
I was able to set up a network namespace, establish a tunnel with OpenVPN, and start an application that uses this tunnel inside the namespace. So far so good, but this application can be accessed via a web interface and I can't figure out how to route requests to the web interface inside my LAN.
I followed a guide from @schnouki explaining how to set up a network namespace and run OpenVPN inside of it
ip netns add myvpn
ip netns exec myvpn ip addr add 127.0.0.1/8 dev lo
ip netns exec myvpn ip link set lo up
ip link add vpn0 type veth peer name vpn1
ip link set vpn0 up
ip link set vpn1 netns myvpn up
ip addr add 10.200.200.1/24 dev vpn0
ip netns exec myvpn ip addr add 10.200.200.2/24 dev vpn1
ip netns exec myvpn ip route add default via 10.200.200.1 dev vpn1
iptables -A INPUT \! -i vpn0 -s 10.200.200.0/24 -j DROP
iptables -t nat -A POSTROUTING -s 10.200.200.0/24 -o en+ -j MASQUERADE
sysctl -q net.ipv4.ip_forward=1
mkdir -p /etc/netns/myvpn
echo 'nameserver 8.8.8.8' > /etc/netns/myvpn/resolv.conf

After that, I can check my external IP and get different results inside and outside of the namespace, just as intended:
curl -s ipv4.icanhazip.com
<my-isp-ip>
ip netns exec myvpn curl -s ipv4.icanhazip.com
<my-vpn-ip>

The application is started; I'm using Deluge for this example. I tried several applications with a web interface to make sure it's not a Deluge-specific problem.
ip netns exec myvpn sudo -u <my-user> /usr/bin/deluged
ip netns exec myvpn sudo -u <my-user> /usr/bin/deluge-web -f
ps $(ip netns pids myvpn)
PID TTY STAT TIME COMMAND
1468 ? Ss 0:13 openvpn --config /etc/openvpn/myvpn/myvpn.conf
9302 ? Sl 10:10 /usr/bin/python /usr/bin/deluged
9707 ? S 0:37 /usr/bin/python /usr/bin/deluge-web -f

I'm able to access the web interface on port 8112 from within the namespace, and from outside if I specify the IP of the veth vpn1.
ip netns exec myvpn curl -Is localhost:8112 | head -1
HTTP/1.1 200 OK
ip netns exec myvpn curl -Is 10.200.200.2:8112 | head -1
HTTP/1.1 200 OK
curl -Is 10.200.200.2:8112 | head -1
HTTP/1.1 200 OK

But I do want to redirect port 8112 from my server to the application in the namespace. The goal is to open a browser on a computer inside my LAN and get the web interface with http://my-server-ip:8112 (my-server-ip being the static IP of the server that instantiated the network interface).
EDIT: I removed my attempts to create iptables rules. What I'm trying to do is explained above, and the following commands should output an HTTP 200:
curl -I localhost:8112
curl: (7) Failed to connect to localhost port 8112: Connection refused
curl -I <my-server-ip>:8112
curl: (7) Failed to connect to <my-server-ip> port 8112: Connection refused

I tried DNAT and SNAT rules and threw in a MASQUERADE for good measure, but since I don't know what I'm doing, my attempts are futile. Perhaps someone can help me put together this construct.
EDIT: The tcpdump output of tcpdump -nn -q tcp port 8112. Unsurprisingly, the first command returns an HTTP 200 and the second command terminates with a refused connection.
curl -Is 10.200.200.2:8112 | head -1
listening on vpn0, link-type EN10MB (Ethernet), capture size 262144 bytes
IP 10.200.200.1.36208 > 10.200.200.2.8112: tcp 82
IP 10.200.200.2.8112 > 10.200.200.1.36208: tcp 145

curl -Is <my-server-ip>:8112 | head -1
listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
IP <my-server-ip>.58228 > <my-server-ip>.8112: tcp 0
IP <my-server-ip>.8112 > <my-server-ip>.58228: tcp 0

EDIT: @schnouki himself pointed me to a Debian Administration article explaining a generic iptables TCP proxy. Applied to the problem at hand, their script would look like this:
YourIP=<my-server-ip>
YourPort=8112
TargetIP=10.200.200.2
TargetPort=8112

iptables -t nat -A PREROUTING --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
iptables -t nat -A POSTROUTING -p tcp --dst $TargetIP --dport $TargetPort -j SNAT \
--to-source $YourIP
iptables -t nat -A OUTPUT --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort

Unfortunately, traffic between the veth interfaces ceased and nothing else happened. However, @schnouki also suggested the use of socat as a TCP proxy, and this is working perfectly.
curl -Is <my-server-ip>:8112 | head -1
IP 10.200.200.1.43384 > 10.200.200.2.8112: tcp 913
IP 10.200.200.2.8112 > 10.200.200.1.43384: tcp 1495

I have yet to understand the strange port shuffling while traffic is traversing the veth interfaces, but my problem is solved now.
| port forwarding to application in network namespace with vpn |
It turns out that you can put a tunnel interface into a network namespace. My entire problem was down to a mistake in bringing up the interface:
ip addr add dev $tun_tundv \
local $ifconfig_local/$ifconfig_cidr \
broadcast $ifconfig_broadcast \
scope link

The problem is "scope link", which I misunderstood as only affecting routing. It causes the kernel to set the source address of all packets sent into the tunnel to 0.0.0.0; presumably the OpenVPN server would then discard them as invalid per RFC 1122; even if it didn't, the destination would obviously be unable to reply.
Everything worked correctly in the absence of network namespaces because openvpn's built-in network configuration script did not make this mistake. And without "scope link", my original script works as well.
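For reference, the working assignment is simply the same command with the scope qualifier dropped:

ip addr add dev $tun_tundv \
   local $ifconfig_local/$ifconfig_cidr \
   broadcast $ifconfig_broadcast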
(How did I discover this, you ask? By running strace on the openvpn process, set to hexdump everything it read from the tunnel descriptor, and then manually decoding the packet headers.)
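If you want to reproduce that kind of debugging, something along these lines should work (a sketch, to be adapted; -x makes strace hexdump non-printable string data):

strace -f -x -e trace=read,write -p "$(pidof openvpn)"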
|
I am trying to set up a VPN (using OpenVPN) such that all of the traffic, and only the traffic, to/from specific processes goes through the VPN; other processes should continue to use the physical device directly. It is my understanding that the way to do this in Linux is with network namespaces.
If I use OpenVPN normally (i.e. funnelling all traffic from the client through the VPN), it works fine. Specifically, I start OpenVPN like this:
# openvpn --config destination.ovpn --auth-user-pass credentials.txt

(A redacted version of destination.ovpn is at the end of this question.)
I'm stuck on the next step, writing scripts that restrict the tunnel device to namespaces. I have tried:

Putting the tunnel device directly in the namespace with
# ip netns add tns0
# ip link set dev tun0 netns tns0
# ip netns exec tns0 ( ... commands to bring up tun0 as usual ... )

These commands execute successfully, but traffic generated inside the namespace (e.g. with ip netns exec tns0 traceroute -n 8.8.8.8) falls into a black hole.
On the assumption that "you can [still] only assign virtual Ethernet (veth) interfaces to a network namespace" (which, if true, takes this year's award for most ridiculously unnecessary API restriction), creating a veth pair and a bridge, and putting one end of the veth pair in the namespace. This doesn't even get as far as dropping traffic on the floor: it won't let me put the tunnel into the bridge! [EDIT: This appears to be because only tap devices can be put into bridges. Unlike the inability to put arbitrary devices into a network namespace, that actually makes sense, what with bridges being an Ethernet-layer concept; unfortunately, my VPN provider does not support OpenVPN in tap mode, so I need a workaround.]
# ip addr add dev tun0 local 0.0.0.0/0 scope link
# ip link set tun0 up
# ip link add name teo0 type veth peer name tei0
# ip link set teo0 up
# brctl addbr tbr0
# brctl addif tbr0 teo0
# brctl addif tbr0 tun0
can't add tun0 to bridge tbr0: Invalid argument

The scripts at the end of this question are for the veth approach. The scripts for the direct approach may be found in the edit history. Variables in the scripts that appear to be used without setting them first are set in the environment by the openvpn program -- yes, it's sloppy and uses lowercase names.
Please offer specific advice on how to get this to work. I'm painfully aware that I'm programming by cargo cult here -- has anyone written comprehensive documentation for this stuff? I can't find any -- so general code review of the scripts is also appreciated.
In case it matters:
# uname -srvm
Linux 3.14.5-x86_64-linode42 #1 SMP Thu Jun 5 15:22:13 EDT 2014 x86_64
# openvpn --version | head -1
OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Mar 17 2014
# ip -V
ip utility, iproute2-ss140804
# brctl --version
bridge-utils, 1.5

The kernel was built by my virtual hosting provider (Linode) and, although compiled with CONFIG_MODULES=y, has no actual modules -- the only CONFIG_* variable set to m according to /proc/config.gz was CONFIG_XEN_TMEM, and I do not actually have that module (the kernel is stored outside my filesystem; /lib/modules is empty, and /proc/modules indicates that it was not magically loaded somehow). Excerpts from /proc/config.gz provided on request, but I don't want to paste the entire thing here.
netns-up.sh
#! /bin/sh

mask2cidr () {
    local nbits dec
    nbits=0
    for dec in $(echo $1 | sed 's/\./ /g') ; do
        case "$dec" in
            (255) nbits=$(($nbits + 8)) ;;
            (254) nbits=$(($nbits + 7)) ;;
            (252) nbits=$(($nbits + 6)) ;;
            (248) nbits=$(($nbits + 5)) ;;
            (240) nbits=$(($nbits + 4)) ;;
            (224) nbits=$(($nbits + 3)) ;;
            (192) nbits=$(($nbits + 2)) ;;
            (128) nbits=$(($nbits + 1)) ;;
            (0) ;;
            (*) echo "Error: $dec is not a valid netmask component" >&2
                exit 1
                ;;
        esac
    done
    echo "$nbits"
}

mask2network () {
    local host mask h m result
    host="$1."
    mask="$2."
    result=""
    while [ -n "$host" ]; do
        h="${host%%.*}"
        m="${mask%%.*}"
        host="${host#*.}"
        mask="${mask#*.}"
        result="$result.$(($h & $m))"
    done
    echo "${result#.}"
}

maybe_config_dns () {
    local n option servers
    n=1
    servers=""
    while [ $n -lt 100 ]; do
        eval option="\$foreign_option_$n"
        [ -n "$option" ] || break
        case "$option" in
            (*DNS*)
                set -- $option
                servers="$servers
nameserver $3"
                ;;
            (*) ;;
        esac
        n=$(($n + 1))
    done
    if [ -n "$servers" ]; then
        cat > /etc/netns/$tun_netns/resolv.conf <<EOF
# name servers for $tun_netns
$servers
EOF
    fi
}

config_inside_netns () {
    local ifconfig_cidr ifconfig_network

    ifconfig_cidr=$(mask2cidr $ifconfig_netmask)
    ifconfig_network=$(mask2network $ifconfig_local $ifconfig_netmask)

    ip link set dev lo up

    ip addr add dev $tun_vethI \
       local $ifconfig_local/$ifconfig_cidr \
       broadcast $ifconfig_broadcast \
       scope link
    ip route add default via $route_vpn_gateway dev $tun_vethI
    ip link set dev $tun_vethI mtu $tun_mtu up
}

PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH

set -ex

# For no good reason, we can't just put the tunnel device in the
# subsidiary namespace; we have to create a "virtual Ethernet"
# device pair, put one of its ends in the subsidiary namespace,
# and put the other end in a "bridge" with the tunnel device.

tun_tundv=$dev
tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}
tun_vethI=tei${dev#tun}
tun_vethO=teo${dev#tun}

case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac

if [ $# -eq 1 ] && [ $1 = "INSIDE_NETNS" ]; then
    [ $(ip netns identify $$) = $tun_netns ] || exit 1
    config_inside_netns
else
    trap "rm -rf /etc/netns/$tun_netns ||:
          ip netns del $tun_netns ||:
          ip link del $tun_vethO ||:
          ip link set $tun_tundv down ||:
          brctl delbr $tun_bridg ||:
         " 0

    mkdir /etc/netns/$tun_netns
    maybe_config_dns

    ip addr add dev $tun_tundv local 0.0.0.0/0 scope link
    ip link set $tun_tundv mtu $tun_mtu up

    ip link add name $tun_vethO type veth peer name $tun_vethI
    ip link set $tun_vethO mtu $tun_mtu up

    brctl addbr $tun_bridg
    brctl setfd $tun_bridg 0
    #brctl sethello $tun_bridg 0
    brctl stp $tun_bridg off

    brctl addif $tun_bridg $tun_vethO
    brctl addif $tun_bridg $tun_tundv
    ip link set $tun_bridg up

    ip netns add $tun_netns
    ip link set dev $tun_vethI netns $tun_netns
    ip netns exec $tun_netns $0 INSIDE_NETNS

    trap "" 0
fi

netns-down.sh
#! /bin/sh

PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH

set -ex

tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}

case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac

[ -d /etc/netns/$tun_netns ] || exit 1

pids=$(ip netns pids $tun_netns)
if [ -n "$pids" ]; then
    kill $pids
    sleep 5
    pids=$(ip netns pids $tun_netns)
    if [ -n "$pids" ]; then
        kill -9 $pids
    fi
fi

# this automatically cleans up the routes and the veth device pair
ip netns delete "$tun_netns"
rm -rf /etc/netns/$tun_netns

# the bridge and the tunnel device must be torn down separately
ip link set $dev down
brctl delbr $tun_bridg

destination.ovpn
client
auth-user-pass
ping 5
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
ns-cert-type server
verb 3
route-metric 1
proto tcp
ping-exit 90
remote [REDACTED]
<ca>
[REDACTED]
</ca>
<cert>
[REDACTED]
</cert>
<key>
[REDACTED]
</key>
| Feed all traffic through OpenVPN for a specific network namespace only
UPDATE 2023: Siemens has now released Edgeshark as OSS that provides a nice graphical web UI rendering the relationships of network interfaces in containers, the host, et cetera. It uses a Go-based implementation of the method outlined in this answer, with more bells and whistles.
Many thanks to @A.B who filled in some missing pieces for me, especially regarding the semantics of netnsids. His PoC is very instructive. However, the crucial missing piece in his PoC is how to correlate a local netnsid to its globally unique network namespace inode number, because only then we can unambiguously connect the correct corresponding veth pairs.
To summarize and give a small Python example how to gather the information programmatically without having to rely on ip netns and its need to mount things: RTNETLINK actually returns the netnsid when querying for network interfaces. It's the IFLA_LINK_NETNSID attribute, which only appears in a link's info when needed. If it's not there, then it isn't needed -- and we must assume that the peer index refers to a namespace-local network interface.
The important lesson to take home is that a netnsid/IFLA_LINK_NETNSID is only locally defined within the network namespace where you got it when asking RTNETLINK for link information. A netnsid with the same value obtained in a different network namespace might identify a different peer namespace, so be careful not to use a netnsid outside its namespace. But which uniquely identifiable network namespace (inode number) maps to which netnsid?
As it turns out, a very recent version of lsns as of March 2018 is capable of showing the correct netnsid next to its network namespace inode number! So there is a way to map local netnsids to namespace inodes, but it is actually backwards! And it's more an oracle (with a lowercase ell) than a lookup: RTM_GETNSID needs a network namespace identifier, either as a PID or an FD (to the network namespace), and then returns the netnsid. See https://stackoverflow.com/questions/50196902/retrieving-the-netnsid-of-a-network-namespace-in-python for an example of how to ask the Linux network namespace oracle.
In consequence, you need to:

1. enumerate the available network namespaces (via /proc and/or /var/run/netns),
2. then, for a given veth network interface, attach to the network namespace where you found it,
3. ask for the netnsids of all the network namespaces you enumerated at the beginning (because you never know beforehand which is which), and
4. finally, map the netnsid of the veth peer to the namespace inode number per the local map you created in step 3 after attaching to the veth's namespace.
import psutil
import os
import pyroute2
from pyroute2.netlink import rtnl, NLM_F_REQUEST
from pyroute2.netlink.rtnl import nsidmsg
from nsenter import Namespace

# phase I: gather network namespaces from /proc/[0-9]*/ns/net
netns = dict()
for proc in psutil.process_iter():
    netnsref = '/proc/{}/ns/net'.format(proc.pid)
    netnsid = os.stat(netnsref).st_ino
    if netnsid not in netns:
        netns[netnsid] = netnsref

# phase II: ask kernel "oracle" about the local IDs for the
# network namespaces we've discovered in phase I, doing this
# from all discovered network namespaces
for id, ref in netns.items():
    with Namespace(ref, 'net'):
        print('inside net:[{}]...'.format(id))
        ipr = pyroute2.IPRoute()
        for netnsid, netnsref in netns.items():
            with open(netnsref, 'r') as netnsf:
                req = nsidmsg.nsidmsg()
                req['attrs'] = [('NETNSA_FD', netnsf.fileno())]
                resp = ipr.nlm_request(req, rtnl.RTM_GETNSID, NLM_F_REQUEST)
                local_nsid = dict(resp[0]['attrs'])['NETNSA_NSID']
                if local_nsid != 2**32-1:
                    print('  net:[{}] <--> nsid {}'.format(netnsid, local_nsid))

|
Task
I need to unambiguously and without "holistic" guessing find the peer network interface of a veth end in another network namespace.
Theory ./. Reality
Albeit a lot of documentation and also answers here on SO assume that the ifindex indices of network interfaces are globally unique per host across network namespaces, this doesn't hold in many cases: ifindex/iflink are ambiguous. Even the loopback already shows the contrary, having an ifindex of 1 in any network namespace. Also, depending on the container environment, ifindex numbers get reused in different namespaces. Which makes tracing veth wiring a nightmare, especially with lots of containers and a host bridge with veth peers all ending in @if3 or so...
Example: link-netnsid is 0
Spin up a Docker container instance, just to get a new veth pair connecting from the host network namespace to the new container network namespace:

$ sudo docker run -it debian /bin/bash

Now, in the host network namespace, list the network interfaces (I've left out those interfaces that are of no interest to this question):

$ ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
...
4: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:34:23:81:f0 brd ff:ff:ff:ff:ff:ff
...
16: vethfc8d91e@if15: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether da:4c:f7:50:09:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0

As you can see, while the iflink is unambiguous, the link-netnsid is 0, despite the peer end sitting in a different network namespace.
For reference, check the netnsid in the unnamed network namespace of the container:

$ sudo lsns -t net
NS TYPE NPROCS PID USER COMMAND
...
...
4026532469 net 1 29616 root /bin/bash

$ sudo nsenter -t 29616 -n ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15: eth0@if16: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

So, for both veth ends, ip link show (and RTNETLINK fwif) tells us they're in the same network namespace with netnsid 0. This is either wrong, or correct under the assumption that link-netnsids are local as opposed to global. I could not find any documentation that makes explicit what scope link-netnsids are supposed to have.
/sys/class/net/... NOT to the Rescue?
I've looked into /sys/class/net/if/... but can only find the ifindex and iflink elements; these are well documented. "ip link show" also only seems to show the peer ifindex in the form of the (in)famous "@if#" notation. Or did I miss some additional network namespace element?
Bottom Line/Question
Are there any syscalls that allow retrieving the missing network namespace information for the peer end of a veth pair?
| How to find the network namespace of a veth peer ifindex? |
Just look at what ip netns exec test ... is doing in your situation, using strace.
Excerpt:
# strace -f ip netns exec test sleep 1 2>&1|egrep '/etc/|clone|mount|unshare'|egrep -vw '/etc/ld.so|access'
unshare(CLONE_NEWNS) = 0
mount("", "/", 0x55f2f4c2584f, MS_REC|MS_SLAVE, NULL) = 0
umount2("/sys", MNT_DETACH) = 0
mount("test", "/sys", "sysfs", 0, NULL) = 0
open("/etc/netns/test", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 5
mount("/etc/netns/test/resolv.conf", "/etc/resolv.conf", 0x55f2f4c2584f, MS_BIND, NULL) = 0so to reproduce (partially, eg /sys isn't handled here) what ip netns exec test ... is doing:
~# ip netns id

~# head -1 /etc/resolv.conf
# Generated by NetworkManager

~# nsenter --net=/var/run/netns/test unshare --mount sh -c 'mount --bind /etc/netns/test/resolv.conf /etc/resolv.conf; exec bash'
~# ip netns id
test
~# head -1 /etc/resolv.conf
# For namespace test
~#

So that's right: nsenter alone isn't enough. unshare has to be used to switch to a newly created mount namespace (based on a copy of the previous one) and alter it, rather than using an existing one verbatim, since no existing one fits yet. That's what the syscall of the same name is doing, as strace shows.
|
I've set up several network namespaces on my Linux system (kernel version 3.10), and now I want to configure each network namespace to have its own DNS settings.
I created resolv.conf files in each /etc/netns/[namespace] directory, and now I want to make my system work in the following way:
In bash command line, whenever I enter the context of a particular network namespace with nsenter --net=/run/netns/[namespace name], I want all processes launched from command line (like nslookup, ping) to run with the DNS settings that I configured with the matching /etc/netns/[namespace name]/resolv.conf.
If I run my commands like this:
"ip netns exec [namespace name] [command]"then the DNS settings of the namespace apply.
However, when running the commands without "ip netns exec", the DNS settings are taken from /etc/resolv.conf, even though running "netns get cur" indicates that the context is set to the desired network namespace.
I tried doing mount --bind /etc/netns/[namespace name]/resolv.conf /etc/resolv.conf in the context of the appropriate network namespace, but this applies the mount to the entire system rather than only in the context of that network namespace.
I suspected that using mount namespaces might help, so I tried reading the man page on mount namespaces; however, I couldn't make anything out of it in the short time that I dedicated to it.
Is there an easy and elegant way to achieve this goal?
Any help/direction toward the solution will be greatly appreciated!
| Separate DNS configuration in each network namespace |
Connecting to a DBus daemon listening on an abstract Unix socket in a different network namespace is not possible. Such addresses can be identified in ss -x via an address that contains a @:
u_str ESTAB 0 0 @/tmp/dbus-t00hzZWBDm 11204746 * 11210618

As a workaround, you can create a non-abstract Unix or IP socket which proxies to the abstract Unix socket. This is to be done outside the network namespace. From within the network namespace, you can then connect to that address. E.g. assuming the above abstract socket address, run this outside the namespace:

socat UNIX-LISTEN:/tmp/whatever,fork ABSTRACT-CONNECT:/tmp/dbus-t00hzZWBDm

Then from within the namespace you can connect by setting this environment variable:
DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/whatever |
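You can then test the proxied bus from inside the namespace, e.g. with dbus-send (the namespace name here is just an example):

ip netns exec mynetns env DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/whatever \
    dbus-send --session --print-reply --dest=org.freedesktop.DBus \
    / org.freedesktop.DBus.ListNames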
I am using network namespaces such that I can capture network traffic of a single process. The namespace is connected through the "host" via a veth pair and has network connectivity through NAT. So far this works for IP traffic and named Unix domain sockets.
A problem arises when a program needs to communicate with the D-Bus session bus. The D-Bus daemon listens on an abstract socket as specified with this environment variable:
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-jIB6oAy5ea,guid=04506c9a7f54e75c0b617a6c54e9b63a

It appears that the abstract Unix domain socket namespace is separate inside the network namespace. Is there a way to get access to this D-Bus session from the network namespace?
| Connect with D-Bus in a network namespace |
You could do something like:
netns=myns
find -L /proc/[1-9]*/task/*/ns/net -samefile /run/netns/"$netns" | cut -d/ -f5

Or with zsh:
print -l /proc/[1-9]*/task/*/ns/net(e:'[ $REPLY -ef /run/netns/$netns ]'::h:h:t)

It checks the inode of the file which the /proc/*/task/*/ns/net symlink points to against those of the files bind-mounted by ip netns add in /run/netns. That's basically what ip netns identify or ip netns pid in newer versions of iproute2 do.
That works with the 3.13 kernel as from the linux-image-generic-lts-trusty package on Ubuntu 12.04, but not with the 3.2 kernel from the first release of 12.04, where /proc/*/ns/* are not symlinks and each net file there from every process and task gets a different inode, which can't help determine namespace membership.
Support for that was added by that commit in 2011, which means you need kernel 3.8 or newer.
With older kernels, you could try and run a program listening on an ABSTRACT socket in the namespace, and then try to enter the namespace of every process to see if you can connect to that socket there like:
sudo ip netns exec "$netns" socat abstract-listen:test-ns,fork /dev/null &
ps -eopid= |
while read p; do
nsenter -n"/proc/$p/ns/net" socat -u abstract:test-ns - 2> /dev/null &&
echo "$p"
done |
I am on Ubuntu 12.04, and the ip utility does not have the ip netns identify <pid> option. I tried installing a newer iproute, but the identify option still doesn't
seem to be working!
If I were to write a script (or code) to list all processes in a network namespace, or, given a PID, show which network namespace it belongs to, how should I proceed?
(I need info on a handful of processes, to check if they are in the right netns)
| How to list processes belonging to a network namespace? |
ip link has a namespace option which, in addition to a network namespace name, can take a PID to refer to a process's namespace. If PID namespaces are shared between the processes, you can move devices either way; it is probably easiest from inside, when you consider PID 1 to be "outside". With separate PID namespaces, you need to move from the outer (PID) namespace to the inner one.
For example, from inside of a network namespace you can create a veth device pair to PID 1 namespace:
ip link add veth0 type veth peer name veth0 netns 1How namespaces work in Linux
Every process has reference files for their namespaces in /proc/<pid>/ns/. Additionally, ip netns creates persistent reference files in /run/netns/. These files are used with setns system call to change the namespace of the running thread to a namespace pointed by such file.
From shell you can enter to another namespace using nsenter program, providing namespace files (paths) in arguments.
A good overview of Linux namespaces is given in the Namespaces in operation article series on LWN.net.
Setting up namespaces
When you set up multiple namespaces (mount, pid, user, etc.), set up network namespace as early as possible, before altering mount and pid namespaces. If you do not have shared mount or pid namespaces, you do not have any way to point to the network namespace outside, because you can not see the files referring to network namespaces outside.
If you need more flexibility than the command line utilities provide, you need to use the systemcalls to manage name spaces directly from your program. For documentation, see the relevant man pages: man 2 setns, man 2 unshare and man 7 namespaces.
|
I have a process that has called unshare to create a new network namespace with just itself inside. When it calls execve to launch bash, the ip command shows that I have just an lo device. If I also create a user namespace and arrange for my process to be root inside the namespace, I can use the ip command to bring that device up and it works.
I can also use the ip command to create a veth device in this namespace. But it doesn't show up in ip netns list and the new veth device doesn't show up in the root level namespace (as I'd expect). How do I connect a veth device in the root-level namespace to my new veth device inside my process namespace? The ip command seems to require that the namespace has a name assigned by the ip command, and mine doesn't because I didn't use ip netns add to create it.
Maybe I could do it by writing my own program that used the netlink device and set things up. But I'd really prefer not to. Is there a way to do this through the command line?
There must be a way to do it, because docker containers have their own network namespace as well, and that namespace is also unnamed. Yet there is a veth device inside it that's connected to a veth device outside it.
My goal is to dynamically create a process isolation context, ideally without needing to become root outside the container. To this end I'm going to be creating a PID namespace, a UID namespace, a network namespace, an IPC namespace, and mount namespace. I may also create a cgroup namespace, but those are newish and I need to be able to run on currently supported versions of SLES, RHEL, and Ubuntu LTS.
I've been working through this one namespace at a time, and I currently have User, PID and mount namespaces working satisfactorily.
I can mount /proc/pid/ns/net if I must, but I would prefer to do that from inside the user namespace so (again) I don't have to be root outside the namespace. Mostly, I want everything to disappear as soon as all the processes in the namespace are gone. Having a bunch of state to clean up on the filesystem when I'm done would be less than ideal. Though creating it temporarily when the container is first allocated and then immediately removing it is far better than having to clean it up when the container exits.
No, I can't use docker, lxc, rkt, or any other existing solution such that I'd be relying on anything other than bog-standard system utilities (like ip), system libraries like glibc, and Linux system calls.
| How do I connect a veth device inside an 'anonymous' network namespace to one outside? |
The only bug I see with your code is that you're running the user's command unnecessarily through sh -c when you should just run it directly. Running it through sh -c buys you nothing but it destroys quoting that the user originally put into the command. For example, try this:
sudo /usr/local/sbin/_oob_shim ls -l "a b"should list a file called a b inside the context of the namespace but instead it lists two files called a and b.
sudo /usr/local/sbin/_oob_shim ls -l "*"should list a file called * (literal asterisk), fails for the same reason.
So it should look like this instead:
#!/bin/sh
/bin/ip netns exec oob \
/usr/bin/sudo -u "#$SUDO_UID" -g "#$SUDO_GID" -- "$@"Makes the script simpler to boot!
Another point I can make is that although in this case the bug was just a functionality bug, not a security bug, one is always suspicious when auditing security-sensitive code and finding that it runs things through shells because that's almost always a problem.
Finally, the user's supplementary groups will not be propagated into the namespace (they get only their uid and main gid), but that doesn't seem like a huge problem and fixing that is not trivial.
Other than that, it looks good to me.
|
I have a cell modem connected to my server that I want to use as a means to get notification emails out when the landline dies.
To nicely separate normal network access and this exceptional cell modem access, I created a network namespace and created the network device in there as the only device. To have a program use the cell modem I simply use ip netns exec.
The wrinkle is that I want to allow any user to run any program they wish in the namespace, but netns exec requires root. My solution is as follows:
/usr/local/sbin/_oob_shim:
#!/bin/sh
cmd_line="$@"
/bin/ip netns exec oob \
/usr/bin/sudo -u "#$SUDO_UID" -g "#$SUDO_GID" /bin/sh -c "$cmd_line"/etc/sudoers:
ALL ALL=NOPASSWD: /usr/local/sbin/_oob_shimI figure the only way to run the shim without already being root or knowing the root password is through sudo, and I can trust sudo to set $SUDO_UID and $SUDO_GID to the right values.
Am I opening myself up to significant risk? Or, should I say am I missing any obvious caveats?
| Secure way to allow any user to run programs in specific network namespace |
I was in a similar situation; here is how I worked around it.
Some background: I had to spawn several Selenium Firefox instances within namespaces in order to bind them to different IP addresses. But as you know, I was getting the error:

Error: Can't open display: localhost:10.0

Instead of working with Unix sockets as Marius suggested, I just bound sshd's X11 forwarding to * instead of localhost (adding "X11UseLocalhost no" to the config) and redirected plain TCP connections with socat.
Pay attention to the security consequences of doing this!!!!
After this change on sshd, the DISPLAY will automatically change when you log in, from this:
DISPLAY=localhost:10.0

to something like:
DISPLAY=10.0.0.1:10.0

After that, I just have to redirect the port:
ip netns exec my-NNS socat tcp-listen:6010,reuseaddr,fork tcp:192.168.5.130:6010 &Then you should be able to work with xeyes, firefox, x-whatever-you-want...:
ip netns exec my-NNS xeyes &And voilà!
|
I connect (via ssh -Y ...) from a machine (=client) to another machine (=server, actually in my LAN, but it is irrelevant); then I start a new network namespace (NNS, for short) on the server, I start an xterm (from the default namespace) which is displayed perfectly on my client, and lastly, from within the xterm, I join the non-default NNS,
ip netns exec NNSName bash

I can check that I am in the new NNS,

ip netns identify $$

and I can run complex programs like, for instance, OpenVPN from within the new NNS.
The rub is here: I would like to start a graphical application (even just xeyes, for the moment) from within the new NNS, but I can't, I am always told: Unable to open DISPLAY=...
Admittedly, I have only tried the obvious:
DISPLAY=:0.0
DISPLAY=:10.0
DISPLAY=localhost:10.0
DISPLAY=localhost:20.0
DISPLAY=ClientName:10.0
DISPLAY=ClientIPAddress:10.0

always with xhost + on the client, for pure debugging purposes.
I have no problems:

connecting via ssh -Y ... from client to server, running xeyes on the server and displaying it on the client;
starting a new NNS on the server, and starting graphical applications within the NNS to be displayed on the server (i.e., in this case forget about the client).

It is when I put these two things together (ssh and namespace) that I cannot display on the client applications running in the server's new NNS.
It appears the standard TCP port 6010 belongs to the ssh session with the default NNS, while the new NNS ought to get its own. I can surely start an ssh server in the new NNS and connect directly from the client to the server's new NNS, but I was wondering: is there any easier way to do this, i.e. to display graphical applications running in the server's new NNS on the client's X11-server?
| Network namespace, ssh, X11 |
First: I don't think you can achieve this by using 127.0.0.0/8 and/or a loopback interface (like lo). You have to use some other IPs and interfaces, because there are specific things hardwired for 127.0.0.0/8 and for loopback.
Then there is certainly more than one method, but here's an example:
# ip netns add vpn
# ip link add name vethhost0 type veth peer name vethvpn0
# ip link set vethvpn0 netns vpn
# ip addr add 10.0.0.1/24 dev vethhost0
# ip netns exec vpn ip addr add 10.0.0.2/24 dev vethvpn0
# ip link set vethhost0 up
# ip netns exec vpn ip link set vethvpn0 up
# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.100 msThe first command creates out of thin air a pair of virtual ethernet interfaces connected by a virtual ethernet cable. The second command moves one of these interfaces into the netns vpn. Consider it the equivalent of things like socketpair(2) or pipe(2): a process creates a pair, then forks, and each process keeps only one end of the pair and they can communicate.
Usually (LXC, virt-manager,...) there's also a bridge involved to put everything in the same LAN when you have many netns.
Once this is in place, for the host it's like any router.
Enable ip forwarding (be more restrictive if you can: you need it at least for vethhost0 and the main interface):
# echo 1 > /proc/sys/net/ipv4/conf/all/forwarding

Add some DNAT rule, like:
# iptables -t nat -A PREROUTING ! -s 10.0.0.0/24 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2

Now you can either add a default route inside vpn with:
# ip netns exec vpn ip route add default via 10.0.0.1

Or else, instead, add a SNAT rule to have everything be seen as coming from 10.0.0.1 inside vpn:
# iptables -t nat -A POSTROUTING -d 10.0.0.2/24 -j SNAT --to-source 10.0.0.1

With this in place you can test from any other host, but not from the host itself. To do this, also add a DNAT rule similar to the previous DNAT, but in OUTPUT and changed to your own IP (else any outgoing HTTP connection would be changed too). Let's say your IP is 192.168.1.2:
# iptables -t nat -A OUTPUT -d 192.168.1.2 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2

Now it will even work if you connect from the host to itself, provided you don't use a loopback IP but any other IP belonging to the host with a NAT rule as above. Let's say your IP is 192.168.1.2:
# ip netns exec vpn nc -l -s 10.0.0.2 -p 80 &
[1] 10639
# nc -vz 192.168.1.2 80
nc: myhost (192.168.1.2) 80 [http] open
#
[1]+ Done ip netns exec vpn nc -l -s 10.0.0.2 -p 80 |
I was able to set up a network namespace and start a server that listens on 127.0.0.1 inside the namespace:
# ip netns add vpn
# ip netns exec vpn ip link set dev lo up
# ip netns exec vpn nc -l -s 127.0.0.1 -p 80 &

# ip netns exec vpn netstat -tlpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 5598/nc

After that, I can connect to the server inside the namespace:
# ip netns exec vpn nc 127.0.0.1 80 -zv
localhost [127.0.0.1] 80 (http) open

But I can't connect to the server outside the namespace:
# nc 127.0.0.1 80
(UNKNOWN) [127.0.0.1] 80 (http) : Connection refused

How do I configure iptables or the namespace to forward traffic from the global namespace into the vpn namespace?
| How to forward traffic between Linux network namespaces? |
Let's look into man 5 sysfs:
/sys/class/net
Each of the entries in this directory is a symbolic link representing
one of the real or virtual networking devices that are visible in
the network namespace of the process that is accessing the directory.

So, according to this manpage, the output of ls /sys/class/net must depend on the network namespace of the ls process. But... actual behavior does not seem to be as described in this manpage. There is nice kernel documentation about how it works.
Each sysfs mount has a namespace tag associated with it. This tag is set when sysfs gets mounted and depends on the network namespace of the calling process. Each sysfs entry (e.g. an entry in /sys/class/net) also may have a namespace tag associated with it.
When you iterate over the sysfs directory, the kernel obtains the namespace tag of the sysfs mount, and then it iterates over the entries, filtering out those which have different namespace tag.
So, it turns out that the results of iterating over the /sys/class/net depend on the network namespace of the process which initiated /sys mount rather than on the network namespace of the current process, thus, you must always mount /sys in the current network namespace (from any process belonging to this namespace) to see the correct results.
|
The Linux man page for network namespaces(7) says:Network namespaces provide isolation of the system resources associated with networking: [...], the /sys/class/net directory, [...].However, simply switching into a different network namespace doesn't seem to change the contents of /sys/class/net (see below for how to reproduce). Am I just mistaken here in thinking that the setns() into the network namespace is already sufficient? Is it always necessary to remount /sys in order to get the correct /sys/class/net matching the currently joined network namespace? Or am I missing something else here?
Example to Reproduce
Take an *ubuntu system, find the PID of the rtkit-daemon, enter the daemon's network namespace, show its network interfaces, and then check /sys/class/net:
$ PID=`sudo lsns -t net -n -o PID,COMMAND | grep rtkit-daemon | cut -d ' ' -f 2`
$ sudo nsenter -t $PID -n
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# ls /sys/class/net
docker0 enp3s0 lo lxcbr0 ...Please notice that while ip link show correctly only shows lo, /sys/class/net shows all network interfaces visible in the "root" network namespace (and "root" mount namespace).
In the case of rtkit-daemon also entering the mount namespace of it doesn't make a difference: sudo nsenter -t $PID -n -m and then ls /sys/class/net still shows network interfaces not present in the network namespace.
"Fix"
Many kudos to @Danila Kiver for explaining what really is going on behind the Linux kernel scenes. Remounting sysfs while the correct network namespace is joined will show the correct entries in /sys/class/net:
$ PID=`sudo lsns -t net -n -o PID,COMMAND | grep rtkit-daemon | cut -d ' ' -f 2`
$ sudo nsenter -t $PID -n
# MNT=`mktemp -d`
# mount -t sysfs none $MNT
# ls $MNT/class/net/
lo
# umount $MNT
# rmdir $MNT
# exitSo this now yields the correct results in /sys/class/net.
| Switching into a network namespace does not change /sys/class/net? |
The issue is that you're trying to route a packet from namespace ns_snd through ns_mid to ns_rcv. The kernel is going to treat the namespaces as if they were separate hosts. Meaning you have to configure the kernel to act as a router.
This is rather simple to do:
sudo ip netns exec $NS_MID sysctl -w net.ipv4.ip_forward=1 |
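You can then verify the setting and test end-to-end connectivity with something like:

sudo ip netns exec $NS_MID sysctl net.ipv4.ip_forward
sudo ip netns exec $NS_SND ping -c 1 10.0.0.6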
I have the following network topology using linux namespaces:
.--------.veth0 .--------.veth2 .--------.
| ns_snd |------------| ns_mid |------------| ns_rcv |
'--------' veth1'--------' veth3'--------'veth0: 10.0.0.1/30
veth1: 10.0.0.2/30
veth2: 10.0.0.5/30
veth3: 10.0.0.6/30
veth0 belongs to ns_snd,
veth[1,2] belongs to ns_mid,
veth3 belongs to ns_rcv
The commands are:
S1="veth0"
S2M1="veth1"
M2R1="veth2"
R1="veth3"NS_SND="ns_snd"
NS_RCV="ns_rcv"
NS_MID="ns_mid"#Remove existing namespace
sudo ip netns del $NS_SND
sudo ip netns del $NS_RCV
sudo ip netns del $NS_MID#Remove existing veth pairs
sudo ip link del $S1
sudo ip link del $R1
sudo ip link del $S2M1
sudo ip link del $M2R1#Create veth pairs
sudo ip link add $S1 type veth peer name $S2M1
sudo ip link add $M2R1 type veth peer name $R1#Bring up
sudo ip link set dev $S1 up
sudo ip link set dev $S2M1 up
sudo ip link set dev $M2R1 up
sudo ip link set dev $R1 up#Create the specific namespaces
sudo ip netns add $NS_SND
sudo ip netns add $NS_RCV
sudo ip netns add $NS_MID#Move the interfaces to the namespace
sudo ip link set $S1 netns $NS_SND
sudo ip link set $S2M1 netns $NS_MID
sudo ip link set $M2R1 netns $NS_MID
sudo ip link set $R1 netns $NS_RCV#Configure the loopback interface in namespace
sudo ip netns exec $NS_SND ip address add 127.0.0.1/8 dev lo
sudo ip netns exec $NS_SND ip link set dev lo up
sudo ip netns exec $NS_RCV ip address add 127.0.0.1/8 dev lo
sudo ip netns exec $NS_RCV ip link set dev lo up
sudo ip netns exec $NS_MID ip address add 127.0.0.1/8 dev lo
sudo ip netns exec $NS_MID ip link set dev lo up#add bridge
#sudo ip netns exec $NS_MID brctl addbr br549
#sudo ip netns exec $NS_MID brctl addif br549 $S2M1
#sudo ip netns exec $NS_MID brctl addif br549 $M2R1
#sudo ip netns exec $NS_RCV ip route add 10.0.0.0/30 via 10.0.0.5#Bring up interface in namespace
sudo ip netns exec $NS_SND ip link set dev $S1 up
sudo ip netns exec $NS_SND ip address add 10.0.0.1/30 dev $S1
sudo ip netns exec $NS_MID ip link set dev $S2M1 up
sudo ip netns exec $NS_MID ip address add 10.0.0.2/30 dev $S2M1
sudo ip netns exec $NS_MID ip link set dev $M2R1 up
sudo ip netns exec $NS_MID ip address add 10.0.0.5/30 dev $M2R1
sudo ip netns exec $NS_RCV ip link set dev $R1 up
sudo ip netns exec $NS_RCV ip address add 10.0.0.6/30 dev $R1#Add ip routes
sudo ip netns exec $NS_SND ip route add 10.0.0.4/30 via 10.0.0.2
sudo ip netns exec $NS_RCV ip route add 10.0.0.0/30 via 10.0.0.5#sudo ip netns exec $NS_SND "./scripts/setup_ns_snd.sh"
#sudo ip netns exec $NS_RCV "./scripts/setup_ns_rcv.sh"Inside ns_snd I can ping 10.0.0.5, but 10.0.0.6 not. What do I need to add or what I have forgotten to add?
| routing between linux namespaces |
The best that I can offer is to execute the command in one namespace (using the -n shortcut), create each endpoint with the same name, and move one of them into a different namespace in that command:
ip -n mynamespace-1 link add eth0 type veth peer name eth0 netns mynamespace-2You'll still need to do the other stuff like address assignment (the -n abbreviation may also be helpful), so you'll have to write a script, anyway.
As man ip says, the -n option is a shortcut for ip netns exec ... ip ..., so you can use this form if your ip doesn't support the -n option.
|
Is there a single (simple) command that will create a veth interface pair and assign each interface to a different network namespace?
For example, suppose that I have two namespaces: mynamespace-1 and mynamespace-2. Is there a single (simple) command that will connect these two namespaces via a veth pair where each endpoint of the interface is named eth0?
Currently what I would do is create the veth pair, move each interface to the corresponding namespace, and then rename the interface from within that namespace. I'd like to know if these three commands can be compressed into a single command.
For context, here is an example of how I am currently connecting a pair of namespaces and testing the connection:
# Create two network namespaces
sudo ip netns add 'mynamespace-1'
sudo ip netns add 'mynamespace-2'

# Create a veth virtual-interface pair
sudo ip link add 'myns-1-eth0' type veth peer name 'myns-2-eth0'

# Assign the interfaces to the namespaces
sudo ip link set 'myns-1-eth0' netns 'mynamespace-1'
sudo ip link set 'myns-2-eth0' netns 'mynamespace-2'

# Change the names of the interfaces (I prefer to use standard interface names)
sudo ip netns exec 'mynamespace-1' ip link set 'myns-1-eth0' name 'eth0'
sudo ip netns exec 'mynamespace-2' ip link set 'myns-2-eth0' name 'eth0'

# Assign an address to each interface
sudo ip netns exec 'mynamespace-1' ip addr add 192.168.1.1/24 dev eth0
sudo ip netns exec 'mynamespace-2' ip addr add 192.168.2.1/24 dev eth0

# Bring up the interfaces (the veth interfaces and the loopback interfaces)
sudo ip netns exec 'mynamespace-1' ip link set 'lo' up
sudo ip netns exec 'mynamespace-1' ip link set 'eth0' up
sudo ip netns exec 'mynamespace-2' ip link set 'lo' up
sudo ip netns exec 'mynamespace-2' ip link set 'eth0' up

# Configure routes
sudo ip netns exec 'mynamespace-1' ip route add default via 192.168.1.1 dev eth0
sudo ip netns exec 'mynamespace-2' ip route add default via 192.168.2.1 dev eth0

# Test the connection (in both directions)
sudo ip netns exec 'mynamespace-1' ping -c 1 192.168.2.1
sudo ip netns exec 'mynamespace-2' ping -c 1 192.168.1.1 | Connecting two network namespaces via a veth interface pair where each endpoint has the same name |
macvlan interface can be used in different modes which alter how data transmitted between two macvlan instances is treated. The default mode is vepa (Virtual Ethernet Port Aggregation), which possibly is why your setup doesn't work.
Short description of common modes you might want to configure:

vepa: data is transmitted over the physical interface; for communication between macvlan instances the switch needs to support hairpin mode, or there must be an IP router forwarding the packets.
private: no communication between macvlan instances is allowed, even if the external switch supports hairpin mode.
bridge: direct communication between instances is allowed; traffic between macvlan instances is not transmitted on the physical link.

You probably want to use macvlan in bridge mode. For communication between the macvlan instance and the namespace containing the network interface itself, you need to create a macvlan instance in the same (main/host) network namespace. For details and explanation, see A.B's answer.
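A minimal sketch of that setup, reusing the names from the question (eth0, jail0, namespace name0); the macvlan0 name and the 192.168.1.50/24 address are made-up assumptions:

# Jail side: macvlan in bridge mode, moved into the jail's namespace
ip link add link eth0 name jail0 type macvlan mode bridge
ip link set jail0 netns name0

# Host side: a second bridge-mode macvlan so the host can reach jail0
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr add 192.168.1.50/24 dev macvlan0
ip link set macvlan0 up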
For full documentation (and the other modes), see man 8 ip-link.
|
I have

an interface, named eth0, in my main network namespace
another interface, named jail0, in an alternate network namespace (named name0). This namespace is used by a jailed environment.
jail0 is a macvlan alias of eth0.

I see the network without any problem, from my main system and also from my jail.
However, they can't ping each other.
Why is it so? I would like to make them reachable.
Neither network interface exists in the other's namespace.
| How to make reachable macvlan aliases in a different namespaces? |
From within namespace foo:
ip link set <veth-name> netns 1

From the global namespace:
ip netns exec foo ip link set <veth-name> netns 1

It moves the interface back to the global namespace.
Pitfall: avoid having the namespace named "1".
Yes, you can create a namespace named "1". But while namespace "1" exists, there is no way to move an interface back to the global namespace with the proposed method: all moves end up in namespace "1" instead. So avoid having a namespace named "1".
|
I created a namespace in Linux with 'ip netns add foo', created a pair of veth interfaces and moved one into the namespace. I set up IP addresses etc., so that now I can ping my 'foo' namespace from the default namespace, i.e. the host.
However the problem is with removing a link from the namespace foo back to the default one. Which command(s) should I use?
| remove link from Linux namespace |
Why is this happening?

a network namespace doesn't change mount settings: it deals with network
but some mounted settings related to a network namespace, most prominently /sys/class/net and /proc/sys/net, as documented in the previous link, do depend on the network namespace
Here there's already a difference in behavior: while /proc/sys/net when already mounted changes on-the-fly when entering a new namespace, /sys/class/net doesn't. That means that when using a command that really only changes network namespaces:
unshare -n -- sh -c 'ls -1d /proc/sys/net/*/conf/* /sys/class/net/*'

one will see previous network interfaces have disappeared in /proc/sys/net/ (leaving only a new instance of the lo interface) but they are still visible in /sys/class/net/: preventing interaction there with the new network namespace's interfaces and still allowing interaction with the former network namespace's interfaces when this might not be a good idea.

to solve this network-related problem, /sys has to be (re)mounted from the new network namespace. To avoid affecting the environment dedicated to the former (the initial) network namespace, this has to be done in a newer mount namespace. That's why ip netns exec does this: to prepare a coherent network environment for applications, it both enters an existing network namespace (created and kept existing with a bind mount by ip netns add) and unshares a new mount namespace.

this mount namespace is kept existing only by having (a) process(es) referencing it. Once no process is left, the mount namespace disappears, as well as any mount done inside it (which could be visible only from such processes).

so using unshare -m or ip netns exec (which isn't even intended to deal with mounts in the first place) separately won't keep the bind mount between invocations.

Solutions
Such a mount should be done right before using ip netns exec ..., in one shot, not in separate steps that create and destroy a (mount) namespace.
/etc/netns
Actually ip netns exec already manages such kind of bind mount in its own invocation, but in a specific place: /etc/netns. Each time ip netns exec foo is invoked, if directories and/or files exist and match, it will automatically bind mount anything in /etc/netns/foo/* to its matching /etc/*. As this is a feature meant to facilitate running multiple instance of a service in separate namespaces, this should be preferred as a solution.
The application should retrieve its configuration from a specific place in /etc/, for example /etc/my-app/ and there should be a per-netns distinct file there that will have contents pointing the application to its working directory, wherever it is, like /var/lib/my-app/nsX (or even /tmp/nsX).
The directory /etc/my-app/ should exist and probably include some kind of template files to be used by scripts to prepare running the instances, but its content will be shadowed in the other namespaces since it will be the target of a bind mount.
mkdir -p /etc/netns # it is usually not provided by the distribution

for instance in foo bar baz; do
mkdir "/etc/netns/$instance"
cp -a /etc/my-app /etc/netns/$instance/
done

Then with a script or manually, each instance should be customized (location of data, location of pid file etc.), and relevant directories added elsewhere (in /var/ or /run, possibly with the help of some boot tools/configs like tmpfiles.d).
Once this is done properly one should be able to run multiple instances of the application in different netns simply like this:
ip netns exec foo my-app
ip netns exec bar my-app
ip netns exec baz my-app

which would not clash between themselves (or that means some more has to be done before).
unshare -m plus ip netns exec together
If the application can't be made to receive parameters from /etc or to have a wrapper doing it, and is stuck using only /var/lib/my-app, then a bind mount immediately followed by ip netns exec will also work:

create ip netns namespaces:
ip netns add foo
ip netns add bar
ip netns add baz

prepare network configurations:
ip -n foo link ....

run applications:
unshare -m sh -c 'mount --bind /tmp/foo /var/lib/my-app; exec ip netns exec foo my-app'
unshare -m sh -c 'mount --bind /tmp/bar /var/lib/my-app; exec ip netns exec bar my-app'
unshare -m sh -c 'mount --bind /tmp/baz /var/lib/my-app; exec ip netns exec baz my-app'

While these bind mounts will still disappear later, they will be used by the now instantiated my-app.
which more or less reproduces the feature already built into ip netns exec, with an additional intermediate mount namespace around.
Wrapper
You could also simply have a wrapper script that takes an extra parameter for the mount avoiding this extra mount namespace:
my-app.wrapper (no check is done):
#!/bin/sh
mount --bind /tmp/"$1" /var/lib/my-app
shift
exec my-app "$@"

and run:
ip netns exec foo my-app.wrapper foo

Integration
Whichever the method chosen, this should be integrated in some startup script. For example systemd's instantiated features (here's an example of mine) could be combined with one of the methods above to create and run on the fly new netns instances of the same application.
|
I have an app I run in a network namespace. This works well.
I want to run the app multiple times, in different namespaces. For convenience, I want to bind mount the app's working directory to something like /tmp/nsX, inside of the namespace.
If I just do mount --bind /tmp/nsX /var/lib/my-app in the namespace, the mount goes away when I exit the namespace.
By enter/exit the namespace, I mean just ip netns exec bash
I'm looking at unshare and nsenter but I can't figure out what to do.
I want to:

Configure networking for a namespace
Create a bind mount for my app's working dir, in the namespace.
Spawn my app in the namespace. It has a "fork" option if that helps.
Be able to leave and enter the namespace(s) without things dying or disappearing.

If I need to use some of the other namespace types, that's fine.
| How can I use a bind mount in a network namespace? |
There is a file that associates a thread to its network namespace:
/proc/[PID]/task/[TID]/ns/net

where TID is the thread ID. This solved my issue.
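For the use case in the question (keeping namespace N alive after worker thread W leaves it), this per-thread path can be bind-mounted just like iproute2 does for processes; a sketch, where $PID/$TID stand for the worker's process and thread IDs and mynetns is a made-up name:

touch /var/run/netns/mynetns
mount --bind /proc/$PID/task/$TID/ns/net /var/run/netns/mynetns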
|
/proc/[pid]/ns/net contains a link to the inode representing the network namespace of the process with PID [pid]. Is there something similar for threads?
My use case is a multi-threaded application, where there's one main thread and a group of worker threads. The generic worker W creates a new network namespace N with a call to unshare() (which makes W enter N), pushes one end of a veth pair in N and leaves it (it uses an fd pointing to the root namespace to go back to such namespace). Since no processes are in N after W goes back to the root namespace, N is destroyed when that happens, and I do not want that.
The solution I thought about is to mount a link to N somewhere in the filesystem. This is what iproute2 netns does: mounting a link to /proc/[pid]/ns/net. The problem, in my case, is that /proc/[pid]/ns/net keeps referencing the root namespace, only W changes namespace, hence I cannot use it and I need a file/something else which points to the namespace of a thread. Is there such a thing in Linux?
| Is there a file that associates a thread to its network namespace? |
First, this answer to "What is the NSFS filesystem?" sheds more light on how the Linux kernel manages namespace lifecycles: using a so-called "nsfs" filesystem that the proc filesystem internally brings in. Thus, a namespace is ready for destruction when its inode isn't referenced anymore by one of the elements mentioned in this question.
As it turns out, network namespaces seem to be especially complex in terms of destroying them (cleaning them up). The management of network namespaces is implemented in net/core/net_namespace.c.
One thing that catches the eye is the definition of a workqueue for cleaning up network namespaces:
static DECLARE_WORK(net_cleanup_work, cleanup_net);

Workqueues (linux-kernel-labs.github.io Lab) are used in many places to schedule potentially blocking actions to run in process context. Cleaning up network namespaces is then handled by kernel worker threads, which also serve other workqueues; see also the Linux kernel documentation on Concurrency Managed Workqueue[s] (cmwq) for more workqueue background information.
A quick look at the other namespace management implementations (with fs/proc/namespaces.c as a good, erm, trampoline into the particular implementations) doesn't show any need for using workqueues for namespace cleanup.
|
My current understanding of Linux (kernel) namespaces is that their lifetime after creation is as long as at least one of the following conditions holds true:

at least one process/thread is joined (attached, ...) to namespace X.
at least one bind-mount exists to namespace X.
at least one open fd exists referencing namespace X.
for user/PID namespaces: at least one child namespace Y of X exists.

Naively, I would have thought that the Linux kernel destroys a namespace "as soon" as none of the above conditions holds true anymore. However, I notice that there is some delay between a namespace becoming obsolete and it becoming destroyed ... if I'm not mistaken, that is.
The following small Python3 script creates a series of new network namespaces and enters each one immediately, leaving the previous one. As there is no other process and thread holding any references to the previously created network namespace, it becomes obsolete and eventually should go away. An indirect sign is that namespace inode numbers then get reused.
Now notice how this script creates "temporary" network namespaces in two sequences: once in a slow fashion with much idle time in between, and once in a rapid fashion...
import unshare
import os
import time

def trash(delay):
    for i in range(4):
        unshare.unshare(unshare.CLONE_NEWNET)
        print('trash net:[%d]' % os.stat('/proc/self/ns/net').st_ino)
        time.sleep(delay) # wait for penguins to collect garbage namespaces

# user namespaces can be created by unprivileged processes
# (unless on mispatched Debian kernels): this gives us all
# capabilities inside this new user namespace owned by our
# user, so we can create other namespaces.
unshare.unshare(unshare.CLONE_NEWUSER)
print('original net:[%d]' % os.stat('/proc/self/ns/net').st_ino)

print('slow trashing...')
trash(0.5)

time.sleep(0.5)
print('fast trashing...')
trash(0.01)

When run, your output should look similar to this one:
$ python3 nsgarbage.py
original net:[4026531905]
slow trashing...
trash net:[4026532268]
trash net:[4026532344]
trash net:[4026532268]
trash net:[4026532344]
fast trashing...
trash net:[4026532268]
trash net:[4026532419]
trash net:[4026532494]
trash net:[4026532569]Notice how in the slow sequence with 0.5s delays, obsolete network namespaces get destroyed and their inode numbers reused: the inode number of freshly create network namespaces oscillate.
In contrast, for the fast sequence, obsolete namespaces do no seem to get destroyed (garbage-collected), as indicated by their inode numbers not getting reused, but instead "piling up".
Please note that I can only indirectly deduce when namespaces get destroyed, based on the inode number reuse. This might be the wrong assumption.
Can someone with Linux kernel knowledge shed more light on the behavior of Linux: when does the kernel really destroy namespaces? And if destruction is delayed, is there some intrinsic granularity for this "garbage collection"?
| When does Linux "garbage-collect" namespaces? |
So the question is: what is the best or most canonical way to share a
wireguard interface with a network namespace, while still retaining
access to wireguard outside of the namespace?

IMO, a good approach would be to use policy-based routing for this. E.g., "any packet coming from interface A should use routing table B", where interface A is the veth/bridge interface outside the netns and routing table B only contains routes via your wireguard interface (and of course the route back to the originating network namespace). Using iproute2, something along these lines:
# echo "100 dayjob" >> /etc/iproute2/rt_tables
# ip route add <wireguard glue net> dev wg0 table dayjob
# ip route add default via <wireguard gw> dev wg0 table dayjob
# ip route add <netns net> dev <veth/bridge interface> table dayjob
# ip rule add iif <veth/bridge interface> lookup dayjob
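A worked version of the same idea with made-up values (wireguard glue net 10.100.0.0/24, wireguard gateway 10.100.0.1, host-side veth ve-dayjob, netns subnet 10.200.200.0/24); adjust all of these to your setup:

# echo "100 dayjob" >> /etc/iproute2/rt_tables
# ip route add 10.100.0.0/24 dev wg0 table dayjob
# ip route add default via 10.100.0.1 dev wg0 table dayjob
# ip route add 10.200.200.0/24 dev ve-dayjob table dayjob
# ip rule add iif ve-dayjob lookup dayjob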
I have a wireguard connection (interface name wg0) to a trusted machine inside an admin network at $DAYJOB. Usually, I don't want to use wg0 for all my traffic, only for IP addresses in the 172.16.0.0/12 range. This is easily accomplished with a stanza like so in /etc/wireguard/wg0.conf:
[Peer]
# ...
AllowedIPs = 172.16.0.0/12But for one firefox profile, I do want to route everything through wg0, even traffic not destined for 172.16.0.0/12. Furthermore, for DNS, I usually use dnscrypt-proxy + dnsmasq, but for wg0 traffic I want to use the nameserver at $DAYJOB.
I can almost match these constraints by having a network namespace created with ip netns and a veth pair. Inside the namespace, simply replace my default resolv.conf with one containing the alternative nameserver. The only problem is that I haven't quite figured out how to use wg0 as the sole way for packets to leave the namespace.

Non-default DNS used: ✓
Traffic from the namespace destined for 172.16.0.0/12 is correctly routed through wg0: ✓
All other traffic exiting the namespace also goes via wg0: ✗

Wireguard has documentation related to netns but it seems to assume you don't still need the wireguard interface outside the namespace. I do want everything outside the namespace to still have access to the wireguard interface.
Some sources, eg 1, suggest something similar using vlans. However, it seems like wireguard interfaces do not support vlans. Here is what happens:
$ sudo ip link add link wg0 name wg0.4 type vlan id 4
$ sudo ip netns add ns-wg-test-1
$ sudo ip link set wg0.4 netns ns-wg-test-1
$ sudo ip netns exec ns-wg-test-1 su -c "/bin/bash -l" $USER
$ sudo ip addr add 192.168.126.2 dev wg0.4
$ sudo ip link set dev wg0.4 up
RTNETLINK answers: Cannot assign requested addressSo now I'm equivocating between various alternative approaches which all have problems.
Possibility 1: Add a second wireguard interface. That will require making a bunch of redundant configs in /etc/wireguard/wg1.conf. It's not clear that this is even workable or a good idea. It seems inelegant to have multiple wireguard interfaces and redundant configs.
Possibility 2: Add some combination of ip route and iptables -A rules to force everything exiting the namespace to then be channelled into wg0. However, I haven't come across any examples or documentation which makes it clear how to force the routing of all traffic incoming from one interface to go out via another interface. And again, I have a certain amount of skepticism that this would even be a good approach.
Possibility 3: Have faith in the wireguard documentation. Put wg0 inside the namespace, and tell all 172.16.0.0/12 traffic outside the namespace to go via a veth pair connected to the namespace. Inside the namespace, there could be routing/firewall rules to forward everything from the veth pair to wg0. The problem with this solution is that, even if it works, it requires the namespace to always be active. I would like traffic destined for 172.16.0.0/12 to always find its way to wg0 regardless of whether I remembered to activate the namespace this morning.
So the question is: what is the best or most canonical way to share a wireguard interface with a network namespace, while still retaining access to wireguard outside of the namespace?
This isn't opinion-based. I'll know the answer is right when I can see a working example which is robust, efficient, secure, and scriptable. It doesn't need to be cross-platform. I am doing this solely on Void Linux (or sometimes Arch Linux).
Alternatively, if the whole enterprise is not a good idea, or not possible for some reason, a negative answer could consist of arguments, evidence, and citations to explain why. I'll still mark it as correct if nothing better comes along.
| How to share wireguard with namespace? |
In my experience, the loopback interface in the new network namespace is not brought up automatically. Check to see if it is up (for example, use ip addr show in the new network namespace). If it is not up you can bring it up with something like ip netns exec myns ip link set dev lo up.
Maybe my answer should be a comment, but I don't have enough reputation to add a comment to your question.
Edit: Just for clarification, this answer shows how to activate the loopback interface in the new network namespace ("myns" in the case of the question). The default network namespace and the new network namespace each have their own loopback interface. This answer does not show how to expose the loopback interface from the default network namespace into the new network namespace.
|
I've followed this guide to setup a network namespace (to run VPN in). Here's my setup script:
ip netns add myns
ip link add type veth
ip link set veth1 netns myns

ip addr add 10.255.255.1/24 dev veth0
ip link set dev veth0 up
ip netns exec myns ip addr add 10.255.255.2/24 dev veth1
ip netns exec myns ip link set dev veth1 up
ip netns exec myns ip ro add default via 10.255.255.1

iptables -A POSTROUTING -t nat -o wlp58s0 -s 10.255.255.2 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1

ip netns exec myns bash

Some applications bind to ports on localhost and this doesn't work.
localhost resolves to 127.0.0.1 but pings to it fail.
Can I somehow expose localhost to the network namespace?
| Access localhost from network namespace |
You can just continue with the same unshortened syntax you were using for executing a command inside a network namespace created with ip netns add:
ip netns exec vpn <command-for-capture>

Like:
ip netns exec vpn tshark -i tun0 -n -f 'port 53'

Note tshark's option -n to avoid triggering DNS resolution, especially important when capturing DNS traffic, without which cascading DNS resolutions and captures caused by tshark itself would pollute the original traffic.
As a side note, the ip command itself has a shortcut allowing to replace ip netns exec FOO ip BAR ... with ip -n FOO BAR ..., but of course this can't be used for any other command. A lot of OP's setup can be shortened into ip -n vpn ... instead of ip netns exec vpn ip ....
|
How can I capture traffic specifically from a network interface inside a network namespace using tshark? In my case, the network interface tun0 is moved into the network namespace called vpn.
Normally running tshark -f "port 53" clutters the output because it includes DNS queries from the main interface that the network namespace ends up using.
This is my network namespace setup (for what it's worth, this is from the openvpn netns-up script here: http://www.naju.se/articles/openvpn-netns.html)
$ ip netns add vpn
$ ip netns exec vpn ip link set dev lo up
$ ip link set dev tun0 up netns vpn mtu 1500
$ ip netns exec vpn ip addr add dev tun0 "10.14.0.3/16"

$ ip netns exec vpn ip addr add dev tun0 "$ifconfig_ipv6_local"/112

$ ip netns exec vpn ip route add default via 10.14.0.1

$ ip netns exec vpn ip route add default via "$ifconfig_ipv6_remote"
 | Capture DNS traffic to and from a network namespace using tshark |
After some trial and error, I found out that in fact CAP_SYS_PTRACE is needed.
In contrast, CAP_DAC_READ_SEARCH and CAP_DAC_OVERRIDE don't give the required access, which includes readlink() and similar operations.
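So, assuming the scanner binary lives at ./scanner (a made-up path), the file capability can be granted like this:

sudo setcap cap_sys_ptrace+ep ./scanner
getcap ./scanner   # verify the capability was applied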
What I'm seeing can be cross-checked: first, ptrace.c gives the necessary clue in __ptrace_may_access():
/* May we inspect the given task?
* This check is used both for attaching with ptrace
* and for allowing access to sensitive information in /proc.
*
* ptrace_attach denies several cases that /proc allows
* because setting up the necessary parent/child relationship
* or halting the specified task is impossible.
 */

And second, the nsfs-related functions, such as proc_ns_readlink(), (indirectly) call __ptrace_may_access().
And finally, man 7 namespaces mentions:

The symbolic links in this subdirectory are as follows:
[...]
Permission to dereference or read (readlink(2)) these symbolic links is governed by a ptrace access mode PTRACE_MODE_READ_FSCREDS check; see ptrace(2). |
In a program I'm enumerating network namespaces by scanning /proc/pid/ for ns/net (sym) links. This program runs inside the "root" namespaces (original init) of the host itself. Normally, I need to run the scanner part as root, as otherwise I will have only limited access to other processes' /proc/pid/ information. I would like to avoid running the scanner as root if possible, and I would like to avoid the hassle of dropping privileges.
Which Linux capability do I need to set for my scanner program so it can be run by non-root users and still see the complete /proc/pid/ tree and read network namespace links?
| Access /proc/pid/ns/net without running query process as root? |
There are two steps that were done initially but weren't done again in the new network namespace:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

It happens that the behavior resulting from this is different between IPv4 and IPv6, as documented in the relatively new network toggle that allows altering this default behavior: devconf_inherit_init_net:

[...]
By default, we keep the current behavior: for IPv4 we inherit all
current settings from init_net and for IPv6 we reset all settings to
default.

So in the new network namespace:

IPv4 forwarding is inherited from the initial network namespace. As it was just enabled with echo 1 > /proc/sys/net/ipv4/ip_forward run in the initial network namespace, the new network namespace is also set as an IPv4 router.
So it works fine for IPv4.
IPv6 forwarding is reset to the default of host rather than router, whatever was done in the initial network namespace (unless, for example, this is run before creating the new network namespace: sysctl -w net.core.devconf_inherit_init_net=1).

Just add the missing step, to be run within the new network namespace (/proc/sys/net/ is network-namespace aware). Using a stdout redirection won't work correctly without some gymnastics, so better use the dedicated command: sysctl.
ip netns exec net1 sysctl -w net.ipv6.conf.all.forwarding=1 |
I have 3 Linux VMs connected like this:
/ server1 \
| ens19 2001:1::2 |
\ /
|
/ \
| ens19 2001:1::1 |
| server2 |
| ens20 2001:2::1 |
\ /
|
/ \
| ens19 2001:2::2 |
\ server3 /

I run these commands on server1:
ip link set dev ens19 up
ip -6 address add 2001:1::2/96 dev ens19
ip -6 route add default via 2001:1::1

then these on server3:
ip link set dev ens19 up
ip -6 address add 2001:2::2/96 dev ens19
ip -6 route add default via 2001:2::1

then these on server2:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
ip link set dev ens19 up
ip link set dev ens20 up
ip -6 address add 2001:1::1/96 dev ens19
ip -6 address add 2001:2::1/96 dev ens20

If I try to ping server3 from server1, it works:
root@server1:~# ping6 2001:2::2

but if I move the interfaces on server2 inside a network namespace:
ip netns add net1
ip link set dev ens19 netns net1
ip link set dev ens20 netns net1
ip netns exec net1 ip link set dev ens19 up
ip netns exec net1 ip link set dev ens20 up
ip netns exec net1 ip -6 address add 2001:1::1/96 dev ens19
ip netns exec net1 ip -6 address add 2001:2::1/96 dev ens20

ping from server1 to server3 no longer works. Packets are no longer forwarded.
Why? (note: same process for IPv4 works)
| IPv6 forwarding doesn't work in a network namespace |
Unfortunately, while user313992's hint about SIOCGSKNS is extremely useful for sockets, the implementation of SIOCGSKNS for TAP/TUN file descriptors is ... strange: it returns an fd for the network namespace the TAP/TUN was initially created in, but not for the current network namespace of its netdev.
Looking around more in __tun_chr_ioctl where SIOCGSKNS is implemented, reveals a highly promising TUNGETDEVNETNS ioctl operation: this finally fetches and returns the network namespace of the TAP/TUN device.
The following unit test code creates a TAP device in the initial network namespace, creates a new network namespace, and then moves the TAP netdev into this new network namespace. The TUNGETDEVNETNS ioctl then correctly returns an fd referencing the new network namespace, to which the TAP netdev has already been moved:
package main

import (
    "os"
    "runtime"

    "github.com/thediveo/notwork/link"
    "github.com/thediveo/notwork/netns"
    "github.com/vishvananda/netlink"
    "golang.org/x/sys/unix"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
    . "github.com/thediveo/success"
)

const tapNamePrefix = "tap-"

// Ugly IOCTL stuff; copied from github.com/thediveo/lxkns/ops/ioctl.go
const _IOC_NRBITS = 8
const _IOC_TYPEBITS = 8
const _IOC_SIZEBITS = 14

const _IOC_NRSHIFT = 0
const _IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS
const _IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS
const _IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS

const _IOC_NONE = uint(0)

func _IOC(dir, ioctype, nr, size uint) uint {
    return (dir << _IOC_DIRSHIFT) | (ioctype << _IOC_TYPESHIFT) | (nr << _IOC_NRSHIFT) | (size << _IOC_SIZESHIFT)
}

func _IO(ioctype, nr uint) uint {
    return _IOC(_IOC_NONE, ioctype, nr, 0)
}

func getTapNetdevNetnsFd(fd int) (int, error) {
    return unix.IoctlRetInt(fd, _IO('T', 227)) // TUNGETDEVNETNS
}

var _ = Describe("TAP/TUN netns", func() {

    It("finds namespace of TAP/TUN netdev", func() {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        By("creating a TAP netdev")
        tt := netlink.Tuntap{
            Mode:   netlink.TUNTAP_MODE_TAP,
            Queues: 1,
        }
        tap := link.NewTransient(&tt, tapNamePrefix).(*netlink.Tuntap)
        Expect(tap.Fds).NotTo(BeEmpty())
        for _, fd := range tap.Fds {
            DeferCleanup(func() { fd.Close() })
        }

        By("creating a new transient network namespace")
        newnetnsfd := netns.NewTransient()

        By("moving the TAP netdev into the new network namespace")
        Expect(netlink.LinkSetNsFd(tap, newnetnsfd)).To(Succeed())
        Expect(netlink.LinkList()).NotTo(ContainElement(
            HaveField("Attrs().Name", tap.Name)))
        nlh := netns.NewNetlinkHandle(newnetnsfd)
        defer func() {
            Expect(nlh.LinkSetNsPid(tap, os.Getpid())).To(Succeed())
            nlh.Close()
        }()
        Expect(nlh.LinkList()).To(ContainElement(
            HaveField("Attrs().Name", tap.Name)))

        By("querying the network namespace of the TAP netdev")
        tapnetnsfd := Successful(getTapNetdevNetnsFd(int(tap.Fds[0].Fd())))
        defer unix.Close(tapnetnsfd)

        Expect(netns.Ino(tapnetnsfd)).NotTo(Equal(netns.CurrentIno()))
        Expect(netns.Ino(tapnetnsfd)).To(Equal(netns.Ino(newnetnsfd)))
    })

})

|
I want to correctly match TAP/TUN devices with the processes they are using, and I want to do this from outside of these processes using TAP/TUN devices (that is, I cannot issue any ioctl()s because I don't have access to a particular file descriptor inside its process itself).
I'm aware of the answers to How to find the connection between tap interface and its file descriptor?, that is: /proc/[PID]/fdinfo/[FD] has an additional iff: key-value pair that gives the name of the corresponding TAP/TUN network interface.
However, there's a problem with network namespaces, especially when TAP/TUN network interfaces get moved around network namespaces after their user-space processes have attached to them; for instance (here, tapclient is a simple variation of a34729t's tunclient.c, which accepts a tap network name and attaches to it):
$ sudo ip tuntap add tap123 mode tap
$ sudo tapclient tap123 &
$ sudo ip netns add fooz
$ sudo ip link set tap123 netns fooz
$ PID=$(ps faux | grep tapclient | grep -v -e sudo -e grep | awk '{print $2}')
$ sudo cat /proc/$PID/fdinfo/3

...which then gives: iff: tap123 -- but not the network namespace where the tap123 network interface is currently located in.
Of course, tap123 can be located by iterating over all network namespaces and looking for a matching network interface inside one of them. Unfortunately, there might be duplicate names, such as when creating another tap123 in the host namespace after we've moved the first one of this name into the fooz network namespace above:
$ sudo ip tuntap add tap123 mode tap
$ ip link show tap123
$ sudo ip netns exec fooz ip link show tap123

So we now have two tap123s in separate network namespaces, and fdinfo only gives us an ambiguous iff: tap123.
Unfortunately, looking at the /proc/$PID/ns/net network namespace of the tapclient won't help either, since that doesn't match the current network namespace of tap123 any longer:
$ findmnt -t nsfs | grep /run/netns/fooz
$ sudo readlink /proc/$PID/ns/net

For instance, this gives net:[4026532591] versus net:[4026531993].
Is there a way to unambiguously match the tapclient process with the correct tap123 network interface instance it is attached to?
| How to get the Linux network namespace for a tap/tun device referenced in /proc/[PID]/fdinfo/[FD]? |
I would prefer to work from a more complete specification. However from careful reading of the script and your description, I conclude you are entering a network namespace (using the script) first, and entering a user namespace afterwards.
The netns is owned by the initial userns, not your child userns. To do ping, you need cap_net_raw in the userns that owns the netns. I think.
There is a similar answer here, which provides links to reference documentation: Linux Capabilities with User Namespaces
(I think ping can also work without privilege if you have access to ICMP Echo sockets. But at least on my Fedora 29, this does not seem to be used: an unprivileged cp "$(which ping)" . followed by ./ping localhost (i.e. a copy of ping stripped of its setuid bit and file capabilities) shows the same socket: Operation not permitted. Not sure why it has not been adopted.)
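If you want to experiment with that mechanism, the relevant knob is the net.ipv4.ping_group_range sysctl, which lists the group IDs allowed to open unprivileged ICMP Echo sockets (the default range of 1 0 disables them on many distributions):

sysctl -w net.ipv4.ping_group_range="0 2147483647"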
|
I've been working on writing my own Linux container from scratch in C. I've borrowed code from several places and put up a basic version with namespaces & cgroups.
Basically, I clone a new process with all the CLONE_NEW* flags to create new namespaces for the clone'ed process.
I also set up UID mapping by inserting 0 0 1000 into the uid_map and gid_map files. I want to ensure that the root inside the container is mapped to the root outside.
For the filesystem, I am using a base image of stretch created with debootstrap.
Now, I am trying to set up the network connectivity from inside the container. I used this script to setup the interface inside the container. This script creates a new network-namespace of its own. I edited it slightly to mount the net-namespace of the created process onto the newly created net-namespace via the script.
mount --bind /proc/$PID/ns/net /var/run/netns/demo

I can just get into the new network namespace as follows:
ip netns exec ${NS} /bin/bash --rcfile <(echo "PS1=\"${NS}> \"")

and successfully ping outside.
But from the bash shell when I get inside the clone'ed process by default I am unable to PING. I get the error:
ping: socket: Operation not permitted

I've tried setting up capabilities: cap_net_raw and cap_net_admin
I would like some guidance.
| Ping not working in a new C container |
I would use an option kind of like:

firejail --interface=eth0.vlan100 --ip=someipaddress someprogram

Support for ipvlan driver was introduced in Linux kernel 3.19.

Found Here: man firejail | Firejail
|
I have this local network service and this client program needing to access it. I am running them both as an unprivileged user.
I am looking for a way to sandbox the client using firejail, in a way that it cannot access network, except for localhost (or even better, except for that service).
first thing I tried was of course
firejail --net=lo program

But it didn’t work.
Error: cannot attach to lo device

I think I could work around it by creating a virtual network interface, for example veth0 and veth1,
moving veth1 to a new network namespace in which I’d run the service
and using firejail to restrain the client to veth0
Is there a way to actually automate this setting in a firejail profile, so that all of these interfaces are created and veth1 is moved when I type
firejail server

(without having to run anything as root)?
Or is there a simpler way to solve this problem? (I cannot run both the client and the service in the same namespace, because the service needs to access the network)
| firejail : only let a program access localhost |
The solution is to mark new connections and use the mark for policy routing:
iptables -t mangle -A PREROUTING -i ve006 -m connmark --mark 0 -j CONNMARK --set-mark 6
iptables -t mangle -A PREROUTING -i ve010 -m connmark --mark 0 -j CONNMARK --set-mark 10
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark

(The connmark match needs an explicit --mark argument, so --mark 0 restricts the marking to still-unmarked connections. The marking has to happen in mangle PREROUTING, before the routing decision, and --restore-mark copies the connection mark back onto each packet so that ip rule can test it as an fwmark.)

ip rule has a test for fwmark. Thus you create a routing table for ve006 and one for ve010.
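As with any named routing tables, ve006 and ve010 have to be declared in /etc/iproute2/rt_tables first (the table numbers here are arbitrary, made-up values):

echo "6 ve006" >> /etc/iproute2/rt_tables
echo "10 ve010" >> /etc/iproute2/rt_tables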
ip route add default table ve006 via a.b.c.51 dev ve006
# .51 again, typo?
ip route add default table ve010 via a.b.c.51 dev ve010

ip rule add pref 100 iif ve998 fwmark 6 table ve006
ip rule add pref 101 iif ve998 fwmark 10 table ve010 |
Basically trying to bend bridging and NATing to my will with quite a unique project.
I've simplified what I'm doing below (VM=Kali virtual machine for testing):

ZoneX's are network namespaces, vexxx's are virtual links created with ip link
The premise is to create a gateway for the LAN which can divert traffic (based on what it is) to either ZoneX or ZoneY modify the traffic and forward it to ZoneZ and finally out to the real networks gateway.
I've tried quite a few different things, however the main problem is either from creating a layer2 storm... not nice in VM's... or the NAT net namespace (ZoneZ) forwards the return traffic via the first interface in the NAT table for the client VM (which is sometimes incorrect).
The main aim is to split the traffic to multiple zones but have the return traffic take the same route back; that's the clincher! The next stage is then to be able to chain multiple Zones together to modify the traffic in multiple ways.
*** EDIT
A connection example would be a DNS lookup to 8.8.8.8 and a TCP request to 8.8.8.8, both from the VM.
Firstly the DNS request passes to eth0 over brA to ve001, to ZoneA where the packet is marked (using iptables) and passed to ve003 > ve004 etc. to ve006 where it is NAT'd and sent out to the internet. When the response returns to ZoneZ (the NAT zone) the lookup in the NAT table is done and the packet is routed to ve006 because the ARP entry for the VM machine points to that interface.
The main trouble comes when I have other traffic I want to forward via the bottom route. Same as before until ZoneA; however, this time it is routed down to ve007, through ZoneY and finally into ZoneZ; it's then passed over the NAT gw and onto the internet. However, when a reply is received for this connection the packets go to ZoneZ, the lookup is done in the NAT table, it's translated, and then the ARP table lookup is done; this is when it forwards the reply back via ve006, which is wrong: I want it to go back the way it came (in this case via ve010).
I guess my question should be, can I get the NAT table to record the interface it was presented from and forward it back via that?
| How to create an internal multipath gateway |
According to the manual, redirect-gateway def1 doesn't try to replace the default route, it just creates two new ones 0.0.0.0/1 and 128.0.0.0/1. How does not having a default route prevent these from being created?

If OpenVPN were only to override the default gateway it would no longer be able to get to its peer endpoint via that original gateway. So what it does first is to look at the default gateway and set an explicit host route for its peer endpoint via that gateway.
The man page for OpenVPN is quite explicit about this:

--redirect-gateway flags...
Automatically execute routing commands to cause all outgoing IP
traffic to be redirected over the VPN. This is a client-side
option.
This option performs three steps:
(1) Create a static route for the --remote address which forwards to
the pre-existing default gateway. This is done so that (3) will not
create a routing loop.
(2) Delete the default gateway route.
(3) Set the new default gateway to be the VPN endpoint address
(derived either from --route-gateway or the second parameter to
--ifconfig when --dev tun is specified).
When the tunnel is torn down, all of the above steps are reversed so
that the original default route is restored.

You then ask

how can I get OpenVPN to automatically route all traffic through the VPN when a default route doesn't initially exist?

A route must exist (somewhere) for the OpenVPN client to get to its peer endpoint (i.e. the server). So whatever that route is, you will need to implement the three tasks described above in a --up script. You'll need to write this script, of course.
|
Note
This question was originally asked about OpenVPN not setting a default gateway if one did not already exist even if you specify --redirect-gateway local or --redirect-gateway def1. As of 2.4.1, OpenVPN does set a default gateway with this option whether or not one exists already, so it's obsolete if you're using that version.
Note that the version of OpenVPN in Ubuntu Zesty is 2.4.0, and doesn't have this change. But the version from Artful installs without issue on Zesty.

I have created a network namespace under Ubuntu 14.04.5, with a plan to run a VPN inside that (using OpenVPN 2.3.2 and a config file). The idea is that no traffic can be routed out of the namespace until the VPN is running.
I've implemented this by not creating a default route, and instead whitelisting the VPN server IP address. So my routing table looks like:
# ip netns exec testns ip route add A.B.C.D/32 via 10.200.200.1 dev veth1
# ip netns exec testns ip route show
10.200.200.0/24 dev veth1 proto kernel scope link src 10.200.200.2
A.B.C.D via 10.200.200.1 dev veth1

Note that 10.200.200.1 is the address of veth0, the other end of the virtual interface veth1 (10.200.200.2). I've confirmed that my iptables rules and IP forwarding in the root namespace work to get traffic in and out of the testns namespace when there's a route for it. A.B.C.D is the VPN server address.
To bring the VPN up, I run ip netns exec testns openvpn abcd.ovpn. This config file contains a pull directive, and the pushed config from the server contains:
PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option[...]

But then a little later, I see:
NOTE: unable to redirect default gateway -- Cannot read current default gateway from system

And, accordingly, there is no default route set up in my testns namespace.
According to the manual, redirect-gateway def1 doesn't try to replace the default route, it just creates two new ones 0.0.0.0/1 and 128.0.0.0/1. How does not having a default route prevent these from being created? And how can I get OpenVPN to automatically route all traffic through the VPN when a default route doesn't initially exist?
| Can OpenVPN create the default route if it doesn't exist? |
is packet marking only local to the network namespace the mark is being placed in?

Yes, the mark is local to the network namespace. Each namespace has an independent network stack, so when a packet transits from one namespace to another it's like transiting over the wire: no mark remains.

I installed a TRACE rule inside the network namespace

It depends...
If using iptables-legacy's TRACE target the choices are limited:

by default only the initial network namespace can log netfilter events to dmesg
or with sysctl -w net.netfilter.nf_log_all_netns=1 then all network namespaces will log to dmesg which can be a problem when a lot of logs are generated by a lot of network namespaces

That's because dmesg is not per-namespace but global and sending logs to dmesg was initially the only available method for TRACE.
Now, if using nftables' nftrace statement or using iptables-nft, the previous method to send messages with the TRACE target is replaced by using the (netfilter) netlink socket API which is namespace-aware and is sent only to listeners (ie: multicasting).
That means that when the nft variant like below:
# iptables-nft -V
iptables v1.8.7 (nf_tables)

but not the legacy variant like below:
# iptables-legacy -V
iptables v1.8.7 (legacy)

is used, then traces aren't sent to dmesg anymore but can be captured with xtables-monitor --trace instead. Again: xtables-monitor is intended only for the iptables-nft variant of iptables.
In this case one way to debug in parallel multiple network namespaces created by ip netns add ..., is to run multiple times in parallel xtables-monitor, once per network namespace, and write in separate logs or for example use ts to tag every line of the output to have a timestamp and identify each namespace, making it easy to split the result later if needed.
Something like this for netns foo bar and baz:
for ns in foo bar baz; do
ip netns exec "$ns" xtables-monitor --trace | ts -s "%.s $ns" &
done

( pkill xtables-monitor might be required later.)
With network namespaces not created through ip netns add one can replace ip netns exec with nsenter and more elbow grease usually involving information from the application that created them (docker inspect, lxc-info ...)
The command when using nftables is instead nft monitor trace and behaves the same with regard to network namespaces. Actually nft monitor trace will also display traces created by iptables-nft's TRACE target since it's the same API.
|
I am trying to use iptables to packet mark packets of a certain source/destination IP in the mangle table on a given host. The packets are later forwarded to a particular network namespace on the same host, yet the iptables rules that I've installed in that network namespace do not pick up on the mark. I am thus wondering: is packet marking only local to the network namespace the mark is being placed in? I was under the impression that since the mark is an "attribute" associated with the skb, that the kernel would keep track of the mark anywhere it's routed on the host, irrespective of the namespace.
Alternatively does anyone have any ideas of how to debug this? I installed a TRACE rule inside the network namespace I am targeting, but I am under the impression that I need to run dmesg to view the output and that doesn't make a lot of sense for a network namespace.
| Linux packet mark across network namespaces |
There are a number of different namespace types, and Cgroup is one of them:

Cgroup
IPC
Network
Mount
PID
Time
User
UTS (hostname and NIS domain name)

But cgroups and cgroup namespaces are manipulated differently; cgroup namespaces virtualise cgroup hierarchies. Most of the time you’d only use cgroups directly, without caring about cgroup namespaces.
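One quick way to see the virtualisation at work (a sketch, assuming util-linux's unshare and a cgroup-namespace-capable kernel):

cat /proc/self/cgroup                               # full cgroup paths, as seen from the host
sudo unshare --cgroup --fork cat /proc/self/cgroup  # inside a new cgroup namespace the same cgroup shows up as /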
|
Is cgroup a type of namespace?
I am asking this because I have seen blogs talking as if cgroups and namespaces are different things. However, in different Linux commands, cgroup is considered a type of namespace. For example,
% unshare --help | grep cgroup
-C, --cgroup[=<file>] unshare cgroup namespace
% lsns --help | grep cgroup
-t, --type <name> namespace type (mnt, net, ipc, user, pid, uts, cgroup)

What is actually going on here?
| Is cgroup a type of namespace? |
TL;DR
Linux adds implicit routes when adding addresses. When the address is a /32 there can't be an implicit route added. You then need to manually add the route to the other IP(s). When it's intended to route only to one destination (a LAN or a single IP; when it's symmetric, it's often just the peer's IP), this can be abbreviated with ip address's peer parameter to add the route in the same shot. So instead of the two ifconfig commands, use this (with newer ip commands), and ping will work:
ip -n net1 link set veth1 up
ip -n net2 link set veth2 up
ip -n net1 address add 10.0.15.1 peer 10.0.15.2 dev veth1
ip -n net2 address add 10.0.15.2 peer 10.0.15.1 dev veth2

Longer answer:
A few remarks first: Your issue is not related to network namespaces, but to routing alone, along with Linux peculiarities. Anyway network namespaces are very handy to do the mockup for a whole setup.
On Linux, you should drop the use of the ifconfig command (as well as the route, brctl etc. commands) in favour of the set of commands provided by iproute2. ifconfig's API (ioctl) has been half abandoned on Linux in favour of the netlink API, so some newer features may be available only through the ip ... kind of command.
Recent enough iproute2 tools have for most of their sub-commands the -netns option as a shortcut when using namespaces: ip -netns net1 FOO is equivalent to ip netns exec net1 ip FOO. I'll be using this shortcut when possible below. Many commands have abbreviated versions (eg: ip addr or even ip a instead of ip address) and some parameter keywords can be omitted. I won't use abbreviations here (except -n instead of -netns).

Linux implicitly adds a route when an address with a netmask is set. When the netmask is /32 there can't be a route added this way (or actually there still is: a local scope route, but it's hidden in the local routing table (ip -n net1 route show table local) rather than in the main routing table). This addition can be prevented when using ip address add with the flag noprefixroute, for some setups where the implicit route is not wanted. Note that ifconfig also adds a few default settings, like the default (all host's bits set variant) broadcast address. Those must be explicitly asked for when using ip address.
Here are a few examples to help understand what's happening
Right before adding the addresses, run ip monitor in one of the namespaces, in a separate terminal. It will display any of a lot of possible network changes in the network namespace; then run the first address addition (ip netns exec net1 ifconfig veth1 10.0.15.1/24 up). Here's what you can typically get:
# ip -4 -n net1 monitor route
local 10.0.15.1 dev veth1 table local proto kernel scope host src 10.0.15.1
broadcast 10.255.255.255 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
10.0.0.0/8 dev veth1 proto kernel scope link src 10.0.15.1 linkdown
broadcast 10.0.0.0 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
Deleted 10.0.0.0/8 dev veth1 proto kernel scope link src 10.0.15.1 linkdown
Deleted broadcast 10.255.255.255 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
Deleted broadcast 10.0.0.0 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
Deleted local 10.0.15.1 dev veth1 table local proto kernel scope host src 10.0.15.1
local 10.0.15.1 dev veth1 table local proto kernel scope host src 10.0.15.1
broadcast 10.0.15.255 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
10.0.15.0/24 dev veth1 proto kernel scope link src 10.0.15.1 linkdown
broadcast 10.0.15.0 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown

It appears here that ifconfig is working inefficiently: it first adds a /8 route then removes it and puts the asked /24 route. That's probably a leftover from the past never corrected. Here would be the equivalent with ip -n net1 link set veth1 up; ip -n net1 address add 10.0.15.1/24 broadcast + dev veth1:
local 10.0.15.1 dev veth1 table local proto kernel scope host src 10.0.15.1
broadcast 10.0.15.255 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown
10.0.15.0/24 dev veth1 proto kernel scope link src 10.0.15.1 linkdown
broadcast 10.0.15.0 dev veth1 table local proto kernel scope link src 10.0.15.1 linkdown

So what if you don't want any /24? You can set /32 IPs and add the route yourself, like this (at the same time I rewrite a(n arguably) shorter version of the whole setup):
ip netns add net1
ip netns add net2
ip -n net1 link set lo up
ip -n net2 link set lo up
ip -n net1 link add name veth1 type veth peer netns net2 name veth2

Set the interfaces up (can be done before or after, this won't change the final result once it's up):
ip -n net1 link set veth1 up
ip -n net2 link set veth2 up

Addresses:
ip -n net1 address add 10.0.15.1/32 dev veth1
ip -n net2 address add 10.0.15.2/32 dev veth2

Here an ip -4 -n net1 monitor route would have only shown local 10.0.15.1 dev veth1 table local proto kernel scope host src 10.0.15.1: only the hidden local route.
Routes:
ip -n net1 route add 10.0.15.2/32 dev veth1
ip -n net2 route add 10.0.15.1/32 dev veth2

The two address values can be completely unrelated. You could likewise use 192.0.2.1 and 198.51.100.2. Some hosting providers use this mechanism to provide additional "failover IPs". Actually there's a shortcut for this specific case, where again a route will be added along with the address in one shot, as long as the relevant information is provided. So instead of the 4 previous commands, this is enough:
ip -n net1 address add 10.0.15.1 peer 10.0.15.2 dev veth1
ip -n net2 address add 10.0.15.2 peer 10.0.15.1 dev veth2

Note that in all cases, those are still Ethernet interfaces, not point-to-point, so there will still be ARP requests done at the link layer to find the other IP.
Final note: if you intend to use more than two network namespaces together, you should probably revert to using a LAN netmask, and you'll very likely need to create a bridge interface (you can put it in the original namespace or in one of the newly created namespaces, but I advise you to put it in its own reserved namespace, to avoid unforeseen interactions). Then for each pair of veth interfaces, one side should be put in the bridge's network namespace and enslaved to the bridge (eg: ip -n mybridgens link set vethp1 master bridge0).
|
I just started studying network namespaces and I'm looking at a very common example. This is just connecting two namespaces thanks to two veth (no bridge involved).
ip netns add net1
ip netns add net2
ip netns exec net1 ifconfig lo up
ip netns exec net2 ifconfig lo up
ip link add veth1 type veth peer name veth2
ip link set veth1 netns net1
ip link set veth2 netns net2
ip netns exec net1 ifconfig veth1 10.0.15.1/24 up
ip netns exec net2 ifconfig veth2 10.0.15.2/24 up
ip netns exec net1 ping 10.0.15.2

The output is:
PING 10.0.15.2 (10.0.15.2) 56(84) bytes of data.
64 bytes from 10.0.15.2: icmp_seq=1 ttl=64 time=0.355 ms
64 bytes from 10.0.15.2: icmp_seq=2 ttl=64 time=0.189 ms
64 bytes from 10.0.15.2: icmp_seq=3 ttl=64 time=0.187 ms
64 bytes from 10.0.15.2: icmp_seq=4 ttl=64 time=0.184 ms
64 bytes from 10.0.15.2: icmp_seq=5 ttl=64 time=0.158 ms
64 bytes from 10.0.15.2: icmp_seq=6 ttl=64 time=0.300 ms
64 bytes from 10.0.15.2: icmp_seq=7 ttl=64 time=0.189 ms
64 bytes from 10.0.15.2: icmp_seq=8 ttl=64 time=0.186 ms
64 bytes from 10.0.15.2: icmp_seq=9 ttl=64 time=0.186 ms

What I can't understand is why, in every example I saw, when we give an IP to the veth we always give it a /24 address.
When I tried giving it a single IP:
ip netns exec net2 ifconfig veth2 10.0.15.2/32 up

the output I got was:
PING 10.0.15.2 (10.0.15.2) 56(84) bytes of data.
From 10.0.15.1 icmp_seq=9 Destination Host Unreachable
From 10.0.15.1 icmp_seq=10 Destination Host Unreachable
From 10.0.15.1 icmp_seq=11 Destination Host Unreachable
From 10.0.15.1 icmp_seq=12 Destination Host Unreachable
From 10.0.15.1 icmp_seq=13 Destination Host Unreachable
From 10.0.15.1 icmp_seq=14 Destination Host Unreachable
From 10.0.15.1 icmp_seq=15 Destination Host Unreachable

Why am I not able to give it a single IP address?
| Directly connecting two namespaces using veth devices, what network prefix should I use? |
A tun/tap interface always belongs to some application: packets sent to the interface get read by the application, and packets written by the application enter the kernel network stack through this interface.
Typically, you'll connect up network namespace with virtual ethernet pairs (veth). They just forward packets to the other interface of the pair.
Nothing stops you from writing an application that does exactly this: Open two tun/tap interfaces, read packets from one and forward it to the other, and vice versa. There are also ready-made applications you could use to do this, e.g. socat.
You can even write two applications, where each application opens a single tun/tap interface, and the applications communicate with each other using other means, and implement forwarding this way. Basically all VPN applications work this way (though for VPN applications "by other means" is typically "over an existing network connection", so it doesn't really count).
So yes, with the right application(s), you can connect namespaces with tun/tap interfaces. However, in general it doesn't make a lot of sense to do that, because you have to write such an application, and it will be less efficient than just using a veth-pair.
Edit
I tried moving one tun interface of socat into a network namespace ns0 I created, and it works fine as I had expected, despite socat running in the main network namespace:
socat TUN:10.1.0.254/24,tun-name=tun0a,iff-up TUN:10.1.0.1/24,tun-name=tun0b
ip link set tun0b netns ns0
and then you again have to set the address for tun0b after the move.
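For example, a sketch of restoring the address that socat originally configured:
ip -n ns0 addr add 10.1.0.1/24 dev tun0b
ip -n ns0 link set tun0b up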
So the "crossover" happens by having one (or both) tun/tap network interface(s) in a different namespace than the process.
|
I am trying to understand the difference between different types of (virtual) interfaces (e.g. TUN/TAP, veth etc.) and was studying some of these types within the context of containers.
Is it possible to send packets from a container (in its own network namespace) to the host's network namespace using only TUN/TAP interfaces, or is a veth pair (one end in each namespace) required to do this?
From my understanding, TUN/TAP interfaces can only be used to send/receive packets to/from userspace from/to the network stack corresponding to the network namespace of that interface and not send packets between network namespaces. Is this correct?
| Is it possible to send packets between network namespaces using only TUN/TAP interfaces? |
To do this without manual bridging (brctl, etc) and re-use the physical interface I went with VLANs.
Assumptions: eth0 is the physical interface
What I did:
Create the VLAN interface: ip link add link eth0 name vlan1 type vlan id 1
Assign an IP to the interface: ip addr add x.x.x.x/24 brd x.x.x.x dev vlan1
Up the interface: ip link set dev vlan1 up
If one has a bond interface, the same can be applied: instead of using the ethX interface, just use the bond one.
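If, as in the question, you also want the VNIC to function in its own network namespace, a minimal sketch (the namespace name myns is made up):
ip netns add myns
ip link set vlan1 netns myns
ip -n myns addr add x.x.x.x/24 dev vlan1
ip -n myns link set vlan1 up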
|
How do I create a VNIC interface in Linux?
What I want to do is create an interface that is linked in some way to a physical interface but functions in its own namespace.
I know the physical interface could be bridged; but this doesn't quite do what I want it to. I can also alias the interface but that too doesn't quite do what I want it to.
For example, in Solaris I can create a VNIC like so: dladm create-vnic -l <phys> <vnic_name>
| Virtual NIC's in Linux? |
It turned out the trick was to disable ufw:
sudo ufw disable
And then I flushed the iptables rules and re-added them, and re-wrote /etc/resolv.conf after NetworkManager overwrote it for some reason.
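For reference, the flush looked roughly like this (a sketch; your own saved rules are restored afterwards):
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -X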
Now it all works perfectly.
|
I am trying to use a network namespace for VPN-specific traffic, using this guide: https://schnouki.net/posts/2014/12/12/openvpn-for-a-single-application-on-linux/ on Debian.
Everything works with regard to setting up the namespace and the bridge, as shown here. The namespace is named piavpn, the veth on the namespace side is vpn1 and on the main side is vpn0. However, I cannot access the internet or the main network from the namespace.
On the namespace:
sudo ip netns exec piavpn ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: vpn1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether da:8f:25:6f:47:74 brd ff:ff:ff:ff:ff:ff
inet 10.200.200.2/24 scope global vpn1
valid_lft forever preferred_lft forever
inet6 fe80::d88f:25ff:fe6f:4774/64 scope link
valid_lft forever preferred_lft forever
On the normal network:
ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:90:f5:eb:90:24 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 68:17:29:90:f5:ba brd ff:ff:ff:ff:ff:ff
inet 192.168.0.16/24 brd 192.168.0.255 scope global dynamic wlan0
valid_lft 80406sec preferred_lft 80406sec
inet6 fe80::6a17:29ff:fe90:f5ba/64 scope link
valid_lft forever preferred_lft forever
4: vmnet1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
5: vmnet8: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
8: vpn0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2a:19:71:d5:79:29 brd ff:ff:ff:ff:ff:ff
inet 10.200.200.1/24 scope global vpn0
valid_lft forever preferred_lft forever
inet6 fe80::2819:71ff:fed5:7929/64 scope link
valid_lft forever preferred_lft forever
Pinging works both ways:
ping 10.200.200.2
PING 10.200.200.2 (10.200.200.2) 56(84) bytes of data.
64 bytes from 10.200.200.2: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 10.200.200.2: icmp_seq=2 ttl=64 time=0.068 ms
sudo ip netns exec piavpn ping 10.200.200.1
PING 10.200.200.1 (10.200.200.1) 56(84) bytes of data.
64 bytes from 10.200.200.1: icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from 10.200.200.1: icmp_seq=2 ttl=64 time=0.040 ms
However, I cannot access the internet or the main network from the namespace. I think it must be an iptables issue, as I have IPv4 forwarding enabled in sysctl.
My iptables rules are here: https://gist.github.com/anonymous/a1b440f1d3538be6557d
The NAT iptables rules are:
sudo iptables -t nat --list
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.200.200.0/24 anywhere
MASQUERADE all -- 10.200.200.0/24 anywhere
MASQUERADE all -- 10.200.200.0/24 anywhere
MASQUERADE all -- 10.200.200.0/24 anywhere
MASQUERADE all -- anywhere anywhere
MASQUERADE all -- 10.0.0.0/8 anywhere
Clearly it's become messy from my multiple attempts, but it should be permissive.
Until I get general connectivity from the namespace there is no point in worrying about the VPN.
| Using a VPN for certain applications via a network namespace |
Docker changes the default forward policy to DROP:
"Docker also sets the policy for the FORWARD chain to DROP. If your Docker host also acts as a router, this will result in that router not forwarding any traffic anymore."
As happens in OP's case:
Chain FORWARD (policy DROP 2 packets, 168 bytes)
The documentation also tells:
"If you want your system to continue functioning as a router, you can add explicit ACCEPT rules to the DOCKER-USER chain to allow it:"
The fix is to enable such traffic in the DOCKER-USER chain (inserting into this chain rather than appending, because there's a final RETURN target).
The minimal fix to have the experiment unhindered without allowing more would thus be:
iptables -I DOCKER-USER 1 -i RA -o RB -j ACCEPT
iptables -I DOCKER-USER 2 -i RB -o RA -j ACCEPT |
I am trying to communicate between two network namespaces that are connected through the root namespaces using veth pairs as seen in the diagram. I am unable to perform a ping from netns A to netns B. Additionally I can ping from root namespace to both netns A (VA IP) and B (VB IP).
+-------+ +-------+
| A | | B |
+-------+ +-------+
| VA | VB
| |
| RA | RB
+-------------------------+
| |
| Root namespace |
| |
+-------------------------+
ip netns add A
ip netns add B
ip link add VA type veth peer name RA
ip link add VB type veth peer name RB
ip link set VA netns A
ip link set VB netns B
ip addr add 192.168.101.1/24 dev RA
ip addr add 192.168.102.1/24 dev RB
ip link set RA up
ip link set RB up
ip netns exec A ip addr add 192.168.101.2/24 dev VA
ip netns exec B ip addr add 192.168.102.2/24 dev VB
ip netns exec A ip link set VA up
ip netns exec B ip link set VB up
ip netns exec A ip route add default via 192.168.101.1
ip netns exec B ip route add default via 192.168.102.1
I have tried enabling IP forwarding, and there are no iptables rules blocking the traffic.
The same works when instead of using root namespace I use another namespace called transit and connect it like below.
+-------+ VA RA +-------+ RB VB +-------+
| A |--------|transit|---------| B |
+-------+ +-------+ +-------+ +-------------------------+
| |
| Root namespace |
| |
+-------------------------+
Here I am successful in pinging between namespaces A and B.
Why is the traffic dropped in the root namespace, but not when a third transit namespace is used instead?
There are a few iptables rules installed by Docker, but I do not see any conflict.
rahul@inception:~$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 2 packets, 168 bytes)
pkts bytes target prot opt in out source destination
2 168 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 168 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
nft list format
rahul@inception:~$ sudo nft list ruleset
table ip nat {
chain DOCKER {
iifname "docker0" counter packets 0 bytes 0 return
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 90 masquerade
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
fib daddr type local counter packets 148 bytes 11544 jump DOCKER
}
chain OUTPUT {
type nat hook output priority -100; policy accept;
ip daddr != 127.0.0.0/8 fib daddr type local counter packets 3 bytes 258 jump DOCKER
}
}
table ip filter {
chain DOCKER {
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
counter packets 2 bytes 168 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "docker0" counter packets 0 bytes 0 drop
counter packets 0 bytes 0 return
}
chain FORWARD {
type filter hook forward priority filter; policy drop;
counter packets 2 bytes 168 jump DOCKER-USER
counter packets 2 bytes 168 jump DOCKER-ISOLATION-STAGE-1
oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
}
chain DOCKER-USER {
counter packets 2 bytes 168 return
}
}ip route
rahul@inception:~$ ip route
default via 192.168.0.1 dev wlo1 proto dhcp metric 600
169.254.0.0/16 dev wlo1 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev wlo1 proto kernel scope link src 192.168.0.101 metric 600
192.168.101.0/24 dev RA proto kernel scope link src 192.168.101.1
192.168.102.0/24 dev RB proto kernel scope link src 192.168.102.1
Using tcpdump, I found that the packets are reaching the root namespace.
Is there any debugging tool that I can learn and use to see where a packet traverses inside the namespaces (something like strace or ftrace, but for packets)?
| Root network namespace as transit between 2 other net namespaces |
It is easier to use other virtualized networking instead of bridges for this purpose.
"Don't use veth + bridge! Use macvlan!"
(from https://unix.stackexchange.com/a/546090/568691)
Add the namespace
ip netns add sample_ns
Create the macvlan link, attaching it to the parent host interface enp0s3 (https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking/#macvlan)
ip link add mv1 link enp0s3 address 00:11:22:33:44:55 type macvlan mode bridge
If the address 00:11:22:33:44:55 part is omitted, then a random MAC address will be generated. If you want to use the same MAC as your physical enp0s3 or eth0 etc., then you can use ipvlan in layer 2 mode instead of macvlan (https://www.kernel.org/doc/Documentation/networking/ipvlan.txt).
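For reference, the corresponding ipvlan variant might look like this (a sketch; the interface name is made up):
ip link add ipv1 link enp0s3 type ipvlan mode l2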
Move the new interface mv1 to the new namespace
ip link set mv1 netns sample_ns
Bring the interfaces up
ip netns exec sample_ns ip link set dev lo up
ip netns exec sample_ns ip link set dev mv1 up
Set IP addresses with dhclient - usually this also sets the routing.
ip netns exec sample_ns dhclient mv1
#if dhclient does not work, add ip and route manually:
#ip netns exec sample_ns ip addr add $your_ip/24 dev mv1
#ip netns exec sample_ns ip route add default via $your_gateway dev mv1
Test if the new namespace has the desired connectivity. Note that I am using fping to check if ping was successful or not (install fping).
ip netns exec sample_ns sudo fping 1.1.1.1 &>/dev/null && echo "Internet connectivity okay"
ip netns exec sample_ns sudo fping 1.1.1.1 &>/dev/null || echo "No internet"
ip netns exec sample_ns sudo fping google.com &>/dev/null || echo "No DNS service"
Now we only have to make the X server accessible from the new network namespace - this is only necessary if you cannot already open programs from the new namespace.
In default namespace:
sudo -u $USER xhost +local:$USER
ip netns exec sample_ns bash
export DISPLAY=:0
This exports the DISPLAY environment variable; yours may be different, like :1 or :99. You can check in the default namespace with
echo $DISPLAY
Now run some programs from the new namespace
sudo ip netns exec sample_ns sudo -u $USER bash
/usr/bin/chromium-browser %U
or if you want to do it at once:
sudo ip netns exec sample_ns sudo -u $USER /usr/bin/chromium-browser %U
If you want to continue using the bash opened in this new namespace, you can add & to the end of the command to make it independent from this parent bash, like so:
/usr/bin/chromium-browser %U &
This way if you press CTRL+C the program will keep running, but you can get back your bash to run additional programs from it. Or you can just open multiple terminals.
Troubleshooting:
#enable forwarding
sysctl -w net.ipv4.ip_forward=1
#to access snap packages in the ns, run in the ns: (“error: cannot find tracking cgroup”)
sudo mount -t cgroup2 cgroup2 /sys/fs/cgroup
sudo mount -t securityfs securityfs /sys/kernel/security/
Additional sources:
Trying to run OpenVPN in Network Namespace
linux namespace, How to connect internet in network namespace?
Bind unix program to specific network interface
https://networkstatic.net/configuring-macvlan-ipvlan-linux-networking/
|
My aim is to route the default namespace through my VPN, and create a new namespace which does not route through the VPN
(so I can selectively launch programs that should not have access to the remote VPN network).
lan address: 10.0.2.15/24 on enp0s3
vpn address: 10.111.0.10/24 on tun1
# enable forwarding
sysctl -w net.ipv4.ip_forward=1
# create the network namespace
ip netns add sample_ns
# create the virtual nic and its peer
ip link add virt_out type veth peer name virt_in
# assign the peer to the network namespace
ip link set virt_in netns sample_ns
# bring up interface
ip link set virt_out up
#Create a new bridge and change its state to up:
ip link add name bridge_name type bridge
#To add an interface (e.g. eth0) into the bridge, its state must be up:
ip link set enp0s3 up
#Adding the interface into the bridge is done by setting its master to bridge_name:
ip link set enp0s3 master bridge_name
ip link set virt_out master bridge_name
#To show the existing bridges and associated interfaces, use the bridge utility (also part of iproute2). See bridge(8) for details.
bridge link
# assign an ip address
ip addr add 10.0.3.1/24 dev virt_out
#network setup for network namespace
ip netns exec sample_ns ip link set lo up
ip netns exec sample_ns ip addr add 10.0.3.2/24 dev virt_in
ip netns exec sample_ns ip link set virt_in up
ip netns exec sample_ns ip route add default via 10.0.3.2 dev virt_in
# allow forwarding and enable NAT
iptables -I FORWARD -s 10.0.3.0/24 -j ACCEPT
iptables -t nat -I POSTROUTING -s 10.0.3.0/24 -o enp0s3 -j MASQUERADE
# pop a shell in the namespace
ip netns exec sample_ns bash
# check that you're in the namespace
ip netns identify
# run the browser as your local user
runuser -u $USER google-chrome
#to access snap packages in the ns, run in the ns: (“error: cannot find tracking cgroup”)
sudo mount -t cgroup2 cgroup2 /sys/fs/cgroup
sudo mount -t securityfs securityfs /sys/kernel/security/
The aim is basically the reverse of namespaced-openvpn (https://github.com/slingamn/namespaced-openvpn), so the VPN-protected namespace is the default, and the non-VPN-connected namespace is the new one.
What I am doing currently does not work. I assume I should somehow add an IP address to the bridge, the virtual NIC, or enp0s3?
Thanks for any help!
Other sources:
https://forums.openvpn.net/viewtopic.php?f=15&t=25690
https://github.com/ausbin/nsdo
| How to use network namespaces for vpn split tunneling |
On Linux this setting isn't specific to veth so isn't documented along veth, but in the generic ip link command:
ip link add [ link DEVICE ] [ name ] NAME
[ txqueuelen PACKETS ]
[ address LLADDR ] [ broadcast LLADDR ]
[ mtu MTU ] [ index IDX ] ...
ip link set { DEVICE | group GROUP }
...
[ mtu MTU ] ...
You can use ip link set ... mtu 9000:
ip link set veth0 mtu 9000
Some interfaces might answer Error: mtu greater than device maximum because of hardware limits. That won't be the case for a virtual veth interface; its maximum MTU is 65535:
# ip -details link show veth0
68: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 9000 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2a:93:f8:8e:bc:b6 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
veth addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 with relevant part:
minmtu 68 maxmtu 65535
Note that for this to be useful you need the other side in another namespace. For example, here the other side of veth0 is veth1:
ip netns add experiment
ip link set veth1 peer netns experiment
ip -n experiment link set veth1 mtu 9000
etc. (bring interfaces up, add addresses, routes...)
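A sketch of those remaining steps (the addresses are made up):
ip link set veth0 up
ip addr add 192.168.50.1/24 dev veth0
ip -n experiment link set veth1 up
ip -n experiment addr add 192.168.50.2/24 dev veth1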
For the multicast part, you might be interested in this:
IP Multicasting with Socat
|
The sender needs to transmit large data packets to the receiver (which is on the same host with 1500 MTU) and I think this can be simulated using veth with 9000 MTU, from my reading on it. But I'm not able to figure out how exactly to do that - most of the veth tutorials/articles on the internet mention network namespaces and I'm not sure if I would need to create a network namespace to achieve this. Any pointers/suggestions would be helpful, thanks!
| How to setup veth with 9000 MTU to simulate sending and receiving large UDP multicast packets on the same host? |
There's no actual name existing in the low-level handling of namespaces. It's all handled by common actions or expectations done with every command in the iproute2 suite.
Assigning a name to an existing anonymous network namespace really means: make the iproute2 tools believe they did their usual settings when creating this network namespace.
So what is ip netns add foo really doing? It unshares to a new network namespace, and to keep this namespace existing even without process using it, mounts it. In the usual *nix philosophy, a namespace has an inode number representing a pseudo-file in the pseudo-filesystem nsfs. Almost no operation can be done on the file (not even read it), but it can be opened to use it as a token for namespace operations (eg setns(2)) and can also be mounted as a bind mount.
Alas, the link nsid is not directly usable: that's not a global value representing another namespace, but a local (locally unique but recyclable and not globally unique) ID representing a link to another namespace. Multiple values mean links to multiple other namespaces; twice the same value means two links to the same other namespace.
If you have to find the other namespaces starting only from this, I invite you to check my answer in this other Q/A where I made an answer about it. There's also a handy mapping tool available: plotnetcfg which can map all the network namespaces. While it does know about iproute2 methods, it doesn't appear to give a process id separately, instead it names a namespace with a PID from it when it's not "named" by iproute2. Redacted example (requiring jq), run as root:
# unshare -n -- sleep 999 &
[1] 677451
# plotnetcfg -f json | jq '.namespaces[].name'
""
[...]
"PID 268150 (systemd)"
"PID 345878 (systemd)"
[...]
"PID 677451 (sleep)"Here "" represents the initial namespace, the two systemd are from two LXC instances, and the sleep command running is also found.
To help for two common cases:
For LXC:
lxc-info -H -p -n containername
For Docker:
docker inspect --format '{{.State.Pid}}' containername
Once you have found, by whatever method available, a process in the intended network namespace, you can mimic what the iproute2 tools would do. What they do exactly might depend on their version. From strace, mine appear to do:
mkdir -p /run/netns
and then mount it over itself so it can be set as shared propagation:
mount --bind --make-shared /run/netns /run/netns
The above should only be done if not already done, and only once. To have the tools do it for you (and only when needed), simply create and delete a dummy namespace:
ip netns add dummy && ip netns delete dummy
Now pick a name, for example foo, create an empty file and change its mode to 000 (which is not needed, but mimics iproute2):
touch /run/netns/foo
chmod 0 /run/netns/foo
And finally mount the process' namespace given its PID (eg sleep from before: 677451):
mount --bind /proc/677451/ns/net /run/netns/fooThat's it. Even if the sleep command ends, the namespace will now survive. All iproute2 tools will now name it foo. For example if there was a veth interface connected to it, ip link would replace the link nsid's number with foo.
If you want to remove this network namespace, ip netns del foo will now do it (caveat: as usual the actual network namespace really disappears once there's no resource using it anymore, like this sleep command).
Further documentation on iproute2's specific additional features along network namespaces in an other answer of mine.
|
I have a Linux process that creates a network namespace without registering it in /run/netns. The process also has its own PID namespace. The network namespace does not have a name and I can only see the namespace's id:
# ip netns list-id
nsid 0
nsid 1
Is there a possibility to assign a name to the network namespace, so I can use the convenient ip-netns commands to show and manage the namespace?
I have found an article How to access an unnamed network namespace but it does not work for me because my process has own PID namespace so I do not see the process in the root namespace.
| How to assign a name to the existing anonymous network namespace |
The issue was that tcpdump was picking up the docker0 interface rather than listening on all interfaces.
Correct command was tcpdump -i any host 10.0.0.1
|
I am trying to capture a tcpdump of a set of processes running in the mininet network emulation framework.
Mininet works by putting each process/set of processes into its own network namespace and then connecting each network namespace via veth devices.
What I am trying to do is to take a tcpdump to get a bandwidth usage over time graph. This is however not the hard bit.
What I would expect is that I could just do sudo tcpdump host 10.0.0.1 (10.0.0.1 being one of the emulated nodes) to capture all the traffic flowing between the namespaces. However, I am getting 0 packets captured.
I have previously gotten this working, however I cannot remember how. Mininet is running inside the mininet docker container with net=host.
| tcpdump traffic in network namespaces in `net=host` dockercontainer |
When packets leave the network namespace they have (in your case) a source address in the 192.168.163.0/24 network. Locally, routing back into the network namespace works just fine, but once the packets leave your local system, you need to translate this source address into an address your next-hop/gateway knows how to route back to you.
This is what -j SNAT does in the POSTROUTING chain in the nat table. However, in your case, you only SNAT packets leaving the wlo1 interface. This is why routing via wlo1 works fine, but via tun0 (the VPN interface) fails.
When packets are routed through tun0, they still have a source address in the 192.168.163.0/24-network, and your VPN server does not know how to return packets from this source address.
To resolve this, you need to SNAT packets leaving the tun0-interface. Simplest option here (since the glue net addresses usually are dynamic) is to use the -j MASQUERADE target:
iptables -A POSTROUTING -t nat -s 192.168.163.0/24 -o tun0 -j MASQUERADE
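To confirm the rule is matching, you can watch its packet counters, for example:
iptables -t nat -L POSTROUTING -n -v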
|
I have created a network namespace (named ppn) to run certain application in it. This works perfectly but when my commercial VPN (based on OpenVPN) is also enabled it seems that the traffic is only unidirectional.
For the creation of the network namespace, this logic was followed (also same ip addresses used): https://askubuntu.com/a/499850/820897
When VPN is disabled pinging 8.8.8.8 from the network namespace works normally:
sudo ip netns exec ppn ping 8.8.8.8When VPN is enabled though, I get no ICMP echo replies although tcpdump -i tun0 host 8.8.8.8 logs the ICMP echo requests.
Below you find my iptables and ip route lists:wlo1 is on 192.168.2.106
tun0 is on 10.8.1.12sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -s 5.180.62.60/32 -i wlo1 -j ACCEPT
-A INPUT -s 5.180.62.60/32 -i enp5s0 -j ACCEPT
-A INPUT -i wlo1 -j DROP
-A INPUT -i enp5s0 -j DROP
-A OUTPUT -d 5.180.62.60/32 -o wlo1 -j ACCEPT
-A OUTPUT -d 5.180.62.60/32 -o enp5s0 -j ACCEPT
-A OUTPUT -o wlo1 -j DROP
-A OUTPUT -o enp5s0 -j DROPsudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -s 192.168.163.0/24 -o wlo1 -j SNAT --to-source 192.168.2.106ip route
0.0.0.0/1 via 10.8.1.1 dev tun0
default via 192.168.2.1 dev wlo1 proto dhcp metric 600
5.180.62.60 via 192.168.2.1 dev wlo1
10.8.1.0/24 dev tun0 proto kernel scope link src 10.8.1.12
128.0.0.0/1 via 10.8.1.1 dev tun0
169.254.0.0/16 dev tun0 scope link metric 1000
192.168.2.0/24 dev wlo1 proto kernel scope link src 192.168.2.106 metric 600
192.168.163.0/24 dev veth-b proto kernel scope link src 192.168.163.254 sudo ip netns exec ppn ip route
default via 192.168.163.254 dev veth-a
192.168.163.0/24 dev veth-a proto kernel scope link src 192.168.163.1
How could I make the network namespace also functional under the VPN?
-------EDIT-------
sysctl net.ipv4.ip_forward = 1 in my system
| Network namespace doesn't work with vpn |
unshare -mrn # which implies -U
Everything else below is run from the namespace(s) entered from above unless told otherwise.
Without using ip netns
touch $HOME/mynamespace
unshare --net=$HOME/mynamespace true
Here true ended, losing any PID reference, but a mount reference was kept, allowing this namespace to still exist.
The ip link set ... netns command can take a mountpoint or a process id as reference:
"netns NETNSNAME | PID - move the device to the network namespace associated with name NETNSNAME or process PID."
ip link add veth0 type veth peer name veth1
Without using ip netns, we can still create a PID in this namespace with a temporary sleep command, to get a PID reference for the ip link command and use it:
nsenter --net=$HOME/mynamespace sleep 99 & pid=$!
ip link set veth1 netns $pid
which gets:
# ip -br link
lo DOWN 00:00:00:00:00:00 <LOOPBACK>
veth0@if2 DOWN 86:c2:bc:ba:1a:01 <BROADCAST,MULTICAST>
# nsenter --net=$HOME/mynamespace ip -br link
lo DOWN 00:00:00:00:00:00 <LOOPBACK>
veth1@if3 DOWN 86:e3:a1:ce:48:4e <BROADCAST,MULTICAST> Using ip netns
ip netns requires using a shared mounted /run/netns to work, or it will create and mount one if not already mounted (which happens on first use). If it detects one already mounted, it won't create it, and this will fail later (Cannot create namespace file "/run/netns/foo": Permission denied) because it belongs to the real initial user namespace's root and thus can't be written into. If it's not created, it can't create it because it can't write in /run, which belongs to the real initial user namespace's root. Etc.; in all these various cases, ip netns fails when run from a user namespace without privileges.
Just manually mount one over the previous one, so ip netns will be happy:
If /run/netns exists and one wants to keep the current /run for some reason:
mount -t tmpfs --make-rshared tmpfs /run/netns
If /run/netns doesn't even exist, or even if it exists, one can override the whole /run:
One can't create a directory in the current /run so it also has to be mounted over, losing access to some other useful information, but opening access to other failing tools (like iptables-legacy -w which would otherwise tell Fatal: can't open lock file /run/xtables.lock: Permission denied).
mount -t tmpfs tmpfs /run
Now every standard ip netns, ip link or ip -n foo ... command will work as usual in the user namespace:
ip netns add mynamespace
ip link add name veth0 type veth peer netns mynamespace name veth1
which gets:
# ip -br link
lo DOWN 00:00:00:00:00:00 <LOOPBACK>
veth0@if2 DOWN 2a:98:7f:83:bf:9e <BROADCAST,MULTICAST>
# ip -n mynamespace -br link
lo DOWN 00:00:00:00:00:00 <LOOPBACK>
veth1@if2 DOWN 96:3c:5e:a6:a4:4a <BROADCAST,MULTICAST> |
I have a Linux network namespace bind mounted at ~/mynamespace as follows:
unshare -mrn;
touch ~/mynamespace; # executed in the console opened by the first command
unshare --net=~/mynamespace true; # executed in the console opened by the first command
How can I move an interface from the anonymous network namespace created by command #1 into the namespace bind mounted at ~/mynamespace?
Note that bind mounting the inner namespace or a copy of it into /var/run/netns is not an option in my case, not even temporarily. I think that the ip ... netns related commands will only accept a network namespace which is bind mounted in the more standard /var/run/netns/ directory. So I don't think the ip command will work in this case.
Also note that all commands above are run without root as an unprivileged user.
| How to move interface into nonstandard network namespace as unprivileged user |
fe80::/10 are link-local addresses. I think these are unroutable (though there may be some tricks to make them routable; I never tried).
If you want to play around with IPv6 routing, don't use these; instead, assign Unique Local Addresses (ULAs, range fc00::/7) in addition to the link-local addresses (which are usually autoconfigured) to your interfaces.
And you can also use ip -6 route get from ... to ... for debugging.
Set up routes as usual with ip route add ... while you are playing around.
Remember routes need to be set on all nodes (or in your case, namespaces) along the path of the packet, not only on the "forwarding" nodes (it's a typical beginner's mistake to forget that).
It's easier to make this work if you don't make your life difficult by trying to ping "from interfaces to interfaces" (I assume with -I).
Set up three namespaces A, B, C. Connect A and B with a veth pair, and connect B and C with a veth pair. Place ULA addresses (with correct subnets, one subnet for A/B, one subnet for B/C) on all four network interfaces. Set default routes on A and C (via the resp. B interface). Enable forwarding in B. Then, in A, try to ping the address on the near interface in B, then the address on the "far" interface in B, then C. Just use plain ping <addr>. See where it stops working (if it doesn't work right out of the box).
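A rough sketch of that setup, with made-up interface names and ULA subnets:
ip netns add A; ip netns add B; ip netns add C
ip link add ab type veth peer name ba
ip link add bc type veth peer name cb
ip link set ab netns A; ip link set ba netns B
ip link set bc netns B; ip link set cb netns C
ip -n A addr add fd00:a::1/64 dev ab; ip -n A link set ab up
ip -n B addr add fd00:a::2/64 dev ba; ip -n B link set ba up
ip -n B addr add fd00:b::1/64 dev bc; ip -n B link set bc up
ip -n C addr add fd00:b::2/64 dev cb; ip -n C link set cb up
ip -n A -6 route add default via fd00:a::2
ip -n C -6 route add default via fd00:b::1
ip netns exec B sysctl -w net.ipv6.conf.all.forwarding=1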
To debug, run tcpdump in four terminals on all four interfaces. Run an xterm (or multiple ones) in the network namespace, that makes debugging and set up a lot easier.
If it still doesn't work, make a new question, include all commands you used to make the above setup, include the pings that work, and for the pings that don't work, include the tcpdump output. (Remote debugging per Q&A is a PITA.)
|
I am currently trying to simulate a network using network namespaces under Linux. I have already set up the nodes and connected them, and they can ping each other, one hop at a time. But I am really struggling trying to enable IP forwarding.
I am using Ubuntu Server 21.04 and networking on my system is controlled by systemd-networkd. systemd's version is 247.3-3ubuntu3.4. net.ipv6.conf.all.forwarding and net.ipv4.ip_forward are already enabled. Because networkd is used, forwarding has to be enabled in configuration files additionally. For one of my namespaces, this looks as follows:
/etc/systemd/network/router1i.network:
[Match]
Name=router1i
[Network]
IPForward=yes
and /etc/systemd/network/router1i2.network:
[Match]
Name=router1i2
[Network]
IPForward=yes
Those (router1i and router1i2) are both veth interfaces and the only 2 interfaces in the namespace.
If I use the command ip -6 route get to fe80::1:0:200 iif router1i2 in the namespace, I get the correct answer fe80::1:0:200 from :: dev router1i2 proto kernel metric 256 iif router1i2 pref medium, because the route doesn't involve forwarding. If I use the similar command ip -6 route get to fe80::1:0:200 iif router1i, which starts from the other interface, the answer suddenly is RTNETLINK answers: Network is unreachable. So apparently, forwarding isn't enabled.
I already tried to get networkd to update by using networkctl reconfigure router1i from within the namespace, but it says Failed to reconfigure network interface router1i: No such device or address. This is strange, because when I use networkctl status router1i, it lists all the information correctly. A full reload using networkctl reload was also already tried and doesn't change anything.
I'm honestly pretty much at my wit's end. I don't even necessarily need to get it to work with networkd. Any idea or workaround would be very much appreciated.
Edit:
I have now exchanged the link-local addresses with Unique Local Addresses, as dirkt suggested. The routes are now selected correctly judging from the output of ip -6 route get. But I still can't ping other network interfaces. I'll add the details below, because I honestly can't find the error.
Configuration of the interfaces:
ubuntu@ubuntu:~$ sudo ip netns exec Router1 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group`default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: router1i@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether be:fc:8e:30:e4:18 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fd00:0:0:1000::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::bcfc:8eff:fe30:e418/64 scope link
valid_lft forever preferred_lft forever
8: router1i2@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d6:3f:e9:9a:93:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fd00:0:0:1001::1/63 scope global
valid_lft forever preferred_lft forever
inet6 fe80::d43f:e9ff:fe9a:93f3/64 scope link
valid_lft forever preferred_lft foreverIPv6 Routing Table:
ubuntu@ubuntu:~$ sudo ip netns exec Router1 ip -6 route
fd00:0:0:1000::/64 dev router1i proto kernel metric 256 pref medium
fd00:0:0:1000::/63 dev router1i2 proto kernel metric 256 pref medium
fe80::/64 dev router1i proto kernel metric 256 pref medium
fe80::/64 dev router1i2 proto kernel metric 256 pref medium
default via fd00:0:0:1001::2 dev router1i2 metric 1024 pref mediumOutput of ip netconf:
ubuntu@ubuntu:~$ sudo ip netns exec Router1 ip netconf
inet lo forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet router1i forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet router1i2 forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet all forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet default forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet6 lo forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet6 router1i forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet6 router1i2 forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet6 all forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off
inet6 default forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown offThis now works correctly:
ubuntu@ubuntu:~$ sudo ip netns exec Router1 ip -6 route get to fd00:0:0:1001::2 iif router1i
fd00:0:0:1001::2 from :: dev router1i2 proto kernel metric 256 iif router1i pref mediumBut when I try to actually ping that address from router1i, it says:
ubuntu@ubuntu:~$ sudo ip netns exec Router1 ping6 fd00:0:0:1001::2 -I router1i
ping6: connect: Network is unreachableForwarding is on and the correct route is selected, so why does it still not work?
Edit2:
I got it to work! Thanks to everyone who tried to help.
Apparently I misjudged what the ping with -I option does. This got me confused and my inexperience didn't help... In the end, I found out that the last piece missing was a wrong route in one of the outer namespaces that hindered it from answering the pings. I should have found that way sooner, but I got too obsessed with the forwarding issue...
So anyway, thanks again, and have a nice day!
| IP forwarding in linux namespaces |
You have Docker, which itself alters the firewall rules. I can't tell for sure whether Docker is the cause here, but you have iptables' default policy for filter/FORWARD set to DROP, preventing any routing not explicitly allowed.
EDIT: added the return direction.
To make your experiment work this should be enough (including the return traffic which must also be enabled):
iptables -A FORWARD -i v-eth1 -j ACCEPT
iptables -A FORWARD -o v-eth1 -j ACCEPT
Note that those could be complemented with the interface to/from the internet, but I don't have its name.
Usually, using the rules below is preferred, letting the return traffic be allowed by stateful tracking (conntrack), thus having to care only about the initial traffic. Feel free to try it.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i v-eth1 -j ACCEPTAs a side note kernels >= 4.7 usually require/allow a few more settings to have conntrack helpers (ftp...) to work correctly/securely, but that's not needed for your experiment (ICMP is handled). Some informations in this blog: Secure use of iptables and connection tracking helpers.
In case of doubt (like interaction with Docker) use -I instead to be sure to insert your rules before anything else. Just be aware restarting Docker might alter the rules again. Now you know where the problem is, it's up to you to integrate this along boot and Docker.
You might be interested in reading Docker's documentation about its use of iptables: Docker and iptables.
|
I'm trying to ping an external ip (in this case google) from inside a network namespace.
ip netns add ns1
# Create v-eth1 and v-peer1: v-eth1 is in the host space whereas peer-1 is supposed to be in the ns
ip link add v-eth1 type veth peer name v-peer1
# Move v-peer1 to ns
ip link set v-peer1 netns ns1
# set v-eth1
ip addr add 10.200.1.1/24 dev v-eth1
ip link set v-eth1 up
# Set v-peer1 in the ns
ip netns exec ns1 ip addr add 10.200.1.2/24 dev v-peer1
ip netns exec ns1 ip link set v-peer1 up
# Set loopback interface in the ns
ip netns exec ns1 ip link set lo up
# Add defaut route in the ns
ip netns exec ns1 ip route add default via 10.200.1.1
# Set host routing tables
iptables -t nat -A POSTROUTING -s 10.200.1.0/24 -j MASQUERADE
# Enable routing in the host
sysctl -w net.ipv4.ip_forward=1
#
ip netns exec ns1 ping 8.8.8.8
For some reason this is working fine inside a VM in VirtualBox (on my laptop), and it's working on my desktop (Ubuntu 18.04), but it does not work on my host OS on the laptop (which is also Ubuntu 18.04).
I tried traceroute and this is what I got:
on laptop: (traceroute screenshot)
on desktop: (traceroute screenshot)
Do any of you have any idea what I should investigate in order to find the problem?
I don't have a firewall set as far as I know (ufw is disabled)
EDIT: this is what I get with iptables-save -c:
# Generated by iptables-save v1.6.1 on Fri Jan 17 18:05:36 2020
*filter
:INPUT ACCEPT [3774:2079111]
:FORWARD DROP [5:420]
:OUTPUT ACCEPT [3053:308301]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
[16984:18361691] -A FORWARD -j DOCKER-USER
[16984:18361691] -A FORWARD -j DOCKER-ISOLATION-STAGE-1
[12139:18094316] -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
[0:0] -A FORWARD -o docker0 -j DOCKER
[4761:260319] -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
[0:0] -A FORWARD -i docker0 -o docker0 -j ACCEPT
[0:0] -A FORWARD -o br-6a72e380ece6 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
[0:0] -A FORWARD -o br-6a72e380ece6 -j DOCKER
[0:0] -A FORWARD -i br-6a72e380ece6 ! -o br-6a72e380ece6 -j ACCEPT
[0:0] -A FORWARD -i br-6a72e380ece6 -o br-6a72e380ece6 -j ACCEPT
[4761:260319] -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
[0:0] -A DOCKER-ISOLATION-STAGE-1 -i br-6a72e380ece6 ! -o br-6a72e380ece6 -j DOCKER-ISOLATION-STAGE-2
[16984:18361691] -A DOCKER-ISOLATION-STAGE-1 -j RETURN
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o br-6a72e380ece6 -j DROP
[4761:260319] -A DOCKER-ISOLATION-STAGE-2 -j RETURN
[16984:18361691] -A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Jan 17 18:05:36 2020
# Generated by iptables-save v1.6.1 on Fri Jan 17 18:05:36 2020
*nat
:PREROUTING ACCEPT [406:111092]
:INPUT ACCEPT [9:703]
:OUTPUT ACCEPT [29:2283]
:POSTROUTING ACCEPT [28:2114]
:DOCKER - [0:0]
[253:19770] -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
[0:0] -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
[4:249] -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
[0:0] -A POSTROUTING -s 172.18.0.0/16 ! -o br-6a72e380ece6 -j MASQUERADE
[1:169] -A POSTROUTING -s 10.200.1.0/24 -j MASQUERADE
[0:0] -A DOCKER -i docker0 -j RETURN
[0:0] -A DOCKER -i br-6a72e380ece6 -j RETURN
COMMIT
# Completed on Fri Jan 17 18:05:36 2020
| Unable to ping external network from namespace, probably postrouting not working
You have a few options.
LD_PRELOAD
You could use an LD_PRELOAD library to intercept the bind() system call to force binding to a specific address. One example of that is this, which you compile like this:
gcc -nostartfiles -fpic -shared bind.c -o bind.so -ldl -D_GNU_SOURCE
And use like this:
BIND_ADDR=127.0.0.1 LD_PRELOAD=./bind.so /path/to/myprogram
Network namespaces w/ Docker
You could also elect to run your program inside its own network namespace. The easiest way to do this would be to build a Docker image for your application and then run it under Docker, and use Docker's port mapping capabilities to expose the service on the host ip of your choice.
Here there be dragons
I would strongly recommend one of the above solutions. I only include the following because you asked about network namespaces.
Network namespaces w/ macvlan
If you want to do it without Docker it's possible but a little more work. First, create a new network namespace:
# ip netns add myns
Then create a macvlan interface associated with one of your host interfaces and put it into the namespace:
# ip link add myiface link eth0 type macvlan mode bridge
# ip link set myiface netns myns
And assign it an address on your local network:
# ip netns exec myns \
ip addr add 192.168.0.4/24 dev myiface
# ip netns exec myns \
ip link set myiface up
And create appropriate routing rules inside the namespace (substituting your actual gateway address for 192.168.0.1):
# ip netns exec myns \
ip route add default via 192.168.0.1
Now, run your program inside the network namespace:
# ip netns exec myns \
/path/to/myprogram
Now your program is running and will bind only to 192.168.0.4, because that is the only address visible inside the namespace. But! Be aware of the limitation of macvlan interfaces: while other hosts on your network will be able to connect to the service, you will not be able to connect to that address from the host on which it is running (unless you create another macvlan interface on the host and route connections to 192.168.0.4 via that interface).
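The workaround hinted at in the parenthetical might look like this (a sketch; the shim interface name is made up):
# ip link add myshim link eth0 type macvlan mode bridge
# ip link set myshim up
# ip route add 192.168.0.4/32 dev myshim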
Network namespaces w/ veth interfaces
Instead of using macvlan interfaces, you can create a veth interface pair, with one end of the pair inside a network namespace and the other on your host. You will use ip masquerading to pass packets from the namespace to your local network.
Create the network namespace:
# ip netns add myns
Create an interface pair:
# ip link add myiface-in type veth peer name myiface-out
Assign one end of the pair to your network namespace:
# ip link set myiface-in netns myns
Configure an address on each end of the pair and bring up the links:
# ip addr add 192.168.99.1/24 dev myiface-out
# ip link set myiface-out up
# ip netns exec myns ip addr add 192.168.99.2/24 dev myiface-in
# ip netns exec myns ip link set myiface-in up
Configure ip masquerading on your host. This will redirect incoming packets on 192.168.0.4 to your namespace:
# iptables -t nat -A PREROUTING -d 192.168.0.4 -p tcp --dport 34964 -j DNAT --to-destination 192.168.99.2
# iptables -t nat -A OUTPUT -d 192.168.0.4 -p tcp --dport 34964 -j DNAT --to-destination 192.168.99.2
And this will masquerade outbound packets:
# iptables -t nat -A POSTROUTING -s 192.168.99.2 -j MASQUERADE
You will need to ensure that you have ip forwarding enabled on your host (sysctl -w net.ipv4.ip_forward=1) and that your iptables FORWARD chain permits forwarding the connection (iptables -A FORWARD -d 192.168.99.2 -j ACCEPT, keeping in mind that rules are processed in sequence, so a reject rule before this one will take precedence).
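Collected as commands, a sketch of what that paragraph describes:
# sysctl -w net.ipv4.ip_forward=1
# iptables -A FORWARD -d 192.168.99.2 -j ACCEPT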
|
I have two applications that use the same port for network communication (34964). I have control over the first application (source code), and it uses 192.168.0.4:34964, whereas the other application tries to use/"claim" all IP addresses (0.0.0.0:34964); this one I have no control over. Each application works when running alone; however, when I try to make them run at the same time I get an error: Failed to bind address.
Question
Is there any way to prevent the second application from using/claiming all IP addresses (0.0.0.0) and instead use 192.168.0.5, either before starting it or by encapsulating it in a network namespace?
I have tried nothing and I am all out of ideas...
More detailed version:
The two applications communicate on two separate Profinet networks. The first application acts as a Profinet device and communicates with a Siemens Profinet controller; I have access to this application's source code. The second application should act as a Profinet controller that talks to a Siemens Profinet device; I am currently using Codesys for this and have no access to change the source code.
| Prevent application from using all IPs on port (0.0.0.0:34964) |
No, there is not a way to do that. That would break the very concept behind separation of network namespaces. There is one and only one way to "escape" that separation, and it's veth interfaces.
In a little bit more detail, it wouldn't just be a matter of somehow "sharing" a loopback interface between network namespaces. Each network namespace is logically another copy of the network stack, with it's [sic] own routes, firewall rules, and network devices. In the context of this "sharing", which routing table and firewall rules would apply? You can even have multiple different processes both bound and listening to the same TCP/IP address and port number in different network namespaces, and which one would then pick up the incoming packets? It fundamentally does not work.
|
I'm interested in using a separate namespace for running a VPN client, so that every process I run into that namespace accesses the Internet through the VPN. That part I've managed to accomplish.
However, some programs communicate through the loopback interface (e.g. a daemon I want to talk to the Internet using the VPN and a separate administration interface I want to access through the public IP of my machine) and they cannot see each other.
Is there any way to configure a network namespace to use the same loopback as the global namespace?
| Sharing the loopback interface across network namespaces |
Okay, so it turns out it is not so trivial.
User input when
$ wgsh
wgsh@vultr /$
and piped commands:
$ wgsh <<EOD
echo 1
echo 2
echo 3
EOD
Not easy, but it is doable.
The solution is to have the local socat open the remote shell (a reverse shell). Then drop into the background, whenever piped command input is detected. Finally, submit each piped command to the /dev/ptsN associated with the background socat.
The first problem is that socat always thinks it has two extra arguments whenever you try to background it within a shell script, complaining:
socat[3124] E exactly 2 addresses required (there are 4)The second problem is that executing commands on another /dev/ptsN isn't trivial.
Consequently, the solution is in two parts:Use tmux to background the socat connection.
Use ttyecho to send each piped command to the backgrounded socat.ttyecho is a custom utility by Pratik Sinha, which also has a Rust crate.
The ttyecho command line tool is functionally similar to writevt which was part of console-tools, but as best I can tell development there has stopped, and the last Ubuntu package was for 12.04.
That means you'll very likely have to compile and install your own ttyecho.
There are some additional wrinkles - aren't there always?
Opening a new tty with tmux requires root privileges.
To be able to run:
sudo --validate
tmux ...without the launched process blocking for a password, you need to add to your /etc/sudoers.d/<user>:
Defaults: <user> !tty_ticketsWith all that in place, this should work (with ttyecho in your path)
if [ -t 0 -a $# -eq 0 ]
then
## No piped commands.
## 1. Start interactive shell.
sudo /usr/bin/nsenter --setuid 1000 \
--setgid 1000 \
--net=/var/run/netns/nns-a \
-- \
socat file:$(tty),raw,echo=0 \
tcp:10.10.10.1:2222
else
## Piped commands.
## 1. Setup sudo --validate for new tty sessions:
# Add
# Defaults: <user> !tty_tickets
# to the file (chmod 440): /etc/sudoers.d/<user>
#
sudo --validate ## 2. Start a detached connection to remote shell.
#
tmux new-session \
-d \
-s a_session \
'sudo /usr/bin/nsenter --setuid 1000 --setgid 1000 --net=/var/run/netns/nns-a -- socat file:$(tty),raw,echo=0 tcp:10.10.10.1:2222' ## 3. Capture the socat process ID
#
SOCAT_PID=$(pgrep -u "root" socat) ## 4. Get the /dev/pts of the socat connection
#
DEV_PTS=$(tmux list-panes -t a_session -F '#{pane_tty}') ## 5. Consume all the piped commands
#
while read cmd
do
sudo /usr/bin/nsenter --setuid 1000 \
--setgid 1000 \
--net=/var/run/netns/nns-a \
-- \
ttyexec -n ${DEV_PTS} "${cmd}"
done ## 6. Exit, if not already done.
#
if pgrep -u "root" socat
then
sudo /usr/bin/nsenter --setuid 1000 \
--setgid 1000 \
--net=/var/run/netns/nns-a \
-- \
ttyexec -n ${DEV_PTS} "exit"
fi
fiHope that helps someone?
|
I have a bash script that:does some thing
connects/opens a reverse shell.
does another thingmy-script contents:
#!/usr/bin/env bash# does 'some thing'sudo /usr/bin/nsenter --setuid 1000 --setgid 1000 --net=/var/run/netns/ns-a -- socat file:$(tty),raw,echo=0 tcp:10.10.10.1:2222# does 'another thing'Run interactively from the terminal this script stops and provides the remote shell for a user to interact with.
The use case is to have a single script that:accepts piped input (e.g. HEREDOC style)
when no piped-input is given, present an interactive shell.What I'd like to be able to do is use this script in batch files (piped input) as well as interactively.
The following has me stumped:
my-script <<EOCMDS
echo 1
echo 2
EOCMDS
2020/12/13 21:28:59 socat[28032] E exactly 2 addresses required (there are 4); use option "-h" for helpAppreciate any solutions you might have in mind.
Update:
This is not a question about setting up a remote shell. To avoid doubt, the remote server is set up and listening, ready to offer the (bash) shell on connection. This question concerns the client side only. To further remove doubt, while not relevant to this question, in practice the remote server is not network-namespaced, only the local client.
| Pipe multiple commands to socat reverse shell (network-namespaced) |
The approach with a bridge a veth-pairs will work, but there's a simpler one:
Use a macvlan, see e.g. here or here for some details and discussion.
That is a virtual interface that uses the physical interface (in your case, eth0) as parent (or "master"), is completely transparent to other devices that use the same parent, and can be moved into a network namespace.
You then can assign IPv4 or IPv6 addresses inside the namespace to this interface, and it will work like you had an additional physical network interface that's exclusive to that container.
There are different flavours depending on whether you want your containers to talk to each other or not. Read the documention for details.
And yes, if you want a firewall (iptables), you'll have to do that in each namespace as well.
Incidentally, Docker and other virtualization approaches that use namespaces also use macvlans, so if all you want is apache, nginx and so on, consider using Docker and/or Docker Compose, and it will do all the rest of the work (different filesystems, local DNS, port mapping).
|
I am trying to build a little project using Linux network namespaces, but am a bit overwhelmed by all the linux networking features and containerization-tech available, and thus unsure if i'm approaching this problem the right way.
THE PROBLEM / PROJECT
I currently have a simple Linux box with a single network device (eth0) which has a dozen static public IPv4 addresses assigned to it (1.1.1.1, 1.1.1.2, 1.1.1.3, ..., 1.1.1.12). I wish to create a situation where i have a bunch of (network) namespaces, each of those effectively having its own single exclusive public IPv4 addr/interface.
My goal is to spin up multiple shells, each isolated to their own network namespace (and also pid,ipc,... namespaces). So for example, the 7th shell would use network namespace ns7 which has a single (virtual) ethernet interface which has static ip 1.1.1.7. Within that shell i could (for example) start apache/nginx, let it listen on *:80 which would then serve websites to the public on 1.1.1.7:80. Any traffic directed at port 80 on any of the other IPv4 addresses would never reach ns7, and likewise any traffic directed at 1.1.1.7 should only reach processes in namespace ns7.
The basic idea is that the namespaces are effectively permanent. The namespaces themselves, and the related virtual network devices, will be (re)created and spun up when the system boots.
POTENTIAL SOLUTION (am i on the right track ?)
From what i have been able to piece together without hands-on experimentation, the solution should be something like outlined below, am i on the right track here ?Create and bring up a (virtual) L2 Bridge device br0. Making sure we operate at L2 (and not L3).Change the current ethernet device eth0 configuration, so that it still comes up on boot but doesn't set any IP address anymore (neither static nor DHCP), also assign eth0 to the br0 Bridge.Create some network namespaces ip netns add ns2, ip netns add ns3, ..., ip netns add ns12. I will treat the default/root namespace as if it is ns1.Create multiple virtual ethernet interface pairs net2a~net2b, net3a~net3b, net4a~net4b, ... For each pair, hook up the a-version to the br0 Bridge, and assign the b-version to it's respective namespace.Within each namespace, assign the local veth device (like net2b in ns2) the appropriate IPv4 address info and bring the device up.I perhaps may have to enable IPv4 forwarding (?) (/proc/sys/net/ipv4/ip_forward) and/or enable ARP filtering (?) (/proc/sys/net/ipv4/conf/all/arp_filter).Each namespace would have it's own firewall configuration (as i understand it), so within each namespace run some iptables/nftables shell script that sets some sane defaults and adjust them to local needs.Does this sound like a (roughly) workable plan, to those more familiar with these virtual linux networking devices ?
EXTRA DETAILS (in case they matter)

OS Info: I run CentOS 8, which notably includes kernel 4.18, systemd and SELinux; I use nftables for manual firewall configuration (instead of firewalld).

Background: All the addresses provided (like 1.1.1.1), interface names (like eth0) and such are of course fictional, to some degree for privacy reasons but mostly for brevity's sake; the same goes for my stated intention of just "running a shell in each namespace".

Actual Requirements: There is a bunch of different kinds of software that will be running in these namespaced environments; each namespace will have a unique job and often involve multiple services. One of my main wishes is to completely isolate the IPv4 addresses from each other. Also, many of the target programs are servers/daemons (like my apache httpd example); I want them to be able to bind to the actual public-facing interface/port, instead of, say, binding to a port on a private IPv4 address or a Unix socket and then having software in the root namespace act as reverse-proxy middleware.

Why not just Docker: (TL/DR) It is an option, but I want to custom-tailor something together for fun. (Longer rant) Almost all of these namespaced environments are for personal use: one will run my private mailserver, one a webserver hosting some personal low-traffic sites, one a webserver for some live webdev work, a couple running their own sshd+webstack to act as a free mini-VPS for some hobbyist friends, and some running largely automated processes. I know there is a huge overlap between what I'm trying to do and what common containerization stacks like Docker offer; in fact the system already does most of the things I've described, much of it using Podman (near identical to Docker) and much of the rest just running side by side in the common root namespace. For various reasons I like to cleanly micromanage some of it, which would be a lot easier if I could implement the separation of stuff into multiple permanent namespaces; I have found that the more I try to tune things to my wishes, the more the container software gets in my way. And since the only things container software provides that I actually make use of are Linux kernel features, I felt it a worthy project to ditch that wrapper and figure out how to accomplish these things without it. I also enjoy the educational value that comes with diving deeper into this stuff, as it's not something my day job usually touches on.

IPC: I have no significant reason for the (processes inside the individual) namespaces to communicate over IP with the other namespaces (nor with the default namespace). Though, if I change my mind on that (and assuming my idea so far was mostly correct), I imagine I could just repeat part of the process by setting up an additional L2 bridge with additional veth pairs for each namespace, and assign those private 192.168.x.x-style IPv4 addresses.

VPS / Cloud: The machine in question is not a physical machine but a VPS / virtual server / cloud server, and the eth0 above is the only network device currently present on the machine. This eth0, which in reality is called ens3, worked automatically right away, even before completing the CentOS installation, and is thus already a virtual device itself. lsmod currently reveals that veth and virtio_net are loaded. I think my hosting provider runs this VPS on Qemu. I'm not sure if any of this matters; I imagine it shouldn't. Though I did spend some time looking into whether it was possible to just clone (or create more) interfaces like the current ens3 and then simply assign each of them a single IPv4 address and namespace directly, eliminating the need for a bridge device and veth pairs. Nothing really came out of that search; I ended up assuming this wouldn't be possible without the hosting provider's personnel changing settings at the hypervisor level. And while they are generally helpful, and I imagine potentially open to such actions, it would make my solution less flexible to future changes, so if possible I'd prefer to handle this on the device that I fully control.

IPv6: I have avoided mentioning IPv6 for brevity; I do have a block of IPv6 addresses too and plan to use those in a similar fashion. But I think it's best to get IPv4-only working correctly first, then enable IPv6 and go from there; I can't imagine it'll be too different.

My implementation: I'm not entirely sure yet how to implement it all, once I actually manage to get things working for the first time. I imagine that for the network part of the project, I'll create a network-setup.sh shell script that tests for the presence of all the network-related necessities (like the namespaces, bridge device, veth devices, ...) and then recreates or sets up whatever is missing; then accompany it with a systemd unit file that runs that shell script and is flagged as required by network-online.target. And then another shell script and unit file that run later in the boot process, using unshare or the systemd version of init to actually start the relevant processes. But if anyone has a better idea I'd love to hear it.

Different namespace implementations: One potential complication I can smell coming is that I am under the impression that what util-linux (man unshare) calls a network namespace is not the same as what iproute2 (man ip-netns) means by that same term. In fact, I'm still not sure whether the former is a superset/extension of the latter, or an entirely different, incompatible implementation. This seems to be a recurring problem when reading up on container-related tech.
It's perfectly possible to masquerade or SNAT a device whose IP is not routable to the outside world. And being in a network namespace or not makes no difference.
You conveniently forgot to tell us what exactly you tried, but keep in mind that SNAT and MASQUERADE only work in the POSTROUTING chain of the nat table (while DNAT only works in the PREROUTING chain), a fact which is well documented, and which you can't avoid mentioning explicitly in the iptables commands.
That means SNAT will happen as the last step before the packet leaves the interface, and DNAT will happen as a very early step for packets entering the interface from the outside.
So the usual setup is that a router (host or namespace) NATs IPs that come in from one side, to everything on the other side:
+---------------+
| |
masq'ed IP --<--| eth0 eth1 |--<-- original IP
10.0.0.99 | | 10.0.0.1
+---------------+
Host or Namespace

and you need a corresponding DNAT for incoming connections, so:
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.1/32 -j SNAT --to 10.0.0.99
iptables -t nat -A PREROUTING -i eth0 -d 10.0.0.99/32 -j DNAT --to 10.0.0.1

You didn't say exactly what IPs you want to masquerade as what IPs, but if your main namespace acts as such a router, and you want to mask "RoutableNS", that is 10.5.1.2, to the outside world, then this is doable by using the outgoing interface of your main namespace.
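For your setup, assuming the main namespace's uplink interface is eth0 (a hypothetical name here) and IPv4 forwarding is enabled, masquerading RoutableNS could look like this sketch:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -s 10.5.1.2/32 -j MASQUERADE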
|
I set up a virtual ethernet (veth) pair between default namespace and another namespace named RoutableNS as follow:
-------------- --------------
- veth0 - -------------- veth1 -
- 10.5.1.1 - - 10.5.1.2 -
-------------- --------------
default NS                    RoutableNS

I can ping the outside world in namespace RoutableNS through interface veth1, but it turns out that when I SNAT (or MASQUERADE) incoming traffic to 10.5.1.1 (or 10.5.1.2), nothing arrives at the veth interface.
I tried the same thing with tun devices, and I saw it's not possible to MASQUERADE to a tun device when its IP is not routable to the outside world (in the default namespace).
So I have two questions:

Is this behaviour of SNAT (MASQUERADE) documented somewhere? I mean the behaviour that the new source IPs should be routable to the outside world in the current namespace.
Is there a networking options (sysctls) letting me do this? | SNAT to unroutable interface |
I think your surmise is correct, they are inherited from the parent namespace. This seems similar to how processes clone themselves using the fork() system call, then any desired changes have to be applied by the clone, using the normal system calls. (Including replacing the current program with a completely different one, using exec(). fork()+exec() being how e.g. the shell runs other programs, although this magic is not usually visible to the user).
None of the options to the underlying unshare system call change this. So I'd say the answer to your question is no.
http://man7.org/linux/man-pages/man2/unshare.2.html

Oh... that wasn't even an analogy! Look at the option flags:
This flag has the same effect as the clone(2) CLONE_NEWNET
flag. Unshare the network namespace, so that the calling
process is moved into a new network namespace which is not
shared with any previously existing process. Use of
CLONE_NEWNET requires the CAP_SYS_ADMIN capability.

clone() basically means fork().
call, the glibc fork() wrapper that is provided as part of the NPTL
threading implementation invokes clone(2) with flags that provide the
same effect as the traditional system call. (A call to fork() is
equivalent to a call to clone(2) specifying flags as just SIGCHLD.) |
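In practice this means that if you want different settings inside the new namespace, you apply them yourself right after creating it. A minimal sketch (the namespace name is arbitrary):

ip netns add testns
ip netns exec testns sysctl -w net.ipv4.conf.all.forwarding=0
ip netns exec testns sysctl net.ipv4.conf.all.forwarding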
What are the default kernel parameters, when creating a new network namespace? Is there a way to override them upon creation?
I think they are inherited from the parent process. An example using unshare:
> /sbin/sysctl -a --pattern 'net.ipv4.conf.all.forwarding'
net.ipv4.conf.all.forwarding = 1
> unshare -n
> /sbin/sysctl -a --pattern 'net.ipv4.conf.all.forwarding'
net.ipv4.conf.all.forwarding = 1 | Default kernel parameters on new network namespaces |
I finally found the solution myself, after many, many hours of reading documentation, tutorials and suggestions on various web pages, running lots of experiments, and doing deep and comprehensive network and netfilter monitoring and analysis.
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' ip daddr set 0.0.0.17 notrack accept
nft add rule ip prot1 prerouting iif vprot0 ip saddr 0.0.0.17 notrack accept
nft add chain ip prot1 postrouting '{ type filter hook postrouting priority 100; policy accept; }'
nft add rule ip prot1 postrouting oif enp1s0 ip saddr 0.0.0.17 ip saddr set 0.0.0.6 accept

The netfilter hooks page should be opened and read first to understand the following explanation.
Explanation of the used commands:

1. A netfilter table named prot1 is added for protocol ip (IPv4).
2. A chain named prerouting of type filter is added to table prot1 for the prerouting hook with priority -300. It is important to use a priority number lower than -200 to be able to bypass the connection tracking (conntrack); that excludes the usage of a chain of type nat for the destination network address translation.
3. A filter rule is added to chain prerouting of table prot1 which applies only to IPv4 packets received on input interface enp1s0 with protocol udp and destination port 50404 or 50441. It changes the IP destination address of the packet from 0.0.0.6 to 0.0.0.17 and disables connection tracking for this UDP packet. The verdict accept is specified explicitly, although not strictly necessary, to pass the UDP packet sent by service sva of the application CPU for service sv2 of the communication CPU as fast as possible to the next hook, which in this case is the forward hook.
4. A second filter rule is added to chain prerouting which applies to all IPv4 packets received on input interface vprot0 with source address 0.0.0.17, independently of the protocol type (udp, icmp, ...), and disables connection tracking for these packets. It would of course also be possible to filter just on UDP packets with appropriate source or destination port numbers, but this additional limitation is not needed here, and the rule also covers ICMP packets sent back from 0.0.0.17 to 0.0.0.5 while the destination port is not yet open because service sv2 is not running at the moment. The verdict accept is again specified explicitly, instead of the implicit default continue, to pass the packet as fast as possible to the forward hook.
5. A second chain named postrouting of type filter is added to table prot1 for the postrouting hook with priority 100. It is important to use a chain of type filter and not of type nat to be able to apply a source address translation to the UDP (and ICMP) packets which bypassed the connection tracking.
6. A filter rule is added to chain postrouting which applies only to IPv4 packets sent on output interface enp1s0 with source address 0.0.0.17, independently of the protocol type (udp, icmp, ...). It changes the IP source address of the packet from 0.0.0.17 to 0.0.0.6. The verdict accept is once more specified explicitly, although not strictly necessary, to pass the UDP packet sent by service sv2 of the communication CPU to service sva of the application CPU as fast as possible. This rule also changes the source address to 0.0.0.6 of the ICMP "destination port unreachable" packets sent from 0.0.0.17 while service sv2 is not yet running. So the application CPU never notices that it communicates over two UDP channels with a different interface than 0.0.0.6, which was a second (though not really important) requirement to fulfill.
|
There is the requirement to set up a stateless NAT for two UDP connections from a physical network adapter in the global network namespace, via a linked pair of virtual network adapters, to a service running in a special network namespace. This should be done on a CPU (Intel Atom) in an industrial device running Linux (Debian) with kernel 5.9.7.
Here is a scheme of the network configuration which should be set up:
===================== =====================================================
|| application CPU || || communication CPU ||
|| || || ||
|| || || global namespace | nsprot1 namespace ||
|| || || | ||
|| enp4s0 || || enp1s0 | enp3s0 ||
|| 0.0.0.5/30 ========== 0.0.0.6/30 | 192.168.2.15/24 =======
|| || || | ||
|| UDP port 50001 || || UDP port 50001 for sv1 | TCP port 2404 for sv2 ||
|| UDP port 50002 || || UDP port 50002 for sv1 | ||
|| UDP port 53401 || || UDP port 50401 for sv1 | ||
|| UDP port 53402 || || UDP port 50402 for sv1 | ||
|| || || | ||
|| || || vprot0 | vprot1 ||
|| || || 0.0.0.16/31 --- 0.0.0.17/31 ||
|| || || | ||
|| UDP port 53404 || || UDP port 50404 for sv2 - UDP port 50404 for sv2 ||
|| UDP port 53441 || || UDP port 50441 for sv2 - UDP port 50441 for sv2 ||
===================== =====================================================

The application CPU always starts first and opens several UDP ports for communication with service sv1 and service sv2 on the communication CPU via its physical network adapter enp4s0 with the IP address 0.0.0.5.
The output of ss --ipv4 --all --numeric --processes --udp executed on application CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sva",pid=471,fd=5))
udp UNCONN 0 0 0.0.0.0:50002 0.0.0.0:* users:(("sva",pid=471,fd=6))
udp ESTAB 0 0 0.0.0.5:53401 0.0.0.6:50401 users:(("sva",pid=471,fd=12))
udp ESTAB 0 0 0.0.0.5:53402 0.0.0.6:50402 users:(("sva",pid=471,fd=13))
udp ESTAB 0 0 0.0.0.5:53404 0.0.0.6:50404 users:(("sva",pid=471,fd=19))
udp ESTAB 0 0 0.0.0.5:53441 0.0.0.6:50441 users:(("sva",pid=471,fd=21))

The communication CPU starts second and finally has two services running:

sv1 in the global namespace, and
sv2 in the special network namespace nsprot1.

The output of ss --ipv4 --all --numeric --processes --udp executed in the global namespace of the communication CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sv1",pid=812,fd=18))
udp UNCONN 0 0 0.0.0.6:50002 0.0.0.0:* users:(("sv1",pid=812,fd=17))
udp UNCONN 0 0 0.0.0.6:50401 0.0.0.0:* users:(("sv1",pid=812,fd=13))
udp UNCONN 0 0 0.0.0.6:50402 0.0.0.0:* users:(("sv1",pid=812,fd=15))

The output of ip netns exec nsprot1 ss --ipv4 --all --numeric --processes --udp (namespace nsprot1) is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp ESTAB 0 0 0.0.0.17:50404 0.0.0.5:53404 users:(("sv2",pid=2421,fd=11))
udp ESTAB 0 0 0.0.0.17:50441 0.0.0.5:53441 users:(("sv2",pid=2421,fd=12))

Forwarding for IPv4 is enabled in sysctl, both in general and for all physical network adapters.
Just broadcast and multicast forwarding is disabled as not needed and not wanted.
The network configuration is set up on communication CPU with the following commands:
ip netns add nsprot1
ip link add vprot0 type veth peer name vprot1 netns nsprot1
ip link set dev enp3s0 netns nsprot1
ip address add 0.0.0.16/31 dev vprot0
ip netns exec nsprot1 ip address add 0.0.0.17/31 dev vprot1
ip netns exec nsprot1 ip address add 192.168.2.15/24 dev enp3s0
ip link set dev vprot0 up
ip netns exec nsprot1 ip link set vprot1 up
ip netns exec nsprot1 ip link set enp3s0 up
ip netns exec nsprot1 ip route add 0.0.0.4/30 via 0.0.0.16 dev vprot1

The network address translation is set up with the following commands:
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type nat hook prerouting priority -100; policy accept; }'
nft add rule prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' dnat 0.0.0.17
nft add chain ip prot1 postrouting '{ type nat hook postrouting priority 100; policy accept; }'
nft add rule prot1 postrouting ip saddr 0.0.0.16/31 oif enp1s0 snat 0.0.0.6

The output of nft list table ip prot1 is:
table ip prot1 {
        chain prerouting {
                type nat hook prerouting priority -100; policy accept;
                iif "enp1s0" udp dport { 50404, 50441 } dnat to 0.0.0.17
        }

        chain postrouting {
                type nat hook postrouting priority 100; policy accept;
                ip saddr 0.0.0.16/31 oif "enp1s0" snat to 0.0.0.6
        }
}

Additionally, only the table inet filter is defined in the global namespace:
table inet filter {
        chain input {
                type filter hook input priority 0; policy accept;
        }

        chain forward {
                type filter hook forward priority 0; policy accept;
        }

        chain output {
                type filter hook output priority 0; policy accept;
        }
}

That NAT configuration is a stateful NAT. It works for the UDP channel with the port numbers 50404 and 53404 because sv2, started last, opens 0.0.0.17:50404 and sends a UDP packet to 0.0.0.5:53404, on which the source network address translation is applied in the postrouting hook for enp1s0 in the global namespace. The service sva of the application CPU sends back a UDP packet from 0.0.0.5:53404 to 0.0.0.6:50404 which reaches 0.0.0.17:50404. The UDP packet does not pass the prerouting rule for dnat to 0.0.0.17; it is sent directly via connection tracking to 0.0.0.17, as I found out later.
But this stateful NAT configuration does not work for the UDP channel with the port numbers 50441 and 53441. The reason seems to be that sva on the application CPU already sends several UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 before service sv2 is started at all and the destination port is opened in network namespace nsprot1. An ICMP "destination port unreachable" is returned, which is no surprise given that the destination port is not yet opened at all. It is unfortunately not possible to hold back the UDP packets sent by service sva until service sv2 is started and has opened the two UDP ports. Service sva sends UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 periodically, and sometimes spontaneously when triggered, independent of the connection state.
So the problem with this configuration seems to be the stateful NAT, as the dnat rule in the prerouting hook is still not applied once the destination port is finally opened in network namespace nsprot1. The UDP packets are still routed to 0.0.0.6:50441, which results in the packets being dropped and ICMP "destination port unreachable" being returned.
Therefore the solution might be to use a stateless NAT, so the following commands were executed additionally:
nft add table ip raw
nft add chain ip raw prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip raw prerouting udp dport '{ 50404, 50441, 53404, 53441 }' notrack

But the result was not as expected. The prerouting rule to change the destination address from 0.0.0.6 to 0.0.0.17 for UDP packets from input interface enp1s0 with destination port 50404 or 50441 is still not taken into account.
Next, I executed:
nft add table ip filter
nft add chain filter trace_in '{ type filter hook prerouting priority -301; }'
nft add rule filter trace_in meta nftrace set 1
nft add chain filter trace_out '{ type filter hook postrouting priority 99; }'
nft add rule filter trace_out meta nftrace set 1
nft monitor trace

I looked at the trace and could see that the notrack rule is taken into account, but then the UDP packets with destination port 50441 are passed directly to the input hook. I don't know why.
I studied the following pages very carefully for many, many hours:

nft manual (read several times completely from top to bottom)
nftables wiki (most pages completely)
nftables on ArchWiki
and many, many other web pages regarding the usage of network namespaces and network address translation.

I tried really many different configurations, used Wireshark and nft monitor trace, but I cannot find a solution which works for the UDP channel with the ports 50441 and 53441 when sva sends UDP packets before the destination port 0.0.0.17:50441 is opened at all.
The stateful NAT configuration works if I manually terminate the service sva on the application CPU, set up the network configuration on the communication CPU with the two services sv1 and sv2 started, and finally start the service sva manually again with all UDP ports already opened on the communication CPU. But this start order of the services cannot be used in the industrial device by default. The application service sva must run independently of whether the communication services are ready for communication or not.
Which commands (chains/rules) are necessary to get a stateless NAT for the two UDP channels 0.0.0.5:53404 - 0.0.0.17:50404 and 0.0.0.5:53441 - 0.0.0.17:50441, independent of the open state of the destination ports and of which service sends the first UDP packet to the other?
PS: Depending on the configuration of the device, the service sv2 can also be started in the global namespace using a different physical network adapter, in which case no NAT and no network namespace are necessary. In that network configuration there is absolutely no problem with the UDP communication between the three services.
| How to set up stateless NAT for two UDP connections from a global network to special network namespace? |
In NVM Express and related standards, controllers give access to storage divided into one or more namespaces. Namespaces can be created and deleted via the controller, as long as there is room for them (or the underlying storage supports thin provisioning), and multiple controllers can provide access to a shared namespace. How the underlying storage is organised isn’t specified by the standard, as far as I can tell.
However typical NVMe SSDs can’t be combined, since they each provide their own storage and controller attached to a PCI Express port, and the access point is the controller, above namespaces — thus a namespace can’t group multiple controllers (multiple controllers can provide access to a shared namespace). It’s better to think of namespaces as something akin to SCSI LUNs as used in enterprise storage (SANs etc.).
Namespace numbering starts at 1 because that’s how per-controller namespace identifiers work. Namespaces also have longer, globally-unique identifiers.
Namespaces can be manipulated using the nvme command, which provides support for low-level NVMe features including:

formatting, which performs a low-level format and allows various features to be used (secure erase, LBA format selection...);
attaching and detaching, which allows controllers to be attached to or detached from a namespace (if they support it and the namespace allows it).

Attaching and detaching isn't something you'll come across in laptop or desktop NVMe drives. You'd use it with NVMe storage bays such as those sold by Dell EMC, which replace the iSCSI SANs of the past.
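For illustration, this is roughly what inspecting namespaces looks like with nvme-cli (a sketch; the device names are assumptions for a typical single-drive system):

nvme list
# number of namespaces the controller supports (the "nn" field)
nvme id-ctrl /dev/nvme0 | grep -w nn
# details of an individual namespace, including its LBA formats
nvme id-ns -H /dev/nvme0n1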
See the NVM Express standards for details (they’re relatively easy to read), and this NVM Express tutorial presentation for a good introduction.
|
I've recently begun supporting Linux installed on devices with built-in nvme ssds. I noticed the device files had an extra number, beyond a number identifying the drive number and the partition number. IDE/SATA/SCSI drives normally only have a drive letter and partition number.
For example: /dev/nvme0n1p2
I got to wondering what the n1 part was, and after a bit of searching, it looks like that identifies an nvme 'namespace'. The definitions for it were kind of vague: "An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks."
So, does this act like a partition that is defined at the hardware controller level, and not in an MBR or GPT partition table? Can a namespace span multiple physical nvme ssd's? E.g. can you create a namespace that pools together storage from multiple ssd's into a single logical namespace, similar to RAID 0?
What would you do with an NVME namespace that you can't already achieve using partition tables or LVM or a filesystem that can manage multiple volumes (like ZFS, Btrfs, etc)?
Also, why does it seem like the namespace numbering starts at 1 instead of 0? Is that just something to do with how NVME tracks the namespace numbers at a low level (e.g. partitions also start at 1, not 0, because that is how the standard for partition numbers was set, so the Linux kernel just uses whatever the partition number that is stored on disk is - I guess nvme works the same way?)
| What are nvme namespaces? How do they work? |
The wear level is given by the “Percentage Used” field, which is specified as (page 184):

Percentage Used: Contains a vendor specific estimate of the percentage of NVM subsystem
life used based on the actual usage and the manufacturer’s prediction of NVM life. A value of
100 indicates that the estimated endurance of the NVM in the NVM subsystem has been
consumed, but may not indicate an NVM subsystem failure. The value is allowed to exceed 100. Percentages greater than 254 shall be represented as 255. This value shall be updated
once per power-on hour (when the controller is not in a sleep state). |
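In practice you can read this field directly with nvme-cli (a sketch; the device node and the exact field label may vary between versions):

sudo nvme smart-log /dev/nvme0 | grep -i percentage_used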
I have a laptop with NVMe SSD:
# nvme list

Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 <-CENSORED-> KXG50ZNV512G NVMe TOSHIBA 512GB 1 512.11 GB / 512.11 GB 512 B + 0 B AADA4107

S.M.A.R.T. does not tell me the usual raw values, apart from the ~22 TB written.
# smartctl -a /dev/nvme0n1

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-74-generic] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: KXG50ZNV512G NVMe TOSHIBA 512GB
Serial Number: <-CENSORED->
Firmware Version: AADA4107
PCI Vendor/Subsystem ID: 0x1179
IEEE OUI Identifier: 0x00080d
Total NVM Capacity: 512,110,190,592 [512 GB]
Unallocated NVM Capacity: 0
Controller ID: 0
Number of Namespaces: 1
Namespace 1 Size/Capacity: 512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 00080d 0500023e1d
Local Time is: Thu Jun 3 14:12:35 2021 CEST
Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 78 Celsius
Critical Comp. Temp. Threshold: 82 Celsius
Namespace 1 Features (0x02): NA_Fields

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.00W - - 0 0 0 0 0 0
1 + 2.40W - - 1 1 1 1 0 0
2 + 1.90W - - 2 2 2 2 0 0
3 - 0.0500W - - 3 3 3 3 1500 1500
4 - 0.0030W - - 4 4 4 4 50000 80000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 2
1 - 4096 0 1

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 35 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 9%
Data Units Read: 63,196,994 [32.3 TB]
Data Units Written: 43,370,182 [22.2 TB]
Host Read Commands: 549,038,974
Host Write Commands: 420,271,939
Controller Busy Time: 2,885
Power Cycles: 2,160
Power On Hours: 17,684
Unsafe Shutdowns: 211
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 35 Celsius

Error Information (NVMe Log 0x01, max 128 entries)
No Errors Logged

Also, I just looked at:
# nvme error-log /dev/nvme0n1

and it contains only 64 of these entries:
.................
error_count : 0
sqid : 0
cmdid : 0
status_field : 0(SUCCESS: The command completed successfully)
parm_err_loc : 0
lba : 0
nsid : 0
vs : 0
cs : 0
.................

Question: Is it possible to evaluate the wear level of my SSD? Maybe via the Available Spare tags?
| How to evaluate the wear level of a NVMe SSD? |
You're mounting an ext4 filesystem:
... -t ext4 -o umask=0000
Per the ext4(5) man page, the ext4 filesystem does not have a umask mount option.

"I want the device to have mode=777."

If you need different permissions on files and/or directories, you can set the permissions on the files/directories themselves. See
What are the different ways to set file permissions etc on gnu/linux.
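For instance, once the filesystem is mounted you could open up its root directory (a blunt sketch reusing the question's paths; note that world-writable permissions are rarely a good idea):

sudo mount -U 8da513ec-20ce-4a2d-863d-978b60089ad3 /home/user/nvme0n1
sudo chmod 777 /home/user/nvme0n1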
|
I tried to run
mount /home/user/nvme0n1 -U 8da513ec-20ce-4a2d-863d-978b60089ad3 -t ext4 -o umask=0000
and the response is:

mount: /home/user/nvme0n1: wrong fs type, bad option, bad superblock
on /dev/nvme0n1, missing codepage or helper program, or other error.

However, when I remove the umask option, the SSD is mounted as desired.
What should I do? How can I start debugging the problem? I want the device to have mode=777.
| mount with umask does not work
According to the NVMe base specification 2.0a, the NVMe feature ID for the Host Memory Buffer is 0x0d. You can check it with the nvme get-feature command:
# nvme get-feature /dev/nvme0 -H -f 0x0d
get-feature:0xd (Host Memory Buffer), Current value:0x000001
Memory Return (MR): False
Enable Host Memory (EHM): Enabled
Host Memory Descriptor List Entry Count (HMDLEC): 10
Host Memory Descriptor List Address (HMDLAU): 0x0
Host Memory Descriptor List Address (HMDLAL): 0xffff7000
Host Memory Buffer Size (HSIZE): 9728

You can also find some information under /sys/class/nvme/, in the directory of the respective NVMe controller.
The nvme kernel module also has the max_host_mem_size_mb parameter which you can use to limit the maximum HMB size per controller.
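For example (a sketch; the sysfs path exists once the nvme module is loaded, and the modprobe.d file name is my own choice):

cat /sys/module/nvme/parameters/max_host_mem_size_mb
# persist a lower limit across reboots
echo 'options nvme max_host_mem_size_mb=32' | sudo tee /etc/modprobe.d/nvme-hmb.conf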
Another nvme module parameter, use_cmb_sqes can be used to forbid the use of controller's memory buffer for I/O SQes. Assuming I've understood this correctly, this could be used to make any NVMe work like a DRAM-less one.
You can find the current values for the module parameters at /sys/module/nvme/parameters/, and also change some of them dynamically from there.
|
New DRAM-less NVME SSDs use a portion of the system memory as HMB (Host memory buffer).
How can I check / change NVME HMB on Linux?
(to verify it is working correctly or alter its behavior)
| How to check / change NVME HMB on Linux? |
The code comment within drivers/nvme/host/core.c in Linux kernel source seems to explain it best:
/*
* APST (Autonomous Power State Transition) lets us program a table of power
* state transitions that the controller will perform automatically.
*
* Depending on module params, one of the two supported techniques will be used:
*
* - If the parameters provide explicit timeouts and tolerances, they will be
* used to build a table with up to 2 non-operational states to transition to.
* The default parameter values were selected based on the values used by
* Microsoft's and Intel's NVMe drivers. Yet, since we don't implement dynamic
* regeneration of the APST table in the event of switching between external
* and battery power, the timeouts and tolerances reflect a compromise
* between values used by Microsoft for AC and battery scenarios.
* - If not, we'll configure the table with a simple heuristic: we are willing
* to spend at most 2% of the time transitioning between power states.
* Therefore, when running in any given state, we will enter the next
* lower-power non-operational state after waiting 50 * (enlat + exlat)
* microseconds, as long as that state's exit latency is under the requested
* maximum latency.
*
* We will not autonomously enter any non-operational state for which the total
* latency exceeds ps_max_latency_us.
*
* Users can set ps_max_latency_us to zero to turn off APST.
*/
static int nvme_configure_apst(struct nvme_ctrl *ctrl)So, APST is a feature that allows the NVMe controller (within the NVMe SSD) to switch between power management states autonomously, following configurable rules. The NVMe controller specifies how many microseconds it needs to enter and exit each power-save state; the kernel uses this information to configure the state transition rules within the NVMe controller.What and where is the specific flaw causing the problem?It looks like this particular Kingston NVMe SSD is either way too optimistic in its wake-up time estimates, or fails to wake up at all (without fully resetting the controller) after entering a deep enough power saving state. When given the permission to use APST, it apparently goes into some power saving state and then fails to return to operational state within the specified time, which makes the kernel unhappy.What does the workaround change to prevent the presentation of the flaw?It tells the maximum allowed time for waking up from APST power management states is exactly 0 microseconds, which causes the APST feature to be disabled.What functionality or other desired effect is lost due to such a workaround?If the NVMe controller's autonomous power management feature cannot be used, the controller will only be allowed to enter power-saving states when specifically requested by the kernel. This means the power savings most likely won't be as great as with APST in use.And especially, what is required to be fixed, the kernel, the storage-media firmware, the system firmware (i.e. UEFI/BIOS), or some other component, for users to experience a proper a resolution?The optimal fix would be for Kingston to provide a NVMe disk firmware update that either makes the APST power management work correctly, or at minimum, makes the drive not promise something it cannot deliver, i.e. not announce APST modes with overly-optimistic transition times, and/or not announce at all any APST modes that will cause the controller to fail if used.
If it turns out the problem can be avoided by e.g. programming APST to avoid the deepest power-saving state completely, it might be possible to create a more specific kernel-level workaround. Many device drivers in the Linux kernel have "quirk tables" specifying workarounds for specific hardware models. In the case of NVMe, you can find one in drivers/nvme/host/pci.c within Linux kernel source:
static const struct pci_device_id nvme_id_table[] = {
{ PCI_VDEVICE(INTEL, 0x0953), /* Intel 750/P3500/P3600/P3700 */
.driver_data = NVME_QUIRK_STRIPE_SIZE |
NVME_QUIRK_DEALLOCATE_ZEROES, },
{ PCI_VDEVICE(INTEL, 0x0a53), /* Intel P3520 */
.driver_data = NVME_QUIRK_STRIPE_SIZE |
NVME_QUIRK_DEALLOCATE_ZEROES, },
{ PCI_VDEVICE(INTEL, 0x0a54), /* Intel P4500/P4600 */
.driver_data = NVME_QUIRK_STRIPE_SIZE |
NVME_QUIRK_DEALLOCATE_ZEROES |
NVME_QUIRK_IGNORE_DEV_SUBNQN, },
{ PCI_VDEVICE(INTEL, 0x0a55), /* Dell Express Flash P4600 */
.driver_data = NVME_QUIRK_STRIPE_SIZE |
NVME_QUIRK_DEALLOCATE_ZEROES, },
{ PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */
.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
NVME_QUIRK_MEDIUM_PRIO_SQ |
NVME_QUIRK_NO_TEMP_THRESH_CHANGE |
NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */
.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
{ PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
.driver_data = NVME_QUIRK_IDENTIFY_CNS |
NVME_QUIRK_DISABLE_WRITE_ZEROES |
NVME_QUIRK_BOGUS_NID, },
{ PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
.driver_data = NVME_QUIRK_BOGUS_NID, },
[...]Here the various NVME_QUIRK_ settings trigger various pieces of workaround code within the driver.
Note that there already exists a quirk setting named NVME_QUIRK_NO_DEEPEST_PS which prevents state transitions to the deepest power management state. If the APST problem of your Kingston NVMe turns out to have the same workaround as already implemented for Intel 600P/P3100 and ADATA SX8200PNP, then all it would take is writing a new quirk table entry like this (replacing the things within <angle brackets> with appropriate values, you can get them with lspci -nn):
{ PCI_DEVICE(<PCI vendor ID>, <PCI product ID of the SSD>), /* <specify make/model of SSD here> */
.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },and recompiling the kernel with this modification.
Obviously, someone who actually has this exact SSD model is needed to test this. If you happen to be familiar with C programming basics and how to compile custom kernels, this could be your chance to get your name to the long list of Linux kernel contributors! If you are interested, you should probably read kernelnewbies.org for more details.
The kernel programming is not always deeply intricate: there are lot of simple parts that just need a person with the right kind of hardware and some basic programming knowledge. I've submitted a few minor patches just like this.
If setting the NVME_QUIRK_NO_DEEPEST_PS turns out not to fix the problem, then implementing a new quirk might be needed. That could be more complicated, and might require some experimentation or ideally information from Kingston to find out what exactly needs to be done to avoid this problem, and perhaps discussion with the Linux NVMe driver maintainer on the best way to implement it.
|
I have experienced an issue nearly identical to one described in the askubuntu community.
Like that of the user who posted this issue, my system features a Kingston NVME disk, and as with that user,
my issue was resolved by adding the following kernel option in the grub menu: nvme_core.default_ps_max_latency_us=0.
The user's stated resolution begins as follows:

"The problem was of a SSD features, the Autonomous Power State Transitions(APST) was causing the freezes. To mitigate it, until they will release the fix, include the line nvme_core.default_ps_max_latency_us=0
in the GRUB_CMDLINE_LINUX_DEFAULT options."

Although helpful, this comment leaves several questions open, including the following:

What and where is the specific flaw causing the problem?
What does the workaround change to prevent the presentation of the flaw?
What functionality or other desired effect is lost due to such a workaround?
And especially, what is required to be fixed, the kernel, the storage-media firmware, the system firmware (i.e. UEFI/BIOS), or some other component, to provide a proper resolution?

Any comments are helpful in attempting to resolve all or part of this confusion.
| clarifying nvme apst problems for linux |
The culprit is sequential access. NVMe drives only show their full performance under many simultaneous requests. So a "cp" will simply result in one [sequential] read, as do dd and hdparm.
If you use tricks like "parallel" to create a cp process per file, the total throughput becomes a lot higher.
Windows' Explorer seems to do just that even for big files (copying several segments in parallel - at least I guess so).
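One way to see this yourself is with fio, which can keep many requests in flight (a sketch, assuming the fio package is installed; adjust the device path, and note that --readonly keeps the run safe):

fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based --readonly

With a queue depth of 32 instead of dd's single outstanding request, the reported bandwidth should be much closer to the advertised figures.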
|
Why is my sequential read speed so (comparatively) slow?
While CrystalDiskMark on Win10 reports around 5GB/s (for reading as well as writing), I just do not get close to that performance on Linux.
(A copy&paste of several hundreds of GBs on Windows from/to the same drive averaged around 2,5GB/s, so I do not think CDM is far off from real values here.)
A simple
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=10k
reports a mere 1.5GB/s.
On another NVME (both being Corsair Force MP600 1TB) dd reports 1.4 GB/s.
I would expect that such a sequential access is the best-case for reading from any storage device, so I really have no clue on what is going on here.
(I saw some similar questions on StackExchange, but they all went into different directions than this 'simple one'.)
Note aside: CrystalDiskMark uses 'real files' if I'm not mistaken - so it has even additional file system overhead, whereas my dd call should be the best one could possibly get - or not?
System info:

both NVMe drives are connected with 4 PCIe 4.0 lanes
temperature of both NVMEs < 60°C
the faster one is also mounted as root, the slower one was unmounted
Zen2 Threadripper (so more than enough PCIe 4.0 lanes..)
Kernel 5.6.4
BIOS up2date
NVME firmware up2date

Any ideas or pointers in the right direction would be greatly appreciated!
| NVME SSD performance slow on Linux |
A namespace can have a different size and capacity thanks to thin provisioning. The namespace’s size is the total size of the namespace (in logical blocks). The namespace’s capacity is the maximum number of logical blocks which can really be allocated in the namespace. So you can create a namespace which is larger than your real capacity.
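As a hedged illustration with nvme-cli (hypothetical block counts; this only works on controllers that implement namespace management, and thin provisioning is needed for a capacity smaller than the size):

# size (--nsze) larger than capacity (--ncap): a thin-provisioned namespace
nvme create-ns /dev/nvme0 --nsze=10485760 --ncap=2097152 --flbas=0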
This isn’t useful on a single NVMe SSD; it’s the sort of feature which only makes sense in SAN-style deployments. See the NVMe specifications for details.
For a typical SSD I wouldn’t expect you to need to do anything related to namespaces. It should come with a pre-existing namespace, and you should be able to use that directly, without even being aware that NVMe supports namespaces.
|
Using nvme-create-ns, we can assign namespace size and namespace capacity; what's the difference?
Is it necessary to do this before using a NVMe SSD?
| What's the difference between namespace size and namespace capacity? |
I encountered this issue today on my Lenovo Yoga 730 with an ADATA NVMe 512G drive. I had errors when running mkfs.ext4, but it did complete. Once I tried to mount the partition I received the same error as described.
I tried the May 2019 Arch release and did not have the problem. Seems the issue was introduced with the June 2019 release. Using the May 2019 ISO, I was able to successfully install Arch on my NVMe drive. The kernel version after install is 5.2.4-arch1-1-ARCH.
|
I'm trying to install Arch on a Dell XPS 15 9560.
I've used nomodeset to make the text legible (otherwise it's tiny on the integrated 4k monitor) and pcie_aspm=off to stop the slew of pci bus errors as per a suggestion on the device's Arch Wiki page.
However, when I try to mount the drive I get a slew of errors (continuing forever):
print_req_error: operation not supported error, dev nvme0n1, sector {secnum} flags 9
Where the secnum is gradually increasing, presumably it's going through and trying to do the mount starting at every block but I digress.
Any ideas on how to fix this? I've tried secure erasing the SSD to account for any bugs there, but nothing.

The dmesg log can be found here. Please note, I did not include the above kernel flags whilst obtaining this log.
The exact kernel version found using uname -r is: 5.1.15-arch1-1-ARCH. This is the one included in ISO archlinux-2019.07.01-x86_64.iso.
The nvme command suggested in the comments does not seem to exist on the ISO so I have been unable to ascertain the exact SSD model present in the system at this time. Although the device code listed in the dmesg indicates it's probably this one.
The output of journalctl -k -o short-monotonic is here.
| Operation Not Supported Error Mounting NVME Drive on Arch Install |
OK, I found 2 alternatives.
Getting a precompiled binary that works on CentOS 7
Even though their packages page only offers Smartmontools 6.2 for CentOS 7, their SVN builds page offers binaries that do work on CentOS.
The proper archive has a .linux suffix; for example, I chose:

smartmontools-6.6-0-20170503-r4430.linux-x86_64.tar.gz
Using the nvme command-line tool
CentOS 7 ships with an nvme command (the yum package is named nvme-cli).
It can list the NVMe drives:
# nvme list

And can read SMART info:
# nvme smart-log /dev/nvme0

And additional SMART info (not sure why it's split):
# nvme smart-log-add /dev/nvme0 |
I've just set up CentOS 7 on a server with NVMe drives, and was suprised not to be able to run smartctl on them:
# smartctl -a /dev/nvme0
/dev/nvme0: Unable to detect device type
Please specify device type with the -d option.

# smartctl -a /dev/nvme0 -d nvme
/dev/nvme0: Unknown device type 'nvme'Then I noticed that CentOS ships with Smartmontools version 6.2, whereas Smartmontools supports NVMe starting from version 6.5.
How can I upgrade Smartmontools to version 6.5 on CentOS 7?
Their download page only offers Smartmontools 6.2 for CentOS 7.
Ideally, I don't want to compile from source, I would prefer a RPM, or better, a third-party repo that would include the latest Smartmontools, to get regular updates.
Alternative
I'm also open to suggestions if you know another tool, preferably included in CentOS 7, that could allow me to get SMART info from an NVMe drive.
| Smartmontools with NVMe support on CentOS 7 |
Use the -H option with the command to get the results in a human-readable format. It should look about like this:
# nvme id-ns -H /dev/nvme0n1
...
LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 1 Better

The prefix 0x is a common indicator that the following number is in hexadecimal, so the actual numbers after the rp are simply 2 and 1 respectively.
The SSD performance is affected by so-called write amplification as a SSD erase block is usually larger than a common filesystem 512-byte block, so in order to re-write one block, the SSD must erase and re-write the entire erase block (encompassing multiple filesystem blocks) each time. If the block size seen by the filesystem matches the erase block size of the SSD, this can be avoided. (It can also be minimized but perhaps not fully eliminated if the operating system is aware of the erase block size and will use a write caching strategy that groups writes to larger contiguous chunks to compensate.)
So with SSDs, configuring them for a larger-than-classic block size will usually improve performance. However, there are a lot of (old) operating systems and software that cannot yet take advantage of the possibility to use larger block sizes, so some SSDs are just optimized to deal with the 512-byte block size with internal buffering as best as they can.
However, there are a lot of other factors that can affect the performance of a disk or SSD, so some manufacturers apparently want to avoid claiming that a particular block size would surely be the "best" for all possible situations. And so, the rp value of 0 might not be used at all by some SSDs.
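If you decide to switch the drive to its 4096-byte format, that can be done with nvme-cli; note that this is a low-level format that destroys all data on the namespace (a sketch; lbaf index 1 corresponds to the 4k format in the output above):

sudo nvme format /dev/nvme0n1 --lbaf=1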
|
To optimize performance of an SSD, the Arch wiki says to run nvme id-ns /dev/nvme0n1 and evaluate the output, specifically of the last lines starting with lbaf. If there's more than one lbaf entry, then the drive supports more than one sector size option. The most pertinent information from the Arch wiki here is,The rp (Relative Performance) value indicates which format will
provide the best performance, with 0 being the best.

My NVMe SSD does have two lbaf entries, but it's unclear which one is more optimal. Here's the relevant output of the above nvme command on my system:
lbaf 0 : ms:0 lbads:9 **rp**:0x2 (in use)
lbaf 1 : ms:0 lbads:12 **rp**:0x1

So both options display an rp starting with 0. How am I to understand the significance of x2 and x1 at the ends?
| How to understand the output of the nvme command? |
If you go to Sabrent's download page for your SSD, you'll find a package named "SSC software" - that is a Sector Size Converter.
With it, you can switch the block size presented to the system by the SSD to either 512 or 4096 bytes, but the switching process will destroy all data currently stored on the SSD.
To view the system's current idea of the block size, run lsblk -t. For a true 512-byte storage device (as far as the kernel knows), you should see PHY-SEC, LOG-SEC and MIN-IO all at the value of 512.
For a 512e device, you'll see MIN-IO and PHY-SEC as 4096 and LOG-SEC at 512, indicating that the system knows the device will perform optimally if accessed in chunks of 4k bytes, even if it is currently emulating a classic 512-byte block size.
And for a true 4k device, all the three values should be at 4096.
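For instance (a quick sketch; the device name is an assumption):

lsblk -t /dev/nvme0n1
# or query the values individually via sysfs
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size
cat /sys/block/nvme0n1/queue/minimum_io_size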
|
I bought a new NVMe SSD (SB-ROCKET-256) and installed Arch using gdisk for partitioning. In theory, this SSD doesn't support 512e, and I think the physical size should be 4096; am I wrong? How do I set it right? The partition table is the following:
$ parted --align optimal /dev/nvme0n1
GNU Parted 3.2
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown (unknown)
Disk /dev/nvme0n1: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: Number Start End Size File system Name Flags
1 1049kB 273MB 272MB fat32 EFI System boot, esp
2 274MB 64.7GB 64.4GB ext4 Linux x86-64 root (/)
3 64.7GB 69.0GB 4295MB linux-swap(v1) Linux swap
4 69.0GB 256GB 187GB ext4 Linux /home

smartctl output:
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-5.2.11-1-MANJARO] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: Sabrent
Serial Number: 296E0797013700062530
Firmware Version: ECFM12.3
PCI Vendor/Subsystem ID: 0x1987
IEEE OUI Identifier: 0x6479a7
Total NVM Capacity: 256,060,514,304 [256 GB]
Unallocated NVM Capacity: 0
Controller ID: 1
Number of Namespaces: 1
Namespace 1 Size/Capacity: 256,060,514,304 [256 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 6479a7 2223093330
Local Time is: Sat Oct 5 14:51:26 2019 CEST

From what I understand, the sector size is set automatically and it should be optimal. Is this optimal?
| Wrong sector size in NVMe |
Your Ubuntu is running inside a kvm virtual machine with AMD-Vi so it should not be running fstrim.
The fstrim service runs on a timer so as root:
rm /var/lib/systemd/timers/stamp-fstrim.timer
systemctl stop fstrim.service fstrim.timer
systemctl disable fstrim.service fstrim.timer
systemctl mask fstrim.service fstrim.timer |
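To verify that nothing will re-trigger it, you can check that no fstrim timer remains scheduled (a quick sketch):

systemctl list-timers --all | grep -i fstrim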
I have a Linux (Ubuntu 18, kernel 4.15) desktop booting from an M2 nvme disk.
Once a week, it will crash around midnight. The relevant log file output from /var/log/syslog.* is below:

Jul 16 00:00:00 rabbitcruncher systemd[1]: Starting Discard unused blocks...
Jul 16 00:00:00 rabbitcruncher kernel: [559644.954267] nvme 0000:41:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0014 address=0x0000000000000000 flags=0x0000]
Jul 16 00:00:00 rabbitcruncher kernel: [559644.975805] nvme nvme0: async event result 00010300
Jul 16 00:00:30 rabbitcruncher kernel: [559675.338834] nvme nvme0: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x1010
Jul 16 00:00:31 rabbitcruncher kernel: [559675.621182] nvme 0000:41:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0014 address=0x0000000000000000 flags=0x0000]
Jul 16 00:01:01 rabbitcruncher kernel: [559706.346300] nvme nvme0: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x1010
Jul 16 00:01:01 rabbitcruncher kernel: [559706.378641] nvme nvme0: Identify namespace failed
Jul 16 13:39:24 rabbitcruncher systemd-fsck[962]: /dev/nvme0n1p1: 12 files, 1186/130812 clusters
Jul 16 13:39:24 rabbitcruncher kernel: [ 1.052853] nvme nvme0: pci function 0000:41:00.0
Jul 16 13:39:24 rabbitcruncher kernel: [ 1.285806] nvme0n1: p1 p2
Jul 16 13:39:24 rabbitcruncher kernel: [ 5.036910] EXT4-fs (nvme0n1p2): mounted filesystem with ordered data mode. Opts: (null)
Jul 16 13:39:24 rabbitcruncher kernel: [ 5.318742] EXT4-fs (nvme0n1p2): re-mounted. Opts: errors=remount-ro

I understand the "Discard unused blocks" means that Linux is trying to run fstrim. However, I have disabled fstrim using systemctl, but it still happens!

systemctl status fstrim.service
● fstrim.service - Discard unused blocks
Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
Active: inactive (dead)

I'm at a loss for what to do to fix this problem. Could anyone offer advice?
| nvme fstrim causing crash on linux, disabling with systemctl doesn't help |
Use cfdisk to create a GPT partition table like this:
# fdisk /dev/nvme0n1 -l
Disk /dev/nvme0n1: 953,87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SAMSUNG MZVLB1T0HALR-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BD545B1F-C8D2-4145-B2C9-379506C67728

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 4095 2048 1M BIOS boot
/dev/nvme0n1p2 4096 1028095 1024000 500M Linux RAID
/dev/nvme0n1p3 1028096 2000408575 1999380480 953,4G Linux RAID

The same on nvme1n1.
Then create the raid with mdadm. I already have two raids on the HDDs.
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
523264 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
7813368128 blocks super 1.2 [2/2] [UU]
bitmap: 14/59 pages [56KB], 65536KB chunk

To create the new raids md3 and md4:
mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
mdadm --create --verbose /dev/md4 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

md3 is reserved for /boot, so create a new physical volume on md4:
pvcreate /dev/md4

and a new volume group with:
vgcreate vg1 /dev/md4
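From there you would typically carve out logical volumes and make sure the arrays assemble on boot. A hedged follow-up sketch (the volume name and size are my own choices; the config paths are Debian-style):

lvcreate -L 50G -n root vg1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u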
I want to create a software raid on two identical SSDs
How do I create partitions and format them optimal?
lsblk

nvme0n1 259:0 0 953.9G 0 disk
nvme1n1 259:1 0 953.9G 0 diskI probably have to use fdisk or parted to create partitions. What options do I need? Is this enough?
parted /dev/nvme0n1

(parted) mkpart primary ext4 0% 100%
(parted) set 1 raid onThe disks both have 1tb and
I also need a small boot partition with 500mb
| How to format NVMe drive in Linux suitable for raid 1 and lvm on xen Host |
While it's not entirely impossible to change the order, doing so won't solve any problem and will only create more, so you shouldn't do it.
Device names are assigned on a first come, first serve basis, which means the order can change anytime and you should not rely on it at all. Stick to (PART-) UUID/LABEL, one of the symlinks in /dev/disk/by-*/*, or alternatively LVM device names (if you're using LVM).
So this is just for fun (tested in qemu with emulated nvme drives, not tested on real hardware).Original detected order:
# grep nvme /proc/partitions
259 0 16777216 nvme0n1
259 1 33554432 nvme1n1
259 2 67108864 nvme2n1Changing order by unbinding, then binding in the desired order. Doing this removes the NVMe device from the system entirely and re-detects them from scratch. So it can only be done from initramfs, or when the device is not in use at all.
# ls /sys/bus/pci/drivers/nvme/
0000:00:04.0 0000:00:05.0 0000:00:06.0 bind [...] unbind
# cd /sys/bus/pci/drivers/nvme/
# echo 0000:00:04.0 > unbind
# echo 0000:00:05.0 > unbind
# echo 0000:00:06.0 > unbind
# echo 0000:00:06.0 > bind
# echo 0000:00:04.0 > bind
# echo 0000:00:05.0 > bindNew order (nvme2 » nvme0, nvme0 » nvme1, nvme1 » nvme2):
# grep nvme /proc/partitions
259 0 67108864 nvme0n1
259 1 16777216 nvme1n1
259 2 33554432 nvme2n1Normally this is not practical to do for any reason. That said I've used this method before on an embedded device that did not detect by itself when a microsd card was removed or changed.
So it might be possible it could help with NVMe in some situations (like when recovering a failing card) but I haven't had such a case yet so this is just in theory.
|
Is it possible to swap the logical device names of two NVMe SSD drives installed in a laptop (Lenovo Legion 5 Pro 2022) without physically swapping their port positions?
I would like the current /dev/nvme0n1 to become /dev/nvme1n1 and vice versa.
If it's possible, how do I do this?
My OS is Ubuntu 22.04 LTS.
| Swap logical device names of two NVME SSD drives |
The PCIe NVMe SSDs I've seen are either not bootable at all, or only bootable using UEFI.
If you're using legacy BIOS-style boot, and a PCIe SSD does not appear as a bootable device, it's a pretty good clue that the PCIe SSD does not support legacy-style booting.
If you can get to the bootloader, but fail to start the OS, then the problem is a missing driver; but if you cannot even get to the bootloader, the problem is that the system firmware (BIOS or UEFI) does not support that device as a bootable disk.
UEFI-style boot requires a GPT partition table and an EFI System Partition (ESP), so a straight clone of partitions from a MBR-partitioned disk to a GPT-partitioned one isn't enough. But if you can add the ESP and then replace the bootloader, e.g. from a traditional BIOS-based GRUB to an UEFI version of GRUB, that might be enough to get an existing Linux/Unix installation cloned & converted from legacy to UEFI boot.
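A rough sketch of such a conversion with sgdisk and GRUB; the device name and ESP size are illustrative, back up the disk first, and note that an in-place conversion can fail if there is no free space for the GPT structures:

sgdisk -g /dev/sdc                        # convert the MBR table to GPT in place
sgdisk -n 0:0:+512M -t 0:ef00 /dev/sdc    # add an EFI System Partition
mkfs.vfat -F 32 /dev/sdcN                 # N = the new partition's number
# then, from a chroot into the cloned system with the ESP mounted at /boot/efi:
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub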
|
Test 1:
dd if=/dev/sdb of=/dev/sdc

/dev/sdb is a bootable OS HDD and /dev/sdc is another HDD; after executing the above dd command, /dev/sdc becomes bootable.
Test 2:
dd if=/dev/sdb of=/dev/sdc

/dev/sdb is a bootable OS HDD and /dev/sdc is a PCIe NVMe SSD; after executing the above command, /dev/sdc cannot boot.
Similar issue:
dd copy a HDD to USB but fail to boot?
In the above case, the OS needs the usb-storage driver installed into the initramfs; does any driver need to be installed for an NVMe SSD?
| dd copy from an OS HDD to PCIe NVME SSD, SSD can't boot |
You can temporarily enable the three available schedulers via:

sudo modprobe bfq
sudo modprobe mq-deadline
sudo modprobe kyber-iosched

You can see the available modules in /lib/modules/<your kernel>/kernel/block.
To enable these modules on boot, add the following lines to /etc/modules-load.d/modules.conf (or create another .conf file in the same directory):
bfq
mq-deadline
kyber-iosched |
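Note that loading a module only makes the scheduler available; to actually select it for a device, write its name to the queue's scheduler file (nvme0n1 here is just an example device):

echo kyber | sudo tee /sys/block/nvme0n1/queue/scheduler
cat /sys/block/nvme0n1/queue/scheduler    # the active scheduler is shown in brackets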
I was wondering how to enable Kyber scheduler in Ubuntu 17.10, which has kernel 4.13 by default. I got bfq enabled using the instructions from How to enable and use the BFQ scheduler?. When I navigate to my NVMe drive, I am seeing only bfq.
cat /sys/block/nvme0n1/queue/scheduler
[noop] bfq | How to enable Kyber scheduler in Ubuntu 17.10 kernel 4.13? |
Try installing the nvme-cli package with
apt-get install nvme-cli
and then retrieve the errors using
nvme error-log /dev/nvme0
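If the log is long, you can limit how many entries are printed; a hedged example, since flag spellings can vary between nvme-cli versions:

nvme error-log /dev/nvme0 --log-entries=16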
|
My daily driver (Debian Bookworm RC3 + KDE Plasma) is configured to send me emails containing error notifications.
Today, I received the following email:
This message was generated by the smartd daemon running on: host name: desk
DNS domain: local.lan

The following warning/error was logged by the smartd daemon:

Device: /dev/nvme0, number of Error Log entries increased from 1754 to 1758

Device info:
KBG30ZMV256G TOSHIBA, S/N:X8OPD1PGP12P, FW:ADHA0101

For details see host's SYSLOG.

You can also use the smartctl utility for further investigation.
The original message about this issue was sent at Wed May 17 16:09:04 2023 EDT
Another message will be sent in 24 hours if the problem persists.

This is what sudo journalctl -t smart shows:
May 20 15:19:47 desk smartd[550]: smartd 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-9-amd64] (local build)
May 20 15:19:47 desk smartd[550]: Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
May 20 15:19:47 desk smartd[550]: Opened configuration file /etc/smartd.conf
May 20 15:19:47 desk smartd[550]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
May 20 15:19:47 desk smartd[550]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
May 20 15:19:47 desk smartd[550]: Device: /dev/sda, type changed from 'scsi' to 'sat'
May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], opened
May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], CT4000MX500SSD1, S/N:2304E6A3D318, WWN:5-00a075-1e6a3d318, FW:M3CR045, 4.00 TB
May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], not found in smartd database 7.3/5319.
May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
May 20 15:19:47 desk smartd[550]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A3D318.ata.state
May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, opened
May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, KBG30ZMV256G TOSHIBA, S/N:X8OPD1PGP12P, FW:ADHA0101
May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
May 20 15:19:47 desk smartd[550]: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.KBG30ZMV256G_TOSHIBA-X8OPD1PGP12P.nvme.state
May 20 15:19:47 desk smartd[550]: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices
May 20 15:19:48 desk smartd[550]: Device: /dev/nvme0, number of Error Log entries increased from 1754 to 1758
May 20 15:19:48 desk smartd[550]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
May 20 15:19:48 desk smartd[550]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
May 20 15:19:48 desk smartd[550]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A3D318.ata.state
May 20 15:19:48 desk smartd[550]: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.KBG30ZMV256G_TOSHIBA-X8OPD1PGP12P.nvme.state
May 20 15:49:48 desk smartd[550]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 73 to 74
May 20 22:49:48 desk smartd[550]: Device: /dev/nvme0, number of Error Log entries increased from 1758 to 1760

When I run sudo smartctl -i -a /dev/nvme0, it shows me the error count, but I can't figure out how to see the log messages associated with the increased count:
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-9-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: KBG30ZMV256G TOSHIBA
Serial Number: X8OPD1PGP12P
Firmware Version: ADHA0101
PCI Vendor/Subsystem ID: 0x1179
IEEE OUI Identifier: 0x00080d
Controller ID: 0
NVMe Version: 1.2.1
Number of Namespaces: 1
Namespace 1 Size/Capacity: 256,060,514,304 [256 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 00080d 04004ad9aa
Local Time is: Sat May 20 23:09:32 2023 EDT
Firmware Updates (0x12): 1 Slot, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x0017): Comp Wr_Unc DS_Mngmt Sav/Sel_Feat
Log Page Attributes (0x02): Cmd_Eff_Lg
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 82 Celsius
Critical Comp. Temp. Threshold: 85 Celsius

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 3.30W - - 0 0 0 0 0 0
1 + 2.70W - - 1 1 1 1 0 0
2 + 2.30W - - 2 2 2 2 0 0
3 - 0.0500W - - 4 4 4 4 8000 32000
4 - 0.0050W - - 4 4 4 4 8000 40000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 - 4096 0 0
1 + 512 0 3

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 32 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 30%
Data Units Read: 23,188,612 [11.8 TB]
Data Units Written: 39,727,036 [20.3 TB]
Host Read Commands: 222,771,983
Host Write Commands: 498,052,687
Controller Busy Time: 7,440
Power Cycles: 291
Power On Hours: 20,378
Unsafe Shutdowns: 615
Media and Data Integrity Errors: 0
Error Information Log Entries: 1,760
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 32 Celsius

Error Information (NVMe Log 0x01, 16 of 64 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS
0 1760 0 0x501a 0xc005 0x028 - 1 -
1 1759 0 0xb012 0xc005 0x028 - 1 -
2 1758 0 0x5010 0xc005 0x028 - 0 -

How can I figure out what the errors are?
| How can I view the smart logs for an NVMe disk in Linux when smartclt is showing there are errors? |
TL;DR:
when using the Ubuntu-built GRUB boot loader, the UEFI boot entry MUST be named ubuntu (or at least, the GRUB EFI configuration file must be at EFI/ubuntu/grub.cfg).
Details:
The problem is with the GRUB setup in UEFI, Secure Boot (I assume) and how Ubuntu has this all set up: in legacy BIOS boot, the BIOS boots a tiny (512 bytes, I believe) executable that either immediately boots something else (in ye olde times) or finds a hardcoded section on the disk where the rest of the boot loader resides (a "stage 2 boot loader"). There are many problems with that setup, not least that it can't work with advanced file systems that move things around, compress them, etc. That is why, in the past, if you wanted to run root on an advanced file system, you had to have a separate /boot partition in ext2, where the GRUB second stage lived.
With UEFI, there's a (comparatively) large FAT32 partition (the ESP) that operating system can deploy large boot loaders into and the UEFI firmware will load the entire boot loader at once. The way Ubuntu uses that feature is to deploy the entire GRUB boot loader as a single EFI executable file called grubx64.efi. This file gets executed by the UEFI firmware - either directly or through a shim (called shimx64.efi) for a Microsoft CA only Secure Boot system. This GRUB installation knows to load a small configuration file - grub.cfg from the EFI partition that contains instructions on where to find the full configuration file. Because you have the full GRUB boot loader available immediately, you can have BTRFS or ZFS drivers there that can read advanced file systems and you don't need a separate boot partition.
The problem is that all these file paths (up until after the grub.cfg file is loaded) are compiled into the GRUB EFI executable and are not configurable - and because this whole thing needs to be signed then you can't update this configuration during installation (unless you want to setup personal keys in the machine's TPM and start recompiling boot loaders). As a result the Ubuntu GRUB installation (what KDE Neon, which is based on Ubuntu LTS, is using) needs the EFI boot loader directory - where the grub.cfg is loaded from - to be EFI/ubuntu/grub.cfg, it can't be anything else because that path was compiled into the grubx64.efi file, and if the configuration file doesn't exist - you get the GRUB prompt, like I did.
We can see this by running set in the GRUB prompt: the output will have
prefix=(hd0,gpt1)/EFI/ubuntu

meaning that the prefix (where GRUB thinks it was loaded from and where it expects to see all the other files) is hardcoded to what Ubuntu uses.
When you install KDE Neon (which is what I'm using), Neon installer will create both an EFI/ubuntu and EFI/neon installations of the boot loader - I'm not sure why they even try, as the EFI/neon folder isn't even used (probably because they want the boot loader entry name to read "neon", and for some reason that means the folder must also be called "neon"? the UEFI spec doesn't require that), so that works.
What I did wrong was to assume that the EFI/ubuntu folder was some sort of legacy (I installed Neon on top of a previous Ubuntu installation) and removed it (and also re-installed the Neon bootloader into EFI/Neon, because I wanted the boot entry to be nicely capitalized) - so the GRUB executable was loaded (by the UEFI firmware) from the new folder, but once it was loaded - it tried to find the rest of the setup in EFI/ubuntu and that wasn't there. So we get dumped to the prompt to figure it out.
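Given all that, the practical fix is to put a grub.cfg back where the compiled-in prefix expects it; a minimal sketch, assuming the ESP is mounted at /boot/efi and reusing the Neon copy of the file:

sudo mkdir -p /boot/efi/EFI/ubuntu
sudo cp /boot/efi/EFI/Neon/grub.cfg /boot/efi/EFI/ubuntu/grub.cfg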
|
After getting a new system, I moved the NVME drive from my old system to the new one, where both machines are set to boot using UEFI. As the EFI setup is machine specific, I expected to encounter trouble, and came prepared - I booted from a live USB, did the mounting and chrooting thing, and reinstalled GRUB according to instructions (found everywhere on the web and repeated below), after which a reboot brings me to the GRUB command prompt, and I can't figure out how to fix this - no matter what I do, if I select the Linux UEFI boot entry (as shown below), I get a grub prompt. If I select the Windows UEFI boot entry (this is a dual boot system), it loads fine.
I can always boot the Linux install from the GRUB prompt by typing configfile (hd0,gpt6)/@/boot/grub/grub.cfg which starts the GRUB menu I expected and from that I can boot any entry without problems (I'm using BTRFS as the root partition, and / is in subvolume @ - this is pretty standard for an Ubuntu install).
Here's the current setup:
# efibootmgr --verbose
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0001,0005,0000,0002
Boot0000* UEFI Samsung SSD 970 EVO Plus 1TB S6P7NF0T423021F 1 HD(1,GPT,37b6d616-6865-44b1-a382-9987345e2cfa,0x800,0x32000)/File(\EFI\Boot\BootX64.efi)N.....YM....R,Y.
Boot0001* Neon HD(1,GPT,37b6d616-6865-44b1-a382-9987345e2cfa,0x800,0x32000)/File(\EFI\Neon\shimx64.efi)
Boot0002* UEFI HTTPs Boot PciRoot(0x0)/Pci(0x1f,0x6)/MAC(000000000000,0)/IPv4(0.0.0.00.0.0.0,0,0)/Uri()N.....YM....R,Y.
Boot0005* Windows Boot Manager HD(1,GPT,37b6d616-6865-44b1-a382-9987345e2cfa,0x800,0x32000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}....................
# ll /boot/efi/EFI/
total 6
drwx------ 6 root root 1024 Jul 26 12:38 ./
drwx------ 4 root root 1024 Jan 1 1970 ../
drwx------ 2 root root 1024 Aug 28 2022 Boot/
drwx------ 5 root root 1024 Jul 26 12:38 Dell/
drwx------ 4 root root 1024 Aug 28 2022 Microsoft/
drwx------ 2 root root 1024 Jul 24 20:01 Neon/
# cat /boot/efi/EFI/Neon/grub.cfg
search.fs_uuid 2886a665-f535-496e-a543-13c62983b0da root
set prefix=($root)'/@/boot/grub'
configfile $prefix/grub.cfg

# ll /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 120 Jul 26 13:20 ./
drwxr-xr-x 9 root root 180 Jul 26 13:20 ../
lrwxrwxrwx 1 root root 15 Jul 26 13:20 2886a665-f535-496e-a543-13c62983b0da -> ../../nvme0n1p6
lrwxrwxrwx 1 root root 15 Jul 26 13:20 80D9-5688 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jul 26 13:20 82926928-1ed1-4a76-b0e8-d62a0171c1ee -> ../../nvme0n1p5
lrwxrwxrwx 1 root root 15 Jul 26 13:20 D8D6FA10D6F9EF1E -> ../../nvme0n1p4
# lsblk -e 7 -o name,fstype,size,fsused,label,partlabel,mountpoint,uuid,partuuid
NAME FSTYPE SIZE FSUSED LABEL PARTLABEL MOUNTPOINT UUID PARTUUID
nvme0n1 931.5G
├─nvme0n1p1 vfat 100M 58.8M EFI system partition /boot/efi 80D9-5688 37b6d616-6865-44b1-a382-9987345e2cfa
├─nvme0n1p2 16M Microsoft reserved partition fdc1478a-852d-4f33-8e22-24a2ea209726
├─nvme0n1p3 BitLocker 50.2G Basic data partition e8584ca9-db86-4bb1-abf7-424afe77bc94
├─nvme0n1p4 ntfs 517M D8D6FA10D6F9EF1E 5373ade9-ed41-4164-85e2-c28b7026c25f
├─nvme0n1p5 swap 30.5G [SWAP] 82926928-1ed1-4a76-b0e8-d62a0171c1ee d5d866a5-97d4-49bb-a182-6eec3abcba18
└─nvme0n1p6 btrfs 850.2G 440.5G linux /var/lib/docker/btrfs 2886a665-f535-496e-a543-13c62983b0da cb45a183-14d3-4a95-b89f-6b5e315609cd

I can try to reinstall by:

Remove the offending EFI boot entry: efibootmgr -b 1 -B
Reinstall the boot loader to the disk and EFI: grub-install /dev/nvme0n1 --target x86_64-efi --efi-directory /boot/efi/ --bootloader-id Neon
Update the GRUB configuration (I'm not sure it is needed, but that's what the docs say): update-grub2

After which the configuration looks like I'm showing above, and on restart GRUB will not start the menu automatically. I'm not sure about the UUID in the EFI NVRAM dump, but this value isn't configured anywhere that I can find and it reproduces if I remove the entries and recreate them.
Any idea what I'm missing?
| Computer boots to GRUB prompt, after moving drive from an old computer |
Problem: I told the kernel to boot "/dev/nvme1" instead of using labels or partition UUIDs; this caused the Windows drive to sometimes be discovered as "nvme0" and sometimes as "nvme1". Whenever the Windows drive got "nvme1" (around 50% of the time), the system couldn't boot it because of a FAT-fs problem (fortunately, I think).
I'm fixing it by using partition labels or UUIDs instead of device paths, which is what is generally advised anyway, and now I know why.
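For example, a minimal sketch with a placeholder PARTUUID that you would replace with what blkid reports for your root partition:

blkid /dev/nvme1n1p1
# kernel command line using the stable identifier:
# root=PARTUUID=00000000-0000-0000-0000-000000000000 rw quiet ...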
It's amazing how asking a question online suddenly makes you find the solution; sorry for the fuss.
|
I've found a ton of posts on the internet regarding this error:
wrong fs type, bad option, bad superblock on /dev/xxx, missing codepage or helper program, or other error
Yet I have never found any case where the error just "sometimes" appears when booting.
Whenever I boot my linux machine I sometimes get the mentioned error and sometimes it just works fine. It's around a 50/50 chance and I have not been able to see a pattern in any way.
If I get the error, I just reboot and try again; I've been doing this for the past half a year. Three boots is usually the maximum number of attempts before I get to my desktop.
If I try mounting the drive in the emergency shell, no error pops up whatsoever and I can write/read to/from the drive without any problem.
I would like to know if this is a fixable problem or if I should send back the nvme drive (it still has warranty).
Kernel: 6.2.8-alderlake-xanmod1-1 (Xanmod + GCC optimizations)
OS: ArchLinux
Drive: Kingston KC3000 PCIe 4.0, 1TB, bought separately from the laptop
Laptop: Rog Zephyrus m16

EDIT:
I have two drives with windows/linux dual boot. Linux tries to boot /dev/nvme1n1p1, but I just found out that from the emergency shell I can only mount /dev/nvme0n1p1 which is actually the linux root that is supposed to be booted. Whenever I get to boot into the desktop, then an fdisk -l shows me that the linux drive is correctly labeled as nvme1n1p1, therefore I suppose that the system is only able to boot when the linux drive is assigned "nvme1" and the windows drive is assigned "nvme0". I've manually specified the kernel command line with EFISTUB as such:
root=/dev/nvme1n1p1 resume=/dev/nvme1n1p2 rw quiet modprobe.blacklist=nouveau ibt=off initrd=\initramfs-linux-xanmod.img
| Probabilistic (~50%) error on boot regarding "wrong fs type, bad option, bad superblock...missing codepage or helper program, or other error" |
Yeah,
ls -la /dev/disk/by-path

# or
cd /sys/block
for i in nvme*; do
echo "$i is $(cat "$i/device/address")"
done |
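A quick alternative check (not part of the loop above, just a common trick) is to resolve the block device's sysfs path, which contains the PCI address:

readlink -f /sys/block/nvme1n1/device
# .../0000:02:00.0/0000:03:02.0/0000:04:00.0/nvme/nvme1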
I have a U.2 SSD, which shows up as nvme1n1 in lsblk:
root@eris:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 3.6T 0 disk
├─nvme0n1p1 259:1 0 476M 0 part /boot/efi
├─nvme0n1p2 259:2 0 38.1G 0 part /
└─nvme0n1p3 259:3 0 3.6T 0 part /data
nvme1n1 259:4 0 3.5T 0 disk

Looking in dmesg, I can see that:
root@eris:~# dmesg | grep -i nvme
[ 0.997417] nvme nvme0: pci function 0000:01:00.0
[ 0.997448] nvme nvme1: pci function 0000:04:00.0
...

And this matches:
root@eris:~# ll /sys/bus/pci/drivers/nvme | grep 04:00.0
lrwxrwxrwx 1 root root 0 Jul 4 14:41 0000:04:00.0 -> ../../../../devices/pci0000:00/0000:00:01.2/0000:02:00.0/0000:03:02.0/0000:04:00.0

Which is something I will need to know for later.
My question is, is there a simpler way to get from /dev/nvme1n1 to /sys/bus/pci/drivers/nvme/0000:04:00.0?
| Is there a simple way to see which PCI device my NVME is? |
Found the answer: this is much, much better with oflag=direct, jumping from 45 MB/s to 536 MB/s :)
dd if=/dev/zero of=/dev/mapper/ecrypt oflag=direct bs=1M status=progress

Thanks to these two posts:

NVMe performance hit when using LUKS encryption
https://stackoverflow.com/questions/33485108/why-is-dd-with-the-direct-o-direct-flag-so-dramatically-faster |
I'm trying to prepare an NVMe drive for encryption, so I first followed this post on SO.
But the speed of dd is really slow (less than 100 MB/s). I see there is a new option to speed up dm-crypt in kernel 5.9 (see this post), but before updating my kernel, I want to know whether the nvme-cli write-zeroes tool is equivalent to /dev/zero for preparing a disk: https://manpages.debian.org/testing/nvme-cli/nvme-write-zeroes.1.en.html
The actual (and very slow) commands to prepare the disk before the LUKS2 format:
cryptsetup plainOpen --key-file /dev/urandom /dev/nvme0n1p2 ecrypt
dd if=/dev/zero of=/dev/mapper/ecrypt bs=1M status=progress
cryptsetup plainClose

Update:
Moving to kernel 5.12 with cryptsetup 2.3.4, I use these new perf options:
cryptsetup plainOpen --perf-no_read_workqueue --perf-no_write_workqueue --key-file /dev/urandom /dev/nvme0n1p2 ecrypt

dmsetup table says the options are correctly activated:
ecrypt: 0 1999358607 crypt aes-cbc-essiv:sha256 0000000000000000000000000000000000000000000000000000000000000000 0 259:2 0 2 no_read_workqueue no_write_workqueueI also verifyed that AES is activated with cpuid :
cpuid | grep -i aes | sort | uniq
AES instruction = true
VAES instructions = false

I still have the same problem: dd writes zeros at 900 MB/s and slowly decreases to 45 MB/s...
| Slow /dev/zero format using dd with nvme to prepare crypto, is there nvme specific tool? |
Check these lines in /usr/lib/udev/rules.d/60-persistent-storage.rules:
KERNEL=="nvme*[0-9]n*[0-9]", ENV{DEVTYPE}=="disk", ATTRS{nsid}=="?*", ENV{ID_NSID}="$attr{nsid}"
# obsolete symlink that might get overridden on adding a new nvme controller, kept for backward compatibility
KERNEL=="nvme*[0-9]n*[0-9]", ENV{DEVTYPE}=="disk", ENV{ID_MODEL}=="?*", ENV{ID_SERIAL_SHORT}=="?*", \
OPTIONS="string_escape=replace", ENV{ID_SERIAL}="$env{ID_MODEL}_$env{ID_SERIAL_SHORT}", SYMLINK+="disk/by-id/nvme-$env{ID_SERIAL}"
KERNEL=="nvme*[0-9]n*[0-9]", ENV{DEVTYPE}=="disk", ENV{ID_MODEL}=="?*", ENV{ID_SERIAL_SHORT}=="?*", ENV{ID_NSID}=="?*",\
OPTIONS="string_escape=replace", ENV{ID_SERIAL}="$env{ID_MODEL}_$env{ID_SERIAL_SHORT}_$env{ID_NSID}", SYMLINK+="disk/by-id/nvme-$env{ID_SERIAL}"

The first rule sets ID_NSID (namespace id?) of the NVMe device (compare head /sys/block/nvme*/nsid). The second rule sets ID_SERIAL and creates the regular disk/by-id/nvme-ID_SERIAL symlink. The third rule appends ID_NSID to ID_SERIAL and makes the "duplicate" disk/by-id/nvme-ID_SERIAL_NSID symlink.
In udevadm info /dev/nvme you'll only see the NSID variant for ID_SERIAL, since the third rule overwrites whatever the second rule had set.
It looks a bit confusing but it seems to be working as intended. If you want to get rid of it, you'd have to disable the third udev rule quoted above.

In this commit c5ba7a2a @ github/systemd you can also see the same rules repeated for the individual partition symlinks, so you'd have to disable it in both places.
However the intention here seems to be to prefer the _1 links over the regular ones, as those are… uh… namespace-safe? ;-)
I'm not sure about this approach, with a slightly different rule you could protect the regular symlink… this patch just adds new ones without solving the original problem, that's a bit weird. But I don't use namespaces much, so…
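If you do decide to disable it, a hedged sketch of the usual override mechanism (a file of the same name under /etc/udev/rules.d takes precedence over the /usr/lib copy; comment out the NSID rules in both the disk and partition sections of the copy):

sudo cp /usr/lib/udev/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
# edit the copy and comment out the rules that append $env{ID_NSID}
sudo udevadm control --reload
sudo udevadm trigger /dev/nvme0n1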
|
Recently, I noticed that there were extra symlinks in /dev/disk/by-id for my NVME drives, with the duplicates having the same name with _1 appended.
# ls -lF /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW{,_1}
lrwxrwxrwx 1 root root 13 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1 -> ../../nvme0n1

(serial numbers edited to mostly XXXXXXXX for no particular reason)
and it duplicates the entries for all the partitions on the NVME drives too:
$ ls -lF /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1*-part*
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part4 -> ../../nvme0n1p4
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part5 -> ../../nvme0n1p5
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part6 -> ../../nvme0n1p6
lrwxrwxrwx 1 root root 15 Jul 29 19:22 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466XXXXXXXXXXW_1-part7 -> ../../nvme0n1p7

This is only happening for my NVME drives, not for my SATA or USB flash drives.
I can't see anything obvious in /usr/lib/udev/rules.d/60-persistent-storage.rules that would do this.
Anyone know why udev is doing this? And, more importantly, how to stop it?
| udev makes duplicate /dev/disk/by-id symlinks for nvme drives & partitions |
If the ESP is mounted at /boot/efi, the Arch loader.conf should be placed at /boot/EFI/loader/loader.conf.
If the ESP is mounted at /boot, then it should be at /boot/loader/loader.conf respectively.
And if you view the ESP filesystem through the GRUB prompt or any other mechanism that will focus on only one filesystem at a time, it should be at /loader/loader.conf. In other words, the man page is specifying the loader.conf location as relative to the mount point/root directory of the ESP filesystem.
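To illustrate, here is a sketch of a typical systemd-boot ESP layout with the ESP mounted at /boot/efi (the paths are the standard defaults, not taken from the question):

/boot/efi                          <- ESP mount point
├── EFI/
│   └── systemd/
│       └── systemd-bootx64.efi    <- the boot loader itself
└── loader/
    └── loader.conf                <- what "ESP/loader/loader.conf" refers to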
The loader.conf described by Arch's man page refers to the configuration file of systemd-boot, which is a UEFI-only bootloader. It has no relation to FreeBSD's bootloader and its loader.conf file, although it uses the same filename. Although systemd-boot can boot Windows and MacOS in addition to Linux, it doesn't seem to directly support booting FreeBSD.
On a system that uses legacy BIOS, you cannot use the systemd-boot bootloader, and the version of GRUB that supports BIOS (=GRUB architecture code i386-pc) does not use loader.conf at all by default.
|
I'm attempting to alias /dev/nvmeX as /dev/nvdX on bootup through the following guide: https://www.freebsd.org/cgi/man.cgi?query=nvd
I would like to know where the loader.conf file with the following contents is supposed to be placed to alias /dev/nvme0 as /dev/nvd0 on bootup:
nvme_load="YES"
nvd_load="YES"

loader.conf manpage: https://man.archlinux.org/man/loader.conf.5
The loader.conf manpage mentions that:

systemd-boot(7) will read ESP/loader/loader.conf...

I'm aware that "ESP" refers to an EFI system partition. So on an EFI system partition with the GRUB bootloader, would the proper loader.conf placement be something like /boot/loader/loader.conf, /boot/efi/loader/loader.conf, or /loader/loader.conf?
Additional question: Is loader.conf specific to an ESP system partition and doesn't work through a BIOS/MBR system partition?
I've attempted this on a BIOS/MBR system partition using the suggested placements above with no success.
| Where to place loader.conf on an EFI system partition with the GRUB bootloader? |
Answer adapted from: how-to-fix-overlapped-partitions-in-the-mbr-table. You can try this, but I think a much easier solution is to just delete the swap and logical partitions.

Fixing the partition table with sfdisk

Boot with a live Ubuntu disk, then confirm the problem on your disk device (here /dev/sda) with parted, e.g.:
sudo parted /dev/sda unit s print

which should show: Error: Can't have overlapping partitions.
Partition details can be checked with:
sudo fdisk -l -u /dev/sda

which, for you, according to your post is:
Disk /dev/nvme0n1: 953,9 GiB, 1024209543168 bytes, 2000409264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6e617337

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 1998407679 1998405632 952,9G 83 Linux
/dev/nvme0n1p2 1998409726 2000397734 1988009 970,7M 5 Extended
/dev/nvme0n1p5 1998409728 2000408575 1998848 976M 82 Linux swap / Solaris

Checking the overlaps: you can see that your extended partition /dev/nvme0n1p2 is smaller than your swap partition /dev/nvme0n1p5.

To make things clearer: the swap partition lives inside that extended partition, so ideally its size should be smaller than the extended partition's size. But in your case the swap size is greater than the size of the extended partition itself.
Device           Size
/dev/nvme0n1p2   970,7M
/dev/nvme0n1p5   976M

In other words, the end sector of nvme0n1p2 should be greater than the end sector of nvme0n1p5, but in your case:

nvme0n1p2 end = 2000397734
nvme0n1p5 end = 2000408575

and hence the problem.
Now you can solve it simply by reducing your swap partition size (to roughly 600-700 MB) using GParted.
Or you can use command-line tools:
Using sfdisk

As suggested in the documentation ("In cases where we do not know if the starting or ending sector is the problem, we assume that the starting sector of each partition is correct, and that the ending sector might be in error"), we assume that the starting sector of the extended partition nvme0n1p2 is correct. Hence we will be looking to change the end sector of the swap partition nvme0n1p5.

Calculations:
nvme0n1p5newEnd = nvme0n1p2end - 1 =
2000397734 - 1 = 2000397733
nvme0n1p5newSize = nvme0n1p5newEnd - nvme0n1p5start =
2000397733 - 1998409728 = 1988005

Dump a copy of the partition table to a file using the sfdisk command:

sudo sfdisk -d /dev/sda

should dump the partition table details.
This can be dumped to a file which, after the necessary corrections are made, can be fed back to sfdisk. [To OP: please edit your question and include the output of sudo sfdisk -d /dev/sda]
Dump a copy of the partition table with:

sudo sfdisk -d /dev/sda > sda-backup.txt

Open the file created in the previous step with root privileges, using the text editor of your choice. In this example I'll use nano:
sudo nano sda-backup.txt

(This assumes sda-backup.txt is in the current directory; otherwise replace it with the file's absolute path.)
Change the old size of nvme0n1p5 (1998848) to the corrected size (1988005), so that your new partition table dump reflects the corrected table (output not attached by OP). Save the file (Ctrl+O for nano) and close the editor (Ctrl+X for nano).
sudo sfdisk /dev/sda < sda-backup.txtConfirm if the problem is resolved by running parted on your disk device:
sudo parted /dev/sda unit s printIf step 9 confirm that the partition table is fixed, you can then use GParted or other partition editors with the device.The GParted documentition also suggests an alternative method, using
testdisk to scan the disk
device to rebuild the partition table. The testdisk application is
included on GParted Live. So if you
are not comfortable with the command-line way, you can try the
alternative.
sourceUsing Gparted
unmount your swap partition before continuingcurrent stateresize the root partitionroot partition before resizeroot partition after resizecreated empty space after root partitiondeleting swapdelaeting logical partitionall partitions removed except rootcreate new logical partitionleave some free space before partition (so it doesn't overlap) and select partition type as Extended partitionthis is how it should look nowcreate swap partitionleave some free space after partition so it doesn't exceed and select filesysytem as linux swapthis is how it should look nowcopy the UUID of your new swap and replace it in your /etc/fstab |
I am trying to image my Ubuntu disk using Clonezilla, and it fails with the error:

error: cannot have overlapping partitions

Below is how my disk is set up, starting with the lsblk output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop1 7:1 0 42,2M 1 loop /snap/snapd/14066
nvme0n1 259:0 0 953,9G 0 disk
├─nvme0n1p5 259:3 0 976M 0 part [SWAP]
└─nvme0n1p1 259:1 0 952,9G 0 part /

And here is the output of fdisk -l /dev/nvme0n1:
Disk /dev/nvme0n1: 953,9 GiB, 1024209543168 bytes, 2000409264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6e617337

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 1998407679 1998405632 952,9G 83 Linux
/dev/nvme0n1p2 1998409726 2000397734 1988009 970,7M 5 Extended
/dev/nvme0n1p5 1998409728 2000408575 1998848 976M 82 Linux swap / Solaris

And here is how it appears in GParted (screenshot in the original post). Any advice on how to fix this error so I can image/save my disk?
| Clonezilla: cannot have overlapping partitions |
I/O schedulers are assigned globally at boot time.
Even if you use multiple elevator=[value] assignments only the last one will take effect.
To automatically/permanently set per-device schedulers you could use udev rules, systemd services or configuration & performance tuning tools like tuned.
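For instance, a hedged sketch of the udev route (a file such as /etc/udev/rules.d/60-ioschedulers.rules; note that on newer multi-queue kernels the valid scheduler names are none, mq-deadline, kyber and bfq rather than noop):

ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"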
As to your other question, the answer is yes, elevator=none is the correct value to use for NVME storage.
|
We have systems with both spinning mechanical disks, and NVME storage. We want to reduce the CPU overhead for IO by taking any IO scheduler out of the way. We want to specify this on the Linux boot command line; i.e. in GRUB_CMDLINE_LINUX, in the file /etc/default/grub.

For mechanical disks, we can append elevator=noop to the command line. This corresponds to the noop value in /sys/block/sda/queue/scheduler
For NVME storage, we instead use none in /sys/block/nvme0n1/queue/scheduler; which presumably (could not confirm) can be specified at boot time by appending elevator=none.

This becomes a two-part question:

Is elevator=none the correct value to use for NVME storage in GRUB_CMDLINE_LINUX?
Can both values be specified in GRUB_CMDLINE_LINUX?

If the second is correct, I'm guessing that elevator=noop will set correctly for the spinning disks, but the NVME controller will gracefully ignore it; then elevator=none will set correctly for NVME disks, but the spinning disk controller will gracefully ignore that.
| How to specify multiple schedulers on the kernel boot command line? |
A colleague of mine has done something similar for SD cards. He traced the IO after the host has received the response from the card and is about to wrap up the operation (the function is sdhci_request_done).

Unlike SD cards, most data will actually be exchanged via DMA to an NVMe device (usually), so your Linux can't know the content of the transfer, only that it happened. I'm sure you can disable DMA, at a huge cost in performance. I don't know how to do that, but you can possibly achieve it using a kernel boot flag.
Other than that, you can already trace all commands exchanged, without having to extend anything. Linux has tracepoints, and nvme is just one family of them; so
sudo perf trace -e nvme:nvme_\* > logfile |
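To see exactly which tracepoints your kernel exposes before capturing (the names vary a little between kernel versions), you can list them first:

sudo perf list 'nvme:*'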
I want to write a small Linux driver extension. More specifically, I want to write all the communication between the host and an M.2 NVMe SSD into a userspace file. The nvme driver is pretty big, though, and I have difficulty pinpointing a place to start.
A colleague of mine has done something similar for SD cards. He traced the IO after the host has received the response from the card and is about to wrap up the operation (the function is sdhci_request_done). The trace shows requests and responses with opcode, data and timestamps. Something like this would be my goal.
I have found programs that trace IO, but they operate in userspace. That is a problem, as I might send a message to the card directly from the driver.
So my question is: where do I tap into the host driver to get the data, without delaying the operations or allocating much memory? Or is there a driver function that does this?
| Where do i trace NVME IO within the Linux driver? |
You need to create a FAT filesystem on /dev/nvme0n1p1 before you can mount the partition:
mkfs.fat -F 32 /dev/nvme0n1p1

The step is missing from the linked tutorial.
|
I'm following along in this article to install arch in vmware on my m1 mac
I'm able to run fdisk just fine and get the expected partition table (shown in a screenshot in the original post). I then create the filesystem for partition 2 per the article with mkfs.ext4 /dev/nvme0n1p2. When I mount it with mount /dev/nvme0n1p2 /mnt, it works fine.
But when I attempt to mount the EFI filesystem, I get an error (screenshot in the original post), and dmesg shows a corresponding error. Anybody have any thoughts on where to go from here? I tried specifying mount -t ext4 ... but got VFS: can't find ext4 filesystem.
| Installing ArchLinux in VmWare Fusion on M1 Mac |
Answer was simpler than I thought. This SSD had been dd'd from another laptop that had an offboard AMD GPU.
After I uninstalled the old Radeon drivers (following these instructions) boot times went back to normal.
|
I've used dd to clone my old SATA SSD into a new and larger NVMe model on a Dell XPS 9360 and switched the SATA Configuration from the original RAID On to AHCI to get it booting. It works, but the new disk feels a lot slower than my old SSD, especially during boot. Boot times are up in the minutes vs the ~20 seconds I had before.
I've read somewhere that for M.2 NVMe PCIe chips it would be best for performance to switch SATA to Disabled instead of AHCI. Is that the case?
If it is, how can I safely switch modes? (I tried just switching it on the BIOS but the SSD isn't recognized and I can't boot from it).
If I should stick to AHCI, how can I get faster boot times? (Already enabled Fastboot in the BIOS)
Question originally asked (but considered off-topic) here
lshw:
$ sudo lshw## output trimmed ##
*-pci:3
description: PCI bridge
product: Intel Corporation
vendor: Intel Corporation
physical id: 1d
bus info: pci@0000:00:1d.0
version: f1
width: 32 bits
clock: 33MHz
capabilities: pci pciexpress msi pm normal_decode bus_master cap_list
configuration: driver=pcieport
resources: irq:125 memory:dc200000-dc2fffff
*-storage
description: Non-Volatile memory controller
product: Toshiba America Info Systems
vendor: Toshiba America Info Systems
physical id: 0
bus info: pci@0000:3c:00.0
version: 01
width: 64 bits
clock: 33MHz
capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
configuration: driver=nvme latency=0
resources: irq:16 memory:dc200000-dc203fff

## output trimmed ##

fdisk:
$ sudo fdisk -l
Disk /dev/loop0: 86,6 MiB, 90828800 bytes, 177400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 86,6 MiB, 90812416 bytes, 177368 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 86,6 MiB, 90759168 bytes, 177264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 953,9 GiB, 1024209543168 bytes, 2000409264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A4EB683B-DB3D-49FD-AA58-67970447597C

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p2 1050624 1550335 499712 244M Linux filesystem
/dev/nvme0n1p3 1550336 2000409230 1998858895 953,1G Linux filesystem

Disk /dev/mapper/sda3_crypt: 953,1 GiB, 1023413657088 bytes, 1998854799 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/mint--vg-root: 951,2 GiB, 1021388521472 bytes, 1994899456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/mint--vg-swap_1: 1,9 GiB, 2021654528 bytes, 3948544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

hdparm:
$ sudo hdparm -Tt --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads: 1952 MB in 2.00 seconds = 976.68 MB/sec
 Timing O_DIRECT disk reads: 2226 MB in 3.00 seconds = 741.09 MB/sec

$ sudo hdparm -Tt /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads: 16664 MB in 1.99 seconds = 8352.90 MB/sec
 Timing buffered disk reads: 2296 MB in 3.00 seconds = 765.09 MB/sec

lsb_release:
$ lsb_release -a
LSB Version: core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch
Distributor ID: LinuxMint
Description: Linux Mint 18.1 Serena
Release: 18.1
Codename: serena

uname:
$ uname -a
Linux ricardo-ssd 4.4.0-128-generic #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
| M.2 NVMe PCIe boot times much slower than old SATA SSD |
I was able to get this working using vanilla rEFInd, and providing it with a driver for NVMe. I installed to a USB device, as this was a non-intrusive option that would be transparent to the system. Since TrueNAS is managing the boot disk, I don't want to interfere with it.
rEFInd will boot from the USB, and then chain load grub from the NVMe disk.
rEFInd does provide an ISO you can write to a USB device, but the filesystem contains very little free space, and the fatresize tool was unable to resize it (claims it's a FAT12 filesystem). So you have to use the installer tool.

Download rEFInd, the "binary zip file" option.
Use gdisk (or other partition tool) to partition the USB device, setting the partition type to EF00.
I formatted with mkfs.vfat, though I'm not sure this step is required.
Run refind-install --usedefault /dev/name_of_usb_partition. (e.g. /dev/sdz1)
Mount the USB device.
Run mkdir /path_to_usb/EFI/BOOT/drivers_x64
Download Clover. (I chose this driver as the rEFInd author specifically mentions it working)
Copy efi/clover/drivers/off/nvmexpressdxe.efi from Clover to /path_to_usb/EFI/BOOT/drivers_x64/.
Unmount everything.

That's it. rEFInd will automatically use the driver, then scan for available boot options, and automatically boot after 20 seconds. You can follow the rEFInd documentation to configure the behavior.
|
I have a system which I want to boot from an NVMe disk (via a PCIe riser). The system is UEFI capable, I can boot from a USB disk and install the OS (TrueNAS Scale) to the NVMe disk, and the OS shows up in the UEFI boot options. However when attempting to boot from that UEFI option, it fails to do so (just drops me into the bios screen).
This appears to be due to the BIOS not supporting booting from NVMe disks. This would somewhat make sense since I used a PCIe riser to add the NVMe disk.
When I google the subject, there are many references to rEFInd and "DUET". However I can't find any information on this "DUET". The links I can find all point to a dead repo.
How can I get the system to boot from NVMe?
| How do I boot from an NVMe disk without bios support? |
Unfortunately, adding nvme and lvm to /etc/initramfs-tools/modules, updating the initramfs, and rebooting was not effective.

Thus I reverted that change, and then I tried adding

blacklist rtsx_pci
blacklist rtsx_pci_sdmmc

to a new file, /etc/modprobe.d/blacklist_rtsx.conf, and rebooting, and the problem was solved.

(I have since read that a patch for this has already been submitted to the kernel maintainers, so chances are this issue may soon become a thing of the past.)
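One hedged note on top of the above: a modprobe.d blacklist typically only takes effect during early boot once the initramfs has been regenerated, so on Ubuntu-family systems you may also want to run:

sudo update-initramfs -u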
|
After a recent update (I'm not sure if it was the first to include the new 6.1 kernel), my Ubuntu Linux laptop cannot boot anymore.
the error is
Volume group “ubuntu-vg” not found
Cannot process volume group ubuntu vg
IO error while decrypting keyslot.
Keyslot open failed.
Device /dev/nvme0n1p3 does not exist or access is denied
Please unlock disk nvme0n1p3_crypt:

but then the usual decrypting code does not work.
While in the initramfs shell I noticed there were no /dev/nvme* or /dev/mapper* devices for my internal SSD.
I managed to boot the laptop with an Ubuntu live USB stick and manually decrypted and mounted my SSD partitions; my data were all there. So I rebooted, made the GRUB boot menu appear again, selected the previous kernel 5.17, and the system managed to boot normally.
Now I would like to fix the new kernel in a stable way
Here is some info on my laptop:
OS: Ubuntu 22.04.3 LTS x86_64
Host: XPS 15 9560
Kernel: 5.17.0-1035-oem

My boot partition content is:
$ ll /boot/ | grep -E "initrd|vmlinuz"
lrwxrwxrwx 1 root root 25 2023-10-05 20:38:05 initrd.img -> initrd.img-6.1.0-1023-oem
-rw-r--r-- 1 root root 112483877 2023-10-16 03:12:30 initrd.img-5.15.0-86-generic
-rw-r--r-- 1 root root 117815613 2023-10-16 03:12:18 initrd.img-5.17.0-1035-oem
-rw-r--r-- 1 root root 130800464 2023-10-16 03:12:06 initrd.img-6.1.0-1023-oem
lrwxrwxrwx 1 root root 28 2023-10-05 20:38:05 initrd.img.old -> initrd.img-5.15.0-86-generic
lrwxrwxrwx 1 root root 22 2023-10-05 20:38:05 vmlinuz -> vmlinuz-6.1.0-1023-oem
-rw------- 1 root root 11624584 2023-09-20 10:09:11 vmlinuz-5.15.0-86-generic
-rw------- 1 root root 11275528 2023-07-12 11:49:08 vmlinuz-5.17.0-1035-oem
-rw------- 1 root root 12521608 2023-09-15 14:50:36 vmlinuz-6.1.0-1023-oem
lrwxrwxrwx 1 root root 25 2023-10-05 20:38:05 vmlinuz.old -> vmlinuz-5.15.0-86-generic
$

The lsblk for nvme:
$ lsblk | tail -n 7
nvme0n1 259:0 0 476,9G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2 259:2 0 732M 0 part /boot
└─nvme0n1p3 259:3 0 475,7G 0 part
└─nvme0n1p3_crypt 253:0 0 475,7G 0 crypt
├─ubuntu--vg-root 253:1 0 474,8G 0 lvm /
└─ubuntu--vg-swap_1 253:2 0 980M 0 lvm [SWAP]
$

The fstab:
$ cat /etc/fstab | grep -E "mount point|^/"
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/ubuntu--vg-root / ext4 errors=remount-ro 0 1
/dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0
/swapfile swap swap defaults 0 0
$

I have read a post of similar issues on another XPS laptop, i.e.:
$ lspci | grep Unassigned
03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS525A PCI Express Card Reader (rev 01)
$

Should/could I also blacklist the drivers
blacklist rtsx_pci
blacklist rtsx_pci_sdmmc

in /etc/modprobe.d/blacklist_rtsx.conf and rebuild the initramfs?
I am asking since I am quite worried about bricking the system.
Apologies if I used terminology incorrectly or asked dumb questions.
| volume group not found on linux laptop after update |