"My only guess at 3 is to loop over each line (skipping first) and if the next result has a different xxx= vs abc= then print a -------- but I am not in love with that."

Your guess is just fine. Do it in awk:

```shell
comm ... | sed 's/^\t//' | awk -F= 'NR == 1 {cur = $1; print "---- begin ----"} cur != $1 {print "---------"} {cur = $1; print} END {print "---- end ----"}'
```

With line breaks added for readability:

```shell
comm ... | sed 's/^\t//' | awk -F= '
    NR == 1   { cur = $1; print "---- begin ----" }
    cur != $1 { print "---------" }
              { cur = $1; print }
    END       { print "---- end ----" }'
```

Output on the provided sample:

```
---- begin ----
AUTH_LP_ACCOUNT_ID=xxx1
AUTH_LP_ACCOUNT_ID=xxx2
---------
AWS_IMAGE_DOMAIN_NAME=abc
AWS_IMAGE_DOMAIN_NAME=zyx
---------
NODE_ENV=local
NODE_ENV=staging
---------
NODE_PORT=3000
NODE_PORT=4000
---------
REDIS_HOST=localhost
REDIS_HOST=redis
---- end ----
```
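The grouping step can be exercised on its own with a tiny stand-in for the comm output (the file name and sample keys here are made up for illustration):

```shell
# Three KEY=value lines; the key changes once, so one separator is printed.
printf '%s\n' 'A=1' 'A=2' 'B=3' > sample.txt
awk -F= '
    NR == 1   { cur = $1; print "---- begin ----" }
    cur != $1 { print "---------" }
              { cur = $1; print }
    END       { print "---- end ----" }' sample.txt
```

This prints `---- begin ----`, the two `A=` lines, a `---------` separator, `B=3`, then `---- end ----`.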
Using comm I get results that look weird from this:

```shell
comm -3 <(. "$first_env_file"; env) <(. "$second_env_file"; env)
```

I get something like:

```
AUTH_LP_ACCOUNT_ID=xxx1
AUTH_LP_ACCOUNT_ID=xxx2
AWS_IMAGE_DOMAIN_NAME=abc
AWS_IMAGE_DOMAIN_NAME=zyx
NODE_ENV=local
NODE_ENV=staging
NODE_PORT=3000
NODE_PORT=4000
REDIS_HOST=localhost
REDIS_HOST=redis
```

(and yes, the spaces in front (prepended tabs/spaces) are there). What I would rather it look like is something like this:

```
--begin--
AUTH_LP_ACCOUNT_ID=xx1
AUTH_LP_ACCOUNT_ID=xx2
---------
AWS_IMAGE_DOMAIN_NAME=abc
AWS_IMAGE_DOMAIN_NAME=zyx
---------
NODE_ENV=local
NODE_ENV=staging
---------
NODE_PORT=3000
NODE_PORT=4000
---------
REDIS_HOST=localhost
REDIS_HOST=redis
---end---
```

Is there a way to accomplish this? To remove the prepended whitespace we can pipe through `sed 's/^ *//'`. Putting --begin-- and ---end--- at the beginning/end is an easy matter, but how to group the results easily? My only guess at 3 is to loop over each line (skipping first) and if the next result has a different xxx= vs abc= then print a -------- but I am not in love with that.
Group results using comm
grep -v will work; just swap source with target:

- `-f -` reads the patterns from the pipe
- `-x` matches the whole line; `-w` (whole word) is recommended here, to ignore trailing whitespace in all.txt

```shell
cut -d\  -f1 active.txt | grep -vxFf - all.txt
```

Note the two whitespaces for cut: one escaped whitespace as the delimiter.

Edit: if the delimiter is a colon, cut at `:` instead:

```shell
cut -d: -f1 active.txt | grep -vwFf - all.txt
```
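As a quick sanity check of the colon-delimited variant, with made-up two-line data files:

```shell
# all.txt: every firewall; active.txt: firewalls with filtering active.
printf '%s\n' abc123 jki486 > all.txt
printf '%s\n' 'abc123: set macaddr 00:00:00:00:00:00' > active.txt
# Extract the names before ':' and drop them from all.txt:
cut -d: -f1 active.txt | grep -vwFf - all.txt   # prints: jki486
```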
I want to know on which ports of firewalls from a particular customer the MAC address filtering is not active. So I have created 2 files. all.txt contains a list of all firewalls of a customer and looks like this:

```
abc123
ahg578
dfh879
ert258
fgh123
huz546
jki486
lop784
mnh323
xsd451
wqa512
zas423
```

active.txt contains a list of firewalls of the same customer in which the MAC address filtering is active, and looks like this:

```
abc123: set macaddr 00:00:00:00:00:00
ahg578: set macaddr 00:00:00:00:00:00
dfh879: set macaddr 00:00:00:00:00:00
ert258: set macaddr 00:00:00:00:00:00
fgh123: set macaddr 00:00:00:00:00:00
huz546: set macaddr 00:00:00:00:00:00
mnh323: set macaddr 00:00:00:00:00:00
xsd451: set macaddr 00:00:00:00:00:00
wqa512: set macaddr 00:00:00:00:00:00
zas423: set macaddr 00:00:00:00:00:00
```

I have compared the two lists using

```shell
comm -3 ~/active.txt ~/all.txt
```

How can I get only the unmatched list as an output? I want the output to be only:

```
jki486
lop784
```

I have tried using sdiff, grep -rL, and grep -vxFf, but none of them works. FYI, I'm using GNU/Linux, kernel version 3.2.0-6-amd64, gcc version 4.9.2. I would really appreciate your help! Thank you! :)
get only the unmatched list as an output
The files have to be sorted lexically or comm will not work. Sort them into order and try again, or use:

```shell
comm -23 <(sort file1.txt) <(sort file2.txt)
```
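For instance, with hypothetical unsorted files (writing sorted copies to disk, which also works in shells without process substitution):

```shell
printf '%s\n' b a c > file1.txt
printf '%s\n' b x > file2.txt
sort file1.txt > file1.sorted
sort file2.txt > file2.sorted
comm -23 file1.sorted file2.sorted   # lines unique to file1: a and c
```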
I have a couple of files (file1.txt and file2.txt) and I am using the Unix comm command to compare those files to find the unique lines in file1.txt. Here are the lines in file1.txt:

```
OD1
EN2
OD3
OD4
OD5
EN6
EN7
EN8
EN9
OD10
OD11
OD12
```

Here are the lines in file2.txt:

```
EN1
EN2
EN3
OD4
OD5
EN6
EN7
EN8
EN9
OD10
```

I am using the command as:

```shell
comm -23 file1.txt file2.txt
```

The actual result is:

```
OD1
OD10
OD11
OD12
OD3
```

I was expecting:

```
OD1
OD11
OD12
OD3
```

Can you please help with how to get the expected results?
comm is not producing the expected result
The comm utility is used to compare whole lines between files. What you want to do is to join on a particular field:

```
$ join -t, file2 file1
number_123,hold,this car is under maintenance
number_345,done,this car checked is done
```

This assumes that both files are sorted on the join field (the first comma-delimited column in each file). If the files are not sorted, you may pre-sort them using

```shell
sort -t, -k1,1 -o file1 file1
sort -t, -k1,1 -o file2 file2
```

In ksh93, bash or zsh, you may also do the sort "on the fly":

```shell
join -t, <( sort -t, -k1,1 file2 ) <( sort -t, -k1,1 file1 )
```
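With the two sample files from the question written to disk, the join produces exactly the requested merged lines:

```shell
printf '%s\n' 'number_123,this car is under maintenance' \
              'number_345,this car checked is done' > file1
printf '%s\n' 'number_123,hold' 'number_345,done' > file2
# Both files are already sorted on the first comma-separated field.
join -t, file2 file1
# prints:
# number_123,hold,this car is under maintenance
# number_345,done,this car checked is done
```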
I have 2 files that contain number_ID, status, description. I want to join both files based on the numbers, as number_123, status1, status2. My file 1:

```
number_123,this car is under maintenance
number_345,this car checked is done
number_356,this car is under main
```

My file 2:

```
number_123,hold
number_345,done
```

I need to join only the numbers existing in both files, as:

```
number_123,hold,this car is under maintenance
number_345,done,this car checked is done
```

I used `comm file1 file2` to find the common numbers, but the result looks like:

```
number_123,this car is under maintenance
number_123,hold
number_345,this car checked is done
number_345,done
```

How can I print it in one line as

```
number_123,hold,this car is under maintenance
number_345,done,this car checked is done
```
Printing the common text in one line using the comm command?
diff is probably the tool you want. Here are three example files:

```
$ paste foo bar baz
aaa	aaa	aaa
aaa	aaa	aaz
aaa	aaa	aaa
$ if diff <(sort foo) <(sort bar); then echo "No differences"; fi
No differences
$ if diff <(sort foo) <(sort baz); then echo "No differences"; fi
3c3
< aaa
---
> aaz
```
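The exit status is what makes diff convenient in scripts: it is 0 only when the inputs are identical. A minimal check (file names are illustrative, and these tiny files are already sorted):

```shell
printf 'aaa\naaa\naaa\n' > foo
printf 'aaa\naaa\naaz\n' > baz
if diff foo baz > /dev/null; then
    echo "No differences"
else
    echo "Files differ"
fi
# prints: Files differ
```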
1.csv:

```
rundeck-read-only-iam-permissions,IAMReadOnlyAccess
citrix-xendesktop-ec2-provisioning",AmazonEC2FullAccess
```

2.csv:

```
citrix-xendesktop-ec2-provisioning",AmazonEC2FullAAA
citrix-xendesktop-ec2-provisioning",AmazonS3FullAccess
rundeck-read-only-iam-permissions,IAMReadOnlyAccess
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq,qqqqqqqqqqqqqqqq
```

Note that the order is not the same. I want to compare the differences in file 2.csv against file 1.csv (and order the lines in 2.csv to match the order in 1.csv). If there is no change, print "No changes"; if a line in file 1.csv has changed, print "Line that has changed" + line content; if a line in file 1.csv is missing, print "Line that was removed" + line content; if a line in file 1.csv was added, print "Line was added" + line content. So far I have this; it prints the desired output, but is it possible to detect what was added/removed?

```
$ comm -1 -3 <(sort 1.csv) <(sort 2.csv)
citrix-xendesktop-ec2-provisioning",AmazonEC2FullAAA
citrix-xendesktop-ec2-provisioning",AmazonS3FullAccess
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq,qqqqqqqqqqqqqqqq
```
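Telling additions from removals can be sketched with comm's one-sided filters: `comm -23` shows lines only in the first file (removed), `comm -13` lines only in the second (added). A toy example with made-up CSV content:

```shell
printf '%s\n' 'a,1' 'b,2' | sort > 1.sorted
printf '%s\n' 'b,2' 'c,3' | sort > 2.sorted
comm -23 1.sorted 2.sorted | sed 's/^/Line was removed: /'
comm -13 1.sorted 2.sorted | sed 's/^/Line was added: /'
# prints:
# Line was removed: a,1
# Line was added: c,3
```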
compare 2 csv files and output only difference into variable
```shell
while read -r f11 f12 f13
do
    grep -qxFe "$f11" file2 &&
    grep -qxFe "$f12" file3 &&
    grep -qxFe "$f13" file4 &&
    printf "%s\n" "$f11 $f12 $f13"
done < file1
```
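Run against the four sample files from the question (recreated here), the loop prints only the fully matched row:

```shell
# file1 is tab-separated; files 2-4 hold the allowed values per column.
printf 'applepen\tapple\tpen\nstrawberry\tstraw\tberry\n' > file1
printf '%s\n' applepen strawjelly > file2
printf '%s\n' apple fan straw > file3
printf '%s\n' pen zenith > file4
while read -r f11 f12 f13
do
    grep -qxFe "$f11" file2 &&
    grep -qxFe "$f12" file3 &&
    grep -qxFe "$f13" file4 &&
    printf '%s\n' "$f11 $f12 $f13"
done < file1
# prints: applepen apple pen
```

The second row is dropped because `strawberry` is not in file2, so the `&&` chain stops at the first failed grep.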
I have four files like so. File 1 contents (tab separated, 3 columns):

```
applepen	apple	pen
strawberry	straw	berry
```

File 2 contents:

```
applepen
strawjelly
```

File 3 contents (this file is sorted):

```
apple
fan
straw
```

File 4 contents (this file is sorted):

```
pen
zenith
```

I need to compare field 1 of file 1 with file 2, field 2 of file 1 with file 3, and field 3 of file 1 with file 4. If all three matches are found, I want to print fields 1, 2, 3 of file 1. I want to do this for each line in file 1. The output here should be:

```
applepen apple pen
```

Is there any way to do this using grep or comm or something similar?
How to compare multiple files and display the common lines?
From the manual:

```
-O ctl_cmd
        Control an active connection multiplexing master process. When the -O
        option is specified, the ctl_cmd argument is interpreted and passed to
        the master process. Valid commands are: check (check that the master
        process is running), forward (request forwardings without command
        execution), cancel (cancel forwardings), exit (request the master to
        exit), and stop (request the master to stop accepting further
        multiplexing requests).
```

Older versions only have check and exit, but that's enough for your purpose:

```shell
ssh -O check host.example.com
```

If you want to delete all connections (not just the connection to a particular host) in one fell swoop, then `fuser /tmp/ssh_mux_*` or `lsof /tmp/ssh_mux_*` will list the ssh clients that are controlling each socket. Use `fuser -HUP -k /tmp/ssh_mux_*` to kill them all cleanly (using SIGHUP as the signal is best, as it lets the clients properly remove their sockets).
With the following .ssh/config configuration: ControlMaster auto ControlPath /tmp/ssh_mux_%h_%p_%r ControlPersist 4hHow to close the persisting connection before the 4 hours? I know you can make new connections, but how to close them (all)? Maybe there is a way to show all the persisted connections and handle them individually but I can not find it.
How to close (kill) ssh ControlMaster connections manually
"I thought that multiple sessions should share the same socket with a connection to the same host."They can. However, note that if you connect to a host using an existing connection via ControlPath, regardless of which user you intend to log in as, you will be logged in as the original user of the connection. Eg., with no established connection to "somewhere": ssh -o ControlPath=~/.ssh/%h -o ControlMaster=yes bob@somewhereThis session is bob@somewhere. ssh -o ControlPath=~/.ssh/%h -o ControlMaster=no sue@somewhereThis session will also be bob@somewhere, because you used the same ControlPath and set ControlMaster=no; if ControlMaster=yes, you'd be logged in as sue, but ssh will have ignored your ControlPath argument, as implied in man ssh_config: Additional sessions can connect to this socket using the same ControlPath with ControlMaster set to 'no'. As evidence of this, if ControlMaster=yes in both cases, when bob exits the ControlPath socket ~/.ssh/somewhere will disappear even though the "sue" session is still running, meaning the sue session never used that socket. So, if you want to use the same connection, just %h is fine, but beware that you cannot share a connection as multiple different remote users -- ssh won't let you.
Why do the "ssh_config(5)" manpages recommend that the ControlPath option should contain at least the %h, %p and %r placeholders in order to uniquely identify each shared connection? I thought that multiple sessions should share the same socket with a connection to the same host. Wouldn't it make sense then to have a simple definition such as: ControlPath ~/.cache/ssh/mux/%hInstead of something like: ControlPath ~/.cache/ssh/mux/%r@%h:%pIn my understanding with the first definition one connection is shared between multiple sessions with different remote users, to the same remote host, on different remote ports. I want to have the first defintion in the host default section so that it suffices to say ssh -o ControlMaster=no. I want to share the connection to the same remote host between all sessions initiated by the same local user regardless of the remote user and remote port. The master client's socket should live beneath the local user's home directory.
Why not simply use %h in OpenSSH ssh's ControlPath option?
You need to set up NAT on the Linux box. There are numerous howtos on the Net when you search for NAT and iptables, maybe including the distro you use. Here is a howto for Debian which should work on other distros as well: http://debianclusters.org/index.php/NAT_with_IPTables

Here are some lines that come from a German Ubuntu howto:

```shell
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -o eth0 -s 192.168.0.0/16 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Put them somewhere where they are executed at startup (/etc/rc.local, or put "up" in front of every line and put the whole thing into /etc/network/interfaces), and replace eth0 with the network device that connects to the Internet and eth1 with the one that goes to your LAN. You might also have to tell your Windows box some name servers (DNS) manually if you don't want to set up bind on your Linux box. And I trust you don't need, or already have, a DHCP server in your LAN.
I have a Linux (Ubuntu 12.04) PC connected to the internet with a Greenpacket WiMax USB modem. I want to share the Internet connection with another computer running Windows 7 Home Premium, connected to the Linux PC over a LAN. Is this possible? How? Is the reverse possible instead (connecting the internet to the Windows computer and sharing it with Linux)?
How do I share internet with Windows from my Linux box?
Simple

Here's a very simple iptables ruleset that masquerades everything. This one works for many simpler setups. It won't work if the box is working as a full-blown router — it has a potentially nasty habit of NATting all traffic that leaves your computer.

```shell
iptables -A POSTROUTING -o eth+ -t nat -j MASQUERADE
iptables -A POSTROUTING -o wlan+ -t nat -j MASQUERADE
```

Full

If the simple solution fails to work, or if your configuration is more complex, this ruleset might help:

```shell
NATIF='vboxnet+'
MARK=1
iptables -A PREROUTING -t mangle -i $NATIF -j MARK --set-mark $MARK
iptables -A POSTROUTING -o eth+ -t nat -m mark --mark $MARK -j MASQUERADE
iptables -A POSTROUTING -o wlan+ -t nat -m mark --mark $MARK -j MASQUERADE
```

It marks packets coming in through any vboxnet* interface, then, later, masquerades (SNAT) any packets going out of eth* or wlan* with the mark set.

Also…

In addition to the iptables rules, you'll need to turn your host computer into a router by enabling packet forwarding. Put:

```
net.ipv4.ip_forward=1
```

in /etc/sysctl.conf, then say `sudo sysctl -p /etc/sysctl.conf`. Alternatively:

```shell
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
```

The guest must also have a default route that gateways packets through the host's external interfaces (and for this, chances are host-only mode just won't work). Check its routing table (this depends on the guest OS). Also, install wireshark or tshark and use them to examine packets. There's no better way to solve generic networking issues like this one.

Personally, I'd suggest changing the guest to use bridged mode networking and making both of the host's interfaces available to it. Then it can connect on its own, using the DHCP service on your router to get a local address. No NAT needed.
My Ubuntu 12.04 (precise) laptop has three network interfaces:

- eth0: wired interface, sometimes connected to the Internet
- wlan0: wireless interface, sometimes connected to the Internet
- vboxnet0: wired interface (actually a VirtualBox virtual interface) connected to another computer (actually a VirtualBox virtual machine with networking in host-only mode)

I'd like to use iptables to set up NAT/IP masquerading to share whichever Internet connection is up (preferring the wired if both are up) with the other computer. The following works when eth0 is plugged in:

```shell
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward &&
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE &&
sudo iptables -A FORWARD -i eth0 -o vboxnet0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT &&
sudo iptables -A FORWARD -i vboxnet0 -o eth0 -j ACCEPT
```

If I switch from wired to wireless, this obviously stops working. I tried:

```shell
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward &&
sudo iptables -t nat -A POSTROUTING -o '!vboxnet0' -j MASQUERADE &&
sudo iptables -A FORWARD -i '!vboxnet0' -o vboxnet0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT &&
sudo iptables -A FORWARD -i vboxnet0 -o '!vboxnet0' -j ACCEPT
```

but it did not work. I could write some Network Manager scripts to change the iptables rules whenever an interface goes up or down, but I figured it should be possible without jumping through such hoops. Any suggestions?
NAT (Internet connection sharing) switching between multiple public interfaces
I just came across this option in man script:

```
-f      Flush output after each write. This is nice for telecooperation:
        One person does 'mkfifo foo; script -f foo' and another can
        supervise real-time what is being done using 'cat foo'.
```

I haven't played with this yet, but it looks like exactly what I was looking for. Playing with it could establish whether color, etc., is conveyed as well.
If I am logged in to a remote server, and someone else is logged in to the same server, isn't there some way via the command line to let them "look over my shoulder"? Of course I could copy and paste my terminal scrollback buffer at intervals and dump it in a file in /tmp, and they could cat that file...that is close to what I'm talking about, though it wouldn't have color. This is very different from the typical meaning of "screen sharing" because it wouldn't involve any additional network traffic at all—just local resources. (You're both already logged in.) I have had scores of cases in just a few months where this would have been extremely useful. Is this possible? How can I do it?
"Screen sharing" on the command line?
Follow this how-to: Getting iPhone Internet Tethering Working in Linux (No jailbreaking involved!)
What are the steps to use iPhone 3G internet tethering via USB on Ubuntu 10.04? Is there software I need to install on Ubuntu for this? The tethering works fine on Windows.
How to use iphone internet tethering via USB with Ubuntu 10.04
Routing

On host A you need to route all traffic for the destination network to host B. I will assume this is something like 192.168.0.0/24.

For Linux (on host A):

```shell
ip r a 192.168.0.0/24 via 10.9.8.3 dev eth0
```

For Windows (on host A):

```
route ADD 192.168.0.0 MASK 255.255.255.0 10.9.8.3
```

Forwarding

After routing is in place, all packets for the network 192.168.0.0/24 will be sent to host B. To allow packets to be forwarded from wlp3s0 to tun0 on host B, you need to enable IP forwarding. To temporarily enable IP forwarding for all interfaces:

```shell
sysctl net.ipv4.conf.all.forwarding=1
```

To make this change permanent, add a new line to /etc/sysctl.conf:

```
net.ipv4.conf.all.forwarding = 1
```

In addition to the interface settings, iptables could be active and needs to allow packet forwarding. To check whether iptables is active (at least for the FORWARD chain):

```shell
iptables -L FORWARD -nv
```

If the chain has no rules and the policy says ACCEPT, you are good to go; if not, you need to add relevant rules to allow forwarding for 192.168.0.0/24. To allow forwarding of all packets to 192.168.0.0/24 on wlp3s0:

```shell
iptables -I FORWARD -i wlp3s0 -d 192.168.0.0/24 -j ACCEPT
iptables -I FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

The RELATED,ESTABLISHED rule automatically allows the return packets.

NAT

Now that forwarding is set up, packets will be sent into the tunnel. But as the remote network behind the VPN does not know our local network, which is normally the case, we need to NAT all packets that come from our local network and go into the VPN to the address we got from the VPN server (which is the IP on tun0). To do this, create a MASQUERADE rule in the POSTROUTING chain of the nat table:

```shell
iptables -t nat -I POSTROUTING -o tun0 -j MASQUERADE
```

This will NAT all outgoing packets on tun0 to the interface's IP.
I've got two computers connected to the same router 10.9.8.1:

- Computer A, 10.9.8.2, runs Windows 10 Insider Preview. Insider Preview has VPN broken and can't be rolled back. :(
- Computer B, 10.9.8.3, runs Linux Mint and has a VPN connection set up via openconnect.

Here's what ifconfig reports on B (fragment):

```
tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.23.8.183  P-t-P:10.23.8.183  Mask:255.255.255.255
          inet6 addr: fe80::7fb2:5598:b02e:e541/64 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1410  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:7005 (7.0 KB)  TX bytes:3243 (3.2 KB)

wlp3s0    Link encap:Ethernet  HWaddr 60:67:20:36:6f:a4
          inet addr:10.9.8.3  Bcast:10.9.8.255  Mask:255.255.255.0
          inet6 addr: fe80::8e96:7526:ff54:d1be/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22511502 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16052631 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24451442281 (24.4 GB)  TX bytes:6038264731 (6.0 GB)
```

I need to access resources behind the VPN from computer A. I'm thinking of configuring routes on A in such a way that it would access VPN resources through B while using the router directly for everything else. In the worst case, I can connect the two computers directly, but I would like to avoid that if possible. On Windows, I can simply mark any adapter as shared. But when I do the same thing on Linux, the adapter loses connectivity. Not sure how to do that correctly.
Share a VPN connection over WiFi
It seems like your default gw is on eth0 and the client is redirected to it (via an ICMP redirect). To fix your setup you need to add a routing rule stating that all packets incoming from client_ip should be routed to wlan0_gw. Try adding a new routing table:

1. Edit /etc/iproute2/rt_tables and add a line for a new table, for example `252 masq`, where 252 is the table id and masq is the new table name.

2. Add a rule to route client_ip packets with table masq:

```shell
ip rule add from client_ip/32 table masq
```

3. Add a default gw to the masq table:

```shell
ip ro add default via wlan0_gw table masq
```
I have a computer with two network devices (eth0 and wlan0), both connected to the internet (two different connections/ISPs). I'm trying to share the connection of wlan0 with another computer connected via ethernet to eth0. What I'm doing is:

```shell
# sysctl net.ipv4.ip_forward=1
# iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
```

From the client computer I can then connect to this one, but the internet connection that gets shared is the one on eth0 and not the one on wlan0. If I disable the internet connection on eth0 (by setting no gateway), then the connection to be shared is the one on wlan0. However, I'd like to have both internet connections enabled and specify to iptables which one to share. Is this possible? What am I missing? Do I need some forwarding rule?
Internet sharing with iptables: choosing which connection to share
Yes, it is possible to share internet through the same wireless card through which you are connected to some WiFi. You have to check whether your wireless card supports this feature or not. The following links will help you do so:

- Connectify for Linux with a single wireless interface
- How do I create a WiFi hotspot sharing a wireless internet connection (single adapter)
Is it possible to use only one wireless card to connect to another wifi network for internet access and share that internet with other devices via the same wireless card at the same time? No, I didn't mean a hotspot, since that only means sharing the internet of another network card (for example, eth0) via wifi. I am doing two things at the same time:

- connect to another wifi network using wireless card wlan0
- share internet via the same wireless card wlan0
use only one wireless card to connect to wifi network and share internet through wifi at same time
You're probably adding a rule intended for the nat table in the filter table block suitable for iptables-restore, and with inappropriate syntax. Until you know how to edit /etc/iptables/rules.v4 directly (by studying the output of iptables-save), you should do this instead.

Be careful, since the rule will be applied immediately. Change the currently running firewall rules with:

```shell
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
```

Study the results: are they worth changing the configuration? If so, ask netfilter-persistent to save the rules. It will in turn run iptables-persistent's plugins, which use iptables-save under the hood:

```shell
netfilter-persistent save
```

You will notice that the new configuration file (a file suitable for use by iptables-restore) now has a block for the nat table with your rule (and without -t nat), separate from the filter table block.
I use netfilter-persistent to manage a firewall. I would like to share a connection between two interfaces using masquerading (example, or another). When I run those operations by invoking iptables it works. But if I try to update firewall rules stored in /etc/iptables/rules.v4 adding such a line: -t nat -A POSTROUTING -o wlan0 -j MASQUERADELines starting with -t make netfilter-persistent fail to run and the firewall is not updated: Nov 16 11:51:32 helena systemd[1]: netfilter-persistent.service: Main process exited, code=exited, status=1/FAILURE Nov 16 11:51:32 helena systemd[1]: Failed to start netfilter persistent configuration.So I am wondering if it is possible to store this kind of rules with netfilter-persistent orIs it a known limitation? Is there a good reason why it cannot work? Is there a hack to make it work?
Masquerade rule with netfilter-persistent
NetworkManager can connect you automatically if it's configured to do so, and it comes with most modern distros, such as Fedora or Ubuntu. I recommend using a live USB with persistence so that you can retain the configuration between boots.
I want to take my old notebook while travelling. I can only boot it from USB or CD, though. I want to connect to the internet using USB tethering from my Android phone (HTC Desire + CyanogenMod 7.1). If I connect my Android phone to my Windows 7 computer via USB cable and turn USB tethering on, Windows does the rest and I am connected to the Internet. Can I be autoconnected (USB tethering preferably) to the internet using any live USB/CD Linux/Unix distro? Which one? I'll be creating the USB from Windows 7.
What live distro can automatically accept usb tethering from android phone?
Yes, you can do that. You should look into dnsmasq. It is designed to serve this very need. The default DHCP server on Linux is usually ISC dhcpd. It's possible to make it work in this role, too, but it's a bit more difficult to configure, and it has to be manually configured to get the DNS server integration you get for free with dnsmasq.
I know it is possible to share an internet connection with another PC using IP forwarding and masquerading. Is it possible to set up the sharing computer as a DHCP server, so the settings (IP, gateway, DNS) get configured automatically for the client? The current way I do it is to set up the client NICs manually to access the internet from the client. Edit: this is the setup I plan to realize -
Is it possible to use Internet Connection Sharing with DHCP?
Thanks to @Austin and others, I finally solved the problem! I thought, damn it, this is a Unix box, I should be able to find out what's going on! I found another Snow Leopard machine at work which never had Internet Sharing turned on, and in a terminal I ran:

```shell
touch now && sudo find -x / -newer now
```

and I got a short list of files that always show up (Spotlight indexes, log files in /private/log and, if you are using FileVault, a bunch of encrypted sparse bundles...). Then I enabled Internet Sharing, and this time I ran:

```shell
sudo find -x / -newer now
```

obviously without the "touch now". It turns out that, other than a bunch of log files and other junk, there are a handful of files that are modified. I copied them all over to my machine and modified the interface names and a few other hostnames, IP addresses, and things like that. The problem happened with /Library/Preferences/SystemConfiguration/com.apple.nat.plist: there is a primary service key there which is set to a UUID; it is different on every machine that I tested (2 machines actually), and it does not work if you just copy it from one machine to the next.

IF YOU ARE IN A HURRY, JUST READ THIS PART: From the start I avoided reinstalling my OS because I had so many installations and configurations that I didn't want to lose. It turned out that if you use the original DVD that comes with your machine and reinstall your OS, ALL YOUR APPLICATIONS, HOME DIRECTORY, custom modifications, MacPorts and Fink installations, preferences, network mounts, network locations, and developer tools ARE PRESERVED, BUT IT REINSTALLS THE CORE SYSTEM COMPONENTS, which fixed my Sharing preferences pane. PROBLEM SOLVED!
For some rather strange reason, my Sharing preferences tab crashes (it's a long story and there seems to be no good solution for it; it's looking for a UI object that no longer exists). Anyway, I want to enable Internet Sharing to share my MacBook's internet connection with my iPad, but I can't find a way to do it without the GUI or AppleScript (which basically calls the GUI).
How can I enable internet sharing without using the gui or apple script on snow leopard?
I hadn't set up NAT on the Ubuntu server. When that was set up, I didn't need any 'prepend' stuff, as I was able to set the IP address of the DNS server on the client (Red Hat, in resolv.conf) to be the same IP address as the Ubuntu server was using. NAT handled the translation from one network to the other. I got the instructions for setting up NAT on the Ubuntu server from here: http://ubuntuforums.org/showthread.php?t=713874 Thanks fschmitt for your answer.
I've 2 linux computers, one redhat (client), and one ubuntu (set up with shared internet connection as described here) At the moment, on the wired connection between the computers, I can ping the other computer from both sides; IP addresses are setup statically. The ubuntu computer has access to the internet through wireless. I want to setup the redhat client to be able to access the same DNS server as the ubuntu one uses. In the article above, it is assumed that the client is another ubuntu box, and they advise to do the following: prepend domain-name-servers 208.67.222.222,208.67.220.220;However, the redhat client doesn't have the file /etc/dhcp3/dhclient.conf. Is there another way of achieving the above in redhat? (I've tried to setup the ubuntu box as a dhcp server using dnsmasq, but it didn't work) (BTW, I thought I needed a crossover cable for this type of setup, but that didn't work - an ordinary ethernet cable was fine)
Static DNS setup on client of shared internet connection
You could potentially do this using screen (which you may need to install) and SSH keys. You need to log in as root and then run `screen -US friend`, run whatever commands you need to, and then detach from that (using Ctrl-A D) to leave it running. Then, in /root/.ssh/authorized_keys, add your friend's id_rsa.pub or id_dsa.pub key. With that, your friend can then ssh to root@yourmachine and run `screen -UDR friend` to reattach to the screen session, see what you've already done in it, and continue working. Once your friend has finished, remove their key from /root/.ssh/authorized_keys right away.

The only problem with this is that you will not be able to see what your friend is doing. Better would be for you to su to root in a terminal window and then use something like VNC, LogMeIn or TeamViewer to share your desktop with your friend so you can watch what they are doing.
I am logged into my Debian machine. I want my friend to get in via SSH. He comes to authenticate as the user friend. My machine asks him for a password. He does not know the password. I want to be able to let him in by running a command in my session (root) that is already authenticated. I do not want to tell him the password, or change the password. I just want to open the door for this session. Is there a way to tell Linux "OK, this authentication that is trying to come in right now is good. Let it in."?
Open the SSH Door to a Knocking Friend
A quick Google shows that recommended safe configurations for pgbouncer often set up the listening port only on the loopback interface (localhost). Here is one example:

```
[pgbouncer]
listen_port = 5433
listen_addr = localhost
auth_type = any
logfile = pgbouncer.log
pidfile = pgbouncer.pid
```

The configuration documentation explains clearly how to change the addresses on which the service listens:

    listen_addr
        Specifies a list of addresses where to listen for TCP connections.
        You may also use * meaning "listen on all addresses". When not set,
        only Unix socket connections are allowed. Addresses can be specified
        numerically (IPv4/IPv6) or by name. Default: not set

    listen_port
        Which port to listen on. Applies to both TCP and Unix sockets.
        Default: 6432

Since you've now responded that you've already done this, I'll leave it here for the record, but make an additional suggestion below.

The follow-up posts on the mailing list thread you referenced provide the answer. I'll quote them here:

User 1: I restarted using /etc/init.d/pgbouncer restart, which effectively launches pgbouncer with -R for an online restart.

User 2: I suspect the -R is working too well for you - it reuses the old listening socket, which means the bind address stays the same. This preference is natural - you rarely change the bind address, but you may change other settings (or the pgbouncer version). You should just do a proper stop/start; then it should take the new address into use.
My question is similar but opposite to Telnetting the Local port not working but trying the ip working. For me, telnet to the local port works but trying with the IP does not :( I am running pgbouncer on port 6432:

$ telnet 192.x.x.x 6432
Trying 192.x.x.x...
telnet: Unable to connect to remote host: Connection refused

I set listen_addr = *, but telnet with the IP from another server is still not working. See http://lists.pgfoundry.org/pipermail/pgbouncer-general/2013-January/001097.html for the same scenario (but no useful answer). The output of netstat -plnt is

tcp 0 0 127.0.0.1:6432 0.0.0.0:* LISTEN 19879/./pgbouncer

How can I fix this?
Telnetting the local port working but trying with ip not working
Here's how you can set up IPv4 connection sharing manually on a Linux machine. On the router (the desktop), enable packet forwarding, set up masquerading on the Internet-facing interface (eth0), and use a private IP range on the local interface (eth1). Run these commands as root:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ifconfig eth1 up 10.1.1.1 broadcast 10.1.1.255 netmask 255.255.255.0

On the laptop, you can set up a static address and route (eth0 being the wired interface):

ifconfig eth0 up 10.1.1.2 broadcast 10.1.1.255 netmask 255.255.255.0
route add -net 0.0.0.0/0 gw 10.1.1.1

To avoid having to set up anything special on the laptop, you can run a DHCP server on the desktop. For example, install dnsmasq and enable its built-in DHCP server by editing /etc/dnsmasq.conf to include the following lines:

except-interface=eth0
dhcp-range=10.1.1.128,10.1.1.254,24h

Note that Network Manager may interfere with these instructions. If you're running it on the router, either stop it or read the Ubuntu community Internet Connection Sharing page. (Network Manager on the laptop isn't a problem.) If you want these settings to persist after a reboot, this is somewhat distribution-dependent. On Debian and derived distributions, put the following line in /etc/sysctl.d/connection-sharing.conf:

net.ipv4.ip_forward=1

and the following lines in /etc/network/interfaces:

auto eth1
iface eth1 inet static
    address 10.1.1.1
    broadcast 10.1.1.255
    netmask 255.255.255.0
    post-up iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
I'm currently at work and our wifi is down. I usually connect my desktop to the internet through eth0 and my laptop connects through wifi. I have an extra port on the back of the desktop (eth1) and one on my laptop (eth0). I tried connecting a crossover cable between these two ports and bringing up a connection. I set up a route to the desktop through my laptop and I can ping between the two machines but neither one will connect to the internet through desktop's eth0. Any help is much appreciated =)
Connect my laptop through desktop to internet
Summary: the Pi needs Fedora to forward traffic to the internet.

Pi: 1 network card (that we care about), named usb0 -- connected to Fedora.
Fedora: internet connected; 2 network cards (that we care about): wlp4s0 -- wifi internet; enp0s20f0u6i1 -- connected to the Pi.

To make life simpler I recommend stopping the Predictable Network Interface Names thingy. We want to use NIC names and do not want them to change on us.

Step 1: Stop systemd's Predictable Network Interface Names thingy by adding "net.ifnames=0" to the kernel command line.

sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX="net.ifnames=0"

Now update grub:

sudo grub-mkconfig -o /boot/grub/grub.cfg

note: I have seen setups where the value "biosdevname=0" was added to the kernel command line in addition to net.ifnames=0. My setup did not require it.

Step 2: Assign a new name using udev rules by creating a new rule file.

sudo vi /etc/udev/rules.d/10-myCustom-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:f3:79:59", KERNEL=="eth*", NAME="test0"

You MUST change the ATTR{address}=="08:00:27:f3:79:59" entry to your MAC address, and change NAME="test0" to the name you want to give the NIC.

note: I removed ATTR{dev_id}=="0x0" and ATTR{type}=="1" from my Ubuntu 14 template. Some say to remove KERNEL=="eth*" or the entire line is ignored; that was not the case in my setup.

If you 'lose' the MAC address like I did because I rebooted before this step (it does not show with ifconfig), go find it in /sys/class/net/assignedName/address. BTW: this system renamed it eth0, so: cat /sys/class/net/eth0/address

Step 3: Assign the new interface name an address.

sudo vi /etc/network/interfaces
auto test0
iface test0 inet static
    address 192.168.2.202   -- use your address
    netmask 255.255.255.0   -- use your netmask

plus whatever other entries your system requires.

Step 4: Reboot (it's just easier for most of us).

Now that just gives us a static name for our NIC. You will only add iptables rules on Fedora, so this is not needed on the Pi.
Assumptions: Both Fedora and the Pi have default routing tables and no iptables rules.

note: We want to keep our private IP addresses private, not public. The RFC1918 ranges:

name          IP address range               largest CIDR block (subnet mask)
24-bit block  10.0.0.0 – 10.255.255.255      10.0.0.0/8 (255.0.0.0)
20-bit block  172.16.0.0 – 172.31.255.255    172.16.0.0/12 (255.240.0.0)
16-bit block  192.168.0.0 – 192.168.255.255  192.168.0.0/16 (255.255.0.0)

Pi: Assign an IP address to usb0.

sudo vi /etc/network/interfaces
auto usb0
iface usb0 inet static
    address 172.16.0.1
    netmask 255.240.0.0

(add any other values needed)

Fedora: Enable IPv4 forwarding.

sudo vi /etc/sysctl.conf
net.ipv4.ip_forward=1

Assign an IP address to test0 (remember, we changed the NIC name above).

sudo vi /etc/network/interfaces
# This connects to the Pi
auto test0
iface test0 inet static
    address 172.16.0.2
    netmask 255.240.0.0

(add any other values needed)

# This is the internet connection
auto wlp4s0
iface wlp4s0 inet static
    address 192.168.2.106
    netmask 255.255.255.0

(add any other values needed, like gateway a.b.c.d and dns-nameservers 8.8.8.8 8.8.4.4)

If the wlp4s0 address is assigned by DHCP it would look more like this:

# This is the internet connection
auto wlp4s0
iface wlp4s0 inet dhcp

Set the iptables rules to forward the packets from test0 to wlp4s0 AND wrap the packets with a local-subnet-addressed... wrapper. Entering the rules at the command line:

# this rule will forward all traffic from nic test0 to nic wlp4s0
sudo iptables -A FORWARD -i test0 -o wlp4s0 -j ACCEPT
# this rule will continue to forward any existing connections from test0 to wlp4s0
sudo iptables -A FORWARD -i test0 -o wlp4s0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# this rule will wrap the packets with a local address so they do not get lost in transit
sudo iptables -t nat -A POSTROUTING -j MASQUERADE

note: No firewall rules are enabled. This is the bare minimum to get it working. Add other rules to secure your system.

Make the iptables rules persistent across reboots.
On Ubuntu 16 the package name is iptables-persistent; Fedora may differ.

sudo apt-get install iptables-persistent

Save the current iptables rules:

iptables-save > /etc/iptables/rules.v4

Reboot Fedora. Verify: IP addresses and iptables rules.
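A quick sketch of that verification, checking only the forwarding master switch directly (the interface and rule checks need root, so they are shown as comments; the interface name test0 is the one assumed above and may differ on your system):

```shell
# The kernel exposes the IPv4 forwarding switch here; 1 means it is on.
v=$(cat /proc/sys/net/ipv4/ip_forward)
echo "ip_forward=$v"

# As root you would additionally check, for example:
#   ip addr show test0               # expect 172.16.0.2 with the /12 mask
#   iptables -S FORWARD              # expect the two ACCEPT rules
#   iptables -t nat -S POSTROUTING   # expect the MASQUERADE rule
```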
I am having issues setting up a bridge for my Raspberry Pi. My setup is: I have a laptop running Fedora 27 Workstation which is connected to the internet over wifi. I have a Raspberry Pi Zero W which is connected to my laptop via USB (and only USB, no external power, no ethernet, nothing). I flashed the Stretch Lite image to my Pi and then installed P4wnP1 from here: https://github.com/mame82/P4wnP1 Before I installed P4wnP1 my Pi had a random 169.254.xxx.xxx address, which is why I changed the IP of my USB ethernet interface to a proper subnet to ssh into the Pi. After a while I figured out the right setup to get my Pi online and download git to clone the repo. After I ran the install.sh and rebooted, the Pi had a static IP address, 172.16.0.1. I tried the same thing to get it online: changed the IP of my interface, sshed to the Pi, set up the gateway to my Fedora machine. But I cannot get the Pi online. I should probably mention here that I enabled "share connection to other computers" in Network Manager and also tried a lot of things with iptables, but I cannot get it to work. I have spent the past 3 days trying to figure it out with no success.
Here is the ifconfig output on my Fedora machine:

$ ifconfig
enp0s20f0u6i1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.16.0.2 netmask 255.255.0.0 broadcast 172.16.255.255
        inet6 fe80::f7f7:80c:8a15:5771 prefixlen 64 scopeid 0x20<link>
        ether ee:98:9b:bc:37:ab txqueuelen 1000 (Ethernet)
        RX packets 2687 bytes 186674 (182.2 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1648 bytes 176862 (172.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s31f6: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        ether c8:5b:76:6b:e4:90 txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
        device interrupt 16 memory 0xf1200000-f1220000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1000 (Local Loopback)
        RX packets 1982 bytes 177290 (173.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1982 bytes 177290 (173.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
        ether 52:54:00:08:e4:d3 txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.2.106 netmask 255.255.255.0 broadcast 192.168.2.255
        inet6 fe80::ebcf:d3b1:5a74:185e prefixlen 64 scopeid 0x20<link>
        ether e4:a7:a0:99:2e:8d txqueuelen 1000 (Ethernet)
        RX packets 135496 bytes 72791497 (69.4 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 51579 bytes 21450089 (20.4 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Here the enp0s20f0u6i1 interface is the one connected to the Pi.
Before i changed its ip address it had a 10.46.0.1 address, which is also the same address after reboot. here route -n from my pi pi@MAME82-P4WNP1:~ $ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.16.0.2 0.0.0.0 UG 0 0 0 usb0 172.16.0.0 0.0.0.0 255.255.255.252 U 0 0 0 usb0 172.24.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0and the ifconfig of my pi pi@MAME82-P4WNP1:~ $ ifconfig lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0usb0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.16.0.1 netmask 255.255.255.252 broadcast 172.16.0.3 inet6 fe80::cc4b:62ff:fe84:7df0 prefixlen 64 scopeid 0x20<link> ether ce:4b:62:84:7d:f0 txqueuelen 1000 (Ethernet) RX packets 1959 bytes 182340 (178.0 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3197 bytes 269463 (263.1 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.24.0.1 netmask 255.255.255.0 broadcast 172.24.0.255 inet6 fe80::ba27:ebff:fe5e:ceb7 prefixlen 64 scopeid 0x20<link> ether b8:27:eb:5e:ce:b7 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 14 bytes 1404 (1.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0and here route -n on my fedora $ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.2.1 0.0.0.0 UG 600 0 0 wlp4s0 172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp0s20f0u6i1 192.168.2.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp4s0 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0resolv.conf on my pi pi@MAME82-P4WNP1:~ $ cat /etc/resolv.conf # Generated by resolvconf nameserver 10.46.0.1 nameserver 8.8.8.8 
nameserver 8.8.4.4and /etc/network/interfaces on my pi pi@MAME82-P4WNP1:~ $ cat /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8)# Please note that this file is written to be used with dhcpcd # For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'# Include files from /etc/network/interfaces.d: source-directory /etc/network/interfaces.ddns-nameservers 8.8.8.8 8.8.4.4auto usb0iface usb0 inet manualauto usb1iface usb1 inet manualfinally my iptables on my fedora, where i think the issue is: $ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere INPUT_direct all -- anywhere anywhere INPUT_ZONES_SOURCE all -- anywhere anywhere INPUT_ZONES all -- anywhere anywhere DROP all -- anywhere anywhere ctstate INVALID REJECT all -- anywhere anywhere reject-with icmp-host-prohibitedChain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere 10.42.0.0/24 state RELATED,ESTABLISHED ACCEPT all -- 10.42.0.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable ACCEPT all -- anywhere 192.168.122.0/24 ctstate RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere 
FORWARD_direct all -- anywhere anywhere FORWARD_IN_ZONES_SOURCE all -- anywhere anywhere FORWARD_IN_ZONES all -- anywhere anywhere FORWARD_OUT_ZONES_SOURCE all -- anywhere anywhere FORWARD_OUT_ZONES all -- anywhere anywhere DROP all -- anywhere anywhere ctstate INVALID REJECT all -- anywhere anywhere reject-with icmp-host-prohibited ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:bootpc OUTPUT_direct all -- anywhere anywhere Chain FORWARD_IN_ZONES (1 references) target prot opt source destination FWDI_FedoraWorkstation all -- anywhere anywhere [goto] FWDI_FedoraWorkstation all -- anywhere anywhere [goto] FWDI_FedoraWorkstation all -- anywhere anywhere [goto] Chain FORWARD_IN_ZONES_SOURCE (1 references) target prot opt source destination Chain FORWARD_OUT_ZONES (1 references) target prot opt source destination FWDO_FedoraWorkstation all -- anywhere anywhere [goto] FWDO_FedoraWorkstation all -- anywhere anywhere [goto] FWDO_FedoraWorkstation all -- anywhere anywhere [goto] Chain FORWARD_OUT_ZONES_SOURCE (1 references) target prot opt source destination Chain FORWARD_direct (1 references) target prot opt source destination Chain FWDI_FedoraWorkstation (3 references) target prot opt source destination FWDI_FedoraWorkstation_log all -- anywhere anywhere FWDI_FedoraWorkstation_deny all -- anywhere anywhere FWDI_FedoraWorkstation_allow all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere Chain FWDI_FedoraWorkstation_allow (1 references) target prot opt source destination Chain FWDI_FedoraWorkstation_deny (1 references) target prot opt source destination Chain FWDI_FedoraWorkstation_log (1 references) target prot opt source destination Chain FWDO_FedoraWorkstation (3 references) target prot opt source destination FWDO_FedoraWorkstation_log all -- anywhere anywhere FWDO_FedoraWorkstation_deny all -- anywhere anywhere 
FWDO_FedoraWorkstation_allow all -- anywhere anywhere Chain FWDO_FedoraWorkstation_allow (1 references) target prot opt source destination Chain FWDO_FedoraWorkstation_deny (1 references) target prot opt source destination Chain FWDO_FedoraWorkstation_log (1 references) target prot opt source destination Chain INPUT_ZONES (1 references) target prot opt source destination IN_FedoraWorkstation all -- anywhere anywhere [goto] IN_FedoraWorkstation all -- anywhere anywhere [goto] IN_FedoraWorkstation all -- anywhere anywhere [goto] Chain INPUT_ZONES_SOURCE (1 references) target prot opt source destination Chain INPUT_direct (1 references) target prot opt source destination Chain IN_FedoraWorkstation (3 references) target prot opt source destination IN_FedoraWorkstation_log all -- anywhere anywhere IN_FedoraWorkstation_deny all -- anywhere anywhere IN_FedoraWorkstation_allow all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere Chain IN_FedoraWorkstation_allow (1 references) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:netbios-ns ctstate NEW ACCEPT udp -- anywhere anywhere udp dpt:netbios-dgm ctstate NEW ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ctstate NEW ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns ctstate NEW ACCEPT udp -- anywhere anywhere udp dpts:blackjack:65535 ctstate NEW ACCEPT tcp -- anywhere anywhere tcp dpts:blackjack:65535 ctstate NEWChain IN_FedoraWorkstation_deny (1 references) target prot opt source destination Chain IN_FedoraWorkstation_log (1 references) target prot opt source destination Chain OUTPUT_direct (1 references) target prot opt source destination I think i need to just add the proper entries, but i could not figure it out, i searched a lot of forums. is there a way to change the 10.46.0.0/24 entries to the 172.16.0.0/24 network? because my interface had that ip before and if i could just swap the ip in the rules i would be done, right? 
I tried sudo iptables -t nat -A POSTROUTING -o wlp4s0 -j MASQUERADE and also tried to set the rules myself, however I cannot manage to set my FORWARD rules accordingly.
trouble setting proper forwarding rules in `iptables` with custom ip address for network sharing
Your RasPis can currently talk to the Linux PC because it is in the same network segment and has IP address 192.168.0.10. But when a RasPi attempts to access something on the internet, it will attempt to send the packets to 192.168.0.11 for further routing. Because the Linux PC's address on the RasPi network side is 192.168.0.10, not .11, the Linux PC will never receive the RasPis' outgoing packets and so cannot route them. This is wrong: the RasPis should have their router/gateway address set to 192.168.0.10, not .11. When you specify gateway 192.168.0.11 in the Linux PC's configuration for enx00249b233bda, it does not mean that the Linux PC should claim the .11 address for its own - it means you're saying there's some other system in the RasPi network with the .11 address that has internet connectivity. This is wrong: the Linux PC does not need a gateway configuration line for enx00249b233bda, because the Linux PC is the gateway for the RasPi network. You should remove or comment out the gateway line from the configuration of the enx00249b233bda interface. I don't see why you would need any of the ip route add stuff: just configuring the network interface will auto-generate a route to the 192.168.0.0/24 network, which is enough for your needs. Comment out all the ip route add commands, reboot, and keep reading. Since you apparently have just one public IP address, you will have to set up IP masquerading on the Linux PC.
With plain iptables it would be done like this:

iptables -w -t nat -A POSTROUTING -s 192.168.0.0/24 -o enp0s31f6 -j MASQUERADE

Then, you'll need some very basic rules to enable IP forwarding from the RasPi network to the outside world, and to accept any response packets back in:

iptables -w -t filter -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -w -t filter -A FORWARD -i enx00249b233bda -j ACCEPT

For certain protocols that need special connection tracking helpers, you may have to add lines as follows (this used to be automatic, until someone found a way to abuse it. That's why we can't have nice things... grumble...):

iptables -w -t raw -A PREROUTING -i enx00249b233bda -p tcp --dport 21 -j CT --helper ftp

This activates the required special handling for outgoing FTP control connections from your RasPi network to the internet. The special handling will monitor the FTP control connection and automatically allow the corresponding data connections to pass. Besides FTP, other protocols that would need similar treatment include:

SNMP (UDP port 161, helper name snmp)
SIP (TCP and UDP, port 5060, helper name sip)
IRC chat (TCP, port number may vary, helper name irc)

(I know that Ubuntu has ufw, but I have no idea how to use that to set up equivalent firewall rules. If someone else knows, feel free to edit it in here.)

All of the above will be completely ineffective until you activate the IPv4 routing master switch. First, make sure that the /etc/sysctl.conf file has this line in it:

net.ipv4.ip_forward=1

Then either reboot, or run this command to make the setting effective immediately:

sudo sysctl -p

(Why does this master switch thing exist? Basically to make it more likely that whoever is configuring their system as a router has "done their homework" and so has a chance of not causing routing loops or other dumb things in the network.)
How can I set up networking such that devices in a local network connected to the second ethernet interface can use the internet available on the first ethernet interface? Using iproute2 I have only got as far as creating a connection between the devices in the local network and the Linux PC, while the Linux PC still has its internet connection. However, this internet connection is unavailable to the devices in the local network. [Edit 2] The current configuration is based on this guide. I am guessing that my ip route addresses are not correct and therein lies the problem. The setup is the following:

Internet
   |
   |
(enp0s31f6) = Linux PC = (enx00249b233bda)
                              |
                              |
                        NetworkSwitch
                              |
                              |---(eth0) = Raspberry Pi 1
                              |
                              |---(eth0) = Raspberry Pi 2

Legend: | ethernet cable; (eth0) network interface name

[Edit] The aim is to have the Linux PC and all the Raspberry Pis connected to the internet and to each other. All devices have static IP addresses. The Linux PC is running Ubuntu 16.04. All settings not outlined below should be the default settings.
Linux PC current settings ifconfig enp0s31f6 Link encap:Ethernet HWaddr 48:4d:7e:b1:94:4d inet addr:128.40.57.144 Bcast:128.40.57.255 Mask:255.255.255.0 inet6 addr: fe80::4a4d:7eff:feb1:944d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1806664 errors:0 dropped:82518 overruns:0 frame:0 TX packets:81807 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:601022858 (601.0 MB) TX bytes:15652101 (15.6 MB) Interrupt:19 Memory:f7100000-f7120000 enx00249b233bda Link encap:Ethernet HWaddr 00:24:9b:23:3b:da inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::224:9bff:fe23:3bda/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:300302 errors:0 dropped:0 overruns:0 frame:0 TX packets:373077 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:26170910 (26.1 MB) TX bytes:476407809 (476.4 MB)lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:193 errors:0 dropped:0 overruns:0 frame:0 TX packets:193 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17086 (17.0 KB) TX bytes:17086 (17.0 KB)/etc/network/interfaces # Static IP for internet connection auto lo iface lo inet loopback auto enp0s31f6 iface enp0s31f6 inet static address 128.40.57.144 netmask 255.255.255.0 gateway 128.40.50.245 dns-nameservers 144.82.250.1 193.160.250.1# Network adapter interfacing with RPis allow-hotplug enx00249b233bda iface enx00249b233bda inet static address 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.11 dns-nameservers 144.82.250.1 193.160.250.1 post-up ip route add 192.168.0.0/24 dev enx00249b233bda src 192.168.0.10 table rt2 post-up ip route add default via 192.168.0.11 dev enx00249b233bda table rt2 post-up ip rule add from 192.168.0.10/32 table rt2 post-up ip rule add to 192.168.0.10/32 table rt2/etc/iproute2/rt_tables # # reserved values # 
255 local 254 main 253 default 0 unspec # # local # #1 inr.ruhep 1 rt2ip route show default via 128.40.50.245 dev enp0s31f6 onlink 128.40.57.0/24 dev enp0s31f6 proto kernel scope link src 128.40.57.144 169.254.0.0/16 dev enp0s31f6 scope link metric 1000 192.168.0.0/24 dev enx00249b233bda proto kernel scope link src 192.168.0.10 Raspberry Pi 1 current settings ifconfig eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.22 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::3fa1:761c:f861:dae3 prefixlen 64 scopeid 0x20<link> ether dc:a6:32:2f:11:38 txqueuelen 1000 (Ethernet) RX packets 7489 bytes 537762 (525.1 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 7417 bytes 2128900 (2.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 2270 bytes 215650 (210.5 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2270 bytes 215650 (210.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether dc:a6:32:2f:11:3b txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0/etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8)# Please note that this file is written to be used with dhcpcd # For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'# Include files from /etc/network/interfaces.d: source-directory /etc/network/interfaces.d/etc/dhcpcd.conf # A sample configuration for dhcpcd. # See dhcpcd.conf(5) for details.# Allow users of this group to interact with dhcpcd via the control socket. #controlgroup wheel# Inform the DHCP server of our hostname for DDNS. hostname# Use the hardware address of the interface for the Client ID. 
clientid # or # Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361. # Some non-RFC compliant DHCP servers do not reply with this set. # In this case, comment out duid and enable clientid above. #duid# Persist interface configuration when dhcpcd exits. persistent# Rapid commit support. # Safe to enable by default because it requires the equivalent option set # on the server to actually work. option rapid_commit# A list of options to request from the DHCP server. option domain_name_servers, domain_name, domain_search, host_name option classless_static_routes # Respect the network MTU. This is applied to DHCP routes. option interface_mtu# Most distributions have NTP support. #option ntp_servers# A ServerID is required by RFC2131. require dhcp_server_identifier# Generate SLAAC address using the Hardware Address of the interface #slaac hwaddr # OR generate Stable Private IPv6 Addresses based from the DUID slaac private# Example static IP configuration: #interface eth0 #static ip_address=192.168.0.10/24 #static ip6_address=fd51:42f8:caae:d92e::ff/64 #static routers=192.168.0.1 #static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1# It is possible to fall back to a static IP if DHCP fails: # define static profile #profile static_eth0 #static ip_address=192.168.1.23/24 #static routers=192.168.1.1 #static domain_name_servers=192.168.1.1# fallback to static profile on eth0 #interface eth0 #fallback static_eth0# Static IP for connection to Recording PC interface eth0 static ip_address=192.168.0.22/24 static routers=192.168.0.11 static domain_name_servers=192.168.0.11
How to share internet from 1st interface to devices connected to 2nd interface?
Are you sure that you actually have a network device called wlan0? It seems that your wifi NIC is called "wlp5s0".
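A quick way to check which interface names actually exist is to list the kernel's view of them:

```shell
# Every interface the kernel knows about has an entry here;
# look for a name like wlp5s0 rather than wlan0.
ls /sys/class/net
```

Then use whatever name appears there (e.g. ifconfig wlp5s0, or the modern ip addr show wlp5s0).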
I'm new to Linux. I changed my wifi password and was trying to reconnect, but I failed. When I type ifconfig wlan0 I get an error message, something like "no such device found".
wlan0 No such device found
OpenVPN seems to have the --port-share option:

--port-share host port [dir]
When run in TCP server mode, share the OpenVPN port with another application, such as an HTTPS server. If OpenVPN senses a connection to its port which is using a non-OpenVPN protocol, it will proxy the connection to the server at host:port.

If, say, the webserver was listening for HTTPS traffic on localhost:49152, then the openvpn config could contain:

port-share localhost 49152
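For illustration, a sketch of the relevant server-side directives (the web server's local port 49152 is just an example; certificates and the rest of the OpenVPN config are omitted):

```
# OpenVPN itself listens on TCP 443...
proto tcp-server
port 443
# ...and hands any non-OpenVPN traffic (e.g. HTTPS) to the local web server
port-share localhost 49152
```

The web server must then be configured to bind only to localhost:49152 so the two do not fight over 443.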
I cannot find any Apache mod that allows something like what IIS on Windows allows: you can run an SSTP VPN server on port 443 and an HTTPS server on port 443 at the same time, bound to the same interface. I was wondering if anything like that is possible with Apache? Or nginx? How would I configure such a thing? If it's not possible, what are my options for running, for example, OpenVPN on TCP 443 along with a web server on 443? Or any other software on Linux-based machines?
Redirect non-https tcp streams on 443 in apache to another application
I don't believe you can reach the internet from any machine which uses 192.168.1.15 as its default gateway. You have to NAT the connection:

iptables -A POSTROUTING -t nat -j MASQUERADE
I have a Linux VM in my local network, with OpenVPN perfectly working on it. Let's say my main gateway (router, actually) is 192.168.1.1, and the VM IP is 192.168.1.15. These are my VM network settings on the interface connected to the Internet:

auto eth0
iface eth0 inet static
    address 192.168.1.15
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8

route -n output before starting OpenVPN:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
10.60.165.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

And after starting OpenVPN:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
10.10.10.0      10.80.165.1     255.255.255.0   UG    0      0        0 game
10.60.0.0       10.80.165.1     255.252.0.0     UG    0      0        0 game
10.60.165.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.80.0.0       10.80.165.1     255.252.0.0     UG    0      0        0 game
10.80.165.1     0.0.0.0         255.255.255.255 UH    0      0        0 game
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

From other machines I need to connect to IP 10.80.156.1 through the VM. So, I set the gateway on the other machine to the IP of the VM. I can access the Internet this way, but the address 10.80.156.1 is unreachable.

UPD: iptables -L -n output:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Share openvpn connection on the interface connected to internet
Looks like you have a firewall active on your client that blocks NFS traffic. Configure NFS on the server so that all relevant ports are bound to fixed values (for NFSv3), then open the relevant ports on the client (TCP and UDP). For NFSv4 (I have not used it up until now) there is, imho, just one TCP port you have to open on the client.
I set up my NFS server without making any bigger changes in the configuration files. After that I added these entries to /etc/exports (both paths are valid) on the server (192.168.1.11):

/export 192.168.1.0/192.168.255.255(rw,fsid=0,insecure,no_subtree_check,async)
/export/users 192.168.1.0/192.168.255.255(rw,nohide,insecure,no_subtree_check,async)

Then I restarted the computer and tried to get the exports list:

$ showmount -e 192.168.1.11
/export 192.168.1.0/192.168.255.255
/export/users 192.168.1.0/192.168.255.255

According to this output there's no problem with the connection. Now I want to mount /export into the client filesystem (192.168.1.12):

sudo mount -t nfs4 192.168.1.11:/export /mnt

After typing this there's no output and I can't do anything; a new prompt is never displayed. The command is stuck. Does anybody know what I am doing wrong? Please help me.
Can not mount NFS from server on local network
Add After=network-online.target to the [Unit] section of the timer. Explanation: Timers accept all the relative ordering directives in the [Unit] section that are known from services. In fact, both the [Unit] and [Install] sections are identical for timers and services. From the official manuals:A unit configuration file whose name ends in ".timer" encodes information about a timer controlled and supervised by systemd, for timer-based activation. This man page lists the configuration options specific to this unit type. See systemd.unit(5) for the common options of all unit configuration files. The common configuration items are configured in the generic [Unit] and [Install] sections. The timer specific configuration options are configured in the [Timer] section.That said, you need to know about network-online.target, which defines whether a network is up.network-online.target is a target that actively waits until the network is "up", where the definition of "up" is defined by the network management software. Usually it indicates a configured, routable IP address of some kind. Its primary purpose is to actively delay activation of services until the network is set up.Limitations network-online.target checks for a network connection, not for internet access. The LAN of course might not have internet access per se. If you cannot rely on the router or your ISP to provide a connection, you would have to create e.g. a special test-internet.service that pings some website and is only considered active after it has succeeded once (and otherwise restarts on failure every 15s or so). That should be a Type=oneshot and RemainAfterExit=yes kind of service. But I assume that this is not what you asked for. Scopes: system/user Be careful, as network-online.target is a system scope unit. Units in the user scope will not see it and will fail to start. This can be fixed by creating a linked unit: systemctl --user link /lib/systemd/system/network-online.target.
The path can vary, the location can be checked by running: systemctl show --property FragmentPath network-online.target
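Putting the pieces together, a sketch of what such a timer could look like — the unit name fetch-webservice.timer and the hourly schedule are made-up placeholders, so substitute your own:

```ini
# /etc/systemd/system/fetch-webservice.timer  (name is hypothetical)
[Unit]
Description=Periodically consume the webservice
After=network-online.target
Wants=network-online.target

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Adding Wants=network-online.target alongside After= is a common companion: After= only orders the units, while Wants= actually pulls the target in so there is something to wait for.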
I'm using systemd-timer to periodically run a script which consumes a webservice. Problem is, upon system resume or wake-up, internet connectivity would not get started right away but the timer gets fired and hence the script returns error (If the service waits for a couple of seconds, the script would run correctly and there would be no need to postpone the task until next run.) 1- How can I make it so that the timer (or the service associated with it), waits until net connectivity is available? 2- How can I make the timer (or service) not call the script when system is not online yet?
How to make a systemd-timer depend on internet connectivity?
I tried writing a script in Python (Python 3, but it works in 2 as well) that you can use for that. I've tested it up to the connecting and disconnecting part, so you can use whichever method you prefer:

import os

with open("/proc/net/wireless", "r") as f:
    data = f.read()

# note: these byte offsets are fragile and depend on the exact file layout
link = int(data[177:179])
level = int(data[182:185])
noise = int(data[187:192])
# print("{} {} {}".format(link, level, noise))

lmtqlty = -80
if link < lmtqlty:
    os.system("nmcli c down id NAME")  # will disconnect the network NAME
else:
    os.system("nmcli c up id NAME")    # will connect the network NAME

You have to run it as sudo, but it's no problem since you will now put it into a cron job. I have not used cron yet, but if you can't manage yourself I will give it a try.EDIT explanation: When you read the contents of "/proc/net/wireless", you get the following long string: Inter-| sta-| Quality | Discarded packets | Missed | WE face | tus | link level noise | nwid crypt frag retry misc | beacon | 22 wlan0: 0000 31. -79. -256 0 0 0 7 0 0So you want to extract the correct values from the Quality column. This file gives you information about the connection between this system and the network. Here you have more information about it, and to explain what each Quality subcolumn means let me quote this other post:Decibel is a logarithmic unit (1 dB = 1/10 Bel, 1 Bel = power ratio 1.259 = amplitude ratio 1.122) that describes a relative relationship between signals. See wikipedia for details and a table. Negative decibels mean the received signal is weaker than the sent signals (which of course happens naturally). Level means how strong the signal is when received compared to how strong it was / it was assumed to be when sent. This is a physical measurement, and in principle the same for every Wifi hardware. However, often it's not properly calibrated etc. Link is a computed measurement for how good the signal is (i.e. how easy it is for the hardware/software to recover data from it).
That's influenced by echoes, multipath propagation, the kind of encoding used, etc.; and everyone uses their own method to compute it. Often (but not always) it is computed to some value that's on the same scale as the "level" value. From experience, for most hardware I've seen, something around -50 means the signal is ok-ish, something around -80 means it's pretty weak, but just workable. If it goes much lower, the connection becomes unreliable. These values should be read just as a rough indication, and not as something scientific you can depend on, and you shouldn't expect them to be similar or even comparable on different hardware, not even "level". The best way to learn to interpret it is to take your hardware, carry it around a bit, watch how the signal changes and what the effects on speed, error rate etc. are.So I think you are interested in link (I just changed it up there).Just to give you more ideas, I found this one-line script that dynamically shows you the link value: watch -n 1 "awk 'NR==3 {print \"WiFi Signal Strength = \" \$3 \"00 %\"}' /proc/net/wireless"You could integrate it into a bash script rather than Python :)
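If you'd rather stay in shell, here is a minimal sketch of the same idea that selects the link field by position instead of fixed byte offsets (which silently break when a value changes width). The interface name, the sample data, and the threshold in the comment are assumptions:

```shell
# Read the "link" quality value for a given interface from
# /proc/net/wireless-style text by matching the interface's line.
link_quality() {
  # $1: interface name, $2: contents of /proc/net/wireless
  printf '%s\n' "$2" | awk -v ifc="$1:" '$1 == ifc {gsub(/\./, "", $3); print $3}'
}

sample='Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22
wlan0: 0000   31.  -79.  -256        0      0      0      7      0        0'

link_quality wlan0 "$sample"
# real use:  q=$(link_quality wlan0 "$(cat /proc/net/wireless)")
#            [ "$q" -lt 30 ] && nmcli c down id NAME   # threshold is an example
```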
Is there any chance to create a configuration that does the following job?Only connect to an available WiFi network if its signal is stronger than 30 %At many places, I stay in the border area of barely available WiFi signals. The resulting inevitable signal drops are just annoying, so I always have to switch between mobile data and WiFi manually. Is there any chance to set up some configuration that only allows connecting to WiFi when the signal strength is strong enough to avoid disconnections (and thereby guarantee a stable connection)?Simplified approach: If signal strength is < 30 % ⇒ connection not allowed If signal strength is ≥ 30 % ⇒ connection allowedThe value of 30 % is only an example of course... Maybe 20 % would make more sense, we will see!
How to create a configuration to only connect to WiFi if signal is ≥ 30 %?
You are missing firmware for the bluetooth. cd /lib/firmware/brcm sudo wget https://github.com/winterheart/broadcom-bt-firmware/raw/master/brcm/BCM20702A1-13d3-3404.hcd sudo modprobe -r btusb sudo modprobe btusb See if it works
Why does Bluetooth not connect? The Bluetooth unit can find, but not connect to, other devices on Debian Testing (9.0 Stretch). Bluetooth works well with a different operating system. BIOS settings permit wireless. The following packages were installed: bluez-firmware broadcom-sta-common broadcom-sta-dkms broadcom-sta-source firmware-brcm80211 firmware-misc-nonfree$ sudo dmesg | grep -i blue [ 18.086647] Bluetooth: Core ver 2.22 [ 18.086660] Bluetooth: HCI device and connection manager initialized [ 18.086663] Bluetooth: HCI socket layer initialized [ 18.086664] Bluetooth: L2CAP socket layer initialized [ 18.086668] Bluetooth: SCO socket layer initialized [ 18.149652] Bluetooth: hci0: BCM: chip id 63 [ 18.165659] Bluetooth: hci0: BCM20702A [ 18.166653] Bluetooth: hci0: BCM20702A1 (001.002.014) build 0000 [ 18.176624] bluetooth hci0: firmware: failed to load brcm/BCM20702A1-13d3-3404.hcd (-2) [ 18.176665] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-13d3-3404.hcd failed with error -2 [ 18.176668] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-13d3-3404.hcd not found [ 18.553154] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 18.553156] Bluetooth: BNEP filters: protocol multicast [ 18.553160] Bluetooth: BNEP socket layer initialized [ 18.574361] Bluetooth: RFCOMM TTY layer initialized [ 18.574365] Bluetooth: RFCOMM socket layer initialized [ 18.574368] Bluetooth: RFCOMM ver 1.11$ lsmod | grep wl wl 6443008 0 cfg80211 589824 1 wl$ sudo modprobe -v broadcom-sta-dkms modprobe: FATAL: Module broadcom-sta-dkms not found in directory /lib/modules/4.9.0-2-amd64$ sudo dmesg | grep -i blu [ 18.086647] Bluetooth: Core ver 2.22 [ 18.086660] Bluetooth: HCI device and connection manager initialized [ 18.086663] Bluetooth: HCI socket layer initialized [ 18.086664] Bluetooth: L2CAP socket layer initialized [ 18.086668] Bluetooth: SCO socket layer initialized [ 18.149652] Bluetooth: hci0: BCM: chip id 63 [ 18.165659] Bluetooth: hci0: BCM20702A [ 18.166653] Bluetooth: hci0:
BCM20702A1 (001.002.014) build 0000 [ 18.176624] bluetooth hci0: firmware: failed to load brcm/BCM20702A1-13d3-3404.hcd (-2) [ 18.176665] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-13d3-3404.hcd failed with error -2 [ 18.176668] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-13d3-3404.hcd not found [ 18.553154] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 18.553156] Bluetooth: BNEP filters: protocol multicast [ 18.553160] Bluetooth: BNEP socket layer initialized [ 18.574361] Bluetooth: RFCOMM TTY layer initialized [ 18.574365] Bluetooth: RFCOMM socket layer initialized [ 18.574368] Bluetooth: RFCOMM ver 1.11Related Resources: BCM4352 WikiDevi, Debian
Broadcom BCM4352 : Bluetooth does not connect
I guess you can be certain that ssh is installed, but not nc. In light of that, the question could make sense. Use the -p option. For example port open: $ ssh -p 111 192.168.1.16 ssh_exchange_identification: read: Connection reset by peerPort closed: $ ssh -p 112 192.168.1.16 ssh: connect to host 192.168.1.16 port 112: Connection refused
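To script this check, you can classify ssh's error output: "Connection refused" (or no route/timeout) means nothing is listening, while anything else — a protocol banner, a key error, an auth failure — means something answered on the port. A sketch; the host and port are examples:

```shell
# Decide, from ssh's stderr text, whether anything was listening.
classify() {
  case $1 in
    *"Connection refused"*|*"No route to host"*|*"timed out"*|*"unreachable"*) echo closed ;;
    *) echo open ;;
  esac
}

# $1: host, $2: port — `exit` keeps the session from lingering if it opens
port_check() {
  classify "$(ssh -o ConnectTimeout=3 -o StrictHostKeyChecking=no -p "$2" "$1" exit 2>&1)"
}

# e.g.  port_check 192.168.1.16 111     # prints open/closed
classify 'ssh: connect to host 192.168.1.16 port 112: Connection refused'   # closed
```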
Assume that you have a pair of UNIX/Linux machines. You want to check if both machines can connect to each other via an arbitrary port, say, TCP 111. Log into the shell of Machine 1 (10.0.0.1). If Telnet is installed on it, you can check the connectivity with 10.0.0.2 by using the command $ telnet 10.0.0.2 111But some UNIX/Linux systems do not have Telnet by default. SSH is more popular, so it may be preferred, especially when you don't want to install extra applications on your machine.Question. Is there any way to test the same with ease, by using the ssh command instead of the telnet command? (Please answer the question. "You can install Telnet or use Microsoft Windows" doesn't help. TCP 111 is just an example.)
Testing connectivity and port availability using SSH instead of Telnet
You don't need to do anything special at all for this to work straight off. The system's default route should remain via your ISP. This means that all packets that aren't addressed to devices on your local network (the LAN) will go via your ISP. Create a VPN to your required endpoint. If you're using something based on OpenVPN, ensure that your default route is not updated to use the VPN. (In the configuration file this is typically achieved with the redirect-gateway def1 directive - you do not want this.) All traffic initiated from your system will leave via the default route to your ISP. When the VPN fires up this will continue, except for traffic targeted at the far end of your VPN link; this will be encapsulated by the VPN and then the encrypted data will leave your system to your ISP. Since your VPN connects your system to a remote system you should consider a firewall. Typically a few iptables rules will help. You also need to consider why you want the VPN. Is it to allow incoming ssh or http? If so, you will probably need an SNAT (MASQUERADE) rule applied at the remote end of the VPN link so that your system knows to send return traffic back across the VPN. Be aware, though, that this may well not work for torrents.
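As a concrete sketch, the relevant client-side OpenVPN options could look like this — 10.8.0.0/24 is an assumed tunnel subnet, so substitute whatever your endpoint actually uses:

```
# client.conf fragment (sketch)
route-nopull                   # ignore routes pushed by the server,
                               # including any redirect-gateway
route 10.8.0.0 255.255.255.0   # send only the VPN subnet via the tunnel
```

route-nopull discards every route the server pushes, so you add back explicitly only what should go through the tunnel; the default route stays with your ISP.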
My ISP uses a Carrier Grade NAT (a switch in between the ISP line and my primary router) and that doesn't allow me to do port forwarding (I have set it up, but to no avail). After much contemplation, I have decided to go for a VPN. On doing some research, I came to know that you can have two routes using iptables based on the direction of your connection. Apparently, the converse of what I need seems to be here (Incoming/Outgoing seperation for VPN). Will this setup even work for stuff like torrents, and can it perhaps extend a couple of services from my home network? How can I implement this in my home network? P.S: Based on the idea given by @roaima, I followed this guide to set up two routes, with the VPN route being used by the vpn user. This is the detailed guide to follow. http://www.htpcguides.com/configure-transmission-for-vpn-split-tunneling-ubuntu-16-04-debian-8/
How can I use VPN for incoming connections and direct line for outgoing connections?
Problem solved by reducing the MTU to 1492 in the VMs. The hypervisor is responsible for establishing a PPPoE connection to the internet, and the ppp0 interface has an MTU of 1492 bytes. Still, why would the MTU be a problem, since both IPv4 and IPv6 implement path MTU discovery? So why is path MTU discovery not working in this case (and only for some IPv6 destinations)? It seems like I am encountering a black hole situation here. I captured some traffic with tcpdump and loaded the file in Wireshark. I observed that the connection goes through the TCP three-way handshake as you can see in the attached picture (packets 1-3). That's also obvious from the wget output in my question, where as you can see wget gets stuck after it has printed a "connected" message. After the successful three-way handshake the client (my VM) sends an SSL "Client Hello" message but never receives a "Server Hello" back. What the client receives is a packet which is obviously out of order based on the TCP sequence number (Wireshark also reports [TCP Previous segment not captured], Continuation Data). The client then responds with an ACK (packet 6) for the last in-order packet that has been received (a duplicate ACK) and the connection stalls, since the server keeps resending the lost packet, which is bigger than the supported MTU and never arrives. So the connection gets stuck there until I press Ctrl+C, at which point connection termination is initiated (packets 8-10).Then why does path MTU discovery fail only for some IPv6 destinations (not all), while there is no issue with IPv4 at all? For that question, and since my installation has no IPv6 firewall in place, I assume that there is some firewall on the way towards certain web sites that blocks the ICMPv6 Packet Too Big messages that are needed for path MTU discovery to work. The interesting thing though is that simple ICMPv6 ping packets go through and I even receive replies.
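To verify a suspected black hole like this, you can probe with non-fragmentable pings sized to a target MTU. A sketch — the 40-byte IPv6 and 8-byte ICMPv6 header sizes are standard, lwn.net is the host from the question, and the MSS-clamp rule is a generic workaround rather than something taken from this setup:

```shell
# Largest ICMPv6 payload that still fits a given MTU
# (40-byte IPv6 header + 8-byte ICMPv6 header).
max_payload() { echo $(( $1 - 40 - 8 )); }

max_payload 1492   # → 1444

# probes from the VM (-M do forbids fragmentation):
#   ping6 -c 2 -M do -s "$(max_payload 1500)" lwn.net  # fails on a 1492 path
#   ping6 -c 2 -M do -s "$(max_payload 1492)" lwn.net  # should succeed
# workaround on the forwarding box (as root): clamp TCP MSS to the path MTU
#   ip6tables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
#     -j TCPMSS --clamp-mss-to-pmtu
```

If the larger probe hangs with no "Packet too big" reply, that is the same black hole the TCP capture showed.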
This is driving me crazy, as I cannot load certain HTTPS web sites, but only from KVM virtual machines and only over IPv6. IPv4 works fine. IPv6 connectivity works for the same websites from the hypervisor. My setupThe KVM hypervisor is running on Ubuntu 14.04.5 LTS. eth0 is added to the br0 bridge interface and I use this bridge to connect the VMs to the outside world. Two VMs are running on the hypervisor. The first is running Ubuntu 12.04 (I know it has reached EOL, but that's not of concern), and the second Ubuntu 16.04. Both VMs experience the problem. The VMs are using a Virtio interface to connect to the network. IPv6 addresses are obtained by both the hypervisor and the VMs. My DNS server is returning IPv6 addresses if supported by a domain, otherwise it works with IPv4. I have no IPv6 firewall (ip6tables) on either the hypervisor or the VMs. # ip6tables -v -L -n Chain INPUT (policy ACCEPT 196K packets, 32M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 5007K packets, 3858M bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 185K packets, 30M bytes) pkts bytes target prot opt in out source destination # ip6tables -v -L -n -t nat Chain PREROUTING (policy ACCEPT 1749 packets, 181K bytes) pkts bytes target prot opt in out source destination Chain INPUT (policy ACCEPT 135 packets, 24165 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 187 packets, 27578 bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 1801 packets, 185K bytes) pkts bytes target prot opt in out source destinationThe problemIPv6 (and IPv4) connectivity works for all the web sites from the hypervisor (that's fine and as expected). # wget https://lwn.net -O - > /dev/null; echo Exit code: $? --2017-08-02 18:55:47-- https://lwn.net/ Resolving lwn.net (lwn.net)...
2600:3c03::f03c:91ff:fe61:5c5b, 45.33.94.129 Connecting to lwn.net (lwn.net)|2600:3c03::f03c:91ff:fe61:5c5b|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 25202 (25K) [text/html] Saving to: ‘STDOUT’100%[=====================================>] 25,202 149KB/s in 0.2s 2017-08-02 18:55:48 (149 KB/s) - written to stdout [25202/25202]Exit code: 0IPv6 connectivity works for most web sites I have tried from inside the VMs, but not all. For instance, https://lwn.net and https://hioa.no are two HTTPS web sites that I experience problems with. As you can see from the wget command below, the connection reaches a connected state but it gets stuck there: # wget https://lwn.net -O - > /dev/null; echo Exit code: $? --2017-08-02 18:53:40-- https://lwn.net/ Resolving lwn.net (lwn.net)... 2600:3c03::f03c:91ff:fe61:5c5b, 45.33.94.129 Connecting to lwn.net (lwn.net)|2600:3c03::f03c:91ff:fe61:5c5b|:443... connected.What I have tried to troubleshoot the problem so far Started with ping6. Interestingly, pings from the VMs are working for all the domains when using IPv6 — including the ones for which HTTPS is not working! # ping6 -c 1 -n hioa.no PING hioa.no(2001:700:700:2::65) 56 data bytes 64 bytes from 2001:700:700:2::65: icmp_seq=1 ttl=53 time=88.7 ms# ping6 -c 1 -n lwn.net PING lwn.net(2600:3c03::f03c:91ff:fe61:5c5b) 56 data bytes 64 bytes from 2600:3c03::f03c:91ff:fe61:5c5b: icmp_seq=1 ttl=54 time=145 msI tried to change the virtual network devices from virtio to e1000. Problem still exists. Tried to connect with IPv4 to the websites with which I encounter the problem. # dig A lwn.net; <<>> DiG 9.10.3-P4-Ubuntu <<>> A lwn.net ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41423 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;lwn.net. IN A;; ANSWER SECTION: lwn.net. 2633 IN A 45.33.94.129IPv4 connectivity works fine!
# wget --no-check-certificate https://45.33.94.129 -O - > /dev/null; echo Exit code: $? --2017-08-02 18:41:32-- https://45.33.94.129/ Connecting to 45.33.94.129:443... connected. WARNING: certificate common name `*.lwn.net' doesn't match requested host name `45.33.94.129'. HTTP request sent, awaiting response... 200 OK Length: 25226 (25K) [text/html] Saving to: `STDOUT'100%[==================================>] 25,226 137K/s in 0.2s 2017-08-02 18:41:33 (137 KB/s) - written to stdout [25226/25226]Exit code: 0Tried to use "openssl s_client" to connect and see if there are any error messages, but "openssl s_client" doesn't support IPv6 yet (at least not in the openssl version that is included in Ubuntu 16.04). Checked dmesg and /var/log/syslog but there is nothing related there.Does anyone have an idea why I get this strange behavior with some websites? Any directions on what I should investigate next?
Certain HTTPS web sites do not load from KVM virtual machine over IPv6
A review of MSI's manual for your notebook PC shows the manufacturer is Windows-centric: they provide no Linux drivers, no Linux utilities, and no physical switch to cycle wireless.
I'm running Ubuntu 18.04 LTS on an MSI GS65 Stealth 8RE. When the laptop comes out of sleep mode, airplane mode is on and Linux says it must be deactivated via a physical switch. The FN+F10 combination to turn it off works on Windows, but doesn't on Ubuntu. When I reboot, everything seems to be fine. So it's not too bad, but it is systematic and very annoying nonetheless. I have the usual rfkill output: ubuntu@ubuntu:~$ rfkill list all 0: phy0: Wireless LAN Soft blocked: no Hard blocked: yes
(Ubuntu 18.04 LTS) Can't switch off Airplane mode with physical switch
In some situations, it is helpful to use the -t option to tell ssh to allocate a pseudo-terminal device for the ssh connection: ssh -t -o StrictHostKeyChecking=no -p port user@hostA telnet hostBAnother typical example of a command that requires -t is remote editing of a file with vi, or viewing it with less: ssh -t -o StrictHostKeyChecking=no -p port user@hostA vi foo.txt
How can I use tab completion when I run telnet on a remote host via ssh? I have something like: ssh -o StrictHostKeyChecking=no -p port user@hostA telnet hostB; echo "Reconnect?"; while read < /dev/tty; do ssh -o StrictHostKeyChecking=no -p port user@hostA telnet hostB; done which launches the telnet session to hostB from hostA fine, but inside telnet I can't use tab for auto-completion, which works fine when I manually ssh into hostA and then telnet to hostB from there. Any ideas?
how do I tab in telnet when executing it on remote host by ssh?
First, since your browser makes connections to multiple hosts, you need to know which one to check (if you don't already). There are a number of tools that can passively gather TCP statistics. Now, mtr is a tool specifically created to measure connection reliability and output reports that can be sent to ISPs verbatim. It makes traceroute and ping effectively obsolete for that task. Normally (without -r), it runs constantly, accumulates and updates stats of latency and loss percentage at each hop. Diagnosing Network Issues with MTR article includes some common patterns that you can see in results and how to interpret them. Since at least 0.75, mtr can use TCP SYN rather than ICMP packets with -T -P <port>, so you'll get stats for the same TCP ports as your normal traffic.
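Once you have a report, the hop where loss first appears is usually what the ISP wants to know. Here is a sketch that extracts it from mtr --report text — the column layout varies slightly between mtr versions, and the sample report below is invented:

```shell
# Print the first hop in an `mtr --report` run that shows packet loss.
first_lossy_hop() {
  # $1: report text; the third field is the "Loss%" column
  printf '%s\n' "$1" | awk '$3 ~ /%$/ && $3+0 > 0 {print $2; exit}'
}

report='HOST: laptop          Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1      0.0%   100    1.2   1.3   1.0   4.1   0.3
  2.|-- 10.0.0.1        12.0%   100    9.8  11.2   8.9  60.3   6.7
  3.|-- 203.0.113.7     11.5%   100   10.1  12.0   9.0  70.1   7.2'

first_lossy_hop "$report"
# generate a real report (TCP probes usually need root; host is an example):
#   mtr --report --report-cycles 100 -T -P 443 example.com
```

Loss that starts at one hop and persists through every hop after it points at a genuinely lossy link; loss at a single middle hop only is usually just ICMP rate limiting on that router.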
I'm using the traceroute utility to test network connectivity. The problem is usually slow speed - that is, web pages in the browser are often displayed very slowly. Sometimes the speed of rendering an HTML page is better, but videos from YouTube are transferred very slowly, so you can usually only watch them with many pauses. I'd like to identify, from the output of the traceroute utility or its combination with other utilities (such as ping, mtr and others), where on the path the problem is. That means using the combination of utilities repeatedly to output some logs or statistics from which a decision can be made whether the problem with slow responses (or frequent connectivity loss) is caused by my immediate ISP (three wireless routers) or their upstream ISP. I would like to have some data I can provide them in case of connectivity or speed issues (it's a really unreliable connection, with very frequent problems).
Traceroute - approach the place with connectivity issues
This likely has more to do with the drivers involved than with the OS. Coming out of hibernation in Windows, for instance, is managed by the OS, but actually accomplished by other software bits like drivers. If the drivers don't handle waking correctly, there can be inconsistency in how well it works. I'd suggest, as a test, when you next have the issue, use lspci or some such to figure out your driver's name, then lsmod to see if it's loaded as a module, rather than built into the kernel. If so, you can try unloading it with rmmod or modprobe -r and then reloading it with modprobe. The module you're interested in, judging from the info provided above, is the r8168 driver, which is the official RealTek driver. There may be dependencies with the module, so you may need to play around a bit to make sure that you get all the right modules, though networking probably has more dependencies now than when I was working with it. Once you get all the bits sorted, you can create a shell script which executes all of the commands you used, then run it whenever you have an issue, rather than trying to remember the sequence every time. All of this, including any shell script, will obviously need to be run as root or using sudo. Not positive that this will work, of course, but it may help. You can also check here to obtain the latest firmware for your adapter to ensure that it's up to date. If it isn't, updating it may also help with the issue. Just a few places to look.
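As a sketch of such a script — pick_module only inspects lsmod-style text, so it can be tried safely; the actual unload/reload in the comments needs root. r8168/r8169 come from the discussion above; adjust if your driver differs:

```shell
# Choose which Realtek module is currently loaded, preferring the
# official r8168 driver over the in-kernel r8169.
pick_module() {
  # $1: output of `lsmod`
  if printf '%s\n' "$1" | grep -q '^r8168 '; then echo r8168
  elif printf '%s\n' "$1" | grep -q '^r8169 '; then echo r8169
  else echo none
  fi
}

mod=$(pick_module "$(lsmod 2>/dev/null || true)")
echo "$mod"
# then, as root:
#   modprobe -r "$mod" && modprobe "$mod"
#   dhclient -v            # re-acquire a lease afterwards
```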
For the last 15~20 days I've faced intermittent connection issues. I've tried several things, and mostly I can't say whether they helped or not. What I can say for sure:When returning from hibernation, although I'm still connected to the WiFi, I can't access anything on the internet. Chrome returns DNS_PROBE_FINISHED_NXDOMAIN; changing browsers doesn't solve the issue. Disconnecting and connecting again doesn't help either. The command sudo dhclient -v solves the problem, when it works. (Details below) It's not my ISP, as the internet still works on my desktop and smartphone (connected to the same WiFi). [but maybe it's a router/modem issue?]Those are the weird details:Sometimes I can still ping sites but can't access them, and sometimes ping doesn't work and returns connect: Network is unreachable. Rebooting sometimes solves the problem, sometimes doesn't. Sometimes the problem solved itself, with me doing nothing. The first couple of times this issue happened I tried changing the DNS from Auto to Manual (8.8.8.8) or the other way around, depending on how I had left it the last time. It seemed to work, but now it doesn't, so maybe it never did anything to solve the issue and the issue solved itself?As I said, running sudo dhclient -v solves the problem when the output finishes like this:DHCPACK of 192.168.15.4 from 192.168.15.1 bound to 192.168.15.4 -- renewal in 17437 seconds.However, sometimes, when I run the command it returns this:No DHCPOFFERS received. No working leases in persistent database - sleeping.And the problem persists.
Below is the output of some commands that might help with diagnostics: lspci 00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 0b) 00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b) 00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 0b) 00:14.0 USB controller: Intel Corporation 8 Series USB xHCI HC (rev 04) 00:16.0 Communication controller: Intel Corporation 8 Series HECI #0 (rev 04) 00:1b.0 Audio device: Intel Corporation 8 Series HD Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 1 (rev e4) 00:1c.2 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 3 (rev e4) 00:1c.3 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 4 (rev e4) 00:1d.0 USB controller: Intel Corporation 8 Series USB EHCI #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation 8 Series LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 8 Series SATA Controller 1 [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation 8 Series SMBus Controller (rev 04) 02:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73) 03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTL8411B PCI Express Card Reader (rev 01) 03:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12)ifconfig command not foundip route show 169.254.0.0/16 dev br-7905315c0c67 scope link metric 1000 linkdown 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 172.18.0.0/16 dev br-7905315c0c67 proto kernel scope link src 172.18.0.1 linkdown 172.19.0.0/16 dev br-bb285dfa325a proto kernel scope link src 172.19.0.1 linkdown ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp3s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether b8:2a:72:c0:da:1d brd ff:ff:ff:ff:ff:ff inet 169.254.8.72/16 brd 169.254.255.255 scope link enp3s0f1:avahi valid_lft forever preferred_lft forever 3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 80:86:f2:cc:95:7f brd ff:ff:ff:ff:ff:ff inet 192.168.15.4/24 brd 192.168.15.255 scope global dynamic wlp2s0 valid_lft 42972sec preferred_lft 42972sec inet6 2804:7f2:2980:fa68:50c4:6476:6282:be9/64 scope global dynamic noprefixroute valid_lft 43168sec preferred_lft 43168sec inet6 fe80::1d23:31f:b217:e60d/64 scope link noprefixroute valid_lft forever preferred_lft forever 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:b9:59:21:ea brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 5: br-7905315c0c67: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:ca:dc:1a:34 brd ff:ff:ff:ff:ff:ff inet 172.18.0.1/16 brd 172.18.255.255 scope global br-7905315c0c67 valid_lft forever preferred_lft forever 6: br-bb285dfa325a: 
<NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:45:ee:9f:bd brd ff:ff:ff:ff:ff:ff inet 172.19.0.1/16 brd 172.19.255.255 scope global br-bb285dfa325a valid_lft forever preferred_lft foreverdhclient -v WHEN IT DOESN'T WORK Listening on LPF/br-bb285dfa325a/02:42:45:ee:9f:bd Sending on LPF/br-bb285dfa325a/02:42:45:ee:9f:bd Listening on LPF/br-7905315c0c67/02:42:ca:dc:1a:34 Sending on LPF/br-7905315c0c67/02:42:ca:dc:1a:34 Listening on LPF/docker0/02:42:b9:59:21:ea Sending on LPF/docker0/02:42:b9:59:21:ea Listening on LPF/wlp2s0/80:86:f2:cc:95:7f Sending on LPF/wlp2s0/80:86:f2:cc:95:7f Listening on LPF/enp3s0f1/b8:2a:72:c0:da:1d Sending on LPF/enp3s0f1/b8:2a:72:c0:da:1d Sending on Socket/fallback DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 interval 5 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 4 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 6 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 11 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 interval 9 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 9 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 interval 15 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 18 DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 4 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 
interval 13 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 15 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 13 DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 15 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 interval 19 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 11 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 15 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 7 DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 6 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 6 DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 15 DHCPOFFER of 192.168.15.4 from 192.168.15.1 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 No DHCPOFFERS received. No working leases in persistent database - sleeping.dhclient -v WHEN IT DOES WORK Listening on LPF/br-bb285dfa325a/02:42:3a:6c:f8:53 Sending on LPF/br-bb285dfa325a/02:42:3a:6c:f8:53 Listening on LPF/br-7905315c0c67/02:42:a9:2c:f1:4d Sending on LPF/br-7905315c0c67/02:42:a9:2c:f1:4d Listening on LPF/docker0/02:42:47:5b:95:10 Sending on LPF/docker0/02:42:47:5b:95:10 Listening on LPF/wlp2s0/80:86:f2:cc:95:7f Sending on LPF/wlp2s0/80:86:f2:cc:95:7f Listening on LPF/enp3s0f1/b8:2a:72:c0:da:1d Sending on LPF/enp3s0f1/b8:2a:72:c0:da:1d Sending on Socket/fallback DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 5 DHCPDISCOVER on br-7905315c0c67 to 255.255.255.255 port 67 interval 6 DHCPDISCOVER on docker0 to 255.255.255.255 port 67 interval 6 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 DHCPDISCOVER on enp3s0f1 to 255.255.255.255 port 67 interval 5 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 DHCPDISCOVER on enp3s0f1 to 
255.255.255.255 port 67 interval 8 DHCPDISCOVER on br-bb285dfa325a to 255.255.255.255 port 67 interval 7 DHCPREQUEST for 192.168.15.4 on wlp2s0 to 255.255.255.255 port 67 DHCPACK of 192.168.15.4 from 192.168.15.1 bound to 192.168.15.4 -- renewal in 17437 seconds.>> EDIT After @Fubar's answer I did an apt update and apt upgrade and noticed a lot of warnings about possible missing firmware. Not sure whether that ever happened before (if it did, I never noticed) or whether it's related. After that I haven't been able to reproduce the issue, which might be a good thing, as long as it doesn't "come back".
Intermittent internet connection/DNS issues with Debian
Explanation The problem was with the configuration of /etc/resolvconf.conf automatically generated during installation. It turns out that, because of setting local_unbound_enable="YES", FreeBSD added
resolv_conf="/dev/null" # prevent updating /etc/resolv.conf
to /etc/resolvconf.conf, which prevented the modification of /etc/resolv.conf. As a result my system seems to always send DNS queries to one of the root name servers instead of the DNS server provided by the hotspot.
Solution
Remove resolv_conf="/dev/null" from /etc/resolvconf.conf. The system then automatically falls back to /etc/resolv.conf.
Remove local_unbound_enable="YES" from /etc/rc.conf.
(Optionally) run service local_unbound stop.
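If you want to script step 1, here's a minimal sketch. It deliberately works on a temporary copy of the file so it is safe to run anywhere; on the real system you would point the sed at /etc/resolvconf.conf itself (GNU sed syntax shown), and the sysrc/service lines in the comments are the FreeBSD-only steps, not executed here:

```shell
# Demonstrated on a temporary copy of /etc/resolvconf.conf:
conf=$(mktemp)
cat > "$conf" <<'EOF'
# This file was generated by local-unbound-setup.
resolv_conf="/dev/null" # prevent updating /etc/resolv.conf
unbound_conf="/var/unbound/forward.conf"
EOF

# Step 1: drop the override so resolvconf writes /etc/resolv.conf again.
sed -i '\|^resolv_conf="/dev/null"|d' "$conf"
remaining=$(grep -c '^resolv_conf=' "$conf" || true)
echo "resolv_conf overrides left: $remaining"

# Steps 2 and 3 are for the live FreeBSD system (not run here):
#   sysrc -x local_unbound_enable    # or delete the line from /etc/rc.conf
#   service local_unbound stop
rm -f "$conf"
```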
Problem I'm trying to connect to an open WiFi with a machine running FreeBSD 12-CURRENT. Normally, I run wifi-start.sh (see below) whenever I want to connect to the Internet. It works with WPA networks but I'm having a lot of problems with open networks. wpa_supplicant is able to associate with the open network set in /etc/wpa_supplicant.conf, and dhclient receives an IP address via DHCP. Later, however, I'm unable to reach the captive portal to log in. Sometimes it is sufficient to open http://neverssl.com in a browser but it does not always work.
Setup
/boot/loader.conf:
if_iwm_load="YES"
iwm3160fw_load="YES"
/etc/rc.conf:
local_unbound_enable="YES"
/etc/resolvconf.conf:
# This file was generated by local-unbound-setup.
# Modifications will be overwritten.
resolv_conf="/dev/null" # prevent updating /etc/resolv.conf
unbound_conf="/var/unbound/forward.conf"
unbound_pid="/var/run/local_unbound.pid"
unbound_service="local_unbound"
unbound_restart="service local_unbound reload"
/etc/wpa_supplicant.conf:
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=wheel
network={
    ssid="Open Network"
    key_mgmt=NONE
}
wifi-start.sh (the script I use to configure the device and connect to the network):
wlandev="${wlandev:-${1:-wlan0}}"
device="${device:-${2:-iwm0}}"
if ! ifconfig "$wlandev" 1>&2 2>/dev/null; then
    sudo ifconfig "$wlandev" create wlandev "$device"
else
    sudo service netif restart
fi
sudo ifconfig "$wlandev" up
sudo wpa_supplicant -B -i "$wlandev" -c /etc/wpa_supplicant.conf
sudo dhclient "$wlandev"
Details
The setup is hassle-free on Ubuntu and macOS, so:
it is most probably not the router's problem,
it should be possible to configure FreeBSD correctly.
The Wi-Fi device is Intel Corporation Dual Band Wireless-AC 3160, so I'm using the iwm(4) driver. Errors in xconsole Here's an error I got in the console after running wifi-start.sh -- dhclient gave up then. The second time I ran the script dhclient got an address successfully and there were no errors in xconsole.
It might not be related to this problem, however.Ethernet address: 34:e6:ad:16:bf:66 iwm_auth: failed to set multicast iwm_newstate: could not move to auth state: 35 dumping device error log Start Error Log Dump: Status: 0x3, count: 6 0x0000090A | ADVANCED_SYSASSERT 080000B0 | trm_hw_status0 00000000 | trm_hw_status1 00000B30 | branchlink2 000148E0 | interruptlink1 00000000 | interruptlink2 DEADBEEF | data1 DEADBEEF | data2 DEADBEEF | data3 001CA815 | beacon time 002362E3 | tsf low 00000000 | tsf hi 00000000 | time gp1 002362E4 | time gp2 00000000 | uCode revision type 00000011 | uCode version major 000561E2 | uCode version minor 00000164 | hw version 00809004 | board version 0000001C | hcmd 00022002 | isr0 00000000 | isr1 00000002 | isr2 00417C81 | isr3 00000000 | isr4 00004110 | last cmd Id 00000000 | wait_event 00000080 | l2p_control 00450020 | l2p_duration 0000003F | l2p_mhvalid 00000000 | l2p_addr_match 00000007 | lmpm_pmg_sel 15061432 | timestamp 00003038 | flow_handler driver status: tx ring 0: qid=0 cur=1 queued=1 tx ring 1: qid=1 cur=0 queued=0 tx ring 2: qid=2 cur=0 queued=0 tx ring 3: qid=3 cur=0 queued=0 tx ring 4: qid=4 cur=0 queued=0 tx ring 5: qid=5 cur=0 queued=0 tx ring 6: qid=6 cur=0 queued=0 tx ring 7: qid=7 cur=0 queued=0 tx ring 8: qid=8 cur=0 queued=0 tx ring 9: qid=9 cur=33 queued=1 tx ring 10: qid=10 cur=0 queued=0 tx ring 11: qid=11 cur=0 queued=0 tx ring 12: qid=12 cur=0 queued=0 tx ring 13: qid=13 cur=0 queued=0 tx ring 14: qid=14 cur=0 queued=0 tx ring 15: qid=15 cur=0 queued=0 tx ring 16: qid=16 cur=0 queued=0 tx ring 17: qid=17 cur=0 queued=0 tx ring 18: qid=18 cur=0 queued=0 tx ring 19: qid=19 cur=0 queued=0 tx ring 20: qid=20 cur=0 queued=0 tx ring 21: qid=21 cur=0 queued=0 tx ring 22: qid=22 cur=0 queued=0 tx ring 23: qid=23 cur=0 queued=0 tx ring 24: qid=24 cur=0 iwm_newstate: Failed to remove station: 35 iwm_mvm_mac_ctxt_changed: called; uploaded = 0 iwm_newstate: Failed to change mac context: 5 iwm_newstate: Failed to remove 
channel ctx: 22 iwm_newstate: failed to update power managementifconfig -v wlan0 Here's the result of ifconfig -v wlan0:wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 34:e6:ad:16:bf:66 hwaddr 34:e6:ad:16:bf:66 inet6 fe80::36e6:adff:fe16:bf66%wlan0 prefixlen 64 tentative scopeid 0x2 inet 10.1.2.41 netmask 0xffffff00 broadcast 10.1.2.255 nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> media: IEEE 802.11 Wireless Ethernet OFDM/54Mbps mode 11g status: associated ssid "Open Network" channel 11 (2462 MHz 11g) bssid 4e:5e:0c:eb:8e:ad regdomain FCC country US anywhere -ecm authmode OPEN -wps -tsn privacy OFF deftxkey UNDEF powersavemode OFF powersavesleep 100 txpower 30 txpowmax 50.0 -dotd rtsthreshold 2346 fragthreshold 2346 bmiss 10 11a ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 11b ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11g ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 turboA ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 turboG ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 sturbo ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 11na ucast NONE mgmt 12 MCS mcast 12 MCS maxretry 6 11ng ucast NONE mgmt 2 MCS mcast 2 MCS maxretry 6 half ucast NONE mgmt 3 Mb/s mcast 3 Mb/s maxretry 6 quarter ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11acg ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11ac ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 scanvalid 60 -bgscan bgscanintvl 300 bgscanidle 250 roam:11a rssi 7dBm rate 12 Mb/s roam:11b rssi 7dBm rate 1 Mb/s roam:11g rssi 7dBm rate 5 Mb/s roam:turboA rssi 7dBm rate 12 Mb/s roam:turboG rssi 7dBm rate 12 Mb/s roam:sturbo rssi 7dBm rate 12 Mb/s roam:11na rssi 7dBm MCS 1 roam:11ng rssi 7dBm MCS 1 roam:half rssi 7dBm rate 6 Mb/s roam:quarter rssi 7dBm rate 3 Mb/s roam:11acg rssi 7dBm rate 64 Mb/s roam:11ac rssi 7dBm rate 64 Mb/s -pureg protmode CTS -ht -htcompat -ampdu ampdulimit 64k ampdudensity NA -amsdu -shortgi htprotmode RTSCTS -puren -smps -rifs -stbc -ldpc -vht -vht40 -vht80 
-vht80p80 -vht160 wme -burst -dwds roaming MANUAL bintval 100 AC_BE cwmin 4 cwmax 10 aifs 3 txopLimit 0 -acm ack cwmin 4 cwmax 10 aifs 3 txopLimit 0 -acm AC_BK cwmin 4 cwmax 10 aifs 7 txopLimit 0 -acm ack cwmin 4 cwmax 10 aifs 7 txopLimit 0 -acm AC_VI cwmin 3 cwmax 4 aifs 2 txopLimit 94 -acm ack cwmin 3 cwmax 4 aifs 2 txopLimit 94 -acm AC_VO cwmin 2 cwmax 3 aifs 2 txopLimit 47 -acm ack cwmin 2 cwmax 3 aifs 2 txopLimit 47 -acm groups: wlan
http://neverssl.com XML Also, I received an interesting XML response from http://neverssl.com when I did the following steps:
Connect to the Open Network (dhclient received an address successfully).
Try to open http://neverssl.com. It just hung trying to load.
Reconnect to another Wi-Fi network which actually works.
Look at the http://neverssl.com tab and see the following:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
-<Error> <Code>AccessDenied</Code> <Message>Access Denied</Message> <RequestId>3FD41663CABFE8CD</RequestId> -<HostId> dsczv0lxKSFmBneOVS5nm5Ru5D3Br1bCRCqqj25WZVb1BzKI9McRR+djm9IrmgXHVIk/mdUCvfM= </HostId> </Error>
Tweaking /etc/resolv.conf It was suggested to me that I should set /etc/resolv.conf and then run resolvconf -i and resolvconf -l. Here are the results:
Inside /var/db/dhclient.leases.wlan0:
lease { interface "wlan0"; fixed-address 10.1.236.56; next-server 10.1.236.1; option subnet-mask 255.255.255.255; option routers 10.1.236.1; option domain-name-servers 10.1.236.1,194.204.159.1; option dhcp-lease-time 900; option dhcp-message-type 5; option dhcp-server-identifier 10.1.236.1; renew 5 2017/7/7 16:10:15; rebind 5 2017/7/7 16:15:49; expire 5 2017/7/7 16:17:45; }
Output of dhclient wlan0:
wlan0: no link .... got link DHCPREQUEST on wlan0 to 255.255.255.255 port 67 DHCPACK from 10.1.236.1 bound to 10.1.236.56 -- renewal in 450 seconds.
Adding nameserver 10.1.236.1 to /etc/resolv.conf doesn't seem to change anything.
Output of resolvconf -i: wlan0
Output of resolvconf -l:
# resolv.conf from wlan0 nameserver 10.1.236.1 nameserver 194.204.159.1
At no point was I able to open http://neverssl.com or http://gooogle.pl, nor was I able to get redirected to the captive portal. Result of ifconfig -v wlan0: wlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 ether 34:e6:ad:16:bf:66 hwaddr 34:e6:ad:16:bf:66 inet6 fe80::36e6:adff:fe16:bf66%wlan0 prefixlen 64 tentative scopeid 0x2 inet 10.1.236.56 netmask 0xffffffff broadcast 10.1.236.56 nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL> media: IEEE 802.11 Wireless Ethernet OFDM/36Mbps mode 11g status: associated ssid "Open Hotspot" channel 6 (2437 MHz 11g) bssid 9c:1c:12:0b:10:73 regdomain FCC country US anywhere -ecm authmode OPEN -wps -tsn privacy OFF deftxkey UNDEF powersavemode OFF powersavesleep 100 txpower 30 txpowmax 50.0 -dotd rtsthreshold 2346 fragthreshold 2346 bmiss 10 11a ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 11b ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11g ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 turboA ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 turboG ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 sturbo ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 11na ucast NONE mgmt 12 MCS mcast 12 MCS maxretry 6 11ng ucast NONE mgmt 2 MCS mcast 2 MCS maxretry 6 half ucast NONE mgmt 3 Mb/s mcast 3 Mb/s maxretry 6 quarter ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11acg ucast NONE mgmt 1 Mb/s mcast 1 Mb/s maxretry 6 11ac ucast NONE mgmt 6 Mb/s mcast 6 Mb/s maxretry 6 scanvalid 60 -bgscan bgscanintvl 300 bgscanidle 250 roam:11a rssi 7dBm rate 12 Mb/s roam:11b rssi 7dBm rate 1 Mb/s roam:11g rssi 7dBm rate 5 Mb/s roam:turboA rssi 7dBm rate 12 Mb/s roam:turboG rssi 7dBm rate 12 Mb/s roam:sturbo rssi 7dBm rate 12 Mb/s roam:11na rssi 7dBm MCS 1 roam:11ng rssi 7dBm MCS 1 roam:half rssi 7dBm rate 6 Mb/s roam:quarter rssi 7dBm rate 3 Mb/s roam:11acg rssi 7dBm rate 64 Mb/s roam:11ac rssi
7dBm rate 64 Mb/s -pureg protmode CTS -ht -htcompat -ampdu ampdulimit 8k ampdudensity NA -amsdu -shortgi htprotmode RTSCTS -puren -smps -rifs -stbc -ldpc -vht -vht40 -vht80 -vht80p80 -vht160 wme -burst -dwds roaming MANUAL bintval 100 AC_BE cwmin 4 cwmax 10 aifs 3 txopLimit 0 -acm ack cwmin 4 cwmax 10 aifs 3 txopLimit 0 -acm AC_BK cwmin 4 cwmax 10 aifs 7 txopLimit 0 -acm ack cwmin 4 cwmax 10 aifs 7 txopLimit 0 -acm AC_VI cwmin 3 cwmax 4 aifs 2 txopLimit 94 -acm ack cwmin 3 cwmax 4 aifs 2 txopLimit 94 -acm AC_VO cwmin 2 cwmax 3 aifs 2 txopLimit 47 -acm ack cwmin 2 cwmax 3 aifs 2 txopLimit 47 -acm groups: wlanAlso echo nameserver 10.1.236.1 | resolvconf -a wlan0 returns:cp: /dev/null.bak: Operation not supportedReferences & notes/var/db/dhclient.leases.wlan* files might store interesting information. /etc/resolv.conf is empty.
Cannot access the captive portal in FreeBSD 12-CURRENT
The issue was solved w/ chroot (thanks to Jason Croyle for the tip). Perhaps "cheating" (as I made the wifi connection in another system), but an "honest" solution wasn't forthcoming. Luckily, I had another Linux installed on the computer, so I didn't even have to use a live USB. The procedure itself is described here. I also made a script, as suggested here (actually two: one to chroot and the other to clean up after the chroot). With the script I got the "No tty present and no askpass.." error on sudo chroot /mnt/sda8 /bin/bash, solved by sudo -S chroot /mnt/sda8 /bin/bash (where -S makes sudo read the password from stdin). I could also comment on how to recover from the havoc wreaked by tasksel, but given SE's "laser-like focus", I probably shouldn't. So just do yourself a favor and avoid tasksel.
When installing LAMP server, I went for tasksel, and got my DE (Xfce) and seemingly also display manager (LightDM) removed (this is a known "bug", believe it or not). Now I just need to connect to wifi to start reinstalling the removed components. However, it seems there's very little to start with. I have no wireless interface (ifconfig -a shows only enp0s and lo). I have no nmcli, iw, iwconfig, iwlist, wpa_supplicant, ifup or ifdown. By way of relevant tools, I have at least (and maybe only) ip, dhclient, netplan and ifconfig. systemctl status network-manager shows that network-manager.service is loaded and inactive (dead). I can activate it w/ systemctl restart network-manager, but that's it. I edited /etc/network/interfaces, to include the wireless info, as described here, and rebooted. This bought me nothing. lspci -v assures me I (still) have Intel Wireless 8265 / 8275 network controller. The question: Is there anything I could try to do to connect to wifi other than a complete reinstall of the distro? Something to get things going. I'm on Ubuntu 18.04 LTS
Connect to wifi w/out DE and DM
Bingo! Pass 'nodetach' as a command-line argument to pppd and the daemon will not fork itself. All that's needed then is a standard 'echo $?' on the next line of the script:
pppd call my_provider nodetach maxfail 3
echo $?
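With nodetach in place, the pseudocode from the question can be fleshed out roughly like this. The exit-code list is the one from the question (see pppd(8) for what each code means), and fake_pppd/fake_reset below are stand-in stubs for the real pppd invocation and the USB-reset tool:

```shell
# Supervise a foreground pppd: reset the modem's USB port on
# modem/serial-related exit codes, stop on anything else.
ppp_supervise() {
    ppp_cmd=$1      # e.g. 'pppd call my_provider nodetach maxfail 3'
    reset_cmd=$2    # e.g. your USB-port reset tool
    while :; do
        $ppp_cmd
        rc=$?
        case $rc in
            6|7|8|10|15|16) $reset_cmd ;;   # modem trouble: reset, retry
            *) return "$rc" ;;              # clean exit or unrelated error
        esac
    done
}

# Demonstration with stubs: "pppd" fails twice with code 8, then succeeds.
attempts=0 resets=0
fake_pppd() {
    attempts=$((attempts + 1))
    if [ "$attempts" -lt 3 ]; then return 8; fi
    return 0
}
fake_reset() { resets=$((resets + 1)); }

ppp_supervise fake_pppd fake_reset
echo "attempts=$attempts resets=$resets"
```

In production you'd call it as ppp_supervise 'pppd call my_provider nodetach maxfail 3' /path/to/reset-usb, perhaps preceded by a wait-for-device loop.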
I'm designing a reporting system on a Raspberry Pi which connects to the world through a 3G USB modem controlled by pppd. 99.999% of the time the connection works OK, but sometimes it drops, and further reconnect attempts fail unless the modem is re-plugged physically. The production box will work remotely with no physical access to it, so I have to manage this somehow. My idea is to run, at system start, some kind of script in a separate thread; see the pseudocode below:
while(true){
    wait_for_modem_device_to_appear
    start_pppd # maybe limiting retries not to the default 10, but to, say, 3
    wait_for_pppd_to_finish
    if(exitcode_is_one_of(6,7,8,10,15,16)){
        reset_usb_port_programmatically #I have tools for that
    }else{
        break
    }
}
How can I get pppd's exit code? Should I use another approach (and which one)?
get pppd exit code - how?
Is your connection wireless? Look here https://help.ubuntu.com/community/KVM/Networking. It says:Warning: Network bridging will not work when the physical network device (e.g., eth1, ath0) used for bridging is a wireless device (e.g., ipw3945), as most wireless device drivers do not support bridging!
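On Linux you can check up front whether the interface you picked for the bridge is wireless; the kernel exposes this under sysfs. A small sketch (the attribute names checked here are the common ones, but this is driver-dependent):

```shell
# An interface is (usually) wireless if the kernel gives it a
# 'wireless' or 'phy80211' attribute directory under /sys/class/net.
is_wireless() {
    [ -d "/sys/class/net/$1/wireless" ] || [ -d "/sys/class/net/$1/phy80211" ]
}

# Report every interface on the machine:
for path in /sys/class/net/*; do
    iface=${path##*/}
    if is_wireless "$iface"; then
        echo "$iface: wireless -- bridging will likely fail"
    else
        echo "$iface: wired/virtual -- bridging should be possible"
    fi
done
```

If your only uplink is wireless, NAT or routed networking is the usual fallback.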
I am trying to deploy a web app in a VM, but if I use a NAT adapter, that VM is assigned a private IP. I want to use the bridge adapter to give the VM a real IP in my DHCP network. Both host OS and guest OS are ubuntu 20.04 LTS and when I start the VM with the bridge adapter I only get a message saying: Connection Failed Activation of network connection failed I've searched for a solution for a while now but I can't seem to find one. My current config of the network adapter is:How can I get my VM to have an internet connection with the bridge adapter?
Ubuntu VM with bridged adapter not connecting to internet
The session is dead, and the SSH server just hasn't timed out yet (default timeouts are insanely high, as they assume a very unreliable network). As a general rule, there isn't any way to directly reconnect to a disconnected SSH session, just like there's no way to connect to and take over a session running on a different virtual terminal. For future usage, though, I would suggest looking into the programs screen and/or tmux (not sure which of the two is packaged on CentOS, but if you have both as an option, I would personally recommend screen). Both programs are primarily designed for quickly switching between multiple shells started from a single remote session (in essence, they replicate virtual terminal functionality, but with different key bindings), but they have another rather useful feature: you can disconnect from a screen or tmux instance, keep it running, and reconnect later. By starting a screen (or tmux) session immediately after you log in over SSH with PuTTY, you can then reconnect to that session if the connection gets dropped.
I need to reconnect to a PuTTY session. Let me give an example: I was installing Python manually on my CentOS machine. While I was running the command make, Python was still compiling when I suddenly lost the connection and the PuTTY session was disconnected. I reconnected, checked the terminal, and ran the command who; I can see there are 2 sessions connected and 1 is idle.
reconnect to disconnected putty session or connect to idle session in linux
The cpio block skip method given doesn't work reliably. That's because the initrd images I was getting myself didn't have both archives concatenated on a 512 byte boundary. Instead, do this: apt-get install binwalk legolas [mc]# binwalk initrd.img DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 0 0x0 ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000" 120 0x78 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000" 244 0xF4 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000" 376 0x178 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00005000" 21004 0x520C ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000" 21136 0x5290 gzip compressed data, from Unix, last modified: Sat Feb 28 09:46:24 2015Use the last number (21136) which is not on a 512 byte boundary for me: legolas [mc]# dd if=initrd.img bs=21136 skip=1 | gunzip | cpio -tdv | head drwxr-xr-x 1 root root 0 Feb 28 09:46 . drwxr-xr-x 1 root root 0 Feb 28 09:46 bin -rwxr-xr-x 1 root root 554424 Dec 17 2011 bin/busybox lrwxrwxrwx 1 root root 7 Feb 28 09:46 bin/sh -> busybox -rwxr-xr-x 1 root root 111288 Sep 23 2011 bin/loadkeys -rwxr-xr-x 1 root root 2800 Aug 19 2013 bin/cat -rwxr-xr-x 1 root root 856 Aug 19 2013 bin/chroot -rwxr-xr-x 1 root root 5224 Aug 19 2013 bin/cpio -rwxr-xr-x 1 root root 3936 Aug 19 2013 bin/dd -rwxr-xr-x 1 root root 984 Aug 19 2013 bin/dmesg
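If binwalk isn't installed, you can find the split point yourself by searching for the gzip magic bytes (1f 8b 08). A sketch of the idea, demonstrated on a synthetic two-part image so it is runnable anywhere (the prefix and contents are made up for the demo):

```shell
# Build a fake initrd: an uncompressed prefix (standing in for the
# early microcode cpio) followed by a gzipped payload.
img=$(mktemp)
printf 'EARLY-CPIO-PREFIX-' > "$img"                      # 18 bytes
printf 'contents of the real initramfs' | gzip >> "$img"

# Byte offset of the first gzip header (\x1f \x8b \x08):
off=$(LC_ALL=C grep -abo "$(printf '\037\213\010')" "$img" \
      | head -n 1 | cut -d: -f1)
echo "gzip member starts at byte $off"

# tail -c +N is 1-based, so skip exactly $off bytes:
extracted=$(tail -c +"$((off + 1))" "$img" | gunzip)
echo "$extracted"
rm -f "$img"
```

On a real initrd.img you'd point the grep at the image and pipe the tail into gunzip | cpio -idv; if there are several gzip members, drop the head -n 1 and try each reported offset.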
I'm using debian live-build to work on a bootable system. By the end of the process I get the typical files used to boot a live system: a squashfs file, some GRUB modules and config files, and an initrd.img file. I can boot just fine using those files, passing the initrd to the kernel via initrd=/path/to/my/initrd.img on the bootloader command line. But when I try to examine the contents of my initrd image, like so:
$ file initrd.img
initrd.img: ASCII cpio archive (SVR4 with no CRC)
$ mkdir initTree && cd initTree
$ cpio -idv < ../initrd.img
the file tree I get looks like this:
$ tree --charset=ASCII
.
`-- kernel
    `-- x86
        `-- microcode
            `-- GenuineIntel.bin
Where is the actual filesystem tree, with the typical /bin, /etc, /sbin ... containing the actual files used during boot?
Why is it that my initrd only has one directory, namely, 'kernel'?
I'm not 100% sure, but as the initial ramdisk needs to be unpacked by the kernel during boot, cpio is used because it is already implemented in kernel code.
I am making my own initramfs following the Gentoo wiki. Instead of the familiar tar and gzip, the page is telling me to use cpio and gzip. Wikipedia says that cpio is used by the 2.6 kernel's initramfs, but does not explain why. Is this just a convention or is cpio better for initramfs? Can I still use tar and gzip?
Why use cpio for initramfs?
So recently I wanted to do this with tar. Some investigation indicated to me that it was more than a little nonsensical that I couldn't. I did come up with this weird split --filter="cat >file; tar -r ..." thing, but, well, it was terribly slow. And the more I read about tar the more nonsensical it seemed. You see, tar is just a concatenated list of records. The constituent files are not altered in any way - they're whole within the archive. But they are blocked off on 512-byte block boundaries, and preceding every file there is a header. That's it. The header format is really, very simple as well. So, I wrote my own tar. I call it... shitar. z() (IFS=0; printf '%.s\\0' $(printf "%.$(($1-${#2}))d")) chk() (IFS=${IFS#??}; set -f; set -- $( printf "$(fmt)" "$n" "$@" '' "$un" "$gn" ); IFS=; a="$*"; printf %06o "$(($( while printf %d+ "'${a:?}"; do a=${a#?}; done 2>/dev/null )0))") fmt() { printf '%s\\'"${1:-n}" %s "${1:+$(z 99 "$n")}%07d" \ %07o %07o %011o %011o "%-${1:-7}s" ' 0' "${1:+$(z 99)}ustar " %s \ "${1:+$(z 31 "$un")}%s" }That's the meat and potatoes, really. It writes the headers and computes the chksum - which, relatively speaking, is the only hard part. It does the ustar header format... maybe. At least, it emulates what GNU tar seems to think is the ustar header format to the point that it does not complain. And there's more to it, it's just that I haven't really coagulated it yet. Here, I'll show you: for f in 1 2; do echo hey > file$f; done { tar -cf - file[123]; echo .; } | tr \\0 \\n | grep -b .0:file1 #filename - first 100 bytes 100:0000644 #octal mode - next 8 108:0001750 #octal uid, 116:0001750 #gid - next 16 124:00000000004 #octal filesize - next 12 136:12401536267 #octal epoch mod time - next 12 148:012235 #chksum - more on this 155: 0 #file type - gnu is weird here - so is shitar 257:ustar #magic string - header type 265:mikeserv #owner 297:mikeserv #group - link name... 
others shitar doesnt do 512:hey #512-bytes - start of file 1024:file2 #512 more - start of header 2 1124:0000644 1132:0001750 1140:0001750 1148:00000000004 1160:12401536267 1172:012236 1179: 0 1281:ustar 1289:mikeserv 1321:mikeserv 1536:hey 10240:. #default blocking factor 20 * 512That's tar. Everything's padded with \0nulls so I just turn em into \newlines for readability. And shitar: #the rest, kind of, calls z(), fmt(), chk() + gets $mdata and blocks w/ dd for n in file[123] do d=$n; un=$USER; gn=$(id --group --name) set -- $(stat --printf "%a\n%u\n%g\n%s\n%Y" "$n") printf "$(fmt 0)" "$n" "$@" "$(chk "$@")" "$un" "$gn" printf "$(z $((512-298)) "$gn")"; cat "$d" printf "$(x=$(($4%512));z $(($4>512?($x>0?$x:512):512-$4)))" done | { dd iflag=fullblock conv=sync bs=10240 2>/dev/null; echo .; } | tr \\0 \\n | grep -b .OUTPUT 0:file1 #it's the same. I shortened it. 100:0000644 #but the whole first file is here 108:0001750 116:0001750 124:00000000004 136:12401536267 148:012235 #including its checksum 155: 0 257:ustar 265:mikeserv 297:mikeserv 512:hey 1024:file2 ... 1172:012236 #and file2s checksum ... 1536:hey 10240:.I say kind of up there because that isn't shitar's purpose - tar already does that beautifully. I just wanted to show how it works - which means I need to touch on the chksum. If it wasn't for that I would just be dding off the head of a tar file and done with it. That might even work sometimes, but it gets messy when there are multiple members in the archive. Still, the chksum is really easy. First, make it 7 spaces - (which is a weird gnu thing, I think, as the spec says 8, but whatever - a hack is a hack). Then add up the octal values of every byte in the header. That's your chksum. So you need the file metadata before you do the header, or you don't have a chksum. And that's a ustar archive, mostly. Ok. 
Now, what it is meant to do: cd /tmp; mkdir -p mnt for d in 1 2 3 do fallocate -l $((1024*1024*500)) disk$d lp=$(sudo losetup -f --show disk$d) sync sudo mkfs.vfat -n disk$d "$lp" sudo mount "$lp" mnt echo disk$d file$d | sudo tee mnt/file$d sudo umount mnt sudo losetup -d "$lp" doneThat makes three 500M disk images, formats and mounts each, and writes a file to each. for n in disk[123] do d=$(sudo losetup -f --show "$n") un=$USER; gn=$(id --group --name) set -- $(stat --printf "%a\n%u\n%g\n$(lsblk -bno SIZE "$d")\n%Y" "$n") printf "$(fmt 0)" "$n" "$@" "$(chk "$@")" "$un" "$gn" printf "$(z $((512-298)) "$gn")" sudo cat "$d" sudo losetup -d "$d" done | dd iflag=fullblock conv=sync bs=10240 2>/dev/null | xz >disks.tar.xzNote - apparently block devices will just always block correctly. Pretty handy. That tar's the contents of the disk device files in-stream and pipes the output to xz. ls -l disk* -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk1 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk2 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk3 -rw-r--r-- 1 mikeserv mikeserv 229796 Sep 3 01:05 disks.tar.xzNow, the moment of truth... xz -d <./disks.tar.xz| tar -tvf - -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk1 -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk2 -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk3Hooray! Extraction... xz -d <./disks.tar.xz| tar -xf - --xform='s/[123]/1&/' ls -l disk* -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk1 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk11 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk12 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk13 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk2 -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep 3 01:01 disk3 -rw-r--r-- 1 mikeserv mikeserv 229796 Sep 3 01:05 disks.tar.xzComparison... cmp disk1 disk11 && echo yay || echo shite yayAnd the mount... 
sudo mount disk13 mnt cat mnt/* disk3 file3And so, in this case, shitar performs ok, I guess. I'd rather not go into all of the things which it won't do well. But, I will say - don't do newlines in the filenames at the least. You can also do - and maybe should, considering the alternatives I've offered -this with squashfs. Not only do you get the single archive built from the stream - but it's mountable and builtin to the kernel's vfs: From pseudo-file.example: # Copy 10K from the device /dev/sda1 into the file input. Ordinarily # Mksquashfs given a device, fifo, or named socket will place that special file # within the Squashfs filesystem, this allows input from these special # files to be captured and placed in the Squashfs filesystem. input f 444 root root dd if=/dev/sda1 bs=1024 count=10# Creating a block or character device examples# Create a character device "chr_dev" with major:minor 100:1 and # a block device "blk_dev" with major:minor 200:200, both with root # uid/gid and a mode of rw-rw-rw. chr_dev c 666 root root 100 1 blk_dev b 666 0 0 200 200You might also use btrfs (send|receive) to stream out a subvolume into whatever stdin-capable compressor you liked. This subvolume need not exist before you decide to use it as compression container, of course. Still, about squashfs... I don't believe I'm doing this justice. Here's a very simple example: cd /tmp; mkdir ./emptydir mksquashfs ./emptydir /tmp/tmp.sfs -p \ 'file f 644 mikeserv mikeserv echo "this is the contents of file"' Parallel mksquashfs: Using 6 processors Creating 4.0 filesystem on /tmp/tmp.sfs, block size 131072. [==================================================================================|] 1/1 100% Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072 compressed data, compressed metadata, compressed fragments,... ###... 
###AND SO ON ###...echo '/tmp/tmp.sfs /tmp/imgmnt squashfs loop,defaults,user 0 0'| sudo tee -a /etc/fstab >/dev/nullmount ./tmp.sfs cd ./imgmnt lstotal 1 -rw-r--r-- 1 mikeserv mikeserv 29 Aug 20 11:34 filecat filethis is the contents of filecd .. umount ./imgmntThat's only the inline -p argument for mksquash. You can source a file with -pf containing as many of those as you like. The format is simple - you define a target file's name/path in the new archive's filesystem, you give it a mode and an owner, and then you tell it which process to execute and read stdout from. You can create as many as you like - and you can use LZMA, GZIP, LZ4, XZ... hmm there are more... compression formats as you like. And the end result is an archive into which you cd. More on the format though: This is, of course, not just an archive - it is a compressed, mountable Linux file-system image. Its format is the Linux kernel's - it is a vanilla kernel supported filesystem. In this way it is as common as the vanilla Linux kernel. So if you told me you were running a vanilla Linux system on which the tar program was not installed I would be dubious - but I would probably believe you. But if you told me you were running a vanilla Linux system on which the squashfs filesystem was not supported I would not believe you.
I have six Linux logical volumes that together back a virtual machine. The VM is currently shut down, so it's easy to take consistent images of them. I'd like to pack all six images together in an archive. Trivially, I could do something like this:
cp /dev/Zia/vm_lvraid_* /tmp/somedir
tar c /tmp/somedir | whatever
But that of course creates an extra copy. I'd like to avoid the extra copy. The obvious approach:
tar c /dev/Zia/vm_lvraid_* | whatever
does not work, as tar recognizes the files as special (symlinks in this case) and basically stores the ln -s in the archive. Or, with --dereference or directly pointed at /dev/dm-X, it recognizes them as special (device files) and basically stores the mknod in the archive. I've searched for command-line options to tar to override this behavior, and couldn't find any. I also tried cpio, same problem, and couldn't find any options to override it there, either. I also tried 7z (ditto). Same with pax. I even tried zip, which just got itself confused. edit: Looking at the source code of GNU tar and GNU cpio, it appears neither of them can do this. At least, not without serious trickery (the special handling of device files can't be disabled). So, suggestions of serious trickery would be appreciated, or alternate utilities. TLDR: Is there some archiver that will pack multiple disk images together (taken from raw devices) and stream that output, without making extra on-disk copies? My preference would be output in a common format, like POSIX or GNU tar.
How to convince tar (etc.) to archive block device contents?
It's very reliable and supported by all kernel versions that support initrd, AFAIK. It's a feature of the cpio archives that initramfs images are made up of: cpio just keeps on extracting its input. We might know the file is two cpio archives one after the other, but cpio just sees it as a single input stream. Debian advises use of exactly this method (appending another cpio to the initramfs) to add binary-blob firmware to their installer initramfs. For example: DebianInstaller / NetbootFirmware | Debian WikiInitramfs is essentially a concatenation of gzipped cpio archives which are extracted into a ramdisk and used as an early userspace by the Linux kernel. Debian Installer's initrd.gz is in fact a single gzipped cpio archive containing all the files the installer needs at boot time. By simply appending another gzipped cpio archive - containing the firmware files we are missing - we get the show on the road!
I'm modifying a bunch of initramfs archives from different Linux distros in which normally only one file is being changed. I would like to automate the process without switching to the root user to extract all files inside the initramfs image and pack them again. First I tried to generate a list of files for gen_init_cpio without extracting all contents of the initramfs archive, i.e. parsing the output of cpio -tvn initrd.img (like ls -l output) through a script which changes all permissions to octal and arranges the output into the format gen_init_cpio wants, like:

dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0

This involves some replacements and the script may be hard for me to write, so I've found a better way and I'm asking how safe and portable it is: In some distros we have an initramfs file with concatenated parts, and apparently the kernel parses the whole file, extracting all parts packed at a 1-byte boundary, so there is no need to pad each part to a multiple of 512 bytes. I thought this 'feature' could be useful to avoid recreating the archive when modifying files inside it. Indeed it works, at least for Debian and CloneZilla. For example, if we have modified the /init file of the initrd.gz of Debian 8.2.0, we can append it to the initrd.gz image with:

$ echo ./init | cpio -H newc -o | gzip >> initrd.gz

so initrd.gz has two concatenated archives, the original and its modifications. Let's see the result of binwalk:

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             gzip compressed data, maximum compression, has original file name: "initrd", from Unix, last modified: Tue Sep 1 09:33:08 2015
6299939       0x602123        gzip compressed data, from Unix, last modified: Tue Nov 17 16:06:13 2015

It works perfectly. But is it reliable? What restrictions do we have when appending data to initramfs files?
Is it safe to append without padding the original archive to a multiple of 512 bytes? From which kernel version is this feature supported?
Appending files to initramfs image - reliable?
It's not the kernel that's generating the initramfs, it's cpio. So what you're really looking for is a way to build a cpio archive that contains devices, symbolic links, etc. Your method 2 uses usr/gen_init_cpio in the kernel source tree to build the cpio archive during the kernel build. That's indeed a good way of building a cpio archive without having to populate the local filesystem first (which would require being root to create all the devices, or using fakeroot or a FUSE filesystem which I'm not sure has been written already). All you're missing is generating the input file to gen_init_cpio as a build step. E.g. in shell:

INITRAMFS_SOURCE_DIR=/home/brandon/rascal-initramfs
exec >initramfs_source.txt
echo "dir /bin 755 0 0"
echo "file /bin/busybox $INITRAMFS_SOURCE_DIR/bin/busybox 755 0 0"
for x in sh ls cp …; do
  echo "slink /bin/$x busybox 777 0 0"
done
# etc …

If you want to reflect the symbolic links to busybox that are present in your build tree, here's a way (I assume you're building on Linux):

( cd "$INITRAMFS_SOURCE_DIR/bin" &&
  for x in *; do
    if [ "$(readlink "$x")" = busybox ]; then
      echo "slink /bin/$x busybox 777 0 0"
    fi
  done )

Here's a way to copy all your symbolic links:

find "$INITRAMFS_SOURCE_DIR" -type l -printf 'slink %p %l 777 0 0\n'

For busybox, maybe your build tree doesn't have the symlinks, and instead you want to create one for every utility that you've compiled in. The simplest way I can think of is to look through your busybox build tree for .*.o.cmd files: there's one per generated command.

find /path/to/busybox/build/tree -name '.*.cmd' -exec sh -c '
  for x; do
    x=${x##*/.}
    echo "slink /bin/${x%%.*} busybox 777 0 0"
  done
' _ {} +
Having been directed to initramfs by an answer to my earlier question (thanks!), I've been working on getting initramfs working. I can now boot the kernel and drop to a shell prompt, where I can execute busybox commands, which is awesome. Here's where I'm stuck-- there are (at least) two methods of generating initramfs images:

1. By passing the kernel a path to a prebuilt directory hierarchy to be compressed
2. By passing the kernel the name of a file that lists the files to be included.

The second method seemed a little cleaner, so I've been using that. Just for reference, here's my file list so far:

dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
nod /dev/loop0 644 0 0 b 7 0
dir /bin 755 1000 1000
slink /bin/sh busybox 777 0 0
file /bin/busybox /home/brandon/rascal-initramfs/bin/busybox 755 0 0
dir /proc 755 0 0
dir /sys 755 0 0
dir /mnt 755 0 0
file /init /home/brandon/rascal-initramfs/init.sh 755 0 0

Unfortunately, I have learned that busybox requires a long list of links to serve as aliases for all of its different commands. Is there a way to generate the list of all these commands so I can add it to my file list? Alternatively, I could switch to method 1, using the prebuilt directory hierarchy, but I'm not sure how to make the /dev nodes in that case. Both of these paths seem messy. Is there an elegant solution to this?
How to generate initramfs image with busybox links?
Debian with the amd64-microcode / intel-microcode packages installed seems to use some kind of mess of an uncompressed cpio archive containing the CPU microcode, followed by a gzip-compressed cpio archive with the actual initrd contents. The only way I've ever been able to extract it is by using binwalk (apt install binwalk), which can both correctly list the structure:

binwalk /path/to/initrd

example output:

host ~ # binwalk /boot/initrd.img-5.10.0-15-amd64

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"
120           0x78            ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"
244           0xF4            ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"
376           0x178           ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/.enuineIntel.align.0123456789abc", file name length: "0x00000036", file size: "0x00000000"
540           0x21C           ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00455C00"
4546224       0x455EB0        ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"
4546560       0x456000        gzip compressed data, has original file name: "mkinitramfs-MAIN_dTZaRk", from Unix, last modified: 2022-06-14 14:02:57
37332712      0x239A6E8       MySQL ISAM compressed data file Version 9

and extract the separate parts:

binwalk -e /path/to/initrd

(which prints the same listing as above while extracting). This'll give you the separate parts in separate files, and now you can finally extract the proper cpio archive:

host ~ # ls -l _initrd.img-5.10.0-15-amd64.extracted
insgesamt 187M
drwxr-xr-x 3 root root 4,0K 14. Jun 17:53 cpio-root/
-rw-r--r-- 1 root root 114M 14. Jun 17:53 mkinitramfs-MAIN_dTZaRk
-rw-r--r-- 1 root root  39M 14. Jun 17:53 0.cpio
-rw-r--r-- 1 root root  35M 14. Jun 17:53 mkinitramfs-MAIN_dTZaRk.gz

host ~/_initrd.img-5.10.0-15-amd64.extracted # mkdir extracted
host ~/_initrd.img-5.10.0-15-amd64.extracted # cd extracted
host ~/_initrd.img-5.10.0-15-amd64.extracted/extracted # cat ../mkinitramfs-MAIN_dTZaRk | cpio -idmv --no-absolute-filenames
[...]
host ~/_initrd.img-5.10.0-15-amd64.extracted/extracted # ll
insgesamt 28K
lrwxrwxrwx 1 root root    7 14. Jun 17:55 bin -> usr/bin/
drwxr-xr-x 3 root root 4,0K 14. Jun 17:55 conf/
drwxr-xr-x 7 root root 4,0K 14. Jun 17:55 etc/
lrwxrwxrwx 1 root root    7 14. Jun 17:55 lib -> usr/lib/
lrwxrwxrwx 1 root root    9 14. Jun 17:55 lib32 -> usr/lib32/
lrwxrwxrwx 1 root root    9 14. Jun 17:55 lib64 -> usr/lib64/
lrwxrwxrwx 1 root root   10 14. Jun 17:55 libx32 -> usr/libx32/
drwxr-xr-x 2 root root 4,0K 14. Jun 16:02 run/
lrwxrwxrwx 1 root root    8 14. Jun 17:55 sbin -> usr/sbin/
drwxr-xr-x 8 root root 4,0K 14. Jun 17:55 scripts/
drwxr-xr-x 8 root root 4,0K 14. Jun 17:55 usr/
-rwxr-xr-x 1 root root 6,2K 14. Jan  2021 init*
initramfs archives on Linux can consist of a series of concatenated, gzipped cpio files. Given such an archive, how can one extract all the embedded archives, as opposed to only the first one? The following is an example of a pattern which, while it appears to have potential to work, extracts only the first archive:

while gunzip -c | cpio -i; do :; done <input.cgz

I've also tried the skipcpio helper from dracut to move the file pointer past the first cpio image, but the following results in a corrupt stream (not at the correct point in the input) being sent to cpio:

# this isn't ideal -- presumably would need to rerun with an extra skipcpio in the pipeline
# ...until all files in the archive have been reached.
gunzip -c <input.cgz | skipcpio /dev/stdin | cpio -i
Extracting concatenated cpio archives
Thanks for your answers; they were useful, but I figured out my own solution. Recreating the initrd image can be done with fakeroot-ng (and probably with fakeroot also). The basic idea of the tool is to wrap all system calls, so all programs executed within the fakeroot environment think they are run by root. I call part of my script within the fakeroot environment - unpack the initramfs, perform all changes and pack it again. All privileges are set correctly, and root is the owner of all files. fakeroot-ng is available at: http://fakeroot-ng.lingnu.com/index.php/Home_Page
I've got a problem with rebuilding the initrd image as a user. Firstly, when I try to "unpack" the original initrd image:

cpio -idm < initrd-base
cpio: dev/tty8: Cannot mknod: Operation not permitted
cpio: dev/tty3: Cannot mknod: Operation not permitted
cpio: dev/zero: Cannot mknod: Operation not permitted
cpio: dev/loop0: Cannot mknod: Operation not permitted
cpio: dev/loop4: Cannot mknod: Operation not permitted
cpio: dev/loop7: Cannot mknod: Operation not permitted
cpio: dev/loop5: Cannot mknod: Operation not permitted
cpio: dev/loop2: Cannot mknod: Operation not permitted
cpio: dev/tty9: Cannot mknod: Operation not permitted
cpio: dev/tty4: Cannot mknod: Operation not permitted
cpio: dev/null: Cannot mknod: Operation not permitted
cpio: dev/loop6: Cannot mknod: Operation not permitted
cpio: dev/loop1: Cannot mknod: Operation not permitted
cpio: dev/console: Cannot mknod: Operation not permitted
cpio: dev/loop3: Cannot mknod: Operation not permitted
cpio: dev/tty1: Cannot mknod: Operation not permitted
133336 blocks

How can I get rid of these warnings? Secondly - I'm not sure how file ownership will be handled. After I unpack it, it seems that everything belongs to the current user. How will the initrd look after repacking it? I prefer not to alter the standard access rights.
Unpack, modify and pack initrd as a user
You should use the -d option to let cpio create the leading directories (path/to) if they don't exist: cpio -id < archive.cpio path/to/fileAlso, bsdtar (the regular tar on FreeBSD) knows how to extract cpio archives, whether compressed or not.
I have a cpio archive with lots of files and I need to extract only one file, not all. With tar I could just use tar -xf archive.tar path/to/file, but that does not work with cpio:

cpio -i < archive.cpio path/to/file
bash: path/to/file: No such file or directory

Does anyone know how to extract just a single file from a cpio archive?
How to extract a single file from a cpio archive?
You can do this with GNU cpio:

$ find . | cpio -o -H newc > /tmp/file
40 blocks
$ file /tmp/file
/tmp/file: ASCII cpio archive (SVR4 with no CRC)
I have a file /boot/initramfs.gz. Extracting it with tar -xzvf initramfs.gz -C ./, I got a file initramfs:

$ file initramfs
initramfs: ASCII cpio archive (SVR4 with no CRC)

It can be opened using ark, but I want to change some files inside this initramfs. I extracted it using ark and got a folder initramfs. Now I want to save it back as before. How do I create an ASCII cpio archive (SVR4 with no CRC) like the original?
How to create ASCII cpio archive (SVR4 with no CRC)?
Archived with relative paths

I would advise against running that type of command at your root level, /. That's asking for trouble. I always run cpio -idvm related commands in their own directories, and then use mv or cp to put the files where they need to be manually. You can also use the method I described in this other U&L Q&A titled: How do I install TazPkg in SliTaz Linux, which also makes use of cpio.

Archived with absolute paths

If the archive was built with absolute paths you can tell cpio with the --no-absolute-filenames switch to block it from extracting into /.

$ mkdir /tmp/cpio-root
$ cd /tmp/cpio-root
$ cat rootfs.img | cpio -idvm --no-absolute-filenames

References

CPIO archive with absolute paths - extraction to a different directory?
Yesterday I was making some experiments on SliTaz. It uses multiple initrd.img's to store files/changes. I wanted to extract one of its initrd.gz images (which is a cpio archive) to a folder, edit/remove files, and repack it again. I used this code:

cat rootfs.img | cpio -idvm

Then all files were extracted to my root filesystem. My whole OS is corrupted. (What an embarrassing situation...) What should I do to perform such operations safely but in an easy way? Chroot? LXC? (VirtualBox is the last resort)
What are the techniques to extract files safely?
Use pax and its -s option to rename files as they are added to the archive. Pax is POSIX's replacement for the traditional utilities tar and cpio; some Linux distributions don't install it by default but it should always be available as a package. pax -w -x cpio -s '~^[/]*~~' root-directory >archive.cpio
I am trying to modify a file system image packed with cpio. For that reason I first need to extract and later repack the image. As the image contains a whole file system, all the files are given as absolute file names, so I can't directly pack and unpack it, since it would conflict with my machine's root system. So when unpacking I used --no-absolute-filenames to unpack it to a working directory of my choice. Now I want to pack it again. If I just pack it I'd only get files like:

/path/to/working/directory/file1
/path/to/working/directory/file2
/path/to/working/directory/file3

or

./file1
./file2
./file3

instead of

/file1
/file2
/file3

Does anyone know how I could get the desired output? Google didn't help me so far. I really need absolute path names in the output file, because I am using it for a u-boot uImage file system image, and that requires the paths to be absolute, or it won't boot.
Create cpio file with different absolute directory
Most cpio implementations are dumb and do not manage directory permissions while unpacking archives. If a directory has no write access and the cpio archive is in the usual order from find, the directory comes first in the cpio archive and is unpacked first. When such a "readonly" directory has been unpacked and given its permissions, there is no permission to put files into it when the directory's content is later seen in the archive and is about to be unpacked.

One solution for this cpio problem is to create archives where the content of a directory comes first and the related directory comes after its content. This causes cpio to create the missing directory (if called with -d to create missing directories) with default permissions, extract the files inside from the archive and later, when the directory itself is seen in the archive, set the permissions to "readonly".

Another solution is to extract the archive with a dumb cpio implementation as root, since root is permitted to create files even inside a readonly directory.

The third solution is to use a modern cpio implementation like the cpio emulation inside star. star remembers the directory permissions from the archive, but creates the directory with intermediate write permissions first. The remembered real directory permissions are applied by star, delayed, after the files in the archive have been extracted into the directory with intermediate write permission.
From the info cpio page:

If you wanted to archive an entire directory tree, the find command can provide the file list to cpio:

% find . -print -depth | cpio -ov > tree.cpio

The '-depth' option forces 'find' to print the entries in a directory before printing the directory itself. This limits the effects of restrictive directory permissions by printing the directory entries in a directory before the directory name itself.

What does this last part mean? How does printing the directory entries in a directory before the directory name itself limit the effects of restrictive directory permissions?
Why do we use `find -depth` with `cpio`
initramfs images contain multiple cpio archives; the name of your file suggests you’re using a Ubuntu derivative, so the simplest option for you to list the full contents is to use lsinitramfs: lsinitramfs initrd.img-5.4.0-18-genericTo extract the contents, use unmkinitramfs: unmkinitramfs initrd.img-5.4.0-18-generic initramfsThis will extract all the files to the initramfs directory.
I can see my initrd occupies almost 90 MB of disk, but after extracting it via cpio it contains only a 30 KB microcode file:

$ cpio -it < initrd.img-5.4.0-18-generic
.
kernel
kernel/x86
kernel/x86/microcode
kernel/x86/microcode/AuthenticAMD.bin
62 blocks

I know that there should be a lot of files and tools which are needed by the kernel in the first stage of booting, but I cannot find anything useful in it.

$ file initrd.img-5.4.0-18-generic
initrd.img-5.4.0-18-generic: ASCII cpio archive (SVR4 with no CRC)

I took a look at here and here and this question, but these are too old and don't work for me. My initrd.img is not a gzip archive. How do I extract that file properly? I use kernel v5.4.0. Thanks.
Problem extracting the "initrd" archive in kernel 5.4
cpio has a -E (--pattern-file) option, which allows you to read the list of filenames from a file instead of (or as well as) providing the filenames on the command line. For example:

cpio -icuBdmv -E files-to-extract < preserved.cpio

cpio also has -F to specify an archive name (instead of using stdin/stdout). -I and -O are similar but replace only stdin or only stdout respectively. E.g. you can specify the archive with -F and name the pattern file with -E:

cpio -icuBdmv -E files-to-extract -F preserved.cpio

BTW, with many GNU programs (including cpio), the man pages are almost useless, but the tools are well documented in .info files. On some Linux distributions, the info docs are in separate packages (e.g. on Debian, cpio-doc), so you'll need to install them as well as an info reader (such as GNU info or pinfo). Anyway, here are some relevant extracts from the cpio info pages:

-E FILE, --pattern-file=FILE
Read additional patterns specifying filenames to extract or list from FILE. The lines of FILE are treated as if they had been non-option arguments to cpio. This option is used in copy-in mode,

and

-F ARCHIVE, --file=ARCHIVE
Archive filename to use instead of standard input or output. To use a tape drive on another machine as the archive, use a filename that starts with HOSTNAME:, where HOSTNAME is the name or IP address of the machine. The hostname can be preceded by a username and an @ to access the remote tape drive as that user, if you have permission to do so (typically an entry in that user's ~/.rhosts file).
We have a cpio archive that was created by generating a file that contains a list of absolute paths to be included in the archive (one absolute path per line of a plain text file). The command to generate the archive is essentially:

cat list-of-files | cpio -ocvB > preserved.cpio

We later need to extract files from that archive. We again want to use a file that contains a list of files to be extracted (some subset of all of the files in the archive, again with the format of one absolute path per line of a plain text file).

cpio -icuBdmv `cat files-to-extract` < preserved.cpio

This works fine unless one of the paths contains a space. Generating the archive is fine, but when extracting the files, any file with a space in the name is silently skipped. All other paths in files-to-extract are successfully extracted. I've been playing at the console trying to come up with some way to work around this, but to no avail. If I specify a single file with a space in the name and wrap it in quotes, the file is extracted successfully:

# This extracts the file successfully
cpio -icuBdmv "/foo/bar/some file.txt" < preserved.cpio

So I could read files-to-extract in a loop and extract each file one at a time, but these archives can be large (multiple GB), so that is dreadfully slow. I tried a couple of things to either escape the spaces in file paths or quote each path value, but nothing I've tried has worked.

# Still skips the file with spaces:
cpio -icuBdmv `cat files-to-extract | sed 's/ /\\ /'` < preserved.cpio

# Extracts no files, even the ones without spaces:
cpio -icuBdmv `cat files-to-extract | sed 's/\(.*\)/"\1"/'` < preserved.cpio

It would really be nice to be able to do this extraction with a single run of cpio rather than having to loop and extract one file at a time. I'm sure this is just a problem with how I am providing those values to cpio, but for the life of me I cannot figure out why.
Probably not relevant, but just for completeness: this is on CentOS 7.2, using GNU cpio 2.11
Extract files with cpio where one or more paths may contain spaces
Piping should be enough. Just do:

tar -cvjf - /path/to/your/files | ssh remote "cat > file.tar.bz2"

(if you have set up passwordless login using keys). Later, on the other machine, you can decompress the received file using:

tar -xvf file.tar.bz2 -C ./
I'm renting a couple of VPSes with Ubuntu, and I've managed to fill up one of them. Here I've got several directories with lots of files I'd like to put into an archive. Unfortunately, I don't have room enough to create such an archive (not even as root). I was therefore wondering if it's possible to use tar (preferred) or cpio to create the archive on the other VPS? I'll want to compress the archive, so either calling tar with the j option (bzip2) or piping the file to bzip2 at some point (preferably before sending it over the network) - any suggestions as to how I best can compress the file? Finally, both VPSes have ssh and sshd installed, so I was thinking of using that for the transmission of the file.

Unfortunately I'm not an expert in using tar/cpio or ssh in this way, so I'm a bit out of my depth as to how something like this can (best) be accomplished. How should I use ssh? Like a tunnel or pipe, like ftp, like scp? How should I use tar? Have tar running on both VPSes, with an ssh tunnel between them? Run tar on the source VPS, tunnel the result and redirect it to a file on the destination VPS? So how can I do this? Are there other - better - ways to do something like this? Some special-purpose package? Using network sockets? Something else?
Archiving to remote-machine with tar/cpio and ssh?
The first error is because you're passing both -H newc and -c. You have to make up your mind about the format of the archive you want to generate. The "Operation not permitted" is a bug in GNU cpio: it's passing wrong arguments to the function that outputs that error message, and it should exit there. The other errors are because you're not running that command as the superuser or, more likely, you're not running it from the correct location. Only the superuser can read files like /etc/shadow, as it contains sensitive information. You should also make sure that the archive you generate can only be read by the superuser. If it's an initramfs you're creating, chances are /etc/shadow has no business being there, unless that initramfs contains a full operating system.
I have been using the following command on my system to create the .cpio archive for an initramfs for my embedded target device:

sudo find . | cpio -H newc -oc > ~/initramfs.cpio

This has always worked for me without any problem. Yesterday I was generating a new archive and I received the following errors:

cpio: Archive format multiply defined: Operation not permitted
cpio: ./etc/shadow: Function open failed: Permission denied
cpio: ./usr/lib/ssh-keysign: Function open failed: Permission denied
64842 blocks

I never received these errors in the past, and the files with failed opens have not been touched either, so I cannot understand why this has started happening. I update my host system with the Ubuntu package manager, so it is possible that my cpio package has been updated too. I obviously have no faith in the initramfs generated here due to all of the errors, which confuse me greatly. The only option I can think of is to try to find out if my cpio version has changed and, if so, remove it and replace it with the older version I had. Is there any way I can find out this information on my system (Ubuntu 12.04)? Or is there some other way I can get around this problem?
What is the meaning of the errors from my cpio command?
Starting with Fedora 13 (I think that version, anyway), RPMs started using SHA-256 checksums instead of MD5. RHEL 5/CentOS 5 do not support that. You need to add --nomd5 to your rpm install command.
I am trying to follow the Xen guide to provision a domU using package installation of the Fedora 15 release (the dom0 is CentOS 5.6). I've run the rpm install command with an alternate root to a mounted root LV, but I keep running into this issue:

# rpm -ivh --nodeps --root /mnt/fedRoot fedora-release-15-1.noarch.rpm
warning: fedora-release-15-1.noarch.rpm: Header V3 RSA/SHA256 signature: NOKEY, key ID 069c8460
Preparing...                ########################################### [100%]
   1:fedora-release         ########################################### [100%]
error: unpacking of archive failed: cpio: Bad magic

I'm not sure where to begin with troubleshooting this. As I understand it, rpm reads the "root" filesystem (which I've designated as the mounted drive) and bases its verification and install directory structure on the "root" system. What is the cpio: Bad magic bit? Any recommendations for making this rpm install work? Let me know if more information is needed...
Getting "cpio: Bad magic" when trying to rpm install into a mounted Logical Volume
With pax as found on Debian, Suse, OpenBSD, NetBSD at least:

find . -type f -name '*.pat' -print0 | pax -0rws'/?/_/gp' /path/to/dest/

pax is a standard utility (contrary to tar or cpio), but its -0 option is not, though it can be found in a few implementations. If there are both a ?.pat and a _.pat file, they will end up with the same name, so one will overwrite the other in the destination. Same if there's a _ and a ? directory: their content will be merged inside the _ directory in the destination. With GNU sort and GNU uniq, you can check for conflicts beforehand with:

find . -type f -name '*.pat' -print0 | tr '?' _ | sort -z | uniq -zd | tr '\0' '\n'

which would report conflicting files (but not directories). You could use zsh's zmv, which would take care of conflicts, but that would still mean one mkdir and one cp per file:

autoload zmv
mkdir-and-cp() { mkdir -p -- $3:h && cp $@ }
zmv -n -Qp mkdir-and-cp '(**/)*.pat(D.)' '/path/to/dest/$f:gs/?/_/'

(remove -n when happy).
When I don't need to adjust destination filenames I can do something like this:

$ find -type f -name '*.pat' -print0 | xargs -0 cp -t /path/to/dest

It is safe because the filenames may even contain newline characters. An alternative:

$ find -type f -name '*.pat' -print0 | cpio -p -0 -d /path/to/dest

Now I have the problem that the destination is a VFAT filesystem... thus certain characters are just not allowed in filenames (e.g. '?'). That means that I have to adjust the destination filenames. Something like

for i in `find -type f -name '*.pat'` ; do
  cp "$i" `echo $i | sed 's/?/_/'`
done

works only for filenames without spaces - I could change IFS to just newline - but how to set '\0' as IFS? And still - the for loop leads to as many forks/execs (of cp/sed) as you have files - which is much more excessive than the few forks/execs needed for the two examples at the beginning. What are the alternatives to solve this problem?
How to copy a list of files and adjust destination filenames on the fly?
For some weird reason, cpio doesn't like to take a file argument. Instead, you have to pipe the archive into cpio. An inexperienced user would do the following:

cat initramfs-linux.img | cpio -i

However, this would get you the Useless Use of cat Award. A better way would be:

cpio -i < initramfs-linux.img

This uses the shell's built-in redirection capabilities instead of spawning a new process.
I've been trying to unarchive a cpio archive (in this case, my initial ramdisk). However, when I try to extract the files, cpio hangs forever. It happens if I pass the -V argument to print extra info, too.

alex@alexs-arch-imac:/tmp/initramfs$ cpio -i initramfs-linux.img
# wait for a while after this
^C
alex@alexs-arch-imac:/tmp/initramfs$

How can I extract a cpio archive without the utility hanging?
When I execute a cpio command, it hangs forever
This happens because find prints the full path from your current location (i.e. including src). You need to strip off the first path component, or move further into the directory structure, to avoid this:

cd src && find . -name '*.json' -print0 | cpio -0pdm ../lib
I used the following command line:

find src -name '*.json' | cpio -pdm lib

So it found the json file as in the screenshot below. But then it takes the whole directory structure and places it into the lib folder. What I'm aiming for is for the file and its directory structure (src -> server -> data -> diceware.json) to be merged into the new folder (lib -> server -> data -> diceware.json). Perhaps somebody can help.
Copy file and file structure and merge in new directory
If you extract the cpio archive as root, the ownerships recorded in it will be preserved.
I am trying to modify a u-boot filesystem image. At first I tried to extract, modify and then pack it again, but that didn't work, because extracting and repacking seems to mess up the ownerships of the files. So I tried to modify it without explicit extraction, using file-roller. This should work, but sadly file-roller saves the file in the wrong format and doesn't let me change the format. So is there a way to convert a .cpio to newc format without extracting it? Before, I used the following commands to extract and pack:

cpio -idv --no-absolute-filenames < ../filesystem.cpio
find . -print | cpio -ov -H newc > ../output.cpio
Change CPIO format to newc without extraction
Newer versions of GNU cpio have a --reproducible flag, which goes some way towards your requirements. My understanding is that the strip-nondeterminism tool will handle the timestamp requirement after the fact. touch will let you set the times before you package, of course.
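A sketch of a deterministic build, assuming GNU cpio >= 2.12 (for --reproducible) plus GNU findutils/coreutils. Timestamps are clamped to the epoch (a common "we aren't giving this a date" value, i.e. 0 epoch-seconds) and the name list is sorted, since find's traversal order is filesystem-dependent:

```shell
# Force all mtimes to epoch 0, then archive a sorted file list.
mkdir -p rootfs/bin
printf '#!/bin/sh\n' > rootfs/bin/init
find rootfs -print0 | xargs -0 touch -h -d '@0'
(cd rootfs && find . -print0 | sort -z | cpio -0 -o -H newc --reproducible) \
    > initramfs.cpio
```

Building the same tree again (as the same uid/gid) should now yield a byte-identical archive.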
I would like my initramfs to have the same hash no matter when or where I build it if the contents of the files are the same (and are owned by root and have same permissions). I don't see any options in GNU cpio to strip or set timestamps of files in the archive. Is there a relatively standard way to massage the input to cpio and other archive programs so you can get reproducible products? Going along with this, is there a conventional "We aren't giving this a date" timestamp? Something most software won't wig out about? For example 0 epoch-seconds? For example, if I did a find pass on an input directory for an initramfs and manually set all the timestamps to 0, could I build that archive, extract it on another system, repeat the process, and build it again and get bit-identical files?
Is there a practical way to make binary-reproducible CPIO (initramfs) archives?
This should work as long as no filename contains the '>' character: pax -w -x sv4cpio -s '>^\.>>' . >../data.cpio The -s expression uses '>' as its delimiter and deletes the leading '.' from each pathname, and -x sv4cpio should satisfy the requirement for using -H newc (SVR4 format).
I am building an archive for the Linux kernel, which needs an archive file with absolute paths. The files are under the folder /data/; when I pack normally it keeps the file paths like this: bin/ln bin/ls etc The command I used is: cd /data find | cpio -o -H newc -F ../data.cpio But I want cpio to store the files with rooted paths, like this: /bin/ln /bin/ls /etc I found I should use pax, but I don't know how to write the regex for the path replacement.
Create absolute path in cpio archive
There are two problems to solve: how to remove the files without interfering with your output, and where to put the output while it is being created. If you happen to not have any dot-files in /var/backup/SQL, it is simple: just create your output named with a leading ".", add to the tar-file using the --remove-files option, and rename the output to tmp.tar.gz when done. Something like cd /var/backup/SQL tar cfz .tmp.tar.gz --remove-files * && mv .tmp.tar.gz tmp.tar.gz If you do have dot-files, then you could construct a list of the files to be tar'd and then use that list in constructing the tar-file. Using Linux, you could use the -T (--files-from) option to read this list, e.g., cd /var/backup/SQL find . -type f >/tmp/list tar czf tmp.tar.gz --remove-files --files-from /tmp/list (Someone's sure to suggest process substitution rather than a temporary file, but that has the drawback of limited size, which may be a problem).
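A throwaway demonstration of the dot-file variant (hypothetical file names; --remove-files is GNU tar):

```shell
# Each file is deleted as it is archived; the in-progress archive hides
# behind a leading dot so the * glob never matches it.
mkdir -p SQL && cd SQL
echo a > one.sql
echo b > two.sql
tar czf .tmp.tar.gz --remove-files * && mv .tmp.tar.gz tmp.tar.gz
ls                                   # only tmp.tar.gz remains
```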
Under the SQL directory we have only the tmp folder (usage 59G). Is it possible to compress the tmp folder without leaving the original tmp folder behind, so that the compression consumes the original folder as it goes? The folder usage: root@serverE1:/var/backup/SQL # du -sh * 59G tmp So after compression I will see only this (8G is only an example): 8G tmp.tar.gz
How to compress a folder without leaving the original folder behind and without a separate removal step
I was ultimately able to do it using something like this: rpm2cpio myrpm.rpm | cpio -ivd './var/lib/**/*'
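Since rpm2cpio just writes a cpio stream to stdout, the pattern behavior can be reproduced on any archive. The key detail is that in GNU cpio's glob matching '*' also matches '/', so './var/lib/*' (and the './var/lib/**/*' form) selects the entire subtree:

```shell
# Throwaway archive mirroring the question's layout; extract ./var/lib only.
mkdir -p pkg/var/lib/app pkg/etc
echo data > pkg/var/lib/app/state
echo conf > pkg/etc/app.conf
(cd pkg && find . -print | cpio -o -H newc) > all.cpio
mkdir out && cd out
cpio -ivd './var/lib/*' < ../all.cpio   # etc/app.conf is not extracted
```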
I have an RPM containing files in ./var/lib which I need to extract onto my filesystem on a Debian machine. I'm trying to do this: rpm2cpio myrpm.rpm | cpio -ivd ./var/lib Nothing is extracted. If I specify the full path to the exact file which I want, I get it, but I need to extract the entire tree. How can I extract the entire tree (i.e. all files within a directory in the RPM) to the local filesystem?
Extract tree from cpio archive
I found a sample big-endian cpio archive (it was already mentioned in a comment in the libmagic file): # https://sembiance.com/fileFormatSamples/archive/cpio/skeleton2.cpio The path entries start at the exact same spot (the 26th byte) as in the little-endian archive. So to answer my own question: no, there's no reason not to check the 26th byte for byte-swapped cpio archives.
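A quick shell-arithmetic sanity check that libmagic's second value really is the first magic with its two bytes exchanged:

```shell
# 070707 is the binary cpio magic; swapping its bytes gives libmagic's
# "byte-swapped" value 0143561.
magic=$(( 070707 ))
swapped=$(( (magic >> 8) | ((magic & 0xff) << 8) ))
printf 'magic=%#o swapped=%#o\n' "$magic" "$swapped"
# prints: magic=070707 swapped=0143561
```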
I'm on a little-endian linux machine and would like to see the canoncial hexdump of a cpio archive on big-endian linux machine. Can someone please run these commands on a big-endian linux and post the output: echo TESTING > /tmp/test cpio -o <<< "/tmp/test" > /tmp/test.cpio hexdump -C /tmp/test.cpioIf you are curious, I need this because libmagic does the following to determine the cpio archive type: # same byteorder machine 0 short 070707 26 string >\0 cpio archive# opposite byteorder machine 0 short 0143561 byte-swapped cpio archiveI want to see if there's a reason libmagic doesn't check 26th byte of the archive for the opposite byteorder machine. The output of the command on my little-endian machine: 1 block 00000000 c7 71 1b 00 57 01 a4 81 e8 03 e8 03 01 00 00 00 |.q..W...........| 00000010 ff 65 ce a4 0a 00 00 00 08 00 2f 74 6d 70 2f 74 |.e......../tmp/t| 00000020 65 73 74 00 54 45 53 54 49 4e 47 0a c7 71 00 00 |est.TESTING..q..| 00000030 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................| 00000040 0b 00 00 00 00 00 54 52 41 49 4c 45 52 21 21 21 |......TRAILER!!!| 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00000200
CPIO Archive Hexdump on a Big-Endian Linux Machine
I usually get this when I've used pip to install/upgrade dnspython. This can happen inadvertently when using pip to install some other python package that has dnspython as a dependency. Try manually removing anything in /usr/lib/python2.7/site-packages/ related to dnspython and then try installing with dnf again.
I am not sure whether this constitutes a bug - so, I dare to try it here... When attempting to install (with dnf) versions of the package python-dns, I get the following error: unpacking of archive failed on file /usr/lib/python2.7/site-packages/dnspython-1.12.0-py2.7.egg-info: cpio: renameI run 4.3.4-300.fc23.x86_64 and have tried installing python-dns-1.12.0-2.fc23.noarch as well as python-dns-1.12.0GIT465785f-1.fc23.noarch. The question is open, I am afraid: ideally I would learn how to solve the error; but I would also settle for advise where else I should post the question. added information as reaction to comments I used the command "sudo dnf install python-dns" to install the package. python-dns-1.12.0GIT465785f-1.fc23.noarch came from the default fedora repository "Fedora 23 - x86_64". python-dns-1.12.0-2.fc23.noarch came from http://koji.fedoraproject.org/koji/buildinfo?buildID=659336
How to engage with Fedora 23 error message in dnf install error of dnspython?
If other archive tools can archive both files and directories and cpio can only archive files, what are the advantages and/or use cases of cpio?None. Fedora's cpio documentation discourages the use of CPIO: __WARNING__ The cpio utility is considered LEGACY based on POSIX specification. Users are encouraged to use other archiving tools for archive creation.There are really no advantages, unless you need to use a system for which CPIO exists but TAR, mksquashfs, etc. don't. (To the best of my knowledge, no such system has existed for the last 20 or so years.) Some formats are simply stuck on cpio for reasons of legacy; .rpm is one of these. That's the only reason why you'd use cpio: some decades-old software expects you to.If it's legacy and discouraged from use, why is it required for the LFCS exam?Exams imitate real life, typically with a long delay between reality and examination. Also, the people designing tests are often "old-school" because, well, they were there when cpio was the only reliable way to exchange archives between different POSIX systems. But then again, Stéphane is right: a Linux initrd is typically cpio! Generally, "it's in a test, so it must be relevant" is not inherently... true.ZIP is not an option, usually, because it simply doesn't support file ownership and access attributes. It's a fine tool if you're using MS-DOS, which is exactly what it was made for. Even on modern Windows, files have more attributes than ZIP can represent.
With cpio, you need to direct a list of files into cpio's standard input, whereas with tools such as tar, zip, etc, it's possible to recursively archive a directory (or multiple directories). I understand it's considered good manners and/or best practice to archive a directory so that if you give your *.tar or *.zip archive to somebody else, when they extract it, they don't get a splattering of files all over the place in whatever directory they're extracting it to. If other archive tools can archive both files and directories and cpio can only archive files, what are the advantages and/or use cases of cpio?
What are the advantages of cpio over tar, zip, etc? [closed]
The following instructions are valid for CUDA 7.0, 7.5, and several previous (and probably later) versions. As far as Debian distributions go, they're valid for Jessie and Stretch and probably other versions. They assume an amd64 (x86_64) architecture, but you can easily adapt them for x86 (x86_32). Installation prerequisites g++ - You should use the newest GCC version supported by your version of CUDA. For CUDA 7.x this would be version 4.9.3, last of the 4.x line; for CUDA 8.0, GCC 5.x versions are supported. If your distribution uses GCC 5.x by default, use that, otherwise GCC 5.4.0 should do. Earlier versions are usable but I wouldn't recommend them, if only for the better modern-C++ feature support for host-side code. gcc - comes with g++. I even think CMake might default to having nvcc invoke gcc rather than g++ in some cases with a -x switch (but not sure about this). libGLU - Mesa OpenGL libraries (+ development files?) libXi - X Window System Xinput extension libraries (+ development files?) libXmu - X Window System "miscellaneous utilities" library (+ development files?) Linux kernel - headers for the kernel version you're running. If you want a list of specific packages - well, that depends on exactly which distribution you're using. But you can try the following (for CUDA 7.x): sudo apt-get install gcc g++ gcc-4.9 g++-4.9 libxi6 libxi-dev libglu1-mesa libglu1-mesa-dev libxmu6 libxmu-dev linux-headers-amd64 linux-source And you might add some -dbg versions of those packages for debugging symbols. I'm pretty sure this covers it all - but I might have missed something I just had installed already. Also, CUDA can work with clang, at least experimentally, but I haven't tried that. Installing the CUDA kernel driver Go to NVIDIA's CUDA Downloads page. Choose Linux > x86_64 > Ubuntu, and then whatever latest version they have (at the time of writing: Ubuntu 15.04). Choose the .run file option. Download the .run file (currently this one).
Make sure not to put it in /tmp. Make the .run file executable: chmod a+x cuda_7.5.18_linux.run. Become root. Execute the .run file: Pretend to accept their silly shrink-wrap license; say "yes" to installing just the NVIDIA kernel driver, and say "no" to everything else. The installation should tell you it expects to have installed the NVIDIA kernel driver, but that you should reboot before continuing/retrying the toolkit installation. So... Having apparently succeeded, reboot. Installing CUDA itself Be root. Locate and execute cuda_7.5.18_linux.run. This time around, say No to installing the driver, but Yes to installing everything else, and accept the default paths (or change them, whatever works for you). The installer is likely to now fail. That is a good thing, assuming it's the kind of failure we expect: it should tell you your compiler version is not supported - CUDA 7.0 or 7.5 supports up to gcc 4.9 and you have some 5.x version by default. Now, if you get a message about missing libraries, that means my instructions above regarding prerequisites somehow failed, and you should comment here so I can fix them. Assuming you got the "good failure", proceed to: Re-invoke the .run file, this time with the --override option. Make the same choices as in step 11. CUDA should now be installed, by default under /usr/local/cuda (that's a symlink). But we're not done! Directing NVIDIA's nvcc compiler to use the right g++ version NVIDIA's CUDA compiler actually calls g++ as part of the linking process and/or to compile actual C++ rather than .cu files. I think. Anyway, it defaults to running whatever's in your path as g++; but if you place another g++ under /usr/local/cuda/bin, it will use that first! So... Create the symlink: ln -s /usr/bin/g++-4.9 /usr/local/cuda/bin/g++ (and for good measure, maybe also ln -s /usr/bin/gcc-4.9 /usr/local/cuda/bin/gcc). That's it.
Trying out the installation cd /root/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd make The build should conclude successfully, and when you do ./vectorAdd you should get the following output: root@mymachine:~/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd# ./vectorAdd [Vector addition of 50000 elements] Copy input data from the host memory to the CUDA device CUDA kernel launch with 196 blocks of 256 threads Copy output data from the CUDA device to the host memory Test PASSED Done Notes You don't need to install the NVIDIA GDK (GPU Development Kit), but it doesn't hurt and it might be useful for some. Install it to the root directory of your system; it's pretty safe and there's an uninstaller afterwards: /usr/bin/uninstall_gdk.pl. In CUDA 8 it's already integrated into CUDA itself, IIANM. Do not install additional packages with names like nvidia-... or cuda...; they might not hurt but they'll certainly not help. Before doing any of these things, you might want to make sure your GPU is recognized at all, using lspci | grep -i nvidia.
How to install Cuda Toolkit 7.0 or 8 on Debian 8? I know that Debian 8 comes with the option to download and install CUDA Toolkit 6.0 using apt-get install nvidia-cuda-toolkit, but how do you do this for CUDA toolkit version 7.0 or 8? I tried installing using the Ubuntu installers, as described below: sudo wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.debdpkg -i cuda-repo-ubuntu1404_7.0-28_amd64.debsudo apt-get updatesudo apt-get install -y cudaHowever it did not work and the following message was returned: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:The following packages have unmet dependencies: cuda : Depends: cuda-7-0 (= 7.0-28) but it is not going to be installed E: Unable to correct problems, you have held broken packages.
How to install CUDA Toolkit 7/8/9 on Debian 8 (Jessie) or 9 (Stretch)?
I have CUDA 9.1 installed on Mint 18.3 with the nvidia 390 driver. This is what I did: Use the driver manager to install nvidia-390. (The screenshot shows 387, from when 390 wasn't working; 390 works now.) Get the nvidia repo: wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/cuda-repo-ubuntu1704_9.1.85-1_amd64.deb Install nvidia's repo and repo key for Ubuntu 17.04: sudo dpkg -i cuda-repo-ubuntu1704_9.1.85-1_amd64.deb sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/7fa2af80.pub Update apt's software index: sudo apt-get update Install cuda: sudo apt-get install cuda Edit: one other thing. If you build the samples, you need to set the GLPATH environment variable to /usr/lib: export GLPATH=/usr/lib before you run make in the samples dir. See: https://stackoverflow.com/a/34648972/356011
I'm running Linux Mint 18.3 with the Cinnamon desktop environment, and I want to install CUDA 9.1 and NVIDIA drivers. How can I do that?
How to install CUDA 9.1 on Mint 18.3?
You didn't enable epel. You enabled the codeready-builder repo. First, add the epel repo: dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpmIt's normally enabled by default after it's installed but if not: dnf config-manager --enable epelYou can then install dkms.
I want to install CUDA according to the info on the NVIDIA CUDA toolkit site: wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-rhel8-10-2-local-10.2.89-440.33.01-1.0-1.x86_64.rpm sudo rpm -i cuda-repo-rhel8-10-2-local-10.2.89-440.33.01-1.0-1.x86_64.rpm sudo dnf clean all sudo dnf -y module install nvidia-driver:latest-dkms sudo dnf -y install cuda ... Error: Problem 1: conflicting requests - nothing provides dkms needed by kmod-nvidia-latest-dkms-3:440.33.01-1.el8.x86_64 If I enable epel via sudo subscription-manager repos --enable "codeready-builder-for-rhel-8-$(arch)-rpms" [sudo] password for adminsafe20: Repository 'codeready-builder-for-rhel-8-x86_64-rpms' is enabled for this system. it looks ok, but I don't see anything returned from yum search dkms except for: ============================== Name Matched: dkms ============================== kmod-nvidia-latest-dkms.x86_64 : NVIDIA display driver kernel module with which I get the same original message as before when trying to install: - nothing provides dkms needed by kmod-nvidia-latest-dkms-3:440.33.01-1.el8.x86_64
CUDA 10 and dkms on RHEL8
The nvidia-cuda-toolkit package is non-free software, so edit your /etc/apt/sources.list to add the non-free component: apt edit-sources Then edit your sources; here is an example: deb http://deb.debian.org/debian stretch main contrib non-free Save and run: apt update apt install nvidia-cuda-toolkit Component main consists of DFSG-compliant packages, which do not rely on software outside this area to operate. These are the only packages considered part of the Debian distribution. contrib packages contain DFSG-compliant software, but have dependencies not in main (possibly packaged for Debian in non-free). non-free contains software that does not comply with the DFSG.
I am new to Debian and I want to install the NVIDIA CUDA toolkit, so I typed: apt install nvidia-cuda-toolkitbut it did not work, I found out I need to add a source in /etc/apt/sources.list which contains this CUDA package. However, as for now there are only a few lines in /etc/apt/sources.list referring to a university, which I chose during installation. I do not know how to find out which source I need to add. Is there a Debian database, where I can submit the program I need and which in turn gives me a list of sources containing it?
What sources to add in order to install cuda toolkit with apt on Debian? [duplicate]
Update April 2024: Fedora 39 now supports the Cuda 12 toolkit officially. I prefer using conda to install it; nvidia has an official conda package for it. (My preference is because I don't want to touch the nvidia driver installed through rpmfusion.) Hence there is no need anymore to go through the steps below, until Fedora becomes version 40, after which Cuda will probably not be officially supported for several months. Update August 2023: It's been about a year since I posted my question, and today the latest Fedora 38 has gcc13, while the latest cuda toolkit supported by Nvidia targets Fedora 37 with gcc12. The instructions below still work, though I want to add some more context for using CLion. The Jetbrains documentation suggests editing CMake settings within CLion, but that did not work for me at all. Instead, what worked for me was to put the following in ~/.bash_profile (not .bashrc, because I think that's just for terminals) and then log out and log back in: PATH="$PATH:/usr/local/cuda/bin" CUDAHOSTCXX='/home/linuxbrew/.linuxbrew/bin/g++-12'; export CUDAHOSTCXX I installed Cuda toolkit 12.2 using the local runfile, not with the rpm repo methods, since those gave me scary messages about package incompatibilities. When running the runfile, I make sure to uncheck "install driver" in the little command-line GUI they provide (which kinda resembles a debian installer in "low-graphics" mode). That way it doesn't clobber the rpmfusion akmod nvidia driver I already had installed. Original answer: First, install Fedora 36, and choose to enable third-party repos when asked. Then (from the RPM Fusion nvidia howto page): sudo dnf install akmod-nvidia sudo dnf install xorg-x11-drv-nvidia-cuda Then wait a minute or two until modinfo -F version nvidia gives a non-error output. Then reboot, so that the Nvidia drivers take effect over Nouveau.
Then (from the RPM Fusion cuda howto page): sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/fedora35/x86_64/cuda-fedora35.repo sudo dnf clean all sudo dnf module disable nvidia-driver sudo dnf -y install cuda The 35 in the first line is intentional. Also, the module disable line does not disable the existing akmod nvidia driver that you just installed, but rather prevents the next line from installing Nvidia's dkms driver over your existing driver. After this, /usr/local/cuda/bin/nvcc will be available, but if you try to run it on a .cu file, it will complain that "gcc 12 is not supported". It gives you a flag to ignore this and just go ahead anyway, but to get rid of this warning, we can do the following to quickly obtain gcc-11 (credit goes to a comment in this reddit thread): First, install homebrew using their instructions. I just used the default location, which was /home/linuxbrew, but if you wanted, you could install in a custom location like your home directory. Then do brew install gcc@11. Finally, nvcc will work without complaints if you directly tell it to use gcc-11 using the -ccbin flag, for example: /usr/local/cuda/bin/nvcc -ccbin g++-11 foo.cu -o foo If you don't want to pollute your default path with brew's gcc-11 for some reason, you can explicitly tell nvcc to always use brew's gcc-11 using an env variable. For example, put the following in ~/.bash_profile and then log out and log in: export NVCC_PREPEND_FLAGS='-ccbin /home/linuxbrew/.linuxbrew/bin/g++-11'
As of Sept 2022, Nvidia still has not officially supported cuda toolkit on Fedora 36. The particular part missing is support for gcc12, which Fedora 36 defaults to. One solution to use nvcc on fedora is to go to fedora mirrors and download Fedora 35. However, I'd like to know how to getting nvcc to work on Fedora 36. There's an RPM fusion wiki page on cuda, though some of the info is still somewhat difficult to find. The fedora 35 cuda repo is complete and has all the necessary files, but (as of Sept 2022) the equivalent fedora 36 nvidia cuda repo exists but seems incomplete, in particular it's missing the rpm files that start with cuda-11....
How do I use Cuda toolkit nvcc 11.7.1 on Fedora 36?
nvidia provides a way to set the group ID of its special device files without needing to resort to any extra helper script: Whether a user-space NVIDIA driver component does so itself, or invokes nvidia-modprobe, it will default to creating the device files with the following attributes: UID: 0 - 'root' GID: 0 - 'root' Mode: 0666 - 'rw-rw-rw-' Existing device files are changed if their attributes don't match these defaults. If you want the NVIDIA driver to create the device files with different attributes, you can specify them with the "NVreg_DeviceFileUID" (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA Linux kernel module parameters. The nvidia Linux kernel module parameters can be set in the /etc/modprobe.d/nvidia.conf file; mine reads: ... options nvidia \ NVreg_DeviceFileGID=27 \ NVreg_DeviceFileMode=432 \ NVreg_DeviceFileUID=0 \ NVreg_ModifyDeviceFiles=1 \ ... And I can indeed ls -ails /dev/nvidia0: 3419 0 crw-rw---- 1 root video 195, 0 4 déc. 15:01 /dev/nvidia0 and witness the fact that access to the root-owned special files is actually restricted to members of the video group (GID=27 on my system). Therefore, all you need to do is get the group ID of your gpu_cuda group and modify (or set up) your nvidia.conf accordingly. Credits: /usr/share/doc/nvidia-drivers-470.141.03/html/faq.html (you'll probably need to adapt the path to your driver version).
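A small sketch of generating the options line; the root group stands in for gpu_cuda here so it runs anywhere, and the mode is given as a decimal number (octal 0660 is 432, the value the nvidia.conf above uses):

```shell
# Resolve a group name to its GID and emit the nvidia module options line
# for /etc/modprobe.d/nvidia.conf ("root" is a stand-in for gpu_cuda).
group=root
gid=$(getent group "$group" | cut -d: -f3)
mode=$(( 0660 ))                      # octal 0660 == decimal 432
printf 'options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=%s NVreg_DeviceFileMode=%s NVreg_ModifyDeviceFiles=1\n' \
    "$gid" "$mode"
```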
On a server with Tesla Nvidia Card we decide to Restrict user access to GPU. In our server 2 GPU. # ls -las /dev/nvidia* 0 crw-rw-rw-. 1 root root 195, 0 Dec 2 22:02 /dev/nvidia0 0 crw-rw-rw-. 1 root root 195, 1 Dec 2 22:02 /dev/nvidia1I found this solve Defining User Restrictions for GPUs I create local group gpu_cuda sudo groupadd gpu_cudaafter add user to group gpu_cuda Create a config file at /etc/modprob.d/nvidia.conf with content #!/bin/bash options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=0 NVreg_DeviceFileMode=0777 NVreg_ModifyDeviceFiles=0Create script in /etc/init.d/gpu-restriction #!/bin/bash ### BEGIN INIT INFO # Provides: gpu-restriction # Required-Start: $all # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: # Short-Description: Start daemon at boot time # Description: Enable service provided by daemon. # permissions if needed. ### END INIT INFO set -e start() { /sbin/modprobe --ignore-install nvidia; /sbin/modprobe nvidia_uvm; test -c /dev/nvidia-uvm || mknod -m 777 /dev/nvidia-uvm c $(cat /proc/devices | while read major device; do if [ "$device" == "nvidia-uvm" ]; then echo $major; break; fi ; done) 0 && chown :root /dev/nvidia-uvm; test -c /dev/nvidiactl || mknod -m 777 /dev/nvidiactl c 195 255 && chown :root /dev/nvidiactl; devid=-1; for dev in $(ls -d /sys/bus/pci/devices/*); do vendorid=$(cat $dev/vendor); if [ "$vendorid" == "0x10de" ]; then class=$(cat $dev/class); classid=${class%%00}; if [ "$classid" == "0x0300" -o "$classid" == "0x0302" ]; then devid=$((devid+1)); test -c /dev/nvidia${devid} || mknod -m 750 /dev/nvidia${devid} c 195 ${devid} && chown :gpu_cuda /dev/nvidia${devid}; fi; fi; done } stop() { : } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; status) # code to check status of app comes here # example: status program_name ;; *) echo "Usage: $0 {start|stop|status|restart}" esac exit 0I reboot server and run /etc/init.d/gpu-restriction startcheck result in first time is good. 
# ls -las /dev/nvidia* 0 crw-rw-rw-. 1 root gpu_cuda 195, 0 Dec 2 22:02 /dev/nvidia0 0 crw-rw-rw-. 1 root gpu_cuda 195, 1 Dec 2 22:02 /dev/nvidia1 But the second time, the group is back to root: # ls -las /dev/nvidia* 0 crw-rw-rw-. 1 root root 195, 0 Dec 2 22:02 /dev/nvidia0 0 crw-rw-rw-. 1 root root 195, 1 Dec 2 22:02 /dev/nvidia1 Why does the group revert, and how can I solve this problem?
Restricting user access to nvidia GPU?
The current driver seems to be causing a black screen and freezing the machine on boot. +-----------------------------------------------------------------------------+ | NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 | |-------------------------------+----------------------+----------------------+ I have this issue on bare-metal Ubuntu 22.04 after upgrading the driver/cuda packages. However, virtual machines that get similar rtx3090 passthrough GPUs work fine with the same driver and OS versions, perhaps because they use the GPUs only for compute and not for display. Some people say switching from HDMI input to DP might help; I haven't tested it. The fix, according to an Nvidia rep, will be out in the next release, so you can either downgrade to the previous version or wait for the fix. https://forums.developer.nvidia.com/t/nvidia-driver-520-61-05-cuda-11-8-rtx-3090-black-display-and-superslow-modesets/230217/5
My GPU is an NVIDIA GeForce RTX 3090 Ti, and the OS is Ubuntu 18.04. As my code didn't work, I checked the versions of python, pytorch, cuda, and cudnn: Python: 3.6, torch.__version__: 1.4.0, torch.version.cuda: 10.1 (nvidia-smi shows CUDA version 11.3), cudnn: 7.6.3. These are not compatible with the 3090 Ti. I successfully upgraded Python to 3.9, and Pytorch to 1.12.1+cu102. However, "pip3 install cuda-python" and "pip install nvidia-cudnn" did not work for me, so I followed the steps on the website. For cuda (tried version 11.8): https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=deb_local For cudnn (tried version 8.6.0, tar file installation): Installation Guide :: NVIDIA Deep Learning cuDNN Documentation. After the installation steps, nvidia-smi shows "Failed to initialize NVML: Driver/library version mismatch". I found that rebooting would work, but the system is stuck at the rebooting step. dpkg -l | grep nvidia iU libnvidia-cfg1-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA binary OpenGL/GLX configuration library ii libnvidia-common-465 465.19.01-0ubuntu1 all Shared files used by the NVIDIA libraries iU libnvidia-common-520 520.61.05-0ubuntu1 all Shared files used by the NVIDIA libraries rc libnvidia-compute-465:amd64 465.19.01-0ubuntu1 amd64 NVIDIA libcompute package iU libnvidia-compute-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA libcompute package iU libnvidia-compute-520:i386 520.61.05-0ubuntu1 i386 NVIDIA libcompute package ii libnvidia-container-tools 1.11.0-1 amd64 NVIDIA container runtime library (command-line tools) ii libnvidia-container1:amd64 1.11.0-1 amd64 NVIDIA container runtime library iU libnvidia-decode-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA Video Decoding runtime libraries iU libnvidia-decode-520:i386 520.61.05-0ubuntu1 i386 NVIDIA Video Decoding runtime libraries iU libnvidia-encode-520:amd64 520.61.05-0ubuntu1 amd64 NVENC Video Encoding runtime library iU
libnvidia-encode-520:i386 520.61.05-0ubuntu1 i386 NVENC Video Encoding runtime library iU libnvidia-extra-520:amd64 520.61.05-0ubuntu1 amd64 Extra libraries for the NVIDIA driver iU libnvidia-fbc1-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library iU libnvidia-fbc1-520:i386 520.61.05-0ubuntu1 i386 NVIDIA OpenGL-based Framebuffer Capture runtime library iU libnvidia-gl-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD iU libnvidia-gl-520:i386 520.61.05-0ubuntu1 i386 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD rc nvidia-compute-utils-465 465.19.01-0ubuntu1 amd64 NVIDIA compute utilities iU nvidia-compute-utils-520 520.61.05-0ubuntu1 amd64 NVIDIA compute utilities ii nvidia-container-toolkit 1.11.0-1 amd64 NVIDIA Container toolkit ii nvidia-container-toolkit-base 1.11.0-1 amd64 NVIDIA Container Toolkit Base rc nvidia-dkms-465 465.19.01-0ubuntu1 amd64 NVIDIA DKMS package iU nvidia-dkms-520 520.61.05-0ubuntu1 amd64 NVIDIA DKMS package iU nvidia-driver-520 520.61.05-0ubuntu1 amd64 NVIDIA driver metapackage rc nvidia-kernel-common-465 465.19.01-0ubuntu1 amd64 Shared files used with the kernel module iU nvidia-kernel-common-520 520.61.05-0ubuntu1 amd64 Shared files used with the kernel module iU nvidia-kernel-source-520 520.61.05-0ubuntu1 amd64 NVIDIA kernel source package iU nvidia-modprobe 520.61.05-0ubuntu1 amd64 Load the NVIDIA kernel driver and create device files ii nvidia-opencl-dev:amd64 9.1.85-3ubuntu1 amd64 NVIDIA OpenCL development files ii nvidia-prime 0.8.16~0.18.04.1 all Tools to enable NVIDIA’s Prime iU nvidia-settings 520.61.05-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver iU nvidia-utils-520 520.61.05-0ubuntu1 amd64 NVIDIA driver support binaries iU xserver-xorg-video-nvidia-520 520.61.05-0ubuntu1 amd64 NVIDIA binary Xorg driver ls -l /usr/lib/x86_64-linux-gnu/libcuda* lrwxrwxrwx 1 root root 28 Sep 29 05:22 
/usr/lib/x86_64-linux-gnu/libcudadebugger.so.1 -> libcudadebugger.so.520.61.05 -rw-r--r-- 1 root root 10934360 Sep 29 01:20 /usr/lib/x86_64-linux-gnu/libcudadebugger.so.520.61.05 lrwxrwxrwx 1 root root 12 Sep 29 05:22 /usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1 lrwxrwxrwx 1 root root 20 Sep 29 05:22 /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> libcuda.so.520.61.05 -rw-r--r-- 1 root root 26284256 Sep 29 01:56 /usr/lib/x86_64-linux-gnu/libcuda.so.520.61.05 dkms status virtualbox, 5.2.42, 5.4.0-126-generic, x86_64: installed virtualbox, 5.2.42, 5.4.0-72-generic, x86_64: installed
Stuck at booting after upgrading
You already have the repo for cuda 10.1.105-1 available. That is why yum install cuda installed it. You also already have cuda 8-0-8.0.61-1 and cuda 9-0-9.0.176-1 installed. If you want a different older version installed, such as 10.0.130-1, then use this command: yum install cuda-10-0That will install it and all of its dependencies. You will then need to prepend it to your PATH and LD_LIBRARY_PATH. Add these lines to your ~/.bashrc: export PATH=/usr/local/cuda-10.0/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATHThen log out and back in. That is assuming that you want cuda-10.0. If you don't, replace 10.0 with the version that you want, both in the yum install command and when prepending to your PATH and LD_LIBRARY_PATH in ~/.bashrc.
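Concretely, the ~/.bashrc additions look like this (assuming you picked cuda-10.0; the paths follow NVIDIA's usual /usr/local/cuda-&lt;version&gt; layout, so adjust them if you install another release):

```shell
# Prepend CUDA 10.0 to the search paths; change "10.0" if you
# installed a different version with `yum install cuda-<version>`.
export PATH=/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
```

After logging out and back in, nvcc --version should report the toolkit release you installed.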
I am trying to install CUDA version 10.0, but it is telling me a newer version is already installed. I am not able to find the newer version, so I can't run the uninstaller. I am on CentOS 7. Here's what I did: I used wget to get the rpm and then tried to install it with sudo rpm -i cuda-repo-rhel7-10.0.130-1.x86_64.rpm I get the following: package cuda-repo-rhel7-10.1.105-1.x86_64 (which is newer than cuda-repo-rhel7-10.0.130-1.x86_64) is already installedHowever, if I run nvcc --version I get: -bash: nvcc: command not found. which nvcc also doesn't show anything. I checked in usr/local and I have directories for cuda-7.5, cuda-9.0, cuda, and cuda-8.0. The cuda folder contains 9.0 in it. How can I find the installed version of CUDA and uninstall it? Based on a comment, I have added some outputs: yum list available | grep cuda results in: cuda.x86_64 10.1.105-1 cuda cuda-10-0.x86_64 10.0.130-1 cuda cuda-10-1.x86_64 10.1.105-1 cuda cuda-7-0.x86_64 7.0-28 cuda cuda-7-5.x86_64 7.5-18 cuda cuda-8-0.x86_64 8.0.61-1 cuda cuda-9-0.x86_64 9.0.176-1 cuda cuda-9-1.x86_64 9.1.85-1 cuda cuda-9-2.x86_64 9.2.148-1 cuda cuda-command-line-tools-10-0.x86_64 10.0.130-1 cuda cuda-command-line-tools-10-1.x86_64 10.1.105-1 cuda cuda-command-line-tools-7-0.x86_64 7.0-28 cuda cuda-command-line-tools-9-1.x86_64 9.1.85-1 cuda cuda-command-line-tools-9-2.x86_64 9.2.148-1 cuda cuda-compat-10-0.x86_64 1:410.104-1.el7 cuda cuda-compat-10-1.x86_64 1:418.40.04-1 cuda cuda-compiler-10-0.x86_64 10.0.130-1 cuda cuda-compiler-10-1.x86_64 10.1.105-1 cuda cuda-compiler-9-1.x86_64 9.1.85-1 cuda cuda-compiler-9-2.x86_64 9.2.148-1 cuda cuda-core-10-0.x86_64 10.0.130-1 cuda cuda-core-10-1.x86_64 10.1.105-1 cuda cuda-core-7-0.x86_64 7.0-28 cuda cuda-core-9-1.x86_64 9.1.85-1 cuda cuda-core-9-2.x86_64 9.2.148-1 cuda cuda-cublas-10-0.x86_64 10.0.130-1 cuda cuda-cublas-7-0.x86_64 7.0-28 cuda cuda-cublas-9-1.x86_64 9.1.85.3-1 cuda cuda-cublas-9-2.x86_64 9.2.148.1-1 cuda cuda-cublas-dev-10-0.x86_64 10.0.130-1 
cuda cuda-cublas-dev-7-0.x86_64 7.0-28 cuda cuda-cublas-dev-9-1.x86_64 9.1.85.3-1 cuda cuda-cublas-dev-9-2.x86_64 9.2.148.1-1 cuda cuda-cudart-10-0.x86_64 10.0.130-1 cuda cuda-cudart-10-1.x86_64 10.1.105-1 cuda cuda-cudart-7-0.x86_64 7.0-28 cuda cuda-cudart-9-1.x86_64 9.1.85-1 cuda cuda-cudart-9-2.x86_64 9.2.148-1 cuda cuda-cudart-dev-10-0.x86_64 10.0.130-1 cuda cuda-cudart-dev-10-1.x86_64 10.1.105-1 cuda cuda-cudart-dev-7-0.x86_64 7.0-28 cuda cuda-cudart-dev-9-1.x86_64 9.1.85-1 cuda cuda-cudart-dev-9-2.x86_64 9.2.148-1 cuda cuda-cufft-10-0.x86_64 10.0.130-1 cuda cuda-cufft-10-1.x86_64 10.1.105-1 cuda cuda-cufft-7-0.x86_64 7.0-28 cuda cuda-cufft-9-1.x86_64 9.1.85-1 cuda cuda-cufft-9-2.x86_64 9.2.148-1 cuda cuda-cufft-dev-10-0.x86_64 10.0.130-1 cuda cuda-cufft-dev-10-1.x86_64 10.1.105-1 cuda cuda-cufft-dev-7-0.x86_64 7.0-28 cuda cuda-cufft-dev-9-1.x86_64 9.1.85-1 cuda cuda-cufft-dev-9-2.x86_64 9.2.148-1 cuda cuda-cuobjdump-10-0.x86_64 10.0.130-1 cuda cuda-cuobjdump-10-1.x86_64 10.1.105-1 cuda cuda-cuobjdump-9-1.x86_64 9.1.85-1 cuda cuda-cuobjdump-9-2.x86_64 9.2.148.1-1 cuda cuda-cupti-10-0.x86_64 10.0.130-1 cuda cuda-cupti-10-1.x86_64 10.1.105-1 cuda cuda-cupti-9-1.x86_64 9.1.85-1 cuda cuda-cupti-9-2.x86_64 9.2.148.1-1 cuda cuda-curand-10-0.x86_64 10.0.130-1 cuda cuda-curand-10-1.x86_64 10.1.105-1 cuda cuda-curand-7-0.x86_64 7.0-28 cuda cuda-curand-9-1.x86_64 9.1.85-1 cuda cuda-curand-9-2.x86_64 9.2.148-1 cuda cuda-curand-dev-10-0.x86_64 10.0.130-1 cuda cuda-curand-dev-10-1.x86_64 10.1.105-1 cuda cuda-curand-dev-7-0.x86_64 7.0-28 cuda cuda-curand-dev-9-1.x86_64 9.1.85-1 cuda cuda-curand-dev-9-2.x86_64 9.2.148-1 cuda cuda-cusolver-10-0.x86_64 10.0.130-1 cuda cuda-cusolver-10-1.x86_64 10.1.105-1 cuda cuda-cusolver-7-0.x86_64 7.0-28 cuda cuda-cusolver-9-1.x86_64 9.1.85-1 cuda cuda-cusolver-9-2.x86_64 9.2.148-1 cuda cuda-cusolver-dev-10-0.x86_64 10.0.130-1 cuda cuda-cusolver-dev-10-1.x86_64 10.1.105-1 cuda cuda-cusolver-dev-7-0.x86_64 7.0-28 cuda 
cuda-cusolver-dev-9-1.x86_64 9.1.85-1 cuda cuda-cusolver-dev-9-2.x86_64 9.2.148-1 cuda cuda-cusparse-10-0.x86_64 10.0.130-1 cuda cuda-cusparse-10-1.x86_64 10.1.105-1 cuda cuda-cusparse-7-0.x86_64 7.0-28 cuda cuda-cusparse-9-1.x86_64 9.1.85-1 cuda cuda-cusparse-9-2.x86_64 9.2.148-1 cuda cuda-cusparse-dev-10-0.x86_64 10.0.130-1 cuda cuda-cusparse-dev-10-1.x86_64 10.1.105-1 cuda cuda-cusparse-dev-7-0.x86_64 7.0-28 cuda cuda-cusparse-dev-9-1.x86_64 9.1.85-1 cuda cuda-cusparse-dev-9-2.x86_64 9.2.148-1 cuda cuda-demo-suite-10-0.x86_64 10.0.130-1 cuda cuda-demo-suite-10-1.x86_64 10.1.105-1 cuda cuda-demo-suite-8-0.x86_64 8.0.61-1 cuda cuda-demo-suite-9-0.x86_64 9.0.176-1 cuda cuda-demo-suite-9-1.x86_64 9.1.85-1 cuda cuda-demo-suite-9-2.x86_64 9.2.148-1 cuda cuda-documentation-10-0.x86_64 10.0.130-1 cuda cuda-documentation-10-1.x86_64 10.1.105-1 cuda cuda-documentation-7-0.x86_64 7.0-28 cuda cuda-documentation-9-1.x86_64 9.1.85-1 cuda cuda-documentation-9-2.x86_64 9.2.148-1 cuda cuda-driver-dev-10-0.x86_64 10.0.130-1 cuda cuda-driver-dev-10-1.x86_64 10.1.105-1 cuda cuda-driver-dev-7-0.x86_64 7.0-28 cuda cuda-driver-dev-9-1.x86_64 9.1.85-1 cuda cuda-driver-dev-9-2.x86_64 9.2.148-1 cuda cuda-drivers.x86_64 418.40.04-1 cuda cuda-drivers-diagnostic.x86_64 418.40.04-1 cuda cuda-gdb-10-0.x86_64 10.0.130-1 cuda cuda-gdb-10-1.x86_64 10.1.105-1 cuda cuda-gdb-9-1.x86_64 9.1.85-1 cuda cuda-gdb-9-2.x86_64 9.2.148.1-1 cuda cuda-gdb-src-10-0.x86_64 10.0.130-1 cuda cuda-gdb-src-10-1.x86_64 10.1.105-1 cuda cuda-gdb-src-7-0.x86_64 7.0-28 cuda cuda-gdb-src-7-5.x86_64 7.5-18 cuda cuda-gdb-src-8-0.x86_64 8.0.61-1 cuda cuda-gdb-src-9-0.x86_64 9.0.176-1 cuda cuda-gdb-src-9-1.x86_64 9.1.85-1 cuda cuda-gdb-src-9-2.x86_64 9.2.148-1 cuda cuda-gpu-library-advisor-10-0.x86_64 10.0.130-1 cuda cuda-gpu-library-advisor-10-1.x86_64 10.1.105-1 cuda cuda-gpu-library-advisor-9-1.x86_64 9.1.85-1 cuda cuda-gpu-library-advisor-9-2.x86_64 9.2.148-1 cuda cuda-libraries-10-0.x86_64 10.0.130-1 cuda 
cuda-libraries-10-1.x86_64 10.1.105-1 cuda cuda-libraries-9-1.x86_64 9.1.85-1 cuda cuda-libraries-9-2.x86_64 9.2.148-1 cuda cuda-libraries-dev-10-0.x86_64 10.0.130-1 cuda cuda-libraries-dev-10-1.x86_64 10.1.105-1 cuda cuda-libraries-dev-9-1.x86_64 9.1.85-1 cuda cuda-libraries-dev-9-2.x86_64 9.2.148-1 cuda cuda-license-10-0.x86_64 10.0.130-1 cuda cuda-license-10-1.x86_64 10.1.105-1 cuda cuda-license-7-0.x86_64 7.0-28 cuda cuda-license-9-1.x86_64 9.1.85-1 cuda cuda-license-9-2.x86_64 9.2.148-1 cuda cuda-memcheck-10-0.x86_64 10.0.130-1 cuda cuda-memcheck-10-1.x86_64 10.1.105-1 cuda cuda-memcheck-9-1.x86_64 9.1.85-1 cuda cuda-memcheck-9-2.x86_64 9.2.148-1 cuda cuda-minimal-build-10-0.x86_64 10.0.130-1 cuda cuda-minimal-build-10-1.x86_64 10.1.105-1 cuda cuda-minimal-build-7-0.x86_64 7.0-28 cuda cuda-minimal-build-7-5.x86_64 7.5-18 cuda cuda-minimal-build-8-0.x86_64 8.0.61-1 cuda cuda-minimal-build-9-0.x86_64 9.0.176-1 cuda cuda-minimal-build-9-1.x86_64 9.1.85-1 cuda cuda-minimal-build-9-2.x86_64 9.2.148-1 cuda cuda-misc-headers-10-0.x86_64 10.0.130-1 cuda cuda-misc-headers-10-1.x86_64 10.1.105-1 cuda cuda-misc-headers-7-0.x86_64 7.0-28 cuda cuda-misc-headers-9-1.x86_64 9.1.85-1 cuda cuda-misc-headers-9-2.x86_64 9.2.148-1 cuda cuda-npp-10-0.x86_64 10.0.130-1 cuda cuda-npp-10-1.x86_64 10.1.105-1 cuda cuda-npp-7-0.x86_64 7.0-28 cuda cuda-npp-9-1.x86_64 9.1.85-1 cuda cuda-npp-9-2.x86_64 9.2.148-1 cuda cuda-npp-dev-10-0.x86_64 10.0.130-1 cuda cuda-npp-dev-10-1.x86_64 10.1.105-1 cuda cuda-npp-dev-7-0.x86_64 7.0-28 cuda cuda-npp-dev-9-1.x86_64 9.1.85-1 cuda cuda-npp-dev-9-2.x86_64 9.2.148-1 cuda cuda-nsight-10-0.x86_64 10.0.130-1 cuda cuda-nsight-10-1.x86_64 10.1.105-1 cuda cuda-nsight-9-1.x86_64 9.1.85-1 cuda cuda-nsight-9-2.x86_64 9.2.148-1 cuda cuda-nsight-compute-10-0.x86_64 10.0.130-1 cuda cuda-nsight-compute-10-1.x86_64 10.1.105-1 cuda cuda-nsight-systems-10-1.x86_64 10.1.105-1 cuda cuda-nvcc-10-0.x86_64 10.0.130-1 cuda cuda-nvcc-10-1.x86_64 10.1.105-1 cuda 
cuda-nvcc-9-1.x86_64 9.1.85.2-1 cuda cuda-nvcc-9-2.x86_64 9.2.148-1 cuda cuda-nvdisasm-10-0.x86_64 10.0.130-1 cuda cuda-nvdisasm-10-1.x86_64 10.1.105-1 cuda cuda-nvdisasm-9-1.x86_64 9.1.85-1 cuda cuda-nvdisasm-9-2.x86_64 9.2.148.1-1 cuda cuda-nvgraph-10-0.x86_64 10.0.130-1 cuda cuda-nvgraph-10-1.x86_64 10.1.105-1 cuda cuda-nvgraph-9-1.x86_64 9.1.85-1 cuda cuda-nvgraph-9-2.x86_64 9.2.148-1 cuda cuda-nvgraph-dev-10-0.x86_64 10.0.130-1 cuda cuda-nvgraph-dev-10-1.x86_64 10.1.105-1 cuda cuda-nvgraph-dev-9-1.x86_64 9.1.85-1 cuda cuda-nvgraph-dev-9-2.x86_64 9.2.148-1 cuda cuda-nvidia-kmod-common.x86_64 352.99-0 cuda cuda-nvjpeg-10-0.x86_64 10.0.130-1 cuda cuda-nvjpeg-10-1.x86_64 10.1.105-1 cuda cuda-nvjpeg-dev-10-0.x86_64 10.0.130-1 cuda cuda-nvjpeg-dev-10-1.x86_64 10.1.105-1 cuda cuda-nvml-dev-10-0.x86_64 10.0.130-1 cuda cuda-nvml-dev-10-1.x86_64 10.1.105-1 cuda cuda-nvml-dev-9-1.x86_64 9.1.85-1 cuda cuda-nvml-dev-9-2.x86_64 9.2.148-1 cuda cuda-nvprof-10-0.x86_64 10.0.130-1 cuda cuda-nvprof-10-1.x86_64 10.1.105-1 cuda cuda-nvprof-9-1.x86_64 9.1.85-1 cuda cuda-nvprof-9-2.x86_64 9.2.148.1-1 cuda cuda-nvprune-10-0.x86_64 10.0.130-1 cuda cuda-nvprune-10-1.x86_64 10.1.105-1 cuda cuda-nvprune-9-1.x86_64 9.1.85-1 cuda cuda-nvprune-9-2.x86_64 9.2.148-1 cuda cuda-nvrtc-10-0.x86_64 10.0.130-1 cuda cuda-nvrtc-10-1.x86_64 10.1.105-1 cuda cuda-nvrtc-7-0.x86_64 7.0-28 cuda cuda-nvrtc-9-1.x86_64 9.1.85-1 cuda cuda-nvrtc-9-2.x86_64 9.2.148-1 cuda cuda-nvrtc-dev-10-0.x86_64 10.0.130-1 cuda cuda-nvrtc-dev-10-1.x86_64 10.1.105-1 cuda cuda-nvrtc-dev-7-0.x86_64 7.0-28 cuda cuda-nvrtc-dev-9-1.x86_64 9.1.85-1 cuda cuda-nvrtc-dev-9-2.x86_64 9.2.148-1 cuda cuda-nvtx-10-0.x86_64 10.0.130-1 cuda cuda-nvtx-10-1.x86_64 10.1.105-1 cuda cuda-nvtx-9-1.x86_64 9.1.85-1 cuda cuda-nvtx-9-2.x86_64 9.2.148-1 cuda cuda-nvvp-10-0.x86_64 10.0.130-1 cuda cuda-nvvp-10-1.x86_64 10.1.105-1 cuda cuda-nvvp-9-1.x86_64 9.1.85-1 cuda cuda-nvvp-9-2.x86_64 9.2.148-1 cuda cuda-runtime-10-0.x86_64 10.0.130-1 cuda 
cuda-runtime-10-1.x86_64 10.1.105-1 cuda cuda-runtime-7-0.x86_64 7.0-28 cuda cuda-runtime-7-5.x86_64 7.5-18 cuda cuda-runtime-8-0.x86_64 8.0.61-1 cuda cuda-runtime-9-0.x86_64 9.0.176-1 cuda cuda-runtime-9-1.x86_64 9.1.85-1 cuda cuda-runtime-9-2.x86_64 9.2.148-1 cuda cuda-samples-10-0.x86_64 10.0.130-1 cuda cuda-samples-10-1.x86_64 10.1.105-1 cuda cuda-samples-7-0.x86_64 7.0-28 cuda cuda-samples-9-1.x86_64 9.1.85-1 cuda cuda-samples-9-2.x86_64 9.2.148-1 cuda cuda-sanitizer-api-10-1.x86_64 10.1.105-1 cuda cuda-toolkit-10-0.x86_64 10.0.130-1 cuda cuda-toolkit-10-1.x86_64 10.1.105-1 cuda cuda-toolkit-7-0.x86_64 7.0-28 cuda cuda-toolkit-9-1.x86_64 9.1.85-1 cuda cuda-toolkit-9-2.x86_64 9.2.148-1 cuda cuda-tools-10-0.x86_64 10.0.130-1 cuda cuda-tools-10-1.x86_64 10.1.105-1 cuda cuda-tools-9-1.x86_64 9.1.85-1 cuda cuda-tools-9-2.x86_64 9.2.148-1 cuda cuda-visual-tools-10-0.x86_64 10.0.130-1 cuda cuda-visual-tools-10-1.x86_64 10.1.105-1 cuda cuda-visual-tools-7-0.x86_64 7.0-28 cuda cuda-visual-tools-9-1.x86_64 9.1.85-1 cuda cuda-visual-tools-9-2.x86_64 9.2.148-1 cuda dkms-nvidia.x86_64 3:418.40.04-1.el7 cuda gpu-deployment-kit.x86_64 352.93-0 cuda libcublas-devel.x86_64 10.1.0.105-1 cuda libcublas10.x86_64 10.1.0.105-1 cuda libglvnd-debuginfo.x86_64 1:1.0.1-0.6.git5baa1e5.el7 cuda nvidia-driver.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-NVML.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-NvFBCOpenGL.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-cuda.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-cuda-libs.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-devel.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-diagnostic.x86_64 3:418.40.04-4.el7 cuda nvidia-driver-libs.x86_64 3:418.40.04-4.el7 cuda nvidia-kmod.x86_64 1:396.82-2.el7 cuda nvidia-libXNVCtrl.x86_64 3:418.40.04-1.el7 cuda nvidia-libXNVCtrl-devel.x86_64 3:418.40.04-1.el7 cuda nvidia-modprobe.x86_64 3:418.40.04-1.el7 cuda nvidia-persistenced.x86_64 3:418.40.04-1.el7 cuda nvidia-settings.x86_64 3:418.40.04-1.el7 cuda 
nvidia-uvm-kmod.x86_64 1:352.99-3.el7 cuda nvidia-xconfig.x86_64 3:418.40.04-1.el7 cuda xorg-x11-drv-nvidia.x86_64 1:396.82-1.el7 cuda xorg-x11-drv-nvidia-devel.x86_64 1:396.82-1.el7 cuda xorg-x11-drv-nvidia-diagnostic.x86_64 1:396.82-1.el7 cuda xorg-x11-drv-nvidia-gl.x86_64 1:396.82-1.el7 cuda xorg-x11-drv-nvidia-libs.x86_64 1:396.82-1.el7 cuda rpm -qa | grep cuda results in: cuda-toolkit-8-0-8.0.61-1.x86_64 cuda-cublas-dev-8-0-8.0.61.2-1.x86_64 cuda-toolkit-9-0-9.0.176-1.x86_64 cuda-license-9-0-9.0.176-1.x86_64 cuda-cusparse-dev-7-5-7.5-18.x86_64 cuda-curand-7-5-7.5-18.x86_64 cuda-license-8-0-8.0.61-1.x86_64 cuda-cusolver-9-0-9.0.176-1.x86_64 cuda-cusparse-8-0-8.0.61-1.x86_64 cuda-cufft-9-0-9.0.176-1.x86_64 cuda-cufft-dev-8-0-8.0.61-1.x86_64 cuda-samples-8-0-8.0.61-1.x86_64 cuda-cufft-7-5-7.5-18.x86_64 cuda-cudart-7-5-7.5-18.x86_64 cuda-npp-dev-7-5-7.5-18.x86_64 cuda-documentation-7-5-7.5-18.x86_64 cuda-nvrtc-9-0-9.0.176-1.x86_64 cuda-nvgraph-dev-8-0-8.0.61-1.x86_64 cuda-npp-dev-9-0-9.0.176-1.x86_64 cuda-npp-8-0-8.0.61-1.x86_64 cuda-nvgraph-9-0-9.0.176-1.x86_64 cuda-command-line-tools-8-0-8.0.61-1.x86_64 cuda-cublas-8-0-8.0.61.2-1.x86_64 cuda-libraries-dev-9-0-9.0.176-1.x86_64 cuda-samples-9-0-9.0.176-1.x86_64 cuda-cufft-dev-7-5-7.5-18.x86_64 cuda-cusolver-dev-7-5-7.5-18.x86_64 cuda-cudart-dev-7-5-7.5-18.x86_64 cuda-cublas-7-5-7.5-18.x86_64 cuda-nvrtc-dev-9-0-9.0.176-1.x86_64 cuda-cusparse-9-0-9.0.176-1.x86_64 cuda-cudart-9-0-9.0.176-1.x86_64 cuda-curand-9-0-9.0.176-1.x86_64 cuda-nvgraph-dev-9-0-9.0.176-1.x86_64 cuda-documentation-8-0-8.0.61-1.x86_64 cuda-repo-rhel7-10.1.105-1.x86_64 cuda-toolkit-7-5-7.5-18.x86_64 cuda-nvrtc-8-0-8.0.61-1.x86_64 cuda-curand-8-0-8.0.61-1.x86_64 cuda-npp-dev-8-0-8.0.61-1.x86_64 cuda-core-8-0-8.0.61-1.x86_64 cuda-cusolver-8-0-8.0.61-1.x86_64 cuda-command-line-tools-9-0-9.0.176-1.x86_64 cuda-license-7-5-7.5-18.x86_64 cuda-nvrtc-7-5-7.5-18.x86_64 cuda-cusparse-7-5-7.5-18.x86_64 cuda-misc-headers-7-5-7.5-18.x86_64 
cuda-command-line-tools-7-5-7.5-18.x86_64 cuda-cublas-dev-7-5-7.5-18.x86_64 cuda-visual-tools-7-5-7.5-18.x86_64 cuda-driver-dev-9-0-9.0.176-1.x86_64 cuda-nvml-dev-9-0-9.0.176-1.x86_64 cuda-cusparse-dev-9-0-9.0.176-1.x86_64 cuda-cudart-dev-9-0-9.0.176-1.x86_64 cuda-curand-dev-9-0-9.0.176-1.x86_64 cuda-visual-tools-9-0-9.0.176-1.x86_64 cuda-cublas-dev-9-0-9.0.176.4-1.x86_64 cuda-core-9-0-9.0.176.3-1.x86_64 cuda-nvrtc-dev-8-0-8.0.61-1.x86_64 cuda-curand-dev-8-0-8.0.61-1.x86_64 cuda-cufft-8-0-8.0.61-1.x86_64 cuda-cudart-8-0-8.0.61-1.x86_64 cuda-cusolver-dev-8-0-8.0.61-1.x86_64 cuda-cublas-9-0-9.0.176.4-1.x86_64 cuda-nvrtc-dev-7-5-7.5-18.x86_64 cuda-core-7-5-7.5-18.x86_64 cuda-npp-7-5-7.5-18.x86_64 cuda-samples-7-5-7.5-18.x86_64 cuda-nvgraph-8-0-8.0.61-1.x86_64 cuda-npp-9-0-9.0.176-1.x86_64 cuda-nvml-dev-8-0-8.0.61-1.x86_64 cuda-misc-headers-9-0-9.0.176-1.x86_64 cuda-cudart-dev-8-0-8.0.61-1.x86_64 cuda-visual-tools-8-0-8.0.61-1.x86_64 cuda-libraries-9-0-9.0.176-1.x86_64 cuda-documentation-9-0-9.0.176-1.x86_64 cuda-driver-dev-7-5-7.5-18.x86_64 cuda-cusolver-7-5-7.5-18.x86_64 cuda-curand-dev-7-5-7.5-18.x86_64 cuda-driver-dev-8-0-8.0.61-1.x86_64 cuda-cusolver-dev-9-0-9.0.176-1.x86_64 cuda-cusparse-dev-8-0-8.0.61-1.x86_64 cuda-cufft-dev-9-0-9.0.176-1.x86_64 cuda-misc-headers-8-0-8.0.61-1.x86_64EDIT 2: It looks like cuda isn't actually installed, because running this command sudo yum install cuda downloads and installs it.
Unable to find (and uninstall) installed version of CUDA
I had some time to try and reproduce your problem. Stock CentOS 7.9 minimal. Then: export IMOD_VERSION=4.11.12 export CUDA_VERSION=10.1 wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh sudo sh imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.shOutput: This script will install IMOD in /usr/local and rename any previous version, or remove another copy of this version.It will copy IMOD-linux.csh and IMOD-linux.sh to /etc/profile.dYou can add the option -h to see a full list of optionsEnter Y if you want to proceed: Y Extracting imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz ... Extracting installIMOD Checking system and package types Unpacking IMOD in /usr/local ... Linking imod_4.11.12 to IMOD Copying startup scripts to /etc/profile.d: IMOD-linux.csh IMOD-linux.shSELinux is enabled - Trying to change security context of libraries.The installation of IMOD 4.11.12 is complete. You may need to start a new terminal window for changes to take effectIf there are version-specific IMOD startup commands in individual user startup files (.cshrc, .bashrc, .bash_profile) they should be changed or removed.Cleaning up imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz, installIMOD, and IMODtempDirIt appears that the installation script installed software under /usr/local/IMOD: [test@centos7test ~]$ ll /usr/local/ total 0 <...> lrwxrwxrwx. 1 root root 12 Feb 3 10:31 IMOD -> imod_4.11.12 drwxr-xr-x. 13 1095 111 286 Nov 19 12:32 imod_4.11.12 <...>Now, it's very important to logout and login to your shell, because it needs to pick up the following piece of code that was installed in /etc/profile.d/IMOD-linux.sh: <...> export IMOD_DIR=${IMOD_DIR:=/usr/local/IMOD}# Put the IMOD programs on the path # if ! 
echo ${PATH} | grep -q "$IMOD_DIR/bin" ; then export PATH=$IMOD_DIR/bin:$PATH fi <...>This is reflected in your current $PATH env var: [test@centos7test ~]# echo $PATH /usr/local/IMOD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/binI was now successfully able to locate and run both the imod and imodhelp binaries: [test@centos7test local]# whereis imod imodhelp imod: /usr/local/imod_4.11.12/bin/imod imodhelp: /usr/local/imod_4.11.12/bin/imodhelpIf for some reason your machine isn't picking up the file under /etc/profile.d/IMOD-linux.sh you can force run it like so: [test@centos7test ~]# source /etc/profile.d/IMOD-linux.sh
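For reference, the PATH logic that IMOD-linux.sh runs at every login boils down to this guard, shown here in isolation (it prepends the bin directory only if it is not already on the path). Shells started before the installation never ran it, which is why a fresh login, or sourcing the file manually, is needed before imod resolves:

```shell
# Prepend $IMOD_DIR/bin to PATH, but only once -- mirrors the
# snippet installed in /etc/profile.d/IMOD-linux.sh.
export IMOD_DIR=${IMOD_DIR:=/usr/local/IMOD}
if ! echo "$PATH" | grep -q "$IMOD_DIR/bin"; then
    export PATH=$IMOD_DIR/bin:$PATH
fi
```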
I'm trying to create a shell script that installs a series of things for me. One such thing is iMod. I've located self-installing shell script for iMod and have run the following commands on my bash console: export IMOD_VERSION=4.11.12 export CUDA_VERSION=10.1 wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh sudo sh imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.shNote The issue still persists after restarting the device and disconnecting and reconnecting to it (via SSH, starting a new terminal) Installation Output $ export IMOD_VERSION=4.11.12 $ export CUDA_VERSION=10.1 $ wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh --2022-02-02 03:16:12-- https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_4.11.12_RHEL7-64_CUDA10.1.sh Resolving bio3d.colorado.edu (bio3d.colorado.edu)... 128.138.72.88 Connecting to bio3d.colorado.edu (bio3d.colorado.edu)|128.138.72.88|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 205325213 (196M) [application/x-sh] Saving to: ‘imod_4.11.12_RHEL7-64_CUDA10.1.sh.1’100%[===================================================================================================================>] 205,325,213 5.60MB/s in 38s2022-02-02 03:16:51 (5.21 MB/s) - ‘imod_4.11.12_RHEL7-64_CUDA10.1.sh.1’ saved [205325213/205325213]$ sudo sh imod_4.11.12_RHEL7-64_CUDA10.1.shThis script will install IMOD in /usr/local and rename any previous version, or remove another copy of this version.It will copy IMOD-linux.csh and IMOD-linux.sh to /etc/profile.dYou can add the option -h to see a full list of optionsEnter Y if you want to proceed: y Extracting imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz ... Extracting installIMOD Checking system and package types Saving the Plugins directory in the existing installation Removing link to previous version but leaving previous version Removing an existing copy of the same version... 
Unpacking IMOD in /usr/local ... Linking imod_4.11.12 to IMOD Restoring the Plugins directory Copying startup scripts to /etc/profile.d: IMOD-linux.csh IMOD-linux.shSELinux is enabled - Trying to change security context of libraries.The installation of IMOD 4.11.12 is complete. You may need to start a new terminal window for changes to take effectIf there are version-specific IMOD startup commands in individual user startup files (.cshrc, .bashrc, .bash_profile) they should be changed or removed.Cleaning up imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz, installIMOD, and IMODtempDir
Installation of iMod on CentOS 7
NVIDIA drivers 455.45.01 fully support kernel 5.9. The ones you're using don't support this kernel version. Please update.
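A quick way to check whether an installed driver is new enough is a version compare with sort -V. The driver string below is hard-coded from the nvidia-smi output in the question; on a live system you would obtain it with `nvidia-smi --query-gpu=driver_version --format=csv,noheader`:

```shell
# Compare the running driver against the first kernel-5.9-capable release.
driver=450.80.02
required=455.45.01
if [ "$(printf '%s\n' "$required" "$driver" | sort -V | head -n1)" = "$required" ]; then
    echo "driver $driver supports kernel 5.9"
else
    echo "driver $driver is too old for kernel 5.9 - update to the 455xx branch"
fi
```

With the 450.80.02 driver from the question, this takes the "too old" branch, confirming the update is needed.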
I tried to install cuda on Manjaro with kernel linux54 and linux59 but did not manage to get it to work. I have a 64-bit laptop with Hybrid graphics, my graphic card being a GeForce 950M. My video-driver is video-hybrid-intel-nvidia-450xx-prime (I don't think this has an impact though). The nvidia packages/drivers I have are: > pacman -Qqe | grep nvidia lib32-nvidia-450xx-utils linux54-nvidia-450xx linux59-nvidia-450xx nvidia-450xx-utils nvidia-primeHere's what I got from nvidia-smi: Fri Nov 27 13:46:47 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce GTX 950M Off | 00000000:01:00.0 Off | N/A | | N/A 43C P8 N/A / N/A | 3MiB / 2004MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 797 G /usr/lib/Xorg 3MiB | +-----------------------------------------------------------------------------+Furthermore, here's the cuda version I have: > pacman -Q cuda cuda 11.0.3-1Finally, here's what I get when running the deviceQuery sample from cuda: > ./deviceQuery ./bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)cudaGetDeviceCount returned 999 -> unknown error Result = FAILI've seen several articles/posts indicating that CUDA was not supported by kernel 5.9 but they're all from October 2020, and NVIDIA was planning a working driver for 5.9 by mid-November. However, I did not find any posts indicating that the problem was solved. Do I merely need to wait for the nvidia-driver, or is my problem due to something else?
CUDA on kernel 5.9
Hold the package: sudo apt-mark hold cudaFrom man apt-mark: hold is used to mark a package as held back, which will prevent the package from being automatically installed, upgraded or removed.
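If others use the machine and might run upgrades, an apt pin is a declarative alternative to the hold. A sketch (the file name under /etc/apt/preferences.d/ is arbitrary; see apt_preferences(5)):

```
# /etc/apt/preferences.d/cuda-pin
Package: cuda
Pin: version 11.4*
Pin-Priority: 1001
```

A priority above 1000 tells apt to keep (or even downgrade back to) the pinned version during apt upgrade.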
I am trying to install a specific package , CUDA for Nvidia to be exact. I followed the steps in their developer guide and my question is not about CUDA specifically. When I try to install it with APT normally it tries to install the latest version at this time which is 11.6. The guide I followed, however, is for version 11.4 which is compatible with my current kernel version. I downloaded the deb package for the 11.4 version manually, installed it with dpkg, and did a sudo apt update before trying to sudo apt install cuda. I was not sure why it tries to install version 11.6 while the deb package I installed is for 11.4 until I saw the output of apt-cache policy cuda: cuda: Installed: (none) Candidate: 11.6.2-1 Version table: 11.6.2-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.6.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.6.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.5.2-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.5.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.5.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.4.4-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 600 file:/var/cuda-repo-ubuntu2004-11-4-local Packages 11.4.3-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.4.2-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.4.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.4.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.3.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 
Packages 11.3.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.2.2-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.2.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.2.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.1.1-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.1.0-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.0.3-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 Packages 11.0.2-1 600 600 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 PackagesThe version I installed does indeed show up in the var directory, but I have a lot of other versions in the version table. I know I can install the specific version I want using sudo apt install cuda=11.4.4-1, but I am afraid that if someone else runs an upgrade it will auto-upgrade, breaking my installation, as I share the machine. My question is: will running sudo apt upgrade after installing the specific CUDA version I want (version 11.4) update it to the latest version in the version table (version 11.6)? If yes, how can I prevent that? Is there any way I can clear the version table to remove the links shown above? I think I may have added them by mistake when I was trying the network installer for CUDA, but I am not sure.
Clearing apt-cache policy version table to prevent the installation of newer versions
Update to the latest packages on your CentOS 7 system. You should be able to do this by running yum update. This was fixed in https://access.redhat.com/errata/RHSA-2018:3059
I am trying to install CUDA on a Linux CentOS 7 x86_64 AWS instance via the installation guide and running into an error that I cannot resolve. Here are the steps I took: I verified that I had gcc and a CUDA-compatible NVIDIA GPU I installed the kernel headers: sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) I grabbed the CUDA repo: wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-repo-rhel7-10.1.105-1.x86_64.rpm and installed it: sudo rpm -i cuda-repo-rhel7-10.1.105-1.x86_64.rpm sudo yum clean all sudo yum install cudaIt downloaded the file but then I got the following error at the end: Transaction check error: file /usr/lib64/libGL.so.1 from install of libglvnd-glx-1:1.0.1-0.8.git5baa1e5.el7.x86_64 conflicts with file from package mesa-libGL-17.0.1-6.20170307.el7.x86_64 file /usr/lib64/libEGL.so.1 from install of libglvnd-egl-1:1.0.1-0.8.git5baa1e5.el7.x86_64 conflicts with file from package mesa-libEGL-17.0.1-6.20170307.el7.x86_64Just to see if it would still work, I updated the path: export PATH=/usr/local/cuda-10.1/bin:/usr/local/cuda-10.1/NsightCompute-2019.1${PATH:+:${PATH}} And then tested it: nvcc --version But it couldn't find CUDA. What can I do to fix this error?
Transaction check error during CUDA installation on CentOS 7
Follow these installation instructions: 1 `sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb` 2 `sudo apt-get update` 3 `sudo apt-get install cuda`Extracted from https://developer.nvidia.com/cuda-downloads
I'm running Linux Mint 18.2 "Sonya" and want to install CUDA 8. When I install it using the package manager it installs CUDA 7.5 instead: sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit ... nvidia-cuda-dev is already the newest version (7.5.18-0ubuntu1). nvidia-cuda-toolkit is already the newest version (7.5.18-0ubuntu1).I need to force it to install version 8. How do I do that?
Install CUDA 8 on linux mint 18.2
Question 1.) Sorry, it looks like you've misunderstood a few things. dhcpcd is a DHCP client daemon, which is normally started by NetworkManager or ifupdown, not directly by systemd. It is what will be handling the IP address assignment for your wlan0. You can use dhcpcd as started by systemd if you wish, however that will require disabling all the normal network interface configuration logic (i.e. /etc/network/interfaces must be empty of non-comment lines) of the distribution and replacing it with your own custom scripting wherever necessary. That is for special uses only; if you're not absolutely certain you should do that, you shouldn't. dhcpcd will never serve IP addresses to any other hosts. This part you added to dhcpcd.conf looks like it would belong to the configuration file of ISC DHCP server daemon, dhcpd (yes it's just one-letter difference) instead: host Accountant { hardware ethernet 10:60:4b:68:03:21; fixed-address 192.168.2.83; }host Accountant1 { hardware ethernet 00:0c:29:35:95:ed; fixed-address 192.168.2.66; } host Accountant3 { hardware ethernet 30:85:A9:1B:C4:8B; fixed-address 192.168.2.70; }But if you are following the YouTube tutorial you mentioned, you might not even have dhcpd installed, since dnsmasq is supposed to do that job. As far as I can tell, the equivalent syntax for dnsmasq.conf would be: dhcp-host=10:60:4b:68:03:21,192.168.2.83,Accountant dhcp-host=00:0c:29:35:95:ed,192.168.2.66,Accountant1 dhcp-host=30:85:A9:1B:C4:8B,192.168.2.70,Accountant3Disclaimer: I haven't actually used dnsmasq, so this is based on just quickly Googling its man page.Question 2.) In the tutorial you mentioned, dnsmasq was supposed to act as a DHCP server on eth0. You did not say anything about it, so I don't know whether it was running or not. If not, the one client that was always getting the same IP might have been simply falling back to a previously-received old DHCP lease that wasn't expired yet. 
Yes, DHCP clients may store a DHCP lease persistently and keep using it if a network doesn't seem to have a working DHCP server available.Question 3.): /etc/network/interfaces is a classic Debian/Ubuntu style network interface configuration file. Use man interfaces to see documentation for it, or look here. In Debian, *Ubuntu, Raspbian etc., NetworkManager will have a plug-in that will read /etc/network/interfaces but won't write to it. If NetworkManager configuration tools like nmcli, nmtui or GUI-based NetworkManager configuration tools of your desktop environment of choice are used, the configuration would be saved to files in /etc/NetworkManager/system-connections/ directory instead. If NetworkManager is not installed, the /etc/network/interfaces file is used by the ifupdown package, which includes the commands ifup and ifdown. The package also includes a system start-up script that will run ifup -a on boot, enabling all network interfaces that have auto <interface name> in /etc/network/interfaces. There is also an udev rule which will run ifup <interface name> if a driver for a new network interface gets auto-loaded and /etc/network/interfaces has an allow-hotplug <interface name> line for it.
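Putting the translated entries into context, a minimal dnsmasq DHCP configuration serving the VM-facing eth0 side might look like the sketch below. This is untested, per the disclaimer earlier; the dhcp-range bounds are arbitrary examples chosen from the 192.168.2.0/24 subnet in the question, while the dhcp-host lines are the translated fixed-address reservations:

```
# /etc/dnsmasq.conf -- serve DHCP on the VM-facing interface only
interface=eth0
dhcp-range=192.168.2.50,192.168.2.150,12h

# Static leases keyed on MAC address
dhcp-host=10:60:4b:68:03:21,192.168.2.83,Accountant
dhcp-host=00:0c:29:35:95:ed,192.168.2.66,Accountant1
dhcp-host=30:85:A9:1B:C4:8B,192.168.2.70,Accountant3
```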
Deployment:

    VM -- (eth0)RPI(wlan0) -- Router -- ISP
    ^        ^       ^                   ^
   DHCP   Static   DHCP                  GW

NOTE: RPI hostname: gateway

• The goal was to make VMs accessible from outside the network. Accomplished, according to the tutorial https://www.youtube.com/watch?v=IAa4tI4JrgI, via Port Forwarding on the Router and the RPI, by installing dhcpcd and configuring iptables on the RPI.

• Here is my interfaces, where I have commented out the auto wlan0 in an attempt to fix the issue (before, it was uncommented, and it was still the same thing...):

    # interfaces(5) file used by ifup(8) and ifdown(8)

    # Please note that this file is written to be used with dhcpcd
    # For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'

    # Include files from /etc/network/interfaces.d:
    source-directory /etc/network/interfaces.d

    #auto wlan0
    iface wlan0 inet dhcp
        wpa-ssid FunBox-84A8
        wpa-psk 7A73FA25C43563523D7ED99A4D

    #auto eth0
    allow-hotplug eth0
    iface eth0 inet static
        address 192.168.2.1
        netmask 255.255.255.0
        network 192.168.2.0
        broadcast 192.168.2.255

• Here is the firewall.conf used by iptables:

    # Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
    *nat
    :PREROUTING ACCEPT [86:11520]
    :INPUT ACCEPT [64:8940]
    :OUTPUT ACCEPT [71:5638]
    :POSTROUTING ACCEPT [37:4255]
    -A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 170 -j DNAT --to-destination 192.168.2.83:22
    -A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 171 -j DNAT --to-destination 192.168.2.83:443
    -A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 3389 -j DNAT --to-destination 192.168.2.66:3389
    -A POSTROUTING -o wlan0 -j MASQUERADE
    COMMIT
    # Completed on Sun Feb 17 20:01:56 2019
    # Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
    *filter
    :INPUT ACCEPT [3188:209284]
    :FORWARD ACCEPT [25:2740]
    :OUTPUT ACCEPT [2306:270630]
    -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -i eth0 -o wlan0 -j ACCEPT
    COMMIT
    # Completed on Sun Feb 17 20:01:56 2019
    # Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
    *mangle
    :PREROUTING ACCEPT [55445:38248798]
    :INPUT ACCEPT [3188:209284]
    :FORWARD ACCEPT [52257:38039514]
    :OUTPUT ACCEPT [2306:270630]
    :POSTROUTING ACCEPT [54565:38310208]
    COMMIT
    # Completed on Sun Feb 17 20:01:56 2019
    # Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
    *raw
    :PREROUTING ACCEPT [55445:38248798]
    :OUTPUT ACCEPT [2306:270630]
    COMMIT
    # Completed on Sun Feb 17 20:01:56 2019

• iptables -L:

    pi@gateway:/etc$ sudo iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
    ACCEPT     all  --  anywhere             anywhere

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination

• Here is the dhcpcd.conf:

    # A sample configuration for dhcpcd.
    # See dhcpcd.conf(5) for details.

    # Allow users of this group to interact with dhcpcd via the control socket.
    #controlgroup wheel

    # Inform the DHCP server of our hostname for DDNS.
    hostname

    # Use the hardware address of the interface for the Client ID.
    clientid
    # or
    # Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
    # Some non-RFC compliant DHCP servers do not reply with this set.
    # In this case, comment out duid and enable clientid above.
    #duid

    # Persist interface configuration when dhcpcd exits.
    persistent

    # Rapid commit support.
    # Safe to enable by default because it requires the equivalent option set
    # on the server to actually work.
    option rapid_commit

    # A list of options to request from the DHCP server.
    option domain_name_servers, domain_name, domain_search, host_name
    option classless_static_routes
    # Most distributions have NTP support.
    option ntp_servers
    # Respect the network MTU. This is applied to DHCP routes.
    option interface_mtu

    # A ServerID is required by RFC2131.
    require dhcp_server_identifier

    # Generate Stable Private IPv6 Addresses instead of hardware based ones
    slaac private

    # Example static IP configuration:
    #interface eth0
    #static ip_address=192.168.0.10/24
    #static ip6_address=fd51:42f8:caae:d92e::ff/64
    #static routers=192.168.0.1
    #static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1

    # It is possible to fall back to a static IP if DHCP fails:
    # define static profile
    #profile static_eth0
    #static ip_address=192.168.1.23/24
    #static routers=192.168.1.1
    #static domain_name_servers=192.168.1.1

    # fallback to static profile on eth0
    #interface eth0
    #fallback static_eth0

    denyinterfaces eth0

    host Accountant {
      hardware ethernet 10:60:4b:68:03:21;
      fixed-address 192.168.2.83;
    }
    host Accountant1 {
      hardware ethernet 00:0c:29:35:95:ed;
      fixed-address 192.168.2.66;
    }
    host Accountant3 {
      hardware ethernet 30:85:A9:1B:C4:8B;
      fixed-address 192.168.2.70;
    }

• The error message, that I am not able to figure out:

    root@gateway:/home/pi# systemctl restart dhcpcd
    Warning: dhcpcd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
    Job for dhcpcd.service failed because the control process exited with error code.
    See "systemctl status dhcpcd.service" and "journalctl -xe" for details.
    root@gateway:/home/pi# systemctl status dhcpcd
    ● dhcpcd.service - dhcpcd on all interfaces
       Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled)
      Drop-In: /etc/systemd/system/dhcpcd.service.d
               └─wait.conf
       Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 6s ago
      Process: 775 ExecStart=/usr/lib/dhcpcd5/dhcpcd -q -w (code=exited, status=6)

    Feb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces...
Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfaces Feb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use a Feb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static address Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6 Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces. Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state. Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'. Warning: dhcpcd.service changed on disk. Run 'systemctl daemon-reload' to reload units. root@gateway:/home/pi# root@gateway:/home/pi# systemctl daemon-reload root@gateway:/home/pi# systemctl status dhcpcd ● dhcpcd.service - dhcpcd on all interfaces Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/dhcpcd.service.d └─wait.conf Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 1min 23s agoFeb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces... Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfaces Feb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use a Feb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static address Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6 Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces. Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state. Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'. 
    root@gateway:/home/pi#

• gateway version:

    pi@gateway:/etc$ cat os-release
    PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
    NAME="Raspbian GNU/Linux"
    VERSION_ID="9"
    VERSION="9 (stretch)"
    ID=raspbian
    ID_LIKE=debian

Questions:

1) What does the error message "Not running dhcpcd because /etc/network/interfaces defines some interfaces that will use a DHCP client or static address" mean? How do I fix it, given my config above?

2) Why are hosts not getting assigned the IP address according to my dhcpcd.conf, except the host Accountant, which is always getting the same IP, which I want, even if I comment out the binding...? How do I fix it, in order to be able to bind more than one host's MAC with an IP?

3) What does this notation mean:

    #auto eth0
    allow-hotplug eth0
    iface eth0 inet static
        address 192.168.2.1
        netmask 255.255.255.0
        network 192.168.2.0
        broadcast 192.168.2.255

What are the notation rules for the interfaces file in Linux?
"Not running dhcpcd because /etc/network/interfaces defines some interfaces that will use a DHCP client or static address"
I was wondering the same thing and also couldn't find any definitive answers out there, so I went digging. I don't know if this is an exhaustive list, but here is a list of valid values for the static option that I have been able to glean from looking at the source code (available here):

    ip_address
    subnet_mask
    broadcast_address
    routes
    static_routes
    classless_static_routes
    ms_classless_static_routes
    routers
    interface_mtu
    mtu
    ip6_address

These parameters are directly handled in the if-options.c file. Here is where I am less certain about it being exhaustive, and where I am getting a bit speculative on what is going on. As you have no doubt noticed, this doesn't include domain_name_servers, etc. After parsing the config file and directly dealing with any of the above parameters, there can still be some parameters that have not been handled in if-options.c. I think that these remaining parameters are dealt with in the default hook scripts, specifically the 20-resolv.conf hook script (/usr/lib/dhcpcd/dhcpcd-hooks), for which I think there are only the following options:

    domain_name
    domain_name_servers
    domain_search

As I said, I'm a bit unsure about the last bit, as I didn't want to spend crazy amounts of time going through the source code. So any corrections would be very welcome.
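To make the list above concrete, here is a hedged sketch of a dhcpcd.conf stanza exercising several of those values. The addresses and domain are invented for illustration, and the interface_mtu line relies on the reading of if-options.c above being correct:

```
# /etc/dhcpcd.conf -- static configuration using several of the
# values reportedly handled in if-options.c
interface eth0
static ip_address=192.168.0.10/24
static routers=192.168.0.1
static interface_mtu=1400
# these appear to be handled by the 20-resolv.conf hook instead:
static domain_name_servers=192.168.0.1
static domain_search=example.lan
```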
What are the valid values (and what are they used for) for the static option in the /etc/dhcpcd.conf file? I'm configuring a network interface of a Raspberry Pi (running Raspbian stretch) by editing the /etc/dhcpcd.conf file. Although I was able to set it up correctly, I am curious about all the configuration options provided through this file, specifically for static configuration. I read the man page of dhcpcd.conf and didn't find any explanation of the values the static option accepts. I wasn't able to find anything on Google either. The man page of dhcpcd.conf just says this:

    static value
        Configures a static value. If you set ip_address then dhcpcd will not attempt
        to obtain a lease and will just use the value for the address with an infinite
        lease time. If you set ip6_address, dhcpcd will continue auto-configuation as
        normal.
        Here is an example which configures two static address, overriding the default
        IPv4 broadcast address, an IPv4 router, DNS and disables IPv6 auto-configuration.
        You could also use the inform6 command here if you wished to obtain more
        information via DHCPv6. For IPv4, you should use the inform ipaddress option
        instead of setting a static address.

            interface eth0
            noipv6rs
            static ip_address=192.168.0.10/24
            static broadcast_address=192.168.0.63
            static ip6_address=fd51:42f8:caae:d92e::ff/64
            static routers=192.168.0.1
            static domain_name_servers=192.168.0.1 fd51:42f8:caae:d92e::1

        Here is an example for PPP which gives the destination a default route. It uses
        the special destination keyword to insert the destination address into the value.

            interface ppp0
            static ip_address=
            destination routers

After reading some tutorials, all the valid options I know of are these:

    ip_address
    routers
    domain_name_servers
    domain_search
    domain_name

JFYI, my /etc/dhcpcd.conf configuration file looks like this:

    # Inform the DHCP server of our hostname for DDNS.
    hostname
    # Use the hardware address of the interface for the Client ID.
    clientid
    # Persist interface configuration when dhcpcd exits.
    persistent
    # Rapid commit support.
    # Safe to enable by default because it requires the equivalent option set
    # on the server to actually work.
    option rapid_commit
    # A list of options to request from the DHCP server.
    option domain_name_servers, domain_name, domain_search, host_name
    option classless_static_routes
    # Most distributions have NTP support.
    option ntp_servers
    # A ServerID is required by RFC2131.
    require dhcp_server_identifier
    # Generate Stable Private IPv6 Addresses instead of hardware based ones
    slaac private
    # A hook script is provided to lookup the hostname if not set by the DHCP
    # server, but it should not be run by default.
    nohook lookup-hostname

    # Static IP configuration for eth0.
    interface eth0
    static ip_address=192.168.12.234/24
    static routers=192.168.12.1
    static domain_name_servers=192.168.12.1
    nogateway
Configure static values in /etc/dhcpcd.conf
You can either add the relevant lines at the top of debian/changelog (find here details on the contents of that file). You can duplicate the current top stanza and change the version number (making a useful log comment is a good idea). Alternatively you can use the dch tool (from the devscripts package), which appends a local suffix to the version for you:

    dch --local your_suffix

Once installed, you can check the installed version of the package with something like this (there are alternatives):

    dpkg -l dhcpcd5

Upstream version identifiers cannot be automatically imported because they don't always officially exist (say, python3-lzss) and when they do, they might not be compatible with the restrictions and sorting of the package system's versions. For example, an epoch is sometimes needed to migrate from upstream to Debian versions.
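As a concrete sketch of the "duplicate the top stanza" approach, here is the text manipulation involved. The paths, maintainer names, and version strings below are invented for illustration, not taken from the real dhcpcd5 package:

```shell
# Create a toy debian/changelog with one stanza.
mkdir -p demo/debian
cat > demo/debian/changelog <<'EOF'
dhcpcd5 (1:8.1.2-1+rpt5) buster; urgency=medium

  * Original packaging.

 -- Maintainer <maint@example.org>  Mon, 01 Mar 2021 10:00:00 +0000
EOF

# Prepend a new stanza with a bumped local version; after this,
# rebuilding with debuild and installing the .deb makes dpkg
# report the new version.
{ cat <<'EOF'
dhcpcd5 (1:8.1.2-1+rpt5+local1) buster; urgency=medium

  * Local rebuild with one-line fix in src/dhcp.c.

 -- Me <me@example.org>  Tue, 02 Mar 2021 10:00:00 +0000

EOF
  cat demo/debian/changelog
} > demo/debian/changelog.new
mv demo/debian/changelog.new demo/debian/changelog

head -n 1 demo/debian/changelog
```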
I've just completed a simple source code modification & rebuild on a Raspberry Pi OS - bullseye machine. Because this is new to me, I'll list the steps I followed in an effort to avoid ambiguity: $ dhcpcd --version dhcpcd 8.1.2 # "before" version $ sudo apt install devscripts # build tools for using `debuild` $ apt-get source dhcpcd5 # creates source tree ~/dhcpcd5-8.1.2; Debian git repo is far off! $ cd dhcpcd5-8.1.2 # cd to source dir $ nano src/dhcp.c # make required changes to the source (one line) ~/dhcpcd5-8.1.2 $ debuild -b -uc -us # successful build $ cd .. $ sudo dpkg -i dhcpcd5_8.1.2-1+rpt5_armhf.deb # install .deb file created by debuild $ dhcpcd --version dhcpcd 8.1.2 # "after" version $ All well & good, but the "before" & "after" version numbers are exactly the same, which leaves me without a simple way to know whether I have my corrected code running, or the un-corrected code. I'll install the corrected .deb file to several hosts, I may get requests from others, etc, so I'd like some way to easily distinguish corrected from un-corrected code. Using dhcpcd --version seems an easy way to do this. I've read that Debian has rules re version numbers, but as I'm not releasing this to "the world" I see no need for formality. Also - I've submitted a pull request/merge request to the Debian repo, and I've advised the RPi organization on the issue. I've gotten no feedback from either party, but this bug is a huge annoyance for me. I don't wish to wait for a new release of dhcpcd with a "proper" version number. What must I do to cause the corrected version of dhcpcd to report dhcpcd 8.1.2.1 - or something similar? EDIT for Clarification: Based on this answer, I edited dhcpcd5-8.1.2/debian/changelog. 
Following this change, the apt utilities consistently report the version of dhcpcd as 8.1.3:

    $ apt-cache policy dhcpcd5
    dhcpcd5:
      Installed: 1:8.1.3-1+rpt1
      Candidate: 1:8.1.3-1+rpt1
      Version table:
     *** 1:8.1.3-1+rpt1 100
            100 /var/lib/dpkg/status
         1:8.1.2-1+rpt1 500
            500 http://archive.raspberrypi.org/debian buster/main armhf Packages
         7.1.0-2 500
            500 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
    $
    $ dpkg -s dhcpcd5 | grep Version
    Version: 1:8.1.3-1+rpt1
    $

However: dhcpcd --version still reports 8.1.2. dhcpcd is aliased to dhcpcd5 in /etc/alternatives. Consequently, dhcpcd --version is actually dhcpcd5 --version. It appears that the executable dhcpcd5 is getting its --version from a different source than the apt utilities?

EDIT 2: Turns out the version # that gets reported by dhcpcd --version is defined in defs.h as follows:

    #define PACKAGE "dhcpcd"
    #define VERSION "8.1.2"

I think dhcpcd is a bit of an outlier. The RPi team apparently decided to forego the upstream version 9 when it was released (years ago), and have stuck to version 8.1.2 even though there were several upstream releases following ver 8.1.2. Still more confusing is the fact that the .dsc file lists Vcs-Browser: https://salsa.debian.org/smlx-guest/dhcpcd5 as the Git repo - but it's actually stuck at version 7. This doesn't make much sense to me - I guess that's one reason I'm not a package maintainer. :)
How to set a new version number in a .deb package I've built
I'd say it's very likely the problem you're seeing is with the dhcpcd@eth0.service that's configured on your system. So my recommendation would be to disable it, hopefully that's enough to make that timeout during boot disappear:

    $ sudo systemctl disable dhcpcd@eth0

I'll go over the evidence to support that claim. There's more troubleshooting that can be done here, and I'll suggest some more steps (in case you want to look further, or troubleshoot similar issues in the future.)

The main evidence of the issue is the message in the output of systemctl status dhcpcd@eth0, which says:

    Mar 05 09:42:42 brightprogrammer systemd[1]: dhcpcd@eth0.service: Job dhcpcd@eth0.service/start failed with result 'dependency'.

Failed with result "dependency" means, in this case, it was waiting for something else, that failed. This service will have a dependency on eth0.device and this device will not appear, so that's the probable source of the timeout. You can take a look at systemctl status eth0.device to see if anything else shows up; it's possible it will (but then, it's possible it won't.)

Like you mentioned in your question, there's probably a mix-up between eth0 and the actual device name of enp1s0f1 in your system. systemd (more specifically udevd) will rename network interfaces to give them a consistent name, and this typically happens very early at boot (sometimes even before systemd comes up), so systemd will not really see the eth0 name anymore. If you want to enable DHCP on that interface in the future, enable dhcpcd@enp1s0f1 instead.

The output of systemd-analyze critical-chain supports the hypothesis of a timeout on that dhcpcd@eth0 service, which you can see from these two steps:

    └─network.target @1min 33.501s
      └─wpa_supplicant.service @15.761s +638ms

The times after @ are the clock times right after boot. The wpa_supplicant service came up about 16s after systemd started, but network.target was only reached at 1min 33s (roughly the 90s you talk about.)
You would probably have seen dhcpcd@eth0 here more explicitly, but the unit actually went into the "loaded"/"inactive" state, rather than "failed", so that's probably why it isn't listed prominently here (and in systemd-analyze blame), which would have helped point it out as the culprit.

Finally, one step that's usually a great start when troubleshooting systemd boot issues is to look at the bare systemctl status output, which will tell you whether the system is in "degraded" state, which indicates that something failed during boot. You want to ensure the system status is "running", so investigating those failures will typically uncover issues such as timeouts, etc. You can proceed the investigation from that point by looking at the output of systemctl, which will list all active units and their status; if you see problems there, look further by investigating specific units (with systemctl status <unit> or journalctl -u <unit>.) Command systemctl --state=failed is also useful to show only the failed units.

Finally, checking the journal is really good for making correlations. Command journalctl -b shows the journal since the system booted, so it's great for looking into issues during boot. As mentioned before, journalctl -u <unit> is useful to investigate logs for a single unit.

Hopefully these tips will be helpful to you in digging deeper and understanding what is happening in your system. Also hoping that disabling that dhcpcd@eth0 is enough to solve the boot delay you're experiencing.
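Since the root of the problem is a stale eth0 name, one quick sanity check (a generic sketch, not specific to this machine) is to list the kernel's current interface names before enabling any dhcpcd@<interface> unit:

```shell
# The kernel exposes one directory per network interface here;
# use these names (e.g. enp1s0f1), not legacy ones like eth0.
ls /sys/class/net
```

On the asker's machine this would show lo, enp1s0f1 and wlp2s0, confirming that no eth0 device will ever appear for dhcpcd@eth0 to bind to.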
I am using a new installation of Arch Linux and whenever I boot my system I have to wait for 90 seconds as there is a start job running for my network interfaces. I installed Arch yesterday and whenever I do ip a I get that ethernet interfaces is in DOWN state. I used a wired usb tether to complete the whole installation. I just want to remove that start job process while starting. I saw a solution somewhere in Arch community that I have to disable my interface using: # systemctl disable dhcpcd@interface_nameI haven't done that yet. My question is if I disable that interface will that cause any problems in future? I am not using any LAN connections now. Will that cause any problems if in future I want to use a LAN or some kind of ethernet connection? Output of uname -a: [siddharth@brightprogrammer ~]$ uname -a Linux brightprogrammer 4.19.26-1-lts #1 SMP Wed Feb 27 16:06:52 CET 2019 x86_64 GNU/LinuxOutput of ip a: [siddharth@brightprogrammer ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp1s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 80:fa:5b:5b:9e:47 brd ff:ff:ff:ff:ff:ff 3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 94:b8:6d:c9:57:89 brd ff:ff:ff:ff:ff:ff inet 192.168.43.201/24 brd 192.168.43.255 scope global dynamic noprefixroute wlp2s0 valid_lft 2153sec preferred_lft 2153sec inet6 2405:205:a061:4977:348c:2fe2:102:47ac/64 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::614a:460c:ff14:9caa/64 scope link noprefixroute valid_lft forever preferred_lft foreverOutput of find /etc/systemd: [siddharth@brightprogrammer ~]$ find /etc/systemd /etc/systemd 
    /etc/systemd/journald.conf
    /etc/systemd/coredump.conf
    /etc/systemd/sleep.conf
    /etc/systemd/journal-remote.conf
    /etc/systemd/system.conf
    /etc/systemd/timesyncd.conf
    /etc/systemd/journal-upload.conf
    /etc/systemd/networkd.conf
    /etc/systemd/system
    /etc/systemd/system/getty.target.wants
    /etc/systemd/system/getty.target.wants/[emailprotected]
    /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service
    /etc/systemd/system/bluetooth.target.wants
    /etc/systemd/system/bluetooth.target.wants/bluetooth.service
    /etc/systemd/system/multi-user.target.wants
    /etc/systemd/system/multi-user.target.wants/NetworkManager.service
    /etc/systemd/system/multi-user.target.wants/[emailprotected]
    /etc/systemd/system/multi-user.target.wants/wicd.service
    /etc/systemd/system/multi-user.target.wants/[emailprotected]
    /etc/systemd/system/multi-user.target.wants/remote-fs.target
    /etc/systemd/system/network-online.target.wants
    /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service
    /etc/systemd/system/dbus-org.bluez.service
    /etc/systemd/system/dbus-org.wicd.daemon.service
    /etc/systemd/system/display-manager.service
    /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service
    /etc/systemd/logind.conf
    /etc/systemd/user
    /etc/systemd/user/sockets.target.wants
    /etc/systemd/user/sockets.target.wants/p11-kit-server.socket
    /etc/systemd/user/sockets.target.wants/pipewire.socket
    /etc/systemd/user/sockets.target.wants/gpg-agent.socket
    /etc/systemd/user/sockets.target.wants/dirmngr.socket
    /etc/systemd/user/sockets.target.wants/gpg-agent-extra.socket
    /etc/systemd/user/sockets.target.wants/gpg-agent-browser.socket
    /etc/systemd/user/sockets.target.wants/gpg-agent-ssh.socket
    /etc/systemd/user/sockets.target.wants/pulseaudio.socket
    /etc/systemd/user/default.target.wants
    /etc/systemd/user/default.target.wants/xdg-user-dirs-update.service
    /etc/systemd/user.conf
    /etc/systemd/network
    /etc/systemd/resolved.conf

Output of systemd-analyze:

    [siddharth@brightprogrammer ~]$ systemd-analyze
    Startup finished in 5.369s (firmware) + 1.785s (loader) + 5.214s (kernel) + 1min 33.882s (userspace) = 1min 46.252s
    graphical.target reached after 1min 33.882s in userspace

Output of systemd-analyze critical-chain:

    [siddharth@brightprogrammer ~]$ systemd-analyze critical-chain
    The time after the unit is active or started is printed after the "@" character.

    graphical.target @1min 33.882s
    └─gdm.service @1min 33.615s +265ms
      └─systemd-user-sessions.service @1min 33.503s +110ms
        └─network.target @1min 33.501s
          └─wpa_supplicant.service @15.761s +638ms
            └─basic.target @11.036s
              └─sockets.target @11.036s
                └─dbus.socket @11.036s
                  └─sysinit.target @11.028s
                    └─systemd-backlight@backlight:intel_backlight.service @14.008s >
                      └─system-systemd\x2dbacklight.slice @14.006s
                        └─system.slice @2.915s
                          └─-.slice @2.915s

Output of systemd-analyze blame:

    [siddharth@brightprogrammer ~]$ systemd-analyze blame
    11.692s [emailprotected]
    11.692s [emailprotected]
     6.472s lvm2-monitor.service
     4.616s wicd.service
     3.222s systemd-journal-flush.service
     3.188s NetworkManager.service
     2.719s bluetooth.service
     2.711s systemd-logind.service
     1.395s systemd-sysusers.service
     1.216s systemd-udevd.service
     1.213s ldconfig.service
      981ms udisks2.service
      971ms polkit.service
      649ms [emailprotected]
      638ms wpa_supplicant.service
      600ms systemd-modules-load.service
      526ms systemd-tmpfiles-setup.service
      501ms systemd-tmpfiles-setup-dev.service
      493ms upower.service
      487ms systemd-udev-trigger.service
      464ms systemd-journald.service
      371ms systemd-journal-catalog-update.service
      338ms systemd-sysctl.service
      268ms colord.service
      265ms gdm.service
      260ms kmod-static-nodes.service
      238ms dev-sda2.swap
      236ms accounts-daemon.service
      142ms systemd-random-seed.service
      135ms systemd-backlight@backlight:intel_backlight.service
      110ms systemd-user-sessions.service
       91ms [emailprotected]
       81ms systemd-update-utmp.service
       54ms systemd-remount-fs.service
       48ms sys-kernel-debug.mount
       35ms systemd-tmpfiles-clean.service
       28ms dev-hugepages.mount
       26ms [emailprotected]
       25ms sys-kernel-config.mount
       16ms [emailprotected]
       15ms dev-mqueue.mount
        9ms rtkit-daemon.service
        6ms systemd-update-done.service
        4ms systemd-rfkill.service
        3ms sys-fs-fuse-connections.mount
        2ms tmp.mount

Output of systemctl status dhcpcd@eth0 and dhcpcd@enp1s0f1:

    [siddharth@brightprogrammer ~]$ sudo systemctl status dhcpcd@eth0
    ● dhcpcd@eth0.service - dhcpcd on eth0
       Loaded: loaded (/usr/lib/systemd/system/dhcpcd@.service; enabled; vendor pre>
       Active: inactive (dead)

    Mar 05 09:42:42 brightprogrammer systemd[1]: Dependency failed for dhcpcd on eth0.
    Mar 05 09:42:42 brightprogrammer systemd[1]: dhcpcd@eth0.service: Job dhcpcd@eth0.service/start failed with result 'dependency'.

    [siddharth@brightprogrammer ~]$ sudo systemctl status dhcpcd@enp1s0f1
    ● dhcpcd@enp1s0f1.service - dhcpcd on enp1s0f1
       Loaded: loaded (/usr/lib/systemd/system/dhcpcd@.service; disabled; vendor pr>
       Active: inactive (dead)

I recently disabled enp1s0f1. That might be the reason it is disabled. I can also provide the output of journalctl -xe, but that is very large! Also, I suspect that dhcpcd is somehow confused between eth0 and my enp1s0f1.
A start job is running for eth0
NetworkManager (and its nmcli CLI command) calls a lower-level API in the end. As this has nothing to do with dhcpcd and not much to do with wpa_supplicant, if you're not using NetworkManager, you can still (install the adequate package and) use the rfkill command as root.

To list the status of all available RF devices:

    rfkill list

To disable all of them:

    rfkill block all

To enable all of them (only if no hardware switch prevents it):

    rfkill unblock all

For other options, please check the man page.
I believe "airplane mode" in various applets is equivalent to nmcli radio wifi off. What is its equivalent when we use dhcpcd/wpa_supplicant? pkill wpa_supplicant?
airplane mode in wpa_supplicant
You can use tshark, the command-line version of Wireshark:

    tshark -t ad -nn -V -O dhcp -i xenbr0 -f 'ip and udp port 67'
How does one list the options that are sent to a client from a DHCP server? Using a utility run from bash?
How to list DHCP options sent to a client from a DHCP server in Linux?
From man 5 dhclient.conf, the config entry is send host-name <...>. It's near the bottom. The example they give is:

    interface "ep0" {
        send host-name "andare.example.com";
        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, host-name;
    }

But I doubt you need to put it under an interface section. I would not worry about why it's sending "linux"; just override it. It may be being set via systemd. On openSUSE, the manpage for dhclient says it has a -H switch. YMMV.
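Concretely, a minimal override outside any interface block might look like this (the hostname "myhost" is a placeholder, not from the question):

```
# /etc/dhclient.conf -- send our real hostname in DHCP option 12
send host-name "myhost";
```

Note that since the asker says the system actually uses dhcpcd rather than dhclient, the dhcpcd.conf equivalent would be its `hostname` directive (per dhcpcd.conf(5), `hostname` alone sends the current system hostname, and if I read the man page correctly `hostname myhost` forces a specific value).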
I start by saying that my experience in networking is somewhere between low and medium. I'm working on a Linux machine with DHCP configured and from tcpdump traces I see that the dhclient send the hostname "linux" in Option 12, Request packages. I verified the files /etc/hostname, /etc/hosts and /etc/dhclient.conf and there's no parameter related to the hostname that have the value "linux". I must specify that I use dhcpcd. Any help/hint is appreciated since I don't have any ideas where that value is set.
DHCP client send hostname "linux"
The DHCID records are parts of a scheme to identify which client currently holds the corresponding other dynamically-updated DNS record(s) with the same name; see RFC 4701. The TXT records are an older scheme for the same purpose: if you are using the ISC dhcpd as your DHCP server and have set ddns-update-style standard; then DHCID records will be used. If you have set ddns-update-style interim; then TXT records will be used instead. If you want a DHCP client to always be able to override the DNS records for the IP address it currently holds, regardless of the DHCID/TXT records, you'll need to configure your DHCP server with update-conflict-detection off; (or equivalent for DHCP servers other than the ISC dhcpd). This will make it just delete the old record(s) and create new ones, even if a different client ID record exists. If only the DHCP server (and the administrator) is allowed to make DDNS updates, this is probably acceptable. If you allow clients to send their own DDNS updates directly to the DNS server, disabling conflict detection might allow evil clients to impersonate other clients or important servers in the zone, depending on what kinds of updates will be allowed by the DNS server.
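For concreteness, the relevant server-side knobs look something like this in ISC dhcpd syntax (this is a fragment of dhcpd.conf; only one of the two ddns-update-style lines would be active):

```
# /etc/dhcp/dhcpd.conf
ddns-update-style standard;       # use DHCID records (RFC 4701)
# ddns-update-style interim;      # ...or the older TXT-record scheme
update-conflict-detection off;    # overwrite records held by another client ID
```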
I have a working dns+dhcp server. When clients receive the IP from the dhcpd server, they send the hostname to the dhcpd+dns server and it works fine. But there is one problem: suppose a client is called nagios1.myzone.com; for some reason I delete it and replace it with another one with the same name, but a different Linux distro and of course a different DHCID (DHCP client id). The dns+dhcp server returns this error:

    client @0x6g12280f2z00 192.168.0.4#48193/key dhcp.myzone.com: updating zone 'myzone.com/IN': update unsuccessful: nagios1.myzone.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
    Nov 26 20:38:11 dns1 named[1541]: client @0x6g12280f2z00 192.168.0.4#37309/key dhcp.myzone.com: updating zone '.myzone.com/IN': update unsuccessful: nagios1..myzone.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
    Nov 26 20:38:11 dns1 dhcpd[1548]: Forward map from nagios1..myzone.com to 192.168.0.110 FAILED: Has an address record but no DHCID, not mine.

I have a workaround for this: simply delete the TXT record with these lines:

    vim file.txt

    zone myzone.com.
    server dns1.myzone.com
    update del nagios1.myzone.com. 600 IN A 192.168.0.110
    send

    zone myzone.com.
    server dns1.myzone.com
    update del nagios1.myzone.com. 600 IN TXT "3147358c8b5523979cfecd8d67f26b6678"
    send

    zone 0.168.192.in-addr.arpa.
    server dns1.myzone.com.
    update del 110.0.168.192.in-addr.arpa. 600 IN PTR nagios1.myzone.com.
    send

then use the command nsupdate file.txt

My question is: is it possible to force or create the dynamic update of the DHCID/TXT record? I have configured dns with these settings:

    zone "myzone.com." IN {
        type master;
        file "/var/named/data/myzone.zone";
        update-policy {
            grant dhcp.myzone.com. wildcard * A TXT SRV CNAME MX DHCID;
        };

The dynamic update works for all, tested A and PTR, why not for TXT/DHCID?
dhcp and dns dynamic update, is possible to override/renew the DHCID record?
I am a bit confused about your setup. Maybe I am misunderstanding it. Anyhow, the way it's normally done is to have one central place to configure everything (in your case, that should probably be your router). Then you don't have to care about the configuration of the RaspPi's. In fact, you can configure them identically; all differences will be resolved by the RaspPi's using DHCP.

If you look at dnsmasq's man page, it can read /etc/ethers (man ethers for details) to give each RaspPi a static IP based on the RaspPi's MAC address. It also reads /etc/hosts to provide DNS resolution for those static IP addresses, so you can name your RaspPi's however you want. If you do it that way, a plain out-of-the-box DHCP client on the RaspPi's should suffice. You don't need dhcpd anywhere.

Edit

"because why would you assign an ip via DHCP when there's already one assigned statically?"

Because you don't want to configure each RaspPi separately. "Statically" doesn't mean "locally configured"; it means "every machine always gets the same IP address". You can do that with DHCP by looking at the MAC address of the machine. Imagine you had a thousand RaspPi's. Do you manage those individually? No, you manage them in a central location, and keep them otherwise identical.

"The reason is I don't know how to set dhcpcd back to go look for an address from dnsmasq."

I don't get why you think you need to run dhcpd on the RaspPi's. If they need to get other information by DHCP, they need a DHCP client, not a DHCP server. If you want to configure each static address for them locally, then you again can do that without a DHCP server. If you in addition want to configure each DNS name for them locally by running a DHCP server on them, then this is not going to work. (Though you can make it work by running DHCP clients on them, and having them tell the central DHCP server (your router) their hostnames in the DHCP request.) For DNS, you need to have a central server where all the information is.
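As a sketch, with made-up MAC addresses and names, the central dnsmasq machine could carry something like:

```
# /etc/ethers -- one fixed IP per MAC address
b8:27:eb:00:00:01  192.168.1.101
b8:27:eb:00:00:02  192.168.1.102

# /etc/hosts -- names for those fixed IPs
192.168.1.101  pi1
192.168.1.102  pi2
```

Note that dnsmasq only consults /etc/ethers when the read-ethers option is set in its configuration, so that line needs to be present in dnsmasq.conf for the first file to take effect.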
I'm setting up a couple of Raspberry Pi's on my router's DMZ (don't worry, all the ports are closed); my router uses DNSMasq for DNS, and so I added the MAC addresses, hostnames, and IPs of the pi's to the DHCP static leases. Now that said, I'm only learning to use dhcpcd; I'm used to the old way of using /etc/network/interfaces to configure IP address assignment. On the pi's themselves, I've configured them with /etc/dhcpcd.conf as having a static IP address and pointed them at my DNSMasq DNS server. It seems a little strange to do this, but is it okay to do so? This way my pi's get a DNS record (so the devices can find each other) and a static IP address; I suppose I could configure it so that each pulls its IP based on the MAC address using the dhcpcd client. That said, I don't really know how to configure dhcpcd to pull its IP address from DNSMasq; I'm planning on adding additional DNS records (maybe from /etc/hosts) for the pi's to pick up for separate nginx server blocks, so is it okay to have static IPs configured in dhcpcd while I have static DHCP leases configured? Or is that weird and I shouldn't do that?
Static IP and DHCP Lease in dnsmasq?
Got some help over here. NetworkManager was taking over. I probably installed it at some point for one of its tools, never intending it to actively take over my Pi as a DHCP client. Honestly, I'm surprised this doesn't happen more often. It was quite tricky to troubleshoot. I have WiFi disabled on my Pi, and while investigating I couldn't enable WiFi either. NM is a handful! I also somehow disabled dhcpcd while troubleshooting, so initially disabling NM simply took the Pi offline and I had to connect a monitor/keyboard since SSH went down.

In the end, here is what fixed it...

sudo systemctl enable dhcpcd

followed by...

sudo systemctl disable NetworkManager
I have a Pihole with a fixed IP, 192.168.0.3. It works, and I can get to the GUI interface with that IP. Recently I noticed my router displaying alternating IP addresses for the Pi in its UI (the router lists clients by MAC). I tried navigating to the second address on the same subnet and the Pihole GUI is served.

ip a shows a bunch of stuff, but here is eth0...

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether xx:xx:eb:de:54:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.64/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 86015sec preferred_lft 86015sec
    inet 192.168.0.3/24 brd 192.168.0.255 scope global secondary noprefixroute eth0
       valid_lft forever preferred_lft forever

cat /etc/dhcpcd.conf shows...

interface eth0
static ip_address=192.168.0.3/24
static routers=192.168.0.1
static domain_name_servers=127.0.0.1

Other notes...

- WiFi is disabled on the Pi
- I only have one DHCP server on the network
- The Pi's DHCP server is disabled
- I have a docker container on the Pi serving HAAS
- I have a backup Pihole on the network fixed to 192.168.0.2, which doesn't have this issue
Eth0 Has two IP addresses
I added this to the bottom of the dhcpcd.conf file and 169.254.x.x route is not added. denyinterfaces veth* br*
I'm trying to disable 169.254.x.x routes from being added to the route table on a pi4 (Raspbian 10 Buster). All I have read so far points to dhcp configuring link-local addresses (APIPA, zeroconf). I added noipv4ll and set eth0 to a static IP in dhcpcd.conf with no joy.

pi@raspberrypi:/etc/dhcp $ cat /etc/dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.

# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel

# Inform the DHCP server of our hostname for DDNS.
hostname

# Use the hardware address of the interface for the Client ID.
clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
#duid

# Persist interface configuration when dhcpcd exits.
persistent

# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit

# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu

# Most distributions have NTP support.
#option ntp_servers

# A ServerID is required by RFC2131.
require dhcp_server_identifier

# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
slaac private

# Example static IP configuration:
interface eth0
static ip_address=10.10.20.3/24
#static ip6_address=fd51:42f8:caae:d92e::ff/64
static routers=10.10.20.10
static domain_name_servers=10.10.20.3 1.1.1.1

# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=1.1.1.1

# fallback to static profile on eth0
#interface eth0
#fallback static_eth0

noipv6
noipv4ll
disable 169.254.x.x routes for veth interfaces - pi4 buster
Thanks for the comments. The solution was a combination of enabling the systemd service:

sudo systemctl enable --now dhcpcd

and uninstalling NetworkManager, which I did not know I had installed and which was causing nondeterministic behavior of my device:

sudo pacman -Rs networkmanager

Thanks to everyone who was trying to help.
I am using Arch Linux and every time I boot my system I have to manually run:

sudo dhcpcd enp0s31f6
sudo dhcpcd wlan0

to have an internet connection. How can I make these services start automatically after boot? Thanks for the help
Arch Linux: automatic start of dhcpcd on boot
From man dhcpd.conf:REFERENCE: OPTION STATEMENTS DHCP option statements are documented in the dhcp-options(5) manual page.Have a look at man dhcp-options or the online manual.
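For reference, option statements all follow the same shape; here are a few common ones as they would appear in dhcpd.conf (the addresses and domain are placeholders):

```
option domain-name "example.org";
option domain-name-servers 192.168.1.1, 8.8.8.8;
option routers 192.168.1.1;
option broadcast-address 192.168.1.255;
option ntp-servers 192.168.1.1;
```

dhcp-options(5) documents each option's name and value syntax in this same form, so building the full list from that page is mostly mechanical.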
OS: Debian 11. I'm attempting to create an example ISC dhcpd.conf which has an entry for each option. So far I've managed to find about 30 options to include, but I can't find the other ~160. I looked at the dhcp & dhcpd.conf man pages and consulted ChatGPT. Does anyone have such a list they'd be willing to share, or can you point me to where I can look? Thanks
Is there an example dhcpd.conf that contains an example for each option?
This is all for historical reasons. There used to be a dhcpcd package which integrated with ifupdown, running one instance of dhcpcd per interface. Version 5 of the project changed that behaviour, with a single instance handling all interfaces. To simplify upgrades, it was packaged as an entirely new package, dhcpcd5; this allowed administrators to have both versions in parallel and handle the configuration upgrade as they saw fit. The dhcpcd symlink is still used by the init script on non-systemd-managed systems.
I was digging into dhcpcd behavior, and I've found something that confuses me: dhcpcd vs. dhcpcd5.

$ which dhcpcd
/sbin/dhcpcd

But dhcpcd is only a link: dhcpcd -> /etc/alternatives/dhcpcd, which in turn points back to: dhcpcd -> /sbin/dhcpcd5. So - a dhcpcd and a dhcpcd5 - both in sbin. On my Raspberry Pi, dhcpcd is apparently invoked at boot time from /etc/systemd/system/dhcpcd.service.d/wait.conf using this command: ExecStart=/usr/lib/dhcpcd5/dhcpcd -q -w. AFAIK, neither dhcpcd nor dhcpcd5 is called anywhere else in the system. I guessed there must be a reason for all of this, but after searching I could find no explanation. Why was dhcpcd renamed dhcpcd5? Also - if it's only called once by systemd at boot time, why all the links and alternatives?
There are two `dhcpcd` files in Debian buster - why is that?
After looking at the linked tutorial ( https://pimylifeup.com/raspberry-pi-wifi-bridge/ ), I could conclude that this is not a bridge tutorial, but a NAT/router tutorial. Even a comment in it states:

Also, important to note that this setup is a wifi client NAT router, not technically a bridge.

So to actually use a bridge, follow a bridge tutorial. Since it's Raspbian, Debian's BridgeNetworkConnections should be good enough. The bridge-utils package mentioned isn't really needed for its (obsolete) brctl command, which could be completely replaced with modern iproute2's ip link and (if actually needed) bridge, but for its bridge-utils-interfaces plugin for ifupdown's configuration. So in the end the configuration can be done with something similar to:

iface eth0 inet manual

iface eth1 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0 eth1

Don't put any IP on the real interfaces, because they now become bridge ports and their layer 3 settings will be ignored. It's not vital, but the bridge should inherit its first interface's MAC address; so if it really matters and you'd rather have eth1's MAC used, put it first in the bridge_ports line (this would probably also change the router's DHCP offer).

Now change any reference to eth0 in various settings into br0 instead, but chances are you don't even need this, since for example you no longer need dnsmasq. That's it.

Some extra information:

If you ever use iptables instead of, or in addition to, ebtables to try to do filtering between the two interfaces (hint: you should probably not, it's a bridge now, not a router, but it's needed for a stateful transparent firewalling bridge), please be aware, if activating br-netfilter, of the special interactions between the bridge filtering and the IP filtering layers: ebtables/iptables interaction on a Linux-based bridge. This can lead to hard-to-debug results when not knowing about it.
Many tc qdisc effects (like netem) work on the outgoing direction (egress) only. Since you're between both interfaces eth0 and eth1, you could reason that you can always find an egress interface for a specific intended action, but if it's done on eth0 then the RPi itself can be affected on the internet side, which is probably not what is wished. You can avoid this by attaching an Intermediate Functional Block device (ifb0) to eth1: this artificially inserts an interface between the ingress and the rest of the network code. This interface is thus now an egress interface from the point of view of eth1's incoming data flow, and can happily accept egress features like netem; for any other purpose it's part of the ingress flow. You can then apply tc to eth1 and ifb0 and leave eth0 undisturbed. More information in my answer there: Simulation of packet loss on bridged interface using netem
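A minimal sketch of that ifb attachment, assuming the interface names above (the netem parameters are just example values; run as root):

```
# load the ifb module and bring the device up
modprobe ifb numifbs=1
ip link set ifb0 up

# redirect everything arriving on eth1 through ifb0
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# egress-only features like netem can now shape eth1's *incoming* traffic
tc qdisc add dev ifb0 root netem delay 100ms loss 5%
```

The u32 "match u32 0 0" filter matches every packet; the mirred action then hands it to ifb0, where the netem qdisc applies.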
I'm trying to set up a raspberry pi as a network bridge between a wireless access point and a router (the reason for this being that I'd like to connect a device to the AP and use tc on the pi to simulate a poor network). The router is wired to the pi at eth0 and the AP is wired to the pi at eth1 (usb to ethernet adapter). I'm using dhcpcd and dnsmasq to try accomplish this. However, even though I can connect a device to the AP and it is provided with an ip address (within the range specified in dhcpcd.conf), all pings (whether to domains or ip address) time out (I can't even ping the pi when connected to the AP). I have enabled ipv4 forwarding in /etc/sysctl.conf: net.ipv4.ip_forward=1 To the default dhcpcd.conf I've added: # eth1 is connected to the AP interface eth1 # This is the ip address of the Raspberry Pi static ip_address=10.0.0.100/24 # This is the ip address of the router static routers=10.0.0.1My dnsmasq.conf looks like this (I'm not entirely sure the interface is correct, I've set it to be the interface connected to the AP but changing it to eth0 doesn't seem to make any difference): interface=eth1 listen-address=10.0.0.100 bind-interfaces server=8.8.8.8 server=8.8.4.4 domain-needed bogus-priv dhcp-range=10.0.0.110,10.0.0.130,4hI ran these commands to add iptable rules (I then saved iptables to a file and am restoring them on boot via rc.local): sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPTFrom what I've read, the rules above should be correctly forwarding traffic through the pi, but this doesn't seem to be the case. I checked the status of the dhcpcd and dnsmasq services but didn't see anything that looks like an error. 
dhcpcd status:

● dhcpcd.service - dhcpcd on all interfaces
   Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/dhcpcd.service.d
           └─wait.conf
   Active: active (running) since Tue 2019-02-26 12:02:43 GMT; 29min ago
 Main PID: 368 (dhcpcd)
   CGroup: /system.slice/dhcpcd.service
           └─368 /sbin/dhcpcd -q -w

Feb 26 12:02:43 raspberrypi dhcpcd[368]: eth0: offered 10.0.0.140 from 10.0.0.1
Feb 26 12:02:43 raspberrypi dhcpcd[368]: eth0: probing address 10.0.0.140/24
Feb 26 12:02:47 raspberrypi dhcpcd[368]: eth0: using IPv4LL address 169.254.202.179
Feb 26 12:02:47 raspberrypi dhcpcd[368]: eth0: adding route to 169.254.0.0/16
Feb 26 12:02:48 raspberrypi dhcpcd[368]: eth0: leased 10.0.0.140 for 86400 seconds
Feb 26 12:02:48 raspberrypi dhcpcd[368]: eth0: adding route to 10.0.0.0/24
Feb 26 12:02:48 raspberrypi dhcpcd[368]: eth0: adding default route via 10.0.0.1
Feb 26 12:02:49 raspberrypi dhcpcd[368]: eth0: deleting route to 169.254.0.0/16
Feb 26 12:02:50 raspberrypi dhcpcd[368]: eth0: no IPv6 Routers available
Feb 26 12:02:50 raspberrypi dhcpcd[368]: eth1: no IPv6 Routers available

dnsmasq status:

● dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
   Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-02-26 12:02:43 GMT; 33min ago
 Main PID: 401 (dnsmasq)
   CGroup: /system.slice/dnsmasq.service
           └─401 /usr/sbin/dnsmasq -x /run/dnsmasq/dnsmasq.pid -u dnsmasq -r /run/dnsmasq/resolv.conf -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new --local-service --trust-anchor=.,19036,8,2,49aac11d7b6f6446702e54a1607371607a1a41855200fd2ce1cdde32f24e8fb5 --trust-anchor=.,20326,8,2,e06d44b80b8f1d39a95c0b0d7c65d0

Feb 26 12:02:50 raspberrypi dnsmasq-dhcp[401]: DHCPDISCOVER(eth1) a0:f3:c1:6d:2f:1b
Feb 26 12:02:50 raspberrypi dnsmasq-dhcp[401]: DHCPOFFER(eth1) 10.0.0.129 a0:f3:c1:6d:2f:1b
Feb 26 12:02:50 raspberrypi dnsmasq-dhcp[401]: DHCPDISCOVER(eth1) a0:f3:c1:6d:2f:1b
Feb 26 12:02:50 raspberrypi dnsmasq-dhcp[401]: DHCPOFFER(eth1) 10.0.0.129 a0:f3:c1:6d:2f:1b
Feb 26 12:02:58 raspberrypi dnsmasq-dhcp[401]: DHCPDISCOVER(eth1) a0:f3:c1:6d:2f:1b
Feb 26 12:02:58 raspberrypi dnsmasq-dhcp[401]: DHCPOFFER(eth1) 10.0.0.129 a0:f3:c1:6d:2f:1b
Feb 26 12:02:58 raspberrypi dnsmasq-dhcp[401]: DHCPREQUEST(eth1) 10.0.0.129 a0:f3:c1:6d:2f:1b
Feb 26 12:02:58 raspberrypi dnsmasq-dhcp[401]: DHCPACK(eth1) 10.0.0.129 a0:f3:c1:6d:2f:1b TL-WR702N
Feb 26 12:35:05 raspberrypi dnsmasq-dhcp[401]: DHCPREQUEST(eth1) 10.0.0.124 f4:5c:89:8e:aa:a1
Feb 26 12:35:05 raspberrypi dnsmasq-dhcp[401]: DHCPACK(eth1) 10.0.0.124 f4:5c:89:8e:aa:a1 george

In this status, TL-WR702N is the AP and george is a device connected to the AP. I'm stumped as to where I went wrong. I was following a tutorial for setting up a pi as a bridge and have tried to debug this issue by referring to the man pages for dnsmasq, dhcpcd and iptables, to no avail. The pi has been rebooted since setting this up.
Trouble setting up Raspbian network bridge