It is 'Cache', not 'cache'. systemd configuration keys are case-sensitive.
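Applied to the drop-in from the question below, the corrected drop-in would look like this (a sketch; path and values are the ones from the question):

```ini
# /etc/systemd/resolved.conf.d/dns.conf
[Resolve]
# blocky
DNS=127.0.0.10
DNSSEC=yes
# note the capital 'C': keys are case-sensitive, so 'cache=no' is ignored
Cache=no
FallbackDNS=84.200.69.80 8.8.8.8 2001:1608:10:25::9249:d69b 2001:4860:4860::8844
Domains=~.
```

After restarting systemd-resolved, the cache figures in `systemd-resolve --statistics` should stay at zero.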
I am using another DNS resolver, blocky, together with systemd-resolved. blocky already has features like caching and prefetching, so I don't want systemd-resolved running a cache and interfering with blocky's prefetching. But how do I disable the "cache" for systemd-resolved?

Config file /etc/systemd/resolved.conf.d/dns.conf:

[Resolve]
#blocky
DNS=127.0.0.10
DNSSEC=yes
# how to disable cache?
cache=no
FallbackDNS=84.200.69.80 8.8.8.8 2001:1608:10:25::9249:d69b 2001:4860:4860::8844
Domains=~.

Setting cache=no has no effect. systemd-resolved statistics:

❯ systemd-resolve --statistics
DNSSEC supported by current servers: yes

Transactions
Current Transactions: 2
Total Transactions: 4008

Cache
Current Cache Size: 189
Cache Hits: 1044
Cache Misses: 3072

DNSSEC Verdicts
Secure: 230
Insecure: 410
Bogus: 731
Indeterminate: 0

PS: I could use blocky directly, but systemd-resolved handles DNS better across various network scenarios.
How do I disable the "cache" for systemd-resolved?
I think I solved it? If I'm not mistaken, the problem seems to be that systemd-resolve / resolvectl does not persist its settings for long... If I change the file /etc/systemd/resolved.conf so that it contains

[Resolve]
DNS=127.0.0.1

and then reboot, it (finally) seems to do what it should. I'd still like to know why

resolvectl dns wlp0s20f3 127.0.0.1

apparently only takes effect so briefly.
I tried to block the .zip TLD on my laptop (running Fedora 38) with BIND.

Installing bind. Updating named.conf:

options {
    listen-on port 53 { 127.0.0.1; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    secroots-file "/var/named/data/named.secroots";
    recursing-file "/var/named/data/named.recursing";
    allow-query { localhost; };

    /*
     - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
     - If you are building a RECURSIVE (caching) DNS server, you need to enable
       recursion.
     - If your recursive DNS server has a public IP address, you MUST enable access
       control to limit queries to your legitimate users. Failing to do so will
       cause your server to become part of large scale DNS amplification attacks.
       Implementing BCP38 within your network would greatly reduce such attack
       surface.
    */
    recursion yes;

    forwarders { 8.8.8.8; };

    dnssec-validation yes;

    managed-keys-directory "/var/named/dynamic";
    geoip-directory "/usr/share/GeoIP";

    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";

    /* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
    include "/etc/crypto-policies/back-ends/bind.config";

    /* this makes it block everything */
    // response-policy { zone "zip"; };
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

zone "zip" IN {
    type master;
    file "zip-rpz";
    allow-update { none; };
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Added /var/named/zip-rpz:

$TTL 1D  ; default expiration time (in seconds) of all RRs without their own TTL value
@   IN SOA  ns.zip. postmaster.ns.zip. ( 2020091025 7200 3600 1209600 3600 )
@   IN NS    ns1        ; nameserver
*   IN A     127.0.0.1  ; localhost
    IN AAAA  ::         ; localhost

Apply temporarily:

sudo systemctl enable named
sudo service named restart
resolvectl dns wlp0s20f3 127.0.0.1

However, running dig url.zip returns 127.0.0.1 only for the next minute or so – after that it shows the "correct" IP (and I can visit the site in the browser again). Why is it getting reset? If I remove the forwarders line, same result. If I set recursion no;, I am unable to resolve anything other than .zip URLs (those point to 127.0.0.1).
How do I get BIND (DNS) to be authoritative about a tld for more than a minute
I went on quite a journey with this one, so I just wanted to capture my findings and a solution. Apparently, the gold standard for configuring this in the past was a helper script called update-systemd-resolved, but this has stopped working with recent versions of NetworkManager. These are the steps I went through to set up the configuration that I wanted. (This assumes you already have your OpenVPN client configured and connecting.)

# Make a copy of any existing resolv.conf configuration
sudo mv /etc/resolv.conf /etc/resolv.conf.original
# This points resolv.conf at the systemd-resolved 127.0.0.53 stub
sudo ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# Edit the systemd-resolved configuration file
sudo vi /etc/systemd/resolved.conf

Set DNS= to your local LAN/router DNS IP (i.e. 192.168.1.1).
Set Domains= to your local LAN domain (i.e. my.company.com).

sudo service systemd-resolved restart
resolvectl status

The Global entry should now reflect the values you set in /etc/systemd/resolved.conf:

Global
  Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
  Current DNS Server: 192.168.1.1
  DNS Servers: 192.168.1.1
  DNS Domain: my.company.com

Now create a helper script in the same directory your OpenVPN client .conf file is in (/etc/openvpn or /etc/openvpn/client). This is an example that I used for NordVPN:

sudo vi /etc/openvpn/nordvpn.systemd.resolve.sh
# Make the script executable
sudo chmod 750 /etc/openvpn/nordvpn.systemd.resolve.sh

#!/bin/sh
set -e
systemd-resolve -i tun0 \
    --set-dns=103.86.96.100 --set-dns=103.86.99.100 \
    --set-domain=~. \
    --set-dnssec=off

Now, modify your actual OpenVPN client .conf file (mine is named nordvpn.conf, but yours may be different):

sudo vi /etc/openvpn/nordvpn.conf

...and add the following lines (using the name of the script you created above):

script-security 2
up /etc/openvpn/nordvpn.systemd.resolve.sh

Now restart your OpenVPN client. The service name may vary, depending on how you configured your OpenVPN client systemd service:

sudo systemctl restart openvpn@nordvpn

Confirm there were no errors during the restart:

sudo systemctl status openvpn@nordvpn
resolvectl status

The Global entry should reflect the values you set in /etc/systemd/resolved.conf and the tun interface should show the DNS values you added via your script:

Global
  Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
  Current DNS Server: 192.168.1.1
  DNS Servers: 192.168.1.1
  DNS Domain: my.company.com

Link # (tun0)
  Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
  Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  Current DNS Server: 103.86.96.100
  DNS Servers: 103.86.96.100 103.86.99.100
  DNS Domain: ~.

Now, when you're connected to your VPN, you'll use the VPN provider's DNS servers, but when you're not connected, you'll revert back to your LAN DNS server.
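On newer systemd versions, the systemd-resolve command is deprecated in favour of resolvectl; an equivalent up-hook script (a sketch under that assumption, reusing the same NordVPN DNS addresses as above) would be:

```shell
#!/bin/sh
# up-hook sketch: point systemd-resolved at the VPN's DNS for tun0 only
set -e
resolvectl dns tun0 103.86.96.100 103.86.99.100
resolvectl domain tun0 '~.'
resolvectl dnssec tun0 off
```

The '~.' routing domain is what makes resolved prefer the tun0 servers for all lookups while the VPN is up.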
How to configure a Debian-based build with systemd, such that when connected to my VPN provider via an OpenVPN client, the system uses the DNS servers of the VPN provider?
How to configure OpenVPN client to use different DNS servers when connected
It comes from this part of the source code, resolve/resolvectl.c:

r = bus_map_all_properties(bus,
                           "org.freedesktop.resolve1",
                           "/org/freedesktop/resolve1",
                           property_map,
                           BUS_MAP_BOOLEAN_AS_BOOL,
                           &error,
                           &m,
                           &global_info);
if (r < 0)
        return log_error_errno(r, "Failed to get global data: %s", bus_error_message(&error, r));

I checked in what cases systemd would report ENOBUFS, and you seemingly only get this error when you've filled up the pending send or receive buffers for D-Bus. The underlying error (ENOBUFS) occurs in a D-Bus internal library call. It's an internal buffer whose size you cannot increase, but it's undoubtedly an indication that D-Bus (or the underlying library) has stopped responding to requests, up until the internal buffer has filled up and it gives up adding more data to it, returning ENOBUFS instead. I would consider checking your systemd D-Bus service. Perhaps it's stopped, crashed or become stuck somehow.
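The quoted error string is the libc text for ENOBUFS, which can be confirmed from the errno tables (a quick check, assuming python3 is available):

```shell
# "No buffer space available" is strerror(ENOBUFS); on Linux ENOBUFS is 105
python3 -c 'import errno, os; print(errno.ENOBUFS, os.strerror(errno.ENOBUFS))'
```

So the message is not resolved-specific at all; it is simply the kernel/libc error bubbling up through the D-Bus call.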
I'm on Mint 20.3 (based on Ubuntu 20.04) and I keep having this weird problem where my network is all working fine and then suddenly, apparently out of the blue, it stops working and I can't access any websites. It's not my browser, because the same happens using curl. I found a thread that mentioned using systemd-resolve --status to get the current status of DNS. That command ran fine when the network was OK, and I saved a copy to compare with the output when the network was playing up. However, now that the network is playing up again, I've run the command again and its output is an error: Failed to get global data: No buffer space available. I've searched for a solution but nothing seems to mention systemd-resolve, so I'm at a loss as to what's going on. This answer gave me some hope, but I increased the buffer size to twice what's mentioned, and it had no effect. Anyone got any ideas, please?
"Failed to get global data: No buffer space available" on running "systemd-resolve --status"
Running a local resolver (in general, not just systemd-resolved) can provide a number of benefits; typically, the first one is caching, since most local resolvers will cache responses to queries. This means that repeated queries will be handled faster. Other benefits can be derived from the fact that the resolver is local, and therefore knows about your local environment (which external resolvers can't). In systemd-resolved's case, that covers:

- a number of synthetic records, including localhost, _gateway, and any entries in /etc/hosts (so that applications get consistent resolution, whether they query using DNS directly or gethostbyname etc.);
- hosts discoverable using link-local multicast name resolution (so that other systems on your LAN can be found through DNS);
- hosts discoverable using multicast DNS, with a .local domain suffix (e.g. printers).

It can also enforce policies which only make complete sense locally, e.g. DNSSEC. On top of that, systemd-resolved handles all name resolution services: DNS using its stub resolver, RFC-3493-style getaddrinfo/gethostbyname, and its own D-Bus interface, again ensuring a consistent experience for all clients, at least at the resolution level (it can't solve variations arising from the use of proxies, for example).
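The "consistent resolution" point can be seen with getent, which exercises the same NSS/getaddrinfo path that applications use; a name such as localhost is answered synthetically, without any network query:

```shell
# getent follows the glibc NSS lookup path (getaddrinfo/gethostbyname),
# so it sees the same synthetic records that applications do
getent hosts localhost
```

This prints a loopback address for localhost whether or not any external DNS server is reachable.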
I was reading about systemd-resolved and how it listens on 127.0.0.53 for DNS requests. What is the purpose of running this server and making queries to it when you could directly query a DNS server such as one from Google, Cloudflare, or your ISP instead?
What is the purpose of running a local DNS proxy server like systemd-resolved?
While digging into this strange situation, I got some inspiration from articles about permissions. On first reading, no relation to my problem. Anyway, I checked the file permissions on /etc/resolv.conf. Everything looked OK, except one particular point: /etc/resolv.conf was a symbolic link to a path under /opt/... Even though the permissions looked right, I tried to just copy the file instead of linking it. Then, hallelujah, it worked. Do not ask me why apt checks this setting differently from the rest of the OS; I have no clue. But that's the solution!
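The fix described can be sketched with scratch paths (hypothetical; substitute /etc/resolv.conf and the real /opt/... target): check whether the file is a symlink, then replace the link with a plain copy of its target, as this answer ends up doing.

```shell
# set up a symlinked "resolv.conf" in a scratch location
printf 'nameserver 203.0.113.53\n' > /tmp/real-resolv.conf
ln -sf /tmp/real-resolv.conf /tmp/resolv.conf

# a symlink, even with correct-looking permissions, is what tripped apt up
[ -L /tmp/resolv.conf ] && readlink -f /tmp/resolv.conf

# replace the link with a regular copy of its target
cp --remove-destination "$(readlink -f /tmp/resolv.conf)" /tmp/resolv.conf
[ -L /tmp/resolv.conf ] || echo "now a regular file"
```

`cp --remove-destination` (GNU coreutils) is needed so that cp replaces the link itself rather than writing through it.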
I have a really annoying and confusing issue with domain name resolution, affecting only the apt/apt-get utility. When I try apt update, it gives me (output translated from French):

root@myhostname:~# apt update
Err:1 http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian buster InRelease
  Temporary failure resolving 'ftp.igh.cnrs.fr'

Tests & analysis

Actually, DNS resolution is OK for all other tested resources.

✔ nslookup ftp.igh.cnrs.fr gives me

Non-authoritative answer:
ftp.igh.cnrs.fr  canonical name = ftp4.igh.cnrs.fr.
Name: ftp4.igh.cnrs.fr
Address: 193.50.6.155

✔ I can also try nslookup ftp.igh.cnrs.fr 8.8.8.8 with the same result

Note: on these first 2 tests, I get a strangely long delay before the response

✔ dig ftp.igh.cnrs.fr gives me the same result

✔ I can run wget or curl commands successfully with the same URL

root@myhostname:~# curl http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

wget http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian
--2021-06-17 12:56:26--  http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian
Resolving ftp.igh.cnrs.fr (ftp.igh.cnrs.fr)… 193.50.6.155
Connecting to ftp.igh.cnrs.fr (ftp.igh.cnrs.fr)|193.50.6.155|:80… connected.
HTTP request sent, awaiting response… 301 Moved Permanently
Location: http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian/ [following]
--2021-06-17 12:56:26--  http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian/
Reusing existing connection to ftp.igh.cnrs.fr:80.
HTTP request sent, awaiting response… 200 OK
Length: unspecified [text/html]
Saving to: 'raspbian'

✔ If I try any other network command such as ssh, it works too

Next step, I thought about APT repository availability itself (in case the error message was not reliable – you never know ;-)). Tries with other APT repositories in /etc/apt/sources.list give exactly the same bad result.

About global system performance, I wondered if there could be any relation between a slow system and the DNS resolution failure. I also killed all heavy processes (such as an open web browser). Here is the remaining top output:

    1 root      20   0   34824   8304   6496 S  1,3  0,9  0:26.26 systemd
 8596 root      20   0   10292   2876   2380 S  1,0  0,3  0:06.02 top
  120 root      20   0   32600  11760  10732 S  0,7  1,3  0:08.62 systemd-journal
  742 root      20   0    8144   3016   2836 S  0,7  0,3  0:00.55 check-vpn
10878 root      20   0   10192   2792   2436 R  0,7  0,3  0:00.14 top
   12 root      20   0       0      0      0 I  0,3  0,0  0:02.68 rcu_sched
   13 root      rt   0       0      0      0 S  0,3  0,0  0:00.02 migration/0
  110 root       0 -20       0      0      0 I  0,3  0,0  0:00.19 kworker/3:2H-kblockd
  299 root      20   0       0      0      0 S  0,3  0,0  0:02.30 brcmf_wdog/mmc1
  380 message+  20   0    6664   3588   3084 S  0,3  0,4  0:10.36 dbus-daemon
  739 vnstat    20   0    2440    432    372 S  0,3  0,0  0:00.43 vnstatd
  761 root      20   0       0      0      0 I  0,3  0,0  0:03.51 kworker/u8:3-brcmf_wq/mmc1:0001:1
10624 root      20   0       0      0      0 I  0,3  0,0  0:00.29 kworker/0:1-events
10757 root      20   0       0      0      0 I  0,3  0,0  0:00.03 kworker/2:0-events
    2 root      20   0       0      0      0 S  0,0  0,0  0:00.01 kthreadd
    3 root       0 -20       0      0      0 I  0,0  0,0  0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I  0,0  0,0  0:00.00 rcu_par_gp

I notice recurrent kworker processes in this list. Is that normal? But globally, the system does not seem too heavily loaded:

Tasks: 141 total, 1 running, 139 sleeping, 1 stopped, 0 zombie
%Cpu(s): 0,7 us, 1,0 sy, 0,0 ni, 98,1 id, 0,0 wa, 0,0 hi, 0,2 si, 0,0 st
MiB Mem:  872,7 total, 275,2 free, 123,7 used, 473,8 buff/cache
MiB Swap: 100,0 total, 100,0 free,   0,0 used. 679,3 avail Mem

Globally, I've found no other command-line or graphical tool where DNS resolution fails!

❓ Global actions

I've tried to restart some services, without any result:

systemctl restart resolved
systemctl restart systemd-resolved

⛔ Even when I restart networking (systemctl restart networking), it fails.
⛔ Even when I restart the system, it fails.

DNS config files

I've checked the following files:

/etc/resolv.conf
nameserver 8.8.8.8
option timeout:7

/etc/resolvconf/run/resolv.conf
nameserver 192.168.95.1
nameserver 127.0.0.53
search lan

/var/run/resolvconf/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.95.1
nameserver 127.0.0.53
search lan

/etc/network/interfaces
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

I've also checked with only one interface, eth0, with a static IP. Same result.

OS config details

I'm running Debian 10.9 (Raspbian distro). The issue seems to trigger randomly on half of my devices (I've got ~100 appliances).

Workaround: the only way that I've found to fix this issue is to reinstall the resolvconf service:

apt purge -y openresolv resolvconf
wget http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian/pool/main/r/resolvconf/resolvconf_1.79_all.deb
dpkg -i resolvconf_1.79_all.deb
systemctl restart systemd-resolved.service
systemctl enable systemd-resolved.service

But it is not reliable: I have to do this after every restart. Bad way... Any idea what's wrong on my system?
APT - temporary failure in name resolution error
In the systemd-resolved terminology, a domain prefixed by ~ in DNS Domain fields indicates "direct queries for this domain into system-wide default DNS server: don't use per-link DNS servers for this domain". The combination ~. is the same, but for the root DNS domain . which is the implied suffix of all DNS domains. But the problem seems to be that you don't have any system-wide default DNS servers: you have only per-link DNS servers configured. FallbackDNS= is only used if no other DNS server information is known, according to the resolved.conf(5) man page. Because both eno1 and enp3s0 have defined per-link DNS servers, FallbackDNS does not get used at all.

Your post says the network configuration for enp3s0 is

enp3s0: IP 192.168.200.101 (DNS/GW: 192.168.200.101)

Because it does not make sense for an interface to be its own gateway (although it's sometimes used as a workaround if there is no connectivity outside the segment and some configuration tool insists on having to configure a gateway), I assume that the DNS/GW part has a typo and you meant "DNS/GW: 192.168.200.1". That would match what the resolved status says. So if the 3G modem has the IP address 192.168.200.1, then the root cause could be that even if the 3G modem does not have internet connectivity, resolved can still contact the modem itself. So resolved might think 192.168.200.1 is still a valid DNS server even if the 3G link is down. And if the 3G modem's DNS server/proxy functionality is poorly written, it might not respond with SERVFAIL, or it might just plain let the request time out if it does not have a link: this might further mislead resolved into thinking the 3G modem is a valid DNS server when it actually does not have an active link to the internet at all.
A DNS proxy in a 3G modem like this might simplify the network configuration when you have just one outgoing internet connection, but it may complicate matters when you have an alternative connection. I'd suggest that you specify the static DNS in /etc/systemd/resolved.conf with just DNS= instead of FallbackDNS=. On the resolved.conf(5) man page, the description of DNS= says:

DNS=
    A space-separated list of IPv4 and IPv6 addresses to use as system DNS servers. DNS requests are sent to one of the listed DNS servers in parallel to suitable per-link DNS servers acquired from systemd-networkd.service(8) or set at runtime by external applications. For compatibility reasons, if this setting is not specified, the DNS servers listed in /etc/resolv.conf are used instead, if that file exists and any servers are configured in it. This setting defaults to the empty list.

So the DNS= setting in resolved.conf will never block the use of per-link DNS servers. To resolve your issue, I'd suggest the following: if 192.168.200.1 is the IP address of the 3G modem, it's just acting as a proxy for the 3G network operator's DNS server(s). Find out the real IP addresses of those servers (possibly by checking the internet-side network settings of the 3G modem itself, while it has a link). Then configure those using a DNS= line in /etc/systemd/resolved.conf. After this, DNS requests should go in parallel to the 3G network's DNS server(s) (because of the DNS= line in resolved.conf) and whichever per-link DNS server is applicable. Since you now have a system-wide DNS server configured, the 3G modem's built-in DNS service should only be used when querying for a name in the domain *.rig, as that is what the DNS Domain: setting for enp3s0 should mean.
If the 3G network link is down, the attempt to use the 3G network's DNS servers directly should now unquestionably fail (instead of producing possibly-ambiguous responses from the modem itself), giving resolved a clue that this DNS server is no longer available and it should consider other alternatives. As soon as the Connectivity Check has increased the priority of the other link, its per-link DNS servers should start to be used. If the 3G modem's administration web interface is accessible using a name like <something>.rig, it will still be available using that name, regardless of whether the 3G link is up or down, as long as the modem itself is accessible.
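Putting the suggestion into config form (a sketch; the addresses are documentation-range placeholders standing in for the 3G operator's real DNS servers):

```ini
# /etc/systemd/resolved.conf
[Resolve]
# system-wide servers, queried in parallel with any per-link servers
DNS=203.0.113.10 203.0.113.11
```

With this in place, a dead per-link server no longer leaves resolution stranded, because the system-wide servers are always in the candidate set.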
I have a box running Ubuntu 18.04.4. There are two LAN interfaces, enp3s0 and eno1: the former is connected to a 3G modem on one LAN network and the latter is connected to a satellite modem on another network. NetworkManager takes care of setting interface priority through the Connectivity Check setting in its configuration file (say, in case of no 3G coverage the eno1 interface metric is downgraded by adding 20000 to its default value, and the same applies to enp3s0 in case of no satellite – default priority is higher on eno1 with both connections active). DNS is handled by systemd-resolved and systemd-networkd. Both interfaces get their IP addresses and DNS from DHCP services run on the 3G and satellite modems.

enp3s0: IP 192.168.200.101 (DNS/GW: 192.168.200.101) – this address is assigned permanently via MAC address assignment
eno1: IP 192.168.55.xxx (DNS/GW: 192.168.55.1)

Somehow the DNS server address supplied to the first interface takes priority over the second one. E.g., if enp3s0 does not have an internet connection and internet traffic is routed to 192.168.55.1, name resolution is still attempted via 192.168.200.1 and obviously fails. I tried replacing the dynamically assigned DNS for eno1 with a static DNS added to /etc/systemd/network/eno1.network:

[Match]
Name=eno1

[Network]
DNS=8.8.8.8
DNS=8.8.4.4

But the system still prefers to use 192.168.200.1 for name resolution. I added a global fallback DNS to /etc/systemd/resolved.conf, but it does not seem to help, if it is even taken into account:

[Resolve]
#DNS=
FallbackDNS=8.8.4.4
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes

What I would like to achieve is to not use only 192.168.200.1 as the DNS server, but in case it does not work, to use the DNS server associated with the second interface – yet I am failing to find the way. My understanding is that it is somehow related to how the first DNS is configured: it got that ~ in its domain assignment, which I suspect makes it the main source of internet names, but maybe I am dreaming. Any advice on that is highly appreciated – basically, how can I get both DNS servers working? Below is my resolved status output, where I see that ~ (with Google DNS assigned to the 2nd iface):

Global
  DNSSEC NTA: 10.in-addr.arpa 16.172.in-addr.arpa 168.192.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa corp d.f.ip6.arpa home internal intranet lan local private test

Link 15 (tun0)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 14 (veth2e5ae19)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 12 (veth5b411fa)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 10 (br-b950c350c024)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 9 (docker0)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 8 (can0)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 7 (wlp1s0)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 6 (wwp0s20u6i10)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 5 (wwp0s20u6i8)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 4 (enp3s0)
  Current Scopes: DNS
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no
  DNS Servers: 192.168.200.1
  DNS Domain: ~. rig

Link 3 (enp2s0)
  Current Scopes: none
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no

Link 2 (eno1)
  Current Scopes: DNS
  LLMNR setting: yes
  MulticastDNS setting: no
  DNSSEC setting: no
  DNSSEC supported: no
  DNS Servers: 8.8.8.8 8.8.4.4
DNS server selection between two LAN interfaces
The answer turned out to be brutal and simple. The DHCP client was superseding the DNS entries via a supersede domain-name-servers ...,...; line in /etc/dhcp/dhclient.conf. I have no idea why it was there; I must have forgotten that I set it a while back. The command that saved me:

sudo find /etc -type f -print0 2>/dev/null | xargs -0 sudo grep "<hardcoded address>"

Yep, simple as that.
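The same search technique can be reproduced in a scratch directory (hypothetical paths and a documentation-range address):

```shell
# plant a superseding dhclient.conf line in a fake /etc tree ...
mkdir -p /tmp/etc-demo/dhcp
printf 'supersede domain-name-servers 203.0.113.53;\n' \
    > /tmp/etc-demo/dhcp/dhclient.conf

# ... and let find + grep -l name the file containing the hardcoded address
find /tmp/etc-demo -type f -print0 | xargs -0 grep -l "203.0.113.53"
```

`grep -l` prints only matching file names, which is usually what you want when hunting for where a stray address is configured.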
I'm using Kubuntu 18.04. When I'm in the office network, everything works fine, but when I connect to any other network (wired or wifi) I don't get the proper DNS settings – the old ones still show in the systemd-resolve --status output. When I add the proper DNS address via systemd-resolve --set-dns=10.0.0.1 --interface=eno1, the problem is solved temporarily and I can resolve hostnames, but after a while it stops working and I have to run the --set-dns again. How do I solve this?
DNS is not applied with systemd-resolved [closed]
There is a justification posted in 2015: https://www.mail-archive.com/systemd-devel@lists.freedesktop.org/msg31563.html

iiuc, the concern was that multi-level domain names (i.e. those with at least one dot) could be spoofed by controlling the search suffix. But for names with at least two levels glibc only uses the search list as a fallback.

    Well, sure, being able to influence things at the beginning of the search logic is more problematic than influencing things at the end of the search logic, but I still think it's problematic, since it still allows you to insert "home.foobar.com" into a domain "foobar.com" that doesn't have "home.foobar.com" itself but only "www.bar.com"... Sure, classic (non-DNSSEC) DNS is not ever going to be fully secure, but I still believe we should default to the safer options, and allow the others. Altering the search paths is inherently something that makes no sense on public networks; it only makes sense if you know your network well, and trust it to some level. Hence opt-in sounds like the better option to me.
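The fallback behaviour the quote refers to can be sketched as a toy qualification function (illustrative only, assuming the common ndots:1 default): single-label names go through the search list first, while names already containing a dot are tried as-is first and only fall back to the search list.

```shell
# toy sketch of resolver candidate order with a search suffix
qualify() {
    name=$1 suffix=$2
    case $name in
    *.*) # at least two labels: tried as-is first, suffix only as fallback
        echo "$name"
        echo "$name.$suffix"
        ;;
    *)   # single label: the search suffix is consulted first
        echo "$name.$suffix"
        echo "$name"
        ;;
    esac
}

qualify www foobar.com           # a DHCP-supplied suffix decides the first candidate
qualify home.foobar.com lan      # tried as-is first; suffix only on failure
```

This is why a hostile DHCP search domain is most dangerous for single-label names: it controls the very first candidate tried.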
UseDomains=
    Takes a boolean argument, or the special value "route". When true, the domain name received from the DHCP server will be used as DNS search domain over this link, similar to the effect of the Domains= setting. If set to "route", the domain name received from the DHCP server will be used for routing DNS queries only, but not for searching, similar to the effect of the Domains= setting when the argument is prefixed with "~". Defaults to false.

    It is recommended to enable this option only on trusted networks, as setting this affects resolution of all host names, in particular of single-label names. It is generally safer to use the supplied domain only as routing domain, rather than as search domain, in order to not have it affect local resolution of single-label names.

Compare:

LLMNR=
    A boolean or "resolve". When true, enables Link-Local Multicast Name Resolution on the link. When set to "resolve", only resolution is enabled, but not host registration and announcement. Defaults to true.

But LLMNR also resolves single-label names. Can anyone explain this?

Source: https://www.freedesktop.org/software/systemd/man/systemd.network.html as of systemd version 239.
Why does systemd-networkd consider UseDomains= (domain search list from DHCP) less safe than LLMNR?
It took some time to have the correct inspiration that maybe there was something about the MAC addresses, so I added this safeguard expectation, which then immediately triggered:

mac2 := Successful(netlink.LinkByIndex(macvlan2.Attrs().Index)).
    Attrs().HardwareAddr

// ... something going on here

mac2now := Successful(netlink.LinkByIndex(macvlan2.Attrs().Index)).
    Attrs().HardwareAddr
Expect(mac2now).To(Equal(mac2))

Looking through the few *.link configurations, especially in /usr/lib/systemd/network/, there's a catch-all default configuration in 99-default.link:

[Match]
OriginalName=*

[Link]
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path
MACAddressPolicy=persistent

So what does MACAddressPolicy=persistent actually do? The networkd link documentation explains:

    If the hardware has a persistent MAC address, as most hardware should, and if it is used by the kernel, nothing is done. Otherwise, a new MAC address is generated which is guaranteed to be the same on every boot for the given machine and the given device, but which is otherwise random.

So, networkd (or is it actually udevd?) is replacing the original MAC address of any new MACVLAN netdev with another one. As this takes some time from when the MACVLAN comes up, the unit test has already queried the original MAC addresses, and by the time the test starts sending Ethernet packets, the second MACVLAN's MAC address has changed, so the original destination MAC is gone, with the test unaware of this. A fix is to at least selectively revert MACAddressPolicy= to none, for instance:

# /etc/systemd/network/00-notwork.link
[Match]
Kind=macvlan
OriginalName=mcvl-*

[Link]
Description="keep systemd's sticky fingers off test netdevs"
MACAddressPolicy=none
While working on some unit test code that basically sends raw Ethernet packets from one MACVLAN to another MACVLAN (virtual) network interface, I noticed that most of the time the test code fails to receive any of the packets sent from the first to the second MACVLAN. Using Wireshark I could see that the packets leave the first MACVLAN, but never reach the second MACVLAN or the listening raw socket. Only in a few odd instances do any packets go through at all – without any change in the test code. The host system is Ubuntu 22.10 (kernel 5.19.0-38-generic) with systemd and NetworkManager. Only after some time did systemd-resolved, systemd-networkd and NetworkManager arouse my suspicion. By running the test in its own isolated transient network namespace I could successfully establish that, out of the reach of these host services, the test always correctly succeeds. Suspecting NetworkManager – even if nmcli device status tells me that the virtual dummy and MACVLAN interfaces are "unmanaged" – I found https://developer-old.gnome.org/NetworkManager/stable/NetworkManager.conf.html and then added wildcards for unmanaged devices:

[keyfile]
unmanaged-devices=interface-name:docker*;interface-name:br-*;interface-name:veth*;interface-name:mcvl-*;interface-name:dumy-*

Unfortunately, this didn't improve the situation and the test was still failing on almost every run (even after restarting NetworkManager multiple times and making sure that the config file is correct). In Wireshark I noticed mDNS broadcasts on the MACVLAN network interfaces where there shouldn't be any. How can I tell both systemd's networkd as well as resolved to keep their dirty paws off any virtual network interface, especially dummy, MACVLAN and VETH network interfaces? I searched for configuration options but couldn't find anything suitable. Any idea how to keep systemd's components off of things they should never touch in the first place?
The following is a Ginkgo/Gomega-based unit test that reproduces the situation.

package pingpong

import (
    "bytes"
    "context"
    "fmt"
    "net"
    "os"
    "strings"
    "testing"
    "time"

    "github.com/mdlayher/ethernet"
    "github.com/mdlayher/packet"
    "github.com/thediveo/notwork/dummy"
    "github.com/thediveo/notwork/link"
    "github.com/thediveo/notwork/macvlan"
    "github.com/thediveo/notwork/netns"
    "github.com/vishvananda/netlink"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
    . "github.com/thediveo/success"
)

func TestPingPong(t *testing.T) {
    RegisterFailHandler(Fail)
    RunSpecs(t, "pingpong package")
}

const (
    experimentalEthType = 0xffee // something (hopefully) unused
    pings               = 10
    pingInterval        = 100 * time.Millisecond
)

var payload = bytes.Repeat([]byte("HELO"), 100)

var _ = Describe("pingponging netdevs", Ordered, func() {

    BeforeAll(func() {
        if os.Geteuid() != 0 {
            Skip("needs root")
        }
    })

    DescribeTable("virtual network pingpong",
        func(ctx context.Context, dropall bool) {
            // By("creating a new network namespace")
            // defer netns.EnterTransientNetns()()

            By("creating two MACVLANs connected via a dummy network interface")
            dummy := dummy.NewTransientUp()
            macvlan1 := macvlan.NewTransient(dummy)
            netlink.LinkSetUp(macvlan1)
            macvlan2 := macvlan.NewTransient(dummy)
            netlink.LinkSetUp(macvlan2)
            macvlan1 = Successful(netlink.LinkByIndex(macvlan1.Attrs().Index))
            mac1 := macvlan1.Attrs().HardwareAddr
            macvlan2 = Successful(netlink.LinkByIndex(macvlan2.Attrs().Index))
            mac2 := macvlan2.Attrs().HardwareAddr
            Expect(mac1).NotTo(Equal(mac2))

            By(fmt.Sprintf("waiting for MACVLANs (%s-%s, %s-%s) to become operationally UP",
                macvlan1.Attrs().Name, macvlan1.Attrs().HardwareAddr.String(),
                macvlan2.Attrs().Name, macvlan2.Attrs().HardwareAddr.String()))
            link.EnsureUp(macvlan1)
            link.EnsureUp(macvlan2)

            By("opening data-link layer sockets")
            txconn := Successful(packet.Listen(
                &net.Interface{Index: macvlan1.Attrs().Index},
                packet.Raw, experimentalEthType, nil))
            defer txconn.Close()
            rxconn := Successful(packet.Listen(
                &net.Interface{Index: macvlan2.Attrs().Index},
                packet.Raw, experimentalEthType, nil))
            defer rxconn.Close()

            ctx, cancel := context.WithCancel(ctx)
            defer cancel()

            By("sending data-link layer PDUs")
            go func() {
                defer cancel()
                defer GinkgoRecover()
                f := ethernet.Frame{
                    Destination: mac2,
                    Source:      mac1,
                    EtherType:   experimentalEthType,
                    Payload:     payload,
                }
                frame := Successful(f.MarshalBinary())
                toAddr := packet.Addr{HardwareAddr: mac2}
                for i := 0; i < pings; i++ {
                    By("sending something...")
                    _, err := txconn.WriteTo(frame, &toAddr)
                    Expect(err).NotTo(HaveOccurred())
                    select {
                    case <-ctx.Done():
                        return
                    case <-time.After(pingInterval):
                    }
                }
            }()

            By("receiving data-link layer PDUs (or not)")
            received := 0
        receive:
            for {
                buffer := make([]byte, 1500)
                rxconn.SetReadDeadline(time.Now().Add(1 * time.Second))
                n, fromAddr, err := rxconn.ReadFrom(buffer)
                select {
                case <-ctx.Done():
                    break receive
                default:
                }
                if err != nil && dropall && strings.Contains(err.Error(), "i/o timeout") {
                    continue
                }
                Expect(err).NotTo(HaveOccurred())
                By("...received something")
                f := ethernet.Frame{}
                Expect(f.UnmarshalBinary(buffer[:n])).To(Succeed())
                Expect(f.EtherType).To(Equal(ethernet.EtherType(experimentalEthType)))
                Expect(fromAddr.(*packet.Addr).HardwareAddr).To(Equal(mac1))
                Expect(len(f.Payload)).To(BeNumerically(">=", len(payload)))
                Expect(f.Payload[:len(payload)]).To(Equal(payload))
                received++
            }
            if !dropall {
                Expect(received).To(BeNumerically(">=", (2*pings)/3), "too much packet loss")
            } else {
                Expect(received).To(BeZero())
            }
        },
        Entry("receives passed-on packets", false),
    )
})
systemd networkd and/or resolved blocking receiving (raw) packets on virtual network interface?
This was a bug in systemd which has since been fixed.
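For completeness: independent of that fix, systemd-networkd can be told to leave specific links alone via a .network file with Unmanaged=yes. A sketch, where the file name and the glob patterns (chosen to match the transient interface names used in the question's test) are assumptions:

```ini
# /etc/systemd/network/90-ignore-virtual.network (sketch)
[Match]
Name=mcvl-* dumy-* veth*

[Link]
Unmanaged=yes
```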
Lately systemd-resolved fails to resolve most domains with the following error: $ resolvectl query slashdot.org slashdot.org: resolve call failed: 'slashdot.org' does not have any RR of the requested typeI have currently reduced my config to the following: [Resolve] DNS=1.1.1.1$ resolvectl status Global Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=allow-downgrade/unsupported resolv.conf mode: foreign Current DNS Server: 1.1.1.1 DNS Servers: 1.1.1.1 Fallback DNS Servers: 1.1.1.1 8.8.8.8 1.0.0.1 8.8.4.4 2606:4700:4700::1111 2001:4860:4860::8888 2606:4700:4700::1001 2001:4860:4860::8844I can resolve names through the same dns service via dig: $ dig @1.1.1.1 slashdot.org; <<>> DiG 9.16.11 <<>> @1.1.1.1 slashdot.org ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57735 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 1232 ;; QUESTION SECTION: ;slashdot.org. IN A;; ANSWER SECTION: slashdot.org. 262 IN A 216.105.38.15;; Query time: 20 msec ;; SERVER: 1.1.1.1#53(1.1.1.1) ;; WHEN: Wed Mar 31 10:33:43 CEST 2021 ;; MSG SIZE rcvd: 57Somehow it still succeeds in resolving a few domains, even after I resolvectl flush-caches: $ resolvectl query stackexchange.com stackexchange.com: 151.101.65.69 -- link: enp0s25 151.101.1.69 -- link: enp0s25 151.101.193.69 -- link: enp0s25-- Information acquired via protocol DNS in 34.2ms. -- Data is authenticated: noHow can I solve this? UPDATE I have taken a look at the traffic (1st: resolvectl, 2nd dig): 20:29:53.540625 IP (tos 0x0, ttl 64, id 40410, offset 0, flags [DF], proto UDP (17), length 58) 192.168.178.39.35819 > one.one.one.one.domain: [bad udp cksum 0x7509 -> 0x3c24!] 10838+% A? slashdot.org. (30) 20:29:53.558319 IP (tos 0x0, ttl 58, id 46858, offset 0, flags [DF], proto UDP (17), length 74) one.one.one.one.domain > 192.168.178.39.35819: [udp sum ok] 10838 q: A? slashdot.org. 1/0/0 slashdot.org. 
A 216.105.38.15 (46) -- 20:29:55.350287 IP (tos 0x0, ttl 64, id 40434, offset 0, flags [none], proto UDP (17), length 81) 192.168.178.39.59104 > one.one.one.one.domain: [bad udp cksum 0x7520 -> 0x93a9!] 57704+ [1au] A? slashdot.org. ar: . OPT UDPsize=4096 [COOKIE e0f529ee021d164e] (53) 20:29:55.367233 IP (tos 0x0, ttl 58, id 57041, offset 0, flags [DF], proto UDP (17), length 85) one.one.one.one.domain > 192.168.178.39.59104: [udp sum ok] 57704 q: A? slashdot.org. 1/0/1 slashdot.org. A 216.105.38.15 ar: . OPT UDPsize=1232 (57)resolvectl sets the CD bit, dig sets AD (which is cleared by the server). Other than that, they receive basically the same response. It's the same with the query about stackexchange, which succeeds. (Although I am surprised that I could still read the traffic after I had re-enabled DNS over TLS, the pcap doesn't show any tls records.)
resolvectl query fails: 'domain' does not have any RR of the requested type
A single-label name is first handed to LLMNR. If LLMNR resolves it, that is the end of it. If not, each of the words in the Domains= list:

Domains=domainA.example domainB.example ~example

is appended to the single label in turn, and a resolution attempt is made with the result. If any such attempt resolves, that is the end; if not, the next word is tried. To resolve, different name-resolution resources might be used: Avahi, resolved, LLMNR, or (usually last) the configured DNS servers.
In the networkd man page, the search domains are used to handle single-label names: The domains without the prefix are called "search domains" and are first used as search suffixes for extending single-label host names (host names containing no dots) to become fully qualified domain names (FQDNs). If a single-label host name is resolved on this interface, each of the specified search domains are appended to it in turn, converting it into a fully qualified domain name, until one of them may be successfully resolved. Both "search" and "routing-only" domains are used for routing of DNS queries: look-ups for host names ending in those domains (hence also single label names, if any "search domains" are listed), are routed to the DNS servers configured for this interface. I wonder whether a single-label name lookup request is handled by LLMNR, by the specified DNS servers, or both?
do "search domain" in networkd configuration and LLMNR conflict?
You may be able to achieve split DNS (conditional forwarding) with the following configuration (assuming ppp0 is your VPN interface and enp6s0 your regular LAN):

resolvectl dns ppp0 corp.ip.add.ress
resolvectl domain ppp0 ~corp.domain.name
resolvectl default-route ppp0 false
resolvectl default-route enp6s0 true

This will use the default DNS for all queries except for those that have a domain ending with corp.domain.name. For those queries, it will use corp.ip.add.ress. Also note the default route has to be corrected, as connecting to the VPN might result in it updating the default route.
I'm connecting to a corporate VPN via network-manager-l2tp with a pre-shared key and user+pass. I'm getting a correct DNS server IP automatically, which resolves the company's URLs correctly. However, public internet isn't resolved (I tested with www.google.com all the time), but this depends on the perspective: I can't get systemd-resolved to resolve from 2 DNS servers at the same time (1.1.1.1 and the corporate DNS). It's strictly either-or and I've tried a lot of different configs...

Question: How do I configure systemd-resolved to use both a corporate VPN's DNS and the regular DNS servers at the same time? I don't care if it's 'conditional forwarding' based on domain or using the 2nd DNS after the 1st fails. I couldn't get either approach to work. My guess is this has something to do with L2TP, but I can't find any solutions that apply to my case.

I use: NetworkManager 1.30.0, systemd-resolved (systemd 247.3) and openresolv (instead of the old resolvconf) on Pop!_OS. Both services are up and running.

resolv.conf -> /run/systemd/resolve/stub-resolv.conf

# This file is managed by man:systemd-resolved(8). Do not edit.
[...]
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box

/etc/systemd/resolved.conf

[Resolve]
FallbackDNS=1.1.1.1 corp.ip.add.ress

resolvectl status output after connecting to VPN

Global
           Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
    resolv.conf mode: stub
Fallback DNS Servers: 1.1.1.1
                      corp.ip.add.ress

Link 2 (enp6s0)
    Current Scopes: DNS
         Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.178.1
       DNS Servers: 192.168.178.1
        DNS Domain: fritz.box

Link 3 (ip_vti0)
    Current Scopes: none
         Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 23 (ppp0)
    Current Scopes: DNS
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: corp.ip.add.ress
       DNS Servers: 1.1.1.1
                    corp.ip.add.ress

I've tried a lot of different things, but what you see above is a good starting point to come up with a robust, final solution.
systemd-resolved+VPN: 2nd DNS server ignored (L2TP)
At the risk of being downvoted, if you don't want systemd to access network interfaces, don't use systemd. Over the years, we have seen the systemd-suite absorbing more and more functionality, and some of these require network access. As you said it yourself: it's an octopus. And the inter-dependencies of the different processes of the systemd-suite are often surprising (at least: to me). That means that if you don't go with the flow of the systemd-suite, after a few upgrades, you will need to retest and reconfigure everything. For a real minimalistic gateway function, I would look at a busybox based system.
How would one quarantine all systemd processes from ever using ANY network interface or resources at the OS level?

Overview

Minimalistically, I'd like to set up a Linux gateway. A simple NAT, forwarding, home-based LAN gateway ... WITHOUT systemd or systemd-networkd accessing network-based interfaces.

Rationale

There is no need for systemd-networkd or NetworkManager (or even systemd) to access any network interface in this minimalist mode.

What Was Done?

I use Bind9, ISC DHCP, a local /etc/resolv.conf, and nftables for all my networking needs. I merely use systemd for the bootup sequence (as it was originally designed to do). I disabled everything network-related for systemd, except the systemd package itself (inspired by Yoon's Blog):

systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
systemctl stop systemd-networkd.socket
systemctl stop systemd-networkd
systemctl stop networkd-dispatcher
systemctl stop systemd-networkd-wait-online
systemctl disable systemd-networkd.socket
systemctl disable systemd-networkd
systemctl disable networkd-dispatcher
systemctl disable systemd-networkd-wait-online
apt-get remove systemd-resolvconfd
apt-get remove systemd-networkd
apt-get remove openresolv
apt-get purge netplan.io
rm /etc/dhcp/dhclient-enter-hooks.d/resolved  # ISC now updates resolv.conf

and installed

apt-get install ifupdown

Issues

My problem is that an nftables firewall really cannot block traffic on a per-process basis. How would one quarantine all systemd processes from ever using ANY network interface at the OS level? Perhaps a group resource limiter of some kind?

Note

Please, no need for a systemd flamewar. This is an effort in retaining systemd's original design: fastest startup.
How do you block network acceess to systemd?
As explained in man pgrep,The process name used for matching is limited to the 15 characters present in the output of /proc/pid/stat. Use the -f option to match against the complete command line, /proc/pid/cmdline.“systemd-resolved” has 16 characters, so it falls foul of this limit. If you run pgrep -f systemd-resolved you’ll find the process.
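Since systemd-resolved may not be running wherever you want to try this out, here is a sketch that reproduces the 15-character limit with an arbitrary long-named process instead (the /tmp path and the process name are invented for the demonstration):

```shell
#!/bin/sh
# The kernel truncates a process's comm name to 15 characters, so a
# 24-character name is invisible to a plain pgrep, while pgrep -f still
# finds it via /proc/<pid>/cmdline.
name='a-very-long-process''-name'   # split so our own cmdline never matches
cp "$(command -v sleep)" "/tmp/$name"
"/tmp/$name" 10 &
pid=$!
by_name=$(pgrep "$name" | wc -l)        # 0: pattern is longer than comm
by_cmdline=$(pgrep -f "$name" | wc -l)  # 1: matched against full cmdline
echo "by name: $by_name, by full cmdline: $by_cmdline"
kill "$pid"
rm -f "/tmp/$name"
```

The string split in the name= assignment is only there so that the pattern never matches the shell running the script itself.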
#!/usr/bin/env bash
echo "pgrep not finding systemd-resolved has bitten many times."
if [ -z "$(pgrep systemd-resolved)" ]; then
    echo -e "systemd-resolved not found by pgrep, trying another way.\n"
    ps aux | egrep -i '(DNS|HOST|DH|RESOLV|systemd-resolved)' | egrep -v 'grep -E'
fi

systemd-resolved not found by pgrep, trying another way:

systemd+   914  0.0  0.0  26196  4048 ?      Ss   Nov12   0:02 /lib/systemd/systemd-resolved
rjt      73300  0.0  0.0   9228  2160 pts/2  S+   23:02   0:00 grep -E --color=auto -i (DNS|HOST|DH|RESOLV|systemd-resolved)

I work on many different systems of various ages. I need to know the backend name resolution system and what is covered by the name resolver, so I often use pgrep to find all DNS-related processes. Is there a string length limit in pgrep?
Why does pgrep not find systemd-resolved?
As per the Ubuntu security notice, the issue only affects systemd-resolved (this can be confirmed by looking at the patch fixing the issue). So a system which isn’t running systemd-resolved isn’t exposed, and stopping systemd-resolved is sufficient to prevent the attack. This is the reason why the Debian tracker mentions “[stretch] - systemd (Minor issue, systemd-resolved not enabled by default)”, meaning that while Debian 9 does include the affected code, it’s a minor issue and won’t result in a security advisory. You can receive notification of the fix in Debian 9 or later by subscribing to the corresponding Debian bug.
A new vulnerability has been discovered in the systemd package, called Evil DNS, allowing remote control of a Linux machine. According to security-tracker.debian, Debian Stretch, Buster and Sid are vulnerable. (It also affects various other Linux distros using systemd.) System check: on Debian Stretch, my systemd --version reports systemd 232 before and after the system update. The systemctl status systemd-resolved.service command says that systemd-resolved is disabled. How can one understand and mitigate the Evil DNS remote attack on Linux systems? Is stopping the systemd-resolved service sufficient to prevent the Evil DNS attack?
How to understand and mitigate the Evil DNS remote attack under linux systems?
nslookup will query for both A and AAAA records, so if the A query returns immediately and the AAAA never returns, then nslookup will print an immediate response, then timeout. Here's a table I made of how the dnsmasq server on 128.8.8.254 answered various types of queries: dig @128.8.8.254 A focal-250 immediate success (A record) dig @128.8.8.254 A focal-250.test immediate success (A record) dig @128.8.8.254 AAAA focal-250 immediate SERVFAIL dig @128.8.8.254 AAAA focal-250.test 15 second timeout, no responseWhat the output from nslookup meant is that it got the A record response (the first six lines), then timed out waiting for AAAA record. One way I found to "fix" the problem is to tell dnsmasq that it's authoritative for the test domain by putting auth-zone=test in its config file. Now it behaves like this: dig @128.8.8.254 A focal-250 immediate success (A record) dig @128.8.8.254 A focal-250.test immediate success (A record) dig @128.8.8.254 AAAA focal-250 immediate SERVFAIL dig @128.8.8.254 AAAA focal-250.test immediate NOERROR (no records)nslookup and ping now respond immediately. I've also found it useful to make dnsmasq "authoritative" for in-addr.arpa, for the same reason: so it returns an immediate NOERROR instead of timing out. The systemd-resolved service seems to use the answer from the server that responded with a record instead of the server that responded with nothing: ubuntu@ca:~$ dig +short @128.8.8.254 -x 18.165.83.71 ubuntu@ca:~$ dig +short @192.168.1.1 -x 18.165.83.71 server-18-165-83-71.iad55.r.cloudfront.net. ubuntu@ca:~$ dig +short @127.0.0.53 -x 18.165.83.71 server-18-165-83-71.iad55.r.cloudfront.net.
Here's what my nslookup is doing: ubuntu@ca:~$ time nslookup focal-250 Server: 127.0.0.53 Address: 127.0.0.53#53Non-authoritative answer: Name: focal-250.test Address: 128.8.8.187 ;; connection timed out; no servers could be reachedreal 0m15.024s user 0m0.005s sys 0m0.018sThe first six lines (i.e, the correct response) printed instantly, then it waited 15 seconds to "time out". Something like ping does the same thing: stalls for 15 seconds, then starts working. It's an Ubuntu 20.04 LTS system running systemd-resolved. The only thing weird about it is that it has dnsmasq listening for name service on one of its interfaces, and that interface's address is configured as its own nameserver: ubuntu@ca:~$ resolvectl Global LLMNR setting: no MulticastDNS setting: no DNSOverTLS setting: no DNSSEC setting: no DNSSEC supported: no DNSSEC NTA: 10.in-addr.arpa 16.172.in-addr.arpa 168.192.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa corp d.f.ip6.arpa home internal intranet lan local private test Link 3 (ens5) Current Scopes: DNS DefaultRoute setting: yes LLMNR setting: yes MulticastDNS setting: no DNSOverTLS setting: no DNSSEC setting: no DNSSEC supported: no Current DNS Server: 128.8.8.254 DNS Servers: 128.8.8.254 DNS Domain: test Link 2 (ens4) Current Scopes: DNS DefaultRoute setting: yes LLMNR setting: yes MulticastDNS setting: no DNSOverTLS setting: no DNSSEC setting: no DNSSEC supported: no Current DNS Server: 192.168.1.1 DNS Servers: 192.168.1.1 DNS Domain: freesoft.orgubuntu@ca:~$ ip -br addr lo UNKNOWN 127.0.0.1/8 ::1/128 ens4 UP 192.168.4.183/24 fe80::e2c:d2ff:fe67:0/64 ens5 UP 128.8.8.254/24 fe80::e2c:d2ff:fe67:1/64ubuntu@ca:~$ tail -5 /etc/dnsmasq.conf listen-address=128.8.8.254 bind-interfaces 
dhcp-range=128.8.8.101,128.8.8.200,12h dhcp-authoritative domain=testubuntu@ca:~$ tail -4 /etc/resolv.conf nameserver 127.0.0.53 options edns0 trust-ad search test freesoft.orgIt's doing what I want, which is to answer queries for the ".test" domain, but I don't understand why it stalls for 15 seconds after getting the answer.
Why would nslookup return a response, then timeout?
Try adding resolve to your /etc/nsswitch.conf before dns entry, so hosts line will look like: hosts: files mymachines resolve [!UNAVAIL=return] dns myhostname
I am working with an embedded Linux target (ARM) and have the following problem: when /etc/resolv.conf is updated while a process is running (e.g. a C program using gethostbyname()), the running process does not pick up the new nameserver entry until it is restarted. The DNS entry has been made with systemd-resolve -i eth0 --set-dns="ipaddr". If I try the same on my desktop Linux, any change to /etc/resolv.conf is used immediately by running processes, without a restart. How can I see what is happening (or not happening) in the background when /etc/resolv.conf is being modified? What service could be missing on the embedded target? Why does it work after a restart of the application?
Update of /etc/resolv.conf needs restart of application
I suggest you check the file /etc/nsswitch.conf. It is the Name Service Switch configuration file. The hosts parameter in this file shows you the sources, and their order, that the system consults when resolving a host name.
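For reference, on a system wired up for systemd-resolved the hosts line typically looks something like this (the exact set and order of sources varies by distribution):

```
hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns
```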
With my laptop running an arch-linux I can not access some websites when I am at home like for example:https://www.wikipedia.org/ https://wiki.archlinux.org/ https://www.leo.org/But I can access those websites from university as well as from my parents' place. There are some websites that I can access from home like for example:https://unix.stackexchange.com/ https://www.startpage.com/ https://www.youtube.com/This is independent of whether I use Qutebrowser or Firefox. An old debian system installed as dual boot on the same laptop is able to access all of those websites from home. When I am trying to access a not working website Firefox says:Hmm. We’re having trouble finding that site. We can’t connect to the server at www.leo.org. If that address is correct, here are three other things you can try:Try again later.I have done that plenty of times.Check your network connection.I am connected and I have an ip address.If you are connected but behind a firewall, check that Firefox has permission to access the Web.I am able to access other webpages and why should a firewall forbid wikipedia but allow youtube? I am able to ping those webpages from my arch system via ip but not via url. Therefore I am assuming a DNS problem. According to dig my DNS server is my router, which is the same for another debian system in the same network where everything is running fine. I am using systemd-resolved. I am lost how to debug this further. Why does it work elsewhere but not at home? Why does it work on other systems in the same network but not on my arch linux?
Some web sites can not be reached [closed]
Changing the /etc/resolv.conf link to /run/systemd/resolve/resolv.conf and listing the DNS server in /etc/systemd/network/foo.network and in /etc/systemd/resolved.conf made it so that the global and per-NIC settings listed by resolvectl point to 8.8.8.8 and successfully resolve.
The issue

Changing the DNS option in resolv.conf from the VM's gateway to 8.8.8.8 will result in no DNS resolution. resolvectl query google.com succeeds if I set DNS to my gateway.

The set-up

/etc/resolv.conf is linked to stub-resolv.conf

systemd-networkd is configured as follows:

[Match]
Name=enp0s3

[Network]
Address=192.168.0.222/24
Gateway=192.168.0.1
DNS=8.8.8.8

/etc/systemd/resolved.conf has not been modified

Additional troubleshooting

I tried linking /etc/resolv.conf to /run/systemd/resolve/resolv.conf instead and only listing 8.8.8.8 as nameserver there. But systemd-resolved overwrites /run/systemd/resolve/resolv.conf
Bypassing DHCP DNS with systemd-resolved
It's a typical behavior of a DNS resolver that with multiple DNS servers defined, the servers are used round-robin. There is no concept of "primary" or "secondary" server in the resolver configuration - each of the configured servers is treated as equivalent.
Maybe this is a dumb question, but should the Linux DNS resolver prefer the primary DNS server over the secondary and tertiary ones, or is it free to use any of them? I have a pretty standard Ubuntu 20.04 LTS image with resolver configuration as per Microsoft's recommendation

$ cat /etc/resolv.conf
options timeout:1 attempts:5
nameserver 127.0.0.53
search reddog.microsoft.com

And I'm quite often in a situation where the secondary or tertiary DNS server is used when the primary is available

$ systemd-resolve --status |tail -5
  Current DNS Server: Z.Z.Z.Z
         DNS Servers: X.X.X.X
                      Y.Y.Y.Y
                      Z.Z.Z.Z
          DNS Domain: reddog.microsoft.com

Is that expected? Shouldn't the resolver prefer the primary DNS server if it's available? Any pointer to resolver documentation would be welcome. Or maybe this is not even a matter of the specific resolver but part of an RFC requirement.
Should Linux resolver fail back to primary DNS server when it's available?
Stealing Andy's idea and making it a function so it's easier to use:

# print the header (the first line of input)
# and then run the specified command on the body (the rest of the input)
# use it in a pipeline, e.g. ps | body grep somepattern
body() {
    IFS= read -r header
    printf '%s\n' "$header"
    "$@"
}

Now I can do:

$ ps -o pid,comm | body sort -k2
  PID COMMAND
24759 bash
31276 bash
31032 less
31177 less
31020 man
31167 man
...

$ ps -o pid,comm | body grep less
  PID COMMAND
31032 less
31177 less
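If defining a function feels like overkill, the same idea fits into a single awk invocation: print line 1 as-is, pipe everything else through sort (the sample input below is invented; swap sort -k2 for whatever ordering you need):

```shell
printf '  PID COMMAND\n31032 less\n24759 bash\n31020 man\n' |
awk 'NR == 1 { print; fflush(); next }  # header straight through, flushed
     { print | "sort -k2" }'            # body lines go to sort; awk closes
                                        # the pipe at exit, emitting them last
```

The fflush() guards against awk's stdout buffering reordering the header behind sort's output when the result is piped onward.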
I am getting output from a program that first produces one line that is a bunch of column headers, and then a bunch of lines of data. I want to cut various columns of this output and view it sorted according to various columns. Without the headers, the cutting and sorting is easily accomplished via the -k option to sort along with cut or awk to view a subset of the columns. However, this method of sorting mixes the column headers in with the rest of the lines of output. Is there an easy way to keep the headers at the top?
sort but keep header line at the top
You just need the column command, and tell it to use tabs to separate columns

paste file1 file2 | column -s $'\t' -t

To address the "empty cell" controversy, we just need the -n option to column:

$ paste <(echo foo; echo; echo barbarbar) <(seq 3) | column -s $'\t' -t
foo        1
2
barbarbar  3

$ paste <(echo foo; echo; echo barbarbar) <(seq 3) | column -s $'\t' -tn
foo        1
           2
barbarbar  3

My column man page indicates -n is a "Debian GNU/Linux extension." My Fedora system does not exhibit the empty cell problem: it appears to be derived from BSD and the man page says "Version 2.23 changed the -s option to be non-greedy"
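A quick throwaway check (the file names and contents are invented) that the second column really lines up; the alignment is easiest to verify by comparing the starting offsets of the second-column entries:

```shell
printf 'Languages\nRegular\n'        > /tmp/col-demo-1
printf 'Minimal automaton\nFinite\n' > /tmp/col-demo-2
# "$(printf '\t')" is a portable stand-in for bash's $'\t'
paste /tmp/col-demo-1 /tmp/col-demo-2 | column -s "$(printf '\t')" -t
rm -f /tmp/col-demo-1 /tmp/col-demo-2
```

Both "Minimal automaton" and "Finite" should start in the same screen column, just past the widest entry of the first column.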
I have two text files. The first one has content: Languages Recursively enumerable Regularwhile the second one has content: Minimal automaton Turing machine FiniteI want to combine them into one file column-wise. So I tried paste 1 2 and its output is: Languages Minimal automaton Recursively enumerable Turing machine Regular FiniteHowever I would like to have the columns aligned well such as Languages Minimal automaton Recursively enumerable Turing machine Regular FiniteI was wondering if it would be possible to achieve that without manually handling? Added: Here is another example, where Bruce method almost nails it, except some slight misalignment about which I wonder why? $ cat 1 Chomsky hierarchy Type-0 —$ cat 2 Grammars Unrestricted$ paste 1 2 | pr -t -e20 Chomsky hierarchy Grammars Type-0 Unrestricted — (no common name)
combine text files column-wise
The routing table is used in order of most specific to least specific. However on linux it's a bit more complicated than you might expect. Firstly there is more than one routing table, and when which routing table is used is dependent on a number of rules. To get the full picture: $ ip rule show 0: from all lookup local 32766: from all lookup main 32767: from all lookup default$ ip route show table local broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 192.168.0.0 dev eth0 proto kernel scope link src 192.168.1.27 local 192.168.1.27 dev eth0 proto kernel scope host src 192.168.1.27 broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table main default via 192.168.1.254 dev eth0 192.168.0.0/23 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table default$The local table is the special routing table containing high priority control routes for local and broadcast addresses. The main table is the normal routing table containing all non-policy routes. This is also the table you get to see if you simply execute ip route show (or ip ro for short). I recommend not using the old route command anymore, as it only shows the main table and its output format is somewhat archaic. The table default is empty and reserved for post-processing if previous default rules did not select the packet. You can add your own tables and add rules to use those in specific cases. One example is if you have two internet connections, but one host or subnet must always be routed via one particular internet connection. The Policy Routing with Linux book explains all this in exquisite detail.
On my PC I have to following routing table: Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0I don't understand how it is analyzed, I mean from top-down or bottom-up? If it is analyzed from top-down then everything will always be sent to the router in my home even though the IP destination was 192.168.1.15; but what I knew (wrongly?) was that if a PC is inside my same local network then once I recovered the MAC destination through a broadcast message then my PC could send directly the message to the destination.
Which order is the route table analyzed in?
I imagine that column doesn't know that \e[7m is a VT100 escape sequence that takes no space in the output. It seems to assume character codes 0 to 037 octal take no space. You can get what you want by putting the initial escape sequence on a line of its own, then removing that newline from the output: printf '\e[7m\n1\t2\t3\e[0m\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n' | column -ts $'\t' | sed '1{N;s/\n//}'
I'm colorizing the header of a table formatted with column -ts $'\t' Works well without color codes, but when I add color codes to the first line column doesn't properly align the output. Without colored output it works as expected: printf "1\t2\t3\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n" | column -ts $'\t' But when adding color on the first line column doesn't align the text of the colored row: printf "\e[7m1\t2\t3\e[0m\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n" | column -ts $'\t' Observed this behaviour both on Ubuntu Linux and Mac OS X.
Issue with column command and color escape codes
Awk and the %42s printf format come to mind. Here's a simple script to get you started. Setting width[i] to a positive value in the BEGIN clause makes that column have the given width, right-aligned. If width[i] is negative, then column i is left-aligned and has the width -width[i]. This script doesn't handle wide fields intelligently; all subsequent columns on that line are just shifted right.

awk -F, -vOFS= '
    BEGIN {width[1]=-10; width[2]=8;}
    {
      for (i=1; i<=NF; i++) {$i = sprintf("%*s", width[i], $i)}
      print
    }'

If you have numeric fields, you can use other printf formats. If you have the BSD utility column (Debian ships it, I don't know about other Linux distributions), you can easily format things in columns with column -t -s ,. The nice thing about column is that it determines the column width automatically. However, it doesn't do right-hand formatting; while you can get it with some post-processing, I don't know if the complexity is worth it. You could do everything in Perl. Its format facility may help. A more powerful approach to table formatting with common unix tools is tbl, which is the part of *roff (the man page formatter) that handles tables. But that's also more complex because you need to convert the input to roff. Yet another possible tool is the text mode browser w3m, which is good at table rendering. Here, you'd have to convert the input to HTML.
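For instance (the widths and the two-line sample input are arbitrary): width[1]=-10 left-justifies the first column in 10 characters, and width[2]=8 right-justifies the second in 8:

```shell
printf 'alpha,10\nbeta,2\n' |
awk -F, -v OFS= '
    BEGIN { width[1] = -10; width[2] = 8 }   # col 1: left, col 2: right
    { for (i = 1; i <= NF; i++) $i = sprintf("%*s", width[i], $i); print }'
```

This relies on awk understanding the %*s dynamic-width conversion, which gawk and mawk do, but a strictly POSIX awk may not.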
Is there an easy way to split comma-delimited lines of text into columns with some of the columns right-justified? As a bonus, it would be nice to format numbers, but this probably isn't too hard with sed.
Split a line into columns with some of the columns right-justified?
UPDATE: added a script (not a one-liner, though) which allows you to choose which columns you want justified... It caters for Left (default) and Right (not Center). As-is, it expects TAB delimited fields. You can change the column output separator via $S.

RJustCol=(2 3 5)  # Set columns to be right justified.
RJustRex=; Q=$'\x01'; R=$'\x02'; S=" | "
for r in ${RJustCol[@]} ;do  # Build the Right-justify regex.
    RJustRex="${RJustRex}s/^(([^\t]+\t){$((r-1))})([^\t]+)\t/\1\3$R\t/; "
done
sed -r "s/$/\tZ/g; s/^/$Q/; s/\t/\t$Q/g; $RJustRex" file |
    column -t -s $'\t' |
    sed -r "s/ $Q/$Q/g; s/$Q([^$Q$R]*)$R([^$Q]*)/$S\2\1/g; s/$Q/$S/g; s/Z$//"

Typical output:

| The Lost Art       |   +1255 |  789 | Los                 |     -55 |
| of the Idle Moment | -159900 | 0123 | Fabulosos Cadillacs | +321987 |

Note: column doesn't work as you might expect when you have empty cells.

Option -n
By default, the column command will merge multiple adjacent delimiters into a single delimiter when using the -t option; this option disables that behavior. This option is a Debian GNU/Linux extension.

From here on is the original answer, which is related to but doesn't specifically address the main issue of the question. Here is the "one-liner" which suits integers (and allows +/- signs). The "X" place-holder forces column to right-pad the last cell.

sed 's/$/\tX/g' file |column -t |sed -r 's/([-+]?[0-9.]+)( +)/\2\1/g; s/^ //; s/X$//'

Typical output

  +1255  789  011      -55       34
-159900   33  022  +321987  2323566

If you have float values, or floats mixed with integers, or just integers (optional leading +/- signs), a bit more shuffling works.

sed -r 's/$/\tX/;
        s/([-+]?[0-9]+\.[0-9]+)\t/\1@\t/g
        s/([-+]?[0-9]+)\t/\1.@\t/g
        s/\./\t./g' file |
    column -t |
    sed -r 's/ \././g
            s/([-+]?[0-9.]+)( +)/\2\1/g
            s/\.@/ /g
            s/@//g
            s/ +X$//'

Typical output

+1255       789  0.11   -55           34
  -15.9900   33  0.22  +321.987  2323566
I use column -t to format data for easy viewing in the shell, but there seems to be no option to specify column alignment (e.g. align to the right). Any Bash one-liners to do it? I have an arbitrary number of columns.
Set alignment of numeric columns when columnating data
This should work and output to data2.csv: head -n 1 data1.csv > data2.csv && tail -n +2 data1.csv | sort -t "|" -k 1 >> data2.csv
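An equivalent sketch that avoids opening the file twice: let read consume the header from the stream and hand the rest straight to sort (shown here with inline sample data instead of data1.csv):

```shell
# Print the header untouched, then sort the remaining lines on field 1
printf '%s\n' 'Name|Email' 'Zed|z' 'Amy|a' |
{ IFS= read -r header; printf '%s\n' "$header"; sort -t '|' -k1,1; }
```

The header stays on top, and the remaining rows come out sorted.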
I need to sort a CSV file, but the header row (1st row) keeps getting sorted. This is what I'm using: cat data1.csv | sort -t"|" -k 1 -o data1.csv Here's a sample line: Name|Email|Country|Company|Phone Brent Trujillo|[emailprotected]|Burkina Faso|Donec LLC|(612) 943-0167
Sorting a CSV file, but not its header [duplicate]
Using perl's Text::ASCIITable module (also supports multi-line cells): print_table() { perl -MText::ASCIITable -e ' $t = Text::ASCIITable->new({drawRowLine => 1}); while (defined($c = shift @ARGV) and $c ne "--") { push @header, $c; $cols++ } $t->setCols(@header); $rows = @ARGV / $cols; for ($i = 0; $i < $rows; $i++) { for ($j = 0; $j < $cols; $j++) { $cell[$i][$j] = $ARGV[$j * $rows + $i] } } $t->addRow(\@cell); print $t' -- "$@" }print_table Domain 'Without WWW' 'With WWW' -- \ "$@" "${WOUT_WWW[@]}" "${WITH_WWW[@]}"Where the WOUT_WWW and WITH_WWW arrays have been constructed as: for domain do WOUT_WWW+=("$(dig +short "$domain")") WITH_WWW+=("$(dig +short "www.$domain")") doneWhich gives: .---------------------------------------------------------------------. | Domain | Without WWW | With WWW | +-------------------+----------------+--------------------------------+ | google.com | 216.58.208.142 | 74.125.206.147 | | | | 74.125.206.104 | | | | 74.125.206.106 | | | | 74.125.206.105 | | | | 74.125.206.103 | | | | 74.125.206.99 | +-------------------+----------------+--------------------------------+ | stackexchange.com | 151.101.65.69 | stackexchange.com. | | | 151.101.1.69 | 151.101.1.69 | | | 151.101.193.69 | 151.101.193.69 | | | 151.101.129.69 | 151.101.129.69 | | | | 151.101.65.69 | +-------------------+----------------+--------------------------------+ | linux.com | 151.101.193.5 | n.ssl.fastly.net. | | | 151.101.65.5 | prod.n.ssl.us-eu.fastlylb.net. | | | 151.101.1.5 | 151.101.61.5 | | | 151.101.129.5 | | '-------------------+----------------+--------------------------------'
I'm quite new at bash and I am trying to learn it by creating some small scripts. I created a small script to look up the DNS entry for multiple domains at the same time. The domains are given as attributes. COUNTER=0 DOMAINS=()for domain in "$@" do WOUT_WWW=$(dig "$domain" +short) if (( $(grep -c . <<<"$WOUT_WWW") > 1 )); then WOUT_WWW="${WOUT_WWW##*$'\n'}" ; fi WITH_WWW=$(dig "www.${domain}" +short) if (( $(grep -c . <<<"$WITH_WWW") > 1 )); then WITH_WWW="${WITH_WWW##*$'\n'}" ; fi DOMAINS[$COUNTER]="$domain|$WOUT_WWW|$WITH_WWW" COUNTER=$(($COUNTER+1)) doneNow I just want to loop through the new "multidimensional" array and give the output like mysql table: +------------------------------+ | Row 1 | Row 2 | Row 3 | +------------------------------+ | Value | Value | Value | +------------------------------+How can I do that?
Bash output array in table
Maybe something like: awk ' function isleap(y) { return y % 4 == 0 && (y % 100 != 0 || y % 400 == 0) } $2 == 3 && $3 == 1 && isleap($1) && last_day != 29 { print $1, 2, 29, (last_data + $4) / 2 } {print; last_day = $3; last_data = $4}' file
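Run on a minimal slice of the question's sample data, the script inserts the interpolated 29th of February:

```shell
printf '%s\n' '1972 2 27 100' '1972 2 28 101' '1972 3 1 102' |
awk '
function isleap(y) { return y % 4 == 0 && (y % 100 != 0 || y % 400 == 0) }
$2 == 3 && $3 == 1 && isleap($1) && last_day != 29 {
    # interpolate: average of the previous day and the 1st of March
    print $1, 2, 29, (last_data + $4) / 2
}
{ print; last_day = $3; last_data = $4 }'
```

The inserted line is 1972 2 29 101.5, i.e. the average of 101 and 102.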
I have some tables (table.txt) as follows: YEAR MONTH DAY RES 1971 1 1 1345 1971 1 2 1265 1971 1 3 1167 The length of each time series goes from 1.1.1971 until 31.12.2099. Unfortunately, some time series are missing leap years and their values (e.g. year 1972 is a leap year so the month of February should have 29 days, but my time series just have 28 days in February 1972). For example, in my current tables the end of the February month in 1972 is presented as follows: YEAR MONTH DAY RES 1972 2 27 100 1972 2 28 101 1972 3 1 102 This is wrong, because it's not accounting for any leap year. Instead I would like to include in my time series each missing day (obviously the 29th of February) of every leap year, by extrapolating the value from the previous and next day, as follows: YEAR MONTH DAY RES 1972 2 27 100 1972 2 28 101 1972 2 29 101.5 1972 3 1 102 Is there a way to do that using shell/bash?
Leap year - extrapolating value
Here are two mutually exclusive sed loops: sed -ne'p;/ 12 * 31 /!d;:n' -e'n;//!bn' <<"" YEAR MONTH DAY RES 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1972 1 1 549 1972 1 2 746 ... 1972 12 31 999 1972 1 1 933 1972 1 2 837 ... 1972 12 31 343YEAR MONTH DAY RES 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1972 1 1 549 1972 1 2 746 ... 1972 12 31 999Basically sed has two states - print and eat. In the first state - the print state - sed automatically prints every input line then checks it against the / 12 * 31 / pattern. If the current pattern space does ! not match it is deleted and sed pulls in the next input line and starts the script again from the top - at the print command without attempting to run anything that follows the delete command at all. When an input line does match / 12 * 31 /, however, sed falls through to the second half of the script - the eat loop. First it defines a branch : label named n; then it overwrites the current pattern space with the next input line, and then it compares the current pattern space to the // last matched pattern. Because the line that matched it before has just been overwritten with the next one, the first iteration of this eat loop doesn't match, and every time it does ! not sed branches back to the :n label to get the next input line and once again compare it to the // last matched pattern. When another match is finally made - some 365 next lines later - sed does -not automatically print it when it completes its script, pulls in the next input line, and starts again from the top at the print command in its first state. So each loop state will fall through to the next on the same key and do as little as possible in the meantime to find the next key. Note that the entire script completes without invoking a single editing routine, and that it needs only to compile the single regexp. The automaton that results is very simple - it understands only [123 ] and [^123 ]. 
What's more, at least half of the comparisons will very likely be made without any compilations, because the only address referenced in the eat loop at all is the // empty one. sed can therefore complete that loop entirely with a single regexec() call per input line. sed may do similar for the print loop as well. Timed: I was curious about how the various answers here might perform, and so I came up with my own table: dash <<"" d=0 D=31 IFS=: set 1970 1 while case "$*:${d#$D}" in (*[!:]) ;; ($(($1^($1%4)|(d=0))):1:) D=29 set $1 2;; (*:1:) D=28 set $1 2;; (*[3580]:) D=30 set $1 $(($2+1));; (*:) D=31 set $(($1+!(t<730||(t=0)))) $(($2%12+1)) esac do printf '%-6d%-4d%-4d%d\n' "$@" $((d+=1)) $((t+=1)) done| head -n1000054 >/tmp/dates dash <<<'' 6.62s user 6.95s system 166% cpu 8.156 total That puts a million+ lines in /tmp/dates and doubles the output for each of years 1970 - 3338. The file looks like: tail -n1465 </tmp/dates | head; echo; tail </tmp/dates 3336 12 27 728 3336 12 28 729 3336 12 29 730 3336 12 30 731 3336 12 31 732 3337 1 1 1 3337 1 2 2 3337 1 3 3 3337 1 4 4 3337 1 5 5 3338 12 22 721 3338 12 23 722 3338 12 24 723 3338 12 25 724 3338 12 26 725 3338 12 27 726 3338 12 28 727 3338 12 29 728 3338 12 30 729 3338 12 31 730 ...some of it anyway. And then I tried the different commands on it: for cmd in "sort -uVk1,3" \ "sed -ne'p;/ 12 * 31 /!d;:n' -e'n;//!bn'" \ "awk '"'{u=$1 $2 $3 $4;if (!a[u]++) print;}'\' do eval "time ($cmd|wc -l)" </tmp/dates done 500027 ( sort -uVk1,3 | wc -l; ) \ 1.85s user 0.11s system 280% cpu 0.698 total 500027 ( sed -ne'p;/ 12 * 31 /!d;:n' -e'n;//!bn' | wc -l; ) \ 0.64s user 0.09s system 110% cpu 0.659 total 500027 ( awk '{u=$1 $2 $3 $4;if (!a[u]++) print;}' | wc -l; ) \ 1.46s user 0.15s system 104% cpu 1.536 total The sort and sed commands both completed in less than half the time awk did - and these results were typical. I did run them several times.
It appears all of the commands are writing out the correct number of lines as well - and so they probably all work. sort and sed were fairly well neck and neck - with sed generally a hair ahead - for completion time for every run, but sort does more actual work to achieve its results than either of the other two commands. It is running parallel jobs to complete its task and benefits a great deal from my multi-core cpu. awk and sed both peg the single-core assigned them for the entire time they process. The results here are from a standard, up-to-date GNU sed, but I did try another. In fact, I tried all three commands with other binaries, but only the sed command actually worked with my heirloom tools. The others, as I guess due to non-standard syntax, simply quit with error before getting off the ground. It is good to use standard syntax when possible - you can freely use more simple, honed, and efficient implementations in many cases that way: PATH=/usr/heirloom/bin/posix2001:$PATH; time ...500027 ( sed -ne'p;/ 12 * 31 /!d;:n' -e'n;//!bn' | wc -l; ) \ 0.31s user 0.12s system 136% cpu 0.318 total
I have some tables (table.txt) that have been wrongly built and present redundancy in the results, as follows: YEAR MONTH DAY RES 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1972 1 1 549 1972 1 2 746 ... Instead I would like to have: YEAR MONTH DAY RES 1971 1 1 245 1971 1 2 587 ... 1971 12 31 685 1972 1 1 549 1972 1 2 746 ... So the problem is that the results are presented twice in the table. That means (with the provided example) that after '1971' I should expect year '1972' and not '1971' again. Is there a way to delete the redundant results using sh/bash? I should note that my data run from 1971 until 2099 day by day, and that they have exactly the same format even after year 2000, as follows: YEAR MONTH DAY RES 1971 1 1 245 1971 1 2 587 ... 2000 1 1 875 2000 1 2 456 ... 2099 12 31 321
Reformat tables
The sed script gets rid of the space after the 'e', and the awk script just prints out each field (multiplying $3 by 1 to "convert" it to a non-fp decimal number): $ sed -e 's/e /e/g' file | awk '{print $1, $2, $3 * 1}' 1 1 1423 1 2 1589 1 3 85000 1 4 8900 1 5 8796 This assumes that the floating point numbers in the file: have an extraneous space after the 'e'; omit the '+' for positive exponents; and don't have really large exponents, otherwise awk will print them as fp. It's possible to get awk to do the 's/e /e/' transformation (so sed isn't needed) but it's getting late and my brain's tired. sed | awk is easy and it works.
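A quick check of the pipeline on the question's sample values:

```shell
printf '%s\n' '1 3 0.85e 5' '1 4 0.89e 4' '1 5 8796' |
sed 's/e /e/g' | awk '{ print $1, $2, $3 * 1 }'
# 0.85e5 -> 85000, 0.89e4 -> 8900, plain integers pass through unchanged
```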
I have several 'ascii' tables in a directory; some of them have numbers expressed in decimal and some other ones in floating point, as follows: 1 1 1423 1 2 1589 1 3 0.85e 5 1 4 0.89e 4 1 5 8796 ... Is there a way to convert all the values of the tables to decimal numbers? I heard that using the tr editor might be useful but I can't figure out how to operate the conversion.
Convert floating point numbers to decimal numbers
This caters for a varying number of fields in the same file, and for the last segment being only partially filled, i.e. fewer fields than specified (per segment). Note, though, that if the number of fields in a line results in fewer segments than specified, nothing is written to the output file for those shortfall segments. awk -v 'ncol=5' -v 'pfix=file' '{ fldn = 0 sfix = 1 segs = NF/ncol # round up if number of fields is not evenly divisible by number of columns segs = (segs == int(segs)) ?segs :int(segs)+1 while (fldn != NF) { fmod = (++fldn) % ncol printf "%s%s", dlim, $(fldn) >> pfix sfix if (fmod == 1 ) { dlim = " " } if ((fmod==0 ) || (fldn==NF)) { printf "\n" >> pfix sfix dlim = ""; sfix++ } } }' infile
I have a data file, which can have N rows, and each row is composed M elements separated by white space. Currently, I want to separate each row into several segments. In other words, assume the number of segments is 3; then the original file will be separated into 3 files, each of which has N rows and each row has M/3 elements. Besides writing C++ or Java program, Is there any efficient approach that can fulfill this task on Unix/Linux?
separate a file into several small files according to columns
uniq -c file | awk '$1 >= 3 { print $2,$3 }'The uniq -c will output each line together with a count of how many times that line occurs consecutively. For the given data, it will produce 3 A 0 1 B 0 1 B 1 1 B 0 1 B 1 1 B 0The awk script will take this and output the last two fields if the first field is greater than or equal to 3. The result will be A 0
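A self-contained check with the question's data (change 3 to 30 for the real threshold):

```shell
printf '%s\n' 'A 0' 'A 0' 'A 0' 'B 0' 'B 1' 'B 0' 'B 1' 'B 0' |
uniq -c | awk '$1 >= 3 { print $2, $3 }'
# prints: A 0
```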
I have a table in Linux : A 0 A 0 A 0 B 0 B 1 B 0 B 1 B 0I want to extract lines appeared consecutively for 3 times or more. My expected output is : A 0Actually, 3 times or more is just a simplified example. The actual situation is I want to extract lines that appear consecutively for 30 times more. Any idea? Thank you!
Extracting lines appeared consecutively for 3 times or more in Linux
$ alias MAGICK="printf '%5s\n'" $ MAGICK 10 10
So I don't want this: echo "9" 9Rather I need this, with e.g.: 4 spaces before it: MAGICK "9" 9So if I try it with 10: MAGICK "10" 10then it will just have 3 spaces before it. How can I format my output this way?
How can I right-justify variable length output?
This is not supposed to be a "write this program for me" site, so I am assuming that you have no idea where to start. So here's one way: #!/bin/bash highest=-999 for x in a[0-9]/a[0-9].txt;do fourth="$(awk 'NR==1{print $4}' $x)" if [ $highest -lt $fourth ];then highest=$fourth hifile=$x fi done echo "highest was $highest in $hifile" mv $hifile high/ A brief description of what the above code does: it loops through all directory/file combinations named a[0-9]/a[0-9].txt; it uses awk to assign the fourth field ({print $4}) from the first line (NR==1) to the variable fourth. It then checks whether highest is less than fourth (if [ $highest -lt $fourth ];then), and if so saves the filename in the hifile variable. When the loop is done, it moves the file to the directory "high".
Considering a variable a describing temperature series for the city a. I have 9 directories (a1, a2, a3, a4, a5, a6, a7, a8, a9) each of them containing a table (respectively a1.txt, a2.txt, a3.txt, a4.txt, a5.txt, a6.txt, a7.txt, a8.txt, a9.txt). I would like to move to another directory the table presenting the highest value at the first row and fourth column (with space separator). Does anyone know how to do that?
Move files/table following selection criteria
awk ' {samples[$1] = samples[$1] OFS $NF} END { # print the header first print "Geneid", samples["Geneid"] delete samples["Geneid"] # and then the rest of the data for (geneid in samples) print geneid, samples[geneid] } ' Tab*Pipe the output into | column -t if you want to line up the columns
I have a question concerning the awk command in Unix, to merge multiple tables with a common value Tab1 Geneid Chr Start End Strand Length Sample_1 ENSG00000278267 1 17369 17436 - 68 0 ENSG00000243485 1;1;1 29554;30267;30976 30039;30667;31109 +;+;+ 1021 0 Tab 2 Geneid Chr Start End Strand Length Sample_2 ENSG00000278267 1 17369 17436 - 68 0 ENSG00000243485 1;1;1 29554;30267;30976 30039;30667;31109 +;+;+ 1021 0 Tab 3 Geneid Chr Start End Strand Length Sample_3 ENSG00000278267 1 17369 17436 - 68 0 ENSG00000243485 1;1;1 29554;30267;30976 30039;30667;31109 +;+;+ 1021 0 As you can see, Geneid is the same in these tables, and I would like to merge these files into one with the GeneID column and the "Sample_n" columns awk 'NR==FNR {h[$1] = $7; next} {print $1,$7,h[$1]}' Sample_1.txt Sample_2.txt | head If I'm not missing something it means: NR==FNR, the first file is the template for the output {h[$1] = $7; next} h contains the GeneID of file 1 associated with the value in the 7th column {print $1,$7,h[$1]} print the first/seventh column of the second file for the GeneID contained in the h value This works for 2 files, but not for 3 or more Geneid Sample_1 Sample_2 ENSG00000278267 0 0 ENSG00000243485 0 0 I looked on this website, and people posted all the code, but I don't really understand the command, so does anybody know how to merge these files and explain the parameters in the command?
Awk for merging multiple files with common column
You can use Ex editor (part of Vi/Vim) as demonstrated in the following shell command: $ ex +"g/<tr/;,/tr>/join" +"/<table\_.\{-}\zs<tr/;,/table>/sort /.\{-}<a href/" +%p -scq! table.html | html2text [image of a] a [image of b] b [image of c] c [image of f] fAbove example is using html2text command-line tool to display parsed HTML from stdin (install if required). To save sorted table to the new file, replace +%p -scq! with +'wq! sorted.html', so: ex +"g/<tr/;,/tr>/join" +"/<table\_.\{-}\zs<tr/;,/table>/sort /.\{-}<a href/" +'wq! sorted.html' table.htmlExplanation:+"cmd" - Executes Vim command. g/<tr/;,/tr>/join - Joins lines between <tr/ and tr> (for easier sorting). /<table\_.\{-}\zs<tr/;,/table>/ - Selects content between first <tr/ and /table>. sort /.\{-}<a href/ - Sort above selection for lines starting after <a href/. +%p - Prints buffer. -scq! - Silently quit the editor without saving.Check out similar example here.
I need a very quick and easy way to sort HTML tables. The table rows contain images that should stay with their appropriate row. I tried pasting my HTML into Libre Office calc, but the images are not pasted into rows, so sorting is not possible. BTW, I do not want a sortable table. I want a sorted table. When done, I just want a plain HTML table that I can paste into a blog page, but I want the items in the table sorted. I want to start with my clean HTML table, paste it into an app, sort the table and get the new HTML source without any added styling or junk having been added. It seems simple, but I can't find a solution. Example of a table I wish to sort: <html> <head> <meta content="text/html; charset=ISO-8859-1" http-equiv="content-type"> <title></title> </head> <body> <table style="text-align: left; width: 100%;" border="1" cellpadding="2" cellspacing="2"> <tbody> <tr> <td style="vertical-align: top;"> <a href="http://example.com/images/a"> <img src="http://example.com/images/a_thumb.jpeg" alt="image of a"> </a> </td> <td style="vertical-align: top;">a<br> </td> </tr> <tr> <td style="vertical-align: top;"><a href="http://example.com/images/f"> <img src="http://example.com/images/f_thumb.jpeg" alt="image of f"> </a> </td> <td style="vertical-align: top;">f<br> </td> </tr> <tr> <td style="vertical-align: top;"><a href="http://example.com/images/c"> <img src="http://example.com/images/c_thumb.jpeg" alt="image of c"> </a> </td> <td style="vertical-align: top;">c<br> </td> </tr> <tr> <td style="vertical-align: top;"><a href="http://example.com/images/b"> <img src="http://example.com/images/b_thumb.jpeg" alt="image of b"> </a> </td> <td style="vertical-align: top;">b<br> </td> </tr> </tbody> </table> <br> <br> </body> </html>
Sorted HTML table
So I found it by myself. The code is as follows: struct vm_area_struct *vma; unsigned long oldflags, newflags, pfn;vma = find_extend_vma(mm, addr); oldflags = vma->vm_flags; newflags = oldflags &= ~VM_EXEC;//... //... //...if (pte_present(pte)){ printk("NX bit before: %d", pte_exec(pte)); pte = pte_modify(pte, vm_get_page_prot(newflags)); printk("NX bit after: %d", pte_exec(pte)); pfn = pte_pfn(pte); flush_cache_page(vma, addr, pfn); set_pte(ptep, pte); flush_tlb_page(vma, addr); update_mmu_cache(vma, addr, ptep); pte_unmap(ptep); }and thus the NX bit of a specific PTE is changed.
On Ubuntu with kernel 4.16.7 I am writing a custom system call and, I want to set the NX bit of a specific Page Table Entry. So far I have this piece of code, where I am doing a page table walk to get the PTE I want and then try to set its NX bit: pgd = pgd_offset(mm, addr); if (pgd_none(*pgd) || pgd_bad(*pgd)){ printk("Invalid pgd"); return -1; }p4d = p4d_offset(pgd, addr); if (p4d_none(*p4d) || p4d_bad(*p4d)){ printk("Invalid p4d"); return -1; }pud = pud_offset(p4d, addr); if (pud_none(*pud) || pud_bad(*pud)){ printk("Invalid pud"); return -1; }pmd = pmd_offset(pud, addr); if (pmd_none(*pmd) || pmd_bad(*pmd)){ printk("Invalid pmd"); return -1; }ptep = pte_offset_map(pmd, addr); if (!ptep){ printk("Invalid ptep"); return -1; } pte = *ptep;if (pte_present(pte)){ printk("pte_set_flags"); printk("NX bit before: %d", pte_exec(pte)); // pte_set_flags(pte, _PAGE_NX); // printk("NX bit after : %d", pte_exec(pte)); printk("pte_clear_flags"); // pte_clear_flags(pte, _PAGE_NX); // Same as pte_mkexec() pte_mkexec(pte); printk("NX bit after : %d", pte_exec(pte)); page = pte_page(pte); if (page){ printk("Page frame struct is @ %p", page); } pte_unmap(ptep); }but it doesn't work. All the printk commands show the same result. Any insight?
Manually set NX bit of specific PTE
As @Devon mentioned in the comments: use || instead of &&. The reason is that you want to show lines where at least one of the columns 3,4,5,6 is different from zero. Here's another way to understand it. You're trying to remove lines where those columns are all zeros. Let's begin with the other way around: print all the lines where all those columns are 0. This is easy: awk '$3 == 0 && $4 == 0 && $5 == 0 && $6 == 0' Now you want to invert this statement: show all the lines that don't match the condition above. So you just negate the statement. awk '(!($3 == 0 && $4 == 0 && $5 == 0 && $6 == 0))' The command above will also fulfill your requirement, by the way. Anyway, according to logical negation rules, the negation of the statement "A and B" is "not A or not B". So to negate this statement: $3 == 0 && $4 == 0 && $5 == 0 && $6 == 0 You need to negate each expression, and transform all the "and" operators to "or". $3 != 0 || $4 != 0 || $5 != 0 || $6 != 0 Now you can better understand why your command didn't work. The negation of the statement you used would be: $3 == 0 || $4 == 0 || $5 == 0 || $6 == 0 Which means it would remove all the lines where at least one of the columns (and not all) is zero.
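You can see the corrected condition at work on a two-line sample - the line that is all zeros in columns 3-6 is the only one dropped:

```shell
printf '%s\n' 'Chr1 16644 0 0 1 1' 'Chr1 16782 0 0 0 0' |
awk '$3 != 0 || $4 != 0 || $5 != 0 || $6 != 0'
# prints only: Chr1 16644 0 0 1 1
```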
I was wondering how to filter a table with several columns based on a specific value in each of the columns of interest. I have this example here: Chr1 16644 0 0 1 1 Chr1 16645 0 0 1 1 Chr1 16646 0 0 1 1 Chr1 16647 0 0 1 1 Chr1 16648 0 0 1 1 Chr1 16649 0 0 1 1 Chr1 16650 0 0 1 1 Chr1 16651 0 0 1 1 Chr1 16782 0 0 0 0 Chr1 16783 0 0 0 0 Chr1 16784 0 0 0 0 Chr1 16785 0 0 0 0 Chr1 16786 0 0 1 1 Chr1 16787 0 0 1 1 Chr1 16788 0 0 1 1 Chr1 16789 0 0 1 1 Chr1 16790 0 0 1 1And I would like to remove all the rows containing a zero in all of the columns 3,4,5,6. I have tried it as such cat STARsamples_read_depth.txt | awk '$3 != 0 && $4 != 0&& $5 != 0 && $6 != 0' | lessBut it removes also the rows where only some of these columns have a zero, not in all four! Is there a way to do it? thanks Assa
How to filter a table using awk
Since you're happy for the header line to also be merged this is simple awk awk -F';' -vOFS=';' '{ $(NF+1)=$2$3 ; print}'Basically we add a new field $(NF+1) which consists of $2$3, which merges those fields. With OFS=';' the fields are output with ; separator.
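For example, with the header line included (which, as noted, is merged too):

```shell
printf '%s\n' 'YEAR;MONTH;DAY' '1971;1;2' |
awk -F';' -v OFS=';' '{ $(NF+1) = $2 $3; print }'
# YEAR;MONTH;DAY;MONTHDAY
# 1971;1;2;12
```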
I have several csv tables as follows: YEAR;MONTH;DAY;RES1;RES2 1971;1;1;1206.1;627 1971;1;2;1303.4;654.3 1971;1;3;1248.9;662 1971;1;4;1188.8;666.8 From this I would like to create a new column that concatenates the values of the columns MONTH and DAY. Therefore the output should look like this: YEAR;MONTH;DAY;RES1;RES2;MONTHDAY 1971;1;1;1206.1;627;11 1971;1;2;1303.4;654.3;12 1971;1;3;1248.9;662;13 1971;1;4;1188.8;666.8;14
Adding column to a table by concatenating values from other columns
Ideally, since the data is in GTF format, one should use a GTF parser to parse it. I currently have no such parser or parsing library installed so my solution is based solely on the data that you have provided in the question. To extract the 9th column: $ cut -f 9 data.gtf gene_id "strAD1.1"; transcript_id "strAD1.1.1"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.028725"; FPKM "0.053510"; TPM "0.109957"; gene_id "strAD1.1"; transcript_id "strAD1.1.1"; exon_number "1"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.014218"; gene_id "strAD1.1"; transcript_id "strAD1.1.1"; exon_number "2"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.072139";To get the data that we want from this, we need to treat transcripts and exons separately as their attributes have different order in the data. We do this with awk and output different fields in the input data depending on whether the current line contains the string exon_number or not: $ cut -f 9 data.gtf | awk '/exon_number/ { print $2, $4, $8, $10; next } { print $2, $4, $6, $8 }' "strAD1.1"; "strAD1.1.1"; "ENST00000469289"; "ENSG00000243485"; "strAD1.1"; "strAD1.1.1"; "ENST00000469289"; "ENSG00000243485"; "strAD1.1"; "strAD1.1.1"; "ENST00000469289"; "ENSG00000243485";Then we remove the double quotes and semicolons from this: $ cut -f 9 data.gtf | awk '/exon_number/ { print $2, $4, $8, $10; next } { print $2, $4, $6, $8 }' | tr -d '";' strAD1.1 strAD1.1.1 ENST00000469289 ENSG00000243485 strAD1.1 strAD1.1.1 ENST00000469289 ENSG00000243485 strAD1.1 strAD1.1.1 ENST00000469289 ENSG00000243485
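The awk/tr part can be checked on a simplified attribute string (the values g1/t1/r1/e1 are made up; field 9 is assumed to be already extracted by cut):

```shell
printf '%s\n' \
  'gene_id "g1"; transcript_id "t1"; reference_id "r1"; ref_gene_id "e1"; cov "0.1";' \
  'gene_id "g1"; transcript_id "t1"; exon_number "1"; reference_id "r1"; ref_gene_id "e1";' |
awk '/exon_number/ { print $2, $4, $8, $10; next } { print $2, $4, $6, $8 }' |
tr -d '";'
# both lines yield: g1 t1 r1 e1
```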
I have a large GTF file, like below: # ./stringtie -p 4 -G /home/humangenome_hg19/homo_gtf_file.gtf -o strAD1_as/transcripts.gtf -l strAD1 /home/software/star-2.5.2b/bin/Linux_x86_64/mapA1Aligned.sortedByCoord.out.bam # StringTie version 1.3.2d 1 StringTie transcript 30267 31109 1000 + . gene_id "strAD1.1"; transcript_id "strAD1.1.1"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.028725"; FPKM "0.053510"; TPM "0.109957"; 1 StringTie exon 30267 30667 1000 + . gene_id "strAD1.1"; transcript_id "strAD1.1.1"; exon_number "1"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.014218"; 1 StringTie exon 30976 31109 1000 + . gene_id "strAD1.1"; transcript_id "strAD1.1.1"; exon_number "2"; reference_id "ENST00000469289"; ref_gene_id "ENSG00000243485"; ref_gene_name "MIR1302-10"; cov "0.072139"; I want to have the 9th column with just gene_id, transcript_id, reference_id and ref_gene_id. They are in the 9th column and separated by spaces (the columns themselves are TAB-separated). Could you please help me out with how I can extract such a column with a simple command in Linux? I don't want to use Excel for it.
Extracting quoted and labelled data from a given column
sed '1s/\./-/g' file.txt should do it for you. Why \. ? Because the . has a special meaning in sed: it is used to match any character. You need to strip the special meaning by escaping it, i.e. \.
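A quick demonstration that only the first line is touched:

```shell
printf '%s\n' 'MONTH GFDL.ESM2M_ECOMAG GFDL.ESM2M_HYPE' '1 3546.21855483871 2345.11127781945' |
sed '1s/\./-/g'
# MONTH GFDL-ESM2M_ECOMAG GFDL-ESM2M_HYPE
# 1 3546.21855483871 2345.11127781945
```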
I have different file for which I would like to change the header. Currently the tables are as follow: MONTH GFDL.ESM2M_ECOMAG GFDL.ESM2M_HYPE 1 3546.21855483871 2345.11127781945I would like to change the . by some - but just for the header. Therefore I would like the following output: MONTH GFDL-ESM2M_ECOMAG GFDL-ESM2M_HYPE 1 3546.21855483871 2345.11127781945So far I have try a sed command: sed -i.bak "1,1s/./-/" file.txtwhich just replace the M of "MONTH" by a -. I have also tried an awk command: awk '(NR==1){gsub(".","-", $0);}{print;}' file.txt > jony.txtWhich just replace the entire header by a succession of -
Change header using sed or awk
Perl's paragraph mode (-00) is good for this, it reads the input (stdin and/or file(s)) one paragraph at a time. A paragraph is a block of text extending until the next blank line - the paragraph boundary is one or more blank lines. For example: $ perl -00 -ne 'print if /Table1 Header/' quartus.rpt +-----------------+ ; Table1 Header ; +--------+--------; ; Field1 ; Field2 ; ; Field3 ; Field4 ; +--------+--------+ Table notes That prints any paragraph matching the pattern "Table1 Header" - the pattern is a perl regular expression, so can be as simple or complicated as you need. See man perlre for details. BTW, if you wanted to print an entire Section, rather than just one table, you could do something like: $ perl -00 -ne 'if (/Section/) { $match = /Section1/ ? 1 : 0 }; print if $match' quartus.rpt +---------------------+ ; Section1 Title ; +---------------------+ Miscellaneous text +-----------------+ ; Table1 Header ; +--------+--------; ; Field1 ; Field2 ; ; Field3 ; Field4 ; +--------+--------+ Table notes +------------------------+ ; Table2 Header ; +---------------+--------; ; Longer Field1 ; Field2 ; ; Longer Field3 ; Field4 ; +---------------+--------+ In English: if the current paragraph matches "Section" then variable $match is set to 1 if the paragraph matches "Section1" or 0 if it doesn't. Print any paragraph when $match evaluates as true (non-zero). Here's another more generic variant that might be more useful if the literal string "Section" isn't part of the pattern to match on: $ perl -00 -ne '$match = 1 if /Section1/; $match = 0 if /Section2/; print if $match' quartus.rpt This prints every paragraph starting from the paragraph matching "Section1" up to, but not including the paragraph containing "Section2". i.e. printing is toggled on at "Section1", and toggled off at "Section2".
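If you'd rather stay with awk, its paragraph mode (an empty RS) is a rough equivalent of perl -00 - a sketch for the single-table case, not a drop-in for the Section variants above:

```shell
# RS= makes awk read blank-line-separated paragraphs as records
printf 'Miscellaneous text\n\n; Table1 Header ;\nField1\n\nother\n' |
awk -v RS= -v ORS='\n\n' '/Table1 Header/'
```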
In a report file generated from Quartus, there are multiple "tables" like the following: +---------------------+ ; Section1 Title ; +---------------------+ Miscellaneous text+-----------------+ ; Table1 Header ; +--------+--------; ; Field1 ; Field2 ; ; Field3 ; Field4 ; +--------+--------+ Table notes+------------------------+ ; Table2 Header ; +---------------+--------; ; Longer Field1 ; Field2 ; ; Longer Field3 ; Field4 ; +---------------+--------++---------------------+ ; Section2 Title ; +---------------------+ Miscellaneous textNOTE: There is always a blank line between sections and tables. I want to be able to print out just one full table like the following based on it matching the "Table Header". +-----------------+ ; Table1 Header ; +--------+--------; ; Field1 ; Field2 ; ; Field3 ; Field4 ; +--------+--------+ Table notesWe currently use the following combination of a grep to print out the beginning table line and a sed to print the rest, but it seems like I should be able to do it all with just sed. grep -h -B 1 "; Table1 Header" quartus.rpt | grep -v "; Table1 Header" sed -n '/; Table1 Header/,/^$/p' quartus.rpt
How do I use sed to print only a specific "table" in a text report file? [duplicate]
I can think of two ways to approach this:implement your own 'paste' that skips the first three fields of all but the first file - for example awk -F\; ' FNR==NR { a[FNR]=$0; next; } { for (i=4;i<=NF;i++) a[FNR] = sprintf("%s;%s", a[FNR], $i); } END { for (n=1;n<=FNR;n++) print a[n]; }' file*.csvpaste the files together, then retain fields based on an indicator derived from the header row paste -d\; file*.csv | perl -MList::MoreUtils=indexes -F\; -alne ' @keep = indexes { $_ !~ /YEAR|MONTH|DAY/ } @F if $. == 1; print join ";", @F[0..2,@keep]'(if you don't have the List::MoreUtils module, you should be able to implement the same functionality using perl's grep).
I have a bunch of input csv files (delimited with semi-colon ";" having the following format YEAR;MONTH;DAY;RES1FILE1;RES2FILE1;RES3FILE1 1901;01;01;101;154;169 1901;01;02;146;174;136The number of columns for each files is variable, meaning that some files could have 6 columns and some others 4. I would like to paste each files into one big csv file (with ";" as a delimiter. My problem is that, in order to avoid redundancy, I would like to avoid pasting the first three column each time since for every files they are the same (YEAR;MONTH;DAY). Therefore the output should look like this: YEAR;MONTH;DAY;RES1FILE1;RES2FILE1;RES3FILE1;RES1FILE2;RES2FILE2 1901;01;01;101;154;169;185;165 1901;01;02;146;174;136;129;176I am currently using the following command: arr=( *_rcp8p5.csv ) paste "${arr[@]}" | cut -f-4,$(seq -s, 8 4 $((4*${#arr[@]}))) >out_rcp8p5.txtBut it is not working at all
Paste different csv files
Create the following files:

merge21:

BEGIN {
    FS  = "\t"
    OFS = "\t"
}
NR==FNR {                               # file2
    key = $2 "," $3
    present[key] = 1
    minor8[key] = $1
    next
}
{                                       # file1
    key = $1 "," $3
    if (present[key])
        print $1, $2, $3, $4, minor8[key]
    else
        print $1, $2, $3, $4, "-"
}

merge312:

BEGIN {
    FS  = "\t"
    OFS = "\t"
}
NR==FNR {                               # file3
    key = $1 "," $2
    present[key] = 1
    minor9[key] = $3
    next
}
{                                       # file1 + file2
    key = $1 "," $3
    if (present[key])
        print $1, $2, $3, $4, $5, minor9[key]
    else
        print $1, $2, $3, $4, $5, "-"
}

They are nearly identical; the differences lie in the key fields used and in the columns printed. Now type the command

awk -f merge21 file2 file1 | awk -f merge312 file3 -

This assumes that none of your key fields include comma(s) and none of your data include hyphens, but it really depends only on there being some strings that don't appear in the data. It would be trivial to extend this to support more columns; I hope that is obvious. This could be enhanced to do everything in a single awk run, but that would be a bit more complex, and (IMNSHO) not worth the effort.

This produces what is called a "left outer join" of the data in your files; see Difference between INNER and OUTER joins on Stack Overflow for some definitions. ("Left outer join" is defined in the accepted answer to that question as (paraphrased) «all rows in the first table, plus any common rows in the other table(s)».) Your output will be

MAIN1  minor1  MAIN2  minor3   minor8  minor9
1      bla1    a      blabla1  yes6    sure3
1      bla2    b      blabla2  yes7    sure4
1      bla3    c      blabla3  yes8    sure5
2      bla4    a      blabla4  yes9    sure6
2      bla5    d      blabla5  -       sure7
3      bla6    e      blabla6  yes2    sure8
4      bla7    f      blabla7  yes3    sure9
5      bla8    a      blabla8  yes4    -
5      bla9    g      blabla9  yes5    sure2

and, obviously, you can remove the - characters with sed. (And, of course, if your real data actually include hyphens, choose some unused character or string as the placeholder for absent data.)

Notes

- FS and OFS are the Input Field Separator and the Output Field Separator, respectively.
  (Apparently IFS is meaningless in awk; that was an error on my part.) You probably don't really need the FS="\t" — awk recognizes tabs as field separators on input by default. (It lets you have fields that contain spaces, but you don't seem to be interested in that.) OFS="\t" is important; because of it, I can say print $1, $2, $3, $4 and get the input fields to be output with tabs between them. If I didn't say OFS="\t", they would be separated by spaces, unless I said print $1 "\t" $2 "\t" $3 "\t" $4, which is tedious and impairs readability.

- If you had given additional constraints on MAIN1 and MAIN2 — for example, they are always just one character each, or MAIN1 is always a number and MAIN2 always begins with a letter — I wouldn't have needed the comma (,) in key. But the original version of your first question shows no such constraint. Consider the following data:

  MAIN1 ($2)   MAIN2 ($3)   badkey = $2 $3   goodkey = $2 "," $3
  2            34151        234151           2,34151
  23           4151         234151           23,4151

  If we don't include some separator character in the key that doesn't otherwise appear in the key fields (MAIN1 and MAIN2), we can get the same key value for different rows.

- At the risk of splitting hairs, I'm not "telling Linux" anything; I'm telling awk what to do.

- Regarding the code

  NR==FNR {                             # file3
      key = $1 "," $2
      present[key] = 1
      minor9[key] = $3
      next
  }

  Consider the seventh-from-the-last line of file3, which contains 1 a sure3. Obviously we have $1=1, $2=a, and $3=sure3, so key=1,a. present[key] = 1 means I am setting present["1,a"] to 1 as a flag to indicate that file3 has a 1,a line; i.e., that there is a minor9 value for key=1,a. Since there is no 5,a line in file3, present["5,a"] doesn't get set, and so the "file1 + file2" part of the code knows that there is no minor9 for key=5,a, and it should print - instead. The name present is just an arbitrary choice on my part; it indicates that the 1,a row is present in file3 (and the 5,a row is not). It's conventional to use 1 to represent "TRUE".
- You can replace print $1, $2, $3, $4 with

  for (n=1; n<=4; n++)
      printf "%s\t", $n

  You should end the line either by using plain print (as opposed to printf) for the last field, or by doing printf "\n". You can simplify even further by doing something like

  for (n=1; n<=4; n++)
      printf "%s\t", $n
  if (present[key])
      print minor8[key]
  else
      print "-"

Please read awk(1), the POSIX specification for awk, The GNU Awk User's Guide, and see Awk.info for more information.
This is a follow-up question to my previous question asked about 24 hours ago: Matching two main columns at the same time between files, and paste supplementary columns into the output file when those main columns match G-Man solved that problem with a useful code, but I have a follow-up question. I already accepted the answer, hence this second post... I have 3 files, each with a unique number of columns, all tab-separated, but some columns are shared between the 3 files. It's the shared columns between the 3 files that I want to use to create some sort of "aggregate" file. The tables below show examples of what the files could look like. Basically I want to match columns MAIN1 and MAIN2 between the files. Both columns between the three files have to match. I want to add column "minor8" from file2 to the right side of the table in file1 for those lines when MAIN1 and MAIN2 between the two files match. Subsequently, I want to add "minor9" from file3 on the right side of the file1 table for those cases when MAIN1 and MAIN2 between the two files match. Because "minor8" should go immediately next to the rightmost column of file1 (column name: "minor3"), I would like "minor9" to go next to "minor8" into the new OUTPUT file. The OUTPUT file gives an idea what my ideal final file should look like. 
Basically these are examples of 3 files (the "tabs" are a bit messed up).

file1:

MAIN1  minor1  MAIN2  minor3
1      bla1    a      blabla1
1      bla2    b      blabla2
1      bla3    c      blabla3
2      bla4    a      blabla4
2      bla5    d      blabla5
3      bla6    e      blabla6
4      bla7    f      blabla7
5      bla8    a      blabla8
5      bla9    g      blabla9

file2:

minor8  MAIN1  MAIN2
yes1    2      d
yes2    3      e
yes3    4      f
yes4    5      a
yes5    5      g
yes6    1      a
yes7    1      b
yes8    1      c
yes9    2      a

file3:

MAIN1  MAIN2  minor9
5      a      sure1
5      g      sure2
1      a      sure3
1      b      sure4
1      c      sure5
2      a      sure6
2      d      sure7
3      e      sure8
4      f      sure9

desired OUTPUT file:

MAIN1  minor1  MAIN2  minor3   minor8  minor9
1      bla1    a      blabla1  yes6    sure3
1      bla2    b      blabla2  yes7    sure4
1      bla3    c      blabla3  yes8    sure5
2      bla4    a      blabla4  yes9    sure6
2      bla5    d      blabla5  yes1    sure7
3      bla6    e      blabla6  yes2    sure8
4      bla7    f      blabla7  yes3    sure9
5      bla8    a      blabla8  yes4    sure1
5      bla9    g      blabla9  yes5    sure2

As mentioned before, G-Man provided useful code that did exactly what I asked for (please see the previous post). I will probably ask G-Man (or someone else who has time) some specific questions about some of the individual lines of the code that I don't quite understand yet, but until then, I have that follow-up question. G-Man's code was able to recreate the abovementioned OUTPUT file, so thank you G-Man!

The follow-up question: One thing I forgot to mention, which the code wasn't able to do (as far as I have seen), is that it will remove rows from file1 if there is no match with columns MAIN1 and MAIN2 between the files. This is my fault, since I did not specify that. My goal is to have an OUTPUT file where no lines from file1 are removed. Basically file1 is my priority file. Whatever amount of rows this file has (close to a million), that's the amount of rows the OUTPUT file should have too. Columns "minor8" and "minor9" can be empty for some rows if there is no column MAIN1,MAIN2 match. But I would like to keep those rows of file1 when there is a "missing/empty" value for either "minor8" or "minor9" (or both).
I will try to illustrate this using a slightly different version of files 2 and 3 mentioned above (so file1 stays the same).

adjusted file2 (does not have MAIN1,MAIN2 combination: 2,d):

minor8  MAIN1  MAIN2
yes2    3      e
yes3    4      f
yes4    5      a
yes5    5      g
yes6    1      a
yes7    1      b
yes8    1      c
yes9    2      a

adjusted file3 (does not have MAIN1,MAIN2 combination: 5,a):

MAIN1  MAIN2  minor9
5      g      sure2
1      a      sure3
1      b      sure4
1      c      sure5
2      a      sure6
2      d      sure7
3      e      sure8
4      f      sure9

adjusted, desired OUTPUT (i.e., empty value in column minor8 for MAIN1,MAIN2 combination 2-d; and empty value in column minor9 for MAIN1,MAIN2 combination 5-a):

MAIN1  minor1  MAIN2  minor3   minor8  minor9
1      bla1    a      blabla1  yes6    sure3
1      bla2    b      blabla2  yes7    sure4
1      bla3    c      blabla3  yes8    sure5
2      bla4    a      blabla4  yes9    sure6
2      bla5    d      blabla5          sure7
3      bla6    e      blabla6  yes2    sure8
4      bla7    f      blabla7  yes3    sure9
5      bla8    a      blabla8  yes4
5      bla9    g      blabla9  yes5    sure2

I hope my way of explaining this is clear enough. I see that the tabs of the tables are a bit messed up. Do you guys prefer it like this, or for me to straighten out the tables visually? (only issue that can result from that, I can imagine, is that when you copy-paste my example data, you would have additional tabs that shouldn't be there...) Anyways, I very much appreciate you guys' help. Hopefully at some point in the near future I will be able to contribute to this forum, apart from simply asking for help...

Do you have any suggestions how G-Man's code should be edited in order to make this possible? Or if you have a totally different suggestion how a useful piece of code could be written that takes this additional requirement into account, please let me know.
Matching 2 main columns between files; and paste other columns into the output file when those main columns match. Keep row size of 1st file intact
This should do the trick:

awk '{ if (NF > 4) print $1, $2, $3, "0"; else print }' INPUTFILE.txt
I have a table (ASCII format with space delimiter), as follows:

1 1 1900 111
1 2 1900 121
1 3 1900 145
1 4 1900 1.45e 07
1 5 1900 5.21e 25
1 6 1900 152

I would like that, if there is a fifth column (obviously enclosing the value of the exponent), the value is replaced by 0. Therefore, considering this example, the desired output should be as follows:

1 1 1900 111
1 2 1900 121
1 3 1900 145
1 4 1900 0
1 5 1900 0
1 6 1900 152

Does anyone have any guidance?
Replace values in a table
You need to add a newline at the end of the printf statement, like so:

printf "START %10s %10s %10s %10s %10s %10s %10s %10s %10s %10s %10s %5s\n" $f1 $f2 $f3 $f4 $f5 $f6 $f7 $f8 $f9 $f10 $f11 $f12   # get the student id

(note the \n)
How can I resolve the problem of the tables going to the right? I just want it to be shown under 1. Here's my script with START added. The alignment has gone wonky now:

while IFS="," read f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12
do
    printf "START %10s %10s %10s %10s %10s %10s %10s %10s %10s %10s %10s %5s" $f1 $f2 $f3 $f4 $f5 $f6 $f7 $f8 $f9 $f10 $f11 $f12   # get the student id
done < records.csv
echo " Press <enter> to return to main menu"
read null
Alignment when printing a series of records
Try:

$ awk 'BEGIN { FS="\t" } NR==1 { split($0,header,"\t") ; next } { for(i=2;i<=NF;i++) print $1,header[i],$i }' data
HOUSE TC 55
HOUSE CC 65
HOUSE PC 75
HOUSE TCP 85
HOUSE FTX 95
HOUSE FRX 105
CAR TC 100
CAR CC 200
CAR PC 300
CAR TCP 400
CAR FTX 500
CAR FRX 600
H2 TC 5
H2 CC 10
H2 PC 15
H2 TCP 20
H2 FTX 25
H2 FRX 30
C2 TC 10
C2 CC 20
C2 PC 30
C2 TCP 40
C2 FTX 50
C2 FRX 60

The one-liner broken into pieces:

Set the tab character as field separator of the input files:

BEGIN { FS="\t" }

If it is the first line (NR==1), split it into fields and store them in the array header. This simply is shorter than copying all fields $1, $2, ... in a for loop and storing them. The next command prevents line 1 from being processed by the following code too, which is for the other lines only. (Using FS instead of "\t" here would have been more consistent...)

NR==1 { split($0,header,"\t") ; next }

For each other line (NR!=1), print all fields ($2...$NF) prefixed by $1 and the field's name (header[i]):

{ for(i=2;i<=NF;i++) print $1,header[i],$i }

Setting OFS=FS="\t" in the BEGIN block will make print use a tab between the fields. I did not change this in the answer because it would need to reformat all output lines too.
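The tab-separated variant mentioned at the end can be sketched like this (sample data piped in for illustration):

```shell
printf 'UNIT\tTC\tCC\nHOUSE\t55\t65\nCAR\t100\t200\n' |
awk 'BEGIN { OFS = FS = "\t" }        # tab on input AND output
     NR==1 { split($0, header, FS); next }
     { for (i = 2; i <= NF; i++) print $1, header[i], $i }'
```

The output rows (HOUSE TC 55 and so on) now have their three fields separated by tabs instead of spaces.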
I have reports that are generated in the following tab-delimited format:

UNIT  TC   CC   PC   TCP  FTX  FRX
HOUSE 55   65   75   85   95   105
CAR   100  200  300  400  500  600
H2    5    10   15   20   25   30
C2    10   20   30   40   50   60

I need to change them to the following format:

HOUSE TC 55
HOUSE CC 65
HOUSE PC 75
HOUSE TCP 85
HOUSE FTX 95
HOUSE FRX 105
CAR TC 100
CAR CC 200
CAR PC 300
CAR TCP 400
CAR FTX 500
CAR FRX 600

And so on. I would like to use standard tools such as sed, awk, or bash, but any suggestions are welcome. The code will be inserted into a bash script that I'm already using to parse and concatenate the data beforehand. The number of entries will always be the same; the reports don't change.
Move data row(s) to single column while retaining row header(s)
Using GNU awk:

gawk '
    {
        grp = 0
        # see if any of these words already have a group
        for (i=1; i<=NF; i++) {
            if (group[$i]) {
                grp = group[$i]
                break
            }
        }
        # no words have been seen before: new group
        if (!grp) {
            grp = ++n
        }
        # if we have not seen this word, add it to the output
        for (i=1; i<=NF; i++) {
            if (!group[$i]) {
                line[grp] = line[grp] $i OFS
            }
            group[$i] = grp
        }
    }
    END {
        PROCINFO["sorted_in"] = "@ind_num_asc"
        for (n in line) {
            print line[n]
        }
    }
' input.file

With the first input:

AMAZON NILE ALASKA MANGROVE
HELLO MY NAME IS

With the second input (piping the output to column -t):

apple_bin2file     strawberry_24files
mango2files        strawberry_39files   apple_bin8file
dastool_bin6files  strawberry_40files   apple_bin6file
orange_bin004file  dastool_bin004files
orange_bin005file  dastool_bin005files
apple_bin3file     dastool_bin3files
apple_bin5file     dastool_bin5files
apple_bin7file     dastool_bin7files
I have a tab-delimited file like the one shown below, and would like to merge the rows based on matches in any of the columns. The number of columns is usually 2, but could vary in some cases and be 3.

input:

AMAZON    NILE
ALASKA    NILE
HELLO     MY
MANGROVE  AMAZON
MY        NAME
IS        NAME

desired output:

AMAZON  NILE  ALASKA  MANGROVE
HELLO   MY    NAME    IS

How could one go about this with awk? Will this work for the below file also?

input:

apple_bin2file     strawberry_24files
mango2files        strawberry_39files
apple_bin8file     strawberry_39files
dastool_bin6files  strawberry_40files
apple_bin6file     strawberry_40files
orange_bin004file  dastool_bin004files
orange_bin005file  dastool_bin005files
apple_bin3file     dastool_bin3files
apple_bin5file     dastool_bin5files
apple_bin6file     dastool_bin6files
apple_bin7file     dastool_bin7files
apple_bin8file     mango2files

expected output in tab-delimited format:

apple_bin2file     strawberry_24files
mango2files        strawberry_39files   apple_bin8file
dastool_bin6files  strawberry_40files   apple_bin6file
orange_bin004file  dastool_bin004files
orange_bin005file  dastool_bin005files
apple_bin3file     dastool_bin3files
apple_bin5file     dastool_bin5files
apple_bin7file     dastool_bin7files

Sorry to those who answered, I updated the input files!
Merge rows using common values in any column
If the delimiter is anything other than one fixed character, then cut is the wrong tool. Use awk instead. Consider this test file, which has three fields:

$ cat file
one///two/2//two///three

To print the second field and only the second field:

$ awk -F/// '{print $2}' file
two/2//two
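If you really want to stick with cut, one common workaround is to first collapse the multi-character delimiter to a single character that does not occur in the data (the choice of "|" here is an assumption about your data):

```shell
echo 'coagulation factor VIII-associated 1 /// coagulation factor VIII-associated 2 /// coagulation factor VIII-associated 3' |
sed 's, */// *,|,g' |   # turn " /// " into a single "|"
cut -d'|' -f2
```

This prints coagulation factor VIII-associated 2; the sed expression also eats the spaces around the delimiter.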
I have a table in which each entry looks something like:

coagulation factor VIII-associated 1 /// coagulation factor VIII-associated 2 /// coagulation factor VIII-associated 3

I would like to use cut -d/// -f2 myfile.txt, but I'm getting an error:

cut: bad delimiter

Same case when I use single quotes or double quotes around the delimiter:

cut -d'///' -f2 myfile.txt
cut -d"///" -f2 myfile.txt

Do I have to escape the slash somehow? If so, what is the escape character for cut? The documentation doesn't seem to have that information, and I tried \.
How can I use a triple slash as a delimiter with cut?
Assuming your table is actually a file of TAB-separated values:

awk -v OFS='\t' 'NR>1 { for(i=2; i<=NF; i++) $i = sprintf("%.2f", $i) } 1' <file.csv

(NR>1 skips the header line, and starting the loop at i=2 leaves the MONTH column untouched.)

Edit: Same thing with Perl:

perl -lape '$. > 1 and $_ = join "\t", $F[0], map { sprintf "%.2f", $_ } @F[1..$#F]' file.csv
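As a quick check on the sample row (this variant starts the loop at field 2, so the MONTH column keeps its integer form):

```shell
printf 'MONTH\tA1\tA2\tA3\n1\t1.54564468\t2.48949\t6.4984984\n' |
awk -v OFS='\t' 'NR>1 { for(i=2; i<=NF; i++) $i = sprintf("%.2f", $i) } 1'
```

which prints the header unchanged and then 1, 1.55, 2.49, 6.50 - note that %.2f rounds rather than truncates (1.5456... becomes 1.55, not 1.54).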
I have some tables where the numbers have too many digits, as follows:

MONTH A1         A2      A3        ......
1     1.54564468 2.48949 6.4984984 .....

Is there a way, using unix, to reformat the table in the following way?

MONTH A1   A2   A3   ...
1     1.54 2.49 6.50 ...
Reformat table - Number of digits
You can try awk as follows:

awk '$5 == "Nov" { sum += $4 } END { print sum }' file
80600

$5 refers to the column holding the month. The pattern $5 == "Nov" filters the table for all records in November; awk then sums the numbers in column $4.
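If you later need the totals for every month at once, the same idea extends to an array keyed on the month field (a sketch with made-up sample rows; the for (m in sum) loop emits months in no particular order, so pipe through sort if that matters):

```shell
printf '1 arnold user 1933 Nov 7\n3 sam user 983 Apr 13\n5 sandy user 22414 Nov 7\n' |
awk '{ sum[$5] += $4 } END { for (m in sum) print m, sum[m] }'
```

For these sample rows it reports 983 for Apr and 24347 for Nov.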
I have the table below. How can I calculate the sum of bytes for records in Nov only? That is, I want to find the rows for Nov and then sum the numbers in column 4 for those rows only. How can I do it?

1 arnold user 1933  Nov 7  13:05
2 megan  user 10809 Nov 7  13:03
3 sam    user 983   Apr 13 12:14
4 mark   user 31869 Jun 15 12:20
5 sandy  user 22414 Nov 7  13:03
6 semon  user 37455 Nov 7  13:03
7 andre  user 27511 Dec 9  13:07
8 jim    user 7989  Nov 7  13:03
How to calculate the sum of bytes in a column?
If you want to average each day over all years, you could do something like

awk -F\; '
    NR>1 {
        sum1[$2";"$3] += $4
        sum2[$2";"$3] += $5
        n[$2";"$3]++
    }
    END {
        printf "MONTH;DAY;RES1;RES2\n"
        for (i in n)
            printf "%s;%.1f;%.1f\n", i, sum1[i]/n[i], sum2[i]/n[i]
    }' file.csv

Note that the output order isn't guaranteed unless you sort the arrays - the most convenient way to do that depends somewhat on your flavor of awk. Or you could simply pipe the output through an external sort.
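For example, one way to do the external sort while keeping the header first (the file name and values are invented for the sketch):

```shell
# Stand-in for the unsorted awk output
printf 'MONTH;DAY;RES1;RES2\n12;1;3.0;4.0\n1;2;1.0;2.0\n1;1;5.0;6.0\n' > /tmp/avg.csv

sed -n 1p /tmp/avg.csv                          # header line only
sed 1d /tmp/avg.csv | sort -t';' -k1,1n -k2,2n  # body, by MONTH then DAY
```

The numeric sort keys matter: a plain lexical sort would put month 12 before month 2.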
I have some "CSV" data (actually using ; as a delimiter) having a row for every day from 1971-01-01 through 2099-12-31 (a span of 2099−1971=128 years). The data are organized as follows:

YEAR;MONTH;DAY;RES1;RES2
1971;1;1;1206.1;627
1971;1;2;1303.4;654.3
1971;1;3;1248.9;662
1971;1;4;1188.8;666.8
1971;1;5;1055.2;667.8
1971;1;6;987.1;663.3
1971;1;7;939.2;655.1
1971;1;8;883.2;644.4
︙
2099;12;29;791.7;664.3
2099;12;30;746.7;646.4
2099;12;31;706.8;629.3

With this data I need to calculate the average value for each calendar day (of the 365 in a year) over all years (so retain month and day and average over the years). For example, since the data span from 1971 until 2100, I have 128 data points for 01-01 (January 1). I would like to calculate the average of those 128 values for January 1 (i.e., the values for 1971-01-01, 1972-01-01, ..., 2099-01-01); and so on for day 01-02 (January 2) until day 12-31 (December 31). Therefore, the desired output should include 365 days and look as follows:

MONTH;DAY;RES1;RES2
1;1;AVERAGE_1.1_RES1;AVERAGE_1.1_RES2
1;2;AVERAGE_1.2_RES1;AVERAGE_1.2_RES2
1;3;AVERAGE_1.3_RES1;AVERAGE_1.3_RES2
1;4;AVERAGE_1.4_RES1;AVERAGE_1.4_RES2
1;5;AVERAGE_1.5_RES1;AVERAGE_1.5_RES2
1;6;AVERAGE_1.6_RES1;AVERAGE_1.6_RES2
1;7;AVERAGE_1.7_RES1;AVERAGE_1.7_RES2
︙
12;29;AVERAGE_12.29_RES1;AVERAGE_12.29_RES2
12;30;AVERAGE_12.30_RES1;AVERAGE_12.30_RES2
12;31;AVERAGE_12.31_RES1;AVERAGE_12.31_RES2

How can I do that?
Calculate average values for each day over multiple years
As steeldriver astutely pointed out in the comments, you've told sed to -i edit the file in-place. As a result, sed will not provide any output, and so the > redirection will put that nothing into the output file. Either keep the -i flag and accept that the input file will be updated in-place: sed -i 's/datum/YEAR-MONTH-DAY/g' inputor drop the -i flag and use the redirection to put the updated contents in the output file: sed 's/datum/YEAR-MONTH-DAY/g' input > output
I have some tables in which I need to replace a field that has a random position in each table. For information, the tables have semicolon-separated fields, and I would like to replace the field "datum" with "YEAR-MONTH-DAY". So far, I have tried:

sed -i 's/datum/YEAR-MONTH-DAY/g' input > output

But it just outputs an empty file.
Replace value in a table
By default, htop lists each thread of a process separately, while ps doesn't. To turn off the display of threads, press H, or use the "Setup / Display options" menu, "Hide userland threads". This puts the following line in your ~/.htoprc or ~/.config/htop/htoprc (you can alternatively put it there manually): hide_userland_threads=1(Also hide_kernel_threads=1, toggled by pressing K, but it's 1 by default.) Another useful option is “Display threads in a different color” in the same menu (highlight_threads=1 in .htoprc), which causes threads to be shown in a different color (green in the default theme). In the first line of the htop display, there's a line like “Tasks: 377, 842 thr, 161 kthr; 2 running”. This shows the total number of processes, userland threads, kernel threads, and threads in a runnable state. The numbers don't change when you filter the display, but the indications “thr” and “kthr” disappear when you turn off the inclusion of user/kernel threads respectively. When you see multiple processes that have all characteristics in common except the PID and CPU-related fields (NIce value, CPU%, TIME+, ...), it's highly likely that they're threads in the same process.
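The same per-thread view is available outside htop: ps can print one line per thread, and /proc has one task directory per thread (a sketch; $$ is just the current shell, which is single-threaded, so both commands show exactly one thread whose ID equals the PID):

```shell
ps -L -o pid,lwp,nlwp,comm -p $$   # LWP = thread ID, NLWP = number of threads
ls /proc/$$/task                   # one directory per thread
```

Run against a multithreaded process instead, both would list every thread.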
In ps xf:

26395 pts/78   Ss     0:00  \_ bash
27016 pts/78   Sl+    0:04  |   \_ unicorn_rails master -c config/unicorn.rb
27042 pts/78   Sl+    0:00  |       \_ unicorn_rails worker[0] -c config/unicorn.rb

In htop, it shows up like:

[screenshot of the htop output, not included here]

Why does htop show more processes than ps?
Why does `htop` show more processes than `ps`?
I think this part of the clone(2) man page may clear up the difference re. the PID:

    CLONE_THREAD (since Linux 2.4.0-test8)
        If CLONE_THREAD is set, the child is placed in the same thread
        group as the calling process. Thread groups were a feature added
        in Linux 2.4 to support the POSIX threads notion of a set of
        threads that share a single PID. Internally, this shared PID is
        the so-called thread group identifier (TGID) for the thread
        group. Since Linux 2.4, calls to getpid(2) return the TGID of
        the caller.

The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID(*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes.

Signal handling was another problematic area with the old implementation; this is described in more detail in the paper @FooF refers to in their answer.

As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print.
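The TGID/TID split is visible directly in /proc (see proc(5)): in a status file, Tgid is the shared process ID and Pid is the individual thread ID. A sketch against the current (single-threaded) process, where the two are equal:

```shell
grep -E '^(Tgid|Pid):' /proc/self/status
```

For any extra thread created with CLONE_THREAD, Tgid stays the same while Pid (the TID) differs.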
I'm going through this book, Advanced Linux Programming by Mark Mitchell, Jeffrey Oldham, and Alex Samuel. It's from 2001, so a bit old. But I find it quite good anyhow. However, I got to a point when it diverges from what my Linux produces in the shell output. On page 92 (116 in the viewer), the chapter 4.5 GNU/Linux Thread Implementation begins with the paragraph containing this statement:

    The implementation of POSIX threads on GNU/Linux differs from the
    thread implementation on many other UNIX-like systems in an
    important way: on GNU/Linux, threads are implemented as processes.

This seems like a key point and is later illustrated with C code. The output in the book is:

main thread pid is 14608
child thread pid is 14610

And in my Ubuntu 16.04 it is:

main thread pid is 3615
child thread pid is 3615

ps output supports this. I guess something must have changed between 2001 and now. The next subchapter on the next page, 4.5.1 Signal Handling, builds up on the previous statement:

    The behavior of the interaction between signals and threads varies
    from one UNIX-like system to another. In GNU/Linux, the behavior is
    dictated by the fact that threads are implemented as processes.

And it looks like this will be even more important later on in the book. Could someone explain what's going on here? I've seen this one: Are Linux kernel threads really kernel processes?, but it doesn't help much. I'm confused.

This is the C code:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void* thread_function (void* arg)
{
  fprintf (stderr, "child thread pid is %d\n", (int) getpid ());
  /* Spin forever. */
  while (1);
  return NULL;
}

int main ()
{
  pthread_t thread;
  fprintf (stderr, "main thread pid is %d\n", (int) getpid ());
  pthread_create (&thread, NULL, &thread_function, NULL);
  /* Spin forever. */
  while (1);
  return 0;
}
Are threads implemented as processes on Linux?
The problem is caused by the TasksMax systemd attribute. It was introduced in systemd 228 and makes use of the cgroups pid subsystem, which was introduced in the linux kernel 4.3. A task limit of 512 is thus enabled in systemd if kernel 4.3 or newer is running. The feature is announced here and was introduced in this pull request and the default values were set by this pull request. After upgrading my kernel to 4.3, systemctl status docker displays a Tasks line:

# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-01-15 19:58:00 CET; 1min 52s ago
     Docs: https://docs.docker.com
 Main PID: 2770 (docker)
    Tasks: 502 (limit: 512)
   CGroup: /system.slice/docker.service

Setting TasksMax=infinity in the [Service] section of docker.service fixes the problem. docker.service is usually in /usr/share/systemd/system, but it can also be put/copied in /etc/systemd/system to avoid it being overridden by the package manager.

A pull request is increasing TasksMax for the docker example systemd files, and an Arch Linux bug report is trying to achieve the same for the package. There is some additional discussion going on on the Arch Linux Forum and in an Arch Linux bug report regarding lxc.

DefaultTasksMax can be used in the [Manager] section in /etc/systemd/system.conf (or /etc/systemd/user.conf for user-run services) to control the default value for TasksMax. Systemd also applies a limit for programs run from a login-shell. These default to 4096 per user (will be increased to 12288) and are configured as UserTasksMax in the [Login] section of /etc/systemd/logind.conf.
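Rather than copying the whole unit file, a systemd drop-in can override just this one setting (a sketch; the drop-in file name is arbitrary and the commands need root):

```shell
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
TasksMax=infinity
EOF
systemctl daemon-reload
systemctl restart docker
```

systemctl status docker should afterwards show the Tasks line without the (limit: 512) cap.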
I am running a docker server on Arch Linux (kernel 4.3.3-2) with several containers. Since my last reboot, both the docker server and random programs within the containers crash with a message about not being able to create a thread, or (less often) to fork. The specific error message is different depending on the program, but most of them seem to mention the specific error Resource temporarily unavailable. See at the end of this post for some example error messages.

Now there are plenty of people who have had this error message, and plenty of responses to them. What's really frustrating is that everyone seems to be speculating how the issue could be resolved, but no one seems to point out how to identify which of the many possible causes for the problem is present. I have collected these 5 possible causes for the error and how to verify that they are not present on my system:

1. There is a system-wide limit on the number of threads configured in /proc/sys/kernel/threads-max (source). In my case this is set to 60613.
2. Every thread takes some space in the stack. The stack size limit is configured using ulimit -s (source). The limit for my shell used to be 8192, but I have increased it by putting * soft stack 32768 into /etc/security/limits.conf, so ulimit -s now returns 32768. I have also increased it for the docker process by putting LimitSTACK=33554432 into /etc/systemd/system/docker.service (source), and I verified that the limit applies by looking into /proc/<pid of docker>/limits and by running ulimit -s inside a docker container.
3. Every thread takes some memory. A virtual memory limit is configured using ulimit -v. On my system it is set to unlimited, and 80% of my 3GB of memory are free.
4. There is a limit on the number of processes using ulimit -u. Threads count as processes in this case (source). On my system, the limit is set to 30306, and for the docker daemon and inside docker containers, the limit is 1048576. The number of currently running threads can be found out by running ls -1d /proc/*/task/* | wc -l or by running ps -elfT | wc -l (source). On my system they are between 700 and 800.
5. There is a limit on the number of open files, which according to some sources is also relevant when creating threads. The limit is configured using ulimit -n. On my system and inside docker, the limit is set to 1048576. The number of open files can be found out using lsof | wc -l (source); on my system it is about 30000.

It looks like before the last reboot I was running kernel 4.2.5-1, now I'm running 4.3.3-2. Downgrading to 4.2.5-1 fixes all the problems. Other posts mentioning the problem are this and this. I have opened a bug report for Arch Linux.

What has changed in the kernel that could be causing this?

Here are some example error messages:

Crash dump was written to: erl_crash.dump
Failed to create aux thread

Jan 07 14:37:25 edeltraud docker[30625]: runtime/cgo: pthread_create failed: Resource temporarily unavailable

dpkg: unrecoverable fatal error, aborting:
 fork failed: Resource temporarily unavailable
E: Sub-process /usr/bin/dpkg returned an error code (2)

test -z "/usr/include" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/include"
/bin/sh: fork: retry: Resource temporarily unavailable
/usr/bin/install -c -m 644 popt.h '/tmp/lib32-popt/pkg/lib32-popt/usr/include'
test -z "/usr/share/man/man3" || /usr/sbin/mkdir -p "/tmp/lib32-popt/pkg/lib32-popt/usr/share/man/man3"
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: Resource temporarily unavailable
/bin/sh: fork: retry: No child processes
/bin/sh: fork: Resource temporarily unavailable
/bin/sh: fork: Resource temporarily unavailable
make[3]: *** [install-man3] Error 254

Jan 07 11:04:39 edeltraud docker[780]: time="2016-01-07T11:04:39.986684617+01:00" level=error msg="Error running container: [8] System error: fork/exec /proc/self/exe: resource temporarily unavailable"

[Wed Jan 06 23:20:33.701287 2016] [mpm_event:alert] [pid 217:tid 140325422335744] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
Creating threads fails with “Resource temporarily unavailable” with 4.3 kernel
Finding all PIDs to renice recursively

We need to get the PIDs of all processes ("normal" or "thread") which are descendants (children or in the thread group) of the to-be-niced process. This ought to be recursive (considering children's children). Anton Leontiev's answer gives the hint to do so: all folder names in /proc/$PID/task/ are threads' PIDs, each containing a children file listing potential children processes. However, it lacks recursivity, so here is a quick & dirty shell script to find them:

#!/bin/sh
[ "$#" -eq 1 -a -d "/proc/$1/task" ] || exit 1

PID_LIST=
findpids() {
    for pid in /proc/$1/task/* ; do
        pid="$(basename "$pid")"
        PID_LIST="$PID_LIST$pid "
        for cpid in $(cat /proc/$1/task/$pid/children) ; do
            findpids $cpid
        done
    done
}

findpids $1
echo $PID_LIST

If process PID 1234 is the one you want to recursively nice, now you can do:

renice -n 15 -p $(/path/to/findchildren.sh 1234)

Side notes

Nice value or CPU shares?

Please note that nowadays, nice values may not be so relevant "system-wide", because of automatic task grouping, especially when using systemd. Please see this answer for more details.

Difference between threads and processes

Note: this answer explains Linux threads precisely. In short: the kernel only handles "runnable entities", that is, something which can be run and scheduled. Kernel-wise, these entities are called processes. A thread is just a kind of process that shares (at least) memory space and signal handlers with another one. Every such process has a system-wide unique identifier: the PID (Process ID). As a result, you can renice each "thread" individually, because they do have their own PID.1

1 See this answer for more information about the PID (process ID) and TID (thread ID) difference.
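After running the renice, ps can confirm that every thread actually carries the new nice value (a sketch; 1234 is a hypothetical PID - note that raising the niceness of your own processes needs no special privileges):

```shell
renice -n 5 -p 1234           # hypothetical PID
ps -L -o tid,ni,comm -p 1234  # one row per thread; NI is the nice value
```

If some threads were missed, their NI column would still show the old value.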
Linux does not (yet) follow the POSIX.1 standard which says that a renice on a process affects "all system scope threads in the process", because according to the pthreads(7) doc "threads do not share a common nice value". However, sometimes, it can be convenient to renice "everything" related to a given process (one example would be Apache child processes and all their threads). So, how can I renice all threads belonging to a given process? How can I renice all child processes belonging to a given process? I am looking for a fairly easy solution. I know that process groups can sometimes be helpful; however, they do not always match what I want to do: they can include a broader or different set of processes. Using a cgroup managed by systemd might also be helpful, but even if I am interested to hear about it, I am mostly looking for a "standard" solution. EDIT: also, man (7) pthreads says "all of the threads in a process are placed in the same thread group; all members of a thread group share the same PID". So, is it even possible to renice something which doesn't have its own PID?
How to renice all threads (and children) of one process on Linux?
There is absolutely no difference between a thread and a process on Linux. If you look at clone(2) you will see a set of flags that determine what is shared, and what is not shared, between the threads. Classic processes are just threads that share nothing; you can share what components you want under Linux. This is not the case on other OS implementations, where there are much more substantial differences.
I've read in many places that Linux creates a kernel thread for each user thread in a Java VM. (I see the term "kernel thread" used in two different ways: a thread created to do core OS work, and a thread the OS is aware of and schedules to perform user work. I am talking about the latter type.) Is a kernel thread the same as a kernel process, since Linux processes support shared memory spaces between parent and child, or is it truly a different entity?
Are Linux kernel threads really kernel processes?
Threads are an integral part of the process and cannot be killed outside it. There is the pthread_kill function but it only applies in the context of the thread itself. From the docs at the link:Note that pthread_kill() only causes the signal to be handled in the context of the given thread; the signal action (termination or stopping) affects the process as a whole.
$ ps -e -T | grep myp | grep -v grep 797 797 ? 00:00:00 myp 797 798 ? 00:00:00 myp 797 799 ? 00:00:00 myp 797 800 ? 00:00:00 mypThis shows the process myp with PID = 797 and four threads with different SPIDs. How can I kill a particular thread of the process without killing the whole process? I understand that it might not be possible at all in some cases when there are fatal dependencies on that particular thread. But, is it possible in any case? If yes, how? I tried kill 799 and the process itself was terminated. Now I am not sure whether this was because there were dependencies that made myp fail without the thread 799, or because kill is simply not able to kill individual threads.
How can I kill a particular thread of a process?
From a task_struct perspective, a process’s threads have the same thread group leader (group_leader in task_struct), whereas child processes have a different thread group leader (each individual child process). This information is exposed to user space via the /proc file system. You can trace parents and children by looking at the ppid field in /proc/${pid}/stat or .../status (this gives the parent pid); you can trace threads by looking at the tgid field in .../status (this gives the thread group id, which is also the group leader’s pid). A process’s threads are made visible in the /proc/${pid}/task directory: each thread gets its own subdirectory. (Every process has at least one thread.) In practice, programs wishing to keep track of their own threads would rely on APIs provided by the threading library they’re using, instead of using OS-specific information. Typically on Unix-like systems that means using pthreads.
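The /proc fields described above can be inspected directly. A minimal sketch, using the current shell as the target process (for a single-threaded process like a plain shell, Tgid equals the PID and task/ contains exactly one entry):

```shell
#!/bin/sh
# Sketch: inspect parent/thread-group relations of the current shell via /proc.
pid=$$

# The parent process: PPid field of /proc/PID/status.
ppid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status")

# The thread group id: Tgid field; for a process's main thread it equals the PID.
tgid=$(awk '/^Tgid:/ {print $2}' "/proc/$pid/status")

# Every thread of the process appears as a subdirectory of /proc/PID/task.
threads=$(ls "/proc/$pid/task")

echo "pid=$pid ppid=$ppid tgid=$tgid threads: $threads"
```

Walking PPid upwards traces the process ancestry; comparing Tgid values tells you which tasks are threads of the same process.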
Linux doesn't actually distinguish between processes and threads, and implements both as a data structure task_struct. So what does Linux provide to some programs for them to tell threads of a process from its child processes? For example, Is there a way to see details of all the threads that a process has in Linux? Thanks.
How does Linux tell threads apart from child processes?
Let us understand the difference between a process and a thread. As per this link,The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces.Now, we have the pid_max parameter which can be determined as below. cat /proc/sys/kernel/pid_maxSo the above command returns 32,768 which means I can execute 32,768 processes simultaneously in my system that can run in separate memory spaces. Now, we have the threads-max parameter which can be determined as below. cat /proc/sys/kernel/threads-maxThe above command returns me the output as 126406 which means I can have 126406 threads in a shared memory space. Now, let us take the 3rd parameter ulimit -u which says the total processes a user can have at a particular time. The above command returns me the output as 63203. This means for all the processes that a user has created at a point of time the user can have 63203 processes running. Hypothetical case So assuming there are 2 processes simultaneously being run by 2 users and each process is consuming memory heavily, both the processes will effectively use the 63203 user limit on the processes. So, if that is the case, the 2 users will have effectively used up the entire 126406 threads-max size. Now, I need to determine how many processes an user can run at any point of time. This can be determined from the file, /etc/security/limits.conf. So, there are basically 2 settings in this file as explained over here. A soft limit is like a warning and hard limit is a real max limit. For example, following will prevent anyone in the student group from having more than 50 processes, and a warning will be given at 30 processes. @student hard nproc 50 @student soft nproc 30Hard limits are maintained by the kernel while the soft limits are enforced by the shell.
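The three values discussed above can be read side by side; a quick sketch (paths are the standard Linux procfs locations, values will of course differ per system):

```shell
#!/bin/sh
# Sketch: show the three limits next to each other.
echo "pid_max:     $(cat /proc/sys/kernel/pid_max)"     # largest PID/TID value the kernel will assign
echo "threads-max: $(cat /proc/sys/kernel/threads-max)" # system-wide cap on the number of tasks
echo "ulimit -u:   $(ulimit -u)"                        # per-user process/thread limit (soft, this session)
```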
I am trying to understand the Linux processes. I'm confused on the respective terms pid_max, ulimit -u and thread_max. What exactly is the difference between these terms? Can someone clarify the differences?
Understanding the differences between pid_max, ulimit -u and thread_max
You can always do: ps -eLo pid= -o tid= | awk '$2 == 792 {print $1}'On Linux: $ readlink -f /proc/*/task/792/../.. /proc/300Or with zsh: $ echo /proc/*/task/792(:h:h:t) 300
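A runnable sketch of the /proc trick above, using the current shell's own PID as the TID (the main thread's TID equals the PID, so the result should map back to itself):

```shell
#!/bin/sh
# Sketch: recover the owning PID of a TID by resolving
# /proc/*/task/TID/../.. — only the owning process has that task entry.
tid=$$
pid=$(basename "$(readlink -f /proc/*/task/"$tid"/../..)")
echo "tid=$tid belongs to pid=$pid"
```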
I run iotop to check on programs that are heavy disk users, in case I need to decrease their priority. Usually this is good enough, but iotop only shows thread ID (TID), and sometimes I want to know process ID (PID) so I can find out more about which process is responsible. Unfortunately, while ps can display TID (a.k.a SPID, LWP), it doesn't have a flag to take a list of TIDs the way it does for a list of PIDs with --pid. The best I can do is list TIDs and then grep the output. For example, if the thread id is 792, I can do $ ps -eLf | grep ' 792 'which works reasonably well, but is a little inelegant. Is there a better way?
Get PID from TID
When you run strace, the lines it returns are system calls. In case it wasn't obvious, epoll_wait() is a system call; you can do a man epoll_wait to find out implementation details like so: epoll_wait, epoll_pwait - wait for an I/O event on an epoll file descriptorThe description for epoll:The epoll API performs a similar task to poll(2): monitoring multiple file descriptors to see if I/O is possible on any of them. The epoll API can be used either as an edge-triggered or a level-triggered interface and scales well to large numbers of watched file descriptors. So it would seem that your process is blocking on file descriptors, waiting to see if I/O is possible on any of them. I would change my tactics a bit and try to make use of lsof -p <pid> to see if you can narrow down what these files actually are.
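If lsof isn't available, the same information can be pulled straight from /proc: every entry under /proc/PID/fd is a symlink naming the underlying file, socket, pipe, or anon inode (such as the anon_inode:[eventpoll] seen in the question). A minimal sketch, run against the current shell:

```shell
#!/bin/sh
# Sketch: list a process's open file descriptors and what they point at.
pid=$$
for fd in /proc/"$pid"/fd/*; do
    printf '%s -> %s\n' "${fd##*/}" "$(readlink "$fd")"
done
```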
Following on from a problem described in "How is it that I can attach strace to a process that isn't in the output of ps?" I'm trying to debug a process that hangs part way through. By using strace -f on my parent process, I was able to determine that I have a bunch of threads that are just showing: # strace -p 26334 Process 26334 attached - interrupt to quit epoll_wait(607, {}, 4096, 500) = 0 epoll_wait(607, {}, 4096, 500) = 0 epoll_wait(607, {}, 4096, 500) = 0 epoll_wait(607, {}, 4096, 500) = 0 epoll_wait(607, ^C <unfinished ...> Process 26334 detachedInvestigating further: # readlink /proc/26334/fd/607 anon_inode:[eventpoll]My gut tells me that I've managed to get some threads in a deadlock situation, but I don't really know enough about epoll to move forward. Are there any commands that can give me some insight into what these threads are polling for, or which file descriptors this epoll descriptor maps to.
What information can I find out about an eventpoll on a running thread?
Linux: The Linux kernel has a great implementation for the matter and provides many features/settings intended to manage the resources of running processes (via CPU governors, sysctl or cgroups); in such a situation, tuning those settings along with swap adjustment (if required) is recommended: basically you will be adapting the default functioning mode to your appliance. Benchmarks, stress tests and situation analysis after applying the changes are a must, especially on production servers. The performance gain can be very significant when the kernel settings are adjusted to the intended usage; on the other hand, this requires testing and a good understanding of the different settings, which is time consuming for an admin. Linux uses governors to load-balance CPU resources between the running applications; many governors are available, and depending on your distro's kernel some governors may not be available (rebuilding the kernel can be done to add missing or non-upstream governors). You can check what the current governor is, change it and, more importantly in this case, tune its settings. Additional documentation: reading, guide, similar question, frequency scaling, choice of governor, the performance governor and cpufreq. SysCtl: Sysctl is a tool for examining and changing kernel parameters at runtime; adjustments can be made permanent with the config file /etc/sysctl.conf. This is an important part of this answer, as many kernel settings can be changed with sysctl; a full list of available settings can be displayed with the command sysctl -a, and details are available in this and this article. Cgroup: The kernel provides the control groups feature, called by the shorter name cgroups in this guide. Cgroups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system.
You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig (control group config) service can be configured to start up at boot time and reestablish your predefined cgroups, thus making them persistent across reboots. Source, further reading and question on the matter. Ram: This can be useful if the system has a limited amount of RAM; otherwise you can disable the swap to mainly use the RAM. The swap system can be adjusted per process or with the swappiness setting. If needed, the resources (RAM) can be limited per process with ulimit (also used to limit other resources). Disk: Disk I/O settings (I/O scheduler) may be changed, as well as the cluster size. Alternatives: Other tools like nice, cpulimit, cpuset, taskset or ulimit can be used as an alternative for the matter.
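A couple of the tunables mentioned above can be queried without any extra tooling; a minimal sketch using the standard sysfs/procfs paths (the cpufreq directory is absent on systems, such as many VMs, without frequency scaling support):

```shell
#!/bin/sh
# Sketch: read the current cpu0 governor and the swappiness value.
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov" ]; then
    echo "cpu0 governor: $(cat "$gov")"
else
    echo "cpu0 governor: not available (no cpufreq support)"
fi
echo "swappiness: $(cat /proc/sys/vm/swappiness)"
```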
A simple example. I'm running a process that serves http request using TCP sockets. It might A) calculate something which means CPU will be the bottleneck B) Send a large file which may cause the network to be the bottleneck or C) Complex database query with semi-random access causing a disk bottleneck Should I try to categorize each page/API call as one or more of the above types and try to balance how much of each I should have? Or will the OS do that for me? How do I decide how many threads I want? I'll use 2 numbers for hardware threads 12 and 48 (intel xeon has that many). I was thinking of having at 2/3rds of the threads be for heavy CPU (8/32), 1 thread for heavy disk (or 1 heavy thread per disk) and the remaining 3/15 be for anything else which means no trying to balance the network. Should I have more than 12/48 threads on hardware that only supports 12/48 threads? Do I want less so I don't cause the CPU to go into a slower throttling mode (I forget what it's called but I heard it happens if too much of the chip is active at once). If I have to load and resource balance my threads how would I do it?
Should I attempt to 'balance' my threads or does linux do this?
The mistake was to presume those numbers were PIDS, when in fact they are TIDS (thread IDs). See Linux function gettid(2). Reading up on clone(2) gives a lot of extra (and interesting) details.
I ran the program pstree -p 31872 which printed the following output: ruby(31872)─┬─{ruby}(31906) └─{ruby}(32372)The man page for pstree says:Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g. icecast2---13*[{icecast2}](The above is displayed differently because of the missing -p option, which disables compaction.) Running pstree 31872 without -p gives: ruby───2*[{ruby}] When I try to observe those PIDS using ps, no results are found. However, the pids, exist in /proc. My question is, why would threads have different pids? I would expect them to be the same (31872) as the process. The same behavior is observed when running htop.
The program pstree and htop showing threads with unique PIDS. How is this possible?
On Linux it refers to the number of threads. From setrlimit(2) (which is the system call used to set the limits):RLIMIT_NPROC The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN. This limit is not enforced for processes that have either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.So ps -efL | wc -l would be more appropriate, however the limits in limits.conf apply per login session (see limits.conf(5) for details).
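To see the gap between the two counts, here is a sketch that counts processes and threads straight from /proc (so it also works where ps is not installed); the thread count is what RLIMIT_NPROC is compared against on Linux:

```shell
#!/bin/sh
# Sketch: count processes vs threads from /proc.
procs=0 tasks=0
for d in /proc/[0-9]*; do
    n=$(ls "$d/task" 2>/dev/null | wc -l)   # threads of this process
    [ "$n" -gt 0 ] || continue              # process vanished mid-scan
    procs=$((procs + 1))
    tasks=$((tasks + n))
done
echo "processes: $procs, threads: $tasks, ulimit -u: $(ulimit -u)"
```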
If I want to check if I got to the max of the nproc value should I do: ps -ef | wc -lOr ps -efL | wc -lDoes nproc in limits.conf refers to number of processes or number of threads?
Does nproc in limits.conf refers to number of processes or number of threads?
Linux also provides the ability to create threads using the clone() system call. However, Linux does not distinguish between processes and threads. In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a program.We need to distinguish between the actual implementation and the surface you see. From user (system software developer) point of view there is a big difference: threads share a lot of common resources (e.g. memory mappings - apart from stack, of course - file descriptors). Internally (warning: imprecise handwaving arguments) the Linux kernel1) is using what it has at hand, i.e. the same structure for processes and for threads, where for threads of a single process it doesn't duplicate some things rather it references a single instance thereof (memory map description). Thus on the level of directly representing a thread or a process there is not much difference in the basic structure, the devil lies in how the information is handled. You may as well be interested in reading Are threads implemented as processes on Linux?1) Remember that "Linux" these days stands mostly for the whole OS, while in fact it only is the kernel itself.
As far as I know in Linux kernel, the structure task_struct represents threads i.e. light weight processes, but not processes. processes are not represented by any structure, but by groups of threads sharing the same thread group id.So is the following from Operating System Concepts correct?Linux also provides the ability to create threads using the clone() system call. However, Linux does not distinguish between processes and threads. In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a program.What does it mean? Thanks. Related How does Linux tell threads apart from child processes?
Does Linux not distinguish between processes and threads?
For your concrete example, there is a function cd_builtin, which is defined in builtins/cd.def (in the bash source code). It normally does a cd by calling that function. But it may fork first if you use it in a pipeline—for example, cd / | echo forks and calls cd_builtin in the child. You can also notice this by how the directory doesn't actually change: anthony@Zia:~$ cd /tmp/ anthony@Zia:/tmp$ cd / | echo -n anthony@Zia:/tmp$ cd / anthony@Zia:/$ Notice how the directory only changes when I don't pipe from cd.
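The fork-in-a-pipeline behaviour above is easy to reproduce in any POSIX shell; a minimal sketch:

```shell
#!/bin/sh
# Sketch: a builtin in a pipeline runs in a forked child,
# so its side effect (the directory change) is lost.
cd /tmp
cd / | cat                          # cd executes in a subshell here
echo "after piped cd: $(pwd)"       # still /tmp
cd /                                # no pipeline: the change sticks
echo "after plain cd: $(pwd)"
```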
I know that external commands are run in the shell by creating a separate process, but what exactly happens when a built-in command is run in a shell? Are they executed as a function, or does the shell create a new thread to execute them?
What exactly happens when a built-in command is run in a shell?
I know this question is pretty old (Feb 16) but here's a response in case it helps someone else. The problem is that you've entered '-F 999', indicating that you want to sample the events at a frequency of 999 times a second. For 'trace' events, you don't generally want to do sampling. For instance, when I select sched:sched_switch, I want to see every context switch. If you enter -F 999 then you will get a sampling of the context switches... If you look at the output of your 'perf record' cmd with something like: perf script --verbose -I --header -i perf.dat -F comm,pid,tid,cpu,time,period,event,trace,ip,sym,dso > perf.txtthen you would see that the 'period' (the number between the timestamp and the event name) would not (usually) be == 1. If you use a 'perf record' cmd like below, you'll see a period of 1 in the 'perf script' output like: Binder:695_5 695/2077 [000] 16231.700440: 1 sched:sched_switch: prev_comm=Binder:695_5 prev_pid=2077 prev_prio=120 prev_state=S ==> next_comm=kworker/u16:17 next_pid=7665 next_prio=120A long-winded explanation but basically: don't do that (where 'that' is '-F 999'). If you just do something like: perf record -a -g -e sched:sched_switch -e sched:sched_blocked_reason -e sched:sched_stat_sleep -e sched:sched_stat_wait sleep 5then the output would show every context switch with the call stack for each event. And you might need to do: echo 1 > /proc/sys/kernel/sched_schedstatsto get the sched_stat events.
I've been trying to enable context switch events on perf and use perf script's dump from perf.data to investigate thread blocked time. So far the only two recording options that seem to be helpful are context switch and all the sched events. Here's the command I'm running on perf: perf record -g -a -F 999 -e cpu-clock,sched:sched_stat_sleep,sched:sched_switch,sched:sched_process_exit,context-switchesHowever, both seem to be incomplete, usually a sched_switch event looks something like this: comm1 0/0 [000] 0.0: 1 sched:sched_switch: prev_comm=comm1 prev_pid=0 prev_prio=0 prev_state=S ==> next_comm=comm2 next_pid=1 next_prio=1 stacktrace...From my understanding, the prev_comm is always the thread that is going to be blocked, and the next_comm is the thread that is going to be unblocked. Is this a correct assumption? If it is, I can't seem to get complete data on the events since there are many threads that get blocked on prev_comm, but never seem to get a corresponding next_comm. Enabling context switches doesn't seem to do much since there is no information on the thread being blocked or unblocked (unless I'm completely missing something, in which I would appreciate an explanation on how they work). Here's how a typical context switch event looks like: comm1 0/0 [000] 0.0: 1 context-switch: stacktrace...tl;dr, how can I do blocked time investigations on linux through perf script's output and what options need to be enabled on perf record? Thanks.
Understanding Linux Perf sched-switch and context-switches
You need to tell ps to show thread information; otherwise it only lists processes: ps -eL -o pid,tid,comm | awk '$1 != $2'will show all the threads, apart from each process’ main thread, i.e. entries in the process table where pid and tid are different. The significant option is -L: without that, ps will only list entries where pid and tid are identical. On FreeBSD, the equivalent option is -H. I haven’t checked other BSDs, or System V.
In manpage of ps tid TID the unique number representing a dispatchable entity (alias lwp, spid). This value may also appear as: a process ID (pid); a process group ID (pgrp); a session ID for the session leader (sid); a thread group ID for the thread group leader (tgid); and a tty process group ID for the process group leader (tpgid). tgid TGID a number representing the thread group to which a task belongs (alias pid). It is the process ID of the thread group leader.In Ubuntu, tid and tgid seem always the same as pid, for both user processes, and kernel threads (I run ps -o pid,tid,tgid,cmd) Is it true in Linux, and why? Is it true in other Unix such as System V or BSD? Thanks.
Are tid and tgid always the same as pid in the output of ps?
Interrupts are handled by the operating system; threads (or processes, for that matter) aren't even aware of them. In the scenario you paint:

1. Your thread issues a read() system call; the kernel gets the request, realizes that the thread won't do anything until data arrives (blocking call), so the thread is blocked.
2. The kernel allocates space for buffers (if needed), and initiates the "find the block to be read, request for that block to be read into the buffer" dance.
3. The scheduler selects another thread to use the just freed CPU.
4. All goes their merry way, until...
5. ... an interrupt arrives from the disk. The kernel takes over, sees that this marks the completion of the read issued before, and marks the thread ready.
6. Control returns to userspace. All goes their merry way, until...
7. ... somebody yields the CPU for one of a thousand reasons, and it just so happens the just freed CPU gets assigned to the thread which was waiting for data.

Something like that, anyway. No, the CPU isn't assigned to the waiting thread when an interrupt happens to signal completion of the transfer. It might interrupt another thread, and execution probably resumes that thread (or perhaps another one might be selected).
I've been reading a bit about threads and interrupts. And there is a section which says that parallel programming using threads is simpler because we don't have to worry about interrupts. However, what is the mechanism which signals the release of the blocking system call, if not an interrupt? Example: I read a file in my thread which uses a blocking system call to read the file from the disk. During that time, other threads are running. At some point the file is ready to be read from the hard disk. Does it notify the processor of this via a hardware interrupt, so that it can do a context switch to the thread which asked for the file?
Are threads which are executing blocking system calls awoken by interrupts?
POSIX uses the term context switch for at least two different purposes, without attempting to define it rigorously (or even providing a definition):switching between threads, and switching between processesRather, POSIX assumes you already know what the term means. For instance,3.118 CPU Time (Execution Time) The time spent executing a process or thread, including the time spent executing system services on behalf of that process or thread. If the Threads option is supported, then the value of the CPU-time clock for a process is implementation-defined. With this definition the sum of all the execution times of all the threads in a process might not equal the process execution time, even in a single-threaded process, because implementations may differ in how they account for time during context switches or for other reasons.Further reading:thread context switch vs process context switch system call and context switch The Context-Switch Overhead Inflicted by Hardware Interrupts (and the Enigma of Do-Nothing Loops), Dan Tsafrir Verified Process-Context Switch for C-Programmed Kernels, Starostin and Tsyban
Is a POSIX context switch well-defined? Is it the same thing as switching threads in C? Can the C compiler generate everything for a context switch, or is assembly programming still needed for a routine that switches the threads or switches the "context"? Is it even defined what is meant by "context" - isn't it the same as a thread?
Does POSIX define context switch?
Different threads can certainly be in a different scheduler state at the same time. In fact, if they're all in the same state, that's a coincidence (except for stopped (T), because that affects the whole process). The subdirectory /proc/PID/task contains a subdirectory per thread of the process. The files in this directory are mostly the same as in the per-process directory. Some of the information is just duplicated (e.g. memory-related information, environment, privileges, etc.). Information that's specific to a thread, such as the scheduler state (running/sleeping/IO/…), can differ.
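A minimal sketch of reading the per-thread state from those task directories; the state letter is the third field of /proc/PID/task/TID/stat, extracted here by stripping everything up to the closing parenthesis of the comm field (this assumes the comm name itself contains no ')'):

```shell
#!/bin/sh
# Sketch: print the scheduler state (R, S, D, ...) of every thread of a process.
pid=$$
for t in /proc/"$pid"/task/*; do
    tid=${t##*/}
    state=$(sed 's/^[^)]*) //; s/ .*//' "$t/stat")
    echo "tid $tid: state $state"
done
```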
Do all threads of a specific process share the same status (D, R, S, ...) or may there be differences among these threads? If so, where in /proc do I find information about the status of a certain thread? I am reading the process status from the /proc/<PID>/status files at the moment.
Status of a threads vs. status of a process
There is a file that associates a thread to its network namespace: /proc/[PID]/task/[TID]/ns/netwhere TID is the thread ID. This solved my issue.
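A quick sketch comparing the per-process and per-thread namespace links; for a single-threaded process like a plain shell the two resolve to the same net namespace inode, whereas for the worker thread in the question they would differ after the unshare():

```shell
#!/bin/sh
# Sketch: compare the network namespace of a process and of one of its threads.
pid=$$
proc_ns=$(readlink /proc/"$pid"/ns/net)
task_ns=$(readlink /proc/"$pid"/task/"$pid"/ns/net)
echo "process: $proc_ns"
echo "thread:  $task_ns"
```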
/proc/[pid]/ns/net contains a link to the inode representing the network namespace of the process with PID [pid]. Is there something similar for threads? My use case is a multi-threaded application, where there's one main thread and a group of worker threads. The generic worker W creates a new network namespace N with a call to unshare() (which makes W enter N), pushes one end of a veth pair in N and leaves it (it uses an fd pointing to the root namespace to go back to such namespace). Since no processes are in N after W goes back to the root namespace, N is destroyed when that happens, and I do not want that. The solution I thought about is to mount a link to N somewhere in the filesystem. This is what iproute2 netns does: mounting a link to /proc/[pid]/ns/net. The problem, in my case, is that /proc/[pid]/ns/net keeps referencing the root namespace, only W changes namespace, hence I cannot use it and I need a file/something else which points to the namespace of a thread. Is there such a thing in Linux?
Is there a file that associates a thread to its network namespace?
When looking at /proc/${pid}/status, then the Tgid: and Pid: fields will always match, since they're the same for a process or for the main thread of a process. The reason why there are two separate fields is that the same code is used to produce /proc/${pid}/task/${tid}/status, in which Tgid: and Pid: may differ from each other. (More specifically, Tgid: will match ${pid} and Pid: will match ${tid} in the file name template used above.)The naming is a bit confusing, mainly because threading support was only added to the Linux kernel later on and, at the time, the scheduler code was modified to reuse the logic that used to schedule processes so it would now schedule threads. This resulted in reusing the concept of "pids" to identify individual threads. So, in effect, from the kernel's point of view, "pid" is still used for threads and "tgid" was introduced for processes. But from userspace you still want the PID to identify a process, therefore userspace utilities such as ps, etc. will map kernel's "tgid" to PID and kernel's "pid" to "tid" (thread id.)
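The two fields can be seen side by side in the per-task status file; a minimal sketch using the current (single-threaded) shell, where TID == PID and hence the fields match:

```shell
#!/bin/sh
# Sketch: show Tgid: and Pid: from /proc/PID/task/TID/status.
pid=$$
grep -E '^(Tgid|Pid):' "/proc/$pid/task/$pid/status"
```

For a non-main thread of a multithreaded program, the same grep on its task directory would show Tgid equal to the process's PID and Pid equal to the thread's own TID.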
tgid and pid are the same concept for any process or for any lightweight process. In /proc/${pid}/status, tgid and pid are distinct fields. Are tgid and pid ever different for a process or lightweight process? Thanks.
Are tgid and pid ever different for a process or lightweight process?
You can use taskset from util-linux.The masks may be specified in hexadecimal (with or without a leading "0x"), or as a CPU list with the --cpu-list option. For example, 0x00000001 is processor #0, 0x00000003 is processors #0 and #1, 0xFFFFFFFF is processors #0 through #31, 0x32 is processors #1, #4, and #5, --cpu-list 0-2,6 is processors #0, #1, #2, and #6. When taskset returns, it is guaranteed that the given program has been scheduled to a legal CPU.
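A minimal sketch: pin a command to CPU 0 and verify the resulting affinity via /proc, guarded in case taskset (util-linux) is not installed:

```shell
#!/bin/sh
# Sketch: run a child restricted to CPU 0 and show its allowed-CPU list.
if command -v taskset >/dev/null 2>&1; then
    taskset -c 0 sh -c 'grep Cpus_allowed_list /proc/self/status'
else
    echo "taskset not available"
fi
```

With taskset present, the child should report an allowed-CPU list of just 0, i.e. it behaves as if on a single-core machine.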
I have a bug in my Linux app that is reproducable only on single-core CPUs. To debug it, I want to start the process from the command line so that it is limited to 1 CPU even on my multi-processor machine. Is it possible to change this for a particular process, e.g. to run it so that it does not run (its) multiple threads on multiple processors?
Run process as if on a single-core machine to find a bug
The settings specified in /etc/security/limits.conf are applied by pam_limits.so (man 8 pam_limits). The pam stack is only involved during the creation of a new session (login). Thus you need to log out and back in for the settings to take effect.
I am increasing the nproc limit for a development user account on my RHEL 6 system. After searching for a robust solution, I zeroed in on editing /etc/security/limits.conf with these two lines: @dev_user hard nproc 4096 @dev_user soft nproc 4096In some cases I have to deal with a large number of threads, which is why I want those numbers high. This solution also serves the purpose well. BUT my problem is that whenever I edit that file with sudo permission, it only takes effect after a system restart. This dev_user has been provided root access with sudo permissions only. My humble request: please suggest a solution which does the task without a restart. Also, the increased limits should last until someone edits them again.
Increasing nproc limit for a non-root user . Only effective by restart
Following @Tomes advice, I'm trying to answer my own question, based on my comment exchange with @user10489. Of course I am no expert on this matter, so don't hesitate to amend or correct my statements if needed. But first, a clarification, because on a lot of websites people confuse block size and sector size:A block is the smallest amount of data a file system can handle (very often 4096 bytes by default, for example for EXT4, but it can be changed during formatting). I believe in the Windows world that's called a cluster. A sector is the smallest amount of data a drive can handle. Since circa 2010, all HDDs use 4096-byte sectors (i.e., the physical sector size is 4096 bytes). But to stay compatible with older OSes, which can only handle HDDs with 512-byte sectors, modern drives still present themselves as HDDs with 512-byte sectors (i.e., their logical sector size is 512 bytes). The conversion between the logical 512 bytes, as seen by the OS, and the physical 4096 bytes of the HDD is done by the HDD's firmware. These are called Advanced Format HDDs (aka 512e/4Kn HDDs, e for emulated and n for native)So, an out-of-the-box HDD presents itself with a logical sector size of 512 bytes, because the drive's manufacturer wants it to be recognized by all OSes, including old ones. But all modern OSes can handle native 4K drives (Linux can do this since kernel 2.6.31 in 2010). So a legitimate question is: if you know you won't ever use pre-2010 OSes, wouldn't it make sense, prior to using a new HDD, to modify its logical sector size from 512 bytes to 4096 bytes? Someone did a benchmark to find out if there are real benefits to this, and found out that there was a real difference only in one case: single-threaded R/W tests. In multi-threaded tests, he found no significant difference. My question is: does this specific use case translate into real life? E.g., does Linux do a lot of single-threaded R/W operations?
In which case setting the HDD's logical sector size to 4096 would result in some real benefits. I still don't have the answer to this question. But I think another way to look at it is to say that, on modern OSes, it doesn't hurt to change a drive's default 512-byte logical sector size to 4096 bytes: best case scenario, you get some performance improvements if the OS does single-threaded R/W operations, and worst case scenario, nothing changes. Again, the only reason a drive uses 512-byte logical sectors out-of-the-box is to stay compatible with older pre-2010 OSes. On modern OSes, setting it to 4096 bytes won't hurt. One last thing to notice is that not all HDDs support that change. As far as I know, those that do explicitly report their supported logical sector sizes:

# hdparm -I /dev/sdX | grep 'Sector size:'
     Logical  Sector size:                   512 bytes [ Supported: 512 4096 ]
     Physical Sector size:                  4096 bytes

It can then be changed, also with hdparm, or with the manufacturer's proprietary tools.

[ EDIT ] But there's a reason why changing the logical sector size from 512 to 4K may not be such a good idea. According to Wikipedia, aside from the OS, applications are also a potential source of 512-byte-based code. So, does that mean that even with a modern OS supporting 4Kn, you can get into trouble if a specific application doesn't support it? In that case it probably makes more sense to keep the HDD's default 512e logical sector size, unless you can be absolutely sure that all your applications can handle 4Kn.

[ EDIT 2 ] On second thought, there's probably no big risk in switching to 4K sectors on modern hardware and software. Most software works at the filesystem level, and software that has direct raw block access (formatting tools, cloning tools, ...) will probably support 4K sectors, unless it's outdated. See also Switching HDD sector size to 4096 bytes
Modern HDDs are all "Advanced Format" ones, i.e. by default they report a logical/physical sector size of 512/4096. By default, most Linux formatting tools use a block size of 4096 bytes (at least that's the default on Debian/EXT4). Until today, I thought that this was kind of optimized: Linux/EXT4 sends chunks of 4K data to the HDD, which can handle them optimally, even though its logical sector size is 512 bytes. But today I read this quite recent (2021) post. The guy did some HDD benchmarks, in order to check if switching his HDD's logical sector size from 512e to 4Kn would provide better performance. His conclusion:

Remember: My theory going in was that the filesystem uses 4k blocks, and everything is properly aligned, so there shouldn’t be a meaningful difference. Does that hold up? Well, no. Not at all. (...) Using 4kb blocks… there’s an awfully big difference here. This is single threaded benchmarking, but there is consistently a huge lead going to the 4k sector drive here on 4kb block transfers. (...)
Conclusions: Use 4k Sectors! As far as I’m concerned, the conclusions here are pretty clear. If you’ve got a modern operating system that can handle 4k sectors, and your drives support operating either as 512 byte or 4k sectors, convert your drives to 4k native sectors before doing anything else. Then go on your way and let the OS deal with it.

Basically, his conclusion was that there was quite a performance improvement in switching the HDD's logical sector size to 4Kn, vs the out-of-the-box 512e.

Now, an important thing to note: that particular benchmark was single-threaded. He also did a 4-threaded benchmark, which didn't show any significant differences between 512e and 4Kn. Thus my questions:

His conclusion holds up only if you have single-threaded processes that read/write on the drive. Does Linux have such single-threaded processes? And thus, would you recommend setting a HDD's logical sector size to 4Kn?
Are there any benefits in setting a HDD's logical sector size to 4Kn?
According to the man page:

Linux supports PTHREAD_SCOPE_SYSTEM, but not PTHREAD_SCOPE_PROCESS

And if you take a look at glibc's implementation:

  /* Catch invalid values.  */
  switch (scope)
    {
    case PTHREAD_SCOPE_SYSTEM:
      iattr->flags &= ~ATTR_FLAG_SCOPEPROCESS;
      break;

    case PTHREAD_SCOPE_PROCESS:
      return ENOTSUP;

    default:
      return EINVAL;
    }
I read that there is a 1:1 mapping of user and kernel threads in Linux. What is the difference between PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM in Linux? If the kernel treats every thread like a process, then there shouldn't be any performance difference. Correct me if I'm wrong.
Pthread scheduler scope variables?
thread1 and thread2 are child threads spawned by the main process, but the main process can still do work itself. In your htop output, bin/process (and all child threads) is using 100% of the CPU. 70% of the CPU is used by thread1 and 0% by thread2; the remaining difference is the main process that spawns/manages these child threads.
I'm observing a multi threaded process in htop in tree view. If I were to strip it just to the problematic part, it looks somewhat like this: CPU% bin/process 100 `- thread1 70 `- thread2 0 The process altogether is using 100% and one of the threads is using 70%. Where do I place the other 30%?
'htop' process and threads cpu usage?
If you want something coming from kernel space, then you might want to look at semaphores (sem_overview(7)). You can build higher-level constructs from a semaphore, like "event", "condition", "mutex" ("critical sections"). There are older and newer interfaces in C. Some higher-level languages like Python and Perl also expose the interface. The "mutex" that you are likely talking about is the pthreads mutex, which will be faster than anything built in user space, especially one using a spinlock (which was designed for extremely low-level OS constructs). Some pthreads implementations may use an OS-level semaphore or may use other constructs.
In Linux, or a library for Linux, is there an equivalent to the critical section in Win32? I am aware of mutexes, but a critical section is not the same as a mutex, since a critical section uses a user-mode spinlock and an event object internally, and so it must be faster than a mutex.
critical section for linux
From the link I provided about the Completely Fair Scheduler, we see that kernel 2.6.24 has what is called group scheduling. To quote from Chandandeep Singh Pabla:

For example, let's say there is a total of 25 runnable processes in the system. CFS tries to be fair by allocating 4% of the CPU to each of them. However, let's say that out of these 25 processes, 20 belong to user A while 5 belong to user B. User B is at an inherent disadvantage, as A is getting more CPU power than B. Group scheduling tries to eliminate this problem. It first tries to be fair to a group and then to the individual tasks within that group. So CFS, with group scheduling enabled, will allocate 50% of the CPU to each of users A and B. The allocated 50% share of A will be divided among A's 20 tasks, while the other 50% of the CPU time will be distributed fairly among B's 5 tasks.

Now, this applies to the above question, because when a process spawns a new thread, the thread will be in that process's scheduling group. This prevents a program that spawns 1000 threads from hogging all of the CPU time, because each thread will only get 1/1001th (1000 threads plus the original program) of that particular process group's run time. So, by slowing down how much time a thread gets compared to the whole system, this properly punishes threaded applications.
I'm wondering about the "punishment" that occurs when a new thread is created. From my understanding of clone(2), NPTL (New POSIX Thread Library), and CFS (Completely Fair Scheduler), when a new thread is created it is seen as a new process, because NPTL uses a 1:1 thread model. From what I've read about the scheduler, when a new process is added to the run-queue, the fair_clock variable increases to a fraction of the wall clock. Poking around the internals of pthread_create(3), clone is eventually called, just like it would be in a fork(2). Now, a process will have a 1:1 model and so will threads. So, does a thread also suffer this same exact fate? Obviously, a thread must be punished in some form, or else a multi-threaded process could hog most of the CPU time by filling up the RR (round robin) system that CFS uses. If this is true, then what are the advantages of using threads over forks? Is it just the automatic shared heap space (as opposed to using shm_open(2))?
Thread process in linux
On some demand-paged virtual memory systems, the operating system refuses to allocate anonymous pages (i.e. pages containing data without a filesystem source, such as runtime data, program stack etc.) unless there is sufficient swap space to swap out the pages in order to free up physical memory. This strict accounting has the advantage that each process is guaranteed access to as much virtual memory as it allocates, but it also means that the amount of virtual memory available is essentially limited by the size of the swap space. In practice, programs tend to allocate more memory than they use. For instance, the Java Virtual Machine allocates a lot of virtual memory on startup, but does not use it immediately. Memory accounting in the Linux kernel attempts to compensate for this by tracking the amount of memory actually in use by processes, and overcommits the amount of virtual memory. In other words, the amount of virtual memory allocated by the kernel can exceed the amount of physical memory and swap space combined on the system. While this leads to better utilization of physical memory and swap space, the downside is that when the amount of memory in use exceeds the amount of physical memory and swap space available, the kernel must somehow free memory resources in order to meet the memory allocation commitment. The kernel mechanism that is used to reclaim memory is called the out-of-memory killer (OOM killer). Typically the mechanism will start killing off memory-hogging "rogue" processes to free up memory for other processes. In some environments, a viable option to free memory and bring the system back to operation can be to reboot. For these cases the kernel can be configured to panic on an out-of-memory condition via the vm.panic_on_oom sysctl setting. The memory accounting heuristic the kernel uses can be made more liberal or strict via the vm.overcommit_memory sysctl setting.
When strict memory accounting is in use, the kernel will no longer allocate anonymous pages unless it has enough free physical memory or swap space to store the pages. This means it is essential that the system is configured with enough swap space.
I started 700+ threads from a single program, and my /proc/[PID]/status file shows the following output.

VmPeak:  7228104 kB
VmSize:  7228104 kB
VmLck:         0 kB
VmHWM:      3456 kB
VmRSS:      3456 kB
VmData:  7222340 kB
VmStk:        88 kB
VmExe:         4 kB
VmLib:      1540 kB
VmPTE:      2864 kB
StaBrk: 15e84000 kB
Brk:    15ec6000 kB
StaStk: 7fff765095a0 kB
Threads: 706

But I have only 2 GB RAM and 4 GB swap space. Can somebody tell me how the virtual memory got to 7 GB+?
Viewing virtual memory usage
These settings don’t have the same effect:

- threads-max limits the number of processes which can be instantiated simultaneously
- pid_max limits the identifier assigned to processes

threads-max limits the amount of memory that can end up allocated to task_struct instances. pid_max determines when pids roll around (if ever). Constraining pid_max doesn’t have an effect on memory consumption (as far as I’m aware, unless lots of pids end up stored as text), and can end up affecting performance since finding a new pid is harder once pid_max has been reached. A lower pid_max also increases the likelihood of pid reuse within a given time period.
I understand the difference between /proc/sys/kernel/pid_max and /proc/sys/kernel/threads-max. There's a good explanation at the answer to Understanding the differences between pid_max, ulimit -u and thread_max:/proc/sys/kernel/pid_max has nothing to do with the maximum number of processes that can be run at any given time. It is, in fact, the maximum numerical PROCESS IDENTIFIER than can be assigned by the kernel. In the Linux kernel, a process and a thread are one and the same. They're handled the same way by the kernel. They both occupy a slot in the task_struct data structure. A thread, by common terminology, is in Linux a process that shares resources with another process (they will also share a thread group ID). A thread in the Linux kernel is largely a conceptual construct as far as the scheduler is concerned. Now that you understand that the kernel largely does not differentiate between a thread and a process, it should make more sense that /proc/sys/kernel/threads-max is actually the maximum number of elements contained in the data structure task_struct. Which is the data structure that contains the list of processes, or as they can be called, tasks.However, effectively, both limit the maximum number of concurrent threads on a host. This number will be - to my understanding - the minimum of pid_max and threads-max. So why are both needed? I understand that the default value pid_max is based on the number of possible CPUs of the machine while the default of threads-max is derived from the number of pages. But since both have the same effect, couldn't Linux just have one value that would be the minimum of both?
Why does Linux needs both pid_max and threads-max?
The "-T" option for the ps command enables thread views. # ps -T -p <pid>For example, to list the threads for the following java process: # ps -ef | grep 97947 deploy 97947 97942 1 00:51 ? 00:13:51 javaAlternatively, you can use top which can show a real-time view of individual threads. To enable thread views in the top output, invoke top with "-H" option. This will list all Linux threads. You can also toggle on or off thread view mode while top is running, by pressing 'H' key. top - 14:43:25 up 6 days, 5:40, 2 users, load average: 0.87, 0.33, 0.22 Threads: 684 total, 1 running, 683 sleeping, 0 stopped, 0 zombie %Cpu(s): 6.3 us, 4.0 sy, 0.0 ni, 89.6 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 7910136 total, 384812 free, 1603096 used, 5922228 buff/cache KiB Swap: 8388604 total, 8239100 free, 149504 used. 5514264 avail MemNote how in the example above the number of threads on the system is listed.
The ps command can be run in a terminal to view information about a process. For example:

#list processes
ps aux
#with executable paths
ps -ef
#path for a specific process
ps -p [pid]

However, if a process is large, it may be necessary to isolate what individual threads are doing. For example, kernel_task. The command sudo dtruss -ap [pid] is not optimal because it requires turning off system resource protection. Is there a way to find ps information about threads without turning off system resource protection? Thanks
How do I do "ps" command on a thread?
Time-sliced threads are threads executed by a single CPU core without truly running at the same time (the core switches between threads over and over again). This is the opposite of simultaneous multithreading, where multiple hardware threads execute at the same time. Interrupts interrupt thread execution regardless of the threading technology, and when the interrupt handling code exits, control is given back to thread code.
What does it mean when threads are time-sliced? Does that mean they work like interrupts, not exiting until the routine is finished? Or does the CPU execute one instruction from one thread, then one instruction from the second thread, and so on?
Threads vs interrupts
The top man page describes the field you're looking for:

nTH -- Number of Threads
The number of threads associated with a process.

(The name above would probably change depending on OS and top version.) Interactively, you can use the f (Fields Management) key, then move down to nTH, activate it with space, select it for column display order change with →, move it up with ↑, and validate with esc. If you're satisfied with the result, you can finally save it in ~/.toprc with shiftw, so you won't have to do this again. I'm not sure if there's another method (eg command line) to toggle this field.
I know there is a one-line-per-thread view (-H), but the particular threads are not grouped by master process. In fact, I would be completely satisfied with the sole thread count per process (= how many sub-threads does some process create?).
How do I display the thread / child process count of a process in top?
It is very simple; I assume you want to take a thread dump when you find the string in the log. So when your script 1 finds the string in the log, you need to run the thread dump script. To do so, include the thread dump logic inside the if [[ "$count" -ge 1 ]]; then block. filelocation=$1 string=$2 count=$(cat $1 | grep -i "$2" | wc -l)if [[ "$count" -ge 1 ]]; then echo "WARNING: There are $count occurrences of $2 in log file" PID=$(ps -ef | grep java | awk '{print $2}') N=3 INTERVAL=5 for ((i=1;i<=$N;i++)) do # d=$(date +%Y%m%d-%H:%M:%S) # dump="/tmp/Threaddump-$PID-$d.txt" dump="/tmp/ThreadDump-`hostname`-`date '+%F-%H:%M:%S'`.gz" echo $i of $N: $dump /opt/jdk1.8.0_121/jdk1.7.0_40/bin/jstack -l $PID > $dump sleep $INTERVAL done exit 1 else echo "OK: No lines with $2 in log file" exit 0 fiIf you want the script to continuously watch the log and take thread dumps, you need a wrapper loop with a 5-10 second sleep that does this parsing and dumping logic repeatedly. Code change for continuously monitoring the log: add an infinite loop after the file location statement, include a sleep of 60 seconds (it depends on how much sleep time you need) and close the loop on the last line. You need to do exception handling, and you can daemonize this script. As mentioned by @wildcard, you need to optimize the parsing and PID parts.
I need to write a script to create an alert and take thread dumps if a particular string is found in a log file - /tmp/area.log. I am able to do this in 2 separate scripts so far, but would like to combine them into one. Script 1: create an alert filelocation=$1 string=$2 count=$(cat $1 | grep -i "$2" | wc -l)if [[ "$count" -ge 1 ]]; then echo "WARNING: There are $count occurrences of $2 in log file" exit 1 else echo "OK: No lines with $2 in log file" exit 0 fiScript 2: Create thread dumps #!/bin/bash PID=$(ps -ef | grep java | awk '{print $2}') N=3 INTERVAL=5for ((i=1;i<=$N;i++)) do # d=$(date +%Y%m%d-%H:%M:%S) # dump="/tmp/Threaddump-$PID-$d.txt" dump="/tmp/ThreadDump-`hostname`-`date '+%F-%H:%M:%S'`.gz" echo $i of $N: $dump /opt/jdk1.8.0_121/jdk1.7.0_40/bin/jstack -l $PID > $dump sleep $INTERVAL done
Look up a string in a log to set an alert and generate thread dumps
With htop, you want the TGID column (add it through F2 > Columns). It is also available in top with the same name, but I don't know how to configure top. Linux "processes" are really just thread groups (or task groups), and the "PID" column in top/htop actually shows the threadID (task ID). The same clone(2) system call is used to create both – check out the part about CLONE_THREAD.
When I do a ps -efT (where -T = show threads, possibly with SPID column), I see that all the threads have the same PID, which is as expected. myroot 24958 24958 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24959 7942 0 20:20 pts/12 00:00:11 java -jar myapp.jar myroot 24958 24960 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24961 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24962 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24963 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24964 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jar myroot 24958 24965 7942 0 20:20 pts/12 00:00:00 java -jar myapp.jarAs can be seen above, all the threads share/show the same PID 24958. Now when I do the same with top or htop, I am seeing a different PID for each thread, and this is bothering me. Is there a way to show the same PID for all the threads? Below is the curtailed output for top -H -p 24958 (I am using top with -p so I can explain and show the problem): top - 21:42:44 up 9 days, 18:38, 0 users, load average: 0.00, 0.26, 0.82 Threads: 32 total, 0 running, 32 sleeping, 0 stopped, 0 zombie %Cpu(s): 0.0 us, 0.1 sy, 0.0 ni, 99.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st MiB Mem : 12542.5 total, 10135.3 free, 826.8 used, 1580.4 buff/cache MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 11439.4 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 24958 myroot 20 0 7036228 340720 21084 S 0.0 2.7 0:00.00 java 24959 myroot 20 0 7036228 340720 21084 S 0.0 2.7 0:11.99 java 24960 myroot 20 0 7036228 340720 21084 S 0.0 2.7 0:00.43 GC Thread#0 24961 myroot 20 0 7036228 340720 21084 S 0.0 2.7 0:00.00 G1 Main Marker 24962 myroot 20 0 7036228 340720 21084 S 0.0 2.7 0:00.00 G1 Conc#0 --and a few more threads. When I use top -H, I have no way to tell which threads belong to the same process unless I see the same PID for all of them.
Is there any guidance on how to get the same PID for all the threads when using top (or htop; as I have observed, htop has the same issue)? Given @user1686's answer to use the TGID column, I am wondering what the PID shown for each thread refers to.
top shows different pid for threads of same process. How to fix it?
If you want to see just that LWP process, ps -e -q 10172. If you want to see all the related threads, then you can do ps -eL -q 10172 So, for example, on my machine rsyslog has threads: PID LWP TTY TIME CMD 22316 22316 ? 00:00:00 rsyslogd 22316 22318 ? 00:02:23 in:imjournal 22316 22319 ? 00:00:00 in:imudp 22316 22320 ? 00:00:07 in:imtcp 22316 22321 ? 00:00:00 in:imtcp 22316 22322 ? 00:00:00 in:imtcp 22316 22323 ? 00:00:00 in:imtcp 22316 22324 ? 00:00:00 in:imtcp 22316 22325 ? 00:00:24 rs:main Q:RegI can see a single thread (eg 22320) % ps -e -q 22320 PID TTY TIME CMD 22316 ? 00:02:55 in:imtcpNote it shows the main PID of the process. I can see all the related process for that thread: % ps -eL -q 22320 PID LWP TTY TIME CMD 22316 22316 ? 00:00:00 rsyslogd 22316 22318 ? 00:02:23 in:imjournal 22316 22319 ? 00:00:00 in:imudp 22316 22320 ? 00:00:07 in:imtcp 22316 22321 ? 00:00:00 in:imtcp 22316 22322 ? 00:00:00 in:imtcp 22316 22323 ? 00:00:00 in:imtcp 22316 22324 ? 00:00:00 in:imtcp 22316 22325 ? 00:00:24 rs:main Q:Reg
How can I list information about a thread/LWP by ps? Why can't I do that simply by: $ ps 10173 PID TTY STAT TIME COMMAND $ ps -L 10173 PID LWP TTY STAT TIME COMMANDThe best I can do $ ps -eL | grep 10173 10172 10173 pts/8 00:00:00 javaIt is a LWP because $ ps -L 10172 PID LWP TTY STAT TIME COMMAND 10172 10172 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10173 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10174 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10175 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10176 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10177 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10178 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10179 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10180 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10181 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10182 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10183 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10184 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10185 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10186 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10187 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10188 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10189 pts/8 Tl 0:00 java -cp target com.mycompany.app.Main 10172 10190 pts/8 Tl 0:00 java -cp target com.mycompany.app.Mainand $ pstree -pau -l -G -s 10172 systemd,1 splash └─lxterminal,3194,t └─bash,12150 └─java,10172 -cp target com.mycompany.app.Main ├─{java},10173 ├─{java},10174 ├─{java},10175 ├─{java},10176 ├─{java},10177 ├─{java},10178 ├─{java},10179 ├─{java},10180 ├─{java},10181 ├─{java},10182 ├─{java},10183 ├─{java},10184 ├─{java},10185 ├─{java},10186 ├─{java},10187 ├─{java},10188 ├─{java},10189 └─{java},10190Thanks.
How can I list information about a thread/LWP by `ps`?
ps -eL | wc -l

gives the total LWP/thread count at any point in time.
How can I get the number of threads in the kernel at a specific sampling rate? I need to measure the utilization of the system directly myself.
Number of Threads in Kernel
Judging by the question you pose, you probably haven't seen problems where threads provide an advantage over standard processes. There are problems, like high-frequency trading for example, where the system becomes sensitive to the number of context switches as well as to switching from user to kernel mode and back. In this case the ability to work within a single memory space, with light context switches or no context switches at all, gives performance increases large enough to worry about. In addition, if you have multiple processes handling a stream of incoming data, you don't want to implement a dispatcher copying data to an available process, since you would have to create queues in shared memory or use network-based IPC to copy data to and from the various processes' memory spaces; with threads you read and parse the data once and keep it internally, where any thread can access that particular event as needed, making execution that much faster. In addition there are tasks that can manipulate the same data at different times or at the same time, in which case it becomes much easier to make sure that you are not trampling over the updates from a different process (thread). Given these and many more problems where a single process space becomes advantageous, you can see the necessity of threads. Now as far as Pthreads are concerned: you don't have to use them, but they provide standardization and hence portability of source code across platforms. EDIT The original question appears to refer to LinuxThreads, which the kernel had to handle as processes, because those were the only kernel-schedulable entities available. As of the 2.6 kernel this is no longer the case: NPTL implements threads at the kernel level as schedulable entities. You can also look at the similar question on StackOverflow.
I read somewhere that Linux threads are really implemented as processes in the kernel since with today's hardware and on the Linux platform, the thread model is inefficient compared to the process model. If this is the case then why do we still use pthreads programming in our projects (other than for backward compatibility)? Why is there so much hesitation in deprecating the pthreads model in Linux?
What is the advantage of using pthreads in Linux?
What do you mean by “non-native”? There isn't a clear definition of “installed by default”, since each distribution has its own default set of installed packages and it's very easy to tune that set. POSIX threads are part of GNU libc, which is a fundamental part of any non-embedded Linux system (there are substitutes for small systems; I think the major ones also include pthread support). The Linux kernel itself includes support for threads. It's not exactly pthreads, but the distinction between what's supported by the kernel alone and what's supported by the standard library on top of the kernel is rarely useful. OpenMP comes through GOMP, which is part of gcc. Unlike the standard library (glibc), it's possible to have a normal Linux system without libgomp installed. There are several implementations of MPI for Linux, including MPICH and OpenMPI. There are normal Linux systems without those, too. Everything is ultimately implemented on top of system calls, i.e. the functionality provided by the kernel. OpenMP, MPI and other libraries are implemented in terms of system calls for process management, interprocess communication (pipes, sockets, shared memory, …) and multithreading (locks, conditions, …).
POSIX threads (pthread) and OpenMP are both libraries for thread programming. But is it right that they are not native to Linux, i.e. that they have to be installed by the user later? If yes, what are the native libraries or functions in Linux? Are they used to implement pthread and OpenMP? To draw a parallel comparison to process programming, if I am correct, the functions fork(), exec*(), waitpid() and pipe() are offered natively by Linux, while MPI is not. Is MPI implemented with those native functions for process programming?
Native and non-native support of thread/process programming in Linux?
You need to try to acquire the lock in all threads which are supposed to be mutually exclusive: void ppp() { pthread_spin_lock(&lock); char a = 'C'; while(1) write(1, &a, 1); }Context-switching isn’t prevented by the lock’s existence, you prevent threads from making progress simultaneously by having them try to acquire the same lock. Threads which can’t acquire the lock spin until they can.
I am using this code in order to visualize how a spinlock would prevent context switching: pthread_spinlock_t lock; void pp() { pthread_spin_lock(&lock); char d = 'D'; while(1) write(1, &d, 1); } void ppp() { char a = 'C'; while(1) write(1, &a, 1); } int main() { pthread_t thread; pthread_t tthread; pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE); pthread_create(&thread, NULL, pp, NULL); pthread_create(&tthread, NULL, ppp, NULL); pthread_join(thread, NULL); pthread_join(tthread,NULL); }The problem is that I was expecting it never to switch to the second thread, since I never release the lock taken in pp(), and to output DDDDDDDDDDDD..., because from my understanding it should prevent context switching. But the output I get is of the form: DCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDC... How can we explain this? Is my understanding of spinlocks incorrect?
How does a spinlock prevent context switching?
A stack is effectively an array -- it contains a bunch of words in contiguous memory. There is an important restriction -- it can only grow and shrink at one end (hence FILO -- First In, Last Out, which is the same as LIFO). Important difference from an array too: the processor stack is logically split into frames, and (unlike arrays) each frame can be a different size from any of the others. Each frame contains what needs to be stored when you make a function call, including:

The return address where the called function jumps, to continue in the calling function.
Space to hold any return value.
A copy of each parameter being passed to the called function.
Copies of the CPU registers, so that the register optimisation in the separate functions doesn't interfere.

If you ever wondered how a function can recurse and yet use the same names for all its parameters and local variables at each level, the answer is that all of them have addresses relative to their current stack frame. The stack frame structure is defined differently for each processor architecture, to adopt the most natural way of storing things. There isn't a "Linux" stack -- Intel and AMD and Sparc will all have their own definitions. Remember you can download pre-compiled libraries that your local compiler has to know how to call from your own code. A stack is also a generic data structure in its own right. For example, if you are parsing the source of a language like C or SQL or XML that allows nested block constructs, then it is natural to maintain a stack of the blocks you are inside as you go. You wouldn't want to do that using the process stack: it's the thing you are parsing that has block structures, not your own code that needs to recurse. The stack for each process is just a part of its user process memory. Typically, the user address space runs from -8MB to 0 to (say) 60MB. The stack starts at -16 and grows downwards (increasingly negative).
Global and static memory assigned by the compiler grows upwards from 0, and any heap allocation grows above that fixed memory. The code lives somewhere separate (for protection reasons). It does the virtual memory system no harm to map your negative address range into paged memory.
I have looked in several places, such as here, but none explain in detail the data structures used for implementing the stack itself (the place where tasks (processes/threads) store their nested call information and such). Is it a linked list, an array, or something else? I can't seem to find this information, but diagrammatically it is always shown as a large memory block (virtual memory) with the heap at one end and the stack at the other. But this is virtual memory we're dealing with, which has all kinds of data structures around it, such as paging. So the question is: what exactly is the implementation of the stack on top of all this? I can't help but think it must be a linked list. The reason is: if you have multiple processes, each with its own stack, how is that implemented? Here we seem to be getting somewhere:

Each process has its own stack for use when it is running in the kernel; in current kernels, that stack is sized at either 8KB or (on 64-bit systems) 16KB of memory. The stack lives in directly-mapped kernel memory, so it must be physically contiguous.
What data structure does the stack use in Linux?
A process/thread is woken up by inserting it into the queue of processes/threads to be scheduled. If you look at the implementation of pthread_cond_signal, there is a user-space list of threads waiting on the condition variable, and one of those gets woken with a futex system call. Checking an array of condition variables on every timer tick would be very slow.

So when they describe the thread being blocked, it doesn't mean it is being put in a blocked queue, like threads waiting on I/O?

Threads are implemented as a mix of user space and kernel space. Just as a thread/process waiting on I/O is put on a kernel-space list to be rescheduled when that particular I/O operation completes, a thread waiting on a condition variable is put on a user-space list belonging to that condition variable. So the answer to your question is "yes and no". :-) The current thread implementation has been optimized over the years, so don't get hung up when the implementation doesn't match the (simpler) principles.
I am studying the locking mechanisms in an OS and came across these POSIX functions:

pthread_cond_wait(pthread_cond_t *c, pthread_mutex_t *m);
pthread_cond_signal(pthread_cond_t *c);

I fully understand the idea behind the sleep/wake-up here, but I am not sure how it is done in hardware and how it affects scheduling, etc. My understanding is that when a thread executes pthread_cond_wait() it goes to sleep (blocks), but what does this actually mean? Yes, it gets de-scheduled and moved into a blocked state somewhere in a privileged queue, but when another thread executes pthread_cond_signal(), how does the CPU wake up the blocked thread? Is it during the timer interrupt that the kernel goes and checks all condition variables and then decides to wake up a thread associated with one that got freed up? I couldn't find a detailed explanation of this anywhere online, or maybe I am not looking correctly. Any help is highly appreciated.
Could someone explain the sleep/wake dynamics in Linux?
In Linux this is the scheduler's job. Some systems will push work to faster/cooler/more-efficient cores, but by default the scheduler prefers to place one thread per idle physical core before doubling up on SMT siblings: with 2 threads per core, logical CPUs #33-64 are typically the hyper-thread siblings of #1-32, so 32 busy threads land one per physical core. The software you are running also needs to take advantage of multiple cores for any benefit to be had, so it may be that your workload can only be split into 32 threads by your choice of software (or its configuration).
lscpu gives:

Thread(s) per core: 2
Core(s) per socket: 32

When running an intensive 32-thread process, why does htop show almost 100% CPU activity on #1-32, but very little activity on #33-64? Why aren't the process's 32 threads distributed evenly among CPUs #1-64?
Distribution of threads among CPUs?
So

sleep 10 & echo $!

is two commands:

sleep 10 &
echo $!

That they're on the same line doesn't change this. So the shell will fork() a new process and put the process ID of the new process into $!. Then the new process will run the sleep, while the shell itself runs the echo. So you can be sure $! will always hold the PID of the new process, even if that process fails; it's the return value of the fork().
In my tests, I always get the correct result so far with this:

[fabian@manjaro ~]$ sleep 10 & echo $!
[1] 302657
302657

But sleep and echo are executed simultaneously here, so I would expect that it could sometimes happen that echo runs before the value of $! is set properly. Can this happen? Why hasn't it so far for me? My ultimate goal: execute two tasks in parallel and then wait for both before moving on. The current plan is to use something like foo & bar; wait $!; baz. Will this always work, or can it sometimes wait for an arbitrary older background process (or nothing at all, if $! is empty)?
Will "$!" reliably return the correct ID with "&"?