There is the GrSecurity patchset; you can find the 3.18 kernel patches here: http://deb.digdeo.fr/grsecurity-archives/kernel-3.18/
Gentoo has a good page on setting up the GrSecurity patches here: https://wiki.gentoo.org/wiki/Hardened/Grsecurity2_Quickstart. You must patch the kernel and then compile it with certain config options. GrSecurity offers the highest security you can get on Linux.
Recently they have taken the stable patches off their site for various political reasons, but they are still available at the link above, or if you pay for them. The patches themselves are, of course, GPL'd.
You can also look into SELinux; there is some info on the Gentoo page here: https://wiki.gentoo.org/wiki/Project:SELinux. I'm sure there are many other places to read about it, but the Gentoo pages are often skewed toward people who will be compiling things themselves.
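For reference, the build workflow usually looks roughly like the sketch below. The directory and patch file names are illustrative only; match them to the actual 3.18 patch you download from the archive above.

```shell
# Sketch only -- names are illustrative, adjust to the patch you downloaded.
cd linux-3.18.x
patch -p1 < ../grsecurity-3.1-3.18.x.patch   # apply the grsecurity patch
make menuconfig   # enable the new options under "Security options"
make && make modules_install && make install
```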
|
I am working on prototyping a product using the 3.18 Linux kernel.
I am trying to understand the ways to harden the Linux kernel. I am referring to some documentation that lists numerous configuration options that should be taken care of. Is that all there is to hardening the kernel, or are there other things I should take care of?
P.S.: I cannot move to a newer kernel version, as the SoC is best supported on 3.18. I am using gcc-4.9 as the toolchain to build all the software.
|
Ways to Harden Linux Kernel? [closed]
|
Removing Perl may harm your system, because there are a ton of programs which depend on it. To keep your system secure, it is better to include the following line in your sources.list:
deb http://security.debian.org/ jessie/updates main contrib non-free
so that you can upgrade/patch the vulnerable packages through apt as soon as possible.
Here is some information about the 3 vulnerabilities:
DSA-3628-1: For the stable distribution (jessie), these problems have been fixed in version 5.20.2-3+deb8u6.
DSA-3501-1: For the oldstable distribution (wheezy), this problem has been fixed in version 5.14.2-21+deb7u3. For the stable distribution (jessie), this problem has been fixed in version 5.20.2-3+deb8u4. For the unstable distribution (sid), this problem will be fixed in version 5.22.1-8.
DSA-3441-1: The oldstable distribution (wheezy) is not affected by this problem. For the stable distribution (jessie), this problem has been fixed in version 5.20.2-3+deb8u2. For the unstable distribution (sid), this problem will be fixed soon.
We recommend that you upgrade your perl packages.

Edit:
You can list the packages that depend on perl with the apt-cache rdepends perl command. Chapter 3, section 3.6.1 "Removing Perl" deals with the consequences of removing Perl: "So, without Perl and, unless you remake these utilities in shell script, you will probably not be able to manage any packages (so you will not be able to upgrade the system, which is not a Good Thing)."
|
I'm working on a BeagleBone Black shipped with Debian. I started reading the Securing Debian page, and partway through I saw chapter 3, section 3.6.1, "Removing Perl". A quick Google search gives three 2016 security advisories in the first five results:
DSA-3628-1
DSA-3501-1
DSA-3441-1
The document even states that attempting to remove Perl is non-trivial. So is there a strong recommendation to remove Perl to harden the system?
|
Is Removal of Perl the Recommendation for Hardening a System? [closed]
|
Alright, I've not gotten any good responses, and it took trial and error, as well as monitoring, to determine what works to achieve this.
I found that some additional things were needed, so the example in the question might not work on all systems: a full path should be used for the executables. Also, when specifying a range of ports, you need to add --match multiport, otherwise iptables will ignore the rule entirely. Lastly, I added a shebang at the top to ensure that the script will be run correctly by the shell.
So here is the final version:
/usr/local/csf/bin/csfpre.sh
#!/bin/sh
/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT
/sbin/iptables -A INPUT -p all -s 60.168.112.0/20 -j DROP
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26 -s 1.0.0.0/8 -j DROP
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26 -s 112.0.0.0/7 -j DROP
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26 -s 116.96.0.0/12 -j DROP
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26 -s 116.118.0.0/16 -j DROP
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26,110,465,587,995 -s 117.0.0.0/8 -j DROP

Now for the breakdown of what is going on on this cPanel server with CSF installed as the firewall. CSF allows adding custom rules that run in separate groups. All groups are ultimately run by iptables: first csfpre.sh, then CSF's own rules, then csfpost.sh.
Create a csfpre.sh file if it doesn't exist. You can also put one somewhere under /etc/, but the version in /usr/local/csf/bin/ always takes priority.
Add the shebang at the top:
#!/bin/sh
My plan is to do some port blocking via csfpre.sh, but rather than have it run through all the rules, I have it first detect whether the connection is a webpage visit. By checking this first, it reduces the latency/response time.

Ports 80 and 443 are the HTTP and HTTPS ports. Before anything else, if the input is for either of those ports, ACCEPT and stop checking rules in this csfpre group:
/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT

Now, we could just block all other ports except 80 and 443 for a given source range if the next line is as such:
/sbin/iptables -A INPUT -p all -s 60.168.112.0/20 -j DROP

Since all web traffic is accepted already, this will not block website traffic from that range. If this line came first, it would block all traffic, including web traffic. I don't want to block good users from viewing websites by blocking the entire subnet they are in.

If blocking is reduced to specific ports, then we can block a single port, a range, a list, or a combination of these. To block only FTP:
/sbin/iptables -A INPUT -p tcp --dport 21 -s 1.0.0.0/8 -j DROP

FTP actually uses a few different ports to establish a connection, and there is also SFTP/SSH, which is port 22 by default, so it is better to block a range, using the starting port and the ending port separated by a colon:
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26 -s 1.0.0.0/8 -j DROP

You must use --match multiport if you are using a range or a list. Lists can have no more than 15 ports, and each port in a range counts against the 15-port total.
You can also block the standard SMTP email ports:
/sbin/iptables -A INPUT -p tcp --match multiport --dport 110,465,587,995 -s 117.0.0.0/8 -j DROP

And you can indeed use ranges in your list to block the FTP and mail ports in one rule:
/sbin/iptables -A INPUT -p tcp --match multiport --dport 21:26,110,465,587,995 -s 117.0.0.0/8 -j DROP

Save your script, and restart the firewall.
Let CSF and cPHulk block individual IP addresses as needed.

To test, you can use your smartphone while it is off your local connection. Get the IP address of your phone, and be sure it is not the same as the computer you'll be working from. You can then run through all the scenarios, assuming you have the phone set up to check or send email through your server, and an FTP program as well.
For the blocking, I've decided to restrict entire subnets from accessing FTP, and for some, SMTP. In order to whittle it down, I've analyzed all the incoming alerts and then compared the worst subnets with the countries listed on this website: http://www.tcpiputils.com/browse/ip-address
The end goal is to reduce the number of individual IPs being blocked by CSF. Blocking thousands of IPs can cause a latency issue, so having some standard rules to block countries rife with malicious users reduces the need to manage such a large number of individual IPs.
To recalculate valid subnet ranges, use this tool: http://www.subnet-calculator.com/cidr.php
112.0.0.0/7 spans 112.0.0.0 to 113.255.255.255, but 111.0.0.0/7 is not a valid network address (a /7 must start on an even first octet); the /7 containing 111.0.0.0 is 110.0.0.0/7, which spans 110.0.0.0 to 111.255.255.255. It's important that you verify your subnet ranges so that you don't wind up blocking the wrong IPs.
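A quick way to sanity-check those boundaries from the shell is a small helper like the one below (this is my own sketch, not a standard tool): it succeeds only when no host bits are set below the prefix, i.e. when the address is a proper network address for that prefix length.

```shell
# cidr_valid NET/PREFIX -- succeed only if NET is a valid network
# address for that prefix length (no host bits set).
cidr_valid() {
  ip=${1%/*}; prefix=${1#*/}
  oldIFS=$IFS; IFS=.; set -- $ip; IFS=$oldIFS
  n=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( n & mask )) -eq "$n" ]
}

cidr_valid 112.0.0.0/7 && echo "112.0.0.0/7 ok"    # even first octet: valid
cidr_valid 111.0.0.0/7 || echo "111.0.0.0/7 bad"   # host bit set: invalid
```

Run it against every range before putting it in csfpre.sh; a rule with an invalid network address silently matches a different range than you intended.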
|
This is about a cPanel server which, like most servers, is under constant attack from lands afar. Considering that I only host to clients in the US and Canada, there is less of a reason to allow full access to Asia and South America, among other areas.
Too many firewall rules can increase latency or, worse, crash your firewall. Still, due to the large number of attacks every day, I've configured CSF to manage at most 7000 rules. Some days are lighter than others, but on the 1st, 671 IPs were blocked trying to access SMTP (669) and cPanel (2).
To try and get this under better control, I thought about only allowing web access to everyone, and blocking specific large blocks from accessing FTP or SMTP. So, here is what I've placed in the CSF pre-rules [/usr/local/csf/bin/csfpre.sh].
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 21:25 -s 1.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 21:25 -s 2.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 21:25 -s 112.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 21:25 -s 113.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 21:25 -s 117.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 21:25 -s 190.0.0.0/8 -j DROP

Now, I'm not entirely confident in my iptables skills, so I'd like opinions regarding this, and certainly feedback if this is doing something bad.
I do realize that this would block a massive amount of potentially good email, as well as any web developers in those areas hired to work on sites hosted on the server. My thinking is that it is far, far less probable that any valid email will be coming from these IP ranges. Also, I chose the blocks based on my counts of attacks.
Rather than load up the 6000-7000 actual IP blocks for Russia, for instance, I can reduce the firewall rules dramatically and keep it simple by only focusing on wholesale blocking entire Class A blocks.
I used this site to examine exactly which countries would be blocked:
tcpiputils.com
|
Using IPTables to Block Ports to Class A Subnets While Allowing Web Ports (80/443)
|
Firstly: "I found that we cannot associate the special character device with any file created by dd command". The experiment you have shown does not create a file with dd; it tries to write to a special character device using dd.

1) What are device files?
Device files can be thought of as a link to a device in your kernel. While they are stored on disk, the actual device they describe has nothing to do with the file system they are stored on. In that regard, think of them as similar to a symbolic link that points to something inside the kernel.
The file name is irrelevant; just as a symbolic link can be named anything and put anywhere, so a device file can be named anything and stored anywhere it will still point to the same device.
2) Why are they a security problem?
For obvious reasons, not just anyone can connect to a device directly. For example, you don't want a regular user reading your hard drive directly, ignoring the file system and its permissions.
If you plug a drive into your machine and just mount it, there is a risk that there are device files on that disk with insecure permissions. These might point to something that is supposed to be secured. So by plugging the disk in, you might give someone access to a device by mistake.
3) What does nodev do?
This plugs the security hole. It tells the operating system to prevent any program from accessing a device through a device file stored on that file system.
In your experiment you used dd to try to write to a device via a device file (a link to that device). Because in the first case you mounted with nodev, the OS prevented dd (and every other program) from using that device file.

Edit: A little more on device files

Above I mentioned that device files are similar to symbolic links. For device files, the major and minor numbers are used to specify what they link to. If we take an example that's been automatically created by the operating system:
$ ls -l /dev/zero /dev/random /dev/sda /dev/sda1
crw-rw-rw- 1 root root 1, 8 Feb 16 23:24 /dev/random
brw-rw---- 1 root disk 8, 0 Feb 16 23:24 /dev/sda
brw-rw---- 1 root disk 8, 1 Feb 16 23:24 /dev/sda1
crw-rw-rw- 1 root root 1, 5 Feb 16 23:24 /dev/zero

So on my system, if I call mknod foo c 1 8, I should end up with a character device identical to /dev/random. To be clear, it's the same device, just a different file pointing to it.
According to the printout in your question it has major number 1 minor number 5. On my system, that's /dev/zero.
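You can inspect those major/minor pairs yourself with stat (GNU coreutils; the %t/%T format specifiers print them in hex):

```shell
# Print the file type and the major:minor device numbers (hex):
stat -c '%F %t:%T' /dev/zero     # character special file 1:5
stat -c '%F %t:%T' /dev/random   # character special file 1:8
```

Comparing these numbers, rather than file names, is the reliable way to tell whether two device files point at the same device.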
|
I read in the "RH413 Red Hat Server Hardening" course that we mount filesystems with nodev, which then does not allow the special files/devices on them to be used. However, it did not show an example.
I, however, did the following on my RHEL machine, and I found that we cannot associate the special character device with any file created by the dd command when the filesystem is mounted with the nodev option. I later removed the nodev option and was able to associate the character device with the newly created file via the dd command.
Is this the behaviour which is expected when we mount an FS with nodev option or is there something else which I'm missing?
Here go the commands:
[root@server Special]# mount | grep /Special
/dev/mapper/home on /Special type ext4 (rw,nodev,relatime,seclabel,data=ordered)
[root@server Special]#
[root@server Special]# ls -l
total 16
drwx------. 2 root root 16384 Feb 20 01:40 lost+found
crw-r--r--. 1 root root 1, 5 Feb 21 04:53 spFile
[root@server Special]#
[root@server Special]# dd if=spFile of=newDev bs=1K count=20000
dd: failed to open ‘spFile’: Permission denied
[root@server Special]#

Removed nodev by adding exec.
[root@server ~]# mount | grep /Special
/dev/mapper/home on /Special type ext4 (rw,relatime,seclabel,data=ordered)
[root@server ~]#
[root@server Special]# dd if=spFile of=newDev bs=1K count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.527708 s, 38.8 MB/s
[root@server Special]#
[root@server Special]# ls -l
total 20016
drwx------. 2 root root 16384 Feb 20 01:40 lost+found
-rw-r--r--. 1 root root 20480000 Feb 21 05:10 newDev
crw-r--r--. 1 root root 1, 5 Feb 21 04:53 spFile
[root@server Special]#
[root@server Special]# mkdir /spDev
[root@server Special]# mount newDev /spDev/
[root@server Special]# df -h /spDev/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 18M 326K 17M 2% /spDev
[root@server Special]#
|
What is the default behaviour when we mount filesystem with `nodev` option?
|
Obviously, the easiest way is to just ask them; they should have this documented.
Absent that, start by looking under /usr/local for things; it might be limited to those.
Check whether they installed any packages that aren't in any repositories; the easiest way to do that, in my opinion, is to fire up aptitude and look under the "Obsolete and Locally Created Packages" section. If that doesn't show anything, check your apt configuration; they might have created a package repository for your local packages.
Next, you can look for files that aren't in any packages:
for i in /usr/bin/*; do dpkg -S $i >/dev/null 2>&1 || dpkg -S /bin/$(basename $i) >/dev/null 2>&1 || echo "$i not found"; done

will give you a line like

/usr/bin/myvps-specific-command not found

for anything found in /usr/bin that isn't in any Debian package. Note the second dpkg call with the $(basename $i) construction, which searches for programs that are found in /bin rather than /usr/bin; this is necessary because of the usrmerge change, otherwise you'll get a bunch of false positives (so don't leave it out).
You can repeat the above for other directories, e.g., /usr/sbin etc (and then also update the second dpkg call).
In case your provider did something really weird, you can check if the checksums of any installed program differ:
cd /
LC_ALL=C sudo md5sum -c var/lib/dpkg/info/*.md5sums | grep -v 'OK$'

This will necessarily give you a few false positives (there are some files that must be modified after install), but it should get you started.
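To see what that pipeline's failure output looks like without touching real dpkg data, here is a miniature, self-contained demo on a scratch file:

```shell
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/f"
md5sum "$tmp/f" > "$tmp/f.md5sums"   # record the pristine checksum
printf 'tampered\n' > "$tmp/f"       # simulate a provider-modified file
md5sum -c "$tmp/f.md5sums" 2>/dev/null | grep -v 'OK$'
```

The last command prints a line ending in "FAILED" for the modified file; those are exactly the lines the dpkg-wide check surfaces.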
|
My VPS hosting company injects "extras" into new Debian (and other) VPS images - scripts, network config, telemetry, etc.
How can I compare a new VPS against the official image, to see what changes were made by the hosting company?
|
Detect changes made to VPS linux image by hosting company
|
Removing setuid/setgid bits from system applications isn't done routinely (though there are occasional guidelines which do suggest just that). Doing this invalidates the package configuration, forcing special-casing whenever someone investigates discrepancies between package contents and the installed system.
The usual approach is to not configure root's password (so that it is effectively unknown), and to configure a non-root administrator for the machine using sudo.
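As a rough sketch of that usual approach (the wheel group and the sudoers drop-in path are common Arch conventions, and "alice" is a placeholder user name; all commands require root):

```shell
# Lock the root password so it is effectively unknown:
passwd -l root
# Grant the wheel group full sudo rights (in practice, edit via visudo):
echo '%wheel ALL=(ALL:ALL) ALL' > /etc/sudoers.d/10-wheel
chmod 440 /etc/sudoers.d/10-wheel
# Make your admin user a member of wheel:
usermod -aG wheel alice
```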
|
Is it safe to uninstall su in favor of sudo to harden Arch Linux?
Have you got an opinion on whether this is a good or bad idea?
I read the Arch Linux Wiki articles Security, Su and Sudo. I also searched for further resources with regards to this matter but couldn't find any substantial information.
|
Is it safe to uninstall `su` in favor of `sudo` to harden Arch Linux? [closed]
|
Given your scenario, where the main filesystem is mounted as read-only and only specific directories like /tmp are writable, the short answer is yes, someone with root access could potentially redirect access from files in the read-only part of the system to another file in the writable section. Here's how:
Symbolic or Hard Links: A root user could create symbolic links (symlinks) to redirect access from the original file to a new file in a writable area. Although creating a symlink in the read-only filesystem itself wouldn't be possible, the user could place the symlink in a writable area and adjust the environment (e.g., LD_LIBRARY_PATH for libraries) or application configurations to use the symlink instead of the original file.
LD_PRELOAD Trick: For shared libraries like your example of a.so, a root user could exploit the LD_PRELOAD feature of the dynamic linker. This feature allows specifying custom shared libraries to be loaded before others. The user could copy a.so to a writable area, modify it as desired, and then use LD_PRELOAD to load their modified version instead of the original, effectively redirecting the access.
Manipulating the Environment: Beyond file manipulation, a root user could also change environment variables or use chroot environments to alter the way applications run, redirecting them to use different files or configurations stored in writable areas.
Mount --bind Option: The mount --bind option can be used to mount a directory or file from one part of the file system over another. A root user could use this to overlay a writable directory or file over a location in the read-only part, effectively redirecting access to the writable version.
It's important to note that while these methods can redirect access from read-only to writable parts of the system, they require root access to execute. This highlights the importance of securing root access on your system. If a user has unrestricted root access, the integrity of the system can be compromised in numerous ways, not just file redirection. Ensuring that only trusted users have root access and employing additional security measures (like SELinux or AppArmor) can help mitigate these risks.
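For instance, the bind-mount technique can be sketched as follows; the paths are purely illustrative, and every step requires root:

```shell
# Copy the target out of the read-only tree onto the writable partition:
cp /usr/lib/a.so /tmp/a.so
# ...modify /tmp/a.so as desired...
# Shadow the original: reads of /usr/lib/a.so now hit the writable copy.
mount --bind /tmp/a.so /usr/lib/a.so
```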
|
Let's assume that I have created a Linux image with a filesystem that is inherently read-only (like SquashFS) and with swap disabled. By read-only I mean that the main filesystem and all its content are read-only, it is mounted read-only as well, and only directories that need to be writable, such as /tmp, are redirected to another writable storage/partition.
Now my question: if someone has full root access on this system, is it possible that he can somehow spoof access to files in the read-only part? For example, if a.so is in the read-only part, is it possible that all access to this file is redirected to another file? He can copy any file to the writable section of the system or run it with root permission.
Best Regards
|
Is it possible to redirect files in a readonly filesystem?
|
If you are concerned about system integrity, then SELinux or grsecurity (or the various similar security packages) are very powerful. Unfortunately, mastering their policies is far from trivial. (Any decent distro that includes SELinux will have predefined policies for all kinds of things, though.) Grsecurity policies are easier to create but still require some effort. Grsecurity has the big advantage over SELinux that it comes with several system-hardening measures, like PaX, which provides quite rigorous memory protection. On the downside, grsecurity is not officially part of Linux and never will be (for, um, political reasons), and thus only a few distros provide integration for it.
My personal view: The whole concept of AV is entirely rotten because they are - in essence - nothing more than giant black lists that need to be updated frequently. Because of this they grow ever larger and don't protect you from 0-day-exploits. Personally I believe in encapsulation and containment, which is what SELinux, Grsecurity, etc. achieve.
IDS/IPS is useful to some degree, as long as you can keep it simple (like using iptables, fail2ban, or aide). "High-end" IDS/IPS work like AV and thus my view applies for them as well.
|
I am trying to harden my server. In doing so, I have a general question: should I install kernel security patches/frameworks like SELinux, plus an anti-virus with an intrusion-detection firewall? Does it make sense to combine them, or use just one?
I mean, the patches are known to secure local things like processes from turning into zombies and the like. But I don't think those patches also secure my Internet connection, do they?
|
kernel security and IDS Firewall + AV together or not?
|
No, you shouldn't set the policies on the other tables to DROP; those tables are not meant for filtering.
If you want to experiment, try it on a local machine first, where you still have access even if you lock yourself out of the network.
|
I use iptables to secure my server. The default policies for all chains in the filter table have been set to DROP:
# iptables -t filter -L | grep -i policy
Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)

I wonder if it is useful to also set the policies to DROP for the mangle, raw and security tables (not the nat table, because there it does not work) in order to make the server more secure.
And of course, duplicate the access rules in each table set to DROP.
|
Is it useful to set the policies to DROP for all tables in Iptables?
|
NSSWITCH.CONF
All of these settings say files, which means everything is stored on the local computer.
For example, passwd: files means all login accounts are stored in /etc/passwd. There is no external user authentication such as LDAP or Kerberos (on Windows, this might be AD).
If this is what you expect, then those settings are correct.
They are probably not causing your gateway issue.

SYSCTL.CONF
The rp_filter settings are most likely causing your problem. rp_filter means Reverse Path Filtering. Set this value to 0 and try again.
0 = Disabled; accept packets regardless of their source route
1 = Strict mode; drop a packet if the reply would not go back out the same interface it arrived on
2 = Loose mode; only require that the source be reachable through some interface. This may not be supported on all OSes.
What is RP Filtering
This article shows some echo commands. These are temporary changes that will work until you restart the machine. This way, you can test new settings without having to reboot all the time. Once you find settings that work, you can put them in sysctl.conf.
ip_forward may also be causing you problems. Try resetting that value as well. Here is some information on sysctl.conf. If you are not using IPv6, you can leave those settings alone.
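As a concrete sketch of that test-then-persist cycle (run as root; the values shown are the ones suggested in this answer, so adjust to your own results):

```shell
# Temporary changes, lost on reboot -- good for testing before
# touching sysctl.conf:
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0
sysctl -w net.ipv4.ip_forward=0
# Once a working combination is found, put it in /etc/sysctl.conf
# and reload with:
sysctl -p
```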
Remember that hardening a system usually breaks something. You need to adjust the settings to the use of the computer. As is the case here: strict RP filtering has been enabled when, in reality, this computer needs it relaxed.
|
I read somewhere (I forget where) that the following should be set if I want to harden my Linux, which is Ubuntu 18.04 in my case.
However, using these values somehow caused a 504 Gateway Time-out in my nginx, which is used as a reverse proxy for my ASP.NET Core app.
I have no background in Linux, and all I did was copy-paste. So I have no idea what setting these values even means.
Are there any values that I am setting incorrectly?
/etc/nsswitch.conf
passwd:files
shadow:files
group:files
hosts:dns files
bootparams:files
ethers:files
netmasks:files
networks:files
protocols:files
rpc:files
services:files
automount:files
aliases:files

/etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
kernel.sysrq=0
|
Hardening of nsswitch.conf and sysctl.conf
|
This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system:
hosts: files mdns4_minimal [NOTFOUND=return] dns

Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And, as you can imagine, whichever comes first wins.
Also, see man 5 nsswitch.conf for more on this.

As an aside, to follow the NSS host-resolution ordering, use getent with hosts as the database, e.g.:
getent hosts example.com
|
In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses? If a hostname can be resolved from /etc/hosts, does DNS apply after /etc/hosts to resolve the hostname, or treat the IP address resolved via /etc/hosts as a "hostname" to resolve recursively?
In my browser (firefox and google chrome), when I add to
/etc/hosts:
127.0.0.1 google.com www.google.com

Typing www.google.com into the address bar of the browsers and hitting Enter won't connect to the website. After I remove that line from /etc/hosts, I can connect to the website. Does that mean that /etc/hosts overrides DNS for resolving hostnames?
After I re-add the line to /etc/hosts, I can still connect to the website, even after refreshing the webpage. Why doesn't /etc/hosts apply again, so that I can't connect to the website?

Thanks.
|
How do `/etc/hosts` and DNS work together to resolve hostnames to IP addresses?
|
Use getent:
$ getent hosts unix.stackexchange.com
151.101.193.69 unix.stackexchange.com unix.stackexchange.com
|
How do I get a remote host IP address if I don't have ping, and don't have any bind utilities like dig, nslookup, etc?
I need an answer that does not include 'install X' or 'use sidecar container'. I am looking for something that relies on nothing more than bash and the basic shell commands.
|
How to get remote host DNS address from a super-slim host (docker) without ping or bind-utils?
|
In the default "user mode" networking, QEMU uses only the first DNS nameserver from the host machine. So, if that nameserver doesn't resolve properly, QEMU will not fall back to any other nameservers which may be configured as secondary on the host. This results in the apparent loss of Internet connection by the guests, while the host can still use its fallback nameservers, "hiding" the problem.
It is a known QEMU behavior, which is not expected to be fixed. Here is a quote from Debian bug report log #625689 from 2011:

"No the limitation isn't documented (yet), and it will be difficult to fix too, or maybe not worth a trouble really. Two reasons. First of all, user-mode networking is not suitable for anything serious, you really want tap networking with bridges, which is about 100 times faster and actually works (e.g. ICMP). Second, the implementation is rather simplistic - for DNS it merely forwards (like a NAT box) packets from guest to a nameserver from host /resolv.conf - only one nameserver, because you can't NAT to TWO destinations at once. So in order to fix that, qemu has to become application-level proxy for DNS, instead of a simplande NAT "device"."

It is easy to reproduce the issue by adding some garbage as the first nameserver in /etc/resolv.conf on the host. The guest stops resolving immediately.
For a Debian guest machine, to restore networking, it was enough to add another known resolver, such as 8.8.8.8, to /etc/resolv.conf. Such change of configuration does not survive rebooting the guest.
|
I use GNOME Boxes on a laptop. The guest machines get the Internet connection automatically with default settings whenever the laptop moves between networks (Ethernet, Wi-Fi in different locations, or a cellular phone as a USB modem).
The guest machines are not bridged with the host and are not visible on the host's LAN, implying that the MAC addresses of the guests are rewritten before the frames are passed to the host's LAN.

"By default QEMU will create a SLiRP user network backend and an appropriate virtual network device for the guest…"

"User Networking is implemented using "slirp", which provides a full TCP/IP stack within QEMU and uses that stack to implement a virtual NAT'd network."

"QEMU's final, and most bizarre, networking option is also its default option. What this does is connect a "usermode network stack" to a vlan. This network stack is a standalone implementation of the ip, tcp, udp, dhcp and tftp (etc.) protocols. It can handle frames from the vlan by e.g. responding to dhcp requests with a valid address, responding to tftp requests with a file from the host filesystem or by creating udp/tcp sockets over which packet data can be forwarded.

Note that this network stack is running within the qemu process itself. So, for example there is no separate dhcp or tftp process handling those requests. Also, the stack is effectively acting as a proxy by unpacking application data from udp/tcp packets and forwarding them over a socket connecting the qemu process and the destination process."

Note that, in the above context, "vlan" stands for an emulated LAN; it doesn't mean an IEEE 802.1Q VLAN ID.

By default, the guest has the 10.0.2.15 IP address on the 10.0.2.0/24 network. The gateway is 10.0.2.2. The DNS server is 10.0.2.3. The guest can access the host by connecting to the 10.0.2.2 gateway IP.

At a particular Wi-Fi network, all guest machines lose Internet access. I found another question about lack of Internet in the guest under QEMU, which discovered that DNS may not work out-of-the-box under certain setups. So, I checked mine. I can access websites by their IP from the guest. Also, if I configure an IPv4 connection manually, resolving is restored if I add another known resolver, such as 8.8.8.8, as a backup in addition to the default 10.0.2.3.
According to the local administrator, this Wi-Fi network has VLAN tagging enabled to separate local computers from visitors' computers. Apparently, if VLAN were an issue, it would cause complete loss of Internet access, not just resolving.
Another particularity of that network is that its first DNS resolver is configured to refuse most requests. A second resolver, 8.8.8.8, is provided but apparently not used by QEMU.
The issue persists across devices. I tried on two completely different laptops with Intel wireless. The issue is found in Debian "Buster" at least since 10.4, "Bullseye" and "Sid".
|
Why does DNS stop resolving under QEMU "user networking" when the host roams to a particular network?
|
The LXD docs describe a solution:
Put this in /etc/systemd/system/lxd-dns-lxdbr0.service:
[Unit]
Description=LXD per-link DNS configuration for lxdbr0
BindsTo=sys-subsystem-net-devices-lxdbr0.device
After=sys-subsystem-net-devices-lxdbr0.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns lxdbr0 BRIDGEIP
ExecStart=/usr/bin/resolvectl domain lxdbr0 '~lxd'
ExecStopPost=/usr/bin/resolvectl revert lxdbr0
RemainAfterExit=yes

[Install]
WantedBy=sys-subsystem-net-devices-lxdbr0.device

(Substituting your own BRIDGEIP, from lxc network show lxdbr0 | grep ipv4.address)
Then apply those settings without having to reboot using:
sudo systemctl daemon-reload
sudo systemctl enable --now lxd-dns-lxdbr0
|
I'm using LXC containers, and resolving CONTAINERNAME.lxd to the IP of the specified container, using:
sudo resolvectl dns lxdbr0 $bridge_ip
sudo resolvectl domain lxdbr0 '~lxd'

This works great! But the changes don't persist over a host reboot.
(I've described 'things I've tried' as answers to this question, which have varying degrees of success.)
I'm on Pop!_OS 22.04, which is based on Ubuntu 22.04.
How should I be making these resolvectl changes persistent across reboots?
|
Persist resolvectl changes across reboots
|
Installing nslookup pointed me to the source of the problem: resolv.conf was simply not parseable. I copied the contents from the original file into a new one and everything works. Same content, same permissions. But diff shows a difference where there is none. Apparently there is some invisible character breaking the file, since the broken file is 1 byte larger:
/etc# diff resolv.conf.odd resolv.conf.dem
1c1
< nameserver 8.8.8.8
---
> nameserver 8.8.8.8
/etc# cat resolv.conf.odd && cat resolv.conf.dem
nameserver 8.8.8.8
nameserver 8.8.8.8
/etc# ls -l resolv.conf.*|cut -d' ' -f5,9
19 resolv.conf.dem
20 resolv.conf.oddUpdate: As cas thankfully pointed out it was a trailing \r causing the mayhem and had nothing to do with the Buster Update itself. A coworker had pushed the file with wrong line wrappings
$ hd resolv.conf.odd
00000000 6e 61 6d 65 73 65 72 76 65 72 20 38 2e 38 2e 38 |nameserver 8.8.8|
00000010 2e 38 0d 0a |.8..|
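This class of bug is easy to catch programmatically: scan the file's raw bytes for lines that end in \r. A small sketch in Python (the sample bytes reproduce the hexdump above):

```python
def crlf_lines(data: bytes):
    """Return 1-based numbers of lines ending with a stray carriage return."""
    return [n for n, line in enumerate(data.split(b"\n"), start=1)
            if line.endswith(b"\r")]

# The broken file from the hexdump above: "nameserver 8.8.8.8\r\n"
broken = b"nameserver 8.8.8.8\r\n"
print(crlf_lines(broken))      # [1] -- line 1 carries a trailing \r

# Removing the \r bytes yields the working file (what dos2unix does):
fixed = broken.replace(b"\r", b"")
print(crlf_lines(fixed))       # []
```

On the shell, cat -A /etc/resolv.conf shows the stray byte as ^M, and tr -d '\r' (or dos2unix) removes it.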
|
I upgraded a few machines to Debian Buster and everything went well so far—although when running apt upgrade before apt full-upgrade I ran into a
Temporary failure in name resolution. This was fixable and only an issue during the process and did not occur when doing a one-step apt dist-upgrade. However one machine shows this behaviour in spite of being fully upgraded. I get
~# LANG=C ping google.com
ping: google.com: Temporary failure in name resolution

When I add google.com to /etc/hosts everything is fine. My /etc/nsswitch.conf looks like:
~# cat /etc/nsswitch.conf
passwd:         files systemd
group:          files systemd
shadow:         files
gshadow:        files

hosts:          files dns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

My /etc/resolv.conf points to Google's nameserver at the moment, and that very server is pingable:

~# cat /etc/resolv.conf
nameserver 8.8.8.8

~# ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=53 time=22.8 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 22.800/22.800/22.800/0.000 ms

systemd-resolved is inactive and should not be an issue if I am interpreting the content of my /etc/nsswitch.conf correctly.
Could there be another point I missed?
|
Temporary failure in name resolution after upgrade to Debian Buster
|
You can install dnsmasq locally and add these options to the conf file:

log-facility=/var/log/dnsmasq.log
log-queries

then set your system to use 127.0.0.1 or ::1 as the DNS resolver. It works for me.
You can then extract the data from the log in any format you want and do whatever you want with it.
Or install BIND locally. Most distros' default install of BIND is a non-authoritative, caching-only server; add a logging {} config block (as described in the BIND 9 Configuration Reference).
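Once the log exists, extracting "hostname - public IP" pairs is a small text-processing job. A sketch in Python; the sample lines imitate the "reply <name> is <address>" records dnsmasq writes with log-queries enabled (the exact layout can differ between dnsmasq versions, so adjust the pattern against your own log):

```python
import re

# "reply <name> is <IPv4>" lines from a dnsmasq query log (assumed format).
REPLY_RE = re.compile(r"dnsmasq\[\d+\]: reply (\S+) is (\d+(?:\.\d+){3})")

def host_ip_pairs(log_text: str):
    """Extract (hostname, resolved IP) pairs from dnsmasq query-log text."""
    return [m.groups() for m in REPLY_RE.finditer(log_text)]

sample = """\
Jan  1 12:00:00 dnsmasq[123]: query[A] www.google.es from 192.168.1.10
Jan  1 12:00:00 dnsmasq[123]: reply www.google.es is 142.250.74.3
Jan  1 12:00:01 dnsmasq[123]: reply www.cdn.facebook.com is 157.240.1.35
"""

for host, ip in host_ip_pairs(sample):
    print(f"{host} - {ip}")
```

This yields exactly the "www.google.es - public IP1" style listing asked for in the question.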
|
I would like to be able to get the public IPs of the websites I am accessing with my PC in a way such as:
www.google.es - public IP1
www.cdn.facebook.com - public IP2

and so on. I think this should be done by logging DNS traffic, so I tried using Wireshark as part of a solution I found in another answer:
tshark -f "udp port 53" -Y "dns.qry.type == A and dns.flags.response == 0"However this seems to only show connections between my router and my machine,
the list is full of pairs such as:
192.168.200.250 -> 192.168.200.1
192.168.200.1 -> 192.168.200.250
|
Get public IPs of accessed webpages?
|
Why this happens is explained in Connecting to IP 0.0.0.0 succeeds. How? Why? — in short, packets with no destination address (0.0.0.0) have their source address copied into their destination address, and packets with no source or destination have their source and destination addresses set to the loopback address (INADDR_LOOPBACK, 127.0.0.1); the resulting packet is sent out on the loopback interface.
As you determined, this behaviour is hard-coded in the Linux kernel’s IPv4 networking stack, and the only way to change it is to patch the kernel:
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 795cbe1de912..df15a685f04c 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2740,14 +2740,8 @@ struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *fl4,
}
if (!fl4->daddr) {
- fl4->daddr = fl4->saddr;
- if (!fl4->daddr)
- fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK);
- dev_out = net->loopback_dev;
- fl4->flowi4_oif = LOOPBACK_IFINDEX;
- res->type = RTN_LOCAL;
- flags |= RTCF_LOCAL;
- goto make_route;
+ rth = ERR_PTR(-ENETUNREACH);
+ goto out;
}
    err = fib_lookup(net, fl4, res, 0);

This patch shows the original implementation, explaining the “why?” part above. If the packet has no destination address (i.e. it’s 0.0.0.0):

the source address is copied to the destination address;
if the packet still has no destination address, i.e. it also has no source address, both addresses are set to the loopback address (127.0.0.1);
in all cases, the outgoing device is set to the loopback device, and the route is constructed accordingly.

The patch changes this behaviour to return a “network unreachable” error instead.
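On an unpatched kernel, the behaviour is easy to observe from userspace: a client connecting to 0.0.0.0 reaches a server bound strictly to 127.0.0.1. A minimal, Linux-specific sketch (the port is chosen by the kernel):

```python
import socket
import threading

# Server bound to the loopback address only.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the kernel pick a free port
port = srv.getsockname()[1]
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"hello from loopback")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client connects to 0.0.0.0 -- stock Linux rewrites the missing
# destination to 127.0.0.1, so this succeeds.
cli = socket.create_connection(("0.0.0.0", port))
received = cli.recv(64)
print(received)
cli.close()
t.join()
srv.close()
```

With the patch above applied, the create_connection call would instead fail with ENETUNREACH.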
|
tl;dr: accessing 0.0.0.0:port (e.g. curl http://0.0.0.0:443) gets redirected (internally) to 127.0.0.1:port, where port is any port number (e.g. the previous curl command is the same as curl http://127.0.0.1:443); why does this happen, and how can connections destined to 0.0.0.0 be blocked?
UPDATE2: I've found a way to block it by patching the Linux kernel (version 6.0.9):

--- .orig/usr/src/linux/net/ipv4/route.c
+++ /usr/src/linux/net/ipv4/route.c
@@ -2740,14 +2740,17 @@ struct rtable *ip_route_output_key_hash_
}
if (!fl4->daddr) {
- fl4->daddr = fl4->saddr;
+ rth = ERR_PTR(-ENETUNREACH);
+ goto out;
+ /* commenting out the rest:
+ fl4->daddr = fl4->saddr; // if you did specify src address and dest is 0.0.0.0 then set dest=src addr
if (!fl4->daddr)
- fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK);
+ fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK); // if you didn't specify source address and dest address is 0.0.0.0 then make them both 127.0.0.1
dev_out = net->loopback_dev;
fl4->flowi4_oif = LOOPBACK_IFINDEX;
res->type = RTN_LOCAL;
flags |= RTCF_LOCAL;
- goto make_route;
+ goto make_route; END of COMMENTed out block */
}
    err = fib_lookup(net, fl4, res, 0);

Result:
Where do packets sent to IP 0.0.0.0 go?:
$ ip route get 0.0.0.0
RTNETLINK answers: Network is unreachable

...they don't!
A client attempts to connect from 127.1.2.18:5000 to 0.0.0.0:80
$ nc -n -s 127.1.2.18 -p 5000 -vvvvvvvv -- 0.0.0.0 80
(UNKNOWN) [0.0.0.0] 80 (http) : Network is unreachable
sent 0, rcvd 0

(If you didn't apply the kernel patch, you will need a server like the following for the above client to be able to successfully connect (as root, in bash):

while true; do nc -n -l -p 80 -s 127.1.2.18 -vvvvvvvv -- 127.1.2.18 5000; echo "------------------$(date)"; sleep 1; done
)
Patched ping (i.e. a ping that doesn't set the destination address to be the same as the source address when the destination address is 0.0.0.0, i.e. with the 2 lines under // special case for 0 dst address that you see here commented out):
$ ping -c1 0.0.0.0
ping: connect: Network is unreachable

instant.
However, if specifying a source address, it takes a timeout (of 10 sec) until it finishes:
$ ping -I 127.1.2.3 -c1 -- 0.0.0.0
PING 0.0.0.0 (0.0.0.0) from 127.1.2.3 : 56(84) bytes of data.

--- 0.0.0.0 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

UPDATE1:
The why part is explained here, but I'm expecting a little more detail as to why this happens; for example (thanks to the user with nickname anyone on the Libera.Chat #kernel channel):
$ ip route get 0.0.0.0
local 0.0.0.0 dev lo src 127.0.0.1 uid 1000
cache <local>

This shows that somehow packets destined for 0.0.0.0 get routed to the localhost interface lo and get the source IP 127.0.0.1 (if I'm interpreting this right), and because that route doesn't appear in this list:
$ ip route list table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
local 169.254.6.5 dev em1 proto kernel scope host src 169.254.6.5
broadcast 169.254.6.255 dev em1 proto kernel scope link src 169.254.6.5
local 192.168.0.17 dev em1 proto kernel scope host src 192.168.0.17
broadcast 192.168.255.255 dev em1 proto kernel scope link src 192.168.0.17it means that it must be somehow internal to the Linux kernel. ie. hardcoded
To give you an idea, here's how it looks for an IP that's on the internet (I used quad1 as an example IP):
$ ip route get 1.1.1.1
1.1.1.1 via 192.168.1.1 dev em1 src 192.168.0.17 uid 1000
cache

where 192.168.1.1 is my gateway, i.e.:
$ ip route
default via 192.168.1.1 dev em1 metric 2
169.254.6.0/24 dev em1 proto kernel scope link src 169.254.6.5
192.168.0.0/16 dev em1 proto kernel scope link src 192.168.0.17

Because iptables cannot be used to sense (and thus block/drop) such connections destined to 0.0.0.0 that get somehow routed to 127.0.0.1, it might prove difficult to find a way to block them... but I'll definitely try to find a way, unless someone already knows one.
@Stephen Kitt (in the comments) suggested a way to block hostnames that reside in /etc/hosts, so instead of:
0.0.0.0 someblockedhostname
you can have
127.1.2.3 someblockedhostname
127.1.2.3 someOTHERblockedhostname
(anything other than 127.0.0.1, but you can use the same IP for every blocked hostname, unless you want to differentiate)
which IP you can then block using iptables.
However, if your DNS resolver (ie. NextDNS, or 1.1.1.3) returns 0.0.0.0 for blocked hostnames (instead of NXDOMAIN) then you cannot do this (unless, of course, you want to add each host manually in /etc/hosts, because /etc/hosts takes precedence - assuming you didn't change the line hosts: files dns in /etc/nsswitch.conf).

OLD: (though edited)
On Linux (I tried Gentoo and Pop!_OS, latest), if you have this line in /etc/hosts:
0.0.0.0 somehosthere

and you run this as root (to emulate a localhost server listening on port 443):
# nc -l -p 443 -s 127.0.0.1
then you go into your browser (Firefox and Chrome/Chromium tested) and put this in address bar:
https://somehosthere
or
0.0.0.0:443
or
https://0.0.0.0
then the terminal where you started nc (aka netcat) shows a connection attempt (some garbage text including the plaintext somehosthere if you used it in the URL)
or instead of the browser, you can try:
curl https://somehosthere
or if you want to see the plaintext request:
curl http://somehosthere:443
This doesn't seem to be mitigable even when using dnsmasq as long as that 0.0.0.0 somehosthere is in /etc/hosts, but when using dnsmasq and your DNS resolver (ie. NextDNS or Cloudflare's 1.1.1.3) returns 0.0.0.0 instead of NXDOMAIN (true at the time of this writing) and that hostname isn't in your /etc/hosts (AND in what you told dnsmasq is the /etc/hosts to use), then there are two ways to mitigate it (either or both will work):

use the dnsmasq arg --stop-dns-rebind:

--stop-dns-rebind
Reject (and log) addresses from upstream nameservers which are in the private ranges. This blocks an attack where a browser behind a firewall is used to probe machines on the local network. For IPv6, the private range covers the IPv4-mapped addresses in private space plus all link-local (LL) and site-local (ULA) addresses.

use the line bogus-nxdomain=0.0.0.0 in /etc/dnsmasq.conf, which makes dnsmasq itself return NXDOMAIN for any hostname that resolved to 0.0.0.0 (except, once again, if that hostname was in /etc/hosts (bypasses dnsmasq) and in what you told dnsmasq to use as /etc/hosts (if you did)).

So, the second part of this question is how to disallow accesses to 0.0.0.0 from being redirected to 127.0.0.1? I want this because when using NextDNS (or Cloudflare's 1.1.1.3) as DNS resolver, it returns 0.0.0.0 for blocked hostnames, instead of NXDOMAIN; thus when loading webpages, parts of them (that are located on blocked hostnames) will try to access my localhost server running on port 443 (if any) and load pages from it instead of just being blocked.
Relevant browser-specific public issues being aware of this(that 0.0.0.0 maps to 127.0.0.1):
Chrome/Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=1300021
Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1672528#c17
|
Why accessing 0.0.0.0:443 gets redirected to 127.0.0.1:443 on Linux and how to disallow it?
|
Enable Link-Local Multicast Name Resolution (LLMNR) on the RPi. Edit /etc/systemd/resolved.conf and set LLMNR=true. Enable and start the systemd-resolved service: systemctl --now enable systemd-resolved. No DNS server is needed, but name resolution only works on the local net. Make sure there are no duplicate hostnames on the LAN.
|
According to Debian's RPi3 image wiki, I should be able to ssh into a Raspberry-Pi, with just the hostname. I shared internet from my Debian laptop, WiFi to the a Raspberry-Pi over Ethernet, but the hostname never resolved.
What kind of settings/configuration do either the client and server or the network need for LAN hostname resolution to work?
What do I need to install on the Pi so that MS-Windows can resolve the Pi's IP address when I want to access a web server hosted on it, for example? I think it's smbclient but I'm not sure.
|
Linux/Windows client resolve Linux hostname on LAN
|
Add the interface address to the lo (loopback) adaptor as well:
sudo ip addr add 192.168.1.108/32 dev lo

The kernel will do the right things.
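To make the extra loopback address survive reboots on Raspbian, one option is to attach it to the loopback stanza in /etc/network/interfaces — a sketch, assuming the 192.168.1.108 address from the question and that ifupdown (rather than dhcpcd or NetworkManager) manages lo on your system:

```
auto lo
iface lo inet loopback
    up ip addr add 192.168.1.108/32 dev lo
```

With another network manager, the equivalent boot-time hook (e.g. a NetworkManager dispatcher script) would run the same ip addr add command.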
|
I have a headless music server (raspbian buster) running a storage server (minimserver) and a player (upmpdcli) on the same machine. When it's playing, it's fetching the music files via its LAN address. So the music is played and browsed through URLs like: http://192.168.1.108:9790/minimserver/...
Even though playlists are stored on the machine, when it loses WiFi connection, the stream stops immediately. I assume this is because the interface gets taken down and it can no longer access itself through the IP 192.168.1.108.
I can't alter the system to completely access music through localhost because the music is enqueued and browsed by other devices on the network. So is there some way I can maintain localhost LAN address (192.168.1.108) resolution even when the network is down?
|
Resolving own LAN IP when the network goes down
|
Put a .htaccess file under the public_html folder with the code below:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Then try to access your URL.
|
I was following this tutorial to the letter, and when I entered my domain name in the browser, I got my page. Except that the browser never masks the domain to www.example.com; instead it changed the domain I just entered and showed me the subfolder preceded by the IP address, for example: 211.232.01.23/website/wordpress/index.php
Already done:Installed apache via httpd
Created sites-enabled and sites-available folders in /etc/httpd
Created .conf file in sites-available with symbolic link
Set up permissions to my directories using apache:apache user
Added "IncludeOptional sites-enabled/*.conf" string to the end of httpd.conf file.I have not touched the htaccess file as the tutorial doesn't specify anything about it.
My example.com.conf file:
<VirtualHost *:80>
ServerName www.example.com
ServerAlias example.com
DocumentRoot /var/www/example.com/public_html
ErrorLog /var/www/example.com/error.log
CustomLog /var/www/example.com/requests.log combined
</VirtualHost>
|
Apache httpd on CentOS doesn't mask IP to domain
|
With journalctl | tail -n 100 I found this error (which didn't show at first):
nm-openvpn: write to TUN/TAP : Invalid argument (fd=-1,code=22)
To solve this problem (in KDE), right-click the tray icon of NetworkManager (if you have imported your VPN config into it or set the VPN up there)
-> Configure Network Connections... -> Your VPN -> VPN (openvpn) -> Advanced -> General -> check "Use compression" and select LZO in the box on the right (I haven't tried other compression methods) -> Ok and Apply -> Disconnect, reconnect and test it.
I still don't know why this problem occurred and it seems like DNS-over-TLS doesn't yet work. Please comment if you know about either.
|
After upgrading Debian 11/KDE to Debian 12, restarting, and running sudo apt-get upgrade, it shows errors like Could not resolve ftp.XX.debian.org. These also show when running sudo apt-get update. I then tried to open websites in the Firefox ESR browser and it can't open any (it shows the "Hmm. We're having trouble finding that site." error). I can't ping any sites either; it shows "Name or service not known". So it has problems resolving domain names with DNS.

Details and what I tried:
I tried sudo mv /etc/resolv.conf /etc/backup.resolv.conf. DNS still works on a Debian11 machine and it worked before upgrading to Debian12. The nftables firewall rules are the same as before. The time was off by minutes again but I corrected it so it shouldn't be off by more than seconds. At the end of upgrading at 99% I tried to open the browser when it asked me to replace a certain config file, this caused a black screen (once during updating the screen could not get woken up too) and logged me out so I had to finish upgrading with sudo dpkg --configure -a which seemed to have worked. Maybe I need to check if the upgrading worked.
Right now I can't use the Internet on that machine, while NetworkManager displays it's properly connected and my router page also shows the device as connected.

grep ^hosts /etc/nsswitch.conf shows hosts: files mdns4_minimal [NOTFOUND=return] dns mymachines

/etc/resolv.conf contains #Generated by NetworkManager and nameserver: 1.1.1.1 (I already tried adding nameserver 1.0.0.1 beneath it, which didn't help)

nmcli c show <connection name> | grep -i dns shows the below for the Internet connection (not the VPN connection). On the Debian 11 machine where DNS still works those values are different: it does not have connection.dns-over-tls. I think dns-over-tls likely has to do with the problem. It's also configured in the router that is used by multiple machines, of which only the Debian 12 machine can't reach websites. I use IPv4-only for good reasons, and a VPN.

connection.mdns:                        -1 (default)
connection.dns-over-tls: -1 (default)
ipv4.dns: 1.1.1.1
ipv4.dns-search: --
ipv4.dns-options: --
ipv4.dns-priority: 0
ipv4.ignore-auto-dns: yes
ipv6.dns: --
ipv6.dns-search: --
ipv6.dns-options: --
ipv6.dns-priority: 0
ipv6.ignore-auto-dns: no
IP4.DNS[1]:                             1.1.1.1

Why is that, and how can this problem be solved?
|
Can't resolve domain names after upgrading to Debian 12
|
You can't use .local like that. It's reserved for Multicast DNS (mDNS) lookups in an environment without managed DNS servers.
It's actually reserved for use with the full set of Zeroconf technologies, but the one relevant here is mDNS.
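In other words, a resolver that follows RFC 6762 special-cases any name whose final label is local and hands it to mDNS instead of unicast DNS — which is why the stub resolver refuses to send such names to 10.0.2.1 even though the intranet DNS server would answer them. A toy illustration of the rule:

```python
def is_mdns_special(hostname: str) -> bool:
    """True if RFC 6762 reserves this name for Multicast DNS
    (final label is 'local', case-insensitively)."""
    labels = hostname.rstrip(".").lower().split(".")
    return labels[-1] == "local"

print(is_mdns_special("dev-dm-energy101z.dev.jp.local"))  # True: mDNS-only
print(is_mdns_special("www.yahoo.co.jp"))                 # False: unicast DNS
```

This also explains the asymmetry in the question: dig bypasses the system resolver and queries 10.0.2.1 directly, while ping goes through nsswitch/systemd-resolved, which applies the .local rule.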
|
I am using VirtualBox on Windows now.
The network is roughly like this:
[Fedora 37 VM] -- NAT network -- [Windows Host] ---- intranet ---- internet
I use DNS on the intranet to resolve host.domain names like both some.host.on.intranet and www.yahoo.co.jp.
On my Windows host, this is OK.
But I am not so lucky on my Fedora VM.
shao@fedora Music $ resolvectl status
Global
Protocols: LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stubLink 2 (enp0s3)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.1
DNS Servers: 10.0.2.1 10.3.1.24 192.168.3.1
DNS Domain: intra.somedomain.co.jp

Link 3 (docker0)
Current Scopes: none
Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupportedMy primary DNS is 10.0.2.1, which is OK, same as my Windows host.
I can resolve www.yahoo.co.jp on the Linux VM.
shao@fedora Music $ ping www.yahoo.co.jp
PING edge12.g.yimg.jp (183.79.250.251) 56(84) bytes of data.
64 bytes from 183.79.250.251: icmp_seq=1 ttl=54 time=17.4 ms
64 bytes from 183.79.250.251: icmp_seq=2 ttl=54 time=20.5 msWhen I try to resolve host.domain on intranet. I got:
shao@fedora Music $ ping dev-dm-energy101z.dev.jp.local
ping: dev-dm-energy101z.dev.jp.local: Temporary failure in name resolution

What confuses me is that I can dig that host.domain name:
shao@fedora Music $ dig @10.0.2.1 dev-dm-energy101z.dev.jp.local

; <<>> DiG 9.18.11 <<>> @10.0.2.1 dev-dm-energy101z.dev.jp.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34400
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;dev-dm-energy101z.dev.jp.local. IN A

;; ANSWER SECTION:
dev-dm-energy101z.dev.jp.local. 721 IN A 100.67.254.168

;; Query time: 11 msec
;; SERVER: 10.0.2.1#53(10.0.2.1) (UDP)
;; WHEN: Thu Mar 09 10:27:42 JST 2023
;; MSG SIZE rcvd: 75I also checked tcpdump when I performed these instructoins.
I can see UDP traffic when I 'ping yahoo' or 'dig intranet_host', like this:
10:40:31.283922 enp0s3 Out IP 10.9.9.4.45466 > 10.0.2.1.53: 7945+ [1au] A? www.yahoo.co.jp. (44)
10:40:31.284623 enp0s3 Out IP 10.9.9.4.35216 > 10.0.2.1.53: 59710+ [1au] AAAA? www.yahoo.co.jp. (44)
10:40:31.292909 enp0s3 In IP 10.0.2.1.53 > 10.9.9.4.45466: 7945 2/0/1 CNAME edge12.g.yimg.jp., A 183.79.217.124 (88)
...

10:45:14.514350 enp0s3 Out IP 10.9.9.4.54319 > 10.0.2.1.53: 3623+ [1au] A? dev-dm-energy101z.dev.jp.local. (71)
10:45:14.531879 enp0s3 In IP 10.0.2.1.53 > 10.9.9.4.54319: 3623 1/0/1 A 100.67.254.168 (75)

But when I ping intranet_host, tcpdump -i any -nn udp stays silent.
Did I miss some config?
Any hint will help; thanks in advance.
===========================================================
2023-03-15:
I found something interesting.
Fedora just refuses to resolve host.domain names ending in .local, like:
stg-zed2-jpe2.stg.jp.local
or dev-dm-energy.dev.jp.local.
Is there a DNS convention like that?
|
Fedora VM behind NAT can not ping host.domain name on intranet
|
First of all, you can specify the name resolution priority (or rather, order) in /etc/nsswitch.conf.
For example, on a Raspbian 11 (bullseye) Pi, the relevant section of the /etc/nsswitch.conf looks like this:
hosts:          files mdns4_minimal [NOTFOUND=return] dns

For example, if you put dns before mdns4_minimal, the hostname resolution will favour dns over mdns:
hosts: files dns mdns4_minimal [NOTFOUND=return]See manpage here and an excellent post detailing resolve order here.
Upon reading your comments, I understand you would like to keep the mDNS resolving in place, but dictate which interfaces are involved in that process.
You can instruct the Avahi daemon (responsible for mDNS) to ignore an interface by adding said interface to the deny-interfaces list in /etc/avahi/avahi-daemon.conf. From the manpage:
deny-interfaces= Set a comma separated list of network interfaces that should be ignored by avahi-daemon. Other not specified interfaces will be used, unless allow-interfaces= is set.
This option takes precedence over allow-interfaces=.

Afterwards, restart the daemon with systemctl restart avahi-daemon.
|
We are using a Raspberry Pi which has a lidar connected to the Ethernet port. The problem is that mdns4_minimal resolves $(hostname).local into two IPs: one IP is obtained from the Ethernet port (from the lidar) and another from Wi-Fi. This causes a problem in ROS: some nodes get the lidar's IP address instead of the Wi-Fi IP address, which results in nodes not being able to communicate with each other correctly.
I think that the solution could be to change the priority of hostname resolution to prioritize WiFi connection, but I didn't find any instructions on the internet of how to do that.
Or is there a better way to tackle this problem?
|
Change priority of mdns4 hostname resolution
|
Looking into my openWRT, I do not have any libnss* libraries installed. It seems that only libuClibc is used for that. libc.so.0 is a symlink to it.
root@RuiWifi:/lib# grep -ri hosts *
libc.so.0:/etc/hosts
libuClibc-0.9.33.2.so:/etc/hosts

uClibc is an implementation of the standard C library that is much
smaller than glibc, which makes it useful for embedded systems. If you are trying to put together a minimal environment, I would advise you to compile busybox against UClibc instead of glibc, and snooping around openWRT to see how they managed to put together such a distribution with such a small footprint.
Compiling BusyBox with uClibc
|
I have a statically linked busybox and want to be able to write busybox telnet foo. How do I specify the address of "foo"?
Do I really need /etc/nsswitch.conf and the corresponding dynamic libraries, or does busybox contain some own simple mechanism to consult /etc/hosts?
|
Name resolution in busybox
|
No, a gateway is a router. You'll want to specify name servers. For example:
nameserver 209.244.0.3
nameserver 209.244.0.4

Or whichever name servers you want to use. Your hosting provider probably has name servers very, very close to your server. Those might be the best option, and you should ask them the IP addresses of their name servers for your use. Or use a search engine to find publicly available name servers. My hosts are very close to Level 3, so I use the Level 3 name servers, as specified above.
Another nice option is the search or domain option. From the manual (man resolv.conf):

domain Local domain name.
Most queries for names within this domain can use short names relative to the local domain. If set to
'.', the root domain is considered. If no domain entry is present, the domain is determined from the
local hostname returned by gethostname(2); the domain part is taken to be everything after the first
'.'. Finally, if the hostname does not contain a domain part, the root domain is assumed.
search Search list for host-name lookup.
The search list is normally determined from the local domain name; by default, it contains only the
local domain name. This may be changed by listing the desired domain search path following the search
keyword with spaces or tabs separating the names. Resolver queries having fewer than ndots dots
(default is 1) in them will be attempted using each component of the search path in turn until a match
is found. For environments with multiple subdomains please read options ndots:n below to avoid man-in-
the-middle attacks and unnecessary traffic for the root-dns-servers. Note that this process may be
slow and will generate a lot of network traffic if the servers for the listed domains are not local,
and that queries will time out if no server is available for one of the domains.
       The search list is currently limited to six domains with a total of 256 characters.

domain your-domain-name.com
nameserver 209.244.0.3
nameserver 209.244.0.4
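The effect of the search/domain setting can be sketched as follows — a simplified model of the resolver's search semantics described above (real resolvers also honour options ndots and other corner cases):

```python
def candidate_names(name: str, search_domains, ndots: int = 1):
    """Simplified model of resolv.conf search semantics: names with fewer
    than `ndots` dots try the search list first; absolute names (with a
    trailing dot) skip the search list entirely."""
    if name.endswith("."):
        return [name.rstrip(".")]
    if name.count(".") >= ndots:
        return [name] + [f"{name}.{d}" for d in search_domains]
    return [f"{name}.{d}" for d in search_domains] + [name]

print(candidate_names("mybox", ["your-domain-name.com"]))
# ['mybox.your-domain-name.com', 'mybox']
print(candidate_names("www.debian.org", ["your-domain-name.com"]))
# ['www.debian.org', 'www.debian.org.your-domain-name.com']
```

So with domain your-domain-name.com in place, a short name like mybox resolves as mybox.your-domain-name.com without typing the full name.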
|
I have installed Debian on a dedicated server. The company that leases this server gave me 3 addresses:

DMZ IP
Mask
Gateway

The server cannot resolve hostnames. So when I use ping with an IP, it works. When I use ping with a hostname, it doesn't.
I think I should put something in
/etc/resolv.conf

But I don't know what. Should it be the gateway?
|
How to enable hostname resolving in Debian? [closed]
|
I have done a few tests on my debian/wsl
~$ uname -a
Linux DESKTOP-OMM8LBC 4.4.0-17763-Microsoft #864-Microsoft Thu Nov 07 15:22:00 PST 2019 x86_64 GNU/Linux

# /etc/hosts
172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
192.168.0.12 www.wordpress-rend-adri.com # IP for another running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com

~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (192.168.0.12) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=1 ttl=64 time=49.9 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=2 ttl=64 time=5.85 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=3 ttl=64 time=5.58 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=4 ttl=64 time=6.25 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=5 ttl=64 time=6.19 ms
--- www.wordpress-rend-adri.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 9ms
rtt min/avg/max/mdev = 5.575/14.754/49.919/17.584 ms

So ping picked the local IP placed between two working WAN IPs.
Second test:
/etc/hosts
172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
#192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com

~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (172.22.5.107) 56(84) bytes of data.
# Stuck here

Third test:
/etc/hosts
#172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
#192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com

~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (216.58.198.164) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=1 ttl=54 time=24.5 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=2 ttl=54 time=22.4 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=3 ttl=54 time=21.7 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=4 ttl=54 time=30.5 ms

--- www.wordpress-rend-adri.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7ms
rtt min/avg/max/mdev = 21.734/24.768/30.457/3.440 ms

Fourth test:
/etc/hosts
#172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
192.168.0.1 www.wordpress-rend-adri.com # IP for my router
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com

~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (192.168.0.1) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (192.168.0.1): icmp_seq=1 ttl=64 time=1.56 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.1): icmp_seq=2 ttl=64 time=1.35 ms

--- www.wordpress-rend-adri.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 1.349/1.455/1.561/0.106 ms

So my conclusion is that ping does not try one IP after another. It favours the router and local IPs over WAN IPs.
Update :
The choice of IP above is confirmed by the following python command:
python -c 'import socket;print(socket.gethostbyname("www.wordpress-rend-adri.com"))'
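As a rough sketch of what is going on (using the hostnames and addresses from the question), a naive hosts-file lookup collects every candidate address in file order; glibc's getaddrinfo() then sorts those candidates by its address-selection rules (RFC 6724), which is why the on-link address wins rather than simply the first line:

```python
def hosts_lookup(hosts_text, name):
    """Return every IP mapped to *name*, in file order (naive parser)."""
    ips = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        fields = line.split()
        if name in fields[1:]:
            ips.append(fields[0])
    return ips

hosts = """\
172.22.5.107 www.wordpress-rend-adri.com
192.168.1.116 www.wordpress-rend-adri.com
"""
print(hosts_lookup(hosts, "www.wordpress-rend-adri.com"))
# → ['172.22.5.107', '192.168.1.116']
```

Both entries are returned as candidates; the resolver's sorting step, not the file order alone, decides which address the application actually connects to.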
|
My scenario
Relevant entries in my /etc/hosts (I have them written in the same order you see them here)
172.22.5.107 www.wordpress-rend-adri.com
192.168.1.116 www.wordpress-rend-adri.com

I use my laptop at home and at school, hence I'm always dealing with 2 address spaces:

192.168.1.0/24
172.22.0.0/16

So I have those entries because I have a VM with a WordPress install for an exercise. That way, no matter where I am, I'll be able to access my WordPress (as long as the DHCP server offers me the same IP in both networks, obviously).
My question
Knowing all of this, now I can tell you that I made that configuration in my /etc/hosts because a teacher told me that I can only have one record per name, pointing to a single IP. He said that if I had duplicate records for the same name, the resolver would always take the first one and stop. But he also told me to try it out, so I did.
The reality is that, for example at home (where I'm using 192.168.1.0/24), even though the first record is for the other IP, I can still make a connection, and when I ping the name, the correct IP answers me. And yes, I made completely sure about this: I tested in an incognito Firefox window, and I also commented out the line with my home IP to check what happened.
Then, I tried to exchange both records. I mean, I just did this:
192.168.1.116 www.wordpress-rend-adri.com
172.22.5.107 www.wordpress-rend-adri.com

So in this case, obviously, it is still working.
And when I went to school, the same happened when using the other address space.
So...
Why is it said that you can only have one record per name in your /etc/hosts, if this configuration actually worked for me?
Is Firefox, the ping binary, or whatever tool you use, doing an internal name-resolution step to check which entry actually works, before making the final connection?
I'm asking this because, for example with ping, you just start getting an answer from the IP that works. You don't see failed connection attempts to the other, earlier IPs.
|
Different IP:hostName mappings for same host in `/etc/hosts`. Why does this work?
|
I have decided to stop resolvconf and have noticed that after restarting dnsmasq the correct nameservers are written/consumed in /var/run/dnsmasq/resolv.conf.
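For reference, the piece of configuration that ties these together is dnsmasq's resolv-file option. A minimal sketch of the relevant /etc/dnsmasq.conf lines (paths taken from the question; the no-poll line is my own suggestion, so verify it fits your setup):

```
# Read upstream nameservers from this file instead of /etc/resolv.conf
resolv-file=/etc/resolv.dnsmasq
# Optionally stop dnsmasq from re-reading resolvconf-managed files
#no-poll
```

If resolvconf keeps running alongside dnsmasq, it can rewrite the runtime resolv.conf behind dnsmasq's back, which matches the stale-nameserver symptom described below.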
|
I have an Ubuntu 16.04.2 LTS host. It is configured to use dnsmasq for DNS forwarding, rather than use resolv.conf populated with nameservers. The configuration is standard wherein resolv.conf just has:
nameserver 127.0.0.1
search redacted.searchfield.com

The host's configured /etc/resolv.dnsmasq has 4 nameservers configured. When I restart the dnsmasq service, it points to 3 nameservers that were configured on the host at one time (but no longer), and writes them automatically to /var/run/dnsmasq/resolv.conf, ignoring the 4 defined nameservers in /etc/resolv.dnsmasq.
I can get the service to properly read the correct nameservers if I enter the four of them in /var/run/dnsmasq/resolv.conf and leave the dnsmasq service running. However, if I restart the service it just points to these 3 old nameservers again.
Is this cached somewhere? I'm not using nscd here. I'm wondering if maybe the resolvconf service is causing an issue, and should not be run alongside dnsmasq?
|
Host configured with both resolvconf and dnsmasq, restarting dnsmasq keeps pointing to old servers
|
I found the issue after I enabled the debug flag for systemd logs. I followed answer specified here: https://unix.stackexchange.com/a/432077/556205.
After setting the flag, I was able to see a concrete error message: libsystemd-shared-251.8-586.fc37.so: cannot open shared object file: No such file or directory.
I then ran ls /usr/lib/systemd/libsystemd-* and found that the file does not exist. Instead, another file of different version was there: /usr/lib/systemd/libsystemd-shared-251.10-588.fc37.so. This was possibly due to the recent update that I did.
I don't think it's a good practice, but as a fix I linked the two files: sudo ln -s /usr/lib/systemd/libsystemd-shared-251.10-588.fc37.so /usr/lib/systemd/libsystemd-shared-251.8-586.fc37.so. Everything started to work after this!
|
I recently updated my system, but noticed that on reboot, systemd-resolved always fails. So I cannot access any websites even though I have an internet connection.
I have included the error message that I'm getting (I could not find any other post mentioning this exact error either).
Is anyone facing the same problem, or does anyone have a fix? I think it is an issue with DNS resolution, and as a temporary workaround I'm including a nameserver in /etc/resolv.conf. But since this is only a temporary fix, I wanted to know whether there's a way to repair systemd-resolved, since it worked fine before I updated the system.
Below is version of systemd
➜ ~ resolvectl --version
systemd 251 (251.10-588.fc37)
|
Facing issue with systemd-resolved after update
|
It seems like the problem was ProtonVPN. I don't know what exactly solved the issue, but what I did was:
sudo ifconfig pvpnksintrf0 down
sudo ifconfig ipv6leakintrf0 down
sudo apt-get remove protonvpn
rm -rf ~/.cache/protonvpn
rm -rf ~/.config/protonvpn
systemctl restart systemd-resolved.service
|
I installed some updates on my Ubuntu Desktop 21.04 through app "Software and Updates", I don't know what kind of software was updated. Internet had worked without problems before I rebooted the system. Computer has no internet access from any wi-fi, my other devices do have. When I do ping google.com, I get ping: google.com: Temporary failure in name resolution
I created a proxy server on my Android phone on the local network and connected through it from my Linux machine, and that worked. I'm not sure whether this fact is important; I'm just listing the details I noticed.
I have searched Stack Overflow for this kind of problem. I tried adding nameserver 8.8.8.8 to /etc/resolv.conf, and tried editing /etc/network/interfaces according to this answer: https://askubuntu.com/a/552311 . After that, no wi-fi networks were detected in the settings (initially, I had no such file), so I just deleted everything in there.
systemd-resolve --status returns:
Global
       Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: foreign

Link 2 (eth0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 3 (wlan0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 4 (pvpnksintrf0)
    Current Scopes: DNS
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: ::1
       DNS Servers: ::1
        DNS Domain: ~.

Link 5 (ipv6leakintrf0)
    Current Scopes: DNS
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: ::1
       DNS Servers: ::1
        DNS Domain: ~.

How can I find the reason for the problem and solve it?
|
Ubuntu, network problem
|
As pointed out in the comments by Johan Myréen, my issue appeared to be caused by the use of a reserved TLD. Since I'm not making use of mDNS, switching from .local to .com allowed my name resolutions to work properly.
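To sketch the mechanism (a hedged explanation, since the exact client configuration wasn't shown): on many distributions, glibc's name service switch routes .local names to the mDNS module before DNS is ever consulted, while host and nslookup query the DNS server directly. A typical /etc/nsswitch.conf hosts line that causes exactly this split looks like:

```
hosts: files mdns4_minimal [NOTFOUND=return] dns
```

The [NOTFOUND=return] clause stops resolution of unresolved .local names before the dns service is tried, which is why host succeeded while ssh failed.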
|
I'm trying to complete the setup of my Bind9 DNS server.
Both systems are running Debian Stretch. The serving machine (192.168.0.113) is a VM host and the client machine (192.168.0.104) is its virtual guest.
The server seems to be running without complaint, but I'm getting some confusing results. The host command resolves as I'd hoped:
$ host wiles.local
wiles.local has address 192.168.0.113

However, I'm unable to reference the system by hostname anywhere else:

$ ssh wiles.local
ssh: Could not resolve hostname wiles.local: Name or service not known

Of course, I can ssh into the system by referencing the IP explicitly without issue.
The client machine does seem to be looking in the right place for its DNS:
$ nslookup google.com
Server: 192.168.0.113
Address: 192.168.0.113#53

Non-authoritative answer:
Name: google.com
Address: 216.58.192.206

I'm hoping someone can help me figure out what the distinction here is and what I can do to fix the issue.
I'll give what relevant config information I know:
On the serving system:
/etc/bind/named.conf.local
zone "wiles.local" {
type master;
file "/etc/bind/db.wiles.local";
};

/etc/bind/db.wiles.local
$TTL 86400
@ IN SOA wiles.local. root.localhost. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
86400 ) ; Negative Cache TTL
;
IN A 192.168.0.113
@ IN NS localhost.
www IN A 192.168.0.104

On the connecting system:
/etc/network/interfaces
auto lo enp0s3
iface lo inet loopback

iface enp0s3 inet static
address 192.168.0.104
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.113

And finally:
/etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.0.113

A note on the last file: I had to disobey the loud warnings and write that line in by hand. Until having done that, this system would not resolve host names for ANY service, external or internal. I believe this to be a separate issue, that was fixed by installing and running resolvconf, but I mention it just in case the problems are related.
|
Host command successful but DNS won't resolve
|
The other answers are good if you only want to check whether a device is connected (check kernel messages with dmesg, look in the /var/log files, and use tools like usbconfig, pciconf or camcontrol).
But if you want more (to handle a message and execute a program or something like that when you plug in your device), you can use devd.
When you connect a device, the FreeBSD kernel generates messages:

when you plug in your device, an attach message is generated
when you unplug your device, a detach message is generated
and more (see the devd.conf man page if you want more information).

FreeBSD uses devd by default, and its configuration is stored in /etc/devd/ and /etc/devd.conf. On Linux, the same kind of functionality is provided by udev.
You can find some examples in /usr/share/examples/etc/devd.conf.
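As an illustrative sketch (the action here is made up, not taken from a real system; check devd.conf(5) for the exact variables available on your release), a devd.conf rule reacting to a USB attach event could look like:

```
notify 100 {
    match "system"    "USB";
    match "subsystem" "DEVICE";
    match "type"      "ATTACH";
    # Hypothetical handler: log the event; replace with your own script
    action "logger usb device attached: $cdev";
};
```

Rules like this go in a file under /etc/devd/, after which devd is restarted with service devd restart.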
|
How can I find out when a device is connected to my FreeBSD machine? Lets say I plug in a USB device, HDMI device, Bluetooth or something like that.
Can I have a console output to say [device] and gives some output about the device?
|
Find when new hardware is connected on FreeBSD
|
QIIME came out with a new virtualbox image (version 1.5), which works.
If no one finds the answer to the problem above I will close the question in a week.
|
I am running VirtualBox (using the Qiime image http://qiime.org/install/virtual_box.html)
The physical hardware is a 32 core machine. The virtual machine in VirtualBox has been given 16 cores.
When booting I get:
Ubuntu 10.04.1 LTS
Linux 2.6.38-15-server

# grep . /sys/devices/system/cpu/*
/sys/devices/system/cpu/kernel_max:255
/sys/devices/system/cpu/offline:1-15
/sys/devices/system/cpu/online:0
/sys/devices/system/cpu/possible:0-15
/sys/devices/system/cpu/present:0
/sys/devices/system/cpu/sched_mc_power_savings:0

# ls /sys/kernel/debug/tracing/per_cpu/
cpu0 cpu1 cpu10 cpu11 cpu12 cpu13 cpu14 cpu15 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9

# ls /sys/devices/system/cpu/
cpu0 cpufreq cpuidle kernel_max offline online possible present probe release sched_mc_power_savings

# echo 1 > /sys/devices/system/cpu/cpu6/online
-su: /sys/devices/system/cpu/cpu6/online: No such file or directory

So it seems it detects the resources for 16 CPUs, but it only sets one online.
I have tested with another image that the VirtualBox host can run a guest with 16 cores. That works. So the problem is to trouble shoot the Qiime image to figure out why this guest image only detects 1 CPU.
|
VirtualBox guest: 16 CPUs detected but only 1 online
|
When a device that is bound and attached remotely is unplugged, the device is automatically detached on the client and unbound on the host. After that the state is the same as if it was never bound or attached.
The usbip commands for binding (on the host) and attaching (on the client) may be run repeatedly with the same arguments. While this issues an error message on already bound or attached devices, nothing bad happens! So one can just install background scripts that will repeatedly bind and attach the devices. Example scripts and systemd units are provided below. Be sure to change the Hostname and Port IDs to your needs.
Host
Script /opt/usbip/usbip-bind:

#!/bin/bash

SPOOL=/var/spool/usbip/bind

if [[ $1 == "-q" ]]
then
    exec &>/dev/null
fi

touch $SPOOL

while [[ -e $SPOOL ]]
do
    /usr/bin/usbip bind -b 1-1.2.1
    /usr/bin/usbip bind -b 1-1.2.2
    sleep 10
done

/usr/bin/usbip unbind -b 1-1.2.1
/usr/bin/usbip unbind -b 1-1.2.2

exit 0

Systemd unit /etc/systemd/system/usbip-bind.service:
[Unit]
Description=USB-IP Bindings

[Service]
ExecStart=/opt/usbip/usbip-bind -q
ExecStop=/bin/rm /var/spool/usbip/bind ; /bin/bash -c "while [[ -d /proc/"$MAINPID" ]]; do sleep 1; done"

[Install]
WantedBy=multi-user.target

Be sure to make the directory /var/spool/usbip. Then enable and start the unit:

systemctl daemon-reload
systemctl enable usbip-bind
systemctl start usbip-bind

Client
Script /opt/usbip/usbip-attach:

#!/bin/bash

SPOOL=/var/spool/usbip/attach

if [[ $1 == "-q" ]]
then
    exec &>/dev/null
fi

touch $SPOOL

while [[ -e $SPOOL ]]
do
    /usr/bin/usbip attach -r pi -b 1-1.2.1
    /usr/bin/usbip attach -r pi -b 1-1.2.2
    sleep 10
done

/usr/bin/usbip detach -p 0
/usr/bin/usbip detach -p 1

exit 0

Systemd unit /etc/systemd/system/usbip-attach.service:
[Unit]
Description=USB-IP Attach
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/usbip/usbip-attach -q
ExecStop=/bin/rm /var/spool/usbip/attach ; /bin/bash -c "while [[ -d /proc/"$MAINPID" ]]; do sleep 1; done"

[Install]
WantedBy=multi-user.target

Be sure to make the directory /var/spool/usbip. Then enable and start the unit:

systemctl daemon-reload
systemctl enable usbip-attach
systemctl start usbip-attach

Now you may remove the device whenever needed, and at most 20 seconds after plugging it back in, the usbip connection is reestablished.
|
I am using usbip and a raspberry pi to extend the range of a wireless keyboard to a computer that is just a tad too far away for the keyboard to work reliably on its own.
Sometimes the USB receiver of the keyboard is reconnected and used elsewhere, but when it is reconnected to the raspberry pi the USBIP connection is not automatically re-established.
How can I achieve automatic reconnection?
|
Use USBIP for devices that are being removed and reconnected
|
Proper SAS/SATA connectors are hot-plug safe, so as long as you are using those connectors both for data and power (not the usual PC Molex power connector), you won't hurt anything by plugging them in.
|
I have Ubuntu 12.04.1 LTS 64-bit on a PowerEdge 2900. My current setup has two 300GB disks (no RAID), but I want to migrate the system to three new 600GB disks. I'm trying to connect the new disks, make a RAID5 array, and copy my partitions to the new RAID, but I'm not sure if the server has hot-plug support or, in particular, if it's activated.
Looking at the system I get:
admin@host:~$ lsscsi -v
[4:0:0:0] disk HITACHI HUS151414VLS300 A48B /dev/sda
dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:08.0/host4/port-4:0/end_device-4:0/target4:0:0/4:0:0:0]
[4:0:1:0] disk HITACHI HUS151414VLS300 A48B /dev/sdb
dir: /sys/bus/scsi/devices/4:0:1:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:08.0/host4/port-4:1/end_device-4:1/target4:0:1/4:0:1:0]

admin@host:~$ lspci | grep '02:08.0'
02:08.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)

The LSI SAS 1068 has hot-swap support according to its description, but I'm not sure, and I can't power off the system for the migration or check in the BIOS. I'm afraid to just connect a disk in case it damages the controller or the disk itself, so I need a way to check if hot-plug/hot-swap is activated in the system.
|
How to check if hot-swap or hot-plug are activated on my Linux machine
|
That's just the way the PS/2 port works. Unlike USB, PS/2 was not designed to be hot-plugged. If you need hot-plugging capability, use a USB mouse. Otherwise, there is no guarantee that any solution will work consistently.
|
I plugged in a PS/2 mouse while inside my Gnome desktop, but Linux doesn't recognize it.
Linux will only recognize the PS/2 mouse if it is plugged in before booting the machine (like a normal scenario).
In this case, I forgot to plug in the mouse, plugged it in when I got to the desktop, but realized that it doesn't work.
How do I detect PS/2 devices (my mouse) in real time so I don't have to reboot just to use a mouse?
|
How do you force Linux to detect a PS/2 device (e.g. mouse) on demand?
|
Yes, you can find the information in /sys/block/$DEVICE/slaves. If you only have the canonical name, you can use readlink to get the details, e.g.:
devdm="$(readlink -f /dev/mapper/extern-1-crypt)"
dm="${devdm#/dev/}"
ls /sys/block/$dm/slaves/

If you want to remove them all, you can use the sys filesystem directly. A loop is needed, since redirecting into a glob fails when it matches more than one slave:

for s in /sys/block/$dm/slaves/*; do
    echo 1 > "$s/../device/delete"
done
|
I have an external eSATA-hdd on an OpenSUSE 12.2 system. The external hdd has an LVM on a dm-crypt partition.
I mount it by powering it up and then doing
rescan-scsi-bus.sh
cryptsetup -v luksOpen
vgchange -ay
mountNow when I want to power the hdd down, I do
umount
vgchange -an extern-1
cryptsetup -v remove /dev/mapper/extern-1-crypt
echo 1 >/sys/block/sdf/device/delete

Here the device (sdf) is currently hardcoded in the script. Can I somehow deduce it in the script from the VG or the crypto device?
|
Detecting the device of a crypto mount
|
An udev rule applies to the add action by default. The udev rule is on a graphics card, not on a monitor; so it runs when a graphics card is added to the system, which in practice means at boot time.
Plugging in a monitor results in a change action, not an add action. You can observe this by running udevadm monitor and plugging a monitor in. So the udev rule should specify a change action.
KERNEL=="card0", SUBSYSTEM=="drm", ACTION=="change", \
ENV{DISPLAY}=":0", ENV{XAUTHORITY}="/var/run/gdm/auth-for-vazquez-OlbTje/database", RUN+="/usr/bin/arandr"Examples found on the web corroborate my understanding, e.g. codingtony whose monitor-hotplug.sh script may be of interest to you.
The file name under /var/run changes each time you reboot, so you should determine it automatically inside your script. This answer should help.
|
I have setup a basic udev rule to detect when I connect or disconnect a mDP cable.
The file is /etc/udev/rules.d/95-monitor-hotplug.rules
KERNEL=="card0", SUBSYSTEM=="drm", ENV{DISPLAY}=":0", ENV{XAUTHORITY}="/var/run/gdm/auth-for-vazquez-OlbTje/database", RUN+="/usr/bin/arandr"

It should just launch arandr when a mDP cable is connected or disconnected, but nothing happens. I have also reloaded the rules with:

udevadm control --reload-rules

Edit: this is how the problem was solved, with the links provided by @Gilles. I added the following code to my .profile, pointed ENV{XAUTHORITY}="/home/user/.Xauthority", and also added ACTION=="change" to the rules file. After that everything was working as it should. Thanks Gilles.
case $DISPLAY:$XAUTHORITY in
:*:?*)
# DISPLAY is set and points to a local display, and XAUTHORITY is
# set, so merge the contents of `$XAUTHORITY` into ~/.Xauthority.
XAUTHORITY=~/.Xauthority xauth merge "$XAUTHORITY";;
esac
|
udev monitor hotplug rule not running
|
For read/write access you will need a read-write NTFS driver like the ntfs-3g package from the extra repository.
After installation with sudo pacman -S ntfs-3g you are able to mount your NTFS partitions the usual way with sudo mount /path/to/ntfs /mount/point. This is possible due to a symlink from /usr/bin/mount.ntfs to /usr/bin/ntfs-3g.

Note: You need root privileges to mount the filesystem. Requirements for an exception are listed in the ntfs-3g FAQ.

Using the default settings, the NTFS partition will be mounted at boot. Put the following in your /etc/fstab:
/path/to/ntfs /mount/point ntfs-3g defaults 0 0
To be able to read-write with a non-root user, you have to set some additional options (username has to be changed to your username):
/path/to/ntfs /mount/point ntfs-3g uid=username,gid=users,umask=0022 0 0
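One caveat worth adding (my own note, not part of the original answer): with a plain fstab entry like the one above, boot can stall or fail when the external disk is absent, which matches the symptom in the question. Adding nofail avoids that, and on a systemd-based setup x-systemd.automount defers the mount until first access:

```
/path/to/ntfs /mount/point ntfs-3g uid=username,gid=users,umask=0022,nofail,x-systemd.automount 0 0
```

The x-systemd.automount option assumes systemd is the init system; on non-systemd setups, keep just nofail.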
|
I am wondering how I can configure an Arch Linux system to mount an external hard disk when it is plugged in (as opposed to have it plugged in at startup).
In order to to that, I added
/dev/sdb1 /mnt/E auto rw,users,umask=0000 0 0

to my /etc/fstab file.
Although I specified auto, it wouldn't automatically mount the harddisk when I plug it in. In fact, the system wouldn't even boot up without the harddisk being plugged in.
|
How do I configure an Arch Linux system to automatic mount an external harddisk when it is plugged in?
|
In general, with modern hardware, a modern kernel, and a modern distribution, hardware recognition should happen automatically.
There is, however, a program called "kudzu" which will do what you want — attempt to detect new hardware, and add the appropriate configuration. I think, because of the changes in modern systems, it's not really maintained anymore (and for several years, I think it has been more trouble than it's worth). But if you are on an older system, or building something yourself and for whatever reason don't want to do it the modern way, you might find that useful.
|
cfgmgr is a command under AIX/ksh that checks for new hardware, e.g. new HDDs that have been added without a shutdown.
Question: Are there any similar commands under Linux, for when "fdisk -l" doesn't recognize a new HDD until after a reboot? Or is Linux different from AIX, so that such a command is not needed?
|
cfgmgr like command under Linux?
|
Not really an answer, but I wanted to add something. You must have tried these already, but see below just in case you missed them.
If you have top(1), you can check their resource usage. Or you can use /proc/[pid]/status etc. to check them. Either way, you'd see they aren't resource hungry, and are mostly in the sleeping (S) state.
You'd also see from kernel documentation and configurations that it is related to power management (SMP suspend/resume), so consider your power management requirements and how disabling CPU hotplug can affect them.
Kernel configuration:
Kernel documentation says CONFIG_HOTPLUG_CPU needs to be enabled for CPU hotplug to work.

Did you do the disabling from menuconfig?
If not, did you check menuconfig to see if it has been disabled after your changes?

If you try disabling it from menuconfig, you'll first have to disable a series of other configs, and be able to build the kernel successfully after disabling them. For example, this is what menuconfig shows on my platform:
CONFIG_HOTPLUG_CPU:
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
Symbol: HOTPLUG_CPU [=y]
Type : bool
Defined at arch/arm64/Kconfig:985
Prompt: Support for hot-pluggable CPUs
Location:
-> Kernel Features
Selects: GENERIC_IRQ_MIGRATION [=y]
Selected by [y]:
- PM_SLEEP_SMP [=y] && SMP [=y] && (ARCH_SUSPEND_POSSIBLE [=y] || ARCH_HIBERNATION_POSSIBLE [=y]) && PM_SLEEP [=y]So, I simply cannot disable it from menuconfig unless I address the
Selected by [y]:
- PM_SLEEP_SMP [=y] && SMP [=y] && (ARCH_SUSPEND_POSSIBLE [=y] || ARCH_HIBERNATION_POSSIBLE [=y]) && PM_SLEEP [=y]that enables it.
You'll have similar constraints. You can try addressing them and finally disable CONFIG_HOTPLUG_CPU, but I doubt you'll be able to build the kernel after that because some drivers don't handle the dependencies well (but things worked for you it seems).
|
I am working on an embedded Linux system, which is using kernel-5.10.24.
As the system's resources are limited, I want to minimize CPU/memory/storage usage.
From ps -ax I found 2 kernel threads as follows,
  14 root      0:00 [cpuhp/0]
  15 root      0:00 [cpuhp/1]

I think they are used for CPU hotplugging, and there is NO CPU hotplugging use case in this system, so I want to disable the feature and not create these 2 kernel threads.
I tried to disable this configuration by force (Removing select SYS_SUPPORTS_HOTPLUG_CPU from the arch/ARM/Kconfig and others).
But after deployed the new kernel, these 2 kernel threads are still there.
By checking the code, it seems that these 2 threads are created regardless of CONFIG_HOTPLUG_CPU and CONFIG_SYS_SUPPORTS_HOTPLUG_CPU, which means that when SMP is configured, these 2 threads are ALWAYS there!
So I am not sure if there is a way to disable creation of these 2 kernel threads. If no, I have to live with them, assuming they will NOT take too much CPU and memory for running.
Updated with kernel menuconfig based on dhanushka's comment
Symbol: HOTPLUG_CPU [=y]
Type  : bool
Defined at arch/mips/Kconfig:2942
  Prompt: Support for hot-pluggable CPUs
  Depends on: SMP [=y] && SYS_SUPPORTS_HOTPLUG_CPU [=y]
  Location:
    -> Kernel type
(2)   -> Multi-Processing support (SMP [=y])
  Selected by [y]:
  - PM_SLEEP_SMP [=y] && SMP [=y] && (ARCH_SUSPEND_POSSIBLE [=y] || ARCH_HIBERNATION_POSSIBLE [=y]) && PM_SLEEP [=y]

The same as dhanushka's comment.
I will try to disable it and update this question.
And as I said, the cpuhp/0 and cpuhp/1 threads themselves do not seem to be something that can be disabled.
|
How to disable CPU hotplug feature (and kernel thread) in Linux-5.10.24
|
Wine has a wine eject command to address this. When it's time to switch disks, simply fire up another terminal and wine eject, then plug in the second disk.
It is noteworthy that the appropriate $WINEPREFIX must be set for this command to work properly.
|
Right now I'm trying to install Battlefield 2 from CD-ROM on my Linux computer (I know Battlefield 2 is a little old now, but I couldn't care less). Of course, it needs to run under Wine, and luckily for me Wine isn't the issue yet. The issue is that once the installer asks for Disk 2 to be inserted, the new disc doesn't get detected. I'm fairly confident the second disc just isn't being recognized as a new disk somewhere in the kernel, because the first disc isn't being unmounted properly; but I can't unmount it properly either, since that would require killing the installer. I tried a bunch of AHCI and SCSI tricks, but to no avail. If lsblk can tell the disk is different, there should be a way to tell unaware parts of the system about it, but I'm not sure how. Help pls
|
Hotswapping CDs on Linux
|
In order to change the VCPU allocation, you do
sudo virsh setvcpus [vm_name] [num_vcpus] --current

From within the machine, running
sudo udevadm monitor -kYou'll see a series of messages similar to
KERNEL[836.518069] add /devices/system/cpu/cpu4 (cpu)
KERNEL[836.518095] bind /devices/system/cpu/cpu4 (cpu)
KERNEL[836.526936] add /module/intel_rapl_perf (module)
KERNEL[836.534023] remove /module/intel_rapl_perf (module)
KERNEL[836.561229] add /module/intel_uncore (module)
KERNEL[836.568971] remove /module/intel_uncore (module)
KERNEL[836.578821] add /module/intel_cstate (module)
KERNEL[836.592990] remove /module/intel_cstate (module)
KERNEL[836.603800] add /module/intel_rapl (module)
KERNEL[836.604120] add /devices/virtual/powercap/intel-rapl (powercap)
KERNEL[836.604967] remove /devices/virtual/powercap/intel-rapl (powercap)
KERNEL[836.613034] remove /module/intel_rapl (module)
|
Is it possible to change the number of VCPUs on a KVM virtual machine on Linux without stopping it first? The Linux kernel has calls for addition and removal of CPUs (CPU hotplug in the Kernel) for physical machines (on hardware that supports that) but I can't find anything on VMs and how to allocate more/fewer resources to running machines.
|
Change CPU count on live Linux VM
|
From the kernel documentation:
The hotplug mechanism asynchronously notifies userspace when hardware is
inserted, removed, or undergoes a similar significant state change.
There is an event variable for modules, called DRIVER, that suggests a driver for handling the hotplugged device.
|
Many questions are tagged hot-plug. However, no wiki is found in: Tag info
I've read this:

Modularized USB drivers are loaded by the generic /sbin/hotplug support in the kernel, which is also used for other hotplug devices such as CardBus cards.

Therefore, could we say that hotplug is responsible for loading/unloading modules automatically?
|
What's hotplug?
|
I solved my problem. After carefully inspecting the boot sequence, I created my script in /etc/init.d and made a symlink in /etc/rc.d; after that, my script ran at boot.
|
I want to run some scripts at boot. I tried a lot of things but couldn't achieve it.
Here is the OpenWRT boot sequence : https://openwrt.org/docs/techref/preinit_mount
After taking a look at this link i tried to make some changes to my /etc/init.d , i added my script in /etc/init.d. It looks like this :
avahi-daemon dropbear log rpcd system
boot firewall mjpg-streamer samba telnet
cron **gpio.sh** mountd sysctl uhttpd
dnsmasq led network sysfixtime umount
done linkit odhcpd sysntpd yunbridgegpio.sh is my script. And it doesn't do anything. Am i missing something here ? Anybody can help ?
|
Hotplug at boot in OpenWRT
|
It is better to explicitly say that this is a shell script, and to use "${@}" instead of $*:
#!/bin/sh
mount -o remount,rw /
echo ${@} >/tmp/log.txt
echo >>/tmp/log.txt
env >>/tmp/log.txt # if /tmp is writable or tmpfs
exec /sbin/sbin/hotplug "${@}"

If the system is sane, this should work. Many embedded systems, however, are not. Beware.
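The difference matters as soon as an argument contains whitespace, which hotplug arguments can. A quick standalone illustration (my own sketch, not part of the hotplug script itself):

```shell
# "$@" keeps each argument intact; unquoted $* re-splits on whitespace.
show() { printf '<%s>' "$@"; echo; }

set -- "one two" three
show "$@"   # <one two><three>  -- two arguments preserved
show $*     # <one><two><three> -- boundaries lost
```

This is why passing $* to the real /sbin/hotplug can silently corrupt events whose variables contain spaces.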
|
I have a linux media player that was very common before android's age. It is a MIPS running Linux Venus 2.6.12.6 and has 2 sata, 2 usb and 1 sdcard port. Since the flash memory is very limited, I installed optware, ssh and nano on sdcard and put in
ln -s /tmp/usbmounts/sdb1/opt /opt

The sdcard can remain plugged in for good, since I won't use the sdcard for media. It works very well if I have no other usb device plugged in, or if I plug in other usb devices after boot. But if I plug in another usb device before boot, the sdcard port always gets mounted to sdc or sdd, and of course the link won't work. I (kind of) resolved this by putting a script at boot to locate /opt and link accordingly. However, I found that there is other activity that can change the mount point after boot.
The player mainly runs a software called Dvdplayer. This software has a menu on screen for user to choose media to play. Every time when this menu is called up, the mount point seems to change, EVEN WITHOUT any additional usb plug in. Say if after boot, my sdcard is mounted to sdb, after calling up the menu, it changed to sdc (sdb has nothing). Calling up the menu again, it becomes sdd (sdb and sdc has nothing). Call the menu the 3rd time, it goes back to sdc and then to and fore between sdc and sdd, never sdb again.
Searching the internet, I understand this is hotplugging, and I was able to locate the software. But unlike the usual Linux hotplug, the software is an executable ELF file instead of a script, and I cannot find any of the usual hotplug environment variables, such as SUBSYSTEM, ACTION, PRODUCT, TYPE, INTERFACE, DEVICE, etc. Instead, it has a sequence number in /sys/kernel/hotplug_seqnum. It has empty folders like /tmp/lock/hotplug/convert_tmp, ...mount_tmp, ...rename_tmp and ...volume_lock. mount_tmp is the only folder whose date changes, but it is always empty.
I've tried to trap the hotplug by moving the /sbin/hotplug to /sbin/sbin/hotplug and put in my own hotplug script in /sbin/hotplug. The script looks like this
mount / -o remount,rw
echo $* >> /usr/local/etc/init.d/hotplug.log
/sbin/sbin/hotplug $*

But it doesn't work: after calling the menu, nothing was logged and all plug-in mounts were lost.
All I want to do now is trap the hotplug activity and relink my /opt correctly. I'd appreciate any help, or a better method of ensuring the correct link for /opt.
|
Embedded linux hotplug changed mount point
|
There is no standard indentation in shell scripts that matters.
Slightly less flippant answer:

Pick a standard in your team that you can all work to, to simplify things.
Use something your editor makes easy so you don't have to fight to stick to the standard.
|
The Java community uses 4 spaces as the unit of indentation. 1
The Ruby community has generally agreed upon 2 spaces. 2
What's the standard for indentation in shell scripts? 2 or 4 spaces or 1 tab?
|
What's the standard for indentation in shell scripts? [closed]
|
Once you have selected the block, you can indent it using Alt + } (not a key literally labelled }, but whatever key combination is necessary to produce a closing curly bracket on your keyboard layout).
|
Selecting lines in nano can be achieved using Esc+A. With multiple lines selected, how do I then indent all those lines at once?
|
How to indent multiple lines in nano
|
The first command here emulates the formatting you see in vim. It intelligently expands tabs to the equivalent number of spaces, based on a tab-STOP (ts) setting of every 4 columns.
printf "ab\tcd\tde\n" |expand -t4 Output
ab cd deTo keep the tabs as tabs and have the tab STOP positions set to every 4th column, then you must change the way the environment works with a tab-char (just as vim does with the :set ts=4 command)
For example, in the terminal, you can set the tab STOP to 4 with this command;
tabs 4; printf "ab\tcd\tde\n" Output
ab cd de
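If you want the same rendering every time you view a file, one option is a small wrapper function (the name cat4 is made up here):

```shell
# Hypothetical wrapper around cat: render tabs as 4-column tab stops.
# expand rewrites each tab into the number of spaces needed to reach
# the next multiple-of-4 column, so the output contains spaces, not tabs.
cat4() {
    cat "$@" | expand -t4
}
```

Then cat4 file behaves like cat file viewed with ts=4.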
|
When I am in vim I can change the tab size with the following command:
:set ts=4

Is it possible to set tab size for cat command output too?
|
Change tab size of "cat" command
|
This has nothing to do with the noai option. What you are experiencing is a little trouble copy-pasting a load of text with existing indents into vim.
What I usually do (I have this 'problem' a lot), is bind F4 to invpaste and then, before I paste stuff into vim, hit that key. It makes the problem go away.
nnoremap <F4> :set invpaste paste?<CR>

Read more about this using

:help paste

inside vim
|
I am using vim 7.2 from putty terminal.
Even if I run set noai, vim still seems to try to indent the code.
I am copying my code from Notepad++ to vim.
The following is from Notepad++, and the following is what I got in vim. I don't have any tabs in my file.
As a workaround I am opening the old vi, running set noai, pasting, saving, and opening the file in vim again.
Any suggestion on how to correct this behavior?
|
vim auto indenting even after setting noai option
|
cal | sed 's/^/     /'

Explanation

cal |: pipe the output of cal to…
sed 's/^/     /': sed, which will look for the start of lines ^, replacing with spaces. You can change the number of spaces here to match the required formatting.

Edit
To preserve the highlighting of the current day from cal, you need to tell it to output "color" (highlighting) to the pipe. From man cal
--color [when]
    Colorize output. The when can be never, auto, or always. Never will turn
    off colorizing in all situations. Auto is default, and it will make
    colorizing to be in use if output is done to terminal. Always will allow
    colors to be outputed when cal outputs to pipe, or is called from a
    script.

N.B. there seems to be a typo in the manual; I needed a = for it to work. Hence, the final command is

cal --color=always | sed 's/^/     /'
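The same trick generalizes to any command. A small helper (the name indent_by is made up here; the width is passed as an argument) might look like:

```shell
# Prefix every line of stdin with N spaces.
# printf '%*s' N '' expands to exactly N space characters.
indent_by() {
    sed "s/^/$(printf '%*s' "$1" '')/"
}
```

e.g. cal --color=always | indent_by 5 from a .bashrc welcome message.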
|
Say I'm writing a .bashrc file to give me some useful information in my login terminals and I'm telling it to run the cal command (a nice one). How would I go about shifting the calendar produced to the right to match the formatting of the rest of my .bashrc "welcome message"?
|
Shifting command output to the right
|
You can use something like this in your ~/.vimrc to adjust to use spaces/tabs as appropriate:
" By default, use spaced tabs.
set expandtab" Display tabs as 4 spaces wide. When expandtab is set, use 4 spaces.
set shiftwidth=4
set tabstop=4function TabsOrSpaces()
" Determines whether to use spaces or tabs on the current buffer.
if getfsize(bufname("%")) > 256000
" File is very large, just use the default.
return
endif let numTabs=len(filter(getbufline(bufname("%"), 1, 250), 'v:val =~ "^\\t"'))
let numSpaces=len(filter(getbufline(bufname("%"), 1, 250), 'v:val =~ "^ "')) if numTabs > numSpaces
setlocal noexpandtab
endif
endfunction" Call the function after opening a buffer
autocmd BufReadPost * call TabsOrSpaces()
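The same heuristic can be sketched outside Vim as a shell function, if you want to check a file from the command line (the function name and the 250-line window simply mirror the vimscript approach):

```shell
# Guess a file's indentation style by comparing the number of
# tab-indented and space-indented lines in its first 250 lines.
indent_style() {
    tab=$(printf '\t')
    ntabs=$(head -n 250 -- "$1" | grep -c "^$tab")
    nspaces=$(head -n 250 -- "$1" | grep -c '^ ')
    if [ "$ntabs" -gt "$nspaces" ]; then
        echo tabs
    else
        echo spaces
    fi
}
```

Usage: indent_style somefile prints either "tabs" or "spaces".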
|
Sometimes I edit others source code where the prevailing style is to use tabs. In this case, I want to keep the existing convention of using literal tabs.
For files I create myself, and files that use spaces as the prevailing indent style, I wish to use that instead.
How can I do this in vim?
|
In Vim, how can I automatically determine whether to use spaces or tabs for indentation?
|
There are multiple ways to achieve what you want. In order, Kate is doing the following:Kate reads the settings that are configured globally in the config dialog in the Indentation tab.
Kate reads optional session data, i.e. if you use sessions and manually chose settings in a file, these settings should be restored again when opening the file.
Kate reads the "Filetype" configuration: The filetype, also called mode, can be configured in Settings > Configure Kate > Open/Save > Modes & Filetypes tab. Choose your filetype, e.g. Scripts/Python and then add a modeline like this: kate: indent-pasted-text false; indent-width 4;
Kate searches for document variables in .kateconfig files recursively upwards. If found, it will apply these settings
Kate reads document variables in the document itself. So in a Python file, you can simply add a comment in the first or last 10 lines of the file and write e.g.:

# kate: indent-pasted-text false; indent-width 4;

All this is also described in the Kate Handbook.
|
My goal is to set Kate up to work properly on Python files but to use different settings (tabs not spaces) on other documents. I'm sure others are doing this, but I can't figure out a convenient solution. I appreciate any advice.
Kate has settings for indentation here:Click the Settings menu
Click "Configure - Kate"
On the right expand "Editor"
Click "Indentation"One option is "Default indentation mode". One choice for that setting is Python. However, I cannot find where to set (or even display) the options used for the Python choice.
Furthermore, it is not clear what is the interaction between "Default indentation mode" and the explicit settings for indentation on that page. Does one override the other?
|
How do I make Kate indent with spaces on Python files but use tabs for text files and other files?
|
Looking through the man page for indent and the official GNU documentation I only see 2 methods for controlling this behavior.
The environment variables:

SIMPLE_BACKUP_SUFFIX
VERSION_WIDTH

I tried various tricks of setting the width to 0 and also setting SIMPLE_BACKUP_SUFFIX to nothing (""). Neither had the desired effect. I think your only course of action would be to create a shell alias and/or function wrapping the indent command to do what you want.
Example
$ function myindent() { indent "$@"; rm "$@"~; }

Then when I run it:

$ myindent ev_epoll.c

I get the desired effect:

$ ls -l | grep ev_epo
-rw-r--r--. 1 saml saml 7525 Dec 13 18:07 ev_epoll.c
|
I am using GNU Indent to format C code in my project.
By default backup files are created ending with a ~.
I don't want to have any backup files created, is there a way to disable it?
|
Disable GNU Indent backup files
|
The Tabularize plugin for vim can do exactly what you want. It comes down to typing Tabularize /:
This will probably not keep the indentation on the left however.
Edit on your updated question:
I was not able to do that with Tabular directly, but I was able to do this with a second command, which is a search and replace on a range:
:%s/\([ ]*\)[[:alpha:][:punct:]]*[ ]*/\0\1/

This searches for a certain amount of spaces in front of the :, and pastes them again just before the colon.
|
I often run into situations like this:
title : Jekyll Bootstrap
tagline: Site Tagline
author :
name : Name Lastname
email : [emailprotected]
github : username
twitter : username
feedburner : feedname

Where the arguments are not lined up well, is there a standard way in vim to have it formatted with each of the arguments aligned to the nearest indent (where an indent is defined as 2 spaces), without having to go through it line by line, such as in the following:
title : Jekyll Bootstrap
tagline : Site Tagline
author :
name : Name Lastname
email : [emailprotected]
github : username
twitter : username
feedburner: feedname

UPDATE:
I believe tabular.vim is the plugin I am looking for, but I am having a difficult time forming a regular expression which would take into account the number of spaces at the beginning of the line when deciding whether something should be part of a block, i.e. :Tabularize /: produces:
title : Jekyll Bootstrap
tagline : Site Tagline
author :
name : Name Lastname
email : [emailprotected]
github : username
twitter : username
feedburner: feedname

There is an example in the documentation where the following is achieved via a regular expression:
abc,def,ghi
a,b
a,b,c

:Tabularize /^[^,]*\zs,/r0c0l0

abc,def,ghi
  a,b
  a,b,c

But I am unsure how to formulate this when considering each line with the same number of spaces in front as part of the same block, while still evaluating sub-blocks, such as in the following example, which is more complex than my original one:
comments :
provider : disqus
disqus :
short_name : jekyllbootstrap
livefyre :
site_id : 123
intensedebate :
account : 123abc
facebook :
appid : 123
num_posts : 5
width : 580
colorscheme : light

would be transformed by :Tabularize with some_regular_expression_I_cant_figure_out to:
comments :
provider : disqus
disqus :
short_name : jekyllbootstrap
livefyre :
site_id : 123
intensedebate :
account : 123abc
facebook :
appid : 123
num_posts : 5
width : 580
colorscheme : light
|
Indent the middle of multiple lines
|
With awk:
awk '
/^end/      { sub("  ", "", indent) }    # Or { indent = substr(indent, 3) }
            { print indent $0 }
/^describe/ { indent = indent"  " }
' <file
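A quick demonstration, assuming a two-space indent unit (cf. the substr(indent, 3) alternative) and a made-up three-line sample:

```shell
# Run the awk indenter over a tiny sample: the describe line stays at
# column one, the body gains two spaces, and end returns to column one.
printf 'describe "a" do\nstuff\nend\n' | awk '
/^end/      { sub("  ", "", indent) }
            { print indent $0 }
/^describe/ { indent = indent"  " }
'
```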
|
How can I indent source code based on a couple of simple rules?
As an example, I've used sed and awk to transform a selenium HTML source table to the following rspec-like code. How could I consistently indent lines between describe and end? Ideally I would like to be able to add indenting to
describe "Landing" do
visit("http://some_url/url_reset")
visit("http://some_url/url_3_step_minimal_foundation")
# comments
expect(css_vehicle1_auto_year) to be_visible
end
describe "Stage1" do
wait_for_element_present(css_vehicle1_auto_year option_auto_year)
select(auto_year, from: css_vehicle1_auto_year)
...
end
describe "Stage2" do
fill_in(css_driver1_first_name, with: driver1_first_name)
fill_in(css_driver1_last_name, with: driver1_last_name)
...
submit(css_policy_form)
expect(css_vehicle1_coverage_type) to be_visible
end
describe "Stage3" do
wait_for_element_present(css_vehicle1_coverage_type)
select(coverage_type, from: css_vehicle1_coverage_type)
find(css_has_auto_insurance).click
...
submit(css_policy_form)
expect(css_quotes) to be_visible
end

so I have
describe "Landing" do
visit("http://some_url/url_reset")
visit("http://some_url/url_3_step_minimal_foundation")
# comments
expect(css_vehicle1_auto_year) to be_visible
end
describe "Stage1" do
wait_for_element_present(css_vehicle1_auto_year option_auto_year)
select(auto_year, from: css_vehicle1_auto_year)
...
end
describe "Stage2" do
fill_in(css_driver1_first_name, with: driver1_first_name)
fill_in(css_driver1_last_name, with: driver1_last_name)
...
submit(css_policy_form)
expect(css_vehicle1_coverage_type) to be_visible
end
describe "Stage3" do
wait_for_element_present(css_vehicle1_coverage_type)
select(coverage_type, from: css_vehicle1_coverage_type)
find(css_has_auto_insurance).click
...
submit(css_policy_form)
expect(css_quotes) to be_visible
end

The source code for the existing sed and awk's is at https://jsfiddle.net/4gbj5mh4/ but it's really messy and not what I am asking about. I've got the hang of simple sed and awk's but not sure where to start with this one.
It would be great if it could also handle recursion. Not essential for me but the generalization is probably useful to others using this question, i.e.
describe "a" do
describe "b" do
stuff
more stuff
end
end

to
describe "a" do
describe "b" do
stuff
more stuff
end
end

btw I am also doing this custom conversion partly because I've used variables as page objects in selenium and they bork the built-in export to rspec.
|
How to use awk to indent a source file based on simple rules?
|
You can run any normal command with :normal, e.g. :normal =G.
If the files are C source code, it may be easier to use the external program indent.
|
I want to indent multiple files which are poorly indented and indent them properly as would vim do when I type gg=G.
Is there some way to enter the = command or its alias in command mode, i.e. after a :?
If that is possible I can use the bufdo command like in this question.
|
Indenting multiple files
|
Being a Kate developer, the answer is as follows:
Kate's indentation system supports the concept of indentation and alignment:

    Alternatively, an array of two elements can be returned:

    return [ indent, align ];

    In this case, the first element is the indentation depth as above with the same meaning of the special values. However, the second element is an absolute value representing a column for "alignment". If this value is higher than the indent value, the difference represents a number of spaces to be added after the indentation of the first parameter. Otherwise, the second number is ignored. Using tabs and spaces for indentation is often referred to as "mixed mode".

So theoretically it works. However, in practice the "C Style" indenter and most other indenters do not support this. Instead, they just return the indentation level without distinguishing indentation from alignment.
In other words: The feature you want is not implemented.
The good news is that all these indenters are written in JavaScript and can therefore be changed very easily. Contributions are always welcome at [emailprotected]. So if you are interested in working on this, please contact us!
|
When indenting a block of code in Kate (3.11.2), spaces used for alignment are replaced by tabs, ruining all alignments and putting me in the hell of restoring all these spaces.
Example:
if (true)
{
—→$foo = 'bar'.
—→•••••••'baz';
}

(—→ are tabs, • spaces)
I indent using two characters wide tabs. The problem is when I select these lines and press the Tab key to add an indentation level: it replaces groups of two spaces by one tab:
—→if (true)
—→{
—→—→$foo = 'bar'.
—→—→—→—→—→'baz';
—→}

removing the last (odd) space. This is wrong since tab width is undefined and must be able to vary without breaking the code presentation.
In my settings (Editor Component → Editing → Indentation), I set Indent using on Tabulators and Spaces but it doesn't save it and returns immediately on Tabulators.
Is it a bug? Or is my Kate misconfigured?
|
Kate replaces alignment spaces by tabs
|
The right answer is not to use tabs. But ok, just for the sake of knowing how it's done…
CPerl uses the default Emacs settings for tab usage, and the Emacs default is to use tabs. So you're already getting tabs. Note that the default amount of indentation is 2 spaces, and the default tab width is 8 columns, so you need at least 4 levels of indentation to see a tab.
If you want to change the tab width to 2 columns, set the tab-width variable, but note that your files will look strange to other people with a different tab width. If you want to change the amount of indentation per level to 8 columns, set cperl-indent-level.
If you exchange files with other people, it's best to put these settings in a file variable (and not to use tabs, of course). For example:
# Local Variables:
# tab-width: 8
# cperl-indent-level: 8
# End:

I think the equivalent vi modeline is # vi: ts=8 sw=8:.
|
Is there a way to make cperl mode in emacs use all tabs for indentation instead of spaces? I've tried setting indent-tabs-mode, and cperl-tab-always-indent. Here is my .emacs file:
(defalias 'perl-mode 'cperl-mode)
(setq cperl-tab-always-indent t)
(setq inhibit-splash-screen t)
(cua-mode t)
(setq cua-auto-tabify-rectangles nil)
(transient-mark-mode 1)
(setq cua-keep-region-after-copy t)
|
Emacs cperl mode - how to use tabs for indentation instead of spaces
|
One way using perl:
perl -pe 'if ($. == 1) { m/^(\s*)/; $space = $1 || q{}; next } s/^\s*/$space/' infile

It yields:
   x=1+2+3+4+
   5+6+7+8
   +9+10+12
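If you prefer awk over perl, an equivalent sketch (same idea: remember the first line's leading whitespace and re-apply it to every later line; the function name is made up):

```shell
# awk version: capture the first line's leading whitespace with match(),
# then strip and replace the leading whitespace of every following line.
reindent_like_first() {
    awk 'NR == 1 { match($0, /^[ \t]*/); ind = substr($0, 1, RLENGTH); print; next }
                 { sub(/^[ \t]*/, ""); print ind $0 }' "$@"
}
```

Usage: reindent_like_first infile, or pipe text into it.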
|
How can I indent a file to match its first line?
Example:
A file containing
   x=1+2+3+4+
5+6+7+8
 +9+10+12

should be converted to

   x=1+2+3+4+
   5+6+7+8
   +9+10+12

I need this inside a shell-script on a Linux system. One-liners are preferred.
|
Indent like first line
|
Vim is acting as if you had typed all of your pasted code by hand, so Vim will add additional indentation and otherwise change whitespace as it normally would, such as with your autoindent setting. To paste code in Vim:

:set paste to enable paste mode.
Paste your code.
:set nopaste to disable paste mode so your normal typing will work as expected again.

And see :help paste for more information, including which options are disabled/altered when paste mode is on.
It's possible to set up mappings for this sort of thing if you do it a lot. See :help imap for more information on that.
|
When I'm in Insert mode in vim and press Shift+Insert to paste my code into my file, vim mangles my indentation, such as:

Question: How can I solve this problem?
|
vim disassembles my indentation [duplicate]
|
I found a solution using a version of tree which is newer than what was installed on my system. Version 1.8.0 of tree (released 11/16/2018) introduced the --fromfile parameter, which reads a directory/file listing from a file (or stdin) rather than the filesystem itself and generates a tree representation:
$ grep -rl 'foobar' ./ |tree --fromfile -F .
./
└── ./
├── dirA/
│ ├── dirA.A/
│ │ ├── abc.txt
│ │ ├── def.txt
│ │ └── dirA.A.A/
│ │ └── ghi.txt
│ └── dirA.B/
│ └── jkl.txt
└── dirB/
└── mno.txt6 directories, 5 filesFor reference:http://mama.indstate.edu/users/ice/tree/tree.1.html
http://mama.indstate.edu/users/ice/tree/changes.html
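If your tree is older than 1.8.0, a rough sed-based fallback is possible (it shows only indentation depth, without the merged directory headers real tree output has; the function name is made up):

```shell
# Replace each "component/" of a path with two spaces, so nesting depth
# becomes indentation; sort first so siblings end up grouped together.
indent_paths() {
    sort | sed 's|[^/]*/|  |g'
}
```

Usage: grep -rl foobar ./ | indent_paths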
|
I'm searching a large number of text files which are organized in various subdirectories. I can run a command such as grep -lr foobar ./, and I get results like the following:
./dirA/dirA.A/abc.txt
./dirA/dirA.A/def.txt
./dirA/dirA.A/dirA.A.A/ghi.txt
./dirA/dirA.B/jkl.txt
./dirB/mno.txt

I would like some way to display these in a visual tree, similar to how the tree command works. Something roughly like this:
./
dirA/
dirA.A/
abc.txt
def.txt
dirA.A.A/
ghi.txt
dirA.B/
jkl.txt
dirB/
    mno.txt

It seems like it'd be trivial to do this in some Python script with a stack, but I'd really like some way to do this straight from bash if there's a way to do it. So I guess I'm looking for a way to either (a) format/transform the output of grep, OR (b) some other generic "indent-by-common-prefix" utility that I've so far been unable to find.
|
Display `grep -lr` results as a tree
|
It uses mixed spaces and (8-space) tabs to indent with. You can see that with this minimal example:
int main() {
if (true) {
while (false) {
puts("");
}
}
}

If I run that through indent -kr and then hexdump -C, I get this:
$ indent -kr < mini.c |hexdump -C
00000000 69 6e 74 20 6d 61 69 6e 28 29 0a 7b 0a 20 20 20 |int main().{. |
00000010 20 69 66 20 28 74 72 75 65 29 20 7b 0a 09 77 68 | if (true) {..wh|
00000020 69 6c 65 20 28 66 61 6c 73 65 29 20 7b 0a 09 20 |ile (false) {.. |
00000030 20 20 20 70 75 74 73 28 22 22 29 3b 0a 09 7d 0a | puts("");..}.|
00000040 20 20 20 20 7d 0a 7d 0a | }.}.|
00000048

You can see that the while is preceded by a single 09 (horizontal tab) byte, while puts is preceded by a tab and four spaces (20). The default is similar:
00000000 69 6e 74 0a 6d 61 69 6e 20 28 29 0a 7b 0a 20 20 |int.main ().{. |
00000010 69 66 20 28 74 72 75 65 29 0a 20 20 20 20 7b 0a |if (true). {.|
00000020 20 20 20 20 20 20 77 68 69 6c 65 20 28 66 61 6c | while (fal|
00000030 73 65 29 0a 09 7b 0a 09 20 20 70 75 74 73 20 28 |se)..{.. puts (|
00000040 22 22 29 3b 0a 09 7d 0a 20 20 20 20 7d 0a 7d 0a |"");..}. }.}.|
00000050

though here, only the innermost braces and puts get a tab.
You can use the -nut/--no-tabs option to use spaces everywhere:
$ indent -kr -nut fizzbuzz.c

Alternatively, you can configure your editor and/or terminal to use 8-wide tabs instead of 4 if sticking with the original indentation is important. The expand command may help to convert existing files you don't want to re-indent.
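As a sketch of that last suggestion, converting the tabs in an existing file with expand (the file name here is illustrative, and the demo writes its own sample file first):

```shell
# Rewrite a file in place, turning tabs into spaces at 8-column stops.
printf '\twhile (x) {\n' > /tmp/demo.c        # sample file containing a tab
expand -t8 /tmp/demo.c > /tmp/demo.c.tmp && mv /tmp/demo.c.tmp /tmp/demo.c
```

(expand cannot write to the file it is reading, hence the temp file.)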
|
When I try to format my C code using GNU Indent, it doesn't seem to deal with multiple levels of nested indentation. Specifically, it seems to collapse the second level of indentation.
For example, if this is the code I start with:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int n;

    if (argc > 1) {
printf("# of args: %d\n", argc);
    }

    for (n = 1; n <= 15; n++) {
if (n % 3 == 0) {
printf("fizz %d\n", n);
} else if (n % 5 == 0) {
printf("buzz %d\n", n);
} else if (n % 3 == 0 && n % 5 == 0) {
printf("fizzbuzz %d\n", n);
} else {
printf("%d\n", n);
}
    }

    return 0;
}

If I run indent -kr fizzbuzz.c, I get this:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int n;

    if (argc > 1) {
printf("# of args: %d\n", argc);
    }

    for (n = 1; n <= 15; n++) {
if (n % 3 == 0) {
printf("fizz %d\n", n);
} else if (n % 5 == 0) {
printf("buzz %d\n", n);
} else if (n % 3 == 0 && n % 5 == 0) {
printf("fizzbuzz %d\n", n);
} else {
printf("%d\n", n);
}
    }

    return 0;
}

And if I run it with just the defaults (indent fizzbuzz.c), I get this:
#include <stdio.h>

int
main (int argc, char *argv[])
{
  int n;

  if (argc > 1)
{
printf ("# of args: %d\n", argc);
    }

  for (n = 1; n <= 15; n++)
{
if (n % 3 == 0)
{
printf ("fizz %d\n", n);
}
else if (n % 5 == 0)
{
printf ("buzz %d\n", n);
}
else if (n % 3 == 0 && n % 5 == 0)
{
printf ("fizzbuzz %d\n", n);
}
else
{
printf ("%d\n", n);
}
    }

  return 0;
}

Seems like if it does this out-of-the-box, lots of people would be asking about it, because if it's not a bug, it seems like a really strange way to format your code. Why does it do this?
I'm using version 2.2.11 of GNU Indent.
|
Why does GNU Indent collapse one level of indentation?
|
As I understand it, the claim is being made specifically in the context of here-documents, and the <<- form specifically strips leading tabs:If the redirection operator is "<<-", all leading <tab> characters shall be stripped from input lines and the line containing the trailing delimiter.Note that the second script in that answer doesn’t use tabs, which would be surprising if the author considered that indentation was required to use tabs.
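A small demonstration of that behaviour (the script is written to a file so the literal tab survives):

```shell
# <<- strips leading tab characters from the here-document body and from
# the delimiter line; plain << would keep them.
printf 'cat <<- EOF\n\tindented body line\n\tEOF\n' > /tmp/heredoc_demo.sh
sh /tmp/heredoc_demo.sh     # prints: indented body line
```

Note that only tabs are stripped; space indentation is left alone, which is why here-documents are the one place tabs are genuinely special.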
|
I recently came across the following statement (source; emphasis added):For shell scripts, using tabs is not a matter of preference or style; it's how the language is defined.I am trying to make sense of this claim. Of course, it is somewhat loosely worded1, but I would like to know if there is any truth to it.
In particular, I would like to know if the official documentation for either bash or zsh (the two shells I routinely write scripts for) say anything approaching a mandate or recommendation to use tabs for indentation of source code. (I would appreciate explicit references to the supporting paragraphs in this documentation.)
(FWIW, let me point out that I am aware of the fact that, in practice, both bash and zsh readily interpret scripts that are not indented exclusively with tabs. Therefore, I don't expect the documentation for either shell to go much further than a strong recommendation, if they mention the matter at all.)1 For one thing, it refers simultaneously to "shell scripts" and "the language", which contradicts the facts that there are multiple shells in current use, each defining its own language.
|
Does bash (or zsh) "officially" mandate the use of tabs for indentation in scripts?
|
Here's one way with sed:
sed -E '/with_ajax_wait/,/end/{ # if line is in this range
H # append to hold space
/end/!d # if it doesn't match end, delete it
//{ # if it matches
s/.*// # empty the pattern space
x # exchange pattern space w. hold space
s/^(\n)( *)/\2it "waits" do\1\2/ # add first line + initial spacing
s/\n/& /g # re-indent all other lines
G # append hold space to pattern space
s/^(( *).*)/\1\2do/ # add the closing 'do' + initial spacing
}
}
' infile

so with an input like:
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
find(css_policy_form_stage3).click
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
something here
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
got some more stuff here to do
process it
done
end
end

the output is:
it "waits" do
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
do
find(css_policy_form_stage3).click
it "waits" do
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
do
something here
it "waits" do
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
got some more stuff here to do
process it
done
end
do
end

It should work with blocks of more than three lines, provided your with_ajax_wait blocks always end with end.
Replace the closing do with end if needed as your example is confusing... (you used end for the first block and do for the second) e.g. this time using BRE and [[:blank:]] instead of (space):
sed '/with_ajax_wait/,/end/{
/with_ajax_wait/{
G
s/\([[:blank:]]*\)\(with_ajax_wait\)\(\n\)/\1it "waits" do\3 \1\2/
p
d
}
//!{
/end/!{
s/^/ /
}
/end/{
G
s/\([[:blank:]]*\)\(end\)\(\n\)/ \1\2\3\1end/
}
}
}
' infile

This one processes each line in that range separately: the first and last lines in the range are re-indented and wrappers are added; the rest of the lines are just re-indented.
|
How can I add wrappers to a file based on a pattern?
For instance I have the following:
...
find(css_policy_form_stage3).click
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
end
it "Stage 3" do
select(coverage_type, from: css_coverage_type_everquote)
find(css_has_auto_insurance).click
...And I want to 'wrap' those "with_ajax_wait" blocks with it "waits" do ... end around them.
i.e. I want to get:
...
find(css_policy_form_stage3).click
it "waits" do
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
end
it "waits" do
with_ajax_wait
expect(css_coverage_type_everquote).to be_visible
end
do
end
it "Stage 3" do
select(coverage_type, from: css_coverage_type_everquote)
find(css_has_auto_insurance).click
...

Notes and Assumptions:

the block of code to indent is always 3 lines long (with..., expect... and end). It would be nice to allow for more than one inner code line, but that's not required for the simplest case.
the block itself should have an extra 2 space indent.
there are other ends that are not related (an example is shown just before "Stage 3")
it would be nice to be able to specify an inner pattern also, e.g. only these blocks that have expect starting the code line being indented.
I think awk is probably the tool, due to its ability to read consecutive lines, but I am struggling to know how to write this.

I'm imagining this is a generally useful Q&A, as adding wrappers within files is not uncommon.
Somewhat similar to my previous question:
How to use awk to indent a source file based on simple rules?
However in this case I am adding the wrapper plus the indent.
|
How to add "wrappers" around methods in source code based on a pattern, using sed, awk, grep and friends
|
Some vim syntaxes set certain settings when the file is opened. As you've found, you can get around this by using an autocmd to set the setting after the syntax has finished.
To get the autocmd to apply on all file types, use a *. For example:
autocmd FileType * set noexpandtab
|
I'm using only TAB for indent, so I configure Vim for using only them:
set autoindent
set noexpandtab
set tabstop=4
set shiftwidth=4But some files (.py) still using spaces. I've search for it and found:
filetype plugin indent onBut this has not help, and I've try:
au FileType python setlocal noexpandtabBut this has help only for python. So how to apply noexpandtab for all file types?
|
(Vim) How to use TAB for indentation in all file types?
|
I added the following to ~/.emacs:
(setq-default indent-tabs-mode t)
(setq backward-delete-char-untabify-method nil)
(setq indent-tabs-mode t)

(defun my-insert-tab-char ()
  "Insert a tab char. (ASCII 9, \t)"
  (interactive)
  (insert "\t"))
(global-set-key (kbd "TAB") 'my-insert-tab-char) ; same as Ctrl+i
|
How can I configure ~/.emacs so that I indent how nano does by default?

Uses a tab character instead of 5 spaces
I can add as many tabs to a line as I please
|
How to make 'emacs' indent with tabs exactly how 'nano' does...?
|
Looking at man indent I see using -brf will put braces on the function definition line. If you want it on the if-line as well, you'll need -br.
If your PAGER environment variable is less, you can search through man indent with / and the text. So if you do man indent followed by /braces<ENTER>, you'll be able to hop between matches that are informative to you by pressing n repeatedly.
Edit to make my comment below clearer, this is what I see in man indent
The `-brf´ option formats braces like this:

    int one(void) {
        return 1;
    };

The `-blf´ option formats them like this:

    int one(void)
    {
        return 1;
    };
|
I have several empty inline function definitions in C++ like so:
class C
{
void foo(){}
void bar(){}
};

now if I run indent -st -i4 -nut test.cc in order to just fix the indentation I get
class C
{
void foo ()
{
}
void bar ()
{
}
};

But I just want to fix the indentation without moving curly braces around!
How can I achieve that?
|
How do I keep 'indent' from moving curly braces to the next line?
|
You'll want to add the following to your .emacs file:
(add-hook 'python-mode-hook
          (lambda () (setq indent-tabs-mode t)))
|
I am using Emacs for Python developement. My team is using Tab indentation, so I must do the same. The problem is that I can't figure out how to make python-mode use tabs instead of spaces. I want Emacs to automatically indent my line to the correct level when I press Ctrl-j.
|
Use tabs for indentation in Python mode
|
The ={motion} operator can be defined by a number of settings ('equalprg', 'indentexpr', 'lisp'), but when all those are unset, it falls back to using C indenting. This is what is happening here.
C indenting is meant for the C language, and mostly takes its cues on the C curly braces { ... } and identifiers such as if, else, while, etc.
It turns out a lot of this is quite familiar to bash (and many other languages), so this works well a lot of the time.
In C though, parentheses are used to enclose logical expressions, in variable assignments or if or while statements. Vim wants to format those (so it wants to keep track of the sets of matching parens), but it wants to put some limits into how deep it looks.
As, in C, parens are used on expressions and those are typically short, the default limit for tracking them is 20 lines.
The 'cinoptions' setting can control a lot of C indenting, and it turns out it has an option to control just that. The )N option can be used to tweak the line limit for parens expressions.
For instance, to raise it to 100 lines:
:set cinoptions=)100

(Or to reduce it to 10, use :set cinoptions=)10.)
This can explain what is going on and it's possibly a quick hack that can be made into a usable workaround... But the proper solution here is to set 'indentexpr' appropriately for the language you're writing. (Remember, C indenting only kicks in when 'indentexpr' is unset.)
Vim actually ships a plug-in to indent shell scripts, perhaps you just don't have it enabled. Make sure you have this command in your .vimrc:
filetype indent on

And then make sure your shell script is being recognized as type sh:
:set filetype?
filetype=sh

If it isn't, set it (you might need to dig into why that's not happening):
:setf shYou can double check that 'indentexpr' has been set:
:set indentexpr?
indentexpr=GetShIndent()With those settings on, = will work as you expect on a shell script.
|
Given this code:
#!/bin/bash

_DATABASES=(
"secretX"
"secretmin"
"secretcopyijui"
"secretcroma"
"secretdemo"
"secretdicopy"
"secretflashcolo"
"secretmdat"
"secretneton"
"secretprintshar"
"secretrealjet"
"secretsolumax"
"secretunicopia"
"secretworddigit"
"secretducao"
"secrette"
"secrette_app"
"secretanopecanh"
"secretx_ead"
"secretx_site"
"secretdroppy"
"secret"
)
When I do gg=G in Vim, the code becomes this:
#!/bin/bash

_DATABASES=(
"secretX"
"secretmin"
"secretcopyijui"
"secretcroma"
"secretdemo"
"secretdicopy"
"secretflashcolo"
"secretmdat"
"secretneton"
"secretprintshar"
"secretrealjet"
"secretsolumax"
"secretunicopia"
"secretworddigit"
"secretducao"
"secrette"
"secrette_app"
"secretanopecanh"
"secretx_ead"
"secretx_site"
"secretdroppy"
"secret"
)
Why?
With smaller arrays everything works gracefully, but when it's an array with more than 20 elements, this happens...
Tested with other languages (JS, C++, PHP); no similar behaviour happened.
Info:
Vim 7.4.52
No .vimrc
|
Vim autoindent stops indenting multiline array in bash after 20 lines
|
To untabify the whole buffer upon opening a file that uses web-mode, you could add something like this to your init file:
(add-hook 'web-mode-hook
(lambda () (untabify (point-min) (point-max))))
This assumes that web-mode is the name of the mode you want this setting to apply to; adjust to taste.
|
I don't want to use tabs for indent, so I add (setq-default indent-tabs-mode nil) in my emacs init file.
With the setting indents are created by spaces in web-mode, but it doesn't change already existing tabs to spaces.
Is there a config like overwrite-tab-indent-by-space-indent?
Or must I replace tabs with spaces by a command every time I encounter tab-indented HTML?
|
How to replace tab's indent by space's indent with web-mode in Emacs
|
The number of lines to scan to find the indentation of the corresponding \begin{...} is limited, but it can be controlled by the (unfortunately undocumented) global variable g:tex_max_scan_line, which defaults to 60.
See the variable definition in the indent/tex.vim shipped with the Vim runtime.
You can increase it to something more reasonable for your own LaTeX documents. For example, add this to your vimrc file:
let g:tex_max_scan_line = 400
This will increase the limit to 400 lines, which according to your post, should be enough. You will have a small performance hit from this change, but I'd expect it should be pretty acceptable.
|
I'm having some difficulty with the vim reindent files (with gg=G).
When I have a larger file (not that large, maybe less than 400 lines of code), Vim seems to have trouble indenting some lines correctly, since the line on which the indentation of a later line depends is many lines above (I assume so, because I tried it with smaller blocks and then the indentation is done correctly).
Example:
\begin{itemize}
\begin{minipage} %indent +2 (after \begin{itemize})
\item %indent +1 (after \begin{minipage}) but -1 because it's \item
%some lines %indent +1
\end{minipage} %indent -1
\end{itemize} %indent -2 <--- here is the Problem, because here has to be -double indent
Now if in this case the lines at %some lines are lots of lines, then the \end{itemize} isn't shifted left by two indents (which would be correct) but by only one indent :/
The problem with this is that it messes up the whole indentation of all lines below.
The solution I'd like most, is if there would be something like the %stopzone comment for LaTeX to signal the syntax highlighting to stop the current (math)zone.
Maybe something like %indent -1 for move the line by one indent to the left.
Does anyone know how you would implement something like this or, even better, whether something like this already exists?
Or is there some other tool that can do this indentation better than Vim? It would be enough for me to get an approximate indentation from Vim and to use an external terminal utility to make the indentation really correct?
|
Vim re-indent file, hardcode some indents
|
You could do it with awk instead of grep if that's acceptable:
MyCmd | awk '/id:/ {print " " $0}'
or if you need grep, sed could help:
MyCmd | grep "id:" | sed -e 's/^/ /'
The awk version does its own pattern match for lines that contain "id:" and then will print the spaces before the line. The sed version does the grep as you already did it, but then replaces the start of each line (regex ^ matches the start of a line) with the spaces.
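If your post-processing ends up in a script anyway, the same filter is easy to express in Python too (a sketch of mine, not part of the original commands; the 4-space pad and the function name are arbitrary):

```python
import sys

def indent_matches(lines, pattern="id:", pad="    "):
    """Keep only lines containing `pattern`, prefixed with `pad`."""
    return [pad + line for line in lines if pattern in line]

# Filter stdin like the grep/awk pipelines above: MyCmd | python3 thisscript.py
def main():
    sys.stdout.writelines(indent_matches(sys.stdin.readlines()))

demo = indent_matches(["id: 1\n", "noise\n"])
print(demo)  # ['    id: 1\n']
```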
|
I need grep's output to be indented with tabs/spaces.
This is the plain, un-indented version: MyCmd | grep "id:"
I tried this without success:
MyCmd | grep "id:" | echo " "
|
How to indent grep's output? [duplicate]
|
This will give you the expected result.
File.txt:
self.colorOfBackground =? colorOfBackground
self.colorOfLineForTime =? colorOfLineForTime
self.marginOnBottom =? marginOnBottom
self.marginOnTop =? marginOnTop
When the below command is used:
sed 's/^[[:blank:]]*//' File.txt | column -t -s " "
The sed 's/^[[:blank:]]*//' part removes leading spaces; there is a Stack Overflow question where what this command does is explained in detail with an example.
Syntax: column -t [-s separator] [filename] -> column -t -s " "
-t : parameter to display the content in tabular format
-s : to separate the content based on a specific delimiter
Output of command:
self.colorOfBackground   =?  colorOfBackground
self.colorOfLineForTime  =?  colorOfLineForTime
self.marginOnBottom      =?  marginOnBottom
self.marginOnTop         =?  marginOnTop
Make sure, before you use the above command, that your data is aligned to the left side of the file; to do that I have used: sed 's/^[[:blank:]]*//'
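If you'd rather do this in a small script than with sed | column, here is a Python sketch of the same idea (my own illustration): split each line on whitespace, compute per-column widths, and re-join padded. Unlike column -t it separates columns with a single space.

```python
def align(text):
    """Left-align whitespace-separated columns, one space between them."""
    rows = [line.split() for line in text.splitlines()]
    widths = {}
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths.get(i, 0), len(cell))
    return "\n".join(
        " ".join(cell.ljust(widths[i]) for i, cell in enumerate(row)).rstrip()
        for row in rows
    )

sample = """self.colorOfBackground =? colorOfBackground
self.colorOfLineForTime =? colorOfLineForTime"""
print(align(sample))
```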
|
Often I have code that I want to align based on similar structure of lines, not just the left-side auto indent. Is there a script out there that can do something like this? Here is an example of what I want to do. Given:
self.colorOfBackground =? colorOfBackground
self.colorOfLineForTime =? colorOfLineForTime
self.marginOnBottom =? marginOnBottom
self.marginOnTop =? marginOnTop
...
I want to run a script and align each "column" on a tab so that they are aligned and easier to visually parse:
self.colorOfBackground   =?  colorOfBackground
self.colorOfLineForTime  =?  colorOfLineForTime
self.marginOnBottom      =?  marginOnBottom
self.marginOnTop         =?  marginOnTop
...
I am thinking that a Perl or Python or AWK or some other scripting language could do this, but alas I know none of these. Till now I have been using Vim and its regex-based substitution capabilities, but I still spend most of the time manually spacing out the columns.
|
Script for formatting code into columns
|
The classic Unix tool for this job is indent (e.g., GNU
indent). Called in K&R mode, it
will indent your example code as you asked (assuming you actually want
puts indented):
$ indent -kr <sample.c
int main()
{
    puts("Hello world");
}
A more modern solution may be clang-format
(http://clang.llvm.org/docs/ClangFormat.html), which can be configured
in many ways according to a style file.
|
I need a way to auto-indent blocks of a C source file from within the terminal, according to the usual norms.
Before:
int main() {
puts("Hello world");
}
After:
int main()
{
    puts("Hello world");
}
|
Command that indents lines of a C source file
|
From the main window, Settings -> Configure Kate. In the sidebar, go to Editing and there go to the tab Indentation.
Under Indentation Actions select Increase indentation level if in leading blank space (this is the default action).
|
I tried to change all the indentation settings to fix another problem yesterday, but now I have the following problem and I don't know which setting causes it.
If I select lines and type Tab, I would expect the selected lines to be indented. Instead, the selection is replaced with a tabulation. For example, I start with:
aa
bb
ccThen I select a part (what's between [ and ] is the selected part):
aa
b[b
c]cThen I press Tab (---> represents a tabulation), I expect this:
aa
--->bb
--->ccBut I get this instead:
aa
b--->cHow can I revert this behaviour? I'm using Kate 21.12.3.
|
Kate replaces text instead of indenting when typing Tab on selected lines
|
ioctl tends to go hand-in-hand with a /dev entry; your typical code would do
fd=open("/dev/mydevice",O_RDWR);
ioctl(fd,.....);
This is perfectly standard Unix behaviour. Inside the kernel driver you can put access controls (eg only root can do some things, or require a specific capability for more fine grained access) which makes it pretty flexible and powerful.
Of course this means that devices can expose a lot more than just block/character read-write activity; many things can be done via ioctl calls. Not so easy to use from shell scripts, but pretty easy from C or perl or python or similar.
sysfs entries are another way of interacting with drivers. Typically each type of command would have a different entry, so it can be complicated to write the driver but it makes it very easy to access via userspace; simple shell scripts can manipulate lots of stuff, but may not be very efficient
netlink is primarily focused (I think!) on network data transfers, but it could be used for other stuff. It's really good for larger volumes of data transfer and is meant to be a successor to ioctl in some cases.
All the options are good; your use case may better determine which type of interface to expose from your driver.
|
I'm trying to clarify which is the most useful (in terms of functionality) method of interacting with devices in Linux. As I understand it, device files expose only part of the functionality (address blocks in block devices, streams in character devices, etc...). ioctl(2) seems to be most commonly used, yet some people say it's not safe, and so on.
Some good articles or other relevant pointers would be welcome.
|
Usage difference between device files, ioctl, sysfs, netlink
|
When a userland process is opening a serial device like /dev/ttyS0 or /dev/ttyACM0, linux will raise the DTR/RTS lines by default, and will drop them when closing it.
It does that by calling a dtr_rts callback defined by the driver.
Unfortunately, there isn't yet any sysctl or similar which allows to disable this annoying behavior (of very little use nowadays), so the only thing that works is to remove that callback from the driver's tty_port_operations structure, and recompile the driver module.
You can do that for the cdc-acm driver by commenting out this line:
--- drivers/usb/class/cdc-acm.c~
+++ drivers/usb/class/cdc-acm.c
@@ -1063,7 +1063,7 @@
 }

 static const struct tty_port_operations acm_port_ops = {
-	.dtr_rts = acm_port_dtr_rts,
+	/* .dtr_rts = acm_port_dtr_rts, */
 	.shutdown = acm_port_shutdown,
 	.activate = acm_port_activate,
 	.destruct = acm_port_destruct,
This will not prevent you from using the DTR/RTS lines via serial ioctls like TIOCMSET, TIOCMBIC, TIOCMBIS, which will be handled by the acm_tty_tiocmset(), etc callbacks from the acm_ops structure, as usual.
Similar hacks could be used with other drivers; I personally have used this with the PL2303 usb -> serial driver.
[The diff is informative; it will not apply directly because this site mangles tabs and whitespaces]
|
I have an Arduino Uno attached over USB, using the cdc_acm driver. It is available at /dev/ttyACM0.
The convention for the Arduino's serial interface is for the DTR signal to be used for a reset signal—when using the integrated serial-to-USB adapter, the DTR/RTS/DSR/CTS signal; or, when using an RS-232 cable, pins 4 or 5 (and possibly 6 or 8) are wired to the RESET pin.
This reset avenue has the important advantage of being, if not truly out-of-band, at least very near-failsafe (due to being implemented via the always-out-of-band serial controller in conjunction with the not-normally-user-controllable watchdog circuit), and while it can be physically disabled (via wiring either a capacitor or a resistor, depending on the model, to the RESET pin), to do so completely ruins this important killswitch and all associated utility.
Unfortunately, it seems that, currently, Linux absolutely always sends this signal when any program attaches to an ACM device for any reason, and (unlike Windows,) provides no even-vaguely-known-reliable way to prevent this.
(Currently both -hupcl, "send a hangup signal when the last process closes the tty" and -clocal, "disable modem control signals" do not prevent this signal from being sent every time the device is opened.)
tl;dr: What do I need to do to access /dev/ttyACM0 without sending it a DTR/RTS/DSR/CTS signal (short of blocking the signal on the hardware level)?
|
How to prevent DTR on open for cdc_acm?
|
I experienced the same issue when writing a Rust program that spawns a tunctl process for creating and managing TUN/TAP interfaces.
For instance:
let tunctl_status = Command::new("tunctl")
.args(&["-u", "user", "-t", "tap0"])
.stdout(Stdio::null())
    .status()?;
failed with:
$ ./target/debug/nio
TUNSETIFF: Operation not permitted
tunctl failed to create tap network device.
even though the NET_ADMIN file capability was set:
$ sudo setcap cap_net_admin=+ep ./target/debug/nio
$ getcap ./target/debug/nio
./target/debug/nio cap_net_admin=ep
The manual states:

Because inheritable capabilities are not generally preserved across execve(2) when running as a non-root user, applications that wish to run helper programs with elevated capabilities should consider using ambient capabilities, described below.

To cover the case of execve() system calls, I used ambient capabilities.

Ambient (since Linux 4.3)
This is a set of capabilities that are preserved across an execve(2) of a program that is not privileged. The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable.

Example solution: For convenience, I use the caps-rs library.
// Check if `NET_ADMIN` is in permitted set.
let perm_net_admin = caps::has_cap(None, CapSet::Permitted, Capability::CAP_NET_ADMIN);
match perm_net_admin {
Ok(is_in_perm) => {
if !is_in_perm {
eprintln!("Error: The capability 'NET_ADMIN' is not in the permitted set!");
std::process::exit(1)
}
}
Err(e) => {
eprintln!("Error: {:?}", e);
std::process::exit(1)
}
}
// Note: The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable.
caps::raise(
None,
caps::CapSet::Inheritable,
caps::Capability::CAP_NET_ADMIN,
)
.unwrap_or_else(fail_due_to_caps_err);

caps::raise(None, caps::CapSet::Ambient, caps::Capability::CAP_NET_ADMIN)
    .unwrap_or_else(fail_due_to_caps_err);
Finally, setting the NET_ADMIN file capability suffices:
$ sudo setcap cap_net_admin=+ep ./target/debug/nio
|
I'm trying to write a tun/tap program in Rust. Since I don't want it to run as root I've added CAP_NET_ADMIN to the binary's capabilities:
$ sudo setcap cap_net_admin=eip target/release/tunnel
$ getcap target/release/tunnel
target/release/tunnel = cap_net_admin+eip
However, this is not working. Everything I've read says that this is the only capability required to create tuns, but the program gets an EPERM on the ioctl. In strace, I see this error:
openat(AT_FDCWD, "/dev/net/tun", O_RDWR|O_CLOEXEC) = 3
fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
ioctl(3, TUNSETIFF, 0x7ffcdac7c7c0) = -1 EPERM (Operation not permitted)
I've verified that the binary runs successfully with full root permissions, but I don't want this to require sudo to run. Why is CAP_NET_ADMIN not sufficient here?
For reference, I'm on Linux version 4.15.0-45. There are only a few ways I can see that this ioctl can return EPERM in the kernel (https://elixir.bootlin.com/linux/v4.15/source/drivers/net/tun.c#L2194) and at least one of them seems to be satisfied. I'm not sure how to probe the others:
if (!capable(CAP_NET_ADMIN))
return -EPERM;
...
if (tun_not_capable(tun))
return -EPERM;
...
if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
return -EPERM;
|
Why is CAP_NET_ADMIN insufficient permissions for ioctl(TUNSETIFF)?
|
The differences are somewhat subtle.
Reflink deletes the duplicate file and creates a new file in its place which is a clone of the original file. The metadata of the duplicate is lost, although rmlint does its best to preserve the metadata via some trickery with touch -mr.
Clone uses the BTRFS_IOC_FILE_EXTENT_SAME ioctl (or, in the latest version, the FIDEDUPERANGE ioctl) which asks the kernel to check if the files are identical, if so then make them share the same data extents. They keep their original metadata. It's arguably safer than reflink because it's done atomically by the kernel, and because it checks that the files are still identical.
|
I was reading the rmlint manual, and one of the duplicate handlers are clone and reflink:· clone: btrfs only. Try to clone both files with the BTRFS_IOC_FILE_EXTENT_SAME ioctl(3p). This will physically delete duplicate extents. Needs at least kernel 4.2.
· reflink: Try to reflink the duplicate file to the original. See also --reflink in man 1 cp. Fails if the filesystem does not support it.
What exactly does this clone do, and how is it different from a reflink? What does the BTRFS_IOC_FILE_EXTENT_SAME ioctl do?
|
What does a rmlint's "clone" for btrfs do?
|
According to the launchpad thread you linked to, it is a cosmetic error caused by os-prober not properly ignoring ZFS-managed drives, and if you're not dual-booting you can safely make the message go away with apt purge os-prober. See also here.
|
I ran into the error device-mapper: reload ioctl on osprober-linux-nvme0n1p7 failed: Device or resource busy while compiling the kernel in Ubuntu Studio. I use ZFS for my main drive.
Apparently, this is a bug: [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX failed: Device or resource busy" against devices owned by ZFS.
How can I work around it?
|
device-mapper: reload ioctl on osprober-linux-nvme0n1p7 failed: Device or resource busy
|
A definitive explanation can at least be found in the kernel sources, more specifically drivers/input/evdev.c:
static long evdev_do_ioctl(struct file *file, unsigned int cmd,
void __user *p, int compat_mode)
{
[…]
switch (cmd) {
[…]
case EVIOCGRAB:
if (p)
return evdev_grab(evdev, client);
else
return evdev_ungrab(evdev, client);
[…]
}
[…]
}As I understand, everything that evaluates to »false« (0) will lead to evdev_ungrab ((void*)0, 0, …), everything that's »true« (not 0) will cause an evdev_grab ((void*)1, 1, 0xDEADBEEF…).
One thing worth mentioning is that your first example,
int grab = 1;
ioctl(fd, EVIOCGRAB, &grab);
..
ioctl(fd, EVIOCGRAB, NULL); only works unintentionally. It's not the value inside of grab, but the fact that &grab is non-zero (you could have guessed this, since the counter-case isn't grab = 0; ioctl(…, &grab); but ioctl(…, NULL);. Funny. :)
|
I want to use the ioctl EVIOCGRAB function in a C based program, and from googling around I have found various bits of example source code that use the function, but I am struggling to find explicit documentation that correctly describes how to correctly use it.
I see that from ioctl(2), ioctl function is defined as
int ioctl(int d, unsigned long request, …);
And that:
The third argument is an untyped pointer to memory. It's traditionally char
*argp (from the days before void * was valid C), and will be so named
for this discussion.And I hoped to find EVIOCGRAB listed in ioctl_list(2), but it wasn't.
So I don't know what the third argument should be for the EVIOCGRAB function. After seeing various bits of example code all I can do is assume that a non-zero value grabs the device and that a zero value releases it.
Which I got from random code examples like
int grab = 1;
ioctl(fd, EVIOCGRAB, &grab);
..
ioctl(fd, EVIOCGRAB, NULL);
or
ioctl(fd, EVIOCGRAB, (void*)1);
..
ioctl(fd, EVIOCGRAB, (void*)0);
or
ioctl(fd, EVIOCGRAB, 1);
..
ioctl(fd, EVIOCGRAB, 0);
(Which seems to smell a bit of cargo cult programming.)
So where can I find a definitive explanation of the EVIOCGRAB control parameter?
|
Where do I find ioctl EVIOCGRAB documented?
|
1. GPIO_V2_LINE_SET_VALUES_IOCTL seems safe enough; it matches the expected use of ioctl, “manipulat[ing] the underlying device parameters of special files”. It is implemented in linereq_set_values, which acquires a lock, but I don’t think that lock can block for an indefinite amount of time (its users are all non-blocking).
2. Theoretically, one might expect ioctls to be non-blocking, since they are mostly intended to configure drivers. However, some ioctls do much more than that: for example, FICLONERANGE and FICLONE involve actual I/O, and worse than that, they are supported by some networked file systems such as NFS v4.2, so they could conceivably block indefinitely.
3. See point 1 above.
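For the event-loop part of the question, the general shape of waiting on a file descriptor with epoll looks like the sketch below (in Python for brevity; since no GPIO device is available here, a pipe stands in for the line's fd — the epoll mechanics are the same):

```python
import os
import select

r, w = os.pipe()              # the read end stands in for the line fd

ep = select.epoll()
ep.register(r, select.EPOLLIN)

os.write(w, b"edge")          # simulate the fd becoming readable

for fd, mask in ep.poll(timeout=1):
    if mask & select.EPOLLIN:
        print(os.read(fd, 16))  # b'edge'

ep.close()
os.close(r)
os.close(w)
```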
|
I am writing some code around libgpiod's interface. For example, I want to set a line to output high. Under the hood, libgpiod opens an fd provided by the kernel for the line, and then calls ioctl(fd, GPIO_V2_LINE_SET_VALUES_IOCTL, ...).
My questions are:
1. Is this particular ioctl() call (with the GPIO_V2... argument) theoretically (potentially) blocking in the same way that writing to an arbitrary file descriptor can be?
2. Are ioctl() calls in general theoretically blocking? For example, requesting the line in the first place also involves an ioctl() on a fd for the chip. What about I2C ioctl()s?
3. If it is blocking, is the underlying fd in the line struct (line->fd_handle->fd) the one I need to wait on in an event loop (eg. epoll() or an abstracted event library like libuv)?
I have tried to answer this question through research, but (a) searching for any combination of "ioctl" and "blocking" just gives results for setting a fd to be blocking or not and (b) it's not in the man pages or kernel docs that I can find.
|
Are ioctl calls blocking?
|
It seems script is the solution, as mentioned by A.B. With -e you even get the return code of the program. cat -vet shows more explicitly the carriage return ^M and newline $.
$ script -q -e out -c ./pusher.bin >/dev/null; echo $?
0
$ cat -vet out
Script started on Mon Dec 21 10:54:40 2020$
echo 'Catch me if you can'^M$
|
I'm messing with TIOCSTI which shoves data into the terminal's input buffer. I want to be able to capture this data before it arrives at the shell or redirects it to a file.
To better illustrate what I'm trying to do:
gcc -x c -o pusher.bin - <<PUSHER
#include <unistd.h>
#include <sys/ioctl.h>
#include <termios.h>

int main() {
char *c = "echo 'Catch me if you can'\n";
while(*c) ioctl(0, TIOCSTI, c++);
}
PUSHER
./pusher.bin
If running in my terminal, ./pusher.bin will inject echo 'Catch me if you can'\n in my tty which my shell would immediately execute. If I run setsid ./pusher.bin, echo won't be injected in my terminal but I also won't be able to capture it.
I want to wrap ./pusher.bin with something that allows me to inspect what pusher would have injected in my tty's input buffer if it was run bare.
Clarification: I'm aware that injected input can be captured after it arrives at my shell's stdin. This approach while effective at capturing the injected input will also capture normal user input. Furthermore, this approach would not work if stdin is closed or if the process is not attached to a tty. These downsides alone make capturing stdin unviable as a general solution.
|
How can I run a program in its own tty?
|
No. Terminal applications read keyboard input from the device file (on Linux, something like /dev/ttyS0 or /dev/ttyUSB0... for a serial device, /dev/pts/0 for a pseudo-terminal device) corresponding to the terminal with the keyboard you're typing on.
That device doesn't have to be the controlling terminal of the process (or any process for that matters).
You can do cat /dev/pts/x provided you have read permission to that device file, and that would read what's being typed on the terminal (if any) at the other end.
Actually, if it is the controlling terminal of the process and the process is not in the foreground process group of the terminal, the process would typically be suspended if it attempted to read from it (and if it was in the foreground process group, it would receive a SIGINT/SIGTSTP/SIGQUIT if you sent a ^C/^Z/^\ regardless of whether the process is reading from the terminal device or not). Those things would not happen if the terminal device was not the controlling terminal of the process (if the process was part of a different session). That's what the controlling terminal is about. That is intended for the job control mechanism as implemented by interactive shells. Besides those SIGTTIN/SIGTTOU and SIGINT/SIGTSTP/SIGQUIT signals, the controlling terminal is involved in the delivery of SIGHUP upon terminal hang-up; it's also the tty device that /dev/tty redirects to.
In any case, that's only for terminal input: real as in a terminal device connected over a serial cable, emulated like X11 terminal emulators such as xterm that make use of pseudo-terminal devices, or emulated by the kernel like the virtual terminals on Linux that interact with processes with /dev/tty<x> (and support more than the standard terminal interface).
Applications like the X server typically get keyboard input from the keyboard drivers. On Linux using common input abstraction layers. The X server, in turn provides an event mechanism to communicate keyboard events to applications connecting to it. For instance, xterm would receive X11 keyboard events which it translates to writing characters to the master side of a pseudo-terminal device, which translates to processes running "inside" xterm reading the corresponding characters when they read from the corresponding pseudo-terminal slave device (/dev/pts/x).
Now, there's no such thing as a terminal application. What we call terminal application above are applications that are typically used in a terminal, that are expected to be displayed in a terminal and take input from a terminal like vi, and interactive shell or less. But any application can be controlled by a terminal, and any application that reads or writes files or their stdin/stdout/stderr can be made to perform I/O to a terminal device.
For instance, if you run firefox, an application that connects to the X server for user I/O, from within a shell running in an xterm, firefox will inherit the controlling terminal from its shell parent. ^C in the terminal would kill it if it was started in foreground by the shell. It will also have its file descriptors 0, 1 and 2 (stdin, stdout and stderr) open on that /dev/pts/<x> file (again as inherited from its shell parent). And firefox may very well end up writing on the fd 2 (stderr) for some kind of errors (and if it was put in background and the terminal device was configured with stty tostop, it would then receive a SIGTTOU and be suspended).
If instead, firefox is started by your X session manager or Windows manager (when you click on some firefox icon on some menu), it will likely not get any controlling terminal and will have no file descriptor connected to any (you'll see that ps -fp <firefox-pid> shows ? as the tty and lsof -p <firefox-pid> shows no file descriptor on /dev/pts/* or /dev/tty*). If however you browsed to file:///dev/pts/<x>, firefox could still do some I/O to a terminal device. And if it opened that file without the O_NOCTTY flag and if it happened to be a session leader and if that /dev/pts/<x> didn't already have a session attached to it, that device would end up being the controlling terminal of that firefox process.
More reading at:How do keyboard input and text output work?
What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?
Edit
Your edit clarifies the question a bit and adds some context.
The above should make it clear that a process can read input from any terminal device they like (except the controlling terminal if the process is not in its foreground process group), but that's not really what is of interest to you here.
Your question would be: for an interactive terminal application, where to get the user input from when stdin no longer points to the terminal.
Applications like tr get their input from stdin and write on stdout. When stdin/stdout is a tty device with a terminal at the other end, they happen to be interactive in that they read and write data from/to the user.
Some terminal text editors (like ed/ex and even some vi implementations) continue reading their input from stdin when stdin is no longer a terminal so they can be scriptable.
A pager though is a typical application that still needs to interact with the user even when their input is not a terminal (at least when their output still goes to the terminal). So they need another channel to the terminal device to take user input. And the question is: which terminal device should they use?
Yes, it should be the controlling terminal. As that's typically what the controlling terminal is meant to be. That's the device that would send the pager a SIGINT/SIGTSTP when you press Ctrl-C/Z, so it makes sense for the pager to read other key strokes from that same terminal.
The typical way to get a file descriptor on the controlling terminal is to open /dev/tty that redirects there (note that it works even if the process has changed euid so that it doesn't have read permission to the original device. It's a lot better than trying to find a path to the original device (which can't be done portably anyway)).
Some pagers like less or most open /dev/tty even if stdin is a tty device (after all, one could do less < /dev/ttyS0 from within a terminal emulator to see what's being sent over serial).
If opening /dev/tty fails, that's typically because you don't have a controlling terminal. One might argue that it's because you've been explicitly detached from a terminal so shouldn't be attempting to do user interaction, but there are potential (unusual) situations where you have no controlling terminal device but your stdin/stdout is still a tty device and you'd still want to do user interaction (like an emergency shell in an initrd).
So you could fall back to get user interaction from stdin if it's a terminal.
One could argue that you'd want to check that stdout is a terminal device and that it points to the same terminal device as the controlling one (to account for things that do man -l /dev/stdin < /dev/ttyS0 > /dev/ttyS1 for instance where you don't want the pager spawned by man to do user interaction) but that's probably not worth the bother especially considering that it's not easy to do portably. That could also potentially break other weird use cases that expect the pager to be interactive as long as stdout is a terminal device.
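For the Python pager from the question, the fallback logic described above could look something like this sketch (open_user_tty is a made-up helper name):

```python
import sys

def open_user_tty():
    """Prefer the controlling terminal for user interaction; fall back
    to stdin if it is a terminal; otherwise give up on interaction."""
    try:
        return open("/dev/tty", "rb", buffering=0)
    except OSError:
        # No controlling terminal; stdin may still be a usable tty.
        if sys.stdin.isatty():
            return sys.stdin.buffer
        return None  # no terminal available: don't attempt interaction

tty = open_user_tty()
print("interactive" if tty is not None else "not interactive")
```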
|
Am I right that all input typed from the keyboard goes through a controlling terminal? That means that if a program is run without a controlling terminal, it won't be able to receive any user input. Is that right for every kind of program in Linux?
UPDATE #1: To clarify the question, my pager module for Python crashes when stdin is redirected:
$ ./pager.py < README.rst
...
File "pager.py", line 566, in <module>
page(sys.stdin)
File "pager.py", line 375, in page
if pagecallback(pagenum) == False:
File "pager.py", line 319, in prompt
if getch() in [ESC_, CTRL_C_, 'q', 'Q']:
File "pager.py", line 222, in _getch_unix
old_settings = termios.tcgetattr(fd)
termios.error: (25, 'Inappropriate ioctl for device')
This is because I try to get the descriptor to set up keyboard input as fd = sys.stdin.fileno(). When stdin is redirected, its file descriptor is no longer associated with any keyboard input, so the attempt to set it up fails with an input-output control error.
I was told to get this controlling terminal instead, but I had no idea where it comes from. I understood that it is some kind of channel to send signals from user to running processes, but at the same time it is possible to run processes without it.
So the question is - should I always read my keyboard input from controlling terminal? And what happens if the pager process is run without it? Will keyboard input still matter to user? Should I care to get it from some other source?
|
Does keyboard input always go through a controlling terminal?
|
This is embarrassing: I have reviewed my code and found that I did fail to properly initialize a struct, just not in quite the way I expected. I also didn't provide the full context needed to solve the problem, since I thought I had ruled out the possibility of junk values left over from before allocation.
The sample code I posted is part of a function which fills out a struct I've defined containing the relevant fields of ethtool_ts_info.
typedef struct {
unsigned int soTimestamping;
unsigned int txTypes;
unsigned int rxFilters;
} tsCapsFilters_t;

The user of the function initializes this struct themselves and the function merely populates it. So expected usage looks like this:
tsCapsFilters_t tsInfo; // struct full of junk
if(getTsInfo(nameOfNetIf, &tsInfo, errMsgBuf, ERR_BUF_SZ) < 0) {
// handle errors in here
}
printf("%x\n", tsInfo.soTimestamping); // struct filled out by getTsInfoHowever, the mistake I made was that I forgot to copy tx_types and rx_filters from the ethtool_ts_info struct (internal to my function) to the tsCapsFilters_t struct (given to the user). So my program got all the fields just fine, but it only returned the so_timestamping capabilities when it was supposed to return all 3 values.
printf("%x\n", tsInfo.soTimestamping); // printed the expected values
printf("%x\n", tsInfo.txTypes); // printed junk
printf("%x\n", tsInfo.rxFilters); // printed junk

Like I said, embarrassing. I'll award the bounty and be more careful with my structs in the future.
|
I am working on a C program which gets the timestamping information for a given network interface, like my own version of ethtool. My goal is to get the information printed by $ ethtool -T myNetIf. Something like:
Time stamping parameters for myNetIf:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)

Since I don't want to just scrape the command line output, I have figured out that I can use an ioctl call to query this information from ethtool and get my flags back as an unsigned integer.
struct ifreq ifr;
memset(&ifr, 0, sizeof(struct ifreq));
struct ethtool_ts_info etsi;
memset(&etsi, 0, sizeof(struct ethtool_ts_info));

etsi.cmd = ETHTOOL_GET_TS_INFO;
strncpy(ifr.ifr_name, dev, sizeof(ifr.ifr_name)); // dev is read from stdin
ifr.ifr_data = (void *)&etsi;

ioctl(sock, SIOCETHTOOL, &ifr); // sock is an AF_INET socket

The ethtool_ts_info struct which (I believe) contains the data I want is defined here. Specifically, I grab the so_timestamping, tx_types, and rx_filters fields.
Using the same example interface, the value of so_timestamping is 0x5f, or 0101 1111. A look at net_tstamp.h confirms that these are the expected flags.
My issue, however, is in interpreting the values of tx_types and rx_filters. I assumed that the value of tx_types would be something like 0x3 (0011, where the 3rd bit is HWTSTAMP_TX_ON and the 4th is HWTSTAMP_TX_OFF). Alternatively, since the possible transmit modes are defined in an enum with 4 values, perhaps the result would be 0100, where each int enum value is 2 bits.
This is not the case. The actual value of tx_types is 0x7fdd. How on Earth am I supposed to get "HW_TX_OFF and ON" from 0x7fdd? I find the value of rx_filters even more confusing. What is 0x664758eb supposed to mean?
Besides the kernel source code itself, I haven't been able to find much helpful information on this. I think I've done everything right, I just need some help understanding my results.
|
How does the Linux Kernel store hardware TX and RX filter modes?
|
OK, after a hint from one of the maintainers of libcdio, I found out that the version I installed was out of date and contained a bug based on improper use of O_RDWR vs. O_RDONLY. After the update, suddenly everything works fine. Nevertheless thank you for your hints!
|
I've run into an error stemming from lacking permissions when using the CDIO library to issue an eject command to my USB CD-ROM drive. I always get an error message like this:
INFO: ioctl CDROM_SEND_PACKET for command PREVENT ALLOW MEDIUM REMOVAL (0x1e) failed: Operation not permitted

The ioctl call is part of the cdda-player app I call as follows:
cdda-player -ev /dev/sr0After taking a look into the sourcecode of libcdio, I found out that this line of code makes trouble:
int i_rc = ioctl (p_env->gen.fd, CDROM_SEND_PACKET, &cgc);

When I run the code as root (using sudo), everything works fine. Here are the permissions for my CD-ROM drive:
pi@autoradio:/import/valen/autoradio/libcdio-master $ ls -al /dev/sr0
brw-rw----+ 1 root cdrom 11, 0 Jul 5 22:42 /dev/sr0

pi@autoradio:/import/valen/autoradio/libcdio-master $ ls -al /dev/sg0
crw-rw----+ 1 root cdrom 21, 0 Jul 5 22:38 /dev/sg0

pi@autoradio:~ $ getfacl /dev/sr0
getfacl: Removing leading '/' from absolute path names
# file: dev/sr0
# owner: root
# group: cdrom
user::rw-
user:pi:rw-
group::rw-
mask::rw-
other::---

The user pi is part of the cdrom group. The standard eject utility does work, though.
Now: Which permissions do I have to set for the eject operation to work as an ordinary user? Thank you.
UPDATE: Here is my kernel version:
pi@autoradio:/import/valen/autoradio/libcdio-master $ uname -a
Linux autoradio 4.9.35-v7+ #1014 SMP Fri Jun 30 14:47:43 BST 2017 armv7l GNU/Linux
|
How do I set the permissions necessary to make the ioctl CDROM_SEND_PACKET command run?
|
Originally CD ROM drives (in the IDE era) had an analog audio connection to the motherboard. The SCSI commands PLAY, STOP, SCAN and their variants would then play audio CDs to this analog output just like a standalone CD player.
The CDROMPLAYMSF ioctl issues one of those SCSI commands, namely PLAY AUDIO MSF. MSF defines a position on the CD (in Minutes, Seconds, Frames).
Internal CD ROMs have long lost this feature, as do external USB CD ROMs (there's no analog audio connection to the motherboard). So your CD player rightfully ignores this command.
IIRC the libcdaudio library also has functions to read the digital data from the CD. You need to use those, and then pass on the data to Pulseaudio etc. to playback the CD.
You can also use ready-made command-line tools like mplayer cdda:// for that.
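The MSF addressing mentioned above is plain arithmetic over the CD-DA layout of 75 frames per second; a small sketch (the 150-frame offset is the standard two-second lead-in, so LBA 0 sits at MSF 00:02:00):

```python
FRAMES_PER_SECOND = 75  # CD-DA frames per second (Red Book)
LEADIN_FRAMES = 150     # MSF 00:02:00 corresponds to LBA 0

def lba_to_msf(lba):
    """Convert a logical block address to a (minute, second, frame) triple."""
    frames = lba + LEADIN_FRAMES
    minute, rest = divmod(frames, 60 * FRAMES_PER_SECOND)
    second, frame = divmod(rest, FRAMES_PER_SECOND)
    return minute, second, frame

print(lba_to_msf(0))     # (0, 2, 0)
print(lba_to_msf(4500))  # (1, 2, 0)
```

This is the position format that PLAY AUDIO MSF takes; the drive then plays from that point on its (now largely historical) analog output.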
|
I've got a USB 2.0 CD/DVD drive, which is (amongst other use cases) used to play music CDs. But: The drive seems to ignore CDROMPLAYMSF commands.
The host is a Raspberry Pi 3B with the current version of Raspbian. I'm using libcdaudio for audio CD playback, which in turn issues the necessary ioctl commands, including CDROMPLAYMSF.
UPDATE: Upon request, may I hereby give you the specs of my drive, as spit out by the cd-drive utility of cdio:
CD-ROM drive supports MMC 3

Drive: /dev/cdrom
Vendor : MATSHITA
Model : CD-RW CW-8124
Revision : DA0D

Hardware : CD-ROM or DVD
Can eject : Yes
Can close tray : Yes
Can disable manual eject : Yes
Can select juke-box disc : No

Can set drive speed : No
Can read multiple sessions (e.g. PhotoCD) : Yes
Can hard reset device : Yes

Reading....
Can read Mode 2 Form 1 : Yes
Can read Mode 2 Form 2 : Yes
Can read (S)VCD (i.e. Mode 2 Form 1/2) : Yes
Can read C2 Errors : Yes
Can read IRSC : Yes
Can read Media Channel Number (or UPC) : Yes
Can play audio : Yes
Can read CD-DA : Yes
Can read CD-R : Yes
Can read CD-RW : Yes
Can read DVD-ROM : Yes

Writing....
Can write CD-RW : Yes
Can write DVD-R : No
Can write DVD-RAM : No
Can write DVD-RW : No
Can write DVD+RW : No
|
What does the ioctl CDROMPLAYMSF command do exactly?
|
The ioctl system call takes a parameter list that varies a lot depending on the request. Many requests take structured data as input or produce structured data as output. This makes it awkward to use from the shell. There are of course commands that wrap around a specific ioctl request or a specific set — for example stty with terminal ioctl — but not generic ones. The tool you link to allows passing binary data in and out, but that data is awkward to manipulate from the shell.
I recommend writing your program in a more advanced scripting language such as Perl or Python. This makes deployment easier since people won't need an obscure tool: most Unix-like systems have Perl and nowadays most have Python as well.
Perl has a predefined ioctl function and provides access to at least some symbolic names for request values via sys/ioctl.ph. I don't know how you're supposed to list the available symbolic names. The extra input/output is available as a scalar (i.e. byte string).
Python has an ioctl function in the fcntl module. The termios module defines some symbolic names for terminal-related ioctls. The extra input/output can be passed as an integer or a byte string.
In Python, an alternative approach could be the ctypes module, which is a very nice way to call C functions from a high-level language. It's nice if the ioctl you want to call is defined as taking a struct as input or output, because ctypes lets you access structs the way a C compiler would lay them out, without needing to worry about data type sizes, padding, endianness, etc.
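As a concrete sketch of the Python route, here is fcntl.ioctl issuing FIONREAD (chosen only because it needs no special device: it reports how many bytes are waiting on a descriptor), with the in/out buffer packed via struct:

```python
import fcntl
import os
import struct
import termios

# FIONREAD: ask the kernel how many bytes are queued on a descriptor.
r, w = os.pipe()
os.write(w, b"hello")

buf = struct.pack("i", 0)                    # int-sized output buffer
buf = fcntl.ioctl(r, termios.FIONREAD, buf)  # the kernel fills it in
(available,) = struct.unpack("i", buf)
print(available)  # 5
```

The same pattern works for any ioctl whose argument is a flat struct: pack the input with struct.pack, pass it to fcntl.ioctl, and unpack the result.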
|
I try to make an ioctl() call from bash. This is very easy to do in C, so there are tools ( https://github.com/jerome-pouiller/ioctl ) which wrap this functionality.
But it would make the distribution of my script a lot harder, because I would have to distribute that tool along with it.
Is there any other tool that is already included in the Debian APT repositories that can do the same? So that I could just do a simple apt install from the script?
|
Any tool to do ioctl() from bash?
|
Older versions of btrfs (e.g., 4.15) had a 16 MiB limit per FIDEDUPERANGE call, and would silently cut oversized requests down to 16 MiB. I forget exactly when the change happened, but the current version of btrfs (i.e., 5.16) loops in 16 MiB chunks. I think linux (not btrfs, now) still silently cuts down requests over 1 GiB, though. If you expect to use FIDEDUPERANGE with older versions of btrfs, you should definitely respect the 16 MiB limit. Also, other filesystems might have a similar limit.
As for src_length = 0, you should really consult the documentation for the individual ioctls for instructions on how to use them. The man page for FIDEDUPERANGE you quoted correctly documents that src_length = 0 means to dedup nothing.
Regarding the VFS page you quoted, things are just complicated. remap_file_range() handles the functionality for multiple ioctls adapted from btrfs that were originally designed and implemented separately in btrfs. In the clone ioctls, src_length == 0 means clone to the end of the file. In the dedup ioctl, src_length == 0 means dedup nothing. I forget exactly when, but there was an effort to unify the clone and dedup functions. However, it's not very nice to change the ioctl interface of currently supported versions of btrfs. In version 5.16, there's a weird hack involving btrfs_remap_file_range_prep() that converts the len argument to remap_file_range() depending on whether the ioctl was a clone or dedup call. It's tempting to say the VFS documentation is wrong, since I don't think the btrfs behavior here has changed. However, I'm not sure whether other filesystems have implemented remap_file_range() with this meaning of len == 0, so it's just complicated.
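If you do want to respect a 16 MiB per-call limit on older kernels, the offset bookkeeping from the question's loop reduces to simple arithmetic; a hedged sketch (pure calculation, no ioctl issued):

```python
CHUNK = 16 * 1024 * 1024  # conservative per-call limit (16 MiB)

def dedupe_chunks(total_length, chunk=CHUNK):
    """Yield (offset, length) pairs covering total_length in chunk-sized calls."""
    offset = 0
    while offset < total_length:
        yield offset, min(chunk, total_length - offset)
        offset += chunk

# A 40 MiB file needs three calls: 16 + 16 + 8 MiB.
print(list(dedupe_chunks(40 * 1024 * 1024)))
# [(0, 16777216), (16777216, 16777216), (33554432, 8388608)]
```

Each (offset, length) pair would go into src_offset/src_length (and the matching dest_offset) of one FIDEDUPERANGE call.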
|
According to ioctl_fideduperange, "The maximum size of src_length is filesystem dependent and is typically 16 MiB."

However, I've been able to use src_length of > 1 GiB successfully with a single call to ioctl. Is that warning about 16 MiB just a complete exaggeration, at least for btrfs?
Also, according to the VFS documentation, "Implementations must handle callers passing in len == 0; this means “remap to the end of the source file”."

However, when I try setting src_length to 0, the ioctl call succeeds but without doing anything.
Am I misreading these two sentences, or does the btrfs implementation simply not conform (well) to the documentation? I'm testing on Linux Mint 20 with kernel 5.4.0-62-generic. I'm using filefrag -sv FILE1 FILE2 to check the block-level allocation of the files, to see if they're duplicated or not. I'm using the program below to deduplicate the files. The files in question are on a RAID-1 btrfs filesystem (on LUKS-encrypted partitions) created with sudo mkfs.btrfs -mraid1 -draid1 /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt.
Scenario:
$ cp -f file1 file2
$ filefrag -sv file1 file2 # see that files use different extents (are not deduplicated)
$ myprog file1 file2
$ filefrag -sv file1 file2 # see that files use the same extents (have been deduplicated)

Program to deduplicate two files:
// deduplicate srcfile and targetfile if contents are identical
// usage: myprog srcfile targetfile
// compile with: gcc myprog.c -o myprog

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char**argv)
{
    struct stat st;
    long size;
    __u64 buf[2048]; /* __u64 for proper field alignment */
    struct file_dedupe_range *range = (struct file_dedupe_range *)buf;

    memset(range, 0, sizeof(struct file_dedupe_range));
    memset(&range->info, 0, sizeof(struct file_dedupe_range_info));

    long srcfd = open(argv[1], O_RDONLY);
    if (srcfd < 0) { perror("open-src"); exit(1); }
    if (fstat(srcfd, &st) < 0) { perror("stat-src"); exit(1); }
    size = st.st_size;

    long tgtfd = open(argv[2], O_RDWR);
    if (tgtfd < 0) { perror("open-tgt"); exit(1); }
    if (fstat(tgtfd, &st) < 0) { perror("stat-tgt"); exit(1); }
    if (size != st.st_size) {
        fprintf(stderr, "SIZE DIFF\n");
        exit(1);
    }

    range->src_offset = 0;
    range->src_length = size;
    // range->src_length = 0; // I expected this to work
    range->dest_count = 1;
    range->info[0].dest_fd = tgtfd;
    range->info[0].dest_offset = 0;

    while (range->src_length > 0) {
        if (ioctl(srcfd, FIDEDUPERANGE, range) < 0) { perror("ioctl"); exit(1); }

        fprintf(stderr, "bytes_deduped: %llu\n", range->info[0].bytes_deduped);
        fprintf(stderr, "status: %d\n", range->info[0].status);
        if (range->info[0].status == FILE_DEDUPE_RANGE_DIFFERS) {
            fprintf(stderr, "DIFFERS\n");
            break;
        } else if (range->info[0].status == FILE_DEDUPE_RANGE_SAME) {
            fprintf(stderr, "SAME\n");
        } else {
            fprintf(stderr, "ERROR\n");
            break;
        }

        if (range->info[0].bytes_deduped >= range->src_length) { break; }
        range->src_length -= range->info[0].bytes_deduped;
        range->src_offset += range->info[0].bytes_deduped;
        range->info[0].dest_offset += range->info[0].bytes_deduped;
    }
    exit(0);
}
|
FIDEDUPERANGE ioctl doesn't behave as expected on btrfs
|
As rightly asserted by Tilman in comments, sysfs and ioctl both provide userland access to kernel data structures.
Since the kernel does not need system calls to access its own data, the sysfs tree is not built by resorting to ioctl calls, nor will any user action on its files translate into ioctl calls.

You write "… information is already available by simply reading files…" and this is, I believe, the answer to your final question: why can it appear simpler to resort to the sysfs interface?

First, because in front of a basic ASCII terminal running some shell, the sysfs tree gives access to (binary) kernel data via the most basic cat and echo commands.
Thanks to other basic shell commands (ls, cd) you can also, by following the symlinks, gain some deep understanding of the relationships between kernel objects.
On top of this the user benefits from some (at least minimal) control over the validity of the changes to be committed.

This indeed makes sysfs the right way to go when, at the console, you wish to tune your system, write scripts or rules, or comfortably debug some driver from userspace (the initial purpose of sysfs… just remember that before, /dev/mem was your only friend for that latter task).
However, there are cases where you just don't care about all these facilities, cases where accessing kernel objects via the sysfs interface would just constrain you to write (much) more code: when writing a C program.
Just imagine: do you really want to open some file, transcode your data, manage additional error conditions and deal with race conditions, when a simple ioctl system call is enough (provided you know what you are doing, of course)?
So there is the answer to your question: when should you prefer one way or the other? Simply because for you, here and now, achieving what you want to achieve will be much simpler using this rather than that.
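To make the console-friendly side concrete, here is a hedged Python sketch that slurps the small attribute files of a sysfs-style directory into a dict; it is pointed at a throwaway directory here, since the exact /sys paths vary between systems:

```python
import os
import tempfile

def read_attrs(directory):
    """Read every regular file in directory as a stripped string attribute."""
    attrs = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path) as f:
                attrs[name] = f.read().strip()
    return attrs

# Mimic a /sys/block/<dev> directory with a throwaway one.
d = tempfile.mkdtemp()
with open(os.path.join(d, "size"), "w") as f:
    f.write("1000215216\n")
print(read_attrs(d))  # {'size': '1000215216'}
```

Pointed at a real /sys/block/<dev> directory, the same few lines replace what would otherwise be device opens and ioctl plumbing in C.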
|
My assumption is that sysfs is built using ioctl queries, meaning all the information you would want (or at least most of it) is already available by simply reading files on sysfs. I notice some programs (e.g., hdparm) still use ioctl calls rather than simply hitting sysfs, and I'm curious if there's a reason for that. Is sysfs unreliable? If you're only interested in hardware info, is there a reason to use ioctl over sysfs?
|
Is there ever a reason to query ioctl for hardware info when we have sysfs?
|
In ${kernel_root}/fs/ioctl.c (in 4.13) there's:
SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd, unsigned long, arg)That SYSCALL_DEFINE3 is a macro that takes those parameters and expands it to the appropriate signature for the system call. That function is the logical entry point for the ioctl system call from user space. That function, in turn, looks up the struct fd corresponding to the given file descriptor and calls do_vfs_ioctl passing the struct file associated with the struct fd. The call will wind through the VFS layer before it reaches a driver, but that should give you a place to start looking.
|
The prototype of ioctl in linux driver modules is
int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg);

or

long ioctl(struct file *f, unsigned int cmd, unsigned long arg);

but inside sys/ioctl.h it is
int ioctl(int fd, int request, void *argp);

The first argument type is different. Is there any module between the ioctl-calling program and the driver that converts this argument (from file descriptor to file structure pointer)?
How does this mapping work (from file descriptor to file)?
|
mapping of ioctl to its definition
|
GETGEO returns bios drive geometry, which is obsolete. IDENTITY returns the raw ATA device identification sector. You shouldn't use either one. Instead, simply read from the files /sys/block/sda/size and /sys/block/sda/queue/hw_sector_size. The former gives the size in "sectors" as if the sector size were 512 bytes, even if it isn't, and the latter gives the real sector size of the drive. If you want the logical sector size instead, use logical_block_size.
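Combining those two reads, the byte size is always size × 512, because the kernel reports /sys/block/<dev>/size in 512-byte units regardless of the drive's real sector size; a sketch with hypothetical example values:

```python
def disk_bytes(size_in_512b_units):
    """/sys/block/<dev>/size counts 512-byte units, whatever the real sector size."""
    return size_in_512b_units * 512

def sector_count(size_in_512b_units, logical_block_size):
    """Number of real sectors, e.g. for a 4Kn drive."""
    return disk_bytes(size_in_512b_units) // logical_block_size

# Hypothetical values as read from sysfs for a 500 GB-class disk.
size = 976773168          # cat /sys/block/sda/size
lbs = 512                 # cat /sys/block/sda/queue/logical_block_size
print(disk_bytes(size))   # 500107862016
print(sector_count(size, lbs))  # 976773168
```

With a 4096-byte logical block size the same arithmetic gives the 4Kn sector count instead.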
|
Can anyone explain the core difference between HDIO_GETGEO and HDIO_GET_IDENTITY?
From the Linux documentation and the document titled Summary of HDIO_ ioctl calls, I know that the former is for "getting device geometries" and the latter for "getting IDE identification info".
In the HDIO summary document, it is said that an object of "struct hd_geometry" is passed as an argument to an "ioctl" call and it will contain the "number of sectors".
However, HDIO_GET_IDENTITY returns an unsigned char array. But from this SO question, I hope that struct hd_driveid contains the bytes per sector and other info. And I read somewhere that hd_driveid can be passed as an argument to ioctl if HDIO_GET_IDENTITY is used in the call.
I need clarification on all these doubts.
Also which HDIO_ ioctl call should I use to get the number of sectors and bytes per sector of my hard disk in Linux?
|
HDIO_GETGEO and HDIO_GET_IDENTITY in Linux using C++
|
The system call involved is … uname! You can see it in your trace:
uname({sysname="Linux", nodename="debian", ...}) = 0It provides the operating system name, release, version etc.
|
Does anyone know if uname() makes an ioctl() call directly or indirectly? I reviewed the source, but didn't see that it does. I also used strace and did not see such a kernel call made.
Thanks
strace uname
execve("/usr/bin/uname", ["uname"], 0x7fffe60bef30 /* 35 vars */) = 0
brk(NULL) = 0x559dc7796000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=161332, ...}) = 0
mmap(NULL, 161332, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fd0c9384000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260A\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1820400, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd0c9382000
mmap(NULL, 1832960, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fd0c91c2000
mprotect(0x7fd0c91e4000, 1654784, PROT_NONE) = 0
mmap(0x7fd0c91e4000, 1339392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7fd0c91e4000
mmap(0x7fd0c932b000, 311296, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x169000) = 0x7fd0c932b000
mmap(0x7fd0c9378000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b5000) = 0x7fd0c9378000
mmap(0x7fd0c937e000, 14336, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fd0c937e000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7fd0c9383540) = 0
mprotect(0x7fd0c9378000, 16384, PROT_READ) = 0
mprotect(0x559dc6f13000, 4096, PROT_READ) = 0
mprotect(0x7fd0c93d3000, 4096, PROT_READ) = 0
munmap(0x7fd0c9384000, 161332) = 0
brk(NULL) = 0x559dc7796000
brk(0x559dc77b7000) = 0x559dc77b7000
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3031632, ...}) = 0
mmap(NULL, 3031632, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fd0c8edd000
close(3) = 0
uname({sysname="Linux", nodename="debian", ...}) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0), ...}) = 0
write(1, "Linux\n", 6Linux
) = 6
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
|
uname: what ioctl does it use?
|
Are both the above prototypes suitable in this case? If yes, why? If no, how to choose the right one?

They are not both suitable. Only version 2 is currently available in the kernel, so this is the version that should be used.

What header/source file(s) contain these prototypes? In other words: what is the official reference file for these prototypes?

They are in include/linux/fs.h (this is a path relative to the kernel sourcecode root directory), inside the struct file_operations definition:
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);That is: the member unlocked_ioctl must be a pointer to a function
long ioctl(struct file *f, unsigned int cmd, unsigned long arg);

which is exactly version 2. If a function my_ioctl() is defined inside a kernel module using version 1 instead, a compiler error will be generated:
error: initialization of ‘long int (*)(struct file *, unsigned int, long unsigned int)’ from incompatible pointer type ‘long int (*)(struct inode *, struct file *, unsigned int, long unsigned int)’ [-Werror=incompatible-pointer-types]
.unlocked_ioctl = my_ioctl
^~~~~~~~Some additional comments
Version 1 was the only one until kernel 2.6.10, where struct file_operations only had

int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);

This ioctl function, however, ran under the Big Kernel Lock (BKL): it locked the whole kernel during its operation. This is undesirable. So, from 2.6.11,
int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);

A new way to use ioctls has been introduced, which did not lock the kernel. Here the old ioctl with kernel lock and the new unlocked_ioctl coexist. From 2.6.36, the old ioctl has been removed. All the drivers should be updated accordingly, to only use unlocked_ioctl. Refer to this answer for more information.
In a recent kernel release (5.15.2), it seems that there are still a few files using the old ioctl:
linux-5.15.2$ grep -r "ioctl(struct inode" *
Documentation/cdrom/cdrom-standard.rst: int cdrom_ioctl(struct inode *ip, struct file *fp,
drivers/staging/vme/devices/vme_user.c:static int vme_user_ioctl(struct inode *inode, struct file *file,
drivers/scsi/dpti.h:static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd, ulong arg);
drivers/scsi/dpt_i2o.c:static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd, ulong arg)
fs/fuse/ioctl.c:static int fuse_priv_ioctl(struct inode *inode, struct fuse_file *ff,
fs/btrfs/ioctl.c:static noinline int search_ioctl(struct inode *inode,
fs/ocfs2/refcounttree.h:int ocfs2_reflink_ioctl(struct inode *inode,
fs/ocfs2/refcounttree.c:int ocfs2_reflink_ioctl(struct inode *inode,
net/sunrpc/cache.c:static int cache_ioctl(struct inode *ino, struct file *filp,

vme_user.c, dpt_i2o.c and cache.c, however, have:
static const struct file_operations adpt_fops = {
.unlocked_ioctl = adpt_unlocked_ioctl,

and then
static long adpt_unlocked_ioctl(struct file *file, uint cmd, ulong arg)
{
struct inode *inode;
long ret;

inode = file_inode(file);

mutex_lock(&adpt_mutex);
ret = adpt_ioctl(inode, file, cmd, arg);

So they use the old version, inside the new (getting the inode from the available data, as suggested by Andy Dalton in the comments). As regards the files inside fs: they seem not to use a struct file_operations; also, their functions are not the ioctl defined in
int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);

because they take different parameters (fuse_priv_ioctl in fs/fuse/ioctl.c, search_ioctl in fs/btrfs/ioctl.c, ocfs2_reflink_ioctl in fs/ocfs2/refcounttree.c), so maybe they are only used internally in the driver.
So, the assumption in the linked question that two versions are available for the ioctl function inside a Linux kernel module is wrong. Only unlocked_ioctl (version 2) must be used.
|
As pointed out in this question, the prototype for the ioctl function inside a Linux kernel module is:
(version 1)
int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg);

or
(version 2)
long ioctl(struct file *f, unsigned int cmd, unsigned long arg);

I would like to use them in a kernel module which implements a character device driver.

Are both the above prototypes suitable in this case? If yes, why? If no, how to choose the right one?
What header/source file(s) contain these prototypes? In other words: what is the official reference file for these prototypes?

I'm running Ubuntu 20.04 on x86_64 and these are my available header files:
/usr/include/asm-generic/ioctl.h
/usr/include/linux/ioctl.h
/usr/include/linux/mmc/ioctl.h
/usr/include/linux/hdlc/ioctl.h
/usr/include/x86_64-linux-gnu/sys/ioctl.h
/usr/include/x86_64-linux-gnu/asm/ioctl.hThe only significant line is in /usr/include/x86_64-linux-gnu/sys/ioctl.h:
extern int ioctl (int __fd, unsigned long int __request, ...) __THROW;

but I can't find here any clue about the above two alternative prototypes.
|
Two different function prototypes for Linux kernel module ioctl
|
According to a mailing list question from 2003, Reiserfs doesn't support chattr. Granted that was a long time ago, but given your error above, it seems likely that it still doesn't.
|
I am unable to set or view file attributes using the lsattr and chattr commands on the Reiser file system. The following result is observed:
chattr +i Temp.txt
chattr: Inappropriate ioctl for device while reading flags on Temp.txt

lsattr Temp.txt
lsattr: Inappropriate ioctl for device While reading flags on Temp.txt

Is there a way to get file attributes with ReiserFS, or how should I access file attributes on ReiserFS?
|
Inappropriate ioctl for device while reading flags on <file>
|
The hypervisor means the layer that manages a virtual environment, like VMware, XEN or VirtualBox.
So the steal field should be an interesting field to monitor, to detect problems or oversubscription of a virtualised environment. The field itself means the time the VM's CPU has to wait for other VMs (virtual machines) to finish their turn (slice), or for a task of the hypervisor itself.
The st field is present in the iostat, vmstat, sar and top commands.
However, this thread confirms the steal field is not supported in VMware VMs (I tested it in VMware 5.5 and can corroborate it). VirtualBox doesn't provide CPU steal time data either. It is supported by Xen and KVM virtual environments.
vmstat also has the same field in the CPU area, but only after Debian 8.
For sar to work sysstat data collection has to be enabled.
As per man vmstat:

st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.

Related thread Tools for Monitoring Steal Time (st)
Further reading: CPU Time stolen from a virtual machine?It’s the time the hypervisor scheduled something else to run instead
of something within your VM. This might be time for another VM, or for
the Hypervisor host itself. If no time were stolen, this time would be
used to run your CPU workload or your idle thread.
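Those tools ultimately read the steal figure from the cpu line of /proc/stat, where it is the eighth value after the label (user nice system idle iowait irq softirq steal …); a hedged parser sketch, fed a sample line here rather than the live file:

```python
def steal_ticks(stat_line):
    """Extract the steal counter (8th value after the 'cpu' label) from /proc/stat."""
    fields = stat_line.split()
    # Field order: user nice system idle iowait irq softirq steal guest guest_nice
    return int(fields[8])

# Sample /proc/stat cpu line with 175 ticks of steal time.
sample = "cpu  10132153 290696 3084719 46828483 16683 0 25195 175 0 0"
print(steal_ticks(sample))  # 175
```

iostat, vmstat, sar and top turn deltas of this counter between samples into the percentage they display.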
|
In the output of iostat there is a steal field; according to the man page, the field is used to:

Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.

But what does that mean? Does it mean the kernel itself is too busy to manage a CPU, causing the CPU to be idle?
|
iostat - What does the 'steal' field mean?
|
It could well be that the data had not been flushed to disk during the first cp operation, but was during the second.
Try setting vm.dirty_background_bytes to something small, like 1048576 (1 MiB) to see if this is the case; run sysctl -w vm.dirty_background_bytes=1048576, and then your first cp scenario should show I/O.
What's going on here?
Except in cases of synchronous and/or direct I/O, writes to disk get buffered in memory until a threshold is hit, at which point they begin to be flushed to disk in the background. This threshold doesn't have an official name, but it's controlled by vm.dirty_background_bytes and vm.dirty_background_ratio, so I'll call it the "Dirty Background Threshold." From the kernel docs:

vm.dirty_background_bytes
Contains the amount of dirty memory at which the background kernel flusher threads will start writeback.
Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written it is immediately taken into account to evaluate the dirty memory limits and the other appears as 0 when read.
dirty_background_ratio
Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data.
The total available memory is not equal to total system memory.

vm.dirty_bytes and vm.dirty_ratio
There's a second threshold, beyond this one. Well, more a limit than a threshold, and it's controlled by vm.dirty_bytes and vm.dirty_ratio. Again, it doesn't have an official name, so we'll call it the "Dirty Limit". Once enough data has been "written", but not committed to the underlying block device, further attempts to write will have to wait for write I/O to complete. (The precise details of what data they'll have to wait on is unclear to me, and may be a function of the I/O scheduler. I don't know.)
Why?
Disks are slow. Spinning rust especially so, so while the R/W head on a disk is moving to satisfy a read request, no write requests can be serviced until the read request completes and the write request can be started. (And vice versa.)
Efficiency
This is why we buffer write requests in memory and cache data we've read; we move work from the slow disk to faster memory. When we eventually go to commit the data to disk, we've got a good quantity of data to work with, and we can try to write it in a way that minimizes seek time. (If you're using an SSD, replace the concept of disk seek time with reflashing of SSD blocks; reflashing consumes SSD life and is a slow operation, which SSDs attempt--to varying degrees of success--to hide with their own write caching.)
We can tune how much data gets buffered before the kernel attempts to write it to disk using vm.dirty_background_bytes and vm.dirty_background_ratio.
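As a concrete illustration of what the ratio tunables work out to in bytes, here is a small sketch. The sysctl lines are shown commented out so it's safe to run anywhere; the 16 GiB figure and the threshold values are example numbers, not recommendations:

```shell
# Inspect the current thresholds:
#   sysctl vm.dirty_background_bytes vm.dirty_background_ratio
# Set a 1 MiB background threshold (writing one sysctl zeroes its counterpart):
#   sysctl -w vm.dirty_background_bytes=$((1 * 1024 * 1024))
#
# The *_ratio tunables are percentages of available memory. On a box with
# roughly 16 GiB available, dirty_background_ratio=10 works out to about:
avail_bytes=$((16 * 1024 * 1024 * 1024))
bg_threshold=$((avail_bytes * 10 / 100))
echo "$bg_threshold bytes ($((bg_threshold / 1024 / 1024)) MiB)"
```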
Too much write data buffered!
If the amount of data you're writing is too great for how quickly it's reaching disk, you'll eventually consume all your system memory. First, your read cache will go away, meaning fewer read requests will be serviced from memory and more will have to be serviced from disk, slowing down your writes even further! If your write pressure still doesn't let up, eventually memory allocations will have to wait on your write cache being freed up, and that'll be even more disruptive.
So we have vm.dirty_bytes (and vm.dirty_ratio); it lets us say, "hey, wait up a minute, it's really time we got data to the disk, before this gets any worse."
Still too much data
Putting a hard stop on I/O is very disruptive, though; disk is already slow from the perspective of reading processes, and it can take several seconds to several minutes for that data to flush; consider vm.dirty_ratio's default of 20. If you have a system with 16GiB of RAM and no swap, you might find your I/O blocked while you wait for 3.4GiB of data to get flushed to disk. On a server with 128GiB of RAM? You're going to have services timing out while you wait on 27.5GiB of data!
So it's helpful to keep vm.dirty_bytes (or vm.dirty_ratio, if you prefer) fairly low, so that if you hit this hard threshold, it will only be minimally disruptive to your services.
What are good values?
With these tunables, you're always trading between throughput and latency. Buffer too much, and you'll have great throughput but terrible latency. Buffer too little, and you'll have terrible throughput but great latency.
On workstations and laptops with only single disks, I like to set vm.dirty_background_bytes to around 1MiB, and vm.dirty_bytes to between 8MiB and 16MiB. I very rarely find a throughput benefit beyond 16MiB for single-user systems, but the latency hangups can get pretty bad for any synchronous workloads like web browser data stores.
On anything with a striped parity array, I find some multiple of the array's stripe width to be a good starting value for vm.dirty_background_bytes; it reduces the likelihood of needing to perform a read/update/write sequence while updating parity, improving array throughput.
For vm.dirty_bytes, it depends on how much latency your services can suffer. Myself, I like calculating the theoretical throughput of the block device, use that to calculate how much data it could move in 100ms or so, and setting vm.dirty_bytes accordingly. A 100ms delay is huge, but it's not catastrophic (in my environment.)
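That back-of-the-envelope calculation can be sketched as follows; the 400 MiB/s figure is a made-up device throughput, not a measurement, and the 100 ms budget is the latency target described above:

```shell
# Size vm.dirty_bytes to roughly 100 ms worth of device throughput.
throughput_mib_s=400   # hypothetical sustained write throughput
latency_ms=100         # how long we can tolerate a flush stall
dirty_bytes=$((throughput_mib_s * 1024 * 1024 * latency_ms / 1000))
echo "vm.dirty_bytes = $dirty_bytes ($((dirty_bytes / 1024 / 1024)) MiB)"
# Apply (as root): sysctl -w vm.dirty_bytes=$dirty_bytes
```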
All of this depends on your environment, though; these are only a starting point for finding what works well for you.
|
os: centos7
test file: a.txt 1.2G
monitor command: iostat -xdm 1
The first scene:
cp a.txt b.txt  # b.txt does not exist

The second scene:

cp a.txt b.txt  # b.txt exists

Why does the first scene consume no IO, while the second scene does?
|
why doesn't the linux cp command consume disk IO?
|
A merge happens when two i/o requests can be collapsed into one single longer-length request. For example, a write to block 1234 followed by a write to block 1235 can be merged into a single i/o request for block 1234 of length 2 blocks. As this sort of situation can be fairly common it is worth putting the effort in the kernel to do the merge, freeing up an i/o request structure, and reducing interrupt overhead.
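To make the idea concrete, here is a toy sketch (not the kernel's actual algorithm) that collapses contiguous (start sector, length) requests the way the block layer does before the counts show up in rrqm/s and wrqm/s:

```shell
# Toy model of request merging: adjacent (start_sector, length) requests
# are collapsed into one longer request; non-adjacent ones are left alone.
merged=$(printf '%s\n' '1234 1' '1235 1' '1240 2' '1242 1' | awk '
  NR > 1 && $1 == prev_start + prev_len { prev_len += $2; next }  # contiguous: extend
  NR > 1 { print prev_start, prev_len }                           # flush previous request
  { prev_start = $1; prev_len = $2 }
  END { print prev_start, prev_len }')
echo "$merged"
```

Four incoming requests collapse into two: blocks 1234-1235 merge into one request of length 2, and 1240-1242 into one of length 3.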
If you are interested in more detailed statistics on this aspect of i/o see the pdf btt user guide which is part of blktrace.
|
From iostat man pages:
rrqm/s
The number of read requests merged per second that were queued to the device.

wrqm/s
The number of write requests merged per second that were queued to the device.

r/s
The number (after merges) of read requests completed per second for the device.

w/s
The number (after merges) of write requests completed per second for the device.

Can anyone elaborate on the merge concept since the documentation does not provide any further details?
|
iostat: what is exactly the concept of merge
|
Why is the size of my IO requests being limited, to about 512K?

I posit that I/O is being limited to "about" 512 KiB due to the way it is being submitted and various limits being reached (in this case /sys/block/sda/queue/max_segments). The questioner took the time to include various pieces of side information (such as kernel version and the blktrace output) that allows us to take a guess at this mystery, so let's see how I came to that conclusion.

Why [...] limited, to about 512K?

It's key to note the questioner carefully said "about" in the title. While the iostat output makes us think we should be looking for values of 512 KiB:
Device [...] aqu-sz rareq-sz wareq-sz svctm %util
sda [...] 1.42 511.81 0.00 1.11 34.27

the blktrace (via blkparse) gives us some exact values:
8,0 0 3090 5.516361551 15201 Q R 6496256 + 2048 [dd]
8,0 0 3091 5.516370559 15201 X R 6496256 / 6497600 [dd]
8,0 0 3092 5.516374414 15201 G R 6496256 + 1344 [dd]
8,0 0 3093 5.516376502 15201 I R 6496256 + 1344 [dd]
8,0 0 3094 5.516388293 15201 G R 6497600 + 704 [dd]
8,0 0 3095 5.516388891 15201 I R 6497600 + 704 [dd]

(We typically expect a single sector to be 512 bytes in size.) So the read I/O from dd for sector 6496256 that was sized 2048 sectors (1 MiByte) was split into two pieces - one read starting at sector 6496256 for 1344 sectors and another read starting at sector 6497600 for 704 sectors. So the max size of a request before it is split is slightly more than 1024 sectors (512 KiB)... but why?
The questioner mentions a kernel version of 5.1.15-300.fc30.x86_64. Doing a Google search for linux split block i/o kernel turns up "Chapter 16. Block Drivers" from Linux Device Drivers, 3rd Edition and that mentions:

[...] a bio_split call that can be used to split a bio into multiple chunks for submission to more than one device

While we're not splitting bios because we intend to send them to different devices (in the way md or device mapper might) this still gives us an area to explore. Searching LXR's 5.1.15 Linux kernel source for bio_split includes a link to the file block/blk-merge.c. Inside that file there is blk_queue_split() and for non-special I/Os that function calls blk_bio_segment_split().
(If you want to take a break and explore LXR now's a good time. I'll continue the investigation below and try and be more terse going forward)
In blk_bio_segment_split() the max_sectors variable ultimately comes from aligning the value returned by blk_max_size_offset(), which looks at q->limits.chunk_sectors and, if that's zero, just returns q->limits.max_sectors. Clicking around, we see how max_sectors is derived from max_sectors_kb in queue_max_sectors_store() which is in block/blk-sysfs.c. Back in blk_bio_segment_split(), the max_segs variable comes from queue_max_segments() which returns q->limits.max_segments. Continuing down blk_bio_segment_split() we see the following:
bio_for_each_bvec(bv, bio, iter) {

According to block/biovecs.txt we're iterating over multi-page bvecs.
if (sectors + (bv.bv_len >> 9) > max_sectors) {
/*
* Consider this a new segment if we're splitting in
* the middle of this vector.
*/
if (nsegs < max_segs &&
sectors < max_sectors) {
/* split in the middle of bvec */
bv.bv_len = (max_sectors - sectors) << 9;
bvec_split_segs(q, &bv, &nsegs,
&seg_size,
&front_seg_size,
§ors, max_segs);
}
goto split;
}

So if the I/O size is bigger than max_sectors_kb (which is 1280 KiB in the questioner's case) it will be split (if there are spare segments and sector space then we'll fill the current I/O as much as possible before splitting by dividing it into segments and adding as many as possible). But in the questioner's case the I/O is "only" 1 MiB, which is smaller than 1280 KiB, so we're not in this case... Further down we see:
if (bvprvp) {
if (seg_size + bv.bv_len > queue_max_segment_size(q))
goto new_segment;
[...]

queue_max_segment_size() returns q->limits.max_segment_size. Given some of what we've seen earlier (if (sectors + (bv.bv_len >> 9) > max_sectors)) bv.bv_len is going to be in terms of bytes (otherwise why do we have to divide it by 512?) and the questioner said /sys/block/sda/queue/max_segment_size was 65536. If only we knew what value bv.bv_len was...
[...]
new_segment:
if (nsegs == max_segs)
goto split;

bvprv = bv;
bvprvp = &bvprv;

if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
nsegs++;
seg_size = bv.bv_len;
sectors += bv.bv_len >> 9;
if (nsegs == 1 && seg_size > front_seg_size)
front_seg_size = seg_size;
} else if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
&front_seg_size, §ors, max_segs)) {
goto split;
}
}

do_split = false;

So for each bv we check to see if it is a single-page or multi-page bvec (by checking whether its size is <= PAGE_SIZE). If it's a single-page bvec we add one to the segment count and do some bookkeeping. If it's a multi-page bvec we check if it needs splitting into smaller segments (the code in bvec_split_segs() does comparisons against get_max_segment_size(), which in this case means it will split the segment into multiple segments no bigger than 64 KiB (earlier we said /sys/block/sda/queue/max_segment_size was 65536) but there must be no more than 168 (max_segs) segments). If bvec_split_segs() reached the segment limit and didn't cover all of the bv's length then we will jump to split. However, if we assume we take the goto split case we only generate 1024 / 64 = 16 segments, so ultimately we wouldn't have to submit less than 1 MiB of I/O, so this is not the path the questioner's I/O went through...
Working backwards, if we assume there were "only single-page sized segments" this means we can deduce bv.bv_offset + bv.bv_len <= 4096 and since bv_offset is an unsigned int that means 0 <= bv.bv_len <= 4096. Thus we can also deduce we never took the condition body that led to goto new_segment earlier. We then go on to conclude that the original biovec must have had 1024 / 4 = 256 segments. 256 > 168, so we would have caused a jump to split just after new_segment, thus generating one I/O of 168 segments and another of 88 segments. 168 * 4096 = 688128 bytes and 88 * 4096 = 360448 bytes, but so what? Well:

688128 / 512 = 1344
360448 / 512 = 704

Which are the numbers we saw in the blktrace output:
[...] R 6496256 + 2048 [dd]
[...] R 6496256 / 6497600 [dd]
[...] R 6496256 + 1344 [dd]
[...] R 6496256 + 1344 [dd]
[...] R 6497600 + 704 [dd]
[...] R 6497600 + 704 [dd]

So I propose that the dd command line you're using is causing I/O to be formed into single-page bvecs, and because the maximum number of segments is being reached, splitting of I/O happens at boundaries of 672 KiB for each I/O.
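The arithmetic behind that proposal can be replayed in shell; the constants come straight from the question (4 KiB pages, 512-byte sectors, max_segments of 168, a 1 MiB dd read):

```shell
# Re-derive the observed split, assuming single-page (4 KiB) bvecs.
page_size=4096
sector_size=512
max_segs=168
io_bytes=$((1024 * 1024))                        # dd's 1 MiB read
total_segs=$((io_bytes / page_size))             # 256 single-page segments
first=$((max_segs * page_size / sector_size))    # sectors in the first split
second=$(((total_segs - max_segs) * page_size / sector_size))
echo "split: $first + $second sectors"
```

This reproduces the 1344 + 704 sector pieces seen in the blktrace output.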
I suspect if we'd submitted I/O a different way (e.g. via buffered I/O) such that multi-page bvecs were generated then we would have seen a different splitting point.

Is there a configuration option for this behaviour?

Sort of - /sys/block/<block device>/queue/max_sectors_kb is a control on the maximum size that a normal I/O submitted through the block layer can be before it is split, but it is only one of many criteria - if other limits are reached (such as the maximum segments) then a block-based I/O may be split at a smaller size. Also, if you use raw SCSI commands it's possible to submit an I/O up to /sys/block/<block device>/queue/max_hw_sectors_kb in size, but then you're bypassing the block layer and bigger I/Os will just be rejected.
In fact you can see Ilya Dryomov describing this max_segments limitation in a June 2015 Ceph Users thread "krbd splitting large IO's into smaller IO's" and a fix later went in for rbd devices (which itself was later fixed).
Further validation of the above comes via a document titled "When 2MB turns into 512KB" by kernel block layer maintainer Jens Axboe, which has a section titled "Device limitations" covering the maximum segments limitation more succinctly.
|
I read /dev/sda using a 1MiB block size. Linux seems to limit the IO requests to an average size of 512KiB. What is happening here? Is there a configuration option for this behaviour?
$ sudo dd iflag=direct if=/dev/sda bs=1M of=/dev/null status=progress
1545601024 bytes (1.5 GB, 1.4 GiB) copied, 10 s, 155 MB/s
1521+0 records in
1520+0 records out
...

While my dd command is running, rareq-sz is 512.

rareq-sz
The average size (in kilobytes) of the read requests that were issued to the device.
-- man iostat

$ iostat -d -x 3
...
Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sda 309.00 0.00 158149.33 0.00 0.00 0.00 0.00 0.00 5.24 0.00 1.42 511.81 0.00 1.11 34.27
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
...

The kernel version is 5.1.15-300.fc30.x86_64. max_sectors_kb is 1280.
$ cd /sys/class/block/sda/queue
$ grep -H . max_sectors_kb max_hw_sectors_kb max_segments max_segment_size optimal_io_size logical_block_size chunk_sectors
max_sectors_kb:1280
max_hw_sectors_kb:32767
max_segments:168
max_segment_size:65536
optimal_io_size:0
logical_block_size:512
chunk_sectors:0

By default I use the BFQ I/O scheduler. I also tried repeating the test after echo 0 | sudo tee wbt_lat_usec. I also then tried repeating the test after echo mq-deadline|sudo tee scheduler. The results remained the same.
Apart from WBT, I used the default settings for both I/O schedulers. E.g. for mq-deadline, iosched/read_expire is 500, which is equivalent to half a second.
During the last test (mq-deadline, WBT disabled), I ran btrace /dev/sda. It shows all the requests were split into two unequal halves:
8,0 0 3090 5.516361551 15201 Q R 6496256 + 2048 [dd]
8,0 0 3091 5.516370559 15201 X R 6496256 / 6497600 [dd]
8,0 0 3092 5.516374414 15201 G R 6496256 + 1344 [dd]
8,0 0 3093 5.516376502 15201 I R 6496256 + 1344 [dd]
8,0 0 3094 5.516388293 15201 G R 6497600 + 704 [dd]
8,0 0 3095 5.516388891 15201 I R 6497600 + 704 [dd]
8,0 0 3096 5.516400193 733 D R 6496256 + 1344 [kworker/0:1H]
8,0 0 3097 5.516427886 733 D R 6497600 + 704 [kworker/0:1H]
8,0 0 3098 5.521033332 0 C R 6496256 + 1344 [0]
8,0 0 3099 5.523001591 0 C R 6497600 + 704 [0]

X -- split
On [software] raid or device mapper setups, an incoming i/o may straddle a device or internal zone and needs to be chopped up into smaller pieces for service. This may indicate a performance problem due to a bad setup of that raid/dm device, but may also just be part of normal boundary conditions. dm is notably bad at this and will clone lots of i/o.
-- man blkparse

Things to ignore in iostat
Ignore the %util number. It is broken in this version. (`dd` is running at full speed, but I only see 20% disk utilization. Why?)
I thought aqu-sz is also affected due to being based on %util. Although I thought that meant it would be about three times too large here (100/34.27).
Ignore the svtm number. "Warning! Do not trust this field any more. This field will be removed in a future sysstat version."
|
Why is the size of my IO requests being limited, to about 512K?
|
The ionice command is "nice for IO" and will run a command with different IO priorities, so it will (or won't) yield to other processes that want to use the disk.
ionice -c 3 tar xf ...

will run the tar command with "idle" priority, so it only uses the disk when nobody else wants to. That will prevent it interfering with other processes.
There won't be much benefit in running multiple extractions in parallel in this case. A tar file is just concatenated data and some headers, so there's nothing much except reading and writing to do. It might be useful if you're working on different disks, or for certain SSDs.
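One way to bake this into a workflow is a small wrapper that degrades gracefully when ionice isn't installed; the function name is made up for illustration:

```shell
# Run a command with idle I/O priority (class 3) when ionice is available,
# otherwise just run it directly.
run_idle() {
    if command -v ionice >/dev/null 2>&1; then
        ionice -c 3 "$@"
    else
        "$@"
    fi
}

# Example invocation (substitute your real extraction command):
run_idle echo "tar xf archive.tar would run here"
```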
|
I have a web server (specs below) with 12 TB of storage. I am moving massive amounts of csv files packaged in TAR's to the server, then extracting on the server. The problem is that when extracting the TAR files, the server becomes so slow that it's almost unusable. I'm not doing anything crazy, generally running 2-4 extractions at a time. But even just running one or two slows the server down noticeably. This is going to be a massive problem for me since I will be uploading and extracting TAR files while people will want to use the site, and right now I can't do both. I'm really new to Linux and this community, so let me know if I can provide any more useful info and I'll update the post.
I'm guessing the disk is the bottleneck?
If so, can I limit the tar extraction disk usage or give everything else priority?
I/O Stat:
avg-cpu: %user %nice %system %iowait %steal %idle
0.15 0.56 0.40 14.83 0.00 84.06

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1907 2
sda 155.19 787.23 1484.89 604305327 1139862930
sdb 154.49 765.39 1493.48 587544552 1146456242
sdc 153.82 759.91 1485.53 583338594 1140353662
md4 1041.52 1861.40 4425.45 1428880721 3397151904
md3 4.78 46.70 11.08 35850458 8501904
md2 0.00 0.00 0.00 3641 98

TOP:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7194 root 20 0 0 0 0 D 5.0 0.0 0:17.38
13811 user1 20 0 121272 1620 1464 D 4.3 0.0 0:02.20 tar

Server Specs: Intel Atom C2750, 8c/8t - 2.4GHz/2.6GHz, 16GB DDR3 ECC 1600 MHz
|
Linux - Extracting Tar dramatically slows down server
|