You can write your own FUSE filesystem (which you can do in almost any scripting or programming language, even bash) that simply proxies filesystem calls to the target filesystem (translating paths where necessary) and monitors whatever you want to monitor. Otherwise, you might investigate the output of strace for the programs performing the I/O calls of interest, if possible.
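For the strace route, a minimal sketch (the PID and the syscall list are illustrative; adjust both to the calls you care about):

# attach to a running process and log its I/O-related syscalls with timestamps
strace -f -tt -e trace=read,write,lseek,open,openat,fsync \
    -o /tmp/io-trace.log -p 1234
# a high proportion of lseek calls between reads/writes suggests random I/O
grep -c lseek /tmp/io-trace.log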
While an application is running, I can monitor disk bandwidth usage using Linux tools such as dstat. Now I'd like to know how many sequential or random disk I/Os are occurring in the system. Does anyone know any ways to achieve this?
Is there a way to monitor disk i/o patterns? (i.e. random or sequential i/o?)
Could you list the specific command you are using? The first printout is the average over the life of the system, so it rarely changes. Run "iostat -x 1 10" to get 10 runs of iostat at 1-second intervals with extended statistics. Runs 2 through 10 should have the data you want. If they do, you can fiddle with the parameters to get exactly what you need.
My iostat doesn't change... at all. It'll show a change in blocks being read and written, but it doesn't change at all when it comes to blocks/kB/MB read and written. When the server sits idle, it shows 363 kB_read/s, 537 kB_wrtn/s. If I put it under heavy load, it says the same thing. Is it bugged out? How do I fix it? Using CentOS 6, being used as a primary MySQL server.
Why doesn't my IOstat change its output at all?
The source of the abysmal iostat output for %util and svctm seems to be related to a kernel bug which will be solved in kernel-3.10.0-1036.el7 or in RHEL/CentOS release 7.7. Devices that have the scheduler set to none are affected, which is the default for NVMe drives. For reference, there is a Red Hat solution (login required) which describes the bug. In the CentOS bug report someone wrote that the issue will be solved with the above-mentioned kernel/release version. Changing the scheduler should resolve the issue until the new kernel is available. As it seems to affect only the metrics, not real performance, another possibility would be to just ignore the metrics until the new kernel arrives. I cannot verify this for lack of an NVMe drive; maybe @michal kralik can verify this.
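As a sketch of the suggested workaround (assuming a device named nvme0n1; the active scheduler is shown in brackets):

# show the current scheduler for the device
cat /sys/block/nvme0n1/queue/scheduler
# switch away from "none" until the fixed kernel is available
echo mq-deadline | sudo tee /sys/block/nvme0n1/queue/scheduler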
I have a CentOS 7 server (kernel 3.10.0-957.12.1.el7.x86_64) with 2 NVMe disks with the following setup:

# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1       259:0    0   477G  0 disk
├─nvme0n1p1   259:2    0   511M  0 part  /boot/efi
├─nvme0n1p2   259:4    0  19.5G  0 part
│ └─md2         9:2    0  19.5G  0 raid1 /
├─nvme0n1p3   259:7    0   511M  0 part  [SWAP]
└─nvme0n1p4   259:9    0 456.4G  0 part
  └─data-data 253:0    0 912.8G  0 lvm   /data
nvme1n1       259:1    0   477G  0 disk
├─nvme1n1p1   259:3    0   511M  0 part
├─nvme1n1p2   259:5    0  19.5G  0 part
│ └─md2         9:2    0  19.5G  0 raid1 /
├─nvme1n1p3   259:6    0   511M  0 part  [SWAP]
└─nvme1n1p4   259:8    0 456.4G  0 part
  └─data-data 253:0    0 912.8G  0 lvm   /data

Our monitoring and iostat continually show nvme0n1 and nvme1n1 with 80%+ I/O utilization while the individual partitions have 0% I/O utilization and are fully available (250k IOPS, 1 GB read/write per second).

avg-cpu:  %user %nice %system %iowait %steal %idle
           7.14  0.00    3.51    0.00   0.00 89.36

Device:    rrqm/s wrqm/s  r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme1n1      0.00   0.00 0.00 50.50   0.00 222.00     8.79     0.73  0.02    0.00    0.02 14.48 73.10
nvme1n1p1    0.00   0.00 0.00  0.00   0.00   0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
nvme1n1p2    0.00   0.00 0.00 49.50   0.00 218.00     8.81     0.00  0.02    0.00    0.02  0.01  0.05
nvme1n1p3    0.00   0.00 0.00  0.00   0.00   0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
nvme1n1p4    0.00   0.00 0.00  1.00   0.00   4.00     8.00     0.00  0.00    0.00    0.00  0.00  0.00
nvme0n1      0.00   0.00 0.00 49.50   0.00 218.00     8.81     0.73  0.02    0.00    0.02 14.77 73.10
nvme0n1p1    0.00   0.00 0.00  0.00   0.00   0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
nvme0n1p2    0.00   0.00 0.00 49.50   0.00 218.00     8.81     0.00  0.02    0.00    0.02  0.01  0.05
nvme0n1p3    0.00   0.00 0.00  0.00   0.00   0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
nvme0n1p4    0.00   0.00 0.00  0.00   0.00   0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
md2          0.00   0.00 0.00 48.50   0.00 214.00     8.82     0.00  0.00    0.00    0.00  0.00  0.00
dm-0         0.00   0.00 0.00  1.00   0.00   4.00     8.00     0.00  0.00    0.00    0.00  0.00  0.00

Any ideas what can be the root cause of this behavior? All seems to be working fine except for the monitoring triggering high I/O alerts.
NVMe disk shows 80% io utilization, partitions show 0% io utilization
The man page of iostat says:

The interval parameter specifies the amount of time in seconds between each report. The first report contains statistics for the time since system startup (boot), unless the -y option is used (in this case, this report is omitted). Each subsequent report contains statistics collected during the interval since the previous report.

This means the first output of iostat -dx 1 will be the same as iostat -dx, but subsequent outputs are different. You cannot reproduce this behavior using watch, because each watch invocation starts a fresh iostat whose first (and only) report is the since-boot average.
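A minimal script.sh that does work under watch uses -y to skip the since-boot report and prints exactly one fresh sample per invocation:

#!/bin/bash
# -y omits the since-boot report; "1 1" requests one report over a one-second interval
iostat -dxy 1 1

watch -n 2 ./script.sh then shows a new one-second sample on every refresh.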
I'm using iostat to get the current disk load each second with iostat -dx 1 (specifically, the %util column). However, I'd like to put this in a bash script and control the interval with the watch command, such as: watch -n 1 ./script.sh. Running the following in script.sh won't print a thing:

io_load=`iostat -dx 1`
echo $io_load

Any ideas?
Current disk load
iostat displays stats since boot once (per command run, not per boot). Then, depending on parameters (e.g. running iostat 2, for every two seconds), it will display stats since the previous display in the same command run:

The first report generated by the iostat command provides statistics concerning the time since the system was booted, unless the -y option is used (in this case, this first report is omitted). Each subsequent report covers the time since the previous report. All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics. On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured.

Really, iostat is just doing a few subtractions. The bookkeeping role is done by the kernel; iostat just accesses various /proc (or perhaps other similar) entries. Among them (found simply by using strace on iostat 2):

/proc/diskstats
/proc/uptime
/proc/stat

The first read is since boot. To report what happened since its last display, iostat memorizes (while it's running, in memory) the previous values and subtracts them from the newly read ones: that's what happened during the time period. To confirm the OP's questions: every run of the iostat command is independent of other runs of the iostat command. It won't affect another concurrently running iostat command or future runs of the iostat command.
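As an illustration of that bookkeeping, here is a rough sketch of the subtraction iostat performs, done by hand against /proc/diskstats (sda is a placeholder device name; fields 6 and 10 are sectors read and written, and a diskstats sector is 512 bytes):

dev=sda
s1=($(awk -v d="$dev" '$3 == d {print $6, $10}' /proc/diskstats))
sleep 1
s2=($(awk -v d="$dev" '$3 == d {print $6, $10}' /proc/diskstats))
# convert sector deltas to kB over the one-second interval
echo "read: $(( (s2[0] - s1[0]) * 512 / 1024 )) kB/s, written: $(( (s2[1] - s1[1]) * 512 / 1024 )) kB/s"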
I see conflicting information online about the use of IOSTAT. In particular, I would like to be able to show an average since boot. Based on information I have read, if I have never issued the IOSTAT command, it will show the average since boot. But if at some point I have issued an IOSTAT command, the next execution will not be since boot, but rather since the last execution. How do I execute IOSTAT since boot, assuming I have already run it once before?
Does IOSTATS show output since boot or since last execution?
A search through the Illumos fiber-channel device code for ENODEV shows 13 uses of ENODEV in the source code that originated as OpenSolaris. Of those instances, I suspect this is the one most likely to cause your "No Device" errors:

pd = fctl_hold_remote_port_by_pwwn(port, &pwwn);
if (pd == NULL) {
        fcio->fcio_errno = FC_BADDEV;
        return (ENODEV);
}

That code is in the fp_fcio_login() function, where the code appears to be trying to log in to a remote WWN. It seems appropriate to assume a bad cable could prevent that from happening. Note that the fiber-channel error code is FC_BADDEV, which also seems appropriate for a bad cable. In short, a review of the source code indicates that ENODEV errors are consistent with a bad cable. You can use DTrace to more closely identify the association if necessary. Given that both hard and transport errors occur about 5 or 6 orders of magnitude more frequently, IMO that effort isn't necessary unless the ENODEV errors still occur after the other errors have been addressed and no longer occur.
We presume to have a faulty cable that connects the SAN to a direct I/O LDOM. This is a snippet of the error when running iostat -En:

c5t60060E8007C50E000030C50E00001067d0 Soft Errors: 0 Hard Errors: 696633 Transport Errors: 704386
Vendor: HITACHI  Product: OPEN-V  Revision: 8001  Serial No: 504463
Size: 214.75GB <214748364800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 6 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0

What does No Device: 6 mean here?
What does "no device" mean when running iostat -En
The easiest way is just to use egrep:

iostat -xd 5 | egrep -v "Linux|Device"

egrep matches extended regular expressions (here, an alternation of two strings), and -v inverts the match, printing only lines that do not contain those strings; in this case, "Linux" and "Device". Output:

vda   0.07 0.28 0.31 4.22 9.25 28.56 16.72 0.08 16.70 38.40 15.12 5.92 2.68
scd0  0.00 0.00 0.00 0.00 0.00  0.00  7.99 0.00  0.88  0.88  0.00 0.88 0.00
dm-0  0.00 0.00 0.28 3.01 8.86 28.13 22.50 0.05 16.58 41.32 14.27 8.11 2.67
dm-1  0.00 0.00 0.09 0.11 0.38  0.43  8.04 0.00  6.45  8.44  4.72 1.00 0.02

vda   0.00 0.00 0.00 0.00 0.00  0.00  0.00 0.00  0.00  0.00  0.00 0.00 0.00
scd0  0.00 0.00 0.00 0.00 0.00  0.00  0.00 0.00  0.00  0.00  0.00 0.00 0.00
dm-0  0.00 0.00 0.00 0.00 0.00  0.00  0.00 0.00  0.00  0.00  0.00 0.00 0.00
dm-1  0.00 0.00 0.00 0.00 0.00  0.00  0.00 0.00  0.00  0.00  0.00 0.00 0.00
I would like to remove both headers (that are incidentally repeated). Any solution for it?

[root@report]# iostat -xd 5
Linux 3.10.0-693.21.1.el7.x86_64 (mdds-pgbackup-01)  07/05/2018  _x86_64_  (2 CPU)

Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda       0.07   0.28 0.31 4.22  9.25 28.56    16.72     0.08 16.70   38.40   15.12  5.92  2.68
scd0      0.00   0.00 0.00 0.00  0.00  0.00     7.99     0.00  0.88    0.88    0.00  0.88  0.00
dm-0      0.00   0.00 0.28 3.01  8.86 28.13    22.50     0.05 16.58   41.32   14.27  8.11  2.67
dm-1      0.00   0.00 0.09 0.11  0.38  0.43     8.04     0.00  6.45    8.44    4.72  1.00  0.02

Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda       0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
scd0      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
dm-0      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
dm-1      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00

expected output:

[root@report]# iostat -xd 5
vda       0.07   0.28 0.31 4.22  9.25 28.56    16.72     0.08 16.70   38.40   15.12  5.92  2.68
scd0      0.00   0.00 0.00 0.00  0.00  0.00     7.99     0.00  0.88    0.88    0.00  0.88  0.00
dm-0      0.00   0.00 0.28 3.01  8.86 28.13    22.50     0.05 16.58   41.32   14.27  8.11  2.67
dm-1      0.00   0.00 0.09 0.11  0.38  0.43     8.04     0.00  6.45    8.44    4.72  1.00  0.02

vda       0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
scd0      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
dm-0      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
dm-1      0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00    0.00    0.00  0.00  0.00
How can I remove the headers
You cannot do so directly. sar (sysstat) and friends are fundamentally limited to daily records. From "sadc.c" (in sysstat-11.7.2), around lines 485-509:

void setup_file_hdr(int fd)
{
        ...
        file_hdr.sa_day = rectime.tm_mday;
        file_hdr.sa_month = rectime.tm_mon;
        file_hdr.sa_year = rectime.tm_year;

So the file header contains one and only one day. Somewhat convincing is the format of an individual record. From "sa.h", around lines 604-623:

struct record_header {
        ...
        /*
         * Timestamp: Hour (0-23), minute (0-59) and second (0-59).
         * Used to determine TRUE time (immutable, non locale dependent time).
         */
        unsigned char hour;
        unsigned char minute;
        unsigned char second;

However, the struct also contains the machine uptime in 1/100ths of a second and the number of seconds since the epoch. I'd have to do some more looking to see how these values are used (which I'm not going to do), so this is more of a hint than proof.
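As a workaround for querying several days, the daily files can simply be read one after another; a sketch (the file names and paths are illustrative; RHEL-like systems keep saDD files under /var/log/sa, Debian-like ones under /var/log/sysstat):

# print CPU utilisation for the 25th and 26th, one day per sar invocation
for f in /var/log/sa/sa25 /var/log/sa/sa26; do
    echo "== $f =="
    sar -u -f "$f"
done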
I don't see that the 'sar' command accepts date-and-time as start time (-s) or end time (-e), rather than just time. So, how do I query 'sar' for more than one day's data points with older dates and times (-f is not going to help here)? The output of the 'sar' command should include the date for each data point, instead of just the time in hours and minutes. I see sysstat splitting sa data files per day. Is it OK to modify the default sysstat cron entries to collect sysstat (sa1/sa2) data in a single sa file per week?

sysstat config:

cat /etc/sysconfig/sysstat
# sysstat-9.0.4 configuration file.
# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=7
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=10
# Parameters for the system activity data collector (see sadc manual page)
# which are used for the generation of log files.
SADC_OPTIONS="-S DISK"

sysstat cron entries:

cat /etc/cron.d/sysstat
# Run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# 0 * * * * root /usr/lib64/sa/sa1 600 6 &
# Generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
How to query sar (sysstat) for more than one day's data points
kB_wrtn is the total amount written during the iostat update interval. I suppose you used an interval of one second to generate the output in your question, which has the effect that kB_wrtn/s is numerically the same. Try a different interval to see the difference.
/dev/sdc is a SATA hard drive. Do the kB_read and kB_wrtn fields sometimes, in some situations, show total counts? Here they seem to be just the same as the per-second values. Linux kernel 5.4.0-26-generic, sysstat version 12.2.0.

iostat -dz 1

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sdc    40.00       0.00     21.00      0.00       0      21       0

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
dm-0    6.00       0.00     24.00      0.00       0      24       0
sdc    42.00       0.00     42.50      0.00       0      42       0

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
dm-0    5.00       0.00     20.00      0.00       0      20       0
sdc    43.00       0.00     36.00      0.00       0      36       0

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sdc    48.00       0.00     25.00      0.00       0      25       0

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sdc    36.00       0.00     18.50      0.00       0      18       0

Device   tps  kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sdc    40.00       0.00     21.00      0.00       0      21       0
In iostat, why are kB_wrtn/s and kB_wrtn the same?
So I found it... it seems the kernel supplies helper functions for you. You need the request_queue, the bio and the gendisk; call these before and after you process the I/O:

unsigned long start_time;

start_time = jiffies;
generic_start_io_acct(q, bio_op(bio), bio_sectors(bio), &gd->part0);

/* ... process the bio ... */

generic_end_io_acct(q, bio_op(bio), &gd->part0, start_time);

and voilà, the stats and your block device start showing up in iostat.
I've been looking and looking, and everybody explains the /proc/diskstats file, but nobody seems to explain where that data comes from. I found this comment: "Just remember that /proc/diskstats is tracking the kernel's read requests, not yours." on this page: https://kevinclosson.net/2018/10/09/no-proc-diskstats-does-not-track-your-physical-i-o-requests/ But basically my problem is that I've got a kernel module that creates a block device and handles requests via a request handler set via blk_queue_make_request, not blk_init_queue; just like dm, I don't want the kernel to queue requests for me. Everything works fine, but nothing shows up in /proc/diskstats. What bit of magic am I missing to get my stats in there so they will show up in iostat? I assumed the kernel would be tallying this information since it's handling the requests to the kernel module, but apparently not; or I'm missing a flag somewhere or something. Any ideas?
How do I get the linux kernel to track io stats to a block device I create in a loadable module?
You can read the I/O stats from /proc/self/io before and after your task, and subtract the values from the "write_bytes" and "read_bytes" lines. See "man proc" for the details. It does not differentiate by device or folder, though. Here's an example:

#!/bin/bash
cat /proc/$$/io
dd if=/dev/zero of=/tmp/iotest bs=1M count=5
sync
cat /proc/$$/io
rm /tmp/iotest
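A variation that does the subtraction for you; it relies on the same behavior the example above does, namely that a reaped child's counters are folded into the parent's /proc/$$/io, which is how dd's I/O becomes visible here:

#!/bin/bash
rb0=$(awk '/^read_bytes/ {print $2}' /proc/$$/io)
wb0=$(awk '/^write_bytes/ {print $2}' /proc/$$/io)
dd if=/dev/zero of=/tmp/iotest bs=1M count=5 conv=fsync
rb1=$(awk '/^read_bytes/ {print $2}' /proc/$$/io)
wb1=$(awk '/^write_bytes/ {print $2}' /proc/$$/io)
echo "read: $((rb1 - rb0)) bytes, written: $((wb1 - wb0)) bytes"
rm /tmp/iotest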
I have a bash script to do a calculation. This calculation generates big scratch files, as big as 12 GB, and the disk usage of the scratch folder is ~30 GB. I want to know how much total data is written to disk during the process and how much total data is read. This will help me understand the disk I/O bottlenecks and choose a better scratch disk type. Question: track written data (MB or GB) in a folder over a time interval; similarly, track read data from a folder over a time interval. The present version of my script is below.

#!/bin/bash
# Running QM-JOB: helix HPC
d="$1"   # .dal file
m="$2"   # .mol file
n="$3"   # number of CPU cores to be used for this calculation.
dir=$(pwd)
dt=$(date +%Y-%m-%d:%H:%M:%S )
echo -e 'Job started @ '$dt'' >> /home/vayu/dalton/runlog.log
echo "-----------------------------------------------"
df -h /dev/md0
echo "-----------------------------------------------"
folder="<path/to/the/folder>"   # Scratch folder

# start IO log on "scratch folder" (no idea how to implement this)
echo "-----------------------------------------------"
export OMP_NUM_THREADS=$n
source /opt/intel/parallel_studio_xe_2017.0.035/compilers_and_libraries_2017/linux/bin/compilervars.sh intel64
source /opt/intel/parallel_studio_xe_2017.0.035/compilers_and_libraries_2017/linux/mkl/bin/mklvars.sh intel64
source /opt/intel/parallel_studio_xe_2017.0.035/compilers_and_libraries_2017/linux/mpi/bin64/mpivars.sh intel64

./application_script "$d" "$m" "$n" "$folder"
dt2=$(date '+%d/%m/%Y %H:%M:%S');

# stop "scratch folder" IO log
# print total data written in "scratch folder"
# print total data read from "scratch folder"
Track total data written in and read from a folder within bash script
Here are some hints. Yes, it should, because a ZFS volume is created on a zpool, which is located on a storage device. If that storage is shared with other resources, they can affect ZFS pools and volumes. Unfortunately, I do not know what Proxmox is, but %util usually shows the time the device has a positive queue of transactions. The number of transactions in the queue is avgqu-sz. Both of these values also depend on the storage system type and model, which can support quite a large queue. So, it may be a bad symptom or not. Therefore, first of all, it's better to look at await, r/s, w/s, rkB/s and wkB/s to see whether the volume has a real workload and performance issues or not. There is also a special command, zpool iostat, to monitor zpool statistics.
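For example (tank is a placeholder pool name; -v breaks the numbers down per vdev, refreshed every second):

zpool iostat -v tank 1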
First off, I asked this question 5 days ago over on Server Fault; I hope I'm not doing anything wrong by bringing it over here to the Unix & Linux Stack. I have also asked this question on 3 other sites not related to Stack, with no answers. I plan on updating each site with an answer, if I can just get it answered. Here we go. I am having a hard time understanding the output of iostat -x with specific regard to ZFS zvols. I'm running Proxmox 4.4, fully updated, and encountering some generally poor I/O performance. While troubleshooting the sluggish performance, I was looking at iostat -x 1 and saw this sort of utilization reading near constantly:

Device: rrqm/s wrqm/s   r/s    w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda       0.00   0.00 77.00 115.00 308.00 640.00     9.88     2.02 10.33    9.92   10.61  3.58 68.80
sdb       0.00   0.00 81.00 116.00 324.00 644.00     9.83     1.32  6.72    6.42    6.93  2.50 49.20
...
sde       0.00   0.00 77.00 117.00 308.00 640.00     9.77     1.16  6.25    5.25    6.91  2.35 45.60
sdf       0.00   0.00 78.00 116.00 312.00 640.00     9.81     1.25  6.45    5.64    7.00  2.47 48.00
...
zd32      0.00   0.00  0.00 197.00   0.00 788.00     8.00     1.09  5.54    0.00    5.54  5.06 99.60

Where I am confused is that the utilization percentage for zd32, the zvol of my VM, is at 100%, while the underlying storage is at roughly 50% utilization. My question is: shouldn't the zvol utilization reflect the utilization of the underlying storage devices? For reference, there are other VMs on this system, but this troubleshooting was done after hours, so they were idle. This one VM was the only busy VM, running Windows updates. The zpool is a RAID-Z2 of 7200 RPM SATA disks, so not exactly built for incredible speed. I'm just wondering about the utilization right now.
Reading iostat utilization with ZFS zvols
For

apt-get install linux-headers-$(uname -r)

to work, you need to be running a kernel which is still available from the distribution repositories; in most cases, this basically means you need to be running the latest supported kernel for your distribution. On Debian, the simplest option is

apt-get update
apt-get install linux-image-amd64 linux-headers-amd64

(adjust to your architecture) to get the current kernel and matching headers, then reboot.
After running the below command I got an error:

# apt-get install linux-headers-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-4.9.0-3-amd64
E: Couldn't find any package by glob 'linux-headers-4.9.0-3-amd64'
E: Couldn't find any package by regex 'linux-headers-4.9.0-3-amd64'

To troubleshoot I checked the following:

# apt-cache search linux-headers
aufs-dkms - DKMS files to build and install aufs
linux-libc-dev-arm64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-armel-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-armhf-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips64el-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mipsel-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-ppc64el-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-s390x-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-alpha-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-hppa-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-m68k-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-powerpc-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-powerpcspe-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-ppc64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-sh4-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-sparc64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-headers-4.9.0-11-all - All header files for Linux 4.9 (meta-package)
linux-headers-4.9.0-11-all-amd64 - All header files for Linux 4.9 (meta-package)
linux-headers-4.9.0-11-amd64 - Header files for Linux 4.9.0-11-amd64
linux-headers-4.9.0-11-common - Common header files for Linux 4.9.0-11
linux-headers-4.9.0-11-common-rt - Common header files for Linux 4.9.0-11-rt
linux-headers-4.9.0-11-rt-amd64 - Header files for Linux 4.9.0-11-rt-amd64
linux-headers-amd64 - Header files for Linux amd64 configuration (meta-package)
linux-headers-rt-amd64 - Header files for Linux rt-amd64 configuration (meta-package)

and

# apt-cache search linux-image
linux-headers-4.9.0-11-amd64 - Header files for Linux 4.9.0-11-amd64
linux-headers-4.9.0-11-rt-amd64 - Header files for Linux 4.9.0-11-rt-amd64
linux-image-4.9.0-11-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-11-amd64-dbg - Debug symbols for linux-image-4.9.0-11-amd64
linux-image-4.9.0-11-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-11-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-11-rt-amd64
linux-image-amd64 - Linux for 64-bit PCs (meta-package)
linux-image-amd64-dbg - Debugging symbols for Linux amd64 configuration (meta-package)
linux-image-rt-amd64 - Linux for 64-bit PCs (meta-package), PREEMPT_RT
linux-image-rt-amd64-dbg - Debugging symbols for Linux rt-amd64 configuration (meta-package)
linux-image-4.9.0-3-amd64 - Linux 4.9 for 64-bit PCs

After running apt-cache search linux-image I do see linux-image-4.9.0-3-amd64, the kernel version I want, but it does not appear in the results of apt-cache search linux-headers. A few people suggested changing sources.list and then trying again, but as I am new to this I have no idea how to find the proper entries for sources.list or what would be best suited to resolve my problem. I searched on Google but did not find a solution. Any link or solution would be of great help.
Failed to update Linux headers on Debian stretch / Debian 9
No, they aren't exact copies. If you care to investigate, you'll find that the files at the top level of /usr/include will normally have a lot of #ifdefs or other conditionals, and they'll only define the architecture-independent parts and will #include other stuff from architecture-specific directories deeper within the hierarchy. As some architecture-specific parts may in turn depend on some architecture-independent parts, there may be multiple layers of inclusion on top of each other. Likewise, the files under /usr/include/c++ will have the additional declarations that only make sense for C++, and #includes for the corresponding C include files where appropriate. The name of the game is deduplication for maintainability: the aim is that when a glibc developer needs to add something new that only affects the ABI between the application and glibc and has no architecture-specific parts, the addition ideally needs to happen at just one location in the tree of include files, and it will take effect for all hardware architectures that use glibc. Or, say, when a new system call is added to the Linux kernel, there will be a place to add it without interfering with *BSD or GNU Hurd system call definitions, for example. Or, if you're porting glibc to yet another hardware/kernel architecture, you'll find places you can plug in the necessary kernel ABI definitions without disturbing architecture-independent stuff any more than absolutely necessary. Yes, it's pretty complicated. I don't have any easy references for you, since the entire /usr/include layout is the sum of ISO C and POSIX standards requirements and the choices made by both the GCC and glibc projects. I'd suggest you make a note of your architecture triplet (x86_64-linux-gnu in your case; obtainable with gcc -dumpmachine on any architecture supported by GCC) and then study the compiler's default #include <...> file search paths. You can see the search paths with:

cpp -v /dev/null -o /dev/null            for plain C
cpp -x c++ -v /dev/null -o /dev/null     for C++

I don't have a system with GCC 7 at hand here, but for GCC 6, the list of include paths looks like this for C:

...
#include <...> search starts here:
 /usr/lib/gcc/x86_64-linux-gnu/6/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/6/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
End of search list.
...

... and like this for C++:

...
#include <...> search starts here:
 /usr/include/c++/6
 /usr/include/x86_64-linux-gnu/c++/6
 /usr/include/c++/6/backward
 /usr/lib/gcc/x86_64-linux-gnu/6/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/6/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
End of search list.
...

If the /usr/local/include/<architecture triplet> directory existed, it would get added to the lists, just before /usr/local/include. So, it would seem that for your own projects, if you need to have architecture-specific versions of an include file, you could put them under /usr/[local/]include/<architecture triplet>/, and the regular architecture-agnostic include files in /usr/[local/]include/. I wouldn't touch any include directories whose path name includes the major version number of the compiler without a very good reason. If you plan to modify glibc, and you cannot find what you need in the glibc developer documentation, you'd better ask for advice on the glibc development mailing list; glibc is extra complicated because it is usable also on architectures that use a compiler other than GCC, and so might not have the architecture triplet convention as standard.
I've been browsing my /usr/include folder trying to get acquainted with the layout, and I've noticed that there are multiple copies of header files (at least by name; I didn't actually diff them to see if they were exact copies) found in several subdirectories of /usr/include. This is especially the case for standard C and C++ header files as well as POSIX/LSB standard header files. Some examples include (note ./ refers to /usr/include):

./asm-generic/unistd.h
./linux/unistd.h
./unistd.h
./x86_64-linux-gnu/sys/unistd.h
./x86_64-linux-gnu/bits/unistd.h
./x86_64-linux-gnu/asm/unistd.h

./stdlib.h
./x86_64-linux-gnu/bits/stdlib.h
./c++/7/stdlib.h
./c++/7/tr1/stdlib.h

./c++/7/cmath
./c++/7/ext/cmath
./c++/7/tr1/cmath

./asm-generic/termios.h
./linux/termios.h
./x86_64-linux-gnu/sys/termios.h
./x86_64-linux-gnu/bits/termios.h
./x86_64-linux-gnu/asm/termios.h
./termios.h

./linux/time.h
./time.h
./x86_64-linux-gnu/sys/time.h
./x86_64-linux-gnu/bits/time.h

Why is this? And why do some C standard headers show up in C++ locations? I only have one compiler installed (GCC 7).
Why are there multiple copies of header files in /usr/include?
You are not finding the headers for your kernel version in the official distribution repository because you are dealing with a Kali setup using a custom-made kernel. While we do not have all the data, your uname -r leads me to suspect it was made using these scripts/tools: https://github.com/Re4son/re4son-kernel-builder; it also leads me to speculate, after a bit of detective work, that you may have a Raspberry Pi/ARM v6 device. In this case, the easier option is either to reinstall a newer version or, even better, to choose a more user-friendly Linux distribution.
I am trying to install kernel headers for version 4.14.71-v6 (uname -r) on Kali Linux. I already ran the common commands...

apt update
apt upgrade
apt dist-upgrade
apt install linux-headers-generic
apt install linux-headers-$(uname -r)

...with and without the -y option, and also rebooted. I searched the repos with apt search 4.14 and took a look at http://http.kali.org/kali/pool/main/l/linux/, with no success at all. I've seen that on http://http.kali.org/kali/pool/main/l/linux/ there are kernel headers for 4.18 and 4.19, but the upgrade is distributing versions only up to my 4.14.x. Does anybody have an idea what to do?
Kali Linux kernel headers for 4.14.71-v6
Welcome to Unix & Linux Stack Exchange! Yes, the kernel headers provide an interface for other parts of the kernel; in this you're entirely correct. They also include definitions for the interface between the kernel and userspace, but usually the "raw" kernel interface is not used directly; it is used through the C library (often glibc). The userspace-kernel interface may include multiple versions of a particular system call, for backwards-compatibility reasons. By making the system call through the C library, all applications get the same version of the actual system call, and thus a guarantee of consistent behavior. Also, when the relevant part of the kernel interface gets updated, you'll only need to update the C library to take advantage of new features. (For example, when time_t gets extended to 64 bits on 32-bit architectures to avoid the Y2K38 problem, the C library may move to always use the 64-bit version of the kernel interface, but have a userspace-configurable mapping for applications that still use the 32-bit version. At that point, the functions using 32-bit time_t can be obsoleted and removed from the kernel, and the C library can provide better workarounds for legacy applications that still use the 32-bit type.) So, unless you're compiling your own version of the C library too, you should generally prefer the development headers of the C library instead of using the kernel headers directly. The headers in /usr/include/linux/ generally come with the development header package of the C library, and describe the version of the kernel the C library was compiled for. This is what a userspace application developer normally needs. /lib/modules/$(uname -r)/build is often a symbolic link to /usr/src/..., where the headers or full source code of the actual running kernel version are stored. It is used when compiling external (aka third-party) kernel modules, i.e. kernel modules from sources that are not integrated into the main kernel source code. These headers include the necessary kernel API version signatures so that the modules can use version-specific kernel-internal APIs.
This might be an incoherent question about kernel headers, as I don't have a clear understanding of what they are and where and how they are used; I think it might get flagged down. My question has 3 parts:

1. I think kernel headers provide an interface so that other parts of the kernel, like modules, can use them. That's my bookish knowledge. I haven't seen or found any code that uses kernel headers (I would appreciate it if someone could point me to some). Can they be used in userspace too? Any code example would be appreciated. I found out that kernel headers are exposed to userspace using make headers_install, but at the same time it is discouraged to use kernel headers in userspace. If it is discouraged, then what is the use of exposing them to userspace?

2. As per this and this, kernel header files (.h files) should be in 3 places:
a. /usr/include/linux/kernel.h, which is intended for userspace
b. /lib/modules/$(uname -r)/build/include/linux/sched.h, which is for external modules
c. /usr/src/..., which is used for kernel modules

3. Does that mean header files in different directories have different purposes or different interfaces or signatures? In other words, does #include <linux/xyz.h> in userspace code have a different meaning than #include <linux/xyz.h> in a kernel module? Also, is an external module the same as a kernel module? Thanks.
What are kernel headers that can be used in userspace? Do their signatures or interfaces differ from headers in different directories?
The rc marker at the start of each line indicates that the packages are removed, but configured — i.e. all their contents have been removed, apart from configuration files. Packages in this state don’t appear in Synaptic by default. You can remove them with sudo dpkg --purge or sudo apt purge, listing the packages you want to remove. sudo apt autoremove --purge should purge them automatically without having to list them.
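A sketch of the list-then-purge approach, driven by the same dpkg --list output (check the awk output first before piping it anywhere):

# print the names of all packages in the "rc" (removed-but-configured) state
dpkg --list | awk '/^rc/ {print $2}'
# ...then purge them in one go
dpkg --list | awk '/^rc/ {print $2}' | xargs -r sudo dpkg --purge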
I'm currently running Pop!_OS 20.04 (LTS). When I open the terminal and run the command

dpkg --list | grep linux-image

it returns a list of apparently installed images, including my current (6.0.12) and most recent (5.17.5), about nine images from version set 5.0 and six from version set 5.3, plus one from 4.18 and one from 5.4. Previously I have used Synaptic to remove older kernels from the system, but for some reason when I search for linux-image (and sort installed images to the top), only my current and most recent show as installed (the box is green). These older 5.0/5.3 images don't show in the search results. The Stacer uninstaller also cannot find them. Can anyone tell me why these images show in the console but not in Synaptic? What are they doing on the system, and are they even really on the system? If they are indeed taking up space on my home partition, how can I safely remove them? One consideration that comes to mind is that this system started out at Pop!_OS version 18.10 and has been upgraded with each release up to 20.04 (LTS), although I'm not sure if that has anything to do with it. Another thought is that maybe this has something to do with the recovery partition, but I have no idea how or why it would need so many versions of the kernel, or why they would accumulate in such an untidy manner.
How to remove old kernel images
Got it:

auth_request_set $myheader $upstream_http_x_customheader;
add_header custom $myheader;
From my nginx server I want to get an auth response with custom headers from an external Apache server. The problem is, I can't get the custom header's value.

location /app {
    auth_request /auth;
    add_header custom $http_x-customheader;
}

location = /auth {
    proxy_pass http://ip.externalserver/auth.php;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-METHO $request_method;
    proxy_pass_request_headers on;
}

When directly requesting from the Apache server:

$ curl -i "http://xxxx.xxx.xxxx.xxxx/auth.php"
HTTP/1.1 200 OK
Date: Mon, 30 Sep 2019 17:08:05 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.29
x_customheader: 2
x-headername: headervalue
Vary: Accept-Encoding
Access-Control-Allow-Origin: *
Content-Length: 193
Content-Type: text/html

but through nginx:

HTTP/1.1 200 OK
Server: nginx
Date: Mon, 30 Sep 2019 16:47:28 GMT
Content-Type: application/vnd.apple.mpegurl
Content-Length: 559
Last-Modified: Mon, 30 Sep 2019 16:47:26 GMT
Connection: keep-alive
ETag: "5d92319e-22f"
Expires: Wed, 08 Jan 2020 16:47:28 GMT
Cache-Control: max-age=8640000
custom: -customheader
Access-Control-Allow-Headers: *
Access-Control-Allow-Methods: GET, HEAD, OPTIONS
Access-Control-Allow-Origin: *
Accept-Ranges: bytes

I have tried this without success:

add_header custom $http_x-customheader;
add_header custom $http_x_customheader;
add_header custom $upstream_http_x_customheader;
add_header custom $upstream_http_x-customheader;
NGINX can't read custom headers from response
I have an old Slackware server running a 4.6 kernel instead of its original 3.10 and have not had to mess with the headers. I've built at least a dozen kernels for half a dozen Slackware versions over the years and never did anything about headers or glibc on any of those. Of course, without the newer headers, you may not be able to build software which uses features from that new kernel. But I doubt you'd be running Slackware if you wanted bleeding edge software so I don't think you'll run into that issue.
The last time I needed to deal with kernel headers was back in the Pleistocene (2.6 or so) and I remember back then that you needed to match your kernel headers not to the kernel you were running but to the kernel version glibc was compiled against. But this was a long time ago and before the kernel exported its own headers. I have a computer that will be running a 4.15 series kernel, with a C library compiled against 4.4 series headers. Should I export the headers from the kernel I'm running, or use the headers package that my distro (Slackware) provides? (Or, and please say the answer is no, do I need to also rebuild glibc against the new kernel?)
Should my Linux headers match my running kernel or what glibc was compiled against?
Please install headers matching your exact kernel release:

sudo yum install kernel-headers-$(uname -r) kernel-devel-$(uname -r)
I am trying to install the VirtualBox Guest Additions on a CentOS 7 VM. I installed the prerequisites via

sudo yum install perl gcc dkms kernel-devel kernel-headers make bzip2

then I "inserted" the Guest Additions CD image and the Guest Additions auto-runner came up and ran. However, the Guest Additions installation errored out with

VirtualBox Guest Additions: Kernel headers not found for target kernel 3.10.0-1062.el7.x86_64.

For closer examination I issued the following commands in the Terminal shell of the VM:

$ ls /usr/src/kernels/
3.10.0-1062.18.1.el7.x86_64

and

$ uname -r
3.10.0-1062.el7.x86_64

Notice the additional characters 18.1 in the installed headers compared to what the kernel reports. I guess that is the reason why the Guest Additions installation fails. How can I fix this and install the Guest Additions?

A few more details:
OS version: CentOS 7.7.1908
Guest Additions version: 6.1.6
EPEL repo URL: https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Want to install VirtualBox Guest Additions on CentOS 7 but get a header mismatch
You’ll find the header files used by your build in /lib/modules/$(uname -r)/build/; see for example

find /lib/modules/$(uname -r)/build/ -name timeconst.h

All these files are generated during the build, in various ways; timeconst.h is built by kernel/time/timeconst.bc. /lib/modules/$(uname -r)/build/ stores the generated headers (and a few other files) corresponding to the running kernel; the intention is to make them available for external module builds in particular. If you’re building a new kernel, you’ll find the generated files in your build tree (after a kernel build, or an in-tree module build).
So, I am writing a module that works in kernel space. My code compiles correctly and works correctly. The thing is, there are some header files which I couldn't find anywhere. That doesn't make sense to me: how can everything go right when the header files are not present? They must be present somewhere. These are some of the header files I couldn't find anywhere (there are more, but for my question, these may be enough):

#include <asm/errno.h>
#include <asm/socket.h>    /* /include/linux/socket.h */
#include <stdarg.h>        /* /include/linux/kernel.h */
#include <asm/types.h>
#include <asm/mmiowb.h>    /* /include/linux/spinlock.h */
#include <asm/param.h>     /* /include/linux/jiffies.h */

Though some of these header files can be found on architectures other than x86, I don't think that's any solution to the question. And I don't have any idea where to look for these files:

#include <generated/timeconst.h>    /* /include/linux/jiffies.h */
#include <generated/bounds.h>
#include <generated/autoconf.h>     /* /include/linux/kconfig.h */
#include <generated/asm-offsets.h>

I am looking for these files in the following directories from Linux kernel 5.4.31:

/include
/include/uapi
/arch/x86/include
/arch/x86/include/uapi

I expect the files to be found in the above include paths, but I don't know much about where and how Linux header files are processed after compilation, since I'm searching for them in the source code.
Can't find the source of some "asm", "generated" header files in the Linux kernel?
I finally figured this out. I went to GitHub and got the Linux sources associated with the version of Ubuntu I am running. I was able to run:

make \
    ARCH=<arch-name> O=. -C <path-to-linux-sources> \
    headers_install INSTALL_HDR_PATH=<output-directory>

This worked like a charm and did not require running with elevated privileges.
I have Ubuntu 16.04 LTS installed and have installed linux-headers. I am in the middle of trying to build uClibc-ng and it needs the Linux headers. When I run the following command from the linux-headers directory I get the following error messages. What step am I missing?

sudo make INSALL_HDR_PATH=/tmp/linux-headers headers_install
  CHK     include/generated/uapi/linux/version.h
  UPD     include/generated/uapi/linux/version.h
make[1]: *** No rule to make target 'arch/x86/entry/syscalls/syscall_32.tbl' needed by 'arch/x86/entry/syscalls/../include/generated/asm/syscalls_32.h'. Stop.
arc/x86/Makefile:216: recipe for target 'archheaders' failed.
make: *** [archheaders] Error 2

I created a new VM to play with, and its uname -a is:

Linux ubuntu 4.15.0-46-generic #49~16.04.1-Ubuntu SMP Tue Feb 12 17:45:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Some questions:

What is the difference between linux-headers-4.15.0-46 and linux-headers-4.15.0-46-generic?
On my VM I have 2 sets of linux-headers directories, one with 4.14.0-29 and one with 4.14.0-46. Within each of those sets are 2 directories, one with and one without -generic. Do I need to maintain both of these sets?
Does anyone know of any instructions for what steps need to be performed on a fresh 16.04 Ubuntu image in order to be able to get the Linux headers?
make headers_install not working as expected
You have updated the system, but have not yet rebooted, so the system is still running on the old (pre-update) kernel. The package management system refuses to install an older linux-headers package because after just one boot you will be running a newer kernel and any modules built for the old one will be useless. You probably already have installed the linux-image-6.3.0-kali1-amd64_6.3.7-1kali1_amd64.deb package that contains the kernel that matches your linux-headers package version, but it's not running yet. Reboot, then verify that uname -r now outputs 6.3.0-kali1-amd64, then try installing the VMware kernel module again.
When I run VMware in Kali Linux, the kernel module updater prompt keeps showing up. I have kernel headers for version 6.3.0-kali1-amd64. What path do I use? When I choose a folder, an error shows up. I looked up answers online and ran this command: sudo apt-get install linux-headers-$(uname -r), but it returns

E: Unable to locate package linux-headers-6.1.0-kali9-amd64

so that command won't work. I have updated and upgraded my Kali Linux operating system, too. I also searched for Linux header packages with the command aptitude search linux-headers, and the only version was 6.3.0-kali1-amd64.
What folder do I choose in the VMware kernel module updater?
I suspect there’s some misunderstanding of what TRIM_UNUSED_KSYMS does. Its Kconfig description is as follows:

The kernel and some modules make many symbols available for other modules to use via EXPORT_SYMBOL() and variants. Depending on the set of modules being selected in your kernel configuration, many of those exported symbols might never be used. This option allows for unused exported symbols to be dropped from the build. In turn, this provides the compiler more opportunities (especially when using LTO) for optimizing the code and reducing binary size. This might have some security advantages as well. If unsure, or if you need to build out-of-tree modules, say N.

Note in particular the last sentence: in your scenario, you should disable TRIM_UNUSED_KSYMS, unless you have a really good reason to enable it (and deal with the fallout). The overall set of available symbols in the kernel depends on the build configuration, before any symbol trimming occurs: a kernel whose configuration excludes a given feature will never have the corresponding symbols. In your case, your .config doesn’t include VIDEO_V4L2, which means kernels you build won’t ever have video_ioctl2; nor does it include VIDEOBUF_V4L2, which you need for vb2_init_queue, or VIDEOBUF2_DMA_SG, which you need for vb2_dma_sg_memops. Adding those symbols to TRIM_UNUSED_KSYMS’s whitelist won’t help: a symbol that isn’t present to start with can’t be added by not removing it. To support building out-of-tree modules, you need to start by determining the symbols those modules will need, and enabling the necessary features in the kernels you build so that those symbols are available. If you then decide you still want to enable symbol trimming, you can do so, adding the necessary symbols to the whitelist; but the latter is useless unless you get the first part right. Generic distribution kernels enable most (if not all) subsystems, which allows out-of-tree modules to be built on top of them without foreknowledge of the features they need.
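One way to check the first part, i.e. whether a symbol exists at all in your build: after compiling, every exported symbol is listed in Module.symvers at the top of the build tree (and, for the running kernel, in /proc/kallsyms). For example:

# in the kernel build directory: are the symbols the driver needs exported?
grep -w -e video_ioctl2 -e vb2_queue_init Module.symvers
# the same check against the currently running kernel:
sudo grep -w video_ioctl2 /proc/kallsyms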
I want to build an out-of-tree driver using a kernel built from the latest mainline source code. I use localmodconfig when building, which reduces the number of modules and kernel symbols exported to match the devices available on the system, per my understanding. The out-of-tree driver requires symbols that aren't used by other components of the kernel, so I think I need to specify the symbols to be exported manually, according to this conversation about TRIM_UNUSED_KSYMS and the whitelist. My .config looks like this:

...
CONFIG_TRIM_UNUSED_KSYMS=y
CONFIG_UNUSED_KSYMS_WHITELIST="ksyms-whitelist"
...

And I created the ksyms-whitelist file in the root of the kernel source directory:

vb2_queue_init
vb2_dma_sg_memops
video_ioctl2
...

The driver builds when using a generic kernel with no problem (and the device works), but what do we need to do to build an out-of-tree driver using symbols from a kernel built from source? Edit: my whole .config
What is the correct usage of CONFIG_TRIM_UNUSED_KSYMS and the whitelist file?
In Linux, this information is available in the cpu line of /proc/stat, and is usually parsed from there; I don’t think there’s a user-space-accessible function which will provide the same information. The values in that line give the time spent in user mode, the time spent in user mode with low priority, the time spent in system mode, and the time spent in the idle task, among other times; see the link above for details. Functions are available to retrieve time information for the current process and/or its children; for example, POSIX’s getrusage and times.
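A rough shell analogue of the Windows GetSystemTimes() loop in the question, sampling the aggregate cpu line of /proc/stat twice and computing the busy percentage from the deltas (idle and iowait both counted as idle time):

#!/bin/bash
read -r _ user nice system idle iowait irq softirq steal _ < /proc/stat
t0=$((user + nice + system + idle + iowait + irq + softirq + steal))
i0=$((idle + iowait))
sleep 1
read -r _ user nice system idle iowait irq softirq steal _ < /proc/stat
t1=$((user + nice + system + idle + iowait + irq + softirq + steal))
i1=$((idle + iowait))
echo "cpu load: $(( 100 * ((t1 - t0) - (i1 - i0)) / (t1 - t0) ))%"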
Basically, I am working with Nvidia's new GitHub repos and trying to compile them in a cross-platform setup. Specifically, I am trying to compile them on Fedora 33. I have run into an issue:

In file included from /home/chris/.../sample_example.cpp:55:
/home/chris/.../nvml_monitor.hpp:19:10: fatal error: cfgmgr32.h: No such file or directory
   19 | #include <cfgmgr32.h>
      |          ^~~~~~~~~~~

cfgmgr32.h appears to be a Windows-specific library. Within the file nvml_monitor.hpp, it looks like I can get away with disabling most of the Windows content. But I think the following Windows-specific function is going to be useful:

float getCpuLoad()
{
  static uint64_t _previousTotalTicks = 0;
  static uint64_t _previousIdleTicks  = 0;

  FILETIME idleTime, kernelTime, userTime;
  GetSystemTimes(&idleTime, &kernelTime, &userTime);

  auto FileTimeToInt64 = [](const FILETIME& ft) {
    return (((uint64_t)(ft.dwHighDateTime)) << 32) | ((uint64_t)ft.dwLowDateTime);
  };

  auto totalTicks = FileTimeToInt64(kernelTime) + FileTimeToInt64(userTime);
  auto idleTicks  = FileTimeToInt64(idleTime);

  uint64_t totalTicksSinceLastTime = totalTicks - _previousTotalTicks;
  uint64_t idleTicksSinceLastTime  = idleTicks - _previousIdleTicks;

  float result = 1.0f - ((totalTicksSinceLastTime > 0) ? ((float)idleTicksSinceLastTime) / totalTicksSinceLastTime : 0);

  _previousTotalTicks = totalTicks;
  _previousIdleTicks  = idleTicks;
  return result * 100.f;
}

What I am looking for is not for anyone to "do the work" or anything; just a place to start or a thread to pull on. When searching for Linux solutions in this problem space, the results are obfuscated by shell-level responses. But instead, what I am looking for are the C headers which provide interfaces and handles for this information. Are there any specific Linux headers similar in purpose to the cfgmgr32.h header in Windows? If not, are there searchable Linux kernel resources for discovering these headers or this functionality?
C-std headers containing helper functions for system, user and kernel times?
I'm trying to understand how Linux works and how to build modules.

Building kernel modules doesn’t involve the “standard” C compiler directories; instead, see /lib/modules/$(uname -r)/build. C programs don’t know where to look for headers and libraries; the C preprocessor and compiler do. You can see the standard include directories using

gcc -xc -E -v - < /dev/null

(replacing -xc with -xc++ for C++), and the library search directories using

gcc -print-search-dirs

Conflicts aren’t avoided by the compiler; it’s up to whatever or whoever is running the compiler to make sure that the search paths don’t contain conflicting headers. Libraries aren’t linked in automatically; you have to add libraries to the linker command line (-lprocps in your example), and any other library is ignored. In both cases, when conflicts arise, compilation and/or linking stops with an error.
I'm trying to understand how Linux works and how to build modules. So far I have seen that Linux headers are stored in /usr/include and that the compiled implementations of these interfaces are located in /usr/lib/x86_64-linux-gnu. I have a few questions:

How does Linux or any C program know where to look for the headers and the .so files? Is there a file where this is defined? Is it possible to modify these references in case I wanted to add other default source or header folders (just to horse around)?

How are conflicts avoided? For example, in libprocps-dev there is a header file /usr/include/proc/numa.h with the definition void numa_uninit (void);. The implementation of this file is in /usr/lib/x86_64-linux-gnu/libprocps.so. What if someone also compiled another .so file with the same function definition but with another implementation (or code) and added it to /usr/lib/x86_64-linux-gnu? How would the linker know which is the proper .so file that has to be linked to that header definition?

Thanks.
How are conflicts avoided in Linux Headers?
The kernel image and headers packages come from the same source package, so they are available simultaneously on the mirror network (barring failures on a specific mirror). If you follow the amd64 link on the linux-headers-6.1.0-21-amd64 package page, you’ll find a package download link which works; that’s the package which apt will download. Examining the package pool shows that all the amd64 packages for 6.1.90-1 were uploaded at the same time, 2024-05-03 21:54. The package file list is unfortunately not particularly reliable for packages which aren’t in the main archive — the latest Debian 12 kernel package was published in the security archive. Given the many different scenarios around kernel image and headers package, it isn’t possible to introduce dependencies between them such that one could guarantee that an image package is only installed if its matching headers package is also installed. In any case that still wouldn’t ensure smooth updates for NVIDIA users — what matters there is whether the NVIDIA module is successfully built, and that can fail with matching kernel packages.
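Not automatic, but one pragmatic workaround (a sketch; the package names assume the amd64 metapackages) is to hold the kernel packages so that everything else keeps updating:

# defer kernel updates while letting other security updates through
sudo apt-mark hold linux-image-amd64
# ...later, once the matching linux-headers package is visible:
sudo apt-mark unhold linux-image-amd64
sudo apt upgrade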
I'm using Debian 12 Bookworm, and currently, when I run uname -a, it shows:

Linux pctxd 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux

The package linux-image-6.1.0-21-amd64 and related packages are ready to install. However, the corresponding linux-headers-6.1.0-21-amd64 package is not available. Without these headers, the Nvidia drivers can't be compiled, rendering the graphical user interface non-functional, something I learned the hard way during the last upgrade to 6.1.85-1. Running aptitude show yields:

Package: linux-image-6.1.0-21-amd64
Version: 6.1.90-1
New: yes
State: not installed
Priority: optional
Section: kernel
Maintainer: Debian Kernel Team <debian-kernel@lists.debian.org>
Architecture: amd64
Uncompressed Size: 408 M
Depends: kmod, linux-base (>= 4.3~), initramfs-tools (>= 0.120+deb8u2) | linux-initramfs-tool
Recommends: firmware-linux-free, apparmor
Suggests: linux-doc-6.1, debian-kernel-handbook, grub-pc | grub-efi-amd64 | extlinux
Conflicts: linux-image-6.1.0-21-amd64-unsigned
Breaks: fwupdate (< 12-7), initramfs-tools (< 0.120+deb8u2), wireless-regdb (< 2019.06.03-1~)
Replaces: linux-image-6.1.0-21-amd64-unsigned
Provides: $kernel (= 6.1.90-1)

Just now, the web page "Package: linux-headers-6.1.0-21-amd64" seems to describe the missing package, but clicking the "list of files" button results in an error page with the information "No such package in this suite on this architecture." Currently, there is another security update (regarding libglib2.0) waiting, so the time lag between the kernel security update and the Linux header files necessary for my graphical UI is an increasing risk. For future updates: is there a way to automatically defer the kernel update until the linux-headers package is available, but process the security updates of other packages?
How to defer kernel updates until the corresponding "linux-headers" package is available?
Ultimately, you’ll need to load your modules into the host kernel, so you need to build them with the host’s configuration exactly. Even finding a matching kernel in your Ubuntu container wouldn’t help, you need the headers matching the host’s kernel and configuration. I don’t know whether micro OS even supports this, its web site is currently down for maintenance.
I have an openSUSE MicroOS host with kernel 6.5.9-1-default and an Ubuntu 22.04 distrobox container. I need to install the linux-headers-$(uname -r) and linux-modules-extra-$(uname -r) packages, and the problem is that Ubuntu 22.04, at the time I'm writing, doesn't have these packages; their most recent kernel is 6.2. So, how can I install these Linux headers and extra modules in my Ubuntu 22.04 container?
Linux headers and extra modules on an Ubuntu container from an openSUSE MicroOS host with mismatched kernel versions
Taking your questions in order:

Normally make modules_install, but of course distros just package all these modules.
This looks like a Debian/Ubuntu thing. depmod just traverses all the subdirectories under /lib/modules/$version.
For Fedora/RHEL/CentOS or the Linux kernel installed from source, the answer is yes.
Normally they are always symlinks.
Almost never. There are kernel development headers which are required to build modules, and most distros don't even offer the option of installing the kernel source; it doesn't make a lot of sense for the end user.

Instead of reinventing the wheel, I'd recommend that you take a look at the kernel modules build systems offered by VirtualBox, NVIDIA or VMWare. They are well tested and support dozens of distros.
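As a rough illustration of those conventions (a sketch, not an authoritative list; the probed paths are simply the common Debian/Ubuntu and Fedora/RHEL locations):

KVER=$(uname -r)
for d in "/lib/modules/$KVER/build" "/lib/modules/$KVER/source" \
         "/usr/src/linux-headers-$KVER" "/usr/src/kernels/$KVER"; do
    # a usable header tree normally contains the kernel's top-level Makefile
    [ -e "$d/Makefile" ] && echo "header candidate: $d"
done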
I'm creating a program that interacts with kernel headers. The user can provide a path to the location of the headers, but first I would like to be able to discover existing kernel headers on the user's machine based on convention. This apparently varies between distributions and tools. I know technically linux is fully customizable but I'm trying to understand what conventions apply to the mainstream distros:

who creates /lib/modules/$version and when?
are there guidelines for the structure of /lib/modules besides the /kernel and /extra subdirectories?
are /build and /source expected to always exist under /lib/modules? (both of them?)
is it acceptable that /build and /source are sometimes symlinks and sometimes not?
do the headers and source come together? I've noticed that most distros offer a kernel headers or kernel development package. how is this related?
what are the conventions around /lib/modules?
The installation on your RHEL 6 server was probably done using the RHEL package of UnixODBC. This is easy to replicate on your RHEL 7 server:

yum install unixODBC-devel

will install the headers, development files and all their dependencies. We won’t be able to tell you why the two installations were performed differently; only your helpdesk can do that.
My current use case is installing PyODBC via poetry in a Jenkins job build, which is failing because sql.h can't be found. For background, I have two servers, one RHEL 6, one RHEL 7; the RHEL 6 server has unixODBC installed and working with (among other things) odbc.ini in /etc/, sql.h and other headers in /usr/include/. On the non-working RHEL 7 server helpdesk personnel just installed the same version of unixODBC-dev, and all the files seem to instead be in /usr/local/unixODBC/. I believe I correctly understand that /usr/local is for manually installed packages, which I suppose this is, but I'm not clear why these two installs would be done differently, and ultimately if there's a way to make things work on the RHEL server (or, failing that, a phrase I can give back to helpdesk to get this installed the right way). Edit: Following @Stephen Kitt's suggestion, I went back to HD and they supposedly installed it with yum. Now the error is different (and quite a bit more lengthy, some seemingly repetitive lines have been clipped to fit the post length): [EnvCommandError] Command ['/var/lib/jenkins/.cache/pypoetry/virtualenvs/ds-ops-tools-py3.6/bi n/python', '-m', 'pip', 'install', '--no-deps', 'pyodbc==4.0.26'] errored wi th the following output: Collecting pyodbc==4.0.26 Using cached https://files.pythonhosted.org/packages/b4/41/f3eb5e56af207a8 fcc02f1f84cc3fed9fcf315565e65f418ae815e399929/pyodbc-4.0.26.tar.gz Installing collected packages: pyodbc Running setup.py install for pyodbc: started Running setup.py install for pyodbc: finished with status 'error' Complete output from command /var/lib/jenkins/.cache/pypoetry/virtualenv s/ds-ops-tools-py3.6/bin/python -u -c "import setuptools, tokenize;__file__= '/tmp/pip-build-i2_4l6cq/pyodbc/setup.py';f=getattr(tokenize, 'open', open)( __file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, _ _file__, 'exec'))" install --record /tmp/pip-oqv50di8-record/install-record. 
txt --single-version-externally-managed --compile --install-headers /var/lib /jenkins/.cache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include/site/python3 .6/pyodbc: running install running build running build_ext building 'pyodbc' extension creating build creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/src gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/buffer.cpp -o build/temp.linux-x86_64-3.6/src/buffer.o -Wno- write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LO NG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/cnxninfo.cpp -o build/temp.linux-x86_64-3.6/src/cnxninfo.o - Wno-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LON G_LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/connection.cpp -o build/temp.linux-x86_64-3.6/src/connection .o -Wno-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE _LONG_LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] src/connection.cpp: In function ‘PyObject* Connection_getinfo(PyObject*, PyObject*)’: src/connection.cpp:835:40: warning: dereferencing type-punned pointer wi ll break strict-aliasing rules [-Wstrict-aliasing] SQLUINTEGER n = *(SQLUINTEGER*)szBuffer; // Does this work on P PC or do we need a union? 
^ src/connection.cpp:848:49: warning: dereferencing type-punned pointer wi ll break strict-aliasing rules [-Wstrict-aliasing] result = PyInt_FromLong(*(SQLUSMALLINT*)szBuffer); ^ gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/cursor.cpp -o build/temp.linux-x86_64-3.6/src/cursor.o -Wno- write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LO NG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/errors.cpp -o build/temp.linux-x86_64-3.6/src/errors.o -Wno- write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LO NG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/getdata.cpp -o build/temp.linux-x86_64-3.6/src/getdata.o -Wn o-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_ LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/params.cpp -o build/temp.linux-x86_64-3.6/src/params.o -Wno- write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LO NG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/pyodbccompat.cpp -o build/temp.linux-x86_64-3.6/src/pyodbcco mpat.o -Wno-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -D HAVE_LONG_LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/pyodbcdbg.cpp -o build/temp.linux-x86_64-3.6/src/pyodbcdbg.o -Wno-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_L ONG_LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 
-I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/pyodbcmodule.cpp -o build/temp.linux-x86_64-3.6/src/pyodbcmo dule.o -Wno-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -D HAVE_LONG_LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/row.cpp -o build/temp.linux-x86_64-3.6/src/row.o -Wno-write- strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LONG -DS IZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes -fPIC -DPYODBC_VERSION=4.0.26 -I/var/lib/jenkins/.c ache/pypoetry/virtualenvs/ds-ops-tools-py3.6/include -I/usr/local/include/py thon3.6m -c src/textenc.cpp -o build/temp.linux-x86_64-3.6/src/textenc.o -Wn o-write-strings -DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_ LONG -DSIZEOF_LONG_INT=8 -I/usr/include cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default] creating build/lib.linux-x86_64-3.6 g++ -pthread -shared -Wl,-rpath /usr/local/lib build/temp.linux-x86_64-3 .6/src/buffer.o build/temp.linux-x86_64-3.6/src/cnxninfo.o build/temp.linux- x86_64-3.6/src/connection.o build/temp.linux-x86_64-3.6/src/cursor.o build/t emp.linux-x86_64-3.6/src/errors.o build/temp.linux-x86_64-3.6/src/getdata.o build/temp.linux-x86_64-3.6/src/params.o build/temp.linux-x86_64-3.6/src/pyo dbccompat.o build/temp.linux-x86_64-3.6/src/pyodbcdbg.o build/temp.linux-x86 _64-3.6/src/pyodbcmodule.o build/temp.linux-x86_64-3.6/src/row.o build/temp. linux-x86_64-3.6/src/textenc.o -L/usr/lib -L/usr/local/lib -L/usr/local/lib -lodbc -lpython3.6m -o build/lib.linux-x86_64-3.6/pyodbc.cpython-36m-x86_64- linux-gnu.so -L/usr/lib64 -lodbc /bin/ld: /usr/lib/libpython3.6m.a(abstract.o): relocation R_X86_64_32S a gainst symbol `_Py_NotImplementedStruct' can not be used when making a share d object; recompile with -fPIC /bin/ld: /usr/lib/libpython3.6m.a(boolobject.o): relocation R_X86_64_32 against `.data' can not be used when making a shared object; recompile with -fPIC [...] /bin/ld: /usr/lib/libpython3.6m.a(parser.o): relocation R_X86_64_32 agai nst `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC /bin/ld: /usr/lib/libpython3.6m.a(getcompiler.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recom pile with -fPIC /bin/ld: final link failed: Nonrepresentable section on output collect2: error: ld returned 1 exit status error: command 'g++' failed with exit status 1 Edit: I think this is because my libpython*.so files are in /usr/local/lib rather than /usr/lib; I've added /usr/local/lib to ld.so.conf and run ldconfig but that doesn't seem to do anything. Edit 2: I found a suggestion that renaming /usr/lib/libpython3.6m.a would allow the .so files to be 'found' and this seems to have worked! 
But I'm still puzzled as this exhibited symptoms first of unixODBC not being installed with the package manager (which it wasn't) and then of python not having --shared-packages enabled, which it did, but other files were overriding those packages somehow. It would be great if someone could shed light on that but I realize that it's hard to say without knowing exactly how the system was set up and manipulated by the helpdesk folks who work on it.
UnixODBC-dev installed in /usr/local/, resulting in gcc reporting sql.h can't be found
The TS7400 v2 SBC isn’t supported by the standard Debian kernel; it uses a variant provided by the manufacturer. This means you can’t use Debian-provided kernel packages, including the header packages. To build extra modules for the system, you should cross-compile; building the kernel on the SBC isn’t recommended. See the TS wiki for details; basically you’ll need to clone the appropriate repository and use that to build: git clone https://github.com/embeddedarm/linux-2.6.35.3-imx28.git
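A rough sketch of the cross-compilation step (assuming an armel cross-toolchain such as arm-linux-gnueabi- on the build host, and that the board's kernel was built with /proc/config.gz support; defer to the TS wiki wherever it differs):

# on the board: capture the running kernel's configuration, if available
zcat /proc/config.gz > config-ts7400

# on the build host, inside the cloned source tree
cp config-ts7400 .config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- oldconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- modules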
I have a ts7400v2 sbc and I am trying to install the linux-headers. I run: sudo apt-get install build-essential linux-headers-$(uname -r)but get the following error: sudo: unable to resolve host ts7400-4e7b7c Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package linux-headers-2.6.35.3-571-gcca29a0 E: Couldn't find any package by regex 'linux-headers-2.6.35.3-571-gcca29a0'is there a particular source I am missing? EDIT: output of uname -a: Linux ts7400-4e7b7c 2.6.35.3-571-gcca29a0+ #2 PREEMPT Mon Mar 16 14:56:01 PDT 2015 armv5tejl GNU/Linuxoutput for apt-cache search linux-headers linux-headers-3.2.0-4-all - All header files for Linux 3.2 (meta-package) linux-headers-3.2.0-4-all-armel - All header files for Linux 3.2 (meta-package) linux-headers-3.2.0-4-common - Common header files for Linux 3.2.0-4 linux-headers-3.2.0-4-iop32x - Header files for Linux 3.2.0-4-iop32x linux-headers-3.2.0-4-ixp4xx - Header files for Linux 3.2.0-4-ixp4xx linux-headers-3.2.0-4-kirkwood - Header files for Linux 3.2.0-4-kirkwood linux-headers-3.2.0-4-mv78xx0 - Header files for Linux 3.2.0-4-mv78xx0 linux-headers-3.2.0-4-orion5x - Header files for Linux 3.2.0-4-orion5x linux-headers-3.2.0-4-versatile - Header files for Linux 3.2.0-4-versatile linux-headers-3.2.0-5-all - All header files for Linux 3.2 (meta-package) linux-headers-3.2.0-5-all-armel - All header files for Linux 3.2 (meta-package) linux-headers-3.2.0-5-common - Common header files for Linux 3.2.0-5 linux-headers-3.2.0-5-iop32x - Header files for Linux 3.2.0-5-iop32x linux-headers-3.2.0-5-ixp4xx - Header files for Linux 3.2.0-5-ixp4xx linux-headers-3.2.0-5-kirkwood - Header files for Linux 3.2.0-5-kirkwood linux-headers-3.2.0-5-mv78xx0 - Header files for Linux 3.2.0-5-mv78xx0 linux-headers-3.2.0-5-orion5x - Header files for Linux 3.2.0-5-orion5x linux-headers-3.2.0-5-versatile - Header files for Linux 3.2.0-5-versatile linux-headers-2.6-iop32x - Header files for Linux iop32x configuration (dummy package) linux-headers-2.6-ixp4xx - Header files for Linux ixp4xx configuration (dummy package) linux-headers-2.6-kirkwood - Header files for Linux kirkwood configuration (dummy package) linux-headers-2.6-orion5x - Header files for Linux orion5x configuration (dummy package) linux-headers-2.6-versatile - Header files for Linux versatile configuration (dummy package) linux-headers-iop32x - Header files for Linux iop32x configuration (meta-package) linux-headers-ixp4xx - Header files for Linux ixp4xx configuration (meta-package) linux-headers-kirkwood - Header files for Linux kirkwood configuration (meta-package) linux-headers-mv78xx0 - Header files for Linux mv78xx0 configuration (meta-package) linux-headers-orion5x - Header files for Linux orion5x configuration (meta-package) linux-headers-versatile - Header files for Linux versatile configuration (meta-package)
trying to install linux headers but not found in sources
Kali Linux is based on Debian testing, and therefore the injectable ISO from VirtualBox will usually be for an earlier version of the Linux kernel. I want to generalize this answer, as this same problem will always apply to Kali Linux so long as it's based on Debian testing. If you do not need X support, install virtualbox-guest-utils; with X support, virtualbox-guest-x11. Or, if you wish to use the guest additions as provided by Oracle, install the virtualbox-guest-additions-iso package. (If you install the latter you have to know how to mount the ISO on the guest yourself and install it like you just did.) Otherwise you do not need to do anything after install except maybe reboot the guest.
I'm trying to install VirtualBox guest additions on Kali Linux 2018.1 with the kernel version 4.14.0 But when I try to install them from the guest additions ISO by running the VBoxLinuxAdditions.run file it gives the error for missing the linux-headers-4.14.0-kali3-amd64 header. When I search the apt cache it doesn't come up but I have the 4.15.0 header installed. I also tried to install linux-headers-amd64 but it says that linux-headers-4.14.0-kali3-amd64 has no installation candidate Kali docs say I should use the virtualbox-guest-x11 package and I installed it. But after a reboot absolutely nothing happens and the guest additions still don't work. I even tried installing the old headers manually but that bricked my installation. I'm quite lost and I didn't find any other solutions to this problem. I would appreciate help. EDIT: I added some more information but please tell me what would you like more. EDIT 2: Downvoting isn't going to help improve my question. Please give me pointers and I'll improve it
Kali Linux VirtualBox Header Problems [duplicate]
From the bottom up:

Xorg, XFree86 and X11 are display servers. This creates the graphical environment.
[gkx]dm (and others) are display managers. A login manager is a synonym. This is the first X program run by the system if the system (not the user) is starting X and allows you to log on to the local system, or network systems.
A window manager controls the placement and decoration of windows. That is, the window border and controls are the decoration. Some of these are stand alone (WindowMaker, sawfish, fvwm, etc). Some depend on an accompanying desktop environment.
A desktop environment such as XFCE, KDE, GNOME, etc. are suites of applications designed to integrate well with each other to provide a consistent experience.

In theory (and mostly so in practice) any of those components are interchangeable. You can run kmail using GNOME with WindowMaker on Xorg.
I posted a question and noticed people weren't distinguishing correctly between many of these things: Windows Managers vs Login Managers Vs Display Managers Vs Desktop Environment. Can someone please clear this up, i.e. tell us the difference between them and how they are related perhaps? What category does Xorg fall under? What about Gdm/Kdm/Xdm? People also talk about X. What is X?
Windows Managers vs Login Managers Vs Display Managers Vs Desktop Environment
Edit /etc/mdm/mdm.conf and set AutomaticLoginEnable=false
I'm using Linux Mint with Cinnamon. The login screen appears so I can type a username, but it also displays that in 10 seconds userx will be automatically logged in. And it happens. How do I make it wait indefinitely?
how to disable auto login in linux mint
Edit /etc/gdm/custom.conf and add or change the Exclude directive in the [greeter] section: [greeter] Exclude=nobody,alice,bobUsers alice and bob won't be shown on the list at the login screen but can still log in by typing their name and password (if they have a password). See more details in How to hide users from the GDM login screen? (it's mostly distribution-independent — some details might change, for example files may located in different places, and the threshold for system users is 500 on most Red Hat derivatives but 1000 on most Debian derivatives).
I need to run the web browser with another user but I don't want the user to be shown at the login screen. How can I create a user that will not be listed on the login screen? GNOME/Scientific Linux 6.3.
How to create a user that doesn't show up on the login screen?
I think the most probable stopper here would be that your .xsession script lacks the execute permission (+x). In gdm, you also need to choose “Custom Session” (and not the standard “Xmonad” session) in the Session menu before logging in.
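If the permission is the problem, fixing it is a one-liner:

chmod +x ~/.xsession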
I have installed xmobar, xmonad on ubuntu 11.04.

#!/bin/bash
trayer --edge top --align right --SetDockType true --SetPartialStrut true \
  --expand true --width 10 --transparent true --tint 0x191970 --height 12 &
nm-applet --sm-disable &
sleep 3
gnome-power-manager &
xmonad

I put this in my .xsession file. But it does not seem to execute. I believe so because I do not see any of the applications in my processes list after xmonad starts. Is there anything I am missing?
My .xsession and .xinitrc files are not executing
I found that this is possible by editing the slim.conf file available in /etc. You would need admin credentials to open this file. SLiM themes are placed in /usr/share/slim/themes. In the slim.conf file, there is a section that mentions the theme:

# current theme, use comma separated list to specify a set to
# randomly choose from
current_theme       crunchbang

You can change this to any of the themes present in /usr/share/slim/themes. Change the theme and exit the file. Try logging out and logging back in. That's it. The login screen is changed with immediate effect. More information available here: http://slim.berlios.de/
I am on SLiM, and I don't like the default login screen. I want a login screen like the one shown below: But instead I have a pretty minimal one which has just one textbox and nothing else on the screen. I can't find a screenshot of it, but that is what I got when I am done installing. Is changing to GDM the only way to get a login screen like this? Is there any other way?
How to change login screen in CrunchBang?
Is sway compatible with gdm?

yes

Does gdm support wayland or only Xorg?

gdm3 itself runs on wayland. It supports both wayland and Xorg sessions.

How to configure gdm for sway?

You are missing an entry in /usr/share/wayland-sessions. This folder contains wayland desktop session entries for display managers in general. (Respectively, X desktop session entries are located in /usr/share/xsessions.) Create a file /usr/share/wayland-sessions/sway.desktop with this content:

[Desktop Entry]
Version=1.0
Name=Sway
Comment=Sway - i3 on Wayland
# Please choose matching path
Exec=/usr/bin/sway
#Exec=/usr/local/bin/sway
Type=Application

This entry was missing on my system, too. I've compiled sway from source; wayland-session/xsession entries are rather part of ready-to-use packages. Please make sure the Exec line matches your path to the executable sway. Note that gdm3 does not show entries in /usr/share/wayland-sessions if your host runs with a proprietary NVIDIA driver. The proprietary NVIDIA driver does not support Wayland. However, the free nouveau driver does.
I have installed the sway window manager on Fedora 27. The system uses gdm as its login manager. But gdm does not provide sway for selection as the login session. Only Gnome, which is also installed on the system, is shown. I did not have this problem with i3wm, when I tried it. Is sway compatible with gdm? Does gdm support wayland or only Xorg? How to configure gdm for sway, or which login manager is preferred for usage with sway?
How to configure gdm for login into a sway session?
The location of the X cookie file can be configured with the XAUTHORITY environment variable. The default is ~/.Xauthority. Of course, the location that you pass to applications has to match the location where the cookie is stored. SLiM doesn't offer a way to add the cookie to a different file: it has ~/.Xauthority hard-coded. If you want to use a different file, patch SLiM or use a display manager that happens to have this configuration option. For example, Gdm stores X cookies under /var/run/gdm. I think you can make .Xauthority a symbolic link, if you don't want the modifiable file to be in your home directory. Making your home directory immutable is an exercise in futility. You're likely to encounter many other similar issues. The standard place for configuration files and state files is your home directory — that's where dot files get their name, because they start with a . so that ls won't list them by default.
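For programs that do honor the variable (SLiM, as explained, does not), redirecting the cookie file is only a matter of the environment; a sketch:

export XAUTHORITY="$HOME/.config/Xauthority"
xauth -f "$XAUTHORITY" list   # inspect cookies in the relocated file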
Is it possible to change the location for .Xauthority, to something other than $HOME/.Xauthority? AFAIU, this file is being created every time I log into LXDE, by my login manager slim. The problem I am having is the following: I want to set my home to "immutable" using extended attributes:

chattr +i /home/martin/

This way, no applications can save their files directly in /home/martin/, but they can still save files in directories located at lower levels of my home, i.e. /home/martin/.config/. At the moment, when I set my home to immutable, I cannot login to LXDE because the login manager (slim) cannot create /home/martin/.Xauthority. This happens even if the old .Xauthority exists. The login manager could just overwrite the old file with new data, but apparently this is not what it does. It creates a new file and deletes the old one. This is not allowed when /home/martin is immutable (overwriting an existing file would be allowed). Therefore, I would like to store .Xauthority somewhere else, such as .config/.Xauthority. Is this possible? I know that xauth takes the parameter -f where a file path can be specified. UPDATE: looking at the source code of slim, I think I might have found the place where .Xauthority is being deleted and created again:

string xauthority = pw->pw_dir;
xauthority.append("/.Xauthority");
...
/* reinitialize auth file */
authfile = cfg->getOption("authfile");
remove(authfile.c_str());
putenv(StrConcat("XAUTHORITY=", authfile.c_str()));
Util::add_mcookie(mcookie, ":0", cfg->getOption("xauth_path"), authfile);

How could I change the source code, so that the file gets overwritten, rather than deleted/created?
change location of $HOME/.Xauthority
I think I've made some progress on understanding this question, so I'll post what I know here. This answer is currently for those systems that use PAM. I'll add more on other methods of login as I encounter them. After you type in your username and password into the fields of your display manager, the display manager takes these two fields and starts the PAM authentication process. First it calls pam_start(). This tells PAM which conversation function (we'll get to what that is) you're using and what pam_handle_t struct to initialize. You pass this pam_handle_t struct into all of the following calls. Then you can set any properties like the username using pam_set_item(). You don't have to set the username here. When you call pam_authenticate(), it will ask for any information it doesn't already have. Next you call pam_authenticate() to see if the username and password are valid. At this point, pam_authenticate() gets any information it didn't have using the conversation function. You can look at that link for the details, but in short, PAM will call the conversation function you provided in the struct passed to pam_start() once you've called pam_authenticate(). It will then pass an array of messages to this conversation function. If the msg_style is PAM_PROMPT_ECHO_ON, it's asking for the username, and if the msg_style is PAM_PROMPT_ECHO_OFF, it's asking for the password. The other two options are described in the spec and are used for error and informational messages. Depending on the message type, populate the resp array with the responses and return the correct error code. Now if pam_authenticate() returns PAM_SUCCESS, that means the user exists. We then have to call pam_acct_mgmt() to make sure the user has permission to login at this time (I don't know where or how this permission is set). At this point we get a token using pam_setcred() and then open a session with pam_open_session(). I don't know what the purpose of this is or how the token is actually used, but this is required. Let me know if you know more information. Now we can set all of the bash variables we want using pam_putenv(). When our environment has everything we need, we can fork a new process, and then exec the startx command. When this process finishes, the user is logging out. At this point we call pam_close_session(), pam_setcred (with the option to delete the credentials), and pam_end() in that order. If any of this is incorrect or you have more information to add, please let me know. You can see my display manager (still in development) for examples of this.
I'm about to start working on my own display/login manager. I think I'll be able to handle all of the X11 stuff, but I realized I have no clue what to do when the user types in their username and password. Once a display manager has a username and password, what does it do? How does it log you in? Are there any other specific requirements a login manager has to do, like sourcing any configuration files, or is that all left up to the desktop environment?
How do display managers log a user in?
Use a small-footprint display manager: SLiM. With this display manager, some manual configuration is needed. Please refer to their official documentation and write your /etc/slim.conf and ~/.xinitrc. The command you should put in your ~/.xinitrc to start LXDE is:

exec startlxde

The above is coming from: http://wiki.lxde.org/en/Debian It supports autologin.
I am disappointed with existing Display Managers, and so I was wondering whether I could live without one. I have very basic needs on my laptop. I have one user martin who wants to be logged into LXDE automatically after boot. I have made the following change in /etc/inittab:

#1:2345:respawn:/sbin/getty 38400 tty1
1:2345:once:/bin/login -f martin tty1 </dev/tty1 >/dev/tty1 2>&1

and added the following line to my /home/martin/.profile:

xinit 2>/dev/null

Now, when I boot my laptop, LXDE starts automatically. That's great. When I log out of LXDE, I am back in tty1, logged in as martin. The problem is, when LXDE is running and I have screen-lock active to protect my LXDE session, somebody could press CTRL+c in tty1, thereby killing LXDE, and he would be logged in as martin. Is there a way to make LXDE start without leaving martin logged in on tty1? i.e., after LXDE has started, I don't need tty1 anymore and I would like to log out of it. But I cannot because LXDE is started from that console. Is there any way to make LXDE "detach" itself from tty1, so that it shows the standard login prompt as normally? In case it is relevant, I am using Debian Wheezy.
starting LXDE automatically (without Display Manager)
You want to use pam_usb. Read more here: http://pamusb.org/
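For orientation, a minimal sketch of the PAM side, assuming pam_usb is installed and the pendrive has been registered with pamusb-conf: the token is tried first, with the normal password as fallback.

# /etc/pam.d/common-auth (excerpt, sketch)
auth sufficient pam_usb.so
auth required   pam_unix.so nullok_secure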
To meet my security needs I set up a quite long user password on my notebook. But when I am at home or another secure location, typing it in is cumbersome. It would be nice to let the gdm (or: mdm, since I am using Mint 13 with Mate) search for a specific file (on a pendrive), and when it is present, treat it as a security token and log me in automatically with it. I use encrypted home folders.
Mint 13: Is it possible to skip standard login password dialog in presence of a pendrive with the key
This command should list all of your Login managers installed: dpkg -l | grep -i 'Display Manager\|Login Manager' | awk '$2 !~ /^lib/'It will search for the keywords "Display Manager" and "Login Manager" and show only things that do not start with "lib" on the second column. Note: If you have more than one Display Manager configured, it will show both. You will have to use dpkg-reconfigure kdm for example to turn that login manager as your default.
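On Debian, you can also check which display manager is currently configured as the default:

cat /etc/X11/default-display-manager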
I have Debian installed, and am using XFCE. How do I figure out which login manager I have installed?
Which login manager am I using? [duplicate]
Given that a reboot fixed the problem, what you missed is that you needed to tell the login manager (gdm) to reload its configuration. Most system services do not reload their configuration when you change it; in fact few applications automatically reload their configuration files if you edit the file directly (as opposed to going through that application's configuration UI). In the case of Gdm, it doesn't have a command to reload its configuration file. All you can do is restart it; that doesn't happen automatically when you log out (it's still the same instance of gdm until you stop it). The usual way to restart a system service is to run something like one of the following commands (I forget which service manager your version of Red Hat uses):

restart ssh
service ssh restart
/etc/init.d/ssh restart

However, restarting gdm logs out all users that logged in through it, so it's generally not desirable. Instead, run gdm-safe-restart so that gdm will restart as soon as the last user logs out. (This doesn't work on some versions/installations of gdm, notably on Ubuntu 10.04.)
I'm running Redhat 5.6 with the gnome display manager. I would like to configure the login manager so that there is no echo of the password when typing it in (no asterisks or the like). I have edited the files /usr/share/gdm/defaults.conf and /usr/share/gdm/factory-defaults.conf and changed the line #UseInvisibleInEntry=false to UseInvisibleInEntry=true, but I still get a password echo of asterisks at the login screen.
Trying to remove password echo in Redhat 5 when logging in
Trying many possible combinations of settings I solved it, but the conclusion is that there is something amiss with Xfce's session manager settings or GUI. What I have verified is: As stated in the question, when this problem happens, under Settings/Login Window/Security - "Enable automatic login" is checked, while "Enable timed login" is not checked. The odd thing is that in order to avoid typing username & password after logging out it is enough to check 'Enable timed login'. The login window appears, but just 'Enter' is needed to start the session in this case. Even with 'Enable automatic login' unchecked, typing username & password after logging out is not necessary if 'Enable timed login' is checked. That doesn't make too much sense to me, but it works. Edit after restart: Because (related to a different problem - here) "Automatically save session on logout" (Menu/Settings/Session and Startup - General tab) was disabled, the solution above was not saved after startup. So, in case session automatic saving is disabled, make the 'good' settings described above and, in Menu/Settings/Session and Startup - Session tab, click the "Save session" button. In this way, after logout, username and password are not required to login in the default session, but are required to login into a different one. This may seem odd, considering the fact that in Settings/Session and Startup/General there is an option to 'Display chooser on login'. But checking that displays only DE-specific sessions (the ones saved within a certain DE, that is within the generic "session"). In fact it seems that passwords are asked for desktop environments, not for saved "sessions". This double meaning of "session" is confusing. There is no logic in this; this solution is just a limited workaround, and there may be many other variables depending on other settings that I haven't touched yet. For example, the login experience varies even depending on the style and theme of the login window... Some of the themes may display the username as a button (if Style "Themed with faced browser" is selected under Settings/Login Window/Local), but some may not; clicking enter as said above would enter the session directly; but clicking that username button makes the password necessary. Hopefully this application (Xfce4-session) will be in better shape in a future update.
I have Linux Mint 14 Xfce (4.10) and have also installed the LXDE desktop, so I can choose between these sessions if I want. Normally I would set one as default and at startup I am not asked for username&password and am logged in automatically as intended. (Under Settings/Login Window/Security - "Enable automatic login" is checked; and I have also verified that /etc/lightdm/lightdm.conf contains the line autologin-user=cipricus.) But even before installing LXDE I was asked for username and password after logout despite the fact that in Settings/Users and Groups I have the setting of not being asked for password on login. In /etc/mdm/mdm.conf I see the line: AutomaticLoginEnable=true. (Settings/Session and startup/General - 'Display session chooser' is unchecked. But this regards the other type of 'session': not the first type that involves selecting between users-passwords-desktops, but the one that involves selecting between sets of saved settings of the same user and desktop. More on this distinction/confusion here.) I find it odd that if I restart the computer I can enter the default session without password, but if I just log out the username&password are needed to log in.

Opening computer and entering default session/DE: no username&password
selecting between sessions: username&password needed.

In the future I might decide to activate password at startup but I still don't want a password being asked in order to change or re-enter a session after the system has started. Are there other settings to make?
How to enter/choose session after logout without password in (Linux Mint) Xfce?
This configuration was tested on Ubuntu 16.04.1 LTS Server. Modify /etc/pam.d/common-auth:

# [...]
auth [success=2 default=ignore] pam_unix.so nullok_secure
auth [success=1 default=ignore] pam_succeed_if.so user = the_username
# [...]
auth requisite pam_deny.so
# [...]
auth required pam_permit.so
# [...]

The success=x portion tells PAM to skip x rules on success. Substitute the real username for the_username, above. So, all users first try to authenticate with pam_unix.so, which requires a correct username and password in order to succeed. If authentication succeeds with pam_unix.so, then proceed to pam_permit.so. That's the default behavior. If authentication failed with pam_unix.so, proceed to pam_succeed_if.so, which succeeds whenever a user enters the username of the_username, using whatever password was entered. If both pam_unix.so and pam_succeed_if.so fail, then proceed to pam_deny.so; otherwise, proceed to pam_permit.so. A word of caution: be very careful on a live system because it's easy to make a mistake and lock yourself out, probably requiring a fix via rescue media.
How to add a user that accept any password as a valid password? PS: I am aware of the security issue. The user will have a very restricted access (as the guest user in Ubuntu). related question: How to log into another user if the entered password is wrong?
How to add a user that accept any password as a valid password?
Have you tried passwd from the command prompt?
Currently when I login I do not have to provide a login password. I simply click on my user name and I get in. I would like to change that. In order to accomplish that I went to settings >> users and then in Login options there was no password set next to the Password label. So I clicked on it and was prompted to enter my old password and new password. However after entering my new password the change button is still disabled. How can I enable that change button so that I could login using my password.
setting up a startup login password Fedora 20
The virtual keyboard should be displayed by default on devices without a HW keyboard (like tablets), but not on a normal PC with a keyboard attached. To avoid the virtual keyboard in sddm, open /etc/sddm.conf, find the section [General] and put there InputMethod= without any value, like this:

[General]
InputMethod=

As of now the virtual keyboard should not be displayed by default.
I've upgraded from Fedora 27 to 29. The upgrade itself passed fine; just after the final reboot, the graphical login screen (sddm) just flickered the standard screen with users and then displayed a virtual keyboard on a black background (similar to the screenshot originally attached). What can I do to avoid this behaviour?
Fedora 29 graphical login screen (sddm) displays only virtual keyboard
Over on the Arch Linux BBS, Haller wrote: It's a kernel problem: https://bbs.archlinux.org/viewtopic.php?id=236696

This resolved it for me:

pacman -Syu haveged
systemctl enable haveged.service
systemctl start haveged.service

That change reduced this phase of startup from 13 seconds to about 1 second. I tested this on two different computers and got very positive results on both. This will be a satisfactory work-around for me until the kernel issue is resolved.
Recently sddm has become very slow to show the login screen on Arch Linux. After I see the bootup message "Reached Graphical User Interface target" (or similar), there is a long delay of more than 10 seconds before the sddm greeter is displayed. The logs below show that the whole bootup process is slow: Startup finished in 17.085s (firmware) + 4.763s (loader) + 6.253s (kernel) + 15.786s (userspace) = 43.890s.However, from "Started Simple Desktop Display Manager" (May 15 17:53:36) to "Greeter session started successfully" (May 15 17:53:59) is 13 seconds. The greatly increased delay seems to be related to the display of user .face.icon files, although that's just a guess. Hopefully someone will see some clues in the log messages below. May 15 17:54:01 desktop1 sddm-greeter[660]: Message received from daemon: HostName May 15 17:54:01 desktop1 sddm-greeter[660]: Message received from daemon: Capabilities May 15 17:54:01 desktop1 sddm-greeter[660]: QDBusConnection: name 'org.freedesktop.UDisks2' had owner '' but we thought it was ':1.54' May 15 17:54:01 desktop1 sddm-greeter[660]: Adding view for "HDMI-2" QRect(0,0 2560x1440) May 15 17:54:01 desktop1 sddm-greeter[660]: file:///usr/share/sddm/themes/breeze/components/VirtualKeyboard.qml:20:1: module "QtQuick.VirtualKeyboard" is not installed May 15 17:54:01 desktop1 sddm-greeter[660]: inotify_add_watch("/etc/fstab") failed: "Permission denied" May 15 17:54:01 desktop1 systemd[1]: Started Daemon for power management. May 15 17:54:01 desktop1 dbus-daemon[410]: [system] Successfully activated service 'org.freedesktop.UPower' May 15 17:54:00 desktop1 udisksd[666]: Acquired the name org.freedesktop.UDisks2 on the system message bus May 15 17:54:00 desktop1 systemd[1]: Starting Daemon for power management... May 15 17:54:00 desktop1 dbus-daemon[410]: [system] Activating via systemd: service name='org.freedesktop.UPower' unit='upower.service' requested by ':1.53' (uid=995 pid=660 comm="/usr/b> May 15 17:54:00 desktop1 systemd[1]: Started Disk Manager. May 15 17:54:00 desktop1 dbus-daemon[410]: [system] Successfully activated service 'org.freedesktop.UDisks2' May 15 17:54:00 desktop1 udisksd[666]: udisks daemon version 2.7.6 starting May 15 17:54:00 desktop1 systemd[1]: Starting Disk Manager... May 15 17:54:00 desktop1 dbus-daemon[410]: [system] Activating via systemd: service name='org.freedesktop.UDisks2' unit='udisks2.service' requested by ':1.53' (uid=995 pid=660 comm="/usr> May 15 17:54:00 desktop1 sddm-greeter[660]: Cannot watch QRC-like path ":/icons/hicolor/index.theme" May 15 17:54:00 desktop1 sddm-greeter[660]: QObject::installEventFilter(): Cannot filter events for objects in a different thread. May 15 17:54:00 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. (Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 systemd[654]: Started D-Bus User Message Bus. May 15 17:53:59 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. (Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 sddm-greeter[660]: QObject::installEventFilter(): Cannot filter events for objects in a different thread. May 15 17:53:59 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. 
(Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. (Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. (Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 sddm-greeter[660]: QObject: Cannot create children for a parent that is in a different thread. (Parent is SDDM::GreeterApp(0x7fff5551e800), parent's thread is QThread(0x55fe866984a0), current thread is QThread(0x55fe866f9ae0) May 15 17:53:59 desktop1 sddm-greeter[660]: Loading file:///usr/share/sddm/themes/breeze/Main.qml... May 15 17:53:59 desktop1 sddm[630]: Message received from greeter: Connect May 15 17:53:59 desktop1 sddm-greeter[660]: Connected to the daemon. May 15 17:53:59 desktop1 sddm-greeter[660]: inotify_add_watch("/usr/share/wayland-sessions") failed: "No such file or directory" May 15 17:53:59 desktop1 sddm-greeter[660]: Reading from "/usr/share/xsessions/plasma.desktop" May 15 17:53:59 desktop1 sddm-greeter[660]: Loading theme configuration from "/usr/share/sddm/themes/breeze/theme.conf" May 15 17:53:59 desktop1 sddm-greeter[660]: High-DPI autoscaling not Enabled May 15 17:53:59 desktop1 sddm[630]: Greeter session started successfully May 15 17:53:59 desktop1 systemd[1]: Started User Manager for UID 995. May 15 17:53:59 desktop1 systemd[654]: Startup finished in 82ms. May 15 17:53:59 desktop1 systemd[654]: Reached target Default. May 15 17:53:59 desktop1 systemd[654]: Reached target Basic System. May 15 17:53:59 desktop1 systemd[654]: Reached target Sockets. May 15 17:53:59 desktop1 systemd[654]: Listening on D-Bus User Message Bus Socket. May 15 17:53:59 desktop1 systemd[654]: Reached target Paths. May 15 17:53:59 desktop1 systemd[654]: Listening on GnuPG cryptographic agent and passphrase cache (restricted). May 15 17:53:59 desktop1 systemd[654]: Listening on GnuPG cryptographic agent (ssh-agent emulation). May 15 17:53:59 desktop1 systemd[654]: Listening on GnuPG network certificate management daemon. May 15 17:53:59 desktop1 systemd[654]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers). May 15 17:53:59 desktop1 systemd[654]: Listening on GnuPG cryptographic agent and passphrase cache. May 15 17:53:59 desktop1 systemd[654]: Listening on Sound System. May 15 17:53:59 desktop1 systemd[654]: Reached target Timers. May 15 17:53:59 desktop1 systemd[654]: Starting D-Bus User Message Bus Socket. May 15 17:53:58 desktop1 systemd[654]: pam_unix(systemd-user:session): session opened for user sddm by (uid=0) May 15 17:53:58 desktop1 systemd[1]: Started Session c1 of user sddm. May 15 17:53:58 desktop1 systemd-logind[409]: New session c1 of user sddm. May 15 17:53:58 desktop1 systemd[1]: Starting User Manager for UID 995... May 15 17:53:58 desktop1 systemd[1]: Created slice User Slice of sddm. May 15 17:53:58 desktop1 sddm-helper[652]: pam_unix(sddm-greeter:session): session opened for user sddm by (uid=0) May 15 17:53:58 desktop1 sddm-helper[652]: [PAM] returning. May 15 17:53:58 desktop1 sddm-helper[652]: [PAM] Authenticating... May 15 17:53:58 desktop1 sddm-helper[652]: [PAM] Starting... 
May 15 17:53:58 desktop1 sddm[630]: Adding cookie to "/var/run/sddm/{d4e3bc53-809f-3ca5-a1e-b1d287e870b1}" May 15 17:53:58 desktop1 sddm[630]: Greeter starting... May 15 17:53:58 desktop1 sddm[630]: Loading theme configuration from "/usr/share/sddm/themes/breeze/theme.conf" May 15 17:53:58 desktop1 sddm[630]: Socket server started. May 15 17:53:58 desktop1 sddm[630]: Socket server starting... May 15 17:53:58 desktop1 sddm[630]: Display server started. May 15 17:53:58 desktop1 sddm[630]: Running display setup script "/usr/share/sddm/scripts/Xsetup" May 15 17:53:58 desktop1 sddm[630]: Setting default cursor May 15 17:53:57 desktop1 sddm[630]: Running: /usr/bin/X -nolisten tcp -auth /var/run/sddm/{d4e3bc53-809f-3ca5-a1e-b1d287e870b1} -background none -noreset -displayfd 17 -seat seat0 vt1 May 15 17:53:57 desktop1 sddm[630]: Display server starting... May 15 17:53:57 desktop1 sddm[630]: Loading theme configuration from "" May 15 17:53:57 desktop1 sddm[630]: Adding new display on vt 1 ... May 15 17:53:57 desktop1 sddm[630]: Starting... May 15 17:53:57 desktop1 sddm[630]: Logind interface found May 15 17:53:57 desktop1 sddm[630]: Initializing... May 15 17:53:57 desktop1 kernel: random: 6 urandom warning(s) missed due to ratelimiting May 15 17:53:57 desktop1 kernel: random: crng init done May 15 17:53:42 desktop1 dhcpcd[507]: eth0: no IPv6 Routers available May 15 17:53:37 desktop1 systemd[1]: Startup finished in 17.085s (firmware) + 4.763s (loader) + 6.253s (kernel) + 15.786s (userspace) = 43.890s. May 15 17:53:37 desktop1 systemd[1]: Reached target Graphical Interface. May 15 17:53:37 desktop1 systemd[1]: Reached target Multi-User System. May 15 17:53:37 desktop1 systemd[1]: Started Make remote CUPS printers available locally. May 15 17:53:37 desktop1 systemd[1]: Started CUPS Scheduler. May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 colord[615]: failed to get session [pid 562]: No data available May 15 17:53:36 desktop1 systemd[1]: Started Simple Desktop Display Manager. May 15 17:53:36 desktop1 systemd[1]: Started Permit User Sessions. May 15 17:53:36 desktop1 systemd[1]: Starting Permit User Sessions... May 15 17:53:36 desktop1 systemd[1]: Reached target Remote File Systems. May 15 17:53:36 desktop1 systemd[1]: Started Manage, Install and Generate Color Profiles. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/usercommon/Finance/Syncd. May 15 17:53:36 desktop1 systemd[1]: Mounted /var/cache/pacman. May 15 17:53:36 desktop1 systemd[1]: Mounted /kit. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/mari/fileserver/Desktop. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/usercommon/Finance/Receipts. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/jessica/Documents. May 15 17:53:36 desktop1 systemd[1]: Mounted /backup/admins. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/natasha/Documents. 
May 15 17:53:36 desktop1 systemd[1]: Mounted /backup/files. May 15 17:53:36 desktop1 systemd[1]: Mounted /home/usercommon/Ventures. I'm seeing the same issue on multiple computers. All run Arch Linux KDE with plasmashell 5.12.5 or newer. Linux 4.16.8-1-ARCH #1 SMP PREEMPT Wed May 9 11:25:02 UTC 2018 x86_64 GNU/Linux.
sddm slow to launch [duplicate]
You absolutely can change the SDDM background through the GUI; it's right there in System Settings.
I have looked and not found one. I'm surprised, as something as trivial as this should be configurable via a GUI. Even SDDM for KDE doesn't offer this, and KDE is normally super configurable via GUI. This is NOT about doing this by command line and editing config files; I know that is possible.
Is there a login / display manager in Linux with a config GUI for setting wallpaper (NOT editing config files by CLI)?
Most display managers let the user choose a session type when logging in. The primitive xdm doesn't, but more recent ones such as gdm, kdm, lightdm, etc. do. There's a directory, usually either under /etc/X11 or under the display manager's configuration directory, that records session names with the program associated with each name. Under Arch, the location is /usr/share/xsessions. So just ensure that your favorite window manager or session manager is listed.
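For illustration, a session entry for i3 could look like the sketch below (the i3 package normally ships one already; the Exec path here is an assumption). Save it as /usr/share/xsessions/i3.desktop:

[Desktop Entry]
Name=i3
Comment=improved dynamic tiling window manager
Exec=i3
Type=Application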
I'm working towards an Arch install with a tiling window manager (probably i3) but was wondering whether it's possible to use a login manager and have the Gnome 3 desktop available as a fallback. So that one user would have the option to select either the Gnome desktop, or the i3 window manager on login. Or would it only be possible for different users? Or wouldn't it be possible at all?
Can I use a login manager to choose different window managers?
Ranges: You can do it with the following commands:

for commenting:
:66,70s/^/#

for uncommenting:
:66,70s/^#/

Obviously, here we're commenting lines from 66 to 70 (inclusive).
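The same substitutions work from visual mode: select the lines with V, press :, and vim pre-fills the '<,'> range for the selection, so you only type the rest:

:'<,'>s/^/#
:'<,'>s/^#/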
How can I select a bunch of text and comment it all out? Currently I go to the first line, go to insert mode, then type # left-arrow down-arrow, and then I repeat that sequence, perhaps saving a few keystrokes by using the . repeat feature to do each line. Is there any way I could (for instance) select either multiple lines in visual mode or by using a range of lines and an ex ('colon') command, and for that range comment out all the lines with a # to make them a "block comment"? The ability to quickly 'de-comment' (remove the #'s) for a block comment would also be nice.
How to comment multiple lines at once? [duplicate]
One simple approach is xdotool, like xdotool type 'text'
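For instance, a one-line script you could bind to a desktop shortcut (the address is of course a placeholder):
#!/bin/sh
# Type a canned string into whatever window currently has focus.
xdotool type --clearmodifiers 'jane.doe@example.com'
The --clearmodifiers flag releases modifier keys still held down from pressing the shortcut itself, which otherwise tends to garble the typed text.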
How can I configure a shortcut key to send a text string to the current program? The purpose is to type common entries quicker (email address, street address, phone number, username, favorite quote, etc). I don't need any further automation than just entering the text. Gentoo Linux (3.2.12-gentoo) Xfce Desktop Environment (Version 4.8)
Keyboard Shortcut To Send Text Strings To Program
You can just use % for current file. This command should serve your purpose: :! python %
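If you run the script often, a mapping in ~/.vimrc saves even more keystrokes (the choice of <F5> here is arbitrary, not a Vim default):
" run the current file with Python
nnoremap <F5> :!python %<CR>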
In vim, if I'm working on a Python script, I will commonly type: :! python this_script.py to execute the script. Is there a shortcut for the name of the current file? If not, can I easily make one? I'm new at vim, and I'm not sure how to google for this.
Is there a vim shortcut for <name of current file>?
While my other answer will probably work on most Linuxes, even if they're many years old, SystemD and udev actually makes things easier:use lsusb to find the vendor and product code of your additional keyboard. (In my case, it's Vendor 145F, Product 0177. Make sure to have the letters in uppercase.) create a file /etc/udev/hwdb.d/90-extra-keyboard.hwdb, with contents similar to this:evdev:input:b0003v145Fp0177* KEYBOARD_KEY_7005b=stopcdThe first line identifies the device: the four letters after the v is the vendor code, after the p, it's the product code, from the previous step. Every further line maps a scancode to a symbolic name. To get the scancode, run evtest: Event: time 1553711252.888538, -------------- SYN_REPORT ------------ Event: time 1553711257.656558, type 4 (EV_MSC), code 4 (MSC_SCAN), value 70059 Event: time 1553711257.656558, type 1 (EV_KEY), code 79 (KEY_KP1), value 1To find out what to use for the symbolic name, look at the list of #define KEY_… lines in /usr/include/linux/input-event-codes.h: #define KEY_PLAYPAUSE 164 #define KEY_PREVIOUSSONG 165 #define KEY_STOPCD 166 #define KEY_RECORD 167re-build and load internal databases by running systemd-hwdb update; udevadm trigger verify the new settings work by running evtest again, or by assigning shortcuts in your settings.When trying this out in applications, just remember that if your desktop environment already uses that shortcut, the application won't even see the keypress.
I have a small numpad keyboard which I would like to use for launching macros and shortcuts, along side my regular keyboard. I can attach macros and shortcuts to these keys (i.e, numpad 1 minimises the active window), but my primary keyboard numpad also activates the shortcut. I would like a way to have the secondary keyboard act completely separately and to then attach shortcuts to it. Here is the output I get from xinput. ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ LVT Recon gaming mouse id=10 [slave pointer (2)] ⎜ ↳ LVT Recon gaming mouse id=11 [slave pointer (2)] ⎜ ↳ Corsair Corsair K30A Gaming Keyboard id=13 [slave pointer (2)] ⎜ ↳ SIGMACHIP USB Keyboard id=18 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Power Button id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=12 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=14 [slave keyboard (3)] ↳ LVT Recon gaming mouse id=15 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=16 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=17 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=19 [slave keyboard (3)]
Can I launch macros and shortcuts from a second keyboard on Linux?
The Vim editor will load the man filetype plugin (ft-man-plugin) whenever it opens a file which has a .man filename suffix. A more usable way of using this plugin is by loading it with :runtime ftplugin/man.vim... and then using the :Man command, e.g., :Man lsOne of the things the man filetype plugin changes is that it maps q to :quit. It does this because the :Man command usually runs within Vim, and quitting it brings you back to whatever other document you were writing at the time, and someone decided it would be handier to just have to type q, as you would usually do to exit out of the manual pager in the terminal. For more information, see :help ft-man-plugin in Vim. To avoid loading this plugin for .man files, set the filetype of files with this filename suffix to something else, e.g., text, in your Vim startup file (~/.vimrc or wherever you configure Vim): autocmd BufRead,BufNewFile *.man set filetype=textOr, remove the filetype detection completely for these files: autocmd! filetypedetect BufRead,BufNewFile *.man
When I type the character q on my keyboard in vim, it exits vim. Why? Today I tried to edit a file with the file extension .man. I wanted to edit the file with a macro, so I tried to type qq -- but when the first q was entered, vim closed! vim test.man This is on a fresh install of Debian 12. I do not have a .vimrc defined. I renamed the same file with a .man extension to have a .txt extension, and this time I could create a macro as expected -- typing a q doesn't cause vim to exit. mv test.man test.txt vim test.txt Why is vim exiting when typing a q when trying to edit a file with a .man file extension? And how do I stop this behaviour?
Why does typing 'q' exit vim? (.man file)
Give xdotool (Ubuntu man page) a look. It's extremely powerful and should be able to do whatever you need. http://www.semicomplete.com/projects/xdotool/
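As a rough sketch of what such a script could look like; the window title, field order, and widget layout here are pure assumptions about your application:
#!/bin/sh
# Focus the app window, fill two fields, toggle a check box, press the button.
xdotool search --name "My GTK App" windowactivate --sync
xdotool type 'first field text'
xdotool key Tab
xdotool type 'second field text'
xdotool key Tab space    # Tab to the check box, space toggles it
xdotool key Tab Return   # Tab to the button, Return activates it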
I have an application which is built using GTK+. The application has a very simple interface. When started, the same window always opens, with a few input controls. We want to write a script to fill in text in a couple fields, check a check box, then click a button. Pretty simple, and would be easy to do if a command-line version of the app were available (but it isn't). What's the best way to approach interacting with an X application programmatically?
Interacting with X applications programmatically
I'm not sure what you're trying to do. If you want to make a key combination perform an action, you can use XBindKeys. The companion program xbindkeys_config can help define bindings. If you want to act on existing windows, invoke a program such as xdotool or wmctrl. If you want to make a key combination simulate a sequence of key presses, try xmacro.
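For the XBindKeys route, an illustrative ~/.xbindkeysrc (the programs are just examples; run xbindkeys -k to discover key names on your system):
# launch a web browser
"firefox"
  control+alt + b
# launch a file manager
"thunar"
  control+alt + f
Because xbindkeys talks to X directly, the same file keeps working when you switch desktop environments.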
I'm a big fan of keyboards, so a lot of things are done by combinations of keys, like opening a file browser, web browser etc. Is there some daemon that can monitor my key presses and launch some program afterwards, so that I won't have to configure anything else after moving to another desktop environment?
Making shortcuts cross desktop environment, possible?
To handle that particular macro, you could use the --regex-<LANG> option: ctags --regex-c='/^[^#]*_EXFUN *\( *([^ ,]+),.*/\1/p/' ...Which generates a tags file with: _EXFUN test.c 1;" d file: strchr test.c /^char *_EXFUN(strchr,(const char *, int));$/;" p
I can successfully create a tags file for vim with exctags (Exuberant Ctags). However, creating tags allowing one to jump to the prototype of a function does not work, due to the system headers using a syntax-mangling macro of the form #define _EXFUN(name, proto) name proto and in, e.g., string.h using char *_EXFUN(strchr,(const char *, int)); which creates a tag for _EXFUN instead of strchr: _EXFUN /somedir/include/string.h /^char *_EXFUN(strchr,(const char *, int));$/;" p I create tags with this command: exctags -f tags.p --language-force=c --c-kinds=p file1 file2 ... I've read the exctags man page up and down, tried various -I options to affect macro expansions, but to no avail. Has anyone solved this?
Undoing C syntax mangling macros to make exctags able to create prototype tags
I attempted to duplicate the issue you describe. What I found is that I had two i3 config files existing at the same time. ~/.config/i3/config and ~/.i3/config. In my case, editing ~/.config/i3/config had no effect because it seems that ~/.i3/config trumps it. It's a long shot, but see if maybe you have more than one config file, and possibly you are editing the wrong one.
Question: I'm using i3-wm and I have Mod3 working as a hotkey. I have the following in ~/.config/i3/config:
#This command works
bindsym Mod3+f exec "firefox"
#This doesn't work nor do my other scripts
bindsym Mod3+w exec "openBrowser"
Both of these commands work fine when I run them from bash, but only the 'firefox' command runs with the hotkey. Running my own script doesn't work. Additional Details: openBrowser is a script in /opt/bin/ which is in my path. Also tried doing:
#This command works
bindsym Mod3+f exec /opt/bin/openBrowser
I've also tried other scripts, none of which work when invoked by i3. Thus I've determined it's not an issue with the script. I also noticed when I'm in bash, if I do Mod3+w my cursor blinks, whereas if I do Mod3+[any unset key] the key writes its value to the screen. So it seems i3 is at least trying to run the function.
Execute script from i3 config
You might try this: :.,'c normal @aThis uses the “ranged” :normal command to run the normal-mode command @a with the cursor successively positioned on the first column of each line starting with current line and going down to to the line with mark c. If the mark happens to be above the cursor, then Vim will ask if you want to reverse the range. This is not always the same as applying a count to @a (e.g. 5@a) because the content of register a may not always move down a single line each time it is executed (consider a “macro” that uses searches to move around instead of j or k: it would require a higher count to fully process lines that have multiple matches).
Is there a way to make a macro operate up to a marker? I know if I do 5@a my macro will operate on 5 lines. Example: set marker with `mc` record a macro with `qa` ... now what?Obviously 'c@a just moves the cursor to the marker at c. I've tried buffers, "b'c, but that just goes to the marker. I'm probably missing something very basic or just looking in the wrong places.
Vim markers and macros
xmacro is a basic macro-recorder/macro-player.. it is good for some things, but is not suited to monitoring your keystrokes dynamically (other than for recording)... xmacro: Record / Play keystrokes and mouse movements in X displaysYou are probably better off using a tool like autokey.. You can find some tutorials at How-To Geek Autokey Sample Scripts Autokey Video Autokey Features: KDE and GTK versions available, making AutoKey integrate well into any desktop environment. Write Python scripts to automate virtually any task that can be accomplished via the keyboard Built-in code editor (using QScintilla in KDE or GtkSourceView2 in GTK) Create phrases (blocks of text) to be pasted into any program on demand (uses the X selection) Create collections of phrases/scripts in folders, and assign a hotkey or abbreviation to the folder to display a popup menu Regular expressions can be used to filter windows by their title, to exclude hotkeys/abbreviations from triggering in certain applications Scripts, phrases and folders can be attached to the tray icon menu, allowing you to select them without assigning a hotkey or abbreviation AutoKey can track your usage patterns and present the most frequently used items at the top of the popup menu
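For a simple expansion like the one you describe, an AutoKey phrase configured in the GUI is enough; the scripted equivalent, bound to the abbreviation "thx", is a one-liner using AutoKey's scripting API:
keyboard.send_keys("thank you")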
Could you show me how to write a macro in xmacro (one that will work across the whole desktop environment) that is able to expand strings? E.g. I will type "thx" and it will expand to "thank you".
Automating typing strings in xmacro
You can use apt-file for this, without necessarily knowing where the M4 files go: apt-file search yelp.m4will tell you where the particular file should be located even without having the package (yelp-tools) installed. yelp-tools: /usr/share/aclocal/yelp.m4This tells you that installing yelp-tools should allow the build to proceed further. Alternatively, you can check the build-dependencies of file-roller in Debian: that lists yelp-tools too, along with all the other packages you’ll need. On Linux Mint 18 apt-file isn’t pre-installed, but it’s easy to install: sudo apt-get install apt-fileAfter installation you will need to update its database with: sudo apt-file update
I am on Linux Mint 18 Cinnamon 64-bit. I was about to compile file-roller, known as Archive Manager for GNOME, from source. But when running: ./autogen.sh the following M4 macro is missing: Checking for required M4 macros... yelp.m4 not found ***Error***: some autoconf macros required to build Package were not found in your aclocal path, or some forbidden macros were found. Perhaps you need to adjust your ACLOCAL_PATH?
Checking for required M4 macros... yelp.m4 not found
Remove the \n from after the folder name so the command you are looking for is macro index s ":set confirmappend=no delete=yes auto_tag=yes\n\ <save-message>=archive<sync-mailbox>:set delete=ask-yes\n"
I have following macro defined in my muttrc: macro index s ":set confirmappend=no delete=yes auto_tag=yes\n\ <save-message>=archive\n<sync-mailbox>:set delete=ask-yes\n"When I press s on a message, it will immediately be moved into my archive folder. I would like to modify my macro, so that I will be asked for confirmation before the message is moved. But when I change confirmappend=yes: macro index s ":set confirmappend=yes delete=yes auto_tag=yes\n\ <save-message>=archive\n<sync-mailbox>:set delete=ask-yes\n"and when I press s, mutt becomes immediately unresponsive, all keys stop working, I cannot even exit. The cpu runs at 100% and I have to log in from another console to kill mutt. Can somebody please advise how to correctly modify my macro?
mutt: ask for confirmation before moving message to archive
Take a look at the glob function (:help glob()) For example, this command :nmap <leader>* ciW<C-r>=substitute(glob(@"),'\n',' ','g')<cr>defines a normal-mode mapping that replaces the current word with space separated output of glob. Note that it will clobber your " register, which should not be a big deal as long as you keep that caveat in mind. Unfortunately, it does not result in a pretty display because file names themselves often have spaces and may even have newlines.
Is there any way to let Vim expand a glob pattern to all files that match it? E.g. when I type *.c and press some key, it would become a.c b.c
Expand pattern under cursor to all files matching it
Using __install as an example you can see where it's defined with rpm --showrc | grep __installOr you can see the definition with rpm --eval "%{__install}"
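On a typical system that macro expands to the path of the install binary, e.g.:
$ rpm --eval "%{__install}"
/usr/bin/install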
Inside of an RPM I have %{__install} %{SOURCE2} %{buildroot}I believe that %{__install} is a macro. Where do I find where it is defined? What is the definition? Was it provided by the system or distro, or is it a core rpm thing?
How can I find where an RPM macro is defined or what it expands to?
You want something like this:
to_list=(
    "user1@example.com,"
    "user2@example.com,"
    "user3@example.com,"
    "user4@example.com"
)
mail -s "Subject text here." "${to_list[@]}" < body_text.txt
That is using an array, where you were trying to create a string.
I have a list of 20 e-mail addresses that I'm trying to put in a macro variable as multiple rows in a shell script. In "wide" format it works fine and appears as: to_list="user1@example.com,user2@example.com,user3@example.com,user4@example.com" I want something like below, and I'm having trouble with the quotes, commas, and line breaks: to_list="user1@example.com,user2@example.com, \ . . . user19@example.com,user20@example.com" The usage will be: mail -s "Subject text here." $to_list < body_text.txt According to the syntax the e-mails should be comma separated and only the entire list should be wrapped in quotes as shown in the wide format. However, my test is only sending e-mails to the top row. I appreciate the insight!
Put long list of e-mail addresses in multiple-row macro variable
Let's add some more debugging code. all: $TARGETS define f2 $$(info f2 called on $(1)) .PHONY: target$(1) target$(1): echo "We are in $(1)" TARGETS+=target$(1) endefdefine f1 VAR$(1)=ValueWith$(1) $(info too early: VAR$(1) is $$(VAR$(1))) $$(info right time: VAR$(1) is $$(VAR$(1))) $(eval $(call f2,$(VAR$(1)))) endef$(eval $(call f1,CallOne)) $(eval $(call f1,CallTwo))$(warning Warning: $(TARGETS))Output: too early: VARCallOne is $(VARCallOne) f2 called on right time: VARCallOne is ValueWithCallOne too early: VARCallTwo is $(VARCallTwo) f2 called on debug.mk:18: warning: overriding commands for target `target' debug.mk:17: warning: ignoring old commands for target `target' right time: VARCallTwo is ValueWithCallTwo debug.mk:20: Warning: target target make: *** No rule to make target `ARGETS', needed by `all'. Stop.The problem is that the eval call is made before the definition of VAR…, at the time the function f1 is expanded, instead of at the time the result of that expansion is processed. You need to delay the eval. Also there is a typo in line 1; if you fix it, you'll find that the target all builds nothing since TARGETS is not defined at the time it's used. You need to declare the dependencies later. all: # default target, declare it firstdefine f2 .PHONY: target$(1) target$(1): echo "We are in $(1)" TARGETS+=target$(1) endefdefine f1 VAR$(1)=ValueWith$(1) $$(eval $$(call f2,$$(VAR$(1)))) endef$(eval $(call f1,CallOne)) $(eval $(call f1,CallTwo))$(warning Warning: $(TARGETS)) all: $(TARGETS)
In the following makefile one macro processes its arguments to call another macro. I expect that the makefile below will generate two targets and a correct list of the targets in $TARGETS. But in fact it only generates one target, albeit with the correct list. How do I make such a macro call correctly?
all: $TARGETS

define f2
.PHONY: target$(1)
target$(1):
	echo "We are in $(1)"
TARGETS+=target$(1)
endef

define f1
VAR$(1)=ValueWith$(1)
$(eval $(call f2,$$(VAR$(1))))
endef

$(eval $(call f1,CallOne))
$(eval $(call f1,CallTwo))

$(warning Warning: $(TARGETS))
output of make:
test.mk:16: warning: overriding recipe for target `target'
test.mk:15: warning: ignoring old recipe for target `target'
test.mk:18: Warning: targetValueWithCallOne targetValueWithCallTwo
gmake: Nothing to be done for `all'.
Gmake macro expansion: macro calls macro with variable in arguments
It seems elementary-tweaks no longer has key bindings in elementary Freya. But luckily there is a way to bind keys in Freya as well. You can use a program named xbindkeys. Details about how to configure and use it you can find here. excerpt We only need xbindkeys, a simple yet powerful command line tool to bind commands to a certain key or keys combination. The program can be installed via terminal typing: $ sudo apt-get install xbindkeys After the installation, if you try to run the application, you will be warned to create a configuration file. As user, type: $ touch ~/.xbindkeysrc or, alternatively: $ xbindkeys --defaults > ~/.xbindkeysrc Then edit the file: $ nano ~/.xbindkeysrc And type, before the end section: "slingshot-launcher" Super_L so it will look like this (screenshot omitted). Then press CTRL+X to save and exit. Of course, xbindkeys can be used to bind also different commands to different keys. The values are written as: "command" state (0x8) and keycode (32) keysyms associated with the given keycodes To find the last two values (which, as we’ve seen, can be used indifferently), type $ xbindkeys -k then, in the blank window that’ll be open, type the key or the desired keys combination. The result will appear in the terminal.
I want to be able to run certain scripts by shortcut the way I do in Ubuntu - for example, something similar to this awesome script. (On that model I can search Google for the text selected in any text editor, or even translate it into various languages, search for it on different sites, etc.) How do I use this type of script in elementaryOS?
How to run a script by shortkey in elementaryOS?
The shell will always first empty the file BUILD whenever you use > BUILD in a command, before m4 even runs, so this can never work. What you can try instead is performing the write into BUILD from within the m4 script. For example, replace the last line count dnl with syscmd(`echo 'count` >BUILD')dnl
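Putting that together with the script from the question, BUILD.m4 would end up as below, and you would then run plain m4 BUILD.m4 with no shell redirection at all:
define(`__buildnumber__',`esyscmd(cat BUILD)')dnl
define(`counter',__buildnumber__)dnl
popdef(__buildnumber__)dnl
define(`count',`define(`counter',eval(counter+1))counter')dnl
syscmd(`echo 'count` >BUILD')dnl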
My goal is to create a m4 macro, that reads a value from a file (BUILD), increases it and then saves the output into the file. I came up with the following solution (BUILD.m4): define(`__buildnumber__',`esyscmd(cat BUILD)')dnl define(`counter',__buildnumber__)dnl popdef(__buildnumber__)dnl define(`count',`define(`counter',eval(counter+1))counter')dnl count dnlWhen BUILD contains 3 then running m4 BUILD.m4 outputs 4. Which is great! However, when I call it like this m4 BUILD.m4 > BUILD the BUILD file always contains 1. If I pipe to another file m4 BUILD.m4 > B it works and the B file will contain 4 when BUILD was 3. I suspect it has to do with the > output redirection. When comparing both variants with debug tracing it seems like the one with the redirection into the same file can not read from the file anymore. Variant redirection to different file: % m4 -dtaeq BUILD.m4 > B m4trace: -1- define(`__buildnumber__', `esyscmd(`cat BUILD')') m4trace: -1- dnl m4trace: -2- __buildnumber__ -> `esyscmd(`cat BUILD')' m4trace: -2- esyscmd(`cat BUILD') -> `3' m4trace: -1- define(`counter', `3')Variant redirection to same file: % m4 -dtaeq BUILD.m4 > BUILD m4trace: -1- define(`__buildnumber__', `esyscmd(`cat BUILD')') m4trace: -1- dnl m4trace: -2- __buildnumber__ -> `esyscmd(`cat BUILD')' m4trace: -2- esyscmd(`cat BUILD') m4trace: -1- define(`counter', `')Is there a way of doing it like this, or do I need to use some other means of capturing the output
output redirection issue - m4 macro with self increasing build counter
You can do that by having two macros, a counter holding the current value, and a count macro that expands to the value and redefines `counter'. For example, it could look like this define(`counter',`0')dnl define(`count',`define(`counter',eval(counter+1))counter')dnlWhen the count macro is used, it firstly redefines counter to hold its next value (adding 1 to its present value), and then it uses that value. I'm not immediately sure how to do this with a single macro, and if that's an important aspect of your problem then this is not the answer.
Is it possible to define a m4 macro (without arguments), which expands to 1 on first invocation, expands to 2 on second invocation, and so on? In other words, it should have internal memory storing the number of times it is invoked. Can this be done?
m4 macro implementation of global (non-volatile) counter
A change is any command that modifies the text in the current buffer. You'll find all commands listed under :help change.txt. In insert mode, a change is further limited to a sequence of continually entered characters, i.e. if you use the cursor keys to navigate (which you shouldn't), only the last typed part is repeated. Commands like j are motions; i.e. they don't affect the text, and just move the cursor. Those are not repeated. If you want to repeat multiple changes, or a combination of movements and changes, record the steps into a macro (e.g. qaA;<Esc>jq), and then repeat that (@a).
The dot command in Vim repeats the "last change", but I am not exactly sure what constitutes the "last change". For example, if I type the sequence:A;{ESC}j.Then a semi-colon is appended to the current line, but I have to type "j" again. In other words, the dot macro only does "A;{ESC}", so apparently the ESC is defining the end of the "last change". Why doesn't it include the "j" as well?
dot command in Vim, last change?
I'm not a mutt user, but it looks like tag-prefix-cond can do this. It is like tag-prefix, but if there aren't any tagged messages, the command buffer is flushed without doing anything (in other words, whatever hook you're in stops dead in its tracks), according to a post in the mutt-users mailing list archive.
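Untested, but based on the manual a macro for the workflow in the question could look something like this (folder name and pattern are just examples):
macro index ,f "<tag-pattern>~f facebook.com<enter><tag-prefix-cond><save-message>=Facebook<enter><end-cond><sync-mailbox>" "move tagged Facebook mail"
If nothing matches the pattern, <tag-prefix-cond> discards everything up to <end-cond>, so no stray message gets moved.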
I use mutt and I like to sort out some emails from various mailing lists. I still like them to come to my inbox, but when read, I want to move them somehow automatically. Currently, I do the following:Select the mails matching a pattern, e.g.: T~f facebook.com Move them to some place: ;s=FacebookI made some macros to avoid typing it by myself. However I still need to do the two separate steps. And considering that I have a few different mailing lists (say, LinkedIn as well), that's two steps for each list. I would like to reduce it in a single step, that is to say one command (macro) to select a few messages based on the pattern and move them. The problem is that ;s does not check if some mails are already tagged. So that if none are tagged, it will move the current mail. How could I add some condition to ;s to do nothing if no tag is set?
Mutt: move emails only if some emails are tagged
Add at the end of your .screenrc the following lines: split focus otherTo run multiple command, each in a separated split window: screen -t title1 app1 split focus screen -t title2 app2 split focus screen -t title3 app3and so on.
Here's my .screenrc:
defscrollback 5000
vbell on
vbell_msg " dierre!!! ---- Wuff!! "
screen -t GRINDER ssh user@host
screen -t TRUNK
attrcolor b ".I"
termcap xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
defbce "on"
# caption always "%3n %t%? @%u%?%? [%h]%?%=%c"
# hardstatus alwaysignore
hardstatus alwayslastline '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}[%{W}%n%f %t%?(%u)%?%{=b kR}]%{= kw}%?%+Lw%?%?%= %{g}]%{=b C}[ %D %m/%d %C%a ]%{W}'
This will open two screen(s). The next step I have to do is: Ctrl+A -> S to split the screen, Ctrl+A -> Tab to go to the empty split, Ctrl+A -> " to open the screen list. Now I can choose the other screen and therefore I have a terminal with two split screens. Is there a way to create a macro for this? Really annoying to do it every time.
Is it possible to have a screen macro for this?
Apparently, at some time after version 3.5.4, LibreOffice changed the protocol by which they title various styles (FYI, in the example shown below, it is a paragraph style that is referenced). A snippet of code from a Writer macro which was broken by the subject upgrade (FYI, I use the Record method to create most of my macros): rem ---------------------------------------------------------------------- dim args13(1) as new com.sun.star.beans.PropertyValue args13(0).Name = "Template" args13(0).Value = "First line indent" args13(1).Name = "Family" args13(1).Value = 2dispatcher.executeDispatch(document, ".uno:StyleApply", "", 0, args13())rem ----------------------------------------------------------------------Note the format of the named style, "First line indent". I discovered that, if I merely capitalized said style name (replacing First line indent with First Line Indent), this fixed my macro: rem ---------------------------------------------------------------------- dim args13(1) as new com.sun.star.beans.PropertyValue args13(0).Name = "Template" args13(0).Value = "First Line Indent" args13(1).Name = "Family" args13(1).Value = 2dispatcher.executeDispatch(document, ".uno:StyleApply", "", 0, args13())rem ----------------------------------------------------------------------And, like substitutions for other so broken macros fixed them, too! End of problem.
Recently upgraded from Debian Wheezy to Jessie (yeah, I know...). During said upgrade, the LibreOffice suite got upgraded from version 3.5.4 to 4.3.3. Well, lots of my Writer macros were broken after said upgrade. Is anyone aware of any issues that could have caused this as a result of said upgrade?
LibreOffice Upgrade from 3.5.4 to 4.3.3 Broke Lots of Macros
This is not exactly what you asked, but you can create vim scripts containing those commands. Let's start with a simple case:
$ cat noendspaces
#!/usr/bin/vim -s
:%s/ *$//
:r ! echo "\#last changed by $USER in :" `date`
:x
and then...
$ chmod 755 noendspaces
$ for a in file*.txt
do
./noendspaces $a
done
I edit and correct a lot of identical files using these commands:
:%norm f^ID
:%s/\s\+$//
:%norm A,
:%norm GG$x
I also used the macro mode: qa to record macro a and @a to execute it. But for some strange reason (or probably my error) it applied only some of the commands. My question is: is it possible to save those commands in a script and use
vim -N -u NONE -n -c "set nomore" -S script.vi file.txt
Of course the syntax
:%norm f^ID
:%s/\s\+$//
:%norm A,
:%norm GG$x
used in a script gives me an error. Vim is 7.4 on Slackware 14.2.
vim: how to record norm commands?
Quoting from https://fedoraproject.org/wiki/Packaging:DistTag#Conditionals : Keep in mind that if you are checking for a specific family of distributions, you need to use: %if 0%{?rhel} and NOT %if %{?rhel} Without the extra 0, if %{rhel} is undefined, the %if conditional will cease to exist, and the rpm will fail to build. And similarly you need to use 0%{?fedora} in the first condition.
I have a spec file where its Requires: fields depend on the specific distribution it's being built on. So I need to be able to create a conditional structure along the lines of: %if %{?fedora} Requires: xterm libssh clang BuildRequires: wxGTK3-devel cmake clang-devel lldb-devel libssh-devel hunspell-devel sqlite-devel desktop-file-utils %endif %if (centos test) Requires: xterm libssh clang BuildRequires: wxGTK3-devel cmake clang-devel lldb-devel libssh-devel hunspell-devel %endifwhere (centos test) is to be replaced with some test to see if the distribution we're on is CentOS. I have tried using %{?rhel} and %{?centos} as this test. But both failed. I have also tried the tests %{rhel} and %{centos} but neither worked (as it didn't seem to recognize these macros). I have searched RPM macro references (like https://docs.fedoraproject.org/ro/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch09s07.html and https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html-single/RPM_Guide/index.html) but neither mention these types of macros.
How to determine whether the system an RPM package is built on is CentOS from within a spec file?
I think changing the delete option at the beginning of the macro to ask-no and resetting it to yes at the end should do the trick (note that the set commands must be entered as commands, i.e. prefixed with : and followed by <enter>):
folder-hook =Trash 'macro index <delete> ":set delete=ask-no<enter><delete-message><sync-mailbox><change-folder>^<enter>:set delete=yes<enter>"'
I have defined in Mutt two "trash" macros -- one for the Trash folder (just mark as deleted and sync) and one for the remaining folders (save into Trash and sync): folder-hook . 'macro index <delete> "s=Trash<enter><enter><sync-mailbox><change-folder>^<enter>"' folder-hook =Trash 'macro index <delete> "<delete-message><sync-mailbox><change-folder>^<enter>"'Together with set delete = yes this results in messages immediately being expunged, no questions asked. Such a behaviour is fine outside the Trash folder, however, I'd like Mutt to ask for confirmation before messages are deleted from Trash. The only workaround that I have come up with so far is not to sync in the macro (i.e., use only <delete-message> there), thus having to live with deleted messages being present until I sync manually. (Which is not ideal...) Is it possible to change this setup so that -- in Trash -- Mutt would ask for confirmation before actually marking the messages as deleted and expunging them? Thanks.
Mutt: ask before deleting messages from Trash
I figured this is not a LibreOffice standard macro language, but relatorio, a subset of Genshi (the question was in the context of tryton). According to the first link, supported directives from Genshi are:py:for; py:if; py:choose; py:when; py:otherwise; py:with.It seems that inside a TEST="" statement, standard python syntax is allowed (i.e. in the case I was interested in, see comment above, len(array)).
I have to modify a LibreOffice template with a number of conditional statements. I managed to figure out one can add such statements through Insert > Fields > More fields, tab Functions, type Placeholders and format Text (as of LibreOffice 5.0.6.2), but I can't seem to find the list of available functions. Where do I find some documentation for this language?
LibreOffice placeholder fields documentation
Well this is embarrassing! A short while after giving up and posting this question I stumbled upon the answer: .ds FAM H If it's the first line in the document, it'll set all text to use Helvetica by default, requiring you to use \f[font family descriptor] to change it to e.g. Times Bold-Italic. If it comes after the cover page, that page will be in Times and the rest of the document will be in Helvetica. Hope this helps anyone who has the same problem I had :)
I noticed there's a similar answered question here, but that one pertains to the me package. I'd like to know if, and if so how, it is possible to do that using ms instead.
How can you customise headings in the ms package for groff?
Attention! It disables the confirmation dialog temporarily. macro index,pager a ":set confirmappend=no<enter><tag-prefix><copy-message>=Archive<enter><sync-mailbox>:set confirmappend=yes<enter>" Archive
I'd like to write a macro (index, pager) that saves a message to the Archive mailbox but keep the current message open / selected, or even to to the previous entry. My current macro: macro index,pager a "<save-message>=Archive<enter><previous-entry><enter>" ArchiveThe problem is that <save-message> seems to jump to the next entry that is not deleted on its own, so calling <previous-entry> does not really do the trick afterwards. Is there a way of staying at the current message or going to the previous entry after saving a message in a macro?
Mutt: Save message to different folder but stay on it
Try sync; echo 1 > /proc/sys/vm/drop_caches.
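The echo 3 form from the question usually "doesn't work" because of how it is run: with sudo echo 3 > /proc/sys/vm/drop_caches the redirection is performed by your unprivileged shell, not by the elevated command. Either of these avoids that (1 drops the page cache only, 2 drops dentries and inodes, 3 drops both):
sync; echo 1 | sudo tee /proc/sys/vm/drop_caches
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'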
Is there any command I can use to clear the cache in RHEL? I used this command: sync; echo 3 > /proc/sys/vm/drop_caches but it didn't work.
How to clear memory cache in Linux [duplicate]
Briefly: A shell will almost certainly close file descriptors related to redirections immediately after the command completes. Details: There's no explicit mention of closing the files opened through redirections in POSIX (as far as I can see). But not closing them immediately wouldn't be very useful. The rules for the environment any commands are started in don't allow for passing extra file descriptors. The shell would need to take care to close any extra fds it has saved when starting a command that shouldn't have them. For the usual > filename output redirections, the file would in that case need to be truncated when starting each command, even if the file descriptor was saved. And any saved file descriptor would point to a wrong file if the file concerned was renamed or removed in the meanwhile. For example, this wouldn't behave correctly if the fd opened for the first echo was kept open and used as-is for the second: echo foo >> x; mv x y; echo bar >> x The usual fork+exec model used for starting external programs also makes it very easy to have the files automatically close when the command exits. The shell only needs to fork() first, and open any necessary files in the child process, before calling exec() to replace the child with the actual command. When the child process exits, any files opened by it are automatically closed. In awk, though, the syntax for output redirection is similar to the shell, but any opened files are kept open until the script exits, unless explicitly closed. This will only open foo once, and will not truncate it in between the prints either: awk 'BEGIN { print "a" > "foo"; print "b" > "foo" }'
As described here, redirections use open() to write to a file. There's an inner (?) file descriptor created in the shell, and then used when needed. Is the inner descriptor created for the whole duration of the script or the shell lifetime? Is it destroyed after some time, a number of operations, etc? I mean in particular file descriptors for the files that the shell itself opens for its builtins' operations. Is the descriptor created and the file opened for each operation? How long are they kept? Example: #!/bin/bash >>x echo something ...do many other things not related to the file x >>x echo something moreIs the first descriptor instance kept until the second operation? What about the shell I use in in a terminal? I sometimes keep one session open for days, maybe even weeks. Does it still keep the descriptors for all the files I operated on with shell built-ins?
What's the lifespan of a file descriptor?
The best way I can attempt to answer those questions is to say what those three actually are. zRAM zRAM is nothing more than a swap device in essence. The memory management will push pages out to the swap device and zRAM will compress that data, allocating memory as needed. Zswap Zswap is a compressed swap space that is allocated internally by the kernel and does not appear as a swap device. It is used by frontswap in the same way a swap device may be used, but in a more efficient manner. Zcache Zcache is the frontend to frontswap and cleancache. Zcache supersedes zRAM so you don't really want both of them fighting over resources, although there is some talk about how the two can work well together given the right circumstances. For now I wouldn't bother trying and leave it up to the experts to figure that one out. Some reading: Cleancache vs zram? https://lwn.net/Articles/454795/ https://www.kernel.org/doc/Documentation/vm/zswap.txt http://www.zeropoint.com/Support/ZCache/ZCachePro/ZCacheAdvantages.html Personally, I have just disabled zRAM and enabled Zcache on all my systems that have a new enough kernel (zRAM is still enabled on the Android devices). As for performance: that's something you'd have to look into yourself. Everybody is different. In theory, though, Zcache should be much more memory efficient than zRAM and it works on two levels (frontswap and cleancache), and it can page out to a swap device as needed (on the hard drive, for example). You can also choose which compression algorithm to use, should it be using too much CPU (which I can't imagine it will). Update: Zcache has been removed from the 3.11 kernel (for now), so zRAM has again become the only option in newer kernels. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1256503/comments/3 http://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=96256460487387d28b8398033928e06eb9e428f7
I've been trying to understand the difference in use cases for Zswap, Zram, and Zcache. Apologies in advance for the long/slightly sloppily worded question. I've done a bunch of googling, and I understand that zram is basically a block device for compressed swap, while zswap compresses in kernel using the frontswap api. It appears that one advantage of zswap is that it can move some pages to a backing swap when under pressure in a LRU manner, while zram can't do that (please confirm, not sure if this is true). So here's my question: 1.) As a desktop user, what is the performance difference between zcache/zswap/zram, especially zswap and zram? For example, is one much better/worse at memory fragmentation (the kind that leads to excessive memory usage and waste)? Bonus question: 2.) Is there a likely ideal combination of the above (say, zram+zswap, or zram+zcache) for desktop performance (including responsiveness of desktop, plus minimally disruptive swap behavior and sane memory management)? *Citation of sources is greatly appreciated. I should add that I'm a decently experienced Linux user (5 years), and have tried to really understand how my system including the kernel works. However, I'm not a programmer, and only have very basic programming knowledge (3 credits college course). But be technical if you need to; I'll parse your meaning on my own time. System specs: Linux Mint 15 Processor:Core 2 Quad 6600 (2.4ghz) Ram: 8G linux kernel: liquorix 3.11 series Storage: 128 GB SSD, 1TB HDD 5400rpmNo "buy more ram" comments, please! I've maxed the ram on this motherboard, and have a $0 upgrade budget for the foreseeable future. However I like to keep open memory intensive programs (multiple browsers being the main consumers of my ram) so I don't mind swapping within reasonable performance degradation limits.
Zswap, Zram, Zcache desktop usage scenarios
A dirty page does not necessarily require a write-back. A dirty page is one that was written to since the kernel last marked it as clean. The data doesn't always need to be saved back into the original file. The pages are private, not shared, so they wouldn't be saved back into the original file. It would be impossible to have a dirty page backed by a read-only file. If the page needs to be removed from RAM, it will be saved in swap. Pages that are read-only, private and dirty, but within the range of a memory-mapped file, are typically data pages that contain constants that need to be initialized at run time, but don't change after they have been initialized. For example, they may contain static data that embeds pointers; the pointer values depend on the address at which the program or library is mapped, so it has to be computed after the program has started, with the page being read-write at this stage. After the pointers have been computed, the contents of the page won't ever change in this instance of the program, so the page can be changed to read-only. See “Hunting Down Dirty Memory Pages” by stosb for an example with code fragments. You may, more rarely, see read-only, executable, private, dirty pages; these happen with some linkers that mix code and data more freely, or with just-in-time compilation.
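You can reproduce the initialize-then-protect pattern yourself; this is just a minimal sketch of the idea, not the loader's actual code (the resulting page shows up as read-only and dirty in /proc/<pid>/smaps):
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* One private anonymous page, writable while we initialize it. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    strcpy(p, "value computed at run time");  /* the page is now dirty */
    /* Freeze it: still private and dirty, but read-only from here on. */
    if (mprotect(p, 4096, PROT_READ) != 0)
        return 1;
    printf("%s\n", p);
    return 0;
}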
Executing (for example) the following command to get the list of memory mapped pages: pmap -x `pidof bash` I got output showing several read-only mappings flagged as dirty (the pmap listing itself is not reproduced here). Why are some read-only pages marked as "dirty", i.e. written to so that they require a write-back? If they are read-only, the process should not be able to write to them... (In the provided example the dirty pages are always 4 kB, but I found other cases with different values.) I also checked /proc/pid/smaps, and those pages are described as "Private Dirty".
Why read-only memory mapped regions have dirty pages?
On a modern 64-bit x86 Linux? Yes. It calls kmap() or kmap_atomic(), but on x86-64 these will always use the identity mapping. x86-32 has a specific definition of it, but I think x86-64 uses a generic definition in include/linux/highmem.h. And yes, the identity mapping uses 1GB hugepages. LWN article which mentions kmap_atomic. I found kmap_atomic() by looking at the PIO code.[*] Finally, when read() / write() copy data from/to the page cache: generic_file_buffered_read -> copy_page_to_iter -> kmap_atomic() again.[*] I looked at PIO, because I realized that when performing DMA to/from the page cache, the kernel could avoid using any mapping. The kernel could just resolve the physical address and pass it to the hardware :-). (Subject to IOMMU). Although, the kernel will need a mapping if it wants to checksum or encrypt the data first.
On a modern 64-bit x86 Linux, how is the mapping between virtual and physical pages set up, kernel side? On the user side, you can mmap in pages from the page cache, and this will map 4K pages directly into user space - but I am interested in how the pages are mapped on the kernel side. Does it make use of the "whole ram identity mapping" or something else? Is that whole-RAM identity mapping generally using 1GB pages?
How is the page cache mapped in the kernel on 64-bit x86 architectures?
Swap is only valid during a given boot, so all the tracking information is kept in memory. Swapping pages in and out is handled entirely by the kernel, and is transparent to processes. Basically, memory is split up into pages, tracked in page tables; these are structures defined by each CPU architecture. When a page is swapped out, the kernel marks it as invalid; thus, the next time anything tries to access the page, the CPU will fault, which will cause a handler in the kernel to be invoked; it’s this handler’s responsibility to restore the page’s contents. In Linux, there’s a swap_info structure which describes each swap device or file. Within that structure, a swap_map maps memory pages to blocks in the swap device or file. When a page is swapped out, the kernel stores the swap_info index and swap_map offset in the corresponding page table entry, which allows it to find the page on disk when necessary. (All supported architectures provide enough space for this in their page tables, but there are limits — e.g. the available space means that Linux can manage at most 64GiB of swap on x86.) You’ll find a much more detailed description of all this in the “Swap Management” chapter of Mel Gorman’s Understanding the Linux Virtual Memory Manager.
A swap partition doesn't contain a structured filesystem. The kernel doesn't need that because it stores memory pages on the partition marked as a swap area. Since there could be several memory pages in the swap area, how does the kernel locate each page when a process requests its page to be loaded into memory? Let's explain more: Looking at the header of the swap partition from Devuan OS: #define SWAP_UUID_LENGTH 16 #define SWAP_LABEL_LENGTH 16 struct swap_header_v1_2 { char bootbits[1024]; /* Space for disklabel etc. */ unsigned int version; unsigned int last_page; unsigned int nr_badpages; unsigned char uuid[SWAP_UUID_LENGTH]; char volume_name[SWAP_LABEL_LENGTH]; unsigned int padding[117]; unsigned int badpages[1]; };So when mkswap command is executed for a partition, that's what gets placed on that partition, the swap header. Now, let's have a scenario where "process A" has its memory page swapped, so there's one memory page in the swap area. Of course, there could be many memory pages in the swap area. "Process A" needs to access that memory page that was swapped. "Process A" tells the kernel, may I have my swapped memory page, please? The kernel says: sure, my dear friend. The kernel looks for "process A"'s memory page in the swap partition. Since the swap partition isn't a sophisticated structure (not a filesystem), how would the kernel know how to locate that specific memory page of "process A" in the swap partition? Perhaps the kernel somewhere stores sector addresses for those swapped pages, so when a process asks for its memory page, the kernel knows where to look in the swap partition, reads the memory page from the partition and loads it into memory.
How does the kernel address swapped memory pages on swap partition or swap file?
Each physical page of memory is tracked in the kernel using struct page. This allows the kernel to describe how each page is used; in particular, for anonymous and file-backed mappings, the mapping field points to the address_space structure used to describe the mapped object. For code which needs to find virtual mappings using a given physical page, the kernel provides a set of reverse mapping functions. These allow the reverse mappings for anonymous mappings and file-backed mappings to be walked. For example, try_to_unmap walks the maps looking for any use of a given physical page, so that it can unmap it. shrink_page_list calls try_to_unmap when it decides it needs to unmap a page which is mapped into processes.
From my understanding, when Linux swaps a physical page frame into or out of RAM, it needs to set the valid bit for all virtual pages mapping onto this physical page. Mapping a virtual page to a physical page frame seems to be well explained in textbooks, but how does the kernel find all virtual pages from a physical page frame? An actual implementation in the Linux source code would be appreciated.
How does Linux translate a physical address to (possibly multiple) virtual address?
Seeing articles like this disturbs me although I haven't researched the issue enough to confirm their conclusions. Having an overly-large swap partition on Linux will not cause any performance problems. Swap is used as necessary and can be somewhat controlled by swappiness. The amount of swap space allocated is never considered in the algorithms for swapping (or paging) out processes. The biggest consideration is that if you have to start swapping your performance is going to drop considerably. Any system that relies on swapping needs to have more physical memory installed.
Can a Linux swap partition be too big? I'm pretty certain the answer is, "no" but I haven't found any resources on-point, so thought I'd ask. In contrast, the main Windows swap file, pagefile.sys, can be too large. A commonly cited cap is 3x installed RAM, else the system may have trouble functioning. The distinction seems to lie in the fact that Linux virtual memory is highly configurable with kernel parameters, not to mention compile options, whereas Windows virtual memory is barely so. Windows virtual memory management consequently seems to rely on algorithms that are immutable or seem to rely on swap file size and how it is configured. Linux has its own virtual memory management algorithms, of course, but the question is whether and how they are affected by the size of the specified swap partition or file. This comes up because I have a system with 16GB physical RAM configured with a series of 64GB partitions to facilitate a multi-boot capability. For convenience / laziness, I've simply designated one of these 64GB partitions as swap, i.e., 4x physical RAM in contrast to Windows' 3x cap (the latter being relevant only as a frame of reference because this is a Linux-only system). I'm debugging some issues around memory management and VMware Workstation and have come to wonder what, if any, effect the swap partition's size has on compaction, swappiness, page faults, and performance generally. Many thanks for any constructive input.
Can a Linux Swap Partition Be Too Big?
The explanation on Quora seems to me to be rather confusing, and mixes up a number of concepts. The term “address binding”, in the context of memory addresses (as opposed to network addresses for example), comes from Leon Presser and John R. White’s 1972 paper on linkers and loaders (see also the ACM entry), where it is defined as follows:The translation or mapping of a logical into a physical address is called address binding.A quick read could give the impression that this is talking about logical and physical addresses from a memory management perspective, but that’s not the case; in the paper, physical addresses are addresses of “information” in memory, and logical addresses are the symbols used to refer to that information. Thus address binding is what is commonly referred to nowadays as symbol (or pointer) relocation, and as you say, this can happen at compile time (when generating a static binary for example), at load time (when the dynamic linker resolves symbols in a shared library), or at execution time (when the running program resolves symbols manually, e.g. using dlopen).
I have found some explanations about what "address binding" is. They say that "address binding is an operation of mapping virtual or logical addresses to physical addresses." Is this definition correct? I cannot make sure whether it is correct or not because a university presentation says that converting virtual addresses to physical addresses is performed in execution time. However, address binding says that binding operation can be implemented in compile time, load time or execution time. This shows that there is a contradiction.
What is Address Binding?
Processes (or the kernel, acting on behalf of processes) pre-allocate address space, not pages. When a process allocates memory, the corresponding page-table entries are allocated, and initialised to point to the zero page (except on architectures which forbid this). The zero page is set up to return all zeroes on reads, and fault on writes — the fault handler will then allocate a separate physical page.
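You can watch this happen from userspace; a small sketch (check VmRSS in /proc/<pid>/status, or top's RES column, at each pause):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t len = 512UL << 20;   /* reserve 512 MiB of address space */
    char *p = malloc(len);
    if (!p)
        return 1;
    printf("byte 0 reads as %d (served by the zero page)\n", p[0]);
    getchar();                  /* RSS is still tiny at this point */
    memset(p, 1, len);          /* writes fault in real physical pages */
    getchar();                  /* RSS has now grown by roughly 512 MiB */
    free(p);
    return 0;
}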
Does the process pre-allocate heap and stack memory while dividing it into pages? If yes, will all those pages be empty initially?
How does the paging concept work with heap and stack memory?
Answers to questions 1 and 2: no, once paging has been enabled, the CPU instructions only use virtual addresses, which are translated to physical addresses using the MMU before reading or writing RAM. The __va and __pa macros don't access memory, they just convert addresses between the address spaces. On a 32-bit machine, __va just adds 0xc0000000 to the physical address given as argument, because the mapping has been set up so that physical address N is at virtual address N+0xc0000000. Addresses you want to access with the CPU must have a mapping; you can't bypass the MMU. So a mapping that manages only 128 MB is not sufficient.
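For reference, on 32-bit x86 with the default 3G/1G split the two macros boil down to the following (simplified from arch/x86/include/asm/page.h and friends):
#define PAGE_OFFSET 0xC0000000UL
#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))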
I am studying Professional Linux Kernel Architecture and I am in Chapter 3, Memory Management. There I learned that the kernel address space itself is split into a direct mapping area, a vmalloc area, a kmap area and a fixed mapping area. What I am wondering is the following: 1. Can the direct mapping area (896 MB) of the kernel address space on a 32-bit machine be accessed by functions like __va and __pa without the MMU? 2. If 1. is true, does the master kernel page table (swapper_pg_dir) manage only 128 MB? While studying the kernel code I found a difference in the paging_init function between 32-bit and 64-bit. On 32-bit, paging_init calls pagetable_init, which initializes the master kernel page table. The 32-bit paging_init:
void __init paging_init(void){
 pagetable_init();
 __flush_tlb_all();
 kmap_init();
 olpc_dt_build_devicetree();
 sparse_memory_present_with_active_regions(MAX_NUMNODES);
 sparse_init();
 zone_sizes_init();
}
But on 64-bit, I couldn't find any kernel-page-table-related function in paging_init:
void __init paging_init(void)
{
 sparse_memory_present_with_active_regions(MAX_NUMNODES);
 sparse_init();
 node_clear_state(0, N_MEMORY);
 if (N_MEMORY != N_NORMAL_MEMORY)
  node_clear_state(0, N_NORMAL_MEMORY);
 zone_sizes_init();
}
Does the 64-bit kernel have no master kernel page table? If so, does it access kernel memory only through the direct mapping?
Kernel address space and Kernel page table
The available address space depends on the architecture. One limit is the amount of address space made available by the architecture itself. 64-bit architectures usually allow 64-bit pointers, and 32-bit architectures allow 32-bit pointers. The amount of addressable space can be limited by the architecture beyond these constraints, and the architecture can also impose a certain structure. On top of all that, the kernel applies its own decisions, and some of these are configurable. On 32-bit x86, five different setups are possible: the default allocates 3GiB to userspace, and 1GiB to the kernel, and allows for nearly 1GiB of “low” physical memory — the split is at 0xC0000000; a variant 3G/1G split shifts the split down to allow for a full 1GiB of low memory — the split is at 0xB0000000; the 2G/2G split allocates 2GiB to userspace, 2GiB to the kernel, and has two variants like the 3G/1G split — the split is at either 0x80000000 or 0x78000000; the 1G/3G split allocates 1GiB to userspace, 3GiB to the kernel — the split is at 0x40000000. For a system with 512MiB of RAM, you should use the default 3G/1G split; userspace will have 3GiB of address space, and the kernel will have 1GiB. On 64-bit x86, two different setups are possible, allowing either 128 TiB or 64 PiB of address space for both userspace and the kernel. Other architectures have different setups.
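On 32-bit x86 these five setups correspond to the kernel configuration options CONFIG_VMSPLIT_*, which set CONFIG_PAGE_OFFSET:
CONFIG_VMSPLIT_3G      PAGE_OFFSET = 0xC0000000   (default)
CONFIG_VMSPLIT_3G_OPT  PAGE_OFFSET = 0xB0000000
CONFIG_VMSPLIT_2G      PAGE_OFFSET = 0x80000000
CONFIG_VMSPLIT_2G_OPT  PAGE_OFFSET = 0x78000000
CONFIG_VMSPLIT_1G      PAGE_OFFSET = 0x40000000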
On what basis is the size of user and kernel virtual memory decided in Linux? (32-bit, if that's relevant.) Is it configurable? If we have 512 MB of RAM, what will be the sizes of the user and kernel virtual address spaces?
Size of virtual memory in Linux
I think 2.4 supports the uname system call. Try this
/*
 * Author: NagaChaitanya Vellanki
 */
#include <sys/utsname.h>
#include <stdio.h>
#include <string.h>   /* for strerror() */
#include <errno.h>

int main() {
  struct utsname buf;
  if(uname(&buf) != -1) {
    printf("Operating System name: %s\n", buf.sysname);
    printf("Node name: %s\n", buf.nodename);
    printf("Release: %s\n", buf.release);
    printf("Version: %s\n", buf.version);
    printf("Machine: %s\n", buf.machine);
  } else {
    printf("Error: %s\n", strerror(errno));
  }
  return 0;
}
To compile
gcc -o uname uname.c
Sample output on my raspberry-pi
./uname
Operating System name: Linux
Node name: naga-playground
Release: 4.4.11-v7+
Version: #888 SMP Mon May 23 20:10:33 BST 2016
Machine: armv7l
Try these if available, as suggested by the man page:
cat /proc/sys/kernel/osrelease
4.4.11-v7+
cat /proc/sys/kernel/ostype
Linux
cat /proc/sys/kernel/version
#888 SMP Mon May 23 20:10:33 BST 2016
I have an ARM Linux system running kernel version 2.4, but I'm not sure if the processor has a memory-management unit, so how can I tell whether the system is running a uClinux kernel or a vanilla Linux kernel? The system does not have uname.
Determing if an embedded Linux system runs uClinux
Only posix_memalign should be used, since it's defined by POSIX, as its name suggests. For issues with memalign see for example the Solaris 10 manual page for memory allocation functions:

The argument to free() is a pointer to a block previously allocated by malloc(), calloc(), or realloc(). After free() is executed, this space is made available for further allocation by the application, though not returned to the system. Memory is returned to the system only upon termination of the application. If ptr is a null pointer, no action occurs. If a random number is passed to free(), the results are undefined.

The description of free() doesn't even mention memalign allocations.

for example, memalign() would call malloc(3) and then align the obtained value

The following implementation is an example of the given description (includes added so it compiles as-is):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

void *memalign(size_t alignment, size_t size)
{
    void *mem;
    uintptr_t uip_mem, uip_align;

    // fail if alignment is smaller than sizeof(void*) or not a power of two
    if (alignment < sizeof(void*) || alignment & (alignment - 1)) {
        errno = EINVAL;
        return NULL;
    }

    // allocate alignment extra bytes so that rounding up stays inside the block
    mem = malloc(size + alignment);
    if (mem == NULL)
        return NULL;

    uip_mem = (uintptr_t) mem;
    uip_align = (uintptr_t) alignment;

    // round the returned address up to the next multiple of alignment
    return (void*) ((uip_mem + uip_align - 1) & -uip_align);
}

See this Wikipedia page for more information about alignment. The previous implementation may or may not return the same address returned by malloc, depending on its alignment. Since the returned address may not be the exact one the malloc call produced, it cannot safely be passed to free, which expects an unchanged value obtained from malloc (or another allocation function).
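For comparison, a minimal usage sketch of the portable interface; unlike the memalign example above, POSIX guarantees that this pointer may be passed to free:

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *p = NULL;

    /* alignment must be a power of two and a multiple of sizeof(void *) */
    int err = posix_memalign(&p, 64, 1024);
    if (err != 0) {    /* returns an errno value instead of setting errno */
        fprintf(stderr, "posix_memalign failed: %d\n", err);
        return 1;
    }
    printf("64-byte aligned block at %p\n", p);
    free(p);           /* guaranteed safe by POSIX */
    return 0;
}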
I use Linux only but I want to understand what this means. From The Linux Programming Interface:

Blocks of memory allocated using memalign() or posix_memalign() should be deallocated with free(). On some UNIX implementations, it is not possible to call free() on a block of memory allocated via memalign(), because the memalign() implementation uses malloc() to allocate a block of memory, and then returns a pointer to an address with a suitable alignment in that block. The glibc implementation of memalign() doesn’t suffer this limitation.

From man memalign:

POSIX requires that memory obtained from posix_memalign() can be freed using free(3). Some systems provide no way to reclaim memory allocated with memalign() or valloc() (because one can pass to free(3) only a pointer obtained from malloc(3), while, for example, memalign() would call malloc(3) and then align the obtained value). The glibc implementation allows memory obtained from any of these functions to be reclaimed with free(3).

I don't know much about memory alignment and don't understand the part "for example, memalign() would call malloc(3) and then align the obtained value". Could someone tell me what's going on here and what could go wrong with free()?
On some UNIX implementations, it is not possible to call free() on a block of memory allocated via memalign()
The 16-bit ISA bus only has 24 address lines, so it can only encode addresses up to 2^24 bytes = 16MiB. This matches the 80286 CPU for which it was designed (as an extension to the 8-bit expansion bus used with the 8088 and its 20 address lines). The ISA bus itself was never extended beyond 24 address lines; it was replaced by MCA, EISA, the VESA local bus, and PCI.
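On Linux this limit is expressed through the DMA API: a driver for such a device declares a 24-bit DMA mask so buffers are allocated below 16MiB. A hedged sketch (the probe function and device are hypothetical; dma_set_mask_and_coherent and DMA_BIT_MASK come from linux/dma-mapping.h):

#include <linux/dma-mapping.h>

/* ISA-style device: only 24 address lines, so any DMA buffer must
 * live in the first 16 MiB of physical memory. */
static int example_isa_probe(struct device *dev)
{
    /* restrict both streaming and coherent DMA to 24-bit addresses */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
        return -EIO;    /* platform cannot provide memory below 16 MiB */

    /* subsequent dma_alloc_coherent() calls will then draw from the
     * low DMA zone (ZONE_DMA on x86) */
    return 0;
}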
As per Robert Love's Linux Kernel Development, an x86 ISA device cannot perform DMA into the full 32-bit address space because ISA devices can access only the first 16MB of physical memory (range 0–16MB). Why is that so?
Why x86 ISA devices cannot perform DMA into the full 32-bit address space?
In this context, “fixing up” means merging or splitting the VMAs as appropriate so that they match the regions being manipulated:

if a region to be locked (or unlocked) is smaller than the VMA containing it, the VMA needs to be split;
if consecutive VMAs can be merged, they should be.

The documentation you’re reading is old, but this still applies to current kernels. Fixing up is handled by mlock_fixup, which calls vma_merge and split_vma as appropriate (a simplified sketch follows below). See also the documentation describing the unevictable LRU infrastructure.
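A much-simplified illustration of the splitting half of that logic (my sketch, not the actual mlock_fixup source; split_vma's signature follows the 2.6-era kernels):

/* Make [start, end) coincide with whole VMAs by splitting where the
 * range cuts through a VMA, then flip VM_LOCKED.  Merging neighbours
 * that end up with identical flags is what vma_merge() does in the
 * real mlock_fixup(). */
static int mlock_fixup_sketch(struct vm_area_struct *vma,
                              unsigned long start, unsigned long end,
                              int lock)
{
    int ret;

    if (start != vma->vm_start) {      /* range starts inside the VMA */
        ret = split_vma(vma->vm_mm, vma, start, 1);
        if (ret)
            return ret;
    }
    if (end != vma->vm_end) {          /* range ends inside the VMA */
        ret = split_vma(vma->vm_mm, vma, end, 0);
        if (ret)
            return ret;
    }
    if (lock)
        vma->vm_flags |= VM_LOCKED;    /* pages get faulted in and pinned */
    else
        vma->vm_flags &= ~VM_LOCKED;
    return 0;
}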
I'm reading "Understanding the Linux Virtual Memory Manager" by Gorman. In Chapter 4 about Process Address Space, when VMA operations are introduced, for example create, lock and unlock, the text mentions "fix up region". What does "fix up" means specifically? Does it apply in the same way to different VMA operations? Detailed quote:Linux can lock pages from an address range into memory via the system call mlock() which is implemented by sys_mlock() whose call graph is shown in Figure 4.10. At a high level, the function is simple; it creates a VMA for the address range to be locked, sets the VM_LOCKED flag on it and forces all the pages to be present with make_pages_present(). A second system call mlockall() which maps to sys_mlockall() is also provided which is a simple extension to do the same work as sys_mlock() except for every VMA on the calling process. Both functions rely on the core function do_mlock() to perform the real work of finding the affected VMAs and deciding what function is needed to fix up the regions as described later.The system calls munlock() and munlockall() provide the corollary for the locking functions and map to sys_munlock() and sys_munlockall() respectively. The functions are much simpler than the locking functions as they do not have to make numerous checks. They both rely on the same do_mmap() function to fix up the regions.If an old area exists where the mapping is to take place, fix it up so that it is suitable for the new mapping;The kernel version used in the book is Linux 2.4.22.
What does "fix up" mean in VMA operations in virtual memory management
The Linux kernel has a component called the OOM killer (out of memory). As Patrick pointed out in the comments, the OOM killer can be disabled, but the default setting is to allow overcommit (and thus enable the OOM killer). Applications ask the kernel for more memory, and the kernel can refuse to give it to them (because there is not enough memory or because ulimit has been used to deny more memory to the process). If overcommit is enabled, an application may be granted the amount it asked for; but when the application writes to a new memory page (for the first time), the kernel actually has to allocate physical memory for it, and if it cannot do that, it has to decide which process to kill in order to free memory. The kernel would rather kill new processes than old ones, especially those which (together with their children) consume a lot of memory. So in your case the new process might start but would probably be the one which gets killed. You can use the files

/proc/self/oom_adj
/proc/self/oom_score
/proc/self/oom_score_adj

to check the current settings and to tell the kernel in which order it should kill processes if necessary.
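To inspect or adjust that ordering from a program, something like the following works (a sketch; the +500 adjustment is an arbitrary example value, and negative values require CAP_SYS_RESOURCE):

#include <stdio.h>

int main(void) {
    FILE *f;
    int score;

    /* the kernel's current "badness" score for this process */
    f = fopen("/proc/self/oom_score", "r");
    if (f != NULL) {
        if (fscanf(f, "%d", &score) == 1)
            printf("oom_score: %d\n", score);
        fclose(f);
    }

    /* make this process a more attractive OOM victim;
     * the range is -1000 (never kill) to +1000 (kill first) */
    f = fopen("/proc/self/oom_score_adj", "w");
    if (f != NULL) {
        fprintf(f, "500\n");
        fclose(f);
    }
    return 0;
}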
What happens if a Linux system, let’s say Arch Linux or Debian, is installed with no swap partition or swap file? Then, when running the OS while almost out of RAM, the user opens a new application. Considering that this new application needs more RAM than is available, what will happen? What part of the operating system handles RAM management, and can I configure it to behave differently?
What happens if a Linux distro is installed with no swap and when it’s almost out of RAM executes a new application? [duplicate]
I found this solution as well, but I'm afraid of killing data of other processes. Isn't there a more selective solution?

echo 3 > /proc/sys/vm/drop_caches does not and cannot kill any processes or cause any harm to your system; it just evicts everything from your caches, not from shared memory. ipcs has no relationship to your issue either: it shows System V IPC objects, which is not what tmpfs uses. tmpfs indeed occupies shared memory, but unmounting a tmpfs mount point automatically frees it. Why that hasn't happened for you, I have no idea. I believe your /tmp/ramdisk is still mounted, but for some reason df doesn't show it. A reboot will fix your issue.
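If you want to verify whether /tmp/ramdisk is still mounted even though df omits it, one simple check (a sketch) is to compare device IDs with stat(2): a mount point lives on a different device than its parent directory.

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat dir, parent;

    if (stat("/tmp/ramdisk", &dir) != 0 || stat("/tmp", &parent) != 0) {
        perror("stat");
        return 2;
    }
    /* a differing st_dev means /tmp/ramdisk is a separate filesystem */
    if (dir.st_dev != parent.st_dev)
        printf("/tmp/ramdisk is still a mount point\n");
    else
        printf("/tmp/ramdisk is not mounted\n");
    return 0;
}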
I executed this command to create a RAM disk:

mount -t tmpfs -o size=60G tmpfs /tmp/ramdisk

After that I copied several files into this virtual filesystem as follows:

cp /mnt/user/hugefile.bin /tmp/ramdisk/hugefile.bin
cp /mnt/user/hugefile2.bin /tmp/ramdisk/hugefile2.bin
cp /mnt/user/hugefile3.bin /tmp/ramdisk/hugefile3.bin

Then the last of the cp commands froze and the CPU load hit the maximum, I think because the size of the RAM disk was larger than the free memory. I interrupted it with CTRL+C. After a while I unmounted the RAM disk as follows:

umount /tmp/ramdisk

But, and that's now my problem, it did not free the shared memory:

free -g
              total        used        free      shared  buff/cache   available
Mem:             62           0           0          53          61           7
Swap:             0           0           0

As you can see there is no high usage on the remaining tmpfs mounts:

df -BG | grep tmpfs
tmpfs              1G    1G     1G   1% /run
devtmpfs          32G    0G    32G   0% /dev
tmpfs             32G    0G    32G   0% /dev/shm
tmpfs              1G    1G     1G   1% /var/log
tmpfs              4G    0G     4G   0% /tmp/plextranscode

I found this hint to use ipcs to analyze the usage, but the result is empty:

ipcs

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems

I found this solution as well, but I'm afraid of killing data of other processes. Isn't there a more selective solution?
How do I delete shared memory that was used by mounted tmpfs directory?
In 32-bit systems with more than 896 MB of RAM it is obvious that the mapping of kernel addresses needs to be changed because of kernel virtual addresses and the non-contiguous mapping.

Yes, this is known as highmem.

But how is this handled in 64-bit? As the RAM can always be mapped entirely in the address space, the master kernel page table only needs to be set up once at initialization and is then never changed, since the mapping never changes. Thus, the kernel region in the user page tables never needs to be updated?

Yes.

By the way, does someone have a good explanation of how the user process page tables are updated in 32-bit? It is always said that the master kernel page tables are not directly used but only serve as a reference. Are the entries for the kernel region copied into every process's user page table?

In the highmem document linked above, it says that highmem mappings only require manipulating "the kernel's page tables". "Page tables" are actually a type of tree structure; see for example "Four-level page tables" [LWN.net, 2004]. The top level is a single page (4096 bytes). The entries which map the kernel range are set to the same values in all processes, and hence are shared. The temporary mappings happen at a lower level of the tree, so they only need to modify the shared kernel page tables, and they do not need to modify each process's page tables separately. At least, that's my high-level overview.
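For the 32-bit copying question, the x86 kernel does exactly that when a process's top-level table is created: the kernel-half entries are copied from the master table, and everything below that level is shared. A condensed sketch based on pgd_ctor in arch/x86/mm/pgtable.c (simplified, not verbatim):

/* Copy the kernel-range entries of the master kernel page table
 * (swapper_pg_dir) into a new process PGD.  The lower levels are
 * shared, so later changes to kernel mappings made there are seen
 * by every process without per-process updates. */
static void pgd_ctor_sketch(pgd_t *pgd)
{
    clone_pgd_range(pgd + KERNEL_PGD_BOUNDARY,            /* destination */
                    swapper_pg_dir + KERNEL_PGD_BOUNDARY, /* master table */
                    KERNEL_PGD_PTRS);                     /* # of kernel entries */
}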
In 32-bit systems with more than 896 MB of RAM it is obvious that the mapping of kernel addresses needs to be changed because of kernel virtual addresses and the non-contiguous mapping. But how is this handled in 64-bit? As the RAM can always be mapped entirely in the address space, the master kernel page table only needs to be set up once at initialization and is then never changed, since the mapping never changes. Thus, the kernel region in the user page tables never needs to be updated? By the way, does someone have a good explanation of how the user process page tables are updated in 32-bit? It is always said that the master kernel page tables are not directly used but only serve as a reference. Are the entries for the kernel region copied into every process's user page table?
Does the kernel address region in user page tables need to be updated on a 64-bit system?