Quoting the question: "I would still like to know: what exactly is making nct6775 available now?"

There are a lot of attempts at answering the general question in the following link. Unfortunately none of them are comprehensive, so I will try to improve on them: Linux: How to find the device driver used for a device?

In your case, the sensor device can be found as one of the links shown by ls -l /sys/class/hwmon/*. You could try to extend that command, and find your kernel module immediately:

ls -l /sys/class/hwmon/*/device/driver/module

However, this command makes some assumptions, and it will not work in every case. If the command does not work, narrow it down by checking each individual link in the chain. There are three possible cases:

1. You have a driver link, but no module link. This means the driver is built into the kernel, which would kind of answer your question :-). It is equally possible to run ls -l on the driver link; i.e., to see the name of the driver, change the above command to remove the /module part. Often the driver name is the same as the name of the loadable module, but sometimes they are different.

2. The driver link is not immediately under device. If the above command does not work, you might need to replace device with device/device, and so on. The device link takes you to the parent device, but sometimes the driver is on the grandparent device instead, or even further up :-).

3. None of the parent device(s) have a driver link, or there is no parent device link at all. The device link takes you to the parent device. For example, you might have a network device /sys/class/net/wlan0, and /sys/class/net/wlan0/device might point to the PCI card which provides wlan0. In your case, I can imagine it not having anything like a device on the standard PCI bus. In this case the driver is supposed to define its own custom device, in /sys/devices/platform/. This is exactly what the coretemp driver for my Intel CPU does. But if your driver got this wrong, it would create a device with no parent, and hence no device link. Sensors (hwmon devices) are one of the more obscure classes of child device; I've seen this happen several times before. Looking in ls /sys/devices/virtual/*, I seem to have three devices that get this wrong, and all of them are hwmon devices.

If there is no "physical" / parent device, then there can be no driver. This is expected behaviour for genuinely virtual devices, like loopback (lo) or bridge networking devices. It reflects the device model of the Linux kernel: on a physical device, you can remove the driver that is bound to it, and potentially bind a different driver; it wouldn't make sense to support this without having a physical device. It's just unfortunate that there is no equivalent method to find the module that implements a virtual device.

Contents:
1. Example results looking in /sys
2. I found the module name, now...

1. Example results looking in /sys

$ cd /sys/class/hwmon/
$ ls -l *
total 0
lrwxrwxrwx. 1 root root 0 Dec  2 17:50 hwmon0 -> ../../devices/virtual/thermal/thermal_zone0/hwmon0
lrwxrwxrwx. 1 root root 0 Dec  2 17:50 hwmon1 -> ../../devices/virtual/hwmon/hwmon1
lrwxrwxrwx. 1 root root 0 Dec  2 17:50 hwmon2 -> ../../devices/virtual/thermal/thermal_zone8/hwmon2
lrwxrwxrwx. 1 root root 0 Dec  2 17:50 hwmon3 -> ../../devices/platform/coretemp.0/hwmon/hwmon3

$ ls -l hwmon3/device/driver/module
lrwxrwxrwx. 1 root root 0 Dec  2 18:32 /sys/class/hwmon/hwmon3/device/driver/module -> ../../../../module/coretemp

But the other results did not look so helpful :-).
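Putting the three cases together, here is a minimal sketch that automates the chain-walking described above (assuming only a POSIX shell and sysfs mounted at /sys): for each hwmon device it follows device links upwards until a driver appears, then reports the driver and, if present, the module.

for hw in /sys/class/hwmon/hwmon*; do
    dev=$hw/device
    # climb to the parent, grandparent, ... until a driver link appears
    while [ -e "$dev" ] && [ ! -e "$dev/driver" ]; do
        dev=$dev/device
    done
    if [ -e "$dev/driver" ]; then
        printf '%s: driver %s' "$hw" "$(basename "$(readlink "$dev/driver")")"
        # built-in drivers have no module link
        [ -e "$dev/driver/module" ] &&
            printf ' (module %s)' "$(basename "$(readlink "$dev/driver/module")")"
        printf '\n'
    else
        printf '%s: no parent device with a driver (virtual device?)\n' "$hw"
    fi
done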
What is virtual/thermal/thermal_zone0/hwmon0? hwmon devices (and some other types) also have a name. E.g. the iwlwifi sensor, which is really provided by my Intel Wi-Fi card, but the driver is buggy and declared it as a virtual device.

$ head */name
==> hwmon0/name <==
acpitz

==> hwmon1/name <==
dell_smm

==> hwmon2/name <==
iwlwifi

==> hwmon3/name <==
coretemp

Here's a different device, where the driver is on the "grandparent":

$ ls -l */device/device/driver
lrwxrwxrwx. 1 root root 0 Dec  2 18:33 /sys/class/hwmon/hwmon0/device/device/driver -> ../../../../bus/acpi/drivers/thermal

Also there is no module for this driver, because this one is built into the kernel. You can confirm this if you can find the corresponding option in the kernel build configuration. This is not necessarily named the same as the module, though.

$ ls -l */device/device/driver/module
ls: cannot access '*/device/device/driver/module': No such file or directory

$ grep CORETEMP= /boot/config-$(uname -r)
CONFIG_SENSORS_CORETEMP=m
$ grep ACPI_THERMAL= /boot/config-$(uname -r)
CONFIG_ACPI_THERMAL=y

2. I found the module name, now...

You said you're not 100% sure what you've done. If you've found the module name, but you were worried because you can't remember whether you installed it from an unknown website, here are some things you could look at. You can reload a module and check the path your module was reloaded from:

$ modprobe --remove coretemp
$ modprobe -v coretemp
insmod /lib/modules/4.19.4-200.fc28.x86_64/kernel/drivers/hwmon/coretemp.ko.xz

Then you can query your package manager to confirm the module file came from the distribution kernel package. E.g. for RPM:

$ rpm -q --whatprovides /lib/modules/4.19.4-200.fc28.x86_64/kernel/drivers/hwmon/coretemp.ko.xz
kernel-core-4.19.4-200.fc28.x86_64
$ rpm -q --whatprovides /boot/vmlinuz-$(uname -r)
kernel-core-4.19.4-200.fc28.x86_64

Your package manager should also let you verify that the installed package files have not been modified. It's not so simple to confirm where the package came from :-). Usually you look at the package name and guess :-). You can get a list of available packages and where they come from, e.g. with dnf info kernel, but I don't think dnf can show the checksum of the RPM file that was installed or of the available RPMs.
What sensors can I monitor on my AMD Threadripper 1950X on an ASRock X399 Taichi mobo under Linux? It was announced last year that temperature monitoring was working for Ryzen processors, and that was supposedly included in the 4.15 kernel, according to this: https://www.phoronix.com/scan.php?page=news_item&px=AMD-Zen-Temps-Hwmon-Next. However, it seems the temperatures are offset, which was fixed in kernel 4.18.6 according to this: https://www.phoronix.com/scan.php?page=news_item&px=Linux-4.18.6-k10temp-Correct As far as I can tell, there is absolutely no talk of per-core temperature monitoring under Linux as is available with Windows. However, other sources suggest that I might need to build modules specifically based on my motherboard. These instructions seem to suggest that I can build the appropriate kernel drivers based on the output of sensors-detect: https://linuxconfig.org/monitor-amd-ryzen-temperatures-in-linux-with-latest-kernel-modules According to sensors-detect I have nct6775, but I can't find any sign that I have the appropriate kernel module (it is not shown with lsmod; is there someplace else I should look?). Unfortunately, I cannot build from the repository because it is no longer on GitHub. So these are my questions:

What drivers and kernel modules give what information? Specifically, which ones give the per-core readings that are available under Windows?
What is the status of temperature drivers for Ryzen under Linux: complete, incomplete, hacked together and never-to-be-reliable?
If I can get nct6775 built, what will that give me in addition to the k10temp that I already have?
Where else might I go to get the source to build them from?
Why is this so poorly documented? Is not having clear info about this a year and a half after release par for the course, or is AMD being unusually unhelpful by industry standards?
Ryzen/Threadripper temperature sensors: Which sensors are related to which kernel modules and how to enable them
If using systemd

Edit /lib/systemd/system/thermald.service by running

sudo systemctl edit --full thermald.service

Add the option at the end of ExecStart:

[Unit]
Description=Thermal Daemon Service

[Service]
Type=dbus
SuccessExitStatus=1
BusName=org.freedesktop.thermald
ExecStart=/usr/sbin/thermald --no-daemon --dbus-enable

[Install]
WantedBy=multi-user.target
Alias=dbus-org.freedesktop.thermald.service

If using upstart (Ubuntu before 15.04)

In Ubuntu, you can add the option in /etc/init/thermald.conf:

# thermald - thermal daemon
# Upstart configuration file
# Manages platform thermals

description "thermal daemon"

start on runlevel [2345] and started dbus
stop on stopping dbus

# don't respawn on error
normal exit 1

respawn

# consider something wrong if respawned 10 times in 1 minute
respawn limit 10 60

exec thermald --no-daemon --dbus-enable

Add the option in the last line.
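After saving either file, the service needs to pick up the change. A minimal sketch for the systemd case (systemctl edit --full normally triggers a reload by itself when the editor exits, so the explicit daemon-reload is belt-and-braces):

sudo systemctl daemon-reload
sudo systemctl restart thermald.service
systemctl status thermald.service    # check the running command line now includes the option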
I get the message:

thermald: Unsupported cpu model, use thermal-conf.xml file or run with --ignore-cpuid-check

sensors-detect suggests coretemp and w83627hf, which are installed and in /etc/modules.

Try: if I run sudo thermald --no-daemon --ignore-cpuid-check | tee thermald.log, I get:

NO RAPL sysfs present
Polling mode is enabled: 4

Try: I change the thermal-conf.xml to the example here. Running sudo thermald --no-daemon | tee thermald.log, I get:

NO RAPL sysfs present
10 CPUID levels; family:model:stepping 0x6:f:6 (6:15:6)
Need Linux PowerCap sysfs
Unsupported cpu model, using thermal-conf.xml only
Polling mode is enabled: 4
sensor id 2: No temp sysfs for reading raw temp
XML zone: invalid sensor type pkg-temp-0
Zone update failed: unable to bind

Therefore, the easiest way seems to be to run thermald with the option --ignore-cpuid-check. How can I run thermald with the option --ignore-cpuid-check? Or is there another way to get the proper xml configuration?
How to run thermald with option --ignore-cpuid-check
Using sed Here is one way: $ sensors -Af | sed -n '2{s/°.*//; s/[^+-]*//; p; q}' +105.8Or, using the same command inside command substitution to capture its output in a variable: temp=$(sensors -Af | sed -n '2{s/°.*//; s/[^+-]*//; p; q}')s/°.*// removes the first occurrence of the degree symbol, °, and everything after it. s/[^+-]*// removes everything up to but not including the first + or -. Using awk $ sensors -Af | awk 'NR==2{print $3+0; exit;}' 105.8The number that we want is in the third field. Because the third field contains characters, for example +105.8°F, we add 0 to it. This forces awk to convert it to what we want: a number.
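An alternative sketch using only shell parameter expansion, with no extra sed or awk process for the extraction (this assumes the second line of sensors -Af looks like "temp1: +150.8°F (crit = +197.6°F)", i.e. the reading is positive and prefixed with "+"):

line=$(sensors -Af | sed -n 2p)
temp=${line#*+}        # drop everything up to and including the first "+"
temp=${temp%%°*}       # drop the degree symbol and everything after it
echo "$temp"           # -> 150.8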
I would like to cut 150.8 from this string: temp1: +150.8°F (crit = +197.6°F). Here is my script for logging temperatures with the command sensors:

#!/bin/bash

now=$(date +"%m_%d_%Y")          # get current date
now_excel=$(date +"%D %H:%M")    # get current date & time in excel format

file_dir="/var/www/html/logs"
file="$file_dir/logging_$now.csv"    # backup name and directory

temp=$(sensors -Af | sed -n '2{p;q}')    # temp1: +150.8°F (crit = +197.6°F)
#temp_num="$temp" | sed 's/+\(.*\)°/\1/g'

# add line to csv
printf "$now_excel" >> "$file"
printf ", " >> "$file"
printf "$temp" >> "$file"
printf "\n" >> "$file"

find "$file_dir"/* -mtime +3 -exec rm {} \;    # remove any backup files older than 3 days

exit 0
Cut string from command output
On x86 hardware, it gets a lot of its information from DMI, an API for retrieving information from the BIOS. More details at the lshw repository on GitHub: lshw
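The same DMI data the kernel decodes is exported directly through sysfs, so you can inspect it yourself. A quick sketch (field availability varies by firmware; dmidecode needs root):

cat /sys/class/dmi/id/sys_vendor      # e.g. the machine manufacturer
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/bios_version

sudo dmidecode -t system              # dump the full DMI "system" table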
For example: where does the lshw ("list hardware") program read its info from? (I mean on a software level; that there are probably some ROM chips here and there is of course the case.) Basically everything the user could potentially want to know about the machine and system internals is provided to the user by the kernel through virtual filesystems, i.e. procfs mounted at /proc, right? So is the only way to read non-process/user-land stuff (meaning the actual system-software/kernel/OS info, not some isolated process which gets told everything) through those virtual filesystems? And how does the kernel get that information? I mean, does it see the ROM chips/sensors as I/O hardware, and do they have a physical address which is memory-mapped? For the CPU, I know that x86 has a special instruction which puts the CPU info in a register, out of which it can be read with additional instructions (this is what e.g. lscpu uses).
Where does system information come from
I got the same problem and implemented a solution: sed is used to parse the piped output of sensors with regular expressions, and the result is appended to the log file. The date is written as a UNIX timestamp to the file, and formatted for stdout. To suppress the line break, echo -n "$(date +"%H:%M:%S")" is used. Next, the output of sensors is piped into sed to parse every line and find the temperatures, by searching for °C. The result is again piped to sed. Now the string is divided into three parts: the name of the sensor at the beginning, with colon and whitespace, ^[a-zA-Z0-9 ]*:\s*; the temperature, consisting of sign, numbers and point, \([0-9.+-]*\); and the rest up to the end of the string, .*$. The second part is marked as a reference by the use of the brackets. The result is again piped to sed to remove the line breaks. The script then sleeps for X seconds (in my case 5 seconds). The resulting bash script:

# Printing the names of the columns as first row in file
echo "Time; temp1; temp2; Physical id 0; Core 0; Core 1; Core 2; Core 3; SIO Temp; temp3" > Temperatures.log

while true
do
    # Printing the time and all temperatures to stdout
    echo -n "$(date +"%H:%M:%S"): "
    sensors | sed -n "/°C/p" | sed "s/^[a-zA-Z0-9 ]*:\s*\([0-9.+-]*\).*$/\1/" | sed ':a;N;$!ba;s/\n/;\t/g'

    # Logging time as UNIX time and temperatures to file
    echo -n "$(date +"%s"); " >> Temperatures.log
    sensors | sed -n "/°C/p" | sed "s/^[a-zA-Z0-9 ]*:\s*\([0-9.+-]*\).*$/\1/" | sed ':a;N;$!ba;s/\n/;\t/g' >> Temperatures.log

    # Sleeping for X seconds
    sleep 5
done
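Since the stated goal was real-time plotting with gnuplot, here is a hedged sketch of plotting the first temperature column of the log produced above (the column numbers and time format are assumptions based on the file layout shown; adjust "using" for other columns):

gnuplot -e 'set datafile separator ";";
            set xdata time; set timefmt "%s"; set format x "%H:%M";
            plot "Temperatures.log" every ::1 using 1:2 with lines title "temp1";
            pause -1'

The every ::1 clause skips the header row, and timefmt "%s" interprets column 1 as the UNIX timestamps the script writes.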
I was wondering how to reformat terminal output for logging information. More specifically, I would like to reformat the output of the sensors command from the lm-sensors package and write it to a file. The output looks something like this:

acpitz-virtual-0
Adapter: Virtual device
temp1:        +61.0°C  (crit = +99.0°C)
temp2:        +29.8°C  (crit = +99.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +63.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +62.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +59.0°C  (high = +86.0°C, crit = +100.0°C)
Core 2:         +63.0°C  (high = +86.0°C, crit = +100.0°C)
Core 3:         +61.0°C  (high = +86.0°C, crit = +100.0°C)

radeon-pci-0100
Adapter: PCI adapter
temp1:        +61.5°C

My purpose for reformatting is to later use the data with gnuplot (realtime plotting). So the result should look similar to this:

# Timestamp [hh:mm:ss]  temp1 [°C]  temp2 [°C]  ...
13:45:52                65.0        29.0        .
13:45:53                66.0        28.0        .
13:45:54                64.0        27.0        .
13:45:55                55.0        26.0        .
...                     ...         ...         .

I would like to use this on multiple computers with a different number of sensors, so this would require some sort of loop. But from where to where would one loop, and how does one eliminate the redundant lines (e.g. acpitz-virtual-0, Adapter: Virtual device, ...)? Also, I'm aware of the lm-sensors package's capability to produce graphs. But I would like to realize a homebrew solution and also keep the question more general.
Personalize sensor's output and save it to file
Following the specification, your MB is based on the ITE IT8728 chip, which is not supported by lm-sensors. Sorry.
I'm using Debian Linux squeeze x64 on a Gigabyte 970A-DS3. I originally installed this OS on a motherboard which burned out (an Asus), and I've since changed my motherboard to the Gigabyte. After booting back into Linux, it shows a temperature of only 13.9°C, and only 1 sensor (instead of about 3 or 4 sensors) has been detected. It shows just:

Debx64>sensors
k10temp-pci-00c3
Adapter: PCI adapter
temp1:       +13.9°C  (high = +70.0°C, crit = +86.0°C)

I searched on Google, but nothing there helped me. I tried reinstalling lm_sensors and running sensors-detect again, answering yes everywhere, and nothing worked. Why doesn't it show all of my sensors, and why does it show only 1 sensor with a totally implausible low temperature (13.9°C is an epic fail...)? What can I do to fix this problem? Is there any way to tell Linux "I want to search again for all devices which have any sensors"? I know this is an unusual problem, but I would like to know how to solve it. Thank you all for helping me.
Sensors showing bad values
It depends on what the output of sensors is. If yours is something like mine:

% sensors
k10temp-pci-00c3
Adapter: PCI adapter
temp1:        +44.0°C  (high = +70.0°C)

then you can use the following script, adapting it accordingly. Besides TEMP_STOP and TEMP_START, you should change the regular expression that filters the line from sensors you want to use; it's the parameter to grep, in the temp function.

#!/bin/bash

TEMP_STOP=98
TEMP_START=90

temp() {
    sensors | grep '^temp1:' | sed -e 's/.*: \+\([+-][0-9.]\+\)°C.*$/0\1/'
}

while true; do
    TEMP=$(temp)
    # keep waiting until temp is too hot
    while [ $(echo "$TEMP < $TEMP_STOP" | bc) = 1 ]; do
        sleep 10
        TEMP=$(temp)
    done
    echo temp $TEMP too hot, stopping.
    # now wait for it to cool down...
    while [ $(echo "$TEMP > $TEMP_START" | bc) = 1 ]; do
        sleep 10
        TEMP=$(temp)
    done
    echo ok now, restarting...
done
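The script above only prints messages at the thresholds. To make it actually pause the offending media player, one hedged approach (mplayer is a placeholder process name; substitute your player's) is to add signals next to the two echo lines:

    echo temp $TEMP too hot, stopping.
    pkill -STOP mplayer    # freeze the player's processes without killing them (hypothetical name)

and correspondingly:

    echo ok now, restarting...
    pkill -CONT mplayer    # let them run again

SIGSTOP/SIGCONT suspend and resume the processes in place, so the player keeps its state instead of being restarted from scratch.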
I have a pretty badly ventilated computer whose temperature reaches 100°C on some occasions. The poor thing can't be ventilated any better ("put in a bigger fan" is not a suitable solution). When the CPU reaches 100°C, the machine stops "violently" (it just shuts down). That machine is running Ubuntu 10.10 with lm-sensors-3 (the installed package is lm-sensors 1:3.1.2-6). I know which program is causing the issue (a very demanding media player), and I could actually stop it for a while without causing major disruptions when the temperature reaches 98°C, and start it again when it reaches... let's say 90°C. Is it possible to do something like that directly through lm-sensors, or do I have to create my own process that checks lm-sensors periodically and "does its things" depending on the temperature? Thank you in advance.
Lm-Sensors: Run specific commands when temperature goes above/below limits
The relevant lines from dmesg are:

[  518.172735] usb 1-3: new full-speed USB device number 4 using xhci_hcd
[  518.306677] usb 1-3: New USB device found, idVendor=0403, idProduct=6001
[  518.306686] usb 1-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  518.306689] usb 1-3: Product: FT232R USB UART
[  518.306692] usb 1-3: Manufacturer: FTDI
[  518.306695] usb 1-3: SerialNumber: AK04P01W
[  518.309382] ftdi_sio 1-3:1.0: FTDI USB Serial Device converter detected
[  518.309442] usb 1-3: Detected FT232RL
[  518.309445] usb 1-3: Number of endpoints 2
[  518.309448] usb 1-3: Endpoint 1 MaxPacketSize 64
[  518.309450] usb 1-3: Endpoint 2 MaxPacketSize 64
[  518.309453] usb 1-3: Setting MaxPacketSize 64
[  518.309771] usb 1-3: FTDI USB Serial Device converter now attached to ttyUSB0

These are the relevant lines because, by the timestamps, they belong together as the reaction to what happens when you plug in the device, and they happen long enough after the boot messages that there's no connection to those. As you can see, a new USB device is detected, you are given details of the device, and in reaction to that, the module ftdi_sio is loaded, which provides the special device file /dev/ttyUSB0. If no kernel driver had been loaded, you could hunt (e.g. with Google, or by grepping the kernel source) for the vendor/product combination (0403:6001, also shown in lsusb), and then try to find a kernel driver for this device. The bcm2708 driver mentioned in the other answers isn't relevant at all: that's a driver for the I2C bus e.g. on the Raspberry Pi, and not for your laptop. But we already have a working driver, which just provides a serial interface, and it has no connection to the kernel I2C infrastructure. So lm-sensors, i2cdetect etc. all won't work (unless you write or find an additional driver). The website of the USB-I2C converter you mentioned in the comments explains the protocol to use over the serial link: you send a sequence of bytes, and then optionally receive a sequence of bytes as an answer. The command sequence looks like

<command-byte> <address> <register (0-2 bytes)> <data byte count (0-1 bytes)> <write data>

And the webpage for the SRF02 explains what the registers of the sensor chip look like: 6 registers you can read, and one command register you can write. So, for example, to read the version, you need to read 01 byte from register 00; the default chip address is E0, the LSB is the R/W bit, so instead you use E1 as the address, and the required command for the USB-I2C adapter is 55. So the full sequence you'd send over serial is 55 E1 00 01, and then you'd read one byte as the answer. You can do that from the command line:

$ printf '\x55\xE1\x00\x01' > /dev/ttyUSB0
$ hexdump -n 1 -e '"%02x \n"' < /dev/ttyUSB0

Or you can open /dev/ttyUSB0 in your favorite language, and then just read and write bytes using the commands your language provides.
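One hedged detail to add: before the printf/hexdump pair it is usually necessary to put the tty into raw mode, otherwise the kernel's line discipline may echo, buffer, or translate bytes. A sketch (the baud rate here is an assumption; check your adapter's documentation):

$ stty -F /dev/ttyUSB0 raw -echo -echoe -echok 19200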
I am trying to get my laptop to communicate with my SRF02 sensor, which is using a USB-I2C interface. My laptop is running Debian Jessie. Problem: When I run sudo i2cdetect -y 0 I see no devices at all. This is the same for port 1 but beyond that lots of devices display at random places (eg port 4 shows a nearly full table). At none of the ports < 3 is EX70 taken, which is the device's default location. I have tried auto loading i2c-dev on startup but the problem persists. The module docs say you need the FTDI VCP driver but this should be included in the Linux kernel. I am convinced this is a software issue because I was able to get data using the exact same device and setup from a computer running Windows 8. There are a lot of posts about this already but all of them are specifically Raspberry Pi based, and use the Raspbian and GPIO pins instead of USB. EDIT: Here is a link to the dmesg output just after plugging in the device. The log is too large to post here :P http://pasted.co/38dc9292 Thanks in advance, Max
Enabling I2C on Debian - i2cdetect doesn't show device
Using only jq (twice) and a shell loop:

for pathname in /sys/bus/w1/devices/w1_bus_master1/28-*/; do
    jq -n \
        --arg ID "$(basename "$pathname")" \
        --arg temp "$(cat "$pathname"/temperature)" \
        '{ ID: $ID, temp: $temp }'
done | jq -s '. | map( .temp = (.temp | tonumber / 1000) )'

The loop iterates over the pathnames corresponding to the directories that start with 28- that you mentioned in the question. The loop uses jq to create a single JSON object for each directory, consisting of an ID element and a temp element. The value for ID will simply be the basename of the directory pathname, and the temperature is read from the temperature file in the directory. These separate JSON objects are then piped to a second jq process which creates an array of them using the -s (--slurp) option. It also modifies the temp element of each object by converting it from a string into a number and dividing it by 1000 (this is something you never said anything about, so I'm guessing). The same thing, but using the slightly handier jo utility inside the loop:

for pathname in /sys/bus/w1/devices/w1_bus_master1/28-*/; do
    jo ID="$(basename "$pathname")" \
       temp="$(cat "$pathname"/temperature)"
done | jq -s '. | map( .temp /= 1000 )'

The jo utility additionally detects that temp is a number, so we don't have to convert these from strings later. Both of these loops would create "pretty-printed" JSON like

[
  {
    "ID": "28-00000cbece90",
    "temp": 21.812
  },
  {
    "ID": "28-00000cbece91",
    "temp": 21.812
  }
]

Add the -c (--compact-output) option to the final jq to instead produce compact output like

[{"ID":"28-00000cbece90","temp":21.812},{"ID":"28-00000cbece91","temp":21.812}]
I have files in folders whose names begin with 28-. They are 1-Wire bus sensors that measure temperature. The Raspberry recognizes them via its SPI interface; every probe has its ID (something beginning with 28-), and the RPi creates a tree for every sensor, named after the probe's ID, like

ls /sys/bus/w1/devices/w1_bus_master1 -1
28-00000cbece94/
28-00000cbeeca3/
28-00000cbeedf6/
28-00000cbf87ba/
...

Inside each folder there are two files (among many others) called temperature and name. name contains the probe ID, which is also the folder name; temperature is (... surprise) the temperature. So the ID is both the folder name and the content of the file:

cat /sys/bus/w1/devices/w1_bus_master1/28-00000cc002fa/name
28-00000cc002fa

and the temp is

cat /sys/bus/w1/devices/w1_bus_master1/28-00000cc002fa/temperature
21812

I would like to write a script or compose a sequence of bash commands that ends up yielding an array of JSON objects, like:

[ {"ID": "28-00000cbece94", "temp": 24.712}, {"ID": "28-00000cbeeca3", "temp": 24.735}, <so on> ]

I think awk should be involved, or maybe find -exec, or a simple grep+cat, or even tree, but... Any help? Thanks in advance
Get content of files with same name in subfolders to obtain a JSON array
I’m assuming you’re running Debian 10, but the instructions for later versions are similar. The module you’re after is supported by the kernel version used in Debian 10, but it is not enabled; let’s fix that.

Install the kernel source for the default version in your release:

sudo apt install linux-source

Extract it:

cd /usr/src
tar xf linux-source-*.tar.xz

(assuming there’s only a single linux-source tarball available, which will be the case unless you’ve installed multiple linux-source packages).

Copy the current kernel configuration:

cd linux-source-*/
cp /boot/config-$(uname -r) .config

Enable the configuration for the sht3x module:

make menuconfig

(this might complain about missing tools, such as a compiler; sudo apt install build-essential should fix things). To find which option needs to be enabled, and where it is, press / and enter “SHT3X”. This gives a number of pieces of information:

the option is called SENSORS_SHT3X;
it is listed under “Device Drivers”, “Hardware Monitoring Support”;
the options it depends on are already enabled;
but it is disabled.

Press Enter to exit the search results, go down to “Device Drivers”, press Enter, then go down to “Hardware Monitoring Support”, press Enter again, find the “SHT3x” option, and press M to enable it as a module. Press Tab until “Save” is highlighted, then Enter, confirm the name of the file to write (.config), and select “Exit” several times until you’re back at the prompt.

Finally, build the module:

make drivers/hwmon/sht3x.ko

This might require additional dependencies, at least libelf-dev and libssl-dev (sudo apt install libelf-dev libssl-dev). If all goes well, you’ll end up with a drivers/hwmon/sht3x.ko file which you can load as a module.
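A hedged sketch of that last step, loading the freshly built module (this assumes the extracted source matches your running kernel version, as it should after following the steps above):

sudo insmod drivers/hwmon/sht3x.ko

or, to install it so that modprobe and the usual tooling can find it:

sudo cp drivers/hwmon/sht3x.ko /lib/modules/$(uname -r)/kernel/drivers/hwmon/
sudo depmod -a
sudo modprobe sht3x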
TL;DR: The kernel module sht3x (https://www.kernel.org/doc/html/latest/hwmon/sht3x.html) seems to be missing in a standard debian installation. I need it in order to read an external sensor. How can I install this kernel module? The whole story I try to connect an SHT31 temperature/humidity sensor to my Debian notebook. In order to do so, I flashed an ATTiny85 micro controller to act as i2c-tiny-usb interface. I got this part working - lsusb lists the device as Bus 003 Device 003: ID 0403:c631 Future Technology Devices International, Ltd i2c-tiny-usb interfaceand I also get a promising response from i2cdetect $ sudo i2cdetect -l i2c-3 i2c i915 gmbus dpc I2C adapter i2c-1 i2c i915 gmbus vga I2C adapter i2c-8 i2c i2c-tiny-usb at bus 001 device 017 I2C adapter i2c-6 i2c AUX B/port B I2C adapter i2c-4 i2c i915 gmbus dpb I2C adapter i2c-2 i2c i915 gmbus panel I2C adapter i2c-0 i2c i915 gmbus ssc I2C adapter i2c-7 i2c AUX D/port D I2C adapter i2c-5 i2c i915 gmbus dpd I2C adapter $ sudo i2cdetect 8 WARNING! This program can confuse your I2C bus, cause data loss and worse! I will probe file /dev/i2c-8. I will probe address range 0x08-0x77. Continue? [Y/n] Y 0 1 2 3 4 5 6 7 8 9 a b c d e f 00: -- -- -- -- -- -- -- -- 10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 40: -- -- -- -- -- 45 -- -- -- -- -- -- -- -- -- -- 50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 70: -- -- -- -- -- -- -- -- However, I cannot read sensor data, because the kernel module sht3x is not installed on my (standard Debian) system and is not listed in lsmod. Question How can I install and make use of the sht3x kernel module on my Debian notebook?
hwmon: add missing kernel module
smartctl will show you the min/max temperature of a disk:

tempminmax - Raw Attribute is the disk temperature in Celsius. Info about Min/Max temperature is printed if available. This is the default for Attributes 190 and 194. The recording interval (lifetime, last power cycle, last soft reset) of the min/max values is device specific.

Example:

194 Temperature_Celsius 0x0022 060 060 --- Old_age Always - 40 (Min/Max 10/60)

In this case 10°C is the minimum and 60°C is the maximum temperature the device experienced.
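For example, to pull just that attribute out of the full SMART report (a sketch; /dev/sda is a placeholder for your disk, and reading SMART data needs root):

$ sudo smartctl -A /dev/sda | grep -i temperature
194 Temperature_Celsius 0x0022 060 060 --- Old_age Always - 40 (Min/Max 10/60)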
How can I check a harddisk's maximum temperature throughout its lifespan in Linux? In the Windows software Hard Disk Sentinel, we can check the Maximum Temperature (during entire lifespan): the software displays the current hard disk temperature and logs maximum and average HDD temperatures. This value is available even when a harddisk is connected to a PC for the first time, so I believe this is recorded by a sensor in the harddisk itself, instead of being logged on the PC. Which software / commands could be used to check this value in Linux?
How to check harddisk maximum temperature throughout lifespan?
Well, it turns out all I had to do was load the module with force=1. I dismissed the possibility because it's supposed to be for "unknown vendors" and ASRock is one of the known vendors. Plus, it appeared to recognize the chip. But reading the source code, I have to conclude this one doesn't have ASRock's customer ID that it knows. In case anyone knows anything about this (though perhaps talking to the kernel people would be better at this point), there is now an additional line in the kernel log: [406090.428581] nct6683 nct6683.2576: NCT6683D EC firmware version 1.0 build 07/18/16I also found out the customer ID is 0xe1b (had to tweak the code to write it in the log). Not far off from the one recognized as ASRock's (0xe2c), but maybe it's just coincidence.
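For reference, a hedged sketch of the workaround described above (the nct6683 driver's force parameter bypasses the vendor check; as the kernel itself warns, data may be unusable):

sudo modprobe nct6683 force=1

# make it persistent across reboots
echo "options nct6683 force=1" | sudo tee /etc/modprobe.d/nct6683.conf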
Trying to get hwmon working more fully on a system with an ASRock Z370M Pro4. The coretemp and drivetemp drivers seem to be fine. However, this board also has a Nuvoton NCT6683 chip for the usual voltage, fan speed, and temperature monitoring. This is what sensors-detect comes up with, and when the module loads, it looks (sort of) successful: [ 3.520633] nct6683: Forcibly enabling EC access. Data may be unusable. [ 3.521769] nct6683: Found NCT6683D or compatible chip at 0x2e:0xa10However, it's not appearing as a sensor device, and the reason seems to be that it doesn't get registered in hwmon: $ ls -l /sys/class/hwmon total 0 lrwxrwxrwx 1 root root 0 Jul 12 07:19 hwmon0 -> ../../devices/platform/coretemp.0/hwmon/hwmon0 lrwxrwxrwx 1 root root 0 Jul 12 07:19 hwmon1 -> ../../devices/pci0000:00/0000:00:17.0/ata1/host0/target0:0:0/0:0:0:0/hwmon/hwmon1 lrwxrwxrwx 1 root root 0 Jul 12 07:19 hwmon2 -> ../../devices/pci0000:00/0000:00:17.0/ata2/host1/target1:0:0/1:0:0:0/hwmon/hwmon2 lrwxrwxrwx 1 root root 0 Jul 12 07:19 hwmon3 -> ../../devices/pci0000:00/0000:00:17.0/ata3/host2/target2:0:0/2:0:0:0/hwmon/hwmon3 lrwxrwxrwx 1 root root 0 Jul 12 07:19 hwmon4 -> ../../devices/pci0000:00/0000:00:17.0/ata4/host3/target3:0:0/3:0:0:0/hwmon/hwmon4But why? Could I have misconfigured the kernel somewhere? If I need to debug the hwmon registration, how would I do that? Note that firmware setup is able to get the sensor readings just fine, so I know the hardware works. EDIT: Distro is Gentoo, with Linux version 5.13.2, with custom config. Here is the current config.
hwmon driver working, yet not working
k10temp is the CPU temperature sensor. With older AMD CPUs you get only one value, which isn't even a "real" temperature:

For CPUs older than Family 17h, there is one temperature measurement value, available as temp1_input in sysfs. It is measured in degrees Celsius with a resolution of 1/8th degree. Please note that it is defined as a relative value; to quote the AMD manual:

Tctl is the processor temperature control value, used by the platform to control cooling systems. Tctl is a non-physical temperature on an arbitrary scale measured in degrees. It does not represent an actual physical temperature like die or case temperature. Instead, it specifies the processor temperature relative to the point at which the system must supply the maximum cooling for the processor's specified maximum case temperature and maximum thermal power dissipation.
Yes, I know that is an old chip. Running a sensors-detect with all the optional tests yields nothing. My sensors refuse to read the temperature of the CPU. I know there is a working temperature sensor in there because I can see the correct temperature in the BIOS. Does anyone know which kernel module I have to enable? Or how to find out without sensors-detect? My CPU is a 6 core AMD Phenom II 1090T. Just in case, here is the complete output of lsmod. sensors-detect loaded k10temp, which does not detect the CPU temperature. Module Size Used by binfmt_misc 24576 1 dm_multipath 32768 0 scsi_dh_rdac 16384 0 scsi_dh_emc 16384 0 scsi_dh_alua 20480 0 snd_hda_codec_hdmi 61440 4 snd_hda_codec_realtek 126976 1 snd_hda_codec_generic 81920 1 snd_hda_codec_realtek ledtrig_audio 16384 2 snd_hda_codec_generic,snd_hda_codec_realtek snd_hda_intel 53248 0 snd_intel_dspcfg 28672 1 snd_hda_intel snd_hda_codec 135168 4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_realtek snd_hda_core 90112 5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek edac_mce_amd 32768 0 snd_hwdep 20480 1 snd_hda_codec snd_pcm 106496 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_core joydev 24576 0 input_leds 16384 0 ccp 86016 0 snd_timer 36864 1 snd_pcm snd 90112 8 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek,snd_timer,snd_pcm kvm 663552 0 soundcore 16384 1 snd k10temp 16384 0 mac_hid 16384 0 sch_fq_codel 20480 2 nct6775 69632 0 hwmon_vid 16384 1 nct6775 ip_tables 32768 0 x_tables 40960 1 ip_tables autofs4 45056 2 btrfs 1261568 0 zstd_compress 167936 1 btrfs raid10 57344 0 raid456 155648 0 async_raid6_recov 24576 1 raid456 async_memcpy 20480 2 raid456,async_raid6_recov async_pq 24576 2 raid456,async_raid6_recov async_xor 20480 3 async_pq,raid456,async_raid6_recov async_tx 20480 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov xor 24576 2 async_xor,btrfs raid6_pq 114688 4 async_pq,btrfs,raid456,async_raid6_recov libcrc32c 16384 2 btrfs,raid456 raid1 45056 0 raid0 24576 0 multipath 20480 0 linear 20480 0 nouveau 1949696 1 mxm_wmi 16384 1 nouveau wmi 32768 2 mxm_wmi,nouveau video 49152 1 nouveau i2c_algo_bit 16384 1 nouveau ttm 106496 1 nouveau drm_kms_helper 184320 1 nouveau pata_acpi 16384 0 syscopyarea 16384 1 drm_kms_helper sysfillrect 16384 1 drm_kms_helper sysimgblt 16384 1 drm_kms_helper hid_generic 16384 0 uas 28672 1 fb_sys_fops 16384 1 drm_kms_helper r8169 90112 0 usbhid 57344 0 realtek 24576 1 pata_atiixp 16384 2 usb_storage 77824 1 uas drm 491520 4 drm_kms_helper,ttm,nouveau i2c_piix4 28672 0 hid 131072 2 usbhid,hid_generic ahci 40960 2 libahci 32768 1 ahciFor good measure, I also present the current output of sensors: k10temp-pci-00c3 Adapter: PCI adapter temp1: +36.5°C (high = +70.0°C)nouveau-pci-0100 Adapter: PCI adapter GPU core: 900.00 mV (min = +0.85 V, max = +1.05 V) temp1: +52.0°C (high = +95.0°C, hyst = +3.0°C) (crit = +105.0°C, hyst = +5.0°C) (emerg = +135.0°C, hyst = +5.0°C)nct6776-isa-0290 Adapter: ISA adapter Vcore: 1.20 V (min = +0.00 V, max = +1.74 V) in1: 192.00 mV (min = +0.00 V, max = +0.00 V) ALARM AVCC: 3.30 V (min = +2.98 V, max = +3.63 V) +3.3V: 3.28 V (min = +2.98 V, max = +3.63 V) in4: 528.00 mV (min = +0.00 V, max = +0.00 V) ALARM in5: 1.67 V (min = +0.00 V, max = +0.00 V) ALARM in6: 1.83 V (min = +0.00 V, max = +0.00 V) ALARM 3VSB: 3.44 V (min = +2.98 V, max = +3.63 V) Vbat: 3.39 V (min = +2.70 V, max = +3.63 V) fan1: 865 RPM (min = 0 RPM) fan2: 3890 RPM (min = 
0 RPM) fan3: 0 RPM (min = 0 RPM) fan4: 0 RPM (min = 0 RPM) fan5: 0 RPM (min = 0 RPM) SYSTIN: +40.0°C (high = +0.0°C, hyst = +0.0°C) ALARM sensor = thermistor CPUTIN: +45.0°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor AUXTIN: -26.0°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor PCH_CHIP_TEMP: +0.0°C PCH_CPU_TEMP: +0.0°C PCH_MCH_TEMP: +0.0°C intrusion0: OK intrusion1: OK beep_enable: disabledThe nouveau entry is of course a graphics card.
How to get sensors-detect to detect an AMD Phenom II temperature sensor
I think the discrepancy can be explained by the three points below. First, screenfetch shows only one value, which suggests it's an average value. Second, keep in mind there are different sensors in a system (still talking about the CPU): one or more sensors on the motherboard "close to" the CPU, (usually) one sensor for the CPU package, and finally one sensor for each core of the CPU. Third, and last but not least, all these values can be arbitrarily corrected by the software: the piece of hardware which reads the temperature (the sensor diode) cannot physically sit in the same position as the hardware whose temperature we want to measure (simply because that piece of hardware, the CPU core in this case, is already there). So, in order to make the temperature read by the sensor accurate, the software increases it by a delta, a fixed value specific to that CPU model. NB: I'm not an expert, so if I missed something and/or made a mistake, please correct me!
When I run screenfetch my cpu temperature is always shown higher than what xfce4-sensors show. note: xfce4-sensors and lm-sensors output the same temperature so I omitted lm-sensors. Here is a screenshot of screenfetch and xfce4-sensors at exactly the same time. Can anyone explain to me which temperature is the correct one and why they are different ? Also what is package id 0 ?
Why is the cpu temperature different on screenfetch and xfce4-sensors and lm-sensors
Based on your output of sensors, it appears that lm_sensors does not detect any fan speed reading. You should try running sensors-detect and answering yes to all questions, to hopefully detect a sensor that wasn't previously configured. If not, then it simply won't be possible to control those fans. The BIOS controls the fans with PWM, but its control is usually quite limited, as is the configuration it offers for it.
I have a Samsung NP900X3E laptop and I would like to control the fans as one of them is making weird noise. I'm running Ubuntu 14.04.4 LTS with "Linux laptop 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux" sensors reports: root@laptop:/# sensors acpitz-virtual-0 Adapter: Virtual device temp1: +54.0°C (crit = +106.0°C) temp2: +29.8°C (crit = +106.0°C)coretemp-isa-0000 Adapter: ISA adapter Physical id 0: +53.0°C (high = +87.0°C, crit = +105.0°C) Core 0: +47.0°C (high = +87.0°C, crit = +105.0°C) Core 1: +49.0°C (high = +87.0°C, crit = +105.0°C)sensors-detect reports: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9)lspci reports: root@laptop:/# lspci 00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04) 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4) 00:1c.3 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 4 (rev c4) 00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4) 00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation HM75 Express Chipset LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04) 01:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06) 03:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)Any idea how I could control the two fans? The problem is that when one fan is needed to cool down the CPU, it's always the same one that is used. This fan is "tired" while the other works great when two fans are needed. Ideally, I would like to "permute" in software the usage of the two fans.
Control fans of Samsung NP900X3E
The solution in this case was to modify /etc/default/grub so as to contain this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=!Windows 2020". The acpi_osi parameter tells the kernel to handle ACPI events as if they were happening under the OS given as the value.
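For completeness, a hedged sketch of applying the change on a Debian-style system. Note that a kernel parameter whose value contains a space normally needs inner escaped quotes in /etc/default/grub (e.g. acpi_osi=\"!Windows 2020\"), so it is worth verifying what actually reaches the kernel:

sudo update-grub    # regenerate /boot/grub/grub.cfg from /etc/default/grub
sudo reboot

cat /proc/cmdline   # confirm the acpi_osi parameter made it onto the command line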
I'm running Debian Testing on a Dell Latitude 7480. I've been having a lot of freezing issues, and I have finally narrowed it down to an overheating problem. On battery, I can work for 1hr+ with no problem, and sometimes the system will freeze: mouse stops moving, the backlight of the keyboard doesn't power off, I cannot SSH into this machine. On AC power the same occurs after 15-20 minutes after plugging in to power; the bottom of the laptop is quite warm when this happens (not scalding hot, just warmer than it should be). I am currently at this machine on AC and it hasn't frozen after 21 minutes, but I have a USB fan connected to it. The problem is that the fan never starts. I ran watch sensors during the whole session yesterday and the temperature does vary; however, the fan speed always changes to a positive number during a watch cycle (2 seconds) and goes back to zero after one or two; so the system reads a spinning fan for about 2-4 seconds, then it stops, but I never hear it. I know the fan works because I ran the onboard diagnostic tool and the fan not only started but I could hear it at full speed at some point during the memory test. EDIT: I forgot to mention that at some point I ran sensors-detect, which suggested I added the modules fan and coretemp to /etc/modules, which I did. When I run lsmod, both modules always display 0 on the Used by column. Yesterday the system froze at 20:15, so today I checked /var/log/syslog and I found this: Mar 9 20:15:01 host CRON[1203]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)I searched for this and all I got was this post, but I cannot see it has any relation to my problem (I do have Apache installed but this is not a server, it's a laptop, and I don't run mysql here; also, the CPU meters don't go up, and reboot is not slower than it usually is). There are many other lines like this, but I cannot recall all of them having happened when the system froze; I'm sure not all of them did, because there are more log lines after some of them that indicate the machine was still running. 
The only other information I can gather is the following, also from /var/log/syslog: Mar 10 18:45:20 host sensors[600]: dell_smm-isa-0000 Mar 10 18:45:20 host sensors[600]: Adapter: ISA adapter Mar 10 18:45:20 host sensors[600]: Processor Fan: 0 RPM (min = 0 RPM, max = 6600 RPM) Mar 10 18:45:20 host sensors[600]: CPU: +39.0°C Mar 10 18:45:20 host sensors[600]: Ambient: +24.0°C Mar 10 18:45:20 host sensors[600]: SODIMM: +23.0°C Mar 10 18:45:20 host sensors[600]: Other: +24.0°C Mar 10 18:45:20 host sensors[600]: nvme-pci-3c00 Mar 10 18:45:20 host sensors[600]: Adapter: PCI adapter Mar 10 18:45:20 host sensors[600]: Composite: +23.9°C (low = -273.1°C, high = +84.8°C) Mar 10 18:45:20 host sensors[600]: (crit = +89.8°C) Mar 10 18:45:20 host sensors[600]: acpitz-acpi-0 Mar 10 18:45:20 host sensors[600]: Adapter: ACPI interface Mar 10 18:45:20 host sensors[600]: temp1: +25.0°C (crit = +107.0°C)Mar 10 18:45:20 host fancontrol[608]: Settings for hwmon6/pwm1: Mar 10 18:45:20 host fancontrol[608]: Depends on hwmon6/temp1_input Mar 10 18:45:20 host fancontrol[608]: Controls hwmon6/fan1_input Mar 10 18:45:20 host fancontrol[608]: MINTEMP=20 Mar 10 18:45:20 host fancontrol[608]: MAXTEMP=60 Mar 10 18:45:20 host fancontrol[608]: MINSTART=150 Mar 10 18:45:20 host fancontrol[608]: MINSTOP=100 Mar 10 18:45:20 host fancontrol[608]: MINPWM=0 Mar 10 18:45:20 host fancontrol[608]: MAXPWM=255 Mar 10 18:45:20 host fancontrol[608]: AVERAGE=1 Mar 10 18:45:20 host systemd[1]: Started fan speed regulator.Mar 10 18:45:20 host fancontrol[787]: Common settings: Mar 10 18:45:20 host fancontrol[787]: INTERVAL=10 Mar 10 18:45:20 host ModemManager[795]: <info> ModemManager (version 1.18.6) starting in system bus... Mar 10 18:45:20 host fancontrol[787]: Settings for hwmon6/pwm1: Mar 10 18:45:20 host fancontrol[787]: Depends on hwmon6/temp1_input Mar 10 18:45:20 host fancontrol[787]: Controls hwmon6/fan1_input Mar 10 18:45:20 host fancontrol[787]: MINTEMP=20 Mar 10 18:45:20 host fancontrol[787]: MAXTEMP=60 Mar 10 18:45:20 host fancontrol[787]: MINSTART=150 Mar 10 18:45:20 host fancontrol[787]: MINSTOP=100 Mar 10 18:45:20 host fancontrol[787]: MINPWM=0 Mar 10 18:45:20 host fancontrol[787]: MAXPWM=255 Mar 10 18:45:20 host fancontrol[787]: AVERAGE=1The two blocks above are not consecutive, but it is the relevant information. Here are the contents of some files I deemed relevant: cat /sys/devices/platform/dell_smm_hwmon/driver_override (null)cat /sys/devices/platform/dell_smm_hwmon/uevent DRIVER=dell_smm_hwmon MODALIAS=platform:dell_smm_hwmoncat fancontrol # Configuration file generated by pwmconfig, changes will be lost INTERVAL=10 DEVPATH=hwmon6=devices/platform/dell_smm_hwmon DEVNAME=hwmon6=dell_smm FCTEMPS=hwmon6/pwm1=hwmon6/temp1_input FCFANS= hwmon6/pwm1=hwmon6/fan1_input MINTEMP=hwmon6/pwm1=20 MAXTEMP=hwmon6/pwm1=60 MINSTART=hwmon6/pwm1=150 MINSTOP=hwmon6/pwm1=100The last one is already on the syslog block above, but I reproduce it here nonetheless. All solutions I've encountered to no fan on Linux suggest I install fancontrol and then run pwmconfig. The first time I tried I got an error telling me there was no /etc/fancontrol.conf file; I tried running this command while a USB fan was plugged and it worked. To be on the safe side, I just pressed Enter to generate the config file with the default parameters, but I still cannot hear the fans kicking in. As I said above, the sensors program tells me the speed changes every 2-4 seconds, but the fan is never audible and it doesn't stay on. 
The fan works on Windows (this laptop used to have it but I replaced the SSD with a new one, but kept and didn't format the old one), and as I said above, also in the onboard diagnostic tool. I've also run a Puppy Linux on a USB stick and it doesn't have this problem, although I didn't hear the fan working either. Is there a way to properly configure fancontrol to solve this? Are there any other options? I can very well use the laptop with a fan plugged in, but that's not the sort of solution I'm looking for. Thanks!
Debian Testing freezing because of overheating, fan sensors give confusing info
The site is down but the mailing list is active; http://comments.gmane.org/gmane.linux.drivers.sensors/38361
I can't load the www.lm-sensors.org webpage. I'm looking for documentation/information on lm-sensors to see if it is easy or hard to update it to detect Temper-USB thermometers. It would be nice if we could get it to detect Temper-USB thermometers so we could chart it using PSensor. Is the project dead?
Is the lm-sensors project dead? www.lm-sensors.org doesn't load [closed]
It's totally expected for new AMD and Intel CPUs because of their heat density: billions of very small transistors are crammed into a very small space. If you don't want to see such high temperatures or temperature fluctuations, disable turbo boost:

echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost

It might be different for Intel CPUs; please consult https://askubuntu.com/a/620114
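A hedged note on the Intel side: when the intel_pstate driver is active, the knob lives in a different sysfs file (the path below only exists on such systems, and the polarity is inverted relative to the AMD boost file):

echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo   # 1 disables turbo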
How reliable is the program lm-sensors in Linux? When I run watch sensors, I see large skips of temperature, as high as 10 degrees Celsius within the refresh interval of 2 seconds. The two readings below show this, with two screenshots taken 2 seconds apart, with Core 1 jumping 8 degrees down between those. Reading 1: Every 2.0s: sensors Thu Nov 19 14:02:41 2020iwlwifi_1-virtual-0 Adapter: Virtual device temp1: +66.0°C acpitz-virtual-0 Adapter: Virtual device temp1: +59.0°C coretemp-isa-0000 Adapter: ISA adapter Package id 0: +67.0°C (high = +100.0°C, crit = +100.0°C) Core 0: +61.0°C (high = +100.0°C, crit = +100.0°C) Core 1: +67.0°C (high = +100.0°C, crit = +100.0°C) Core 2: +60.0°C (high = +100.0°C, crit = +100.0°C) Core 3: +59.0°C (high = +100.0°C, crit = +100.0°C)pch_skylake-virtual-0 Adapter: Virtual device temp1: +55.5°C Reading 2, taken 2 seconds later: Every 2.0s: sensors Thu Nov 19 14:02:43 2020iwlwifi_1-virtual-0 Adapter: Virtual device temp1: +65.0°C acpitz-virtual-0 Adapter: Virtual device temp1: +61.0°C coretemp-isa-0000 Adapter: ISA adapter Package id 0: +59.0°C (high = +100.0°C, crit = +100.0°C) Core 0: +59.0°C (high = +100.0°C, crit = +100.0°C) Core 1: +59.0°C (high = +100.0°C, crit = +100.0°C) Core 2: +58.0°C (high = +100.0°C, crit = +100.0°C) Core 3: +57.0°C (high = +100.0°C, crit = +100.0°C)pch_skylake-virtual-0 Adapter: Virtual device temp1: +55.5°C
How reliable is the program `lm-sensors` in Linux?
The core number comes from the cpu_core_id variable in the struct temp_data in the coretemp driver module. In its source code, cpu_core_id is described like this: * @cpu_core_id: The CPU Core from which temperature values should be read * This value is passed as "id" field to rdmsr/wrmsr functions.The rdmsr and wrmsr are machine code instructions to read/write Model-Specific Registers in the specified processor. The coretemp module uses these instructions through functions defined in arch/x86/lib/msr-smp.c. These functions just pass the CPU/core ID field through as-is, so the IDs shown are exactly the IDs used by your motherboard and CPU(s). If your motherboard had 4 CPU sockets with only one socket populated, the firmware might have configured to assign the ID numbers to each socket in turn, so the IDs that would belong to the empty sockets are just left unused. But in your case, there is a sequence of four contiguous core IDs at the end (36 .. 39), so this might be something different. Maybe this is a single processor which has two types of cores, and one type of cores is numbered with gaps in the numbering (0, 4, 8 ...) and the other type with no gaps (36 .. 39)? To know more, it would be necessary to identify the exact processor model (e.g. using the output of lscpu | head -14), and then study the technical documentation of that processor model to see how the core IDs are assigned at the hardware/microcode level. If the motherboard/firmware cannot dictate the assignment of core IDs, then one could guess that the CPU manufacturer might be planning a next generation of processors with more cores of the first type (i.e. with the gaps in the numbering either partially or completely filled). But this is just a guess, and the manufacturer's plans might change anyway...
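To see this mapping on your own machine, one simple sketch is to pair the processor and core id fields from /proc/cpuinfo:

awk -F: '/^processor/ { p = $2 } /^core id/ { printf "cpu%d -> core %d\n", p, $2 }' /proc/cpuinfo

Each logical CPU's entry lists the hardware core id it belongs to, so gaps and hyperthread sharing become visible directly.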
The core number is 0,4,8,.....39 in the sensors command. Why not 0,1,2,3,4.....? foo@foo-linux:~$ sensors coretemp-isa-0000 Adapter: ISA adapter Package id 0: +73.0°C (high = +80.0°C, crit = +100.0°C) Core 0: +46.0°C (high = +80.0°C, crit = +100.0°C) Core 4: +50.0°C (high = +80.0°C, crit = +100.0°C) Core 8: +52.0°C (high = +80.0°C, crit = +100.0°C) Core 12: +47.0°C (high = +80.0°C, crit = +100.0°C) Core 16: +73.0°C (high = +80.0°C, crit = +100.0°C) Core 20: +50.0°C (high = +80.0°C, crit = +100.0°C) Core 24: +58.0°C (high = +80.0°C, crit = +100.0°C) Core 28: +52.0°C (high = +80.0°C, crit = +100.0°C) Core 36: +48.0°C (high = +80.0°C, crit = +100.0°C) Core 37: +48.0°C (high = +80.0°C, crit = +100.0°C) Core 38: +48.0°C (high = +80.0°C, crit = +100.0°C) Core 39: +48.0°C (high = +80.0°C, crit = +100.0°C)update again This is a 12th Gen Intel(R) Core(TM) i7-12700 This is a PC, not a server, with only 1 CPU socket. update foo@foo-linux:~$ cat /proc/cpuinfo | grep -i apicid apicid : 0 initial apicid : 0 apicid : 1 initial apicid : 1 apicid : 8 initial apicid : 8 apicid : 9 initial apicid : 9 apicid : 16 initial apicid : 16 apicid : 17 initial apicid : 17 apicid : 24 initial apicid : 24 apicid : 25 initial apicid : 25 apicid : 32 initial apicid : 32 apicid : 33 initial apicid : 33 apicid : 40 initial apicid : 40 apicid : 41 initial apicid : 41 apicid : 48 initial apicid : 48 apicid : 49 initial apicid : 49 apicid : 56 initial apicid : 56 apicid : 57 initial apicid : 57 apicid : 72 initial apicid : 72 apicid : 74 initial apicid : 74 apicid : 76 initial apicid : 76 apicid : 78 initial apicid : 78
Why aren't the CPU core numbers in sensors output consecutive?
The k10temp driver only reports what it's capable of reporting and individual cores temperatures and wattage are not currently available/implemented. Patches are welcome (but that doesn't mean they will be merged).Shouldn't k10temp be reporting individual core temperatures like coretemp for Intel CPUs?No. "Should" doesn't apply to Linux drivers in any meaningful way as they are too often written without any input, support or specs from the OEM. If you want full reporting you'll have to run Windows and HWiNFO64. The latter is a proprietary product, so again you cannot expect to ever see it ported under Linux. Even porting it to Linux would be problematic since it needs direct access to hardware and that often means you'll have to disable existing native drivers for the same hardware. Here are two out of tree projects which provide a lot more data than k10temp:https://github.com/ocerman/zenpower https://github.com/leogx9r/ryzen_smuIt's unlikely you'll ever see them merged into the Linux kernel.
This is the output from sensors on my machine, Ryzen 5 3600X on a Biostar B450MH: amdgpu-pci-0a00 Adapter: PCI adapter vddgfx: 725.00 mV fan1: 0 RPM (min = 0 RPM, max = 3630 RPM) edge: +45.0°C (crit = +100.0°C, hyst = -273.1°C) (emerg = +105.0°C) junction: +45.0°C (crit = +110.0°C, hyst = -273.1°C) (emerg = +115.0°C) mem: +46.0°C (crit = +105.0°C, hyst = -273.1°C) (emerg = +110.0°C) power1: 10.00 W (cap = 190.00 W)acpitz-acpi-0 Adapter: ACPI interface temp1: +38.0°C (crit = +127.0°C)k10temp-pci-00c3 Adapter: PCI adapter Tctl: +38.1°C Tdie: +38.1°C Tccd1: +39.5°CI ran sensors-detect prior to this and allowed all checks to be made. Shouldn't k10temp be reporting individual core temperatures like coretemp for Intel CPUs?
lm_sensors not reporting individual core temps on AM4/B450
Try this:

sensors | awk -F '(' '/^Core/{gsub("[[:space:]]+"," "); printf "%s\t", $1}'

-F '(' uses ( as the field delimiter;
/^Core/ extracts only the lines which start with 'Core';
gsub("[[:space:]]+"," ") replaces runs of successive spaces with a single space, as per the expected result;
"%s\t" prints all results on the same line, tab-delimited.
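If you prefer the one-core-per-line layout shown in the question ("Core n°1 : 63°C"), here is a hedged variation. It relies on awk's numeric coercion: adding 0 to a field like +66.0°C yields the leading number, and ++n renumbers the cores from 1:

sensors | awk '/^Core/ { printf "Core n°%d : %.0f°C\n", ++n, $3+0 }'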
I'm trying to retrieve only the temp of the 4 cores, to display them into my terminal (I needed them separated). My original output is : (OC) √ ~ $ sensors ~ 9:24:24 coretemp-isa-0000 Adapter: ISA adapter Package id 0: +68.0°C (high = +100.0°C, crit = +100.0°C) Core 0: +66.0°C (high = +100.0°C, crit = +100.0°C) Core 1: +65.0°C (high = +100.0°C, crit = +100.0°C) Core 2: +64.0°C (high = +100.0°C, crit = +100.0°C) Core 3: +66.0°C (high = +100.0°C, crit = +100.0°C)BAT0-acpi-0 Adapter: ACPI interface in0: 12.98 V curr1: 1000.00 uA dell_smm-virtual-0 Adapter: Virtual device fan1: 3757 RPMacpitz-acpi-0 Adapter: ACPI interface temp1: +27.8°C (crit = +119.0°C)I tried with awk, but not enough and I didn't know how I could retrieve the temp and separate them to have a result like : Core n°1 : 63°C Core n°2 : 64°C Core n°3 : 67°C Core n°4 : 85°C
Get only the core temps from sensors
The scaling function used here uses SI prefixes, where “M” corresponds to 10⁶, so “4.29 MW” means “4.29 megawatts” (and your system is presumably reporting incorrect values, or sensors is mis-interpreting them).
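As a hedged cross-check, the raw value can be read straight from sysfs, where hwmon drivers report power in microwatts (the hwmon index below is an assumption; match it to the power_meter device on your system):

cat /sys/class/hwmon/hwmon0/power1_average   # microwatts; divide by 10⁶ for watts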
I installed the lm-sensors on my Ubuntu platform in order to check the temperature of my processors and possibly some other info. When I run the command, I see the following: alexis~$ sensors power_meter-acpi-0 Adapter: ACPI interface power1: 4.29 MW (interval = 1.00 s)coretemp-isa-0000 Adapter: ISA adapter Package id 0: +41.0°C (high = +81.0°C, crit = +91.0°C) Core 0: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 1: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 2: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 3: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 4: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 5: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 6: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 7: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 8: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 9: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 10: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 11: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 12: +41.0°C (high = +81.0°C, crit = +91.0°C) Core 13: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 14: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 15: +39.0°C (high = +81.0°C, crit = +91.0°C)coretemp-isa-0001 Adapter: ISA adapter Package id 1: +41.0°C (high = +81.0°C, crit = +91.0°C) Core 0: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 1: +37.0°C (high = +81.0°C, crit = +91.0°C) Core 2: +37.0°C (high = +81.0°C, crit = +91.0°C) Core 3: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 4: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 5: +37.0°C (high = +81.0°C, crit = +91.0°C) Core 6: +40.0°C (high = +81.0°C, crit = +91.0°C) Core 7: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 8: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 9: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 10: +39.0°C (high = +81.0°C, crit = +91.0°C) Core 11: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 12: +37.0°C (high = +81.0°C, crit = +91.0°C) Core 13: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 14: +38.0°C (high = +81.0°C, crit = +91.0°C) Core 15: +38.0°C (high = +81.0°C, crit = +91.0°C)I'm wondering about the first line: power1: 4.29 MW (interval = 1.00 s)What does 4.29 MW stand for in this context?
What does "power1: 4.29 MW (interval = 1.00 s)" mean? That is, what is the "MW" unit?
Fan control, especially for old hardware like yours, is quite an obscure matter on Linux; there are multiple variables to take into account, e.g.:

kernel version;
BIOS version;
BIOS settings;
and their combination.

Personally I have never had such a problem, rather the opposite: a fan running constantly at 100% for no reason... but in that case it was "only" annoying. Coming back to your question:

First, I would power off the laptop, because high temperature can damage the hardware.
Second, I would not rely on temperature readings but on acoustic noise to verify that the fan is running: can I hear the fan?
Third, I would search for hardware-specific (that is, specific to your laptop model/manufacturer) fan control software, e.g. https://sourceforge.net/projects/fnfx/
I am trying to boot AntiX LiveCD on my old Toshiba A200 laptop with 2 GB of RAM. But when I accidentally launched the "sensors" in the terminal, I saw that the CPU temperature was 70°C! I turned off the laptop and turned it on again, and the fan revved up to the maximum. It turns out that antiX stops the fan? What should I do in this case?
AntiX linux disable laptop fan
lm-sensors ships with a single configuration file, /etc/sensors3.conf, which has some definitions (allowed minimums and maximums) for certain chips. Since lm-sensors has no way of knowing which exact AMD K10-compatible CPU you're running, the developers cannot add an entry to the configuration that works for everyone. You can probably find suitable limits on the net and add them to e.g. /etc/sensors.d/k10temp.conf:

chip "k10temp-*"
    set in1_min ?
    set in1_max ?
    set in2_min ?
    set in2_max ?
    set in3_min ?
    set in3_max ?
    set in4_min ?
    set in4_max ?

For examples you can check /etc/sensors3.conf or https://github.com/lm-sensors/lm-sensors/tree/master/configs. There's no real need to specify minimums and maximums, though, because modern CPUs have safeguards against overheating, and I believe you're not going to use liquid nitrogen to freeze your CPU.
I assembled a brand new Ryzen-based workstation. The temperature sensors returned by sensors seem to be working fine, but those of the CPU do not specify low, crit, and high thresholds.

> sensors
nvme-pci-0100
Adapter: PCI adapter
Composite: +36.9°C  (low = -20.1°C, high = +74.8°C) (crit = +79.8°C)

k10temp-pci-00c3
Adapter: PCI adapter
Tctl: +47.8°C
Tdie: +47.8°C
Tccd1: +31.8°C
Tccd2: +31.8°C

Is this normal? Should I set this manually somewhere? What are acceptable temperature ranges for these 3 sensors? I am using Debian bullseye.
No crit high low returned by sensors - is this normal?
I would try

while true
do
    echo -n "$(date +"%H:%M:%S"): "
    echo -n "$(sensors | grep Tdie) "
    awk '$2 == "MHz" { if (c < $4) c = $4 } END { printf "cpu MHz %s\n", c }' /proc/cpuinfo
    sleep 1
done

On my box the CPU frequency changes from 998 to 1200 MHz, and I am not sure sort -r will behave as expected: as strings, "900" sorts after "1200".
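As a side note, the lexical-sort problem in the original track_mhz.sh can also be avoided by telling sort to compare numerically on the value after the colon. A sketch, assuming the "cpu MHz : 4015.803" line format shown in the question:

grep "MHz" /proc/cpuinfo | sort -t: -k2 -rn | head -1

-t: sets : as the field separator, -k2 sorts on the second field, and -rn makes the comparison numeric and descending, so the highest frequency comes first.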
I have 2 scripts:

track_temps.sh

while true
do
    echo -n "$(date +"%H:%M:%S"): "
    sensors | grep Tdie
    # Sleeping for X seconds
    sleep 1
done

track_mhz.sh

while true
do
    # Printing the time and all temperatures to stdout
    echo -n "$(date +"%H:%M:%S"): "
    cat /proc/cpuinfo | grep "MH" | sort -r | head -1
    # Sleeping for X seconds
    sleep 1
done

The output from track_temps.sh looks like:

09:31:44: Tdie: +69.1°C (high = +70.0°C)
09:31:45: Tdie: +69.1°C (high = +70.0°C)
09:31:46: Tdie: +69.1°C (high = +70.0°C)

The output from track_mhz.sh looks like:

09:32:01: cpu MHz : 4015.803
09:32:02: cpu MHz : 4008.034
09:32:03: cpu MHz : 4028.516

I would like to merge the output, so that it looks like:

09:31:44: Tdie: +69.1°C (high = +70.0°C) cpu MHz: 4015.803
09:31:45: Tdie: +69.1°C (high = +70.0°C) cpu MHz: 4008.034
09:31:46: Tdie: +69.1°C (high = +70.0°C) cpu MHz: 4028.516

Even better, would be
How to merge the output of these 2 tracking scripts
You said you've checked that the fan on the GPU actually works, so the "0 RPM" is either a sensor fault or the driver doesn't actually know how to read the fan speed from this particular GPU model. Or perhaps the GPU manufacturer has chosen to use a 2-wire fan in this model, so there'll be no easy way to monitor the fan speed.

Yes, some GPUs tend to run rather hot. But as far as I know, the "high", "crit" and "emerg" set-points are determined automatically by the nouveau driver using the information stored in the GPU firmware, so apparently your card is fine up to 95 degrees Celsius. You still have a margin of about ten degrees even when playing videos, so it's OK.

The temperature and fan speed readings in question are produced by the nouveau driver, which handles only NVidia GPUs, so unless you also have a NVidia GPU on the motherboard, it must be the GPU on the card. And you have coretemp with a large number of cores displayed, which indicates a modern Intel or AMD processor, each of which would normally have an iGPU of the same manufacturer, so having a NVidia GPU on the motherboard is unlikely.
Not sure if I should be concerned. The temp is not considered high. But it is much higher than anything else. Also if I play a video the GPU temp is 85C. And the fan shows 0 rpm. Never noticed that before. Not sure if that is a problem? Putting my hand in there I could feel the air from the fan on the GPU on a card. Not sure if they are referring to a GPU on the motherboard?
Running sensors I get gpu temp of 72C and fan 0 rpm... bad?
You can use the following commands for the same:

Method 1 (md5, sha256, sha512)

openssl passwd -6 -salt xyz yourpass

Note: passing -1 will generate an MD5 password, -5 a SHA256 one and -6 a SHA512 one (recommended).

Method 2 (md5, sha256, sha512)

mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.

Method 3 (des, md5, sha256, sha512)

As @tink suggested, we can update the password using chpasswd:

echo "username:password" | chpasswd

Or you can use an already-encrypted password with chpasswd. First generate it, for instance with Perl's crypt, which takes the plaintext and a salt (the $6$ prefix selects SHA-512):

perl -e 'print crypt("YourPasswd","\$6\$salt\$"), "\n"'

Then later you can use the generated password to update /etc/shadow:

echo "username:encryptedPassWd" | chpasswd -e

The encrypted password can also be used to create a new user with this password, for example:

useradd -p 'encryptedPassWd' username
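As a follow-up example, a hash generated this way can be applied to an existing account in one step. A sketch: usermod -p expects the already-encrypted string, and careful quoting matters because the hash contains $ characters.

usermod -p "$(openssl passwd -6 yourpass)" username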
I need to manually edit /etc/shadow to change the root password inside of a virtual machine image. Is there a command-line tool that takes a password and generates an /etc/shadow compatible password hash on standard out?
Manually generate password for /etc/shadow
Both "!" and "!!" being present in the password field mean it is not possible to login to the account using a password. As can be read from the documentation of RHEL-4, the "!!" in the shadow-password field means the account of a user has been created, but has not yet been given a password. The documentation states (possibly erroneously) that until being given an initial password by a sysadmin, it is locked by default. However, as others have noted, and as the man pages indicate for later versions of RHEL-7, it is possible a user may still log on to the account through other means, such as via SSH using public/private key authentication.
The second field in the Linux /etc/shadow file represents a password. However, what we have seen is that:Some of the password fields may have a single exclamation <account>:!:.....Some of the password fields may have a double exclamation <account>:!!:.....Some of the password fields may have an asterisk sign <account>:*:.....By some research on internet and through this thread, I can understand that * means password never established, ! means locked. Can someone explain what does double exclamation (!!) mean? and how is it different from (!)?
Difference between ! vs !! vs * in /etc/shadow
For the early history of Unix password storage, read Robert Morris and Ken Thompson's Password Security: A Case History. They explain why and how early Unix systems acquired most of the features that are still seen today as the important features of password storage (but done better).

The first Unix systems stored passwords in plaintext. Unix Third Edition introduced the crypt function which hashes the password. It's described as “encryption” rather than “hashing” because modern cryptographic terminology wasn't established yet and it used an encryption algorithm, albeit in an unconventional way. Rather than encrypt the password with a key, which would be trivial to undo when you have the key (which would have to be stored on the system), they use the password as the key.

When Unix switched from an earlier cipher to the then-modern DES, it was also made slower by iterating DES multiple times. I don't know exactly when that happened: V6? V7?

Merely hashing the password is vulnerable to multi-target attacks: hash all the most common passwords once and for all, and look in the password table for a match. Including a salt in the hashing mechanism, where each account has a unique salt, defeats this precomputation. Unix acquired a salt in Seventh Edition in 1979. Unix also acquired password complexity rules such as a minimum length in the 1970s.

Originally the password hash was in the publicly-readable file /etc/passwd. Putting the hash in a separate file /etc/shadow that only the system (and the system administrator) could access was one of the many innovations to come from Sun, dating from around SunOS 4 in the mid-1980s. It spread out gradually to other Unix variants (partly via the third party shadow suite whose descendant is still used on Linux today) and wasn't available everywhere until the mid-1990s or so.

Over the years, there have been improvements to the hashing algorithm. The biggest jump was Poul-Henning Kamp's MD5-based algorithm in 1994, which replaced the DES-based algorithm by one with a better design. It removed the limitation to 8 password characters and 2 salt characters and had increased slowness. See IEEE's Developing with open source software, Jan–Feb. 2004, p.7–8. The SHA-2-based algorithms that are the de facto standard today are based on the same principle, but with slightly better internal design and, most importantly, a configurable slowness factor.
When did Unix move away from storing clear text passwords in passwd? Also, when was the shadow file introduced?
When did Unix stop storing passwords in clear text?
Python:

python3 -c 'import crypt; print(crypt.crypt("password", "$6$saltsalt$"))'

(on Python 2 the equivalent was python -c 'import crypt; print crypt.crypt("password", "$6$saltsalt$")')

Perl:

perl -e 'print crypt("password","\$6\$saltsalt\$") . "\n"'
In /etc/shadow file there are encrypted password. Encrypted password is no longer crypt(3) or md5 "type 1" format. (according to this previous answer) Now I have a $6$somesalt$someveryverylongencryptedpasswdas entry. I can no longer use openssl passwd -1 -salt salt hello-world $1$salt$pJUW3ztI6C1N/anHwD6MB0to generate encrypted passwd. Any equivalent like (non existing) .. ? openssl passwd -6 -salt salt hello-world
/etc/shadow : how to generate $6$ 's encrypted password? [duplicate]
You are looking for passwd -l user. From man passwd:

Options:
[...]
-l, --lock
    lock the password of the named account. This option disables a password by changing it to a value which matches no possible encrypted value (it adds a '!' at the beginning of the password).
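For example (alice is a stand-in account name; after locking, the second field in /etc/shadow gains a leading !):

passwd -l alice
grep '^alice:' /etc/shadow    # alice:!$6$...
passwd -u alice               # unlocks the account again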
Based on /etc/shadow(5) documentation on the second (password) field:encrypted password If the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means).My question is whether there is a linux command to disable the user password,i.e. set a "*" or a "!" on password field.
Disable password on linux user with command
Each password hash in /etc/shadow includes a randomly generated salt, and the salt is part of the input to the hashing algorithm, so separate entries end up with separate hashes. Even a user with the same name, the same password, and created at the same time will (with almost certain probability) end up with a different hash, simply because a fresh salt is drawn each time a password is set. If you want to look at a quick example, here it may explain it better.
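You can see the effect of the salt directly with openssl: the same password hashed with two different fixed salts produces two entirely different hashes.

openssl passwd -6 -salt aaaaaaaa secret
openssl passwd -6 -salt bbbbbbbb secret

Everything after the salt differs between the two outputs even though the password is identical; passwd picks such a salt at random every time a password is set.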
If I cat /etc/shadow I can get the encrypted passwords of root and my user. These passwords are the same (I know, bad security) for each account, but in /etc/shadow they show up as being different encrypted strings. Why? Are different algorithms used for each?
Root and my password are the same
These files are perfectly normal. From the shadow(5) manual page:

/etc/shadow-
    Backup file for /etc/shadow.

You may similarly see /etc/passwd-, /etc/group- and /etc/gshadow-. These backup files are created by all the tools in the Linux user database utility suite (shadow): both interactive tools such as vipw and automated tools such as passwd, useradd, etc. They let you easily revert the last change to the user database.
I am noticing a lot of weird files appearing in my router and on my various filesystems. Files in weird places or files that have to do with security with a - sign after them. If I do ls -l /etc/shadow*, this is what I see. -rw-r----- 1 root shadow 1163 Aug 9 15:48 shadow -rw------- 1 root root 1163 Aug 8 21:11 shadow-Does that look normal? What is the 2nd shadow file used for?
Is it normal to have a file called "shadow-" in the /etc directory?
NP in the password field of /etc/shadow indicates that that the account cannot be logged into with a password but can be logged into with other authentication methods, such as su down from root or cron jobs. NP means that password authentication will always fail, but other login methods may succeed. You can set an account in this state with passwd -N. This differs from *LK* (reported as LK by passwd -s), which disables all logins to the account regardless of the authentication method. Confusingly, when passwd -s sees NP in /etc/shadow, it reports NL, whereas NP in the passwd -l report indicates that the account is open to all winds: users will be authenticated without even getting a password prompt (this is indicated by an empty password field in /etc/shadow). UP is a documented code in the passwd -s output on Solaris 11 (not on Solaris 11 Express). It means that “this account has not yet been activated by the administrator and cannot be used.” If I understand the documentation correctly, its effect is similar to NP; the intent is that the system administrator will run passwd later to set a password (i.e. it's the first stage in the process where the admin creates the account for a future user, then later has the user type a password when they first come on-site). The documentation doesn't indicate whether passwd -s reports UP when it finds that in /etc/shadow; while this is plausible, the confusion around NP invites caution. Usually, anything in the password field of /etc/shadow (or other password database) that isn't an empty string is treated as a hashed password, and leads to a denied authentication if it doesn't match any of the valid hashed password formats. This is the case with normal password authentication on OpenSolaris, I can't speak for other versions but would be somewhat surprised if this wasn't the case. Note that if there are several entries for the same user, I think only the first one is taken into account. (At least that's the case under Linux, and I have no reason to believe that Solaris would be different in this respect.)
I found some entries in a shadow file whose meaning I don't understand. user:UP::::::: user1:NP:::::::What does UP and NP mean? In addition to those 2, the same shadow file has the normal hashed entry and the LK that indicates a locked account. The machine is a Solaris 10 VM.
What's the meaning of NP and UP in the password field of the shadow file
why have programs like su access to /etc/shadow

Because programs like su and passwd have the SetUID bit set. You can check by using:

[root@tpserver tmp]# ls -l /usr/bin/passwd
-r-s--x--x 1 root root 21200 Aug 22 2013 /usr/bin/passwd

When you look at the file permissions you will see an "s". Whenever a user runs the passwd program, it runs with the privileges of the file's owner (root here). This means any user effectively gets root privileges while executing the passwd program, which matters because only the root user can edit or update the /etc/passwd and /etc/shadow files; other users can't. So when a normal user runs the passwd program from his terminal, the passwd program runs as "root", because the effective UID is set to "root", and the program can update the files. You can use the chmod command with the u+s or g+s arguments to set the setuid and setgid bits on an executable file, respectively.

Long answer:

Set-User-ID (SUID), power for a moment: by default, when a user executes a file, the process which results from this execution has the same permissions as the user; in fact, the process inherits his default group and user identification. If you set the SUID attribute on an executable file, the process resulting from its execution doesn't use the caller's user identification but the user identification of the file owner. The SUID mechanism, invented by Dennis Ritchie, is a potential security hazard: it lets a user acquire hidden powers by running such a file owned by root.

$ ls -l /etc/passwd /etc/shadow
-rw-r--r-- 1 root root 2232 Mar 15 00:26 /etc/passwd
-r-------- 1 root root 1447 Mar 19 19:01 /etc/shadow

The listing shows that passwd is readable by all, but shadow is unreadable by group and others. When a user running the program belongs to one of these two categories (probably "others"), access fails in the read test on shadow. Suppose a normal user wants to change his password: how can he do that? He can do so by running /usr/bin/passwd. Many UNIX/Linux programs have a special permission mode that lets users update sensitive system files, like /etc/shadow, something they can't do directly with an editor. This is true of the passwd program.

$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 22984 Jan 6 2007 /usr/bin/passwd

The s letter in the user category of the permission field represents a special mode known as the set-user-id (SUID). This mode lets a process have the privileges of the owner of the file for the duration of the program. Thus when a non-privileged user executes passwd, the effective UID of the process is not the user's but root's, the owner of the program. This SUID privilege is then used by passwd to edit /etc/shadow.

Reference Link
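For example, setting and then inspecting the setuid bit on a file of your own (myprog is a stand-in name for illustration):

chmod u+s myprog
ls -l myprog    # the owner's "x" now shows as "s", e.g. -rwsr-xr-x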
Normally only root can access /etc/shadow. But programs like su and sudo can check passwords without running as root. So the question is: Why can these programs access /etc/shadow without privileges? I tried to access it without privileges via python with the spwd module, but I didn't get access (like expected). Which mechanism do these programs use?
why have programs like su access to /etc/shadow
On Ubuntu/Debian, mkpasswd is part of the package whois and implemented in mkpasswd.c, which is actually just a sophisticated wrapper around the crypt() function in glibc, declared in unistd.h. crypt() takes two arguments, password and salt. The password is "test" in this case; the salt is prepended with "$6$" for the SHA-512 hash (see SHA-crypt), so "$6$Zem197T4" is passed to crypt().

Maybe you noticed the -R option of mkpasswd, which determines the number of rounds. In the document you'll find a default of 5000 rounds. This is the first hint why the result will never be equal to a simple hash of the concatenation of salt and password: it's not hashed only once. Actually, if you pass -R 5000 you get the same result. In that case "$6$rounds=5000$Zem197T4" is passed to crypt(), and the implementation in glibc (which is the libc of Debian/Ubuntu) extracts the method and the number of rounds from this.

What happens inside crypt() is more complicated than just computing a single hash, and the result is base64-encoded in the end. That's why the result you showed contains all kinds of characters after the last '$', and not only [0-9a-f] as in the typical hex string of a SHA-512 hash. The algorithm is described in detail in the already mentioned SHA-Crypt document.
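To check this equivalence yourself: as noted above, 5000 is the default rounds value, so both of these should print the same hash.

mkpasswd -m SHA-512 test Zem197T4
mkpasswd -m SHA-512 -R 5000 test Zem197T4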
I'm puzzled by the hash (ASCII) code stored under Linux (Ubuntu) /etc/shadow. Taking a hypothetical case, let password be 'test', salt be 'Zem197T4'. By running following command, $ mkpasswd -m SHA-512 test Zem197T4A long series of ASCII characters are generated (This is actually how Linux store in the /etc/shadow) $6$Zem197T4$oCUr0iMuvRJnMqk3FFi72KWuLAcKU.ydjfMvuXAHgpzNtijJFrGv80tifR1ySJWsb4sdPJqxzCLwUFkX6FKVZ0When using online SHA-512 generator (e.g. http://www.insidepro.com/hashes.php?lang=eng), what is generated is some hex code as below: option 1) password+salt 8d4b73598280019ef818e44eb4493c661b871bf758663d52907c762f649fe3355f698ccabb3b0c59e44f1f6db06ef4690c16a2682382617c6121925082613fe2option 2) salt+password b0197333c018b3b26856473296fcb8637c4f58ab7f4ee2d6868919162fa6a61c8ba93824019aa158e62ccf611c829026b168fc4bf90b2e6b63c0f617198006c2I believe these hex code should be the 'same thing' as the ascii code generated by mkpasswd. But how are they related? Hope someone could enlighten me?
SHA512 salted hash from mkpasswd doesn't match an online version
No, the shadow file does not contain encrypted passwords, not on any Unix variant that I've seen. That would require an encryption key somewhere — where would it be? Even the original version of the crypt function was in fact a hash function. It operated by using the password as a key for a variant of DES. The output of crypt is the encryption of a block with all bits zero. Although this uses an encryption function as part of the implementation, the crypt operation is not an encryption operation, it is a hash function: a function whose inverse is hard to compute and such that it is difficult to find two values producing the same output. Within its limitations, the original DES-based crypt implementation followed the basic principles of a good password hash function: irreversible function, with a salt, and a slow-down factor. It's the limitations, not the design, that make it unsuitable given today's computing power: maximum of 8 characters in the password, total size that makes it amenable to brute force, salt too short, iteration count too short. Because of the crypt name (due to the fact that crypt uses encryption internally), and because until recently few people were educated in cryptography, a lot of documentation of the crypt function and of equivalents in other environments describes it as “password encryption”. But it is in fact a password hash, and always has been. Modern systems use password hashing functions based on more robust algorithms. Although some of these algorithms are known as “MD5”, “SHA-256” and “SHA-512”, the hash computation is not something like MD5(password + salt) but an iterated hash which meets the slowness requirement (though common methods lack the memory hardness that protects against GPU-based acceleration).
man 5 shadow says this about the 2nd field:encrypted passwordIs that true nowadays? I think it should say "hashed password". Am I correct?
Does the shadow file have encrypted passwords?
The shadow(5) manual on Ubuntu refers to the crypt(3) manual. The crypt(3) manual says that the default password encryption algorithm is DES. It goes on to say that the glibc2 library function also supports MD5 and at least SHA-256 and SHA-512, but that an entry in /etc/shadow for a password encrypted by one of these algorithms would look like $1$salt$encrypted (for MD5), $5$salt$encrypted (for SHA-256), or $6$salt$encrypted (for SHA-512), where each $ is a literal $ character, where salt is a salt of up to 16 characters, and where encrypted is the actual hash. Since your encrypted password does not follow that pattern, I'm assuming that it's encrypted using the default DES algorithm.
I want to know my /etc/shadow password hash if its SHA or MD or something else. From what I read, it is related to the $ sign, but I don't have any dollar signs. Im using Ubuntu 16 Example: user:0.7QYSH8yshtus8d:18233:0:99999:7:::
How to know if password in /etc/shadow is hashed with SHA or MD?
There was an answer that got deleted which, while somewhat wrong, did lead me in the correct direction. Using gawk's strftime combined with some arithmetic gives me what I wanted:

gawk -F: '{ print $1 ":" strftime("%Y%m%d", 86400*$3) ":" strftime("%Y%m%d", 86400*$4) }' shadow

root:20120304:19691231
daemon:20100203:19691231
bin:20100203:19691231
sys:20100203:19691231
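Incidentally, gawk's strftime accepts an optional third utc-flag argument; passing a true value avoids the local-timezone skew that makes an empty field render as 19691231 instead of 19700101 above:

gawk -F: '{ print $1 ":" strftime("%Y%m%d", 86400*$3, 1) ":" strftime("%Y%m%d", 86400*$4, 1) }' shadow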
The file /etc/shadow has a couple date fields that are expressed as the number of days since Jan 1, 1970. Is there an easy way using to get a list of users and the calendar date of the last password change, and the expiration? Ref: man shadow(5)
Extract dates from /etc/shadow
Disclaimer Below are my own findings and the way I interpreted them without having an expert understanding of cryptography and the concepts involved. Signatures Notably, as the yescrypt CHANGES file on the OpenWall GitHub states about the Changes made between 0.8.1 (2015/10/25) and 1.0.0 (2018/03/09), yescrypt has two signatures:$7$ - classic scrypt hashes, not very compact fixed-length encoding $y$ - native yescrypt and classic scrypt hashes, new extremely compact variable-length encodingThis extremely compact variable-length encoding is what introduces much but not all of the complexity that the end of the second-to-last paragraph in this UNIX StackExchange answer talks about. Parameters For a simple description of the parameters, the BitcoinWiki Yescrypt parameters section can be helpful:Parameter Descriptionpassword password to hashsalt salt to useflags flags to toggle featuresN increasing N increases running time and memory user increasing R increases the size of blocks operated on by the algorithm (and thus increases memory use)p parallelism factort increasing T increases running time without increasing memory useg the number of times the hash has been "upgraded", used for strengthening stored password hashes without requiring knowledge of the original passwordNROM read-only memory that the resulting key will be made to depend onDKLen the length of key to derive (output)Format Of these, $7$ hashes only use the following:N - encoded with 1 byte (character) r - encoded with 5 bytes (characters) p - encoded with 5 bytes (characters)Since $7$ also means fixed-length encoding, every parameter has a prespecified number of bytes that encode it and every one of the parameters comes in order: $7$Nrrrrrppppp$...$. Let's enclose each byte in [] square brackets: $7$[N][r1][r2][r3][r4][r5][p1][p2][p3][p4][p5]$...$. Further, this means 11 is the exact number of bytes required (hence why it's not compact) for parameters in the sequence specified. On the other hand, $y$ hashes require three parameters:flags - encoded with at least 1 byte (character) N - encoded with at least 1 byte (character) r - encoded with at least 1 byte (character)Still, $y$ hashes can use all parameters by encoding them with variable-length. Effectively, this means each parameter is prefixed with its own size # encoded in the first byte and continues with # bytes: $y$[flags_len=#][flags1]...[flags#][N_len=#][N1]...[N#][r_len=#][r1]...$...$ To make things even more complex, the mandatory parameters are followed by an optional have parameter. Based on the value of have, yescrypt decides which, if any of p, t, g, and NROM are also part of the supplied data. For comprehensive guidelines about the parameters and which ones to use in what situations, it's probably best to consult the yescrypt PARAMETERS file on the OpenWall GitHub. Encoding Decoding the parameter fields is done via decode64_uint32(), which uses an array, indexed by atoi64() with the difference between the ASCII values of the current byte and the . period character (46), which is the base: atoi64_partial[77] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 64, 64, 64, 64, 64, 64, 64, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 64, 64, 64, 64, 64, 64, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 };For each field, starting from the first field byte yescrypt performs the following actions:Use the first field byte to index the array as described above. 
Perform calculations with the array item to get a partial value for the field. Perform calculations with the array item to get the number of following bytes that encode the rest of the field value. For each next byte, the algorithm uses it to index the array again and adds the data to reach the final field value.There is some pseudo-code for other processes in the BitcoinWiki Yescrypt functions section. Demo Parameter Encoding Let's take an example from the PARAMETERS file above:flags = YESCRYPT_DEFAULTS N = 4096 r = 32 p = 1 t = 0 g = 0 NROM = 0The above set of values is described as a standard Large and slow (memory usage 16 MiB, performance like bcrypt cost 2^8 - latency 10-30 ms and throughput 1000+ per second on a 16-core server) choice for Password hashing for user authentication, no ROM. $y$ is the signature. flags = YESCRYPT_DEFAULT = 182 = 0xB6 = j in yescrypt variable-length encoding. Here, flags should decode to YESCRYPT_DEFAULT, which is equivalent to YESCRYPT_RW_DEFAULTS, defined as (YESCRYPT_RW | YESCRYPT_ROUNDS_6 | YESCRYPT_GATHER_4 | YESCRYPT_SIMPLE_2 | YESCRYPT_SBOX_12K):YESCRYPT_RW = 0x002 YESCRYPT_ROUNDS_6 = 0x004 YESCRYPT_GATHER_4 = 0x010 YESCRYPT_SIMPLE_2 = 0x020 YESCRYPT_SBOX_12K = 0x080Performing the logical OR operation, yescrypt arrives at the final number and encodes it. N = 4096 = 0x1000 = 9 in yescrypt variable-length encoding. In fact, N = 2decoded_N_field. r = 32 = 0x20 = T in yescrypt variable-length encoding. $ at this point tells yescript that no optional parameters were specified. Finally, the salt is added. It is theoretically of arbitrary length. However, the salt must be of a length that's a power of 4. $y$j9T$SALT$ Examples Here are a couple of valid but not secure examples, which may be visually helpful after the descriptions above:$7$9/..../..../$SALTS$ $y$./.$SALT$ $y$8/.$SALT$
Consider this Shadow string $y$j9T$PaFEMV0mbpeadmHDv0Lp31$G/LliR3MqgdjEBcFC1E.s/3vlRofsZ0Wn5JyZHXAol5There are 4 partsid : y (yescrypt) param : j9T salt : PaFEMV0mbpeadmHDv0Lp31 hash : G/LliR3MqgdjEBcFC1E.s/3vlRofsZ0Wn5JyZHXAol5Q:What does j9T in param field mean? Are there other options in this field? Where can we find official documentation?I've seen this question; The format of encrypted password in /etc/shadow, however, there is no explanation there.
What does j9T mean in yescrypt (from /etc/shadow)?
It's possible ifthe target system uses shadow passwords, and /etc/shadow is not overridden by other mechanisms (via PAM, nss, etc.), and the target system doesn't hash /etc/shadow, and the target system has the same usernames as the source system, and the UIDs on the target system are the same as the UIDs on the source system, and the encryption methods used by the passwords need to be supported on the target system, and /etc/passwd on the target system must be in sync with the injected /etc/shadow.I hope I didn't forget anything. :) The easier (and safer) way to do it is to use vipw to save credentials for the relevant users on the source system, then copy them on the target system
I am asking, because I generated a live CD using the hash from an existing /etc/shadow, assuming I will then be able to login with the corresponding password, but apparently login fails.
When I copy /etc/shadow to another system, is it possible to login with the according passwords?
As far as I know, all unix variants have an /etc/passwd file with the traditional layout, for the sake of applications that read the file directly. Each line in the file contains colon-separated records which correspond to struct passwd fields:

user name (login)
encrypted password (if present)
user id (number, in decimal)
principal group id (number, in decimal)
Gecos field. This field is not used by system programs except for display purposes. It is normally a comma-separated list of fields, the first three being full name, office number and telephone number.
home directory
login shell

One thing that varies between systems is how much liberty you can take with the syntax. For example, GNU libc (i.e. Linux) ignores lines that begin with #: they are comments. GNU libc also ignores whitespace at the beginning of a line, so entries can be indented. An invalid line might cause programs to stop processing the file or to skip to the next line.

Most modern systems no longer store an encrypted password in the second field. The content of that field is not a reliable indication of whether the user has a password set (and even if you found that out, this is not a reliable indication of whether the user can log in, because there are many other authentication methods such as SSH keys, one-time passwords, biometrics, smartcards, …). When passwords aren't in /etc/passwd, where they are is system-dependent. The Rosetta Stone for Unix mentions many unix variants.

Solaris uses /etc/shadow, and this has been copied by others including Linux. Linux and Solaris shadow files have the same format; I don't know if the other systems that have a file called /etc/shadow use the same format.
BSD systems have /etc/master.passwd, and additionally have database files for faster access, updated by pwd_mkdb.

Remember that /etc/passwd hasn't been guaranteed to contain the full list of users for a couple of decades: users can come from other databases such as NIS (YP) or LDAP. As a system administrator, avoid editing the /etc/passwd file directly; use vipw instead, if your system provides it (and if it doesn't, consult your manuals to see what method is recommended to modify the user database).

What I wrote above goes for groups, too. The fields in /etc/group are struct group members: group name, password (largely unused), numerical group id, and a comma-separated list of user names (the users who have this group as a secondary group). Linux has a /etc/gshadow file, but this is rarely used, as group authentication is not widely practiced.
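For illustration, here is a made-up /etc/passwd line with the seven fields in the order listed above:

alice:x:1000:1000:Alice Example,Room 101,555-0100:/home/alice:/bin/bash

The x in the second field is the common convention meaning "the password, if any, lives in the shadow database".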
Are formats of files /etc/shadow and /etc/passwd same on all unix and unix-like systems same or are there significant differences? I mean syntax of files, not file location or name
/etc/shadow and /etc/passwd format compatibility
What prevents me from just editing the /etc/shadow file in unencrypted systems?Nothing, there is no specific protection for /etc/shadow. Some systems might have tampering detection, so the system administrator would know that /etc/shadow was changed (unless you also overrode the tampering detection, typically by updating it so it considered your modified /etc/shadow as correct), but nothing stops you from editing files in an unencrypted file system. Encrypting the drive (or the partition holding /etc/shadow) is sufficient to prevent such attacks, but not to prevent more sophisticated attacks. Full protection against attacks involving physical access is still not quite there, although Secure Boot and TPM measurements do make successful attacks much harder.
Real dumb question, but suppose a computer with Linux does not have an encrypted hard drive. If I generated a hash with "openssl passwd", couldn't I just run a live version of Ubuntu and add my hash to the /etc/shadow file? Or is /etc/shadow encrypted even if the hard drive isn't?
What prevents me from just editing the /etc/shadow file in unencrypted systems? [duplicate]
Yes, you're correct. Each step can be split into smaller tasks as well, but you have described the overall algorithm. Here follow a couple of articles describing the login process in detail. [1] [2] Note that this is only about the plain password, not mentioning the PAM system. [3]
I would like to know how the password verification in Linux works. I know that the passwords are stored as a hash in /etc/shadow file and user information is in /etc/passwd file. My understanding is this: Selecting what user you want to login as decides what user name the system should check. When you enter the password and hit enter, the system goes to the /etc/shadow file and finds the line corresponding to the user name. From step 2 it gets the hash of the actual password. It then generates the hash of the entered password and compares both of them. If a match is found, voilà. Else, error message. Is my understanding correct?
How does Linux verify the login password?
If the hashing algorithm isn't listed in the password field, it's usually because it's in traditional DES-based crypt form. The hash you've provided even looks like a crypt hash. Examples of what other DES hashes look like: [root@xxx601 ~]# openssl passwd -crypt myPass 7BQrU5yVqiGqU [root@xxx601 ~]# openssl passwd -crypt newPass Mbq6MsDxJOsow [root@xxx601 ~]#Crypt hashes are typically the weakest possible hashes for a variety of reasons. Not the least of which is that it can only support passwords up to eight characters so all characters after the eighth are just ignored.
http://www.aychedee.com/2012/03/14/etc_shadow-password-hash-formats/ From the above article I can see the password can be encrypt in abot 6 different ways to genereate the hash in the format of $1$ ... However, when I read the shadow file of my machine, I get something like this root:l2tdfsoZQxobQ:15743:0:99999:7::: bin:*:13653:0:99999:7::: daemon:*:13653:0:99999:7::: adm:*:13653:0:99999:7::: lp:*:13653:0:99999:7::: sync:*:13653:0:99999:7::: shutdown:*:13653:0:99999:7::: halt:*:13653:0:99999:7::: mail:*:13653:0:99999:7::: news:*:13653:0:99999:7::: uucp:*:13653:0:99999:7::: operator:*:13653:0:99999:7::: games:*:13653:0:99999:7::: gopher:*:13653:0:99999:7::: ftp:*:13653:0:99999:7::: nobody:*:13653:0:99999:7::: dbus:!!:13653:0:99999:7::: vcsa:!!:13653:0:99999:7::: rpm:!!:13653:0:99999:7::: haldaemon:!!:13653:0:99999:7::: pcap:!!:13653:0:99999:7::: nscd:!!:13653:0:99999:7::: named:!!:13653:0:99999:7::: netdump:!!:13653:0:99999:7::: sshd:!!:13653:0:99999:7::: rpc:!!:13653:0:99999:7::: mailnull:!!:13653:0:99999:7::: smmsp:!!:13653:0:99999:7::: rpcuser:!!:13653:0:99999:7::: nfsnobody:!!:13653:0:99999:7::: apache:!!:13653:0:99999:7::: squid:!!:13653:0:99999:7::: webalizer:!!:13653:0:99999:7::: xfs:!!:13653:0:99999:7::: ntp:!!:13653:0:99999:7::: mysql:!!:13653:0:99999:7:::For the root password, it is like l2tdfsoZQxobQ, so what encryption method did the system use for this password?
/etc/shadow encryption method
When you ask a PAM module to change a password (or participate in changing a password), the module can retrieve both the new password and the old, as given by the user: as Christopher points out, passwd asks for the old password as well as the new (unless you’re running it as root and changing another user’s password). The module can use that information to compare both passwords, without having to somehow reverse the current hash or enumerate variants. The PAM functions involved include pam_sm_chauthtok and pam_get_item, whose documentation (and the other pages referenced there) should help you understand what’s going on. You can see how it’s done in libpam-cracklib’s source code.
I know that on Linux (at least debian) every password are hashed and stored in /etc/shadow. However thanks to the libpam-cracklib you can add some rules on passwords. For instance in /etc/pam.d/common-password you can set Difok which is a parameter that indicate the number of letter that can be the same between an old and a new password. But how linux can know when I type in an new password the similarity with my old pasword as it doesn't know my real password (it just have a hash)? Thanks !
How Linux can compare old and new password?
The passwd utility is installed setuid, which means that when it runs, it runs as the user that owns the file, not as the user that called it. In this case, passwd belongs to root, so the setuid bit causes the program to run with root privileges. It is therefore able to make changes to the passwd and shadow files. If you look at the permissions for the passwd utility, you'll see something like this: -r-sr-xr-x 2 root wheel 8.2K 19 Jan 17:24 /usr/bin/passwdThis is from my FreeBSD system - what you see will depend on the OS you are using. The s in the owner execute position (4th column) indicates the setuid bit. For further reference, the syscall is setuid, and is part of the standard C library.
I am trying to change my password as a non root user : passwd The data is updated in /etc/shadow, but checking the permission i get: ---------- 1 root root 6076 Jan 27 17:14 /etc/shadow cat /etc/shadow cat: /etc/shadow: Permission deniedClearly there were no permissions on the file for anyone, even then the passwd command succeeds, and i am indirectly updating data to a non-previliged resource (shadow file)! So can anyone explain the mechanism that how the updation takes place in background ? Explanation with reference to the system calls will be very useful.
How passwd command from non-root account succeeds
Why not authenticate those users for which you only have an unsalted SHA-1 hash of their password by another means than /etc/shadow. Using PAM, you can have as many authentication modules as you want and stack them as you want. You can keep pam_unix.so for some users and use pam_ldap.so for the rest.
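One common shape for such a stack, as a sketch only: the exact control flags, options and file layout vary by distribution and by your LDAP setup, so treat this as illustrative rather than a drop-in config.

# /etc/pam.d/common-auth (illustrative)
auth    sufficient    pam_unix.so
auth    required      pam_ldap.so try_first_pass

With this ordering, users whose local /etc/shadow hash matches are authenticated by pam_unix, and everyone else falls through to the LDAP check.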
We have an automated sync-routine that uses useradd to create new users on a Ubuntu 10.04 machine. The application launching the routine provided both username and CRYPT-encrypted password. However, since we changed how passwords are handled in order to include LDAP support, passwords now don't have to be CRYPT but can also be MD5 or SHA-1. In fact, SHA-1 is the new default. This however now causes problems. I have read up on how /etc/shadow is handled and there doesn't seem to be an id for SHA-1, only for SHA-256/SHA-512($5$ and $6$ respectively). The only thing I found was to change the whole thing from CRYPT to SHA-1. We could do that, but we wanted the whole transition to be as non-disruptive as possible. Is there a way to use both CRYPT and SHA-1 passwords together? NOTES - The main application is a CMS on an entirely different server. The linux server in question is a local machine(slave) at the client's location in order to provide local services. - We are aware that we could switch the entire system out to use LDAP-only, but, as outlined earlier, we don't want to change everything at once.
Can linux use a mix of SHA-1 and CRYPT passwords?
On a usual desktop installation, what you see in /etc/shadow is what you get, and the system users generally don't have passwords set. They're not used for interactive logins, so they don't need passwords. E.g. on the system I looked at, /etc/shadow has this line for sys: sys:*:19101:0:99999:7:::That * is where the password hash would be if the user had a password. But * is an invalid password hash, no password will produce it, so there is no password the user could log in with. Note that if the password hash field was empty, it might allow login without entering a password. With the Linux PAM libraries, this is controlled by the nullok setting to pam_unix.so (man page):nullok The default action of this module is to not permit the user access to a service if their official password is blank. The nullok argument overrides this default.(if it's enabled, and the password is empty, the module doesn't even ask for the password but accepts the login directly.) If you really wanted, you could set a password for the system in the usual way, but I would suggest reconsidering what you're doing; there's probably a better way. Also note that they may not have a usable shell set either, e.g. on the system I looked, the sys user's shell is set to /usr/sbin/nologin.
I want to get the password hash of the sys user in Debian (but the password is preferred). I heard that all password hashes are stored in the /etc/shadow file, but there isn't a password hash for the sys user. How to get it? P.S. I have root access.
Where is the system's "sys" user's password hash stored in Debian?
Historically /etc/passwd had all of the user data, there was no shadow. However it was discovered that a dictionary attack could be done on the file, to discover passwords (if they are in the dictionary). Therefore it was decided to remove the passwords from /etc/passwd, the rest of the file remained, as it was used by many programs, e.g. ls. The passwords were moved to /etc/shadow, and this file was made so that only root can read it./etc/passwd now has an x for the password field. /etc/shadow only shares the first field (the key-field / the user name). /etc/shadow has been expanded to contain other password management fields.
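For example, corresponding entries for one made-up user in the two files:

# /etc/passwd (world-readable; x points to the shadow file)
alice:x:1000:1000:Alice Example:/home/alice:/bin/bash

# /etc/shadow (root-only; hash plus the password-management fields)
alice:$6$saltsalt$hashedpasswordgoeshere:19708:0:99999:7:::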
It seems to me that /etc/shadow and /etc/passwd contain the same data. Why are there two files? Are they different?
What is the difference between /etc/shadow and /etc/passwd?
The .org files are remnants of an old base-passwd upgrade, which detected differences between the Debian default system accounts and those present on the system. When this happened, the upgrade would have offered to fix the files, keeping backups with .org suffixes. They can be deleted now.
On a Debian 10 server, which started as Debian 7 and updated whenever new version came out, I accidentally found these three files: /etc/passwd.org, /etc/group.org, /etc/shadow.org The backup files /etc/passwd-, /etc/group-, /etc/shadow- and other *- files are present, as they should. For example, all passwd files are (same applies for the other two): $ ll /etc/passwd* -rw-r--r-- 1 root root 2,1K Αυγ 13 14:08 /etc/passwd -rw-r--r-- 1 root root 2,1K Αυγ 13 14:06 /etc/passwd- -rw-r--r-- 1 root root 2,0K Ιουν 20 2015 /etc/passwd.orgTheir last access time is somewhere in 2015. Their contents are on par with /etc/passwd, /etc/group, /etc/shadow, as they probably were some time in the past - I can see some deleted users. I cannot find any info of such *org files. Does anybody has any idea what are there *org files and what is their use?
/etc/passwd.org, /etc/group.org, /etc/shadow.org files
The entries like +::0:0::: can only work as intended if you have passwd: compat in your /etc/nsswitch.conf file. If you use passwd: files nis instead, this entry will not have its intended effect. At least according to nsswitch.conf(5) man page on my Debian 9 system, that does not seem like valid syntax anyway: it should be either +user::0:0::: where user would be a NIS username who will be given root access on this system, or just + which includes all NIS users except those that have previously been excluded using -user or -@netgroup syntax, without overriding the NIS-specified UID/primaryGID values. By extension, +::0:0::: would seem to mean "every NIS user is root on this system", which seems like not a good idea in the first place. The danger is, for an application that handles authentication on its own by reading /etc/passwd and /etc/shadow but does not implement the passwd: compat style syntax extensions, that line literally means "user + has UID 0 and GID 0 and has no password". If you're using such an application, this is a "type + to the username prompt, just press Enter at the password prompt; you now have root access" vulnerability. Since there is no valid shell, you might not get shell access immediately: but just having UID 0 access through an application probably gives a savvy intruder plenty of leverage to gain full root shell access quite soon afterwards.
I was reading the BSI Security Guidelines (GERMAN), on NIS and it explicitly mentioned that one should prevent the entry +::0:0::: from occuring in the /etc/passwd file of the NIS server. From my research I have garnered, that the + would import the entire NIS list into the passwd file. The solution proposed by the guideline, is to add a * to the password section of the entry, which would make the username be looked up in the shadow file. Is this not somewhat counter productive, as it would essentially make importing the NIS list useless (since these do not have entries in shadow)? Furthermore, what would a legitimate usage of this entry be and how could an attacker exploit the entry (without the *)?
Why is +::0:0::: not supposed to be found in /etc/passwd?
There could be multiple users with the same uid (but different name, home directory, shell, etc) in /etc/passwd. And that was current practice -- IIRC even today, there's a toor "alternate root" account on BSD. If the /etc/shadow passwords were indexed by uid instead of user name, then which /etc/passwd entry would each of them correspond to?
/etc/shadow contains the username, but not the uid. Is there a specific reason, why a char * field was chosen over an int? For direct username->password check this might be quicker, but for relations to /etc/passwd a string-comparison on each user seems a little expensive. I'd like to know the rationale behind this decision.
Why does /etc/shadow uses user name instead of uid?
Since su is usually configured via pam_unix, it's oftentimes configured with the nullok_secure directive on Debian systems (there the pam_unix stack lives in /etc/pam.d/common-auth; the transcript below is from a Red Hat-style /etc/pam.d/system-auth, but the directive is the same):

$ grep -m1 pam_unix /etc/pam.d/system-auth
auth sufficient pam_unix.so nullok_secure

Changing that default to just nullok should enable password-less su usage.
Having read this: https://stackoverflow.com/questions/11700690/how-do-i-completely-remove-root-password I was under the impression that a blank root password, as in modifying the /etc/shadow file for the root entry to be something like this: root::0:0:99999:7:::Should allow me to su to root without being prompted for a password. Please note that I'm not looking for a practical or secure way to do this. I'm aware that public key authentication is a viable way to do this in a sane way through authorized_keys and SSH. That's not what this question is concerned with. The expected behavior is that either not prompting for a password or at least accept a blank password when prompted (enter key, nothing more) should allow the user to become root. Actual behavior is that the user does get prompted for a password, and does not accept a blank password (enter key). I also tried su -, su root, su - root. Same behavior. Just to be certain that I actually did change the shadow file I also tried my old password, which also doesn't work, although simply a cat of the shadow file should be enough to confirm that this change was actually carried out. Restoring the original shadow file from my shadow.bak restored original functionality. This is on a Debian 9 system, su from util-linux 2.32.1. Is my syntax incorrect? Should it be root::0:0:99999:7::: or is this ability to use a blank password no longer possible? Since when was it removed?
Blank root password disabled in modern distros?
Your fear of an update of shadow-utils is IMO unwarranted. The routines described in that HOWTO are available on my Ubuntu 12.04 and Mint 17 systems without installing anything special.

The structure to read /etc/shadow information in a C program can be found in /usr/include/shadow.h and with man 5 shadow, and the function that you would need to find e.g. a shadow password entry by name, as defined in /usr/include/shadow.h, is getspnam; that will get you a man page as well (man getspnam) describing it and all related functions. Based on that you should be able to get the hashed password entry for any given name.

The hashed password should have multiple '$' tokens; cut off everything after and including the last '$' from the hashed password and present that as the salt to crypt(). The glibc version (according to man 3 crypt) should be able to handle the "extended" salts that indicate SHA512 entries, as are more common nowadays.
as my related question doesn't seem to get much love, here another one: What's the proper way to authenticate a user via username/password prompt in Linux nowadays? In principle, I suppose I would have to obtain username and password, read salt and hash of the corresponding user from /etc/shadow. I would then calculate the hash of the given password and the stored salt and check if the result matches the hash stored in /etc/shadow. Normally, I could simply authenticate via PAM (e.g. pam_unix) which does all this already but my application is a custom PAM module and I found no method to call one PAM module from another. If this is possible somehow, I'd gladly go for this solution. As of now, I found this really dated tutorial http://www.tldp.org/HOWTO/Shadow-Password-HOWTO-8.html from 1996 when, apparently, shadow support was not yet built into libc. It mentions pw_auth and valid as helper functions for authentication. I tried implanting these into my code and linking against libshadow.a of the shadow-tools but I get 'unresolved external reference' errors for pw_auth and valid. The code looks something like this: if ((pw->pw_passwd && pw->pw_passwd[0] == '@' && pw_auth (pw->pw_passwd+1, pw->pw_name, PW_LOGIN, NULL)) || !valid (passwd, pw)) { return (UPAP_AUTHNAK); }I haven't checked this further but anyway this is not a preferred solution as I'd have to update my code every time shadow-utils are updated. I'd much rather link to a library (that isn't PAM) that provides authentication against /etc/shadow. Is there such thing and I didn't find it yet? Or some other solution?
What's the correct way to authenticate a user without PAM?
The usual implementation of password changing involves hardlinking /etc/shadow to /etc/stmp (or some similar name; link() being atomic on local filesystems, this constitutes a kind of lock file mechanism), writing out a new one to a temporary file, then renaming the original /etc/shadow to /etc/shadow- or similar and renaming the temporary to /etc/shadow. This is done for robustness: at all times the original shadow file, unmodified, still exists and can be recovered easily even if the power fails at just the wrong time or something equally bad (unless it destroys the entire disk).
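A minimal sketch of that sequence in shell terms (illustrative only; the temporary file name is a stand-in, and the real tools do this with link() and rename() from C, with careful error handling):

ln /etc/shadow /etc/stmp         # lock: link() is atomic and fails if /etc/stmp already exists
# ... write the updated contents to a temporary file, say /etc/shadow.new ...
mv /etc/shadow /etc/shadow-      # keep the previous version as a backup
mv /etc/shadow.new /etc/shadow   # rename() is atomic on a local filesystem
rm /etc/stmp                     # release the lock

Every step replaces or creates whole files, which is why the inode behind /etc/shadow changes on each update.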
I created a hard link for the shadow file. For removing the passwd of the user I opened the shadow file in vi editor and removed the encrypted passwd and then saved. The inode value of the shadow file was changed. Then I updated the passwd of the user and again the inode value of the shadow file changed. Why the inode of the shadow file changes when it is edited/updated?
Why the inode value of shadow file changes?
It means that the password is locked. Tools, such as usermod -L add a ! to the password to invalidate it. usermod -U removes the !. From man 5 shadowIf the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means).
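For example (alice is a stand-in name):

usermod -L alice
grep '^alice:' /etc/shadow    # alice:!$6$Pi4BKmX8$... with the leading "!"
usermod -U alice              # removes the "!" again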
In /etc/shadow, I have a line that begins: ubuntu:!$6$Pi4BKmX8$........................Why is there a ! before the $6$ in the hash?
Why is there a ! in the password hash?
From: man 5 shadow:A password field which starts with a exclamation mark means that the password is locked. The remaining characters on the line represent the password field before the password was locked.Without anything, it means you don't have any password for that account.This field may be empty, in which case no passwords are required to authenticate as the specified login name. However, some applications which read the /etc/shadow file may decide not to permit any access at all if the password field is empty.The *:If the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means).We use it when we don't want to lock an account but don't permit it to login, say for users that are in charge of some kind of service and login is not necessary for them but they're not locked down. Finally the number you are asking for is: "date of last password change" for that account.
On one of my machines it's root::somenumber[...]::: with somenumber[...] being the same as for my actual account (after what appears to be the encrypted passphrase) and the "logcheck" account (after :*:). On another machine it's root :!:somenumber[...]::: with somenumber[...] being the same for all accounts until the most recently added ones starting with postfix:*:. I didn't enter a root password during installation for both of these machines. However I accidentally set it for one of them and had to remove it again using the passwd -d root command. I'm running Debian 9.1 with KDE. What exactly should be in there if I wish for my root account to be locked (I use the sudo command)? Are those file contents fine? And related to this would also be this question: how can I view a history of changes to the shadow file including info on which user changed what and when.
What should be in the /etc/shadow file if I want my root account to be disabled?
A password field that starts with * means the corresponding user is not allowed to log in. This is generally used for system accounts, such as mysql, mail, apache, etc. However, if the entry literally ends with :::::::, the corresponding user is a NIS / NIS+ account.
When I look at /etc/shadow I see several entries that look like this: username:**___________::::::: What does the "**___________" mean? That is where I should see the password hash.
What does **___________ mean in /etc/shadow?
The pwconv command automatically backs up /etc/passwd to a file called /etc/passwd-. Try restoring that file, and rename /etc/shadow to /etc/shadow-.
I'm playing with a rather old, heavily customized Linux installation (based on Debian etch, running on a Netgear ReadyNAS device). Recently I've switched from /etc/passwd to /etc/shadow via pwconv. Now I'd like to switch back due to authentication problems with the Apache server, because mod_auth_shadow is not installed and I'm hesitant to install it. As an additional complication: pwunconv does not exist either. I have a backup, but it's old and I have been installing a lot of packages since it was taken. I also have the option of doing a factory reset, but that will mean I lose the RAID configuration which I do not want. What steps would I need to take to go back to regular passwd-based password authentication rather than shadow passwords?
How do I switch from /etc/shadow back to /etc/passwd?
Your understanding is correct, at least on Linux, and I've confirmed that my accounts show sane dates. Two possibilities come to mind:
1. You (or a prior admin) used chage -d (or direct editing) to change the last-change date. Possibly this was done to prevent password expiration, by pushing the last change date (and thus the expiry date) into the distant future. (Why the admin didn't use chage -M, I can't say.)
2. As cjm said, the password was changed while the clock was set very wrong. Somewhat unlikely, as wrong clocks tend to be in the past rather than the future (a dead BIOS battery resets the clock to the minimum date the BIOS authors thought reasonable).
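For reference, the "days since 1970-01-01" field can be turned into a calendar date with a few lines of C; this is only a sketch, and 19708 is the value from the question below:

#include <stdio.h>
#include <time.h>

int main(void)
{
    long days = 19708;                    /* field 3 of the shadow entry */
    time_t t = days * 86400L;             /* days -> seconds since the epoch */
    char buf[16];
    strftime(buf, sizeof buf, "%Y-%m-%d", gmtime(&t));
    printf("%s\n", buf);                  /* prints 2023-12-17 */
    return 0;
}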
My understanding is that the last change date (in /etc/shadow) is the number of days since 01/01/1970 that the password was changed, but I'm seeing numbers like 19708, which translates to 17/12/2023 (in the future). How is that possible?
Unix last password change date
From the package version numbers, it looks like Debian 9 ("stretch"), which has been the oldstable version since 2019-07-06. Maybe it's time to consider updating your Docker image to use a newer stable version? CVE-2017-12424 appears to be about the /usr/sbin/newusers tool, which is in the passwd package. If you don't need that specific tool in your Docker image, maybe use a .dockerignore file to omit it entirely, as a workaround? shadow is the source code package that builds multiple utilities; Debian packages the utilities into three separate packages. In this case, the vulnerable utility is in the passwd package, which would need to be upgraded; however, Debian 9 does not currently seem to have a newer version of the package available. Only Debian 10 ("buster", the new stable version) and above have the fixed version available. Moving up to a passwd package from Debian 10 without upgrading the rest of the image to match is likely to cause library dependency errors. Upgrading your entire base image from Debian 9 to Debian 10 might be a good investment of your time at this point. But if you want a fixed version of the passwd package that is compatible with other Debian 9 packages right now, you might have to download the Debian source package for shadow 4.5-1.1 to a Debian 9 system with the compiler and other build tools installed, and run dpkg-buildpackage on the sources to get a newer version of the passwd package that is compiled against the libraries of Debian 9. Injecting this custom package into your Docker image build process would be your task. (As a side effect of the build, you will also get newer versions of the login and uidmap packages; however, as long as the standard Debian 9 versions of those packages don't have any known vulnerable contents, you'll have the option of ignoring them.) Note that CVE-2017-12424 applies only if you have a system in place that allows unprivileged users to run the newusers command in a privileged context, e.g. a control panel in a web-hosting environment or an /etc/sudoers entry that allows a non-root user to run newusers as root. This is probably the reason why the fixed version has not been propagated to Debian 9 yet: the security team did not consider it a high-priority issue.
I have been told one of my Docker images has "Docker security issue CVE-2017-12424"; the report says the version of its shadow package is "1:4.4-4.1" and that I need to upgrade. But I can only see version 1:4.4-4.1 of my Debian packages:
$ dpkg -l | grep 1:4.4-4.1
ii login 1:4.4-4.1 amd64 system login tools
ii passwd 1:4.4-4.1 amd64 change and administer password and group data
I found the shadow GitHub repo https://github.com/shadow-maint/shadow, but I didn't find any related documentation. Can you please tell me how to check the shadow package version, and how to upgrade it in a Debian environment?
How to upgrade shadow package in Debian
Use chpasswd instead:
chpasswd -e <<< 'userA:yourencryptedpassword'
If you were going to use sed, despite the risks: to set a password no matter what it was before:
sed -i.sedbackup 's/^\(userA:\)[^:]*\(:.*\)$/\1yournewpassword\2/' /etc/shadow
To replace a specific password string:
sed -i.sedbackup 's/^\(userA:\)youroldpassword\(:.*\)$/\1yournewpassword\2/' /etc/shadow
I am looking for a sed command to change the line: userA:$6$lhkjhl$sdlfhlmLMHQSDFM374FGSDFkjfh/7mD/354dshkKHQSkljhsd.sdmfjlk57HJ/:95170::::::to userA:$6$sLdkjf$576sdKUKJGKmlk565oiuljkljpi/9Fg/rst3587zet324etze.dsfgLIMLmdf/:34650::::::
Change Shadow Password
If you look at the binaries installed by PAM, they include unix_chkpwd: "unix_chkpwd is a helper program for the pam_unix module that verifies the password of the current user. It also checks password and account expiration dates in shadow. It is not intended to be run directly from the command line and logs a security violation if done so. It is typically installed setuid root or setgid shadow. The interface of the helper - command line options, and input/output data format are internal to the pam_unix module and it should not be called directly from applications."
PAM manages to check the user password when called from unprivileged screen-lockers. E.g.: "Password for GNU screen lockscreen command?" and https://github.com/google/xsecurelock. I can't find any SUID-root binary in the screen package on Fedora 26, but the lockscreen command (Ctrl-a Ctrl-x) still works. I can't see the Makefile in xsecurelock setting SUID root anywhere either. I'm confused. How does this work? My user does not have read access to /etc/shadow. I am not using the OpenWall pam_tcb for per-user shadow files.
How is PAM checking the user password in unprivileged processes?
Since you seem to have access to /etc/shadow as a privileged user (sudo?), do sudo passwd root. If, on the other hand, you are editing the filesystem on the MicroSD card in another machine, just edit out the root password in /etc/shadow. Delete the encrypted password field, as in:
root::14610:0:99999:7:::
Then you will be able to enter as root on the console, press ENTER when asked for the password, and change it with passwd once you are logged in.
I'm running Bananian Linux on my Banana Pro. Recently I changed some config settings but quit with Ctrl+C without finishing all of them. After a restart I am unable to log in with the default login "root"; I get an incorrect-login error every time I try. I tried checking my username in /etc/passwd and /etc/shadow. The /etc/passwd file:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false
ntp:x:104:109::/home/ntp:/bin/false
sshd:x:105:65534::/var/run/sshd:/usr/sbin/nologin
The /etc/shadow file:
root:$6$9KzHxAiY$L8WtC4E1KoZYbzaxMCK4AhpVGfS3oKLNdn1YjIbunGcQDJLm8GwjRy1fXU7vhHh7DrR8hNChqPnaoL76efh/f/:14610:0:99999:7:::
daemon:*:16628:0:99999:7:::
bin:*:16628:0:99999:7:::
sys:*:16628:0:99999:7:::
sync:*:16628:0:99999:7:::
games:*:16628:0:99999:7:::
man:*:16628:0:99999:7:::
lp:*:16628:0:99999:7:::
mail:*:16628:0:99999:7:::
news:*:16628:0:99999:7:::
uucp:*:16628:0:99999:7:::
proxy:*:16628:0:99999:7:::
www-data:*:16628:0:99999:7:::
backup:*:16628:0:99999:7:::
list:*:16628:0:99999:7:::
irc:*:16628:0:99999:7:::
gnats:*:16628:0:99999:7:::
nobody:*:16628:0:99999:7:::
systemd-timesync:*:16628:0:99999:7:::
systemd-network:*:16628:0:99999:7:::
systemd-resolve:*:16628:0:99999:7:::
systemd-bus-proxy:*:16628:0:99999:7:::
ntp:*:16628:0:99999:7:::
sshd:*:16628:0:99999:7:::
Unable to recover lost login
Most likely you modified the filesystem in an emergency shell or from a rescue disk. Your SELinux labels are probably wrong for /etc/shadow. Easiest fix is to touch /.autorelabel and reboot normally. It will relabel the filesystem and reboot.
I have a CentOS 7 image on my local machine that I want to allow login as root. This is going to be a system dedicated for testing. I initially tried using rescue mode and added kernel param "systemd.unit=emergency.target" but it says root login is locked. So I start /bin/bash instead. I see root in /etc/shadow is locked using "!!". So I run passwd root and assign it a password. However, I am still unable to login as root. It keeps telling me password incorrect (I'm sure password is correct). Is there somewhere else I overlooked? I am logging in via console, not using SSH.
CentOS 7 - how to login as root
I have never used Manjaro, but the process that works for Arch Linux should be fine in your case too. You should be able to boot off a Manjaro live USB, mount the root file system of your Manjaro installation, mount your existing /home directory and put the backup copy you have of shadow back in its place. You should then also be able to boot your installed Manjaro, because /etc/grub.d is only used to (re-)create your GRUB configuration and is not required during the boot process. It is however important that you restore the files it contained (editing them again, if you have no backup), otherwise your system risks becoming unbootable (or, more likely, not dual-bootable anymore) the next time some package update triggers the re-creation of your boot loader configuration. This would also likely have worked if you had an encrypted root file system. udev takes care of activating any block devices (e.g. MD RAID arrays or LVM volumes) as soon as they become available, and the only things usually left to you are:
Opening encrypted devices; in your case, you should be able to run:
cryptsetup open /dev/your_encrypted_device decrypted_device_name
Unless something wiped your LUKS headers this will only require the passphrase. (Note that there is no way to recover your data if the LUKS headers have been wiped or damaged and you don't have a backup.)
Mounting file systems, e.g. mount /dev/sdaN / or mount /dev/mapper/mapped_dev /.
lsblk may help you explore your device tree and locate the right devices to open/mount (look at the TYPE column). When pacman installs a new version of a file whose modification time does not match the one recorded in the package database, the existing file is not overwritten and a .pacnew file is created instead. Promptly taking care of .pacnew files is important because, on a rolling-release distribution, any package update may introduce breaking changes. For instance, an existing configuration file may mention options that have been deprecated in the to-be-installed version of a program, which needs different options instead. Distribution maintainers cannot take care of all the possible cases, and checking configuration files is left to the user. pacdiff is aimed at helping with this process: it walks through the .pacnew (and .pacsave) files tracked by the package manager and offers to let you review them. "(O)verwrite with pacnew" does just what it says: the existing file is replaced with the .pacnew version and your custom configuration is lost. While the most proper action is usually to review the existing and .pacnew versions of a file and merge them when needed, some .pacnew files are not meant to be acted upon. Assuming Manjaro aligns with Arch in this respect, this is true for the user database (which includes /etc/passwd and /etc/shadow) "unless Pacman outputs related messages for action".
After all these years of using Linux, I have never goofed up this badly; I honestly don't know what I was thinking. While trying to fix a small error message I was getting (irrelevant to my now much bigger issue), I found a post that said to "just run pacdiff and overwrite old files", and without much looking into it, I began to overwrite. I did have the fortitude to back up /etc/shadow before overwriting it, and I stopped overwriting after just a few entries, but I am now locked out of root and all my users. I'm on the computer right now, and I'm scared to restart because I also overwrote /etc/grub.d (which I did not back up)! The /etc/shadow backup is in my /home and is owned by root so I cannot read it now, but I do have it. What exactly do pacdiff and "(O)verwrite with pacnew" do? I have found instructions to recover the /etc/shadow file, but will I be able to get into the GRUB boot loader on restart? My root and home partitions are not encrypted, but I do have a LUKS encrypted partition. If worse comes to worst, and I have to reinstall without formatting my /home and encrypted partition, is any instance of cryptsetup able to open a LUKS encrypted partition given the passphrase? The information needed to mount and open is stored in the LUKS partition header, so I should be good, right? I am unable to find out which cipher I am using as I don't have root access, but I'm almost positive it's the default. How do I proceed from here? I'm not going to shut down this machine until I have a plan in place. I'm on a dual-boot Manjaro/Windows setup. I do have a bootable Manjaro USB and another machine if need be. Any help would be much appreciated. This is not my proudest moment, but a great lesson learned. I need you guys more than ever.
Locked out: overwrote /etc/shadow and /etc/grub.d with pacnew
If the hash identifier is lost, the hash will be interpreted as coming from the DES crypt algorithm (the default). The real password won't match, so the user will in effect be locked out of the account, but the account itself won't be locked down, so for example root will still be able to access it. It may be possible to find a password which hashes to the stored hash when interpreted as a DES hash, but only for users who have read access to the file containing the hashes. See How to find the hashing algorithm used to hash passwords? for details of hash identification.
I was reading the shadow file structure explained here. I was wondering what happens if, perhaps due to some error or wrong manual change, the $id field representing the hashing algorithm is missing. Would the hash be interpreted using a default system hashing algorithm, or would the account be locked down for having no hashing algorithm associated with it?
Encrypted password with no linked algorithm
Yes, the passwd command first writes the modified contents of the /etc/shadow file in full to /etc/nshadow, runs fsync() to ensure the nshadow file is actually written to the disk, and then renames /etc/nshadow to /etc/shadow. This is done to eliminate the possibility of ever having an incomplete file in place as /etc/shadow, even for the briefest time. POSIX specifications say that file rename operations within a single filesystem must be atomic, i.e. any other operations must only be able to see the rename operation as either "not started yet" or "fully completed", never in any kind of "in progress" half-way state. The pwconv command will also produce /etc/npasswd and /etc/nshadow when you use it to convert an archaic non-shadowed password file to the shadowed format. Some versions of pwconv may require the system administrator to move those files into place manually. If /etc/nshadow exists on your system, it might be a remnant of a pwconv command run at some time in the past... or it might be there because the rename("/etc/nshadow", "/etc/shadow") system call at the end of some password change operation failed. Such a failure would suggest possible filesystem corruption, or other problems. If the timestamp of the nshadow file is Jul 25 06:43, then you might want to find out what happened on the system at that time. Was there a problem of some sort that has since then been fixed, or did someone run the pwconv command for any reason? If the root password was changed using some sort of automation tool, you might want to find out exactly what that automation tool will actually do. Perhaps it will run pwconv for whatever reason.
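Condensed into code, the sequence looks roughly like this hypothetical C sketch (not the actual passwd source; error handling trimmed to the essentials):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: durably replace /etc/shadow. A partially written file can
 * only ever exist under the name /etc/nshadow; rename(2) is atomic,
 * so /etc/shadow is always either the old or the new complete file. */
int replace_shadow(const char *data, size_t len)
{
    int fd = open("/etc/nshadow", O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd == -1)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) == -1) {
        close(fd);
        unlink("/etc/nshadow");   /* clean up the temporary file */
        return -1;
    }
    close(fd);
    return rename("/etc/nshadow", "/etc/shadow");
}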
I saw something abnormal after changing the root password in Linux. When I typed ls -al /etc/ | grep shadow after changing the root password, the result was as below.
-r-------- 1 root root 653 Mar 9 2018 gshadow
-r-------- 1 root root 800 Jul 25 06:43 shadow
-r-------- 1 root root 796 Jul 25 06:43 shadow-
But sometimes the result differs from the above.
-r-------- 1 root root 653 Mar 9 2018 gshadow
-r-------- 1 root root ? Jul 25 06:43 nshadow
-r-------- 1 root root ? Jul 25 06:43 shadow
-r-------- 1 root root ? Jul 25 06:43 shadow-
I'm just showing an example and don't remember the exact sizes of those files (nshadow, shadow, shadow-). From my research, /etc/nshadow is written by passwd when changing the password, and then passwd just renames /etc/nshadow to /etc/shadow, but I don't know whether that is correct. Anyway, what is /etc/nshadow, and why is this file generated? Please let me know the reason.
Why does the /etc/nshadow file remain after changing the root password?
Leave the password field blank. newusers will complain repeatedly about 'No password supplied' and being unable to change the password, but the users will be created with ! (i.e. invalid password) in the shadow password field. Use:
username::1002:1002::/home/username:/bin/bash
instead of:
username:*:1002:1002::/home/username:/bin/bash
For example:
# echo "username::10000:10000::/home/username:/bin/bash" | newusers
No password supplied
No password supplied
No password supplied
newusers: (user username) pam_chauthtok() failed, error: Authentication token manipulation error
newusers: (line 1, user username) password not changed
# tail -1 /etc/passwd
username:x:10000:10000::/home/username:/bin/bash
# tail -1 /etc/shadow
username:!:16713:0:99999:7:::
Doing:
# newusers
username::1002:1002::/home/username:/bin/bash
^D
No password supplied
No password supplied
No password supplied
newusers: (user username) pam_chauthtok() failed, error: Authentication token manipulation error
newusers: (line 1, user username) password not changed
adds the user with no password (with hash "!" in /etc/shadow). Doing:
# newusers
username:*:1002:1002::/home/username:/bin/bash
^D
adds the user with "*" treated as the plaintext password (so some hash ends up in /etc/shadow). However, what I need is to add that "username" user with a literal "*" in the hash field of /etc/shadow. How do I do that?
How to bulk add disabled-password users?
For salt, the idea is simple: traditional DES-based crypt, for example, stores 2 characters of salt plus 11 characters of hashed password. Let's say the outcome is SSHHHHHHHHHHH. That's what's stored in the shadow file, i.e. both the salt and the hash. Once I type my password, the library gets the shadow entry, extracts the salt (the first two characters), combines the salt with my unencrypted password (the one I just typed) and generates a new hash. If the new hash is the same as the one in the shadow file, then my password matches the original one. The idea is the same behind all cases of salted hashes. The format in the shadow file has changed a bit, to make it easier to distinguish between different hash algorithms, but all that is stored is still: (optionally) an id that identifies the hashing function, the salt and the hash. Then, for any password typed in, the system generates a new hash (by applying the hashing function to the salt+password) and compares it with the stored one. For LDAP I believe this is the same, i.e. the system fetches the entry from LDAP and performs the same set of operations.
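This comparison is exactly what crypt(3) does when handed the stored string: the $id$salt$ prefix tells it which algorithm and salt to use. A minimal hedged C sketch (the shadow entry shown is a made-up placeholder, not a real hash):

#include <crypt.h>    /* link with -lcrypt */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* hypothetical field copied from /etc/shadow */
    const char *stored = "$6$EXAMPLESALT$...";
    const char *typed  = "password-the-user-just-typed";
    /* crypt() extracts the id and salt from the stored string itself */
    char *computed = crypt(typed, stored);
    if (computed != NULL && strcmp(computed, stored) == 0)
        puts("password matches");
    else
        puts("password does not match");
    return 0;
}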
Does the password supplied during login get converted to a salted hash and then compared to the one in /etc/shadow? What if the user is in LDAP but not in the shadow file? Would it use Kerberos?
How does the authentication process with the salted hash in shadow work
We figured it out: the kscreensaver file located in /etc/pam.d was misconfigured during a recovery update. We had backup files of the kscreensaver configuration and simply copied them back with cp to restore the condition before the update.
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
#
auth required pam_listfile.so file=/etc/allowed.nmr.users item=user sense=allow onerr=fail
auth required pam_env.so
auth sufficient pam_fprintd.so
auth [default=1 success=ok] pam_localuser.so
auth [success=done ignore=ignore default=die] pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth sufficient pam_sss.so forward_pass
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 1000 quiet
account [default=bad success=ok user_unknown=ignore] pam_sss.so
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password sufficient pam_sss.so use_authtok
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session optional pam_oddjob_mkhomedir.so umask=0077
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_sss.so
Then we restarted sssd:
systemctl restart sssd
After restarting sssd, the unlock function for Active Directory users was restored.
We have a Linux system running CentOS 7 and have an issue with the screen lock. We have a multi-user environment where each user has their own account. Authentication uses our university Active Directory. Only local accounts use the passwd and shadow files, and indeed if a local account locks the screen they are able to unlock it. All other users are authenticated using the AD and get an authentication error when they try. We are using sssd. This is from the secure log:
Oct 30 08:59:54 b400 kcheckpass[94374]: pam_listfile(kscreensaver:auth): Refused user teach for service kscreensaver
Oct 30 08:59:55 b400 kcheckpass[94374]: pam_sss(kscreensaver:auth): authentication failure; logname=syin uid=1005 euid=1005 tty=:0 ruser= rhost= user=teach
Oct 30 08:59:55 b400 kcheckpass[94374]: pam_sss(kscreensaver:auth): received for user teach: 17 (Failure setting user credentials)
Oct 30 09:00:02 b400 gdm-launch-environment]: pam_unix(gdm-launch-environment:session): session opened for user gdm by ouidad(uid=0)
Oct 30 09:00:03 b400 polkitd[663]: Registered Authentication Agent for unix-session:c243 (system bus name :1.20066 [/usr/bin/gnome-shell], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
This is from messages:
Oct 30 08:59:55 b400 [sssd[krb5_child[94379]]]: Preauthentication failed
Oct 30 08:59:55 b400 [sssd[krb5_child[94379]]]: Preauthentication failed
Oct 30 08:59:55 b400 [sssd[krb5_child[94379]]]: Preauthentication failed
Oct 30 08:59:55 b400 kcheckpass[94374]: Authentication failure for teach (invoked by uid 1005)
The sssd logs were either empty or provided no clues. What can I do to make sure that screen unlocks check against the AD instead of the passwd/shadow files?
Screen Lock Checks password file but not Active Directory
ruby-shadow should be installed on all hosts that are managed using Puppet. Verify it is loaded properly by running the command below:
# ruby -e "require 'puppet' ; puts Puppet.features.libshadow?"
true
The package is available for download from http://pkgs.repoforge.org/ruby-shadow/
I have written a manifest using the user resource, as shown below:
node 'node2.example.com','node3.example.com' {
  user { 'ash':
    ensure => 'present',
    managehome => 'true',
    comment => 'Zaman Home',
    home => '/home/ash',
    shell => '/bin/bash',
    expiry => 'absent',
    password => '$1$cs1j/t.D$4qjZLwFQ2Ocr0pulyNTUx/',
    password_min_age => '30',
    password_max_age => '60',
  }
}
The user is created successfully, as shown by this /etc/passwd entry:
ash:x:503:503:Zaman Home:/home/ash:/bin/bash
But the issue is that /etc/shadow is not getting updated:
ash:!!:16875:0:99999:7:::
I have the ruby-shadow package installed:
# ruby -e "require 'puppet' ; puts Puppet.features.libshadow?"
true
Versions are as follows:
# ruby --version
ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
# puppet --version
3.8.5
Please suggest a fix.
users resource in Puppet not updating /etc/shadow
Use sudo: sudo perl -pi -e 's/^an24:/an24:\*LK*/g' /etc/shadow
The following command is being executed by a non-root user:
perl -pi -e 's/^an24:/an24:\*LK*/g' /etc/shadow
A permission-denied error is issued, which indicates a privilege problem. Can such a command be executed at all? I tried setting the setuid and setgid bits on the script executing this command, with no success. The operating system is Solaris 10.
Perl - /etc/shadow - Permission Denied
File permissions do not really apply to root: Programs running as root can read and write files regardless of protection settings. (However, even root cannot execute a file unless one of the execute bits is set; it does not matter which one). That explains why cat can do it. But apparently, rsync runs its own check in order to rein in what might get copied. So it's not a "limitation" but intended behavior (not that it's any consolation to you). I say "apparently" because I haven't found any documentation for this behavior. (Apologies if I'm just restating your question! Not quite sure from your description.) If your problem is limited to the shadow files, I'd be inclined to add the call to cat to your backup script, and ignore the rsync error for these files. You could add logic to only sync the shadow files when they've changed, but really they're small enough that I wouldn't bother.
I am trying to back up several servers' /etc by using rsync from another server. Here's a relevant snippet:
PRIVKEY=/path/to/private.key
RSYNC_OPTS="--archive --inplace --no-whole-file --quiet"
ssh -n -i $PRIVKEY root@${ip} "rsync $RSYNC_OPTS /etc 192.168.25.6::"'$(hostname)'
where ${ip} is the IP address of the server to be backed up, while 192.168.25.6 is the IP address of the server holding the backup. Everything went well, except for the /etc/{,g}shadow files on some servers. Since their permissions are 0000, rsync seems to not want to read them ("Permission denied (13)" errors). A quick check using the following:
ssh -i $PRIVKEY root@${ip} "cat /etc/{,g}shadow"
successfully dumped the files. Is there a way to get round this rsync limitation? Edits: The backup server is Ubuntu 12.04. The servers to be backed up are a smorgasbord of Ubuntu, RHEL, OEL, and CentOS servers. I've tried adding -t to the ssh options, and prefixed rsync with sudo, but still have the same errors.
How to backup /etc/{,g}shadow files with 0000 permission?
The second field (j9T) is not the salt; it's the param field (the hash complexity parameter). You can read more about the format of the hash here and here. Your salt is actually the third field, and you can see it's different. The actual hash is the fourth field.
I had created two users on Linux with the exact same password, but when I looked at the /etc/shadow file, I found that the hashed values look different, although the salt is the same. (Please see below; j9T is the salt.) Why are the hashed passwords not the same, although the salt and password are the same?
# tail /etc/shadow
Bob:$y$j9T$ewJ0HB756BZDnPjx7zzbm0$i39AKrfuQuvvoQJpujwWd7Z4bcZgN1l0IWeJsNmLzg7:19254:0:99999:7:::
Bob:$y$j9T$pFF5c93UZvdFYD2nanxEO.$SMhaxtPUPEUZdZZx.b1tGmjXgM67nqBJgMk2sNP.5s4:19254:0:99999:7:::
Hashed passwords are NOT similar although the salt and password are similar
The password hashing schemes $5$ and $6$ are not just single SHA256 or SHA512 hashes of the password + salt. That would be way too easy to brute-force. Instead, the hashing is iterated a configurable number of times: if the password hash does not include a $rounds=N$ specifier, the default is 5000 iterations. You'll find a complete description of the algorithm at: https://www.akkadia.org/drepper/SHA-crypt.txt OpenSSL includes a tool that can be used to calculate password hashes. For example, to manually check the password EasyDiamond for your userA, you would run:
openssl passwd -5 -salt FSRFdXhqehxiiFZV EasyDiamond
The output will be the result of the chosen password hashing algorithm (option -5 specifies the $5$ algorithm) with the specified salt (from the /etc/shadow line) and password. If it matches what is listed for userA in your /etc/shadow, you now know userA's password is EasyDiamond. The man page for openssl passwd may be a bit tricky to access, since OpenSSL usually has a separate man page for each subcommand, and this subcommand is named passwd, which clashes with the regular passwd command. Different distributions may solve this in different ways, but a common way is to use a man section name with a suffix: in the Debian/Ubuntu family of Linux distributions at least, man 1ssl passwd will produce the man page for the openssl passwd subcommand. Or you might use man -a passwd to get all the possible man pages for passwd, including at least /usr/bin/passwd, openssl passwd and /etc/passwd.
I have the following users with passwords from /etc/shadow:
userA:$5$FSRFdXhqehxiiFZV$SoIvO/4Y2tvOzIi.8p1Ud6AInQ3K5XT/WqVE5Zh4GT8:18388:0:99999:7:::
userB:*:18388:0:99999:7:::
userC:$6$SJRweUUklycIq1C$ZKCPyeM9bAoTynYioSYBOmZTATXsajucfHNE3ZNfWNmql1GKdsYCTprf/aXOspBxxzlRuDEvjRlzLf7rbx.fy0:18388:0:99999:7:::
userD:$6$YgVQv3fSdlYwR$yOn6MBS5dGhMoFmPri4tsLYFzFgd0.nc8VYrSBykn/4qQwGV31NhMtoV/VJfhNqkA.FH0oP7GKxqYyK/4/0nr.:18388:0:99999:7:::
userE:$1$8HptDKnp$w32YYwwlxi9F.2JDO/gSA.:18388:0:99999:7:::
userF:$6$DWKlq62oU9k8O/z$aMNpueRgSIcILIpSMNV.gnSven6kgNbJ4QJlQM1E32snjhndk3LvfWtnR4NoiFIE4NwC7Kga7PZTlWDaxD0Gd1:18388:0:99999:7:::
userG:X014elvznJq7E:18388:0:99999:7:::
userH:$6$yhVabCDsrFv$CImM3mQGwX6Scbi/mGsl/jwKFJcdnsEq/Wjlve6ApB21ytw6/.weDMi6QkjJh3RiCO9xTBatNjwxd7vUddRS2/:18388:0:99999:7:::
Furthermore, I know someone uses "EasyDiamond" as a password, and I want to find out who. I tried using online hashing tools to get the output hash, but unfortunately didn't get a single match. E.g. for userA I tried hashing EasyDiamondFSRFdXhqehxiiFZV with SHA-256 and so on. What am I doing wrong?
Define the user with the specific password
Setting a value that is not a valid password hash, such as *LK* to the password hash field of /etc/shadow will block all forms of password authentication, but if the user has other authentication mechanisms (like fingerprints or SSH keys) configured, they will still work. As a special case of this, passwd -l <username> will prefix the existing hash with a single exclamation mark, so it is easy to undo with passwd -u <username>. Other invalid values such as *LK* can be used to indicate that a user account is not meant to be ever used for authentication, e.g. because the account is used to run some system service in a context that has no special privileges nor is associated with any particular human user. Refer to man 5 shadow:If the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means).Likewise, man passwd includes this note on the -l option (lock the password) on modern distributions:Note that this does not disable the account. The user may still be able to login using another authentication token (e.g. an SSH key). To disable the account, administrators should use usermod --expiredate 1 (this set the account's expire date to Jan 2, 1970).(Aaron D. Marasco's suggestion that it disallows login and all forms of authentication used to be correct, but the semantics were changed within the last 10 years or so, in order to have a uniform way to disable password authentication but still allow the account to be used for other forms of authentication. This has tripped up many people in the past: there are a lot of books and other documentation around with outdated information on this.) Setting the user's shell to something that is not an interactive shell, such as /bin/false or /usr/sbin/nologin will disable shell access, but if the user account is usable in other ways that use password authentication (e.g. for IMAP email access or FTP file transfers), those can still work. So in a nutshell:to block all forms of authentication to login to or otherwise use an user account on both old & new systems, use usermod -e 1 <username> to disable shell access but allow other services, set user's shell to /usr/sbin/nologin, /bin/false or something similarly non-interactive. to disable password authentication but allow other authentication methods to work, use passwd -l <username> or other methods to set the password hash to an invalid value. to indicate to other sysadmins that the user account has been intentionally disabled with extreme prejudice, apply all of the above :-)
What is the practical difference between setting *LK* in /etc/shadow and setting /usr/sbin/nologin in /etc/passwd? When would we choose one over the other? When would we combine them?
Difference between *LK* in /etc/shadow and /usr/sbin/nologin in /etc/passwd
I found a discussion on the Arch Linux forums related to this issue: https://bbs.archlinux.org/viewtopic.php?id=234525 As per this discussion, the users are created by the systemd-sysusers component. This component creates system users and groups and runs during the installation/upgrade of systemd. The configuration files for systemd-sysusers are:
/etc/sysusers.d/*.conf
/run/sysusers.d/*.conf
/usr/lib/sysusers.d/*.conf
You can grep through these files for the 'http' and 'ftp' users. As per systemd conventions, packages are expected to add files under the /usr/lib/sysusers.d path. You can override them in /etc/sysusers.d. To completely disable a package-provided config file, create a symlink to /dev/null. Man pages: systemd-sysusers(8), sysusers.d(5)
Once in a while, my server throws this error when trying to run the shadow service. I can delete the users (http and ftp) but they keep reappearing. I don't want to add the directories, nor do I need the users. Why do they keep coming back and how can I stop this? UPDATE: I just saw this during an update: (17/35) upgrading systemd Creating group ftp with gid 11. Creating user ftp (n/a) with uid 14 and gid 11. Creating group http with gid 33. Creating user http (n/a) with uid 33 and gid 33. (18/35) upgrading cockpitArchLinux (I know, I know) dockerized everything except cockpit Package list
Shadow service "user 'ftp': directory '/srv/ftp' does not exist"
pwck is probably what you seek.The pwck command verifies the integrity of the users and authentication information. It checks that all entries in /etc/passwd and /etc/shadow have the proper format and contain valid data. The user is prompted to delete entries that are improperly formatted or which have other uncorrectable errors.Similarly, grpck verifies the integrity of the group information files.The grpck command verifies the integrity of the groups information. It checks that all entries in /etc/group and /etc/gshadow have the proper format and contain valid data. The user is prompted to delete entries that are improperly formatted or which have other uncorrectable errors.
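If you'd rather implement the core checks yourself, here is a hedged C sketch of the tests the question below asks for (run as root so /etc/shadow is readable; the output format is my own, and pwck/grpck remain the robust tools). Redirecting its output, e.g. ./check > errors, gives the error file the question mentions:

#include <grp.h>
#include <pwd.h>
#include <shadow.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct passwd *pw;
    struct stat st;
    while ((pw = getpwent()) != NULL) {
        if (getspnam(pw->pw_name) == NULL)
            printf("%s: no entry in /etc/shadow\n", pw->pw_name);
        if (getgrgid(pw->pw_gid) == NULL)
            printf("%s: primary group %u does not exist\n",
                   pw->pw_name, (unsigned)pw->pw_gid);
        if (stat(pw->pw_dir, &st) == -1)
            printf("%s: home directory %s missing\n", pw->pw_name, pw->pw_dir);
        else if (st.st_uid != pw->pw_uid || st.st_gid != pw->pw_gid)
            printf("%s: home directory %s has wrong owner/group\n",
                   pw->pw_name, pw->pw_dir);
    }
    endpwent();
    return 0;
}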
I would like assistance with something I have to do. I need to verify if all users in passwd are also in shadow, if the primary group exists, if the homedir exists and if it belongs to the correct user/group. If something is wrong, it should output it to a new file, called for example "errors". How can I implement a script that does this?
Need help with a script that uses passwd and shadow
No, sorry, you can't use the old hash directly to generate the new one. For one, you're talking about having the SHA-256 hash, and then using that to calculate a SHA-512 based hash, which won't happen without inverting the SHA-256 hash. Second, even if you were to use the crypt algorithm based on the same hash ($5$ for the SHA-256 based crypt), pretty much the first step in your link is still "Start by computing the Alternate sum, sha512(password + salt + password)", and for that, you need the plaintext password. The way to make the transition from one password hash to another work, is to either force your users to refresh their passwords in some way that lets you generate the new hashes; or more nicely, modify the login routine so that it accepts a login against the old hash if the new one doesn't exist, and at the same time generates the new hash. That way you can eventually retire the old hashes, except for accounts that are forgotten.
My goal: I have an old MySQL database with usernames and passwords hashed using SHA-256. I don't have the original passwords and it is not possible to learn them. This is a simple user migration from a database to a Linux system. I need to create all those users on Linux, because that Linux authentication will be used elsewhere, and we don't want to make users create credentials again. My thoughts so far: I have seen that you can use crypt to generate the base64-style format that the shadow file uses, but that requires the original password. I understand the process in Linux is like this:
1. Generate a simple sha512 hash based on the salt and password.
2. Loop 5000 times (by default), calculating a new sha512 hash based on the previous hash concatenated with alternatingly the hash of the password and the salt. Additionally, sha512-crypt allows you to specify a custom number of rounds, from 1000 to 999999999.
3. Use a special base64 encoding on the final hash to create the password hash string.
Ref: https://www.vidarholen.net/contents/blog/?p=33 Since the first step is generating the hash with the salt, I think it should be possible. Since Linux is open source, I believe I could get the source code of crypt somewhere like https://code.woboq.org/userspace/glibc/crypt/crypt-entry.c.html and start at step 2, but before doing all that I don't know if it will work out, and I was wondering if there is an easier way. If there is not, where could I get the correct source code for doing that? I am not sure if the link I provided is correct. Thanks a lot.
How can I use an existing SHA-256 password hash to allow login authorisation?
You can try the dates on the user's ~/.login file, or the first login date in wtmp. But neither of these methods will give a trustworthy answer to your question.
Kindly assist: I need to get user creation details (date and time) and user login details on Solaris 10.
How to get user creation details
Answering things in order:
1. It returns a pointer to the location in virtual memory, and virtual memory address space is allocated, but the file is not locked in any way unless you explicitly lock it (also note that locking the memory is not the same as locking the region in the file). An efficient implementation of mmap() is actually only possible from a practical perspective because of paging and virtual memory (otherwise, it would require reading the whole region into memory before the call completes).
2. Not exactly; this ties into the next answer though, so I'll cover it there.
3. Kind of. What's actually happening in most cases is that mmap() is providing copy-on-write access to that file's data in the page cache. As a result, the usual cache restrictions on data lifetime apply: if the system needs space, pages can be dropped (or flushed to disk if they're dirty) from the cache and need to be faulted in again.
4. No, because of how virtual memory works. Each process has its own virtual address space, with its own virtual mappings. Every program that wants to communicate data will have to call mmap() on the same file (or shared memory segment), and they all have to use the MAP_SHARED flag.
It's worth noting that mmap() doesn't just work on files; you can also do other things with it, such as:
Directly mapping device memory (if you have sufficient privileges). This is actually used on many embedded systems to avoid the need to write kernel-mode drivers for new hardware.
Mapping shared memory segments.
Explicitly mapping huge pages.
Allocating memory that you can then call madvise(2) on, which in turn lets you do useful things like prevent data from being copied to a child process on fork(2), or mark data for KSM, Linux's memory deduplication feature.
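To tie points 3 and 4 together, here is a minimal hedged C sketch of a file-backed MAP_SHARED mapping; any other process that maps the same file with MAP_SHARED will see the write (the file name is arbitrary):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd == -1 || ftruncate(fd, 4096) == -1)
        return 1;
    /* the kernel returns a virtual address backed by the page cache */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    strcpy(p, "visible to every MAP_SHARED mapper of this file");
    munmap(p, 4096);   /* the data stays in the page cache / file */
    close(fd);
    return 0;
}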
I was going through documentation regarding mmap here and tried to implement it using this video. I have a few questions regarding its implementation.
1. Does mmap provide a mapping of a file and return a pointer to that location in physical memory, or does it return the address of a mapping table? And does it allocate and lock space for that file too?
2. Once the file is stored at that location in memory, does it stay there until munmap is called?
3. Is the file even moved to memory, or is it just a mapping table that serves as a redirection while the file actually stays in virtual memory (on disk)?
4. Assuming it is moved to memory, can other processes access that space to read data if they have the address?
Understanding mmap
There is no difference between tmpfs and shm: tmpfs is the new name for shm, and shm stands for SHaredMemory. See: Linux tmpfs. The main reason tmpfs is even used today is this comment in my /etc/fstab on my gentoo box (BTW Chromium won't build with the line missing):
# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
shm /dev/shm tmpfs nodev,nosuid,noexec 0 0
which came out of the Linux kernel filesystems documentation. Quoting:
tmpfs has the following uses:
There is always a kernel internal mount which you will not see at all. This is used for shared anonymous mappings and SYSV shared memory. This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not set, the user visible part of tmpfs is not build. But the internal mechanisms are always present.
glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for POSIX shared memory (shm_open, shm_unlink). Adding the following line to /etc/fstab should take care of this:
tmpfs /dev/shm tmpfs defaults 0 0
Remember to create the directory that you intend to mount tmpfs on if necessary. This mount is not needed for SYSV shared memory. The internal mount is used for that. (In the 2.3 kernel versions it was necessary to mount the predecessor of tmpfs (shm fs) to use SYSV shared memory.)
Some people (including me) find it very convenient to mount it e.g. on /tmp and /var/tmp and have a big swap partition. And now loop mounts of tmpfs files do work, so mkinitrd shipped by most distributions should succeed with a tmpfs /tmp.
And probably a lot more I do not know about :-)
tmpfs has three mount options for sizing:
size: The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock since the OOM handler will not be able to free that memory.
nr_blocks: The same as size, but in blocks of PAGE_CACHE_SIZE.
nr_inodes: The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower.
From the Transparent Hugepage kernel doc:
Transparent Hugepage Support maximizes the usefulness of free memory if compared to the reservation approach of hugetlbfs by allowing all unused memory to be used as cache or other movable (or even unmovable entities). It doesn't require reservation to prevent hugepage allocation failures to be noticeable from userland. It allows paging and all other advanced VM features to be available on the hugepages. It requires no modifications for applications to take advantage of it. Applications however can be further optimized to take advantage of this feature, like for example they've been optimized before to avoid a flood of mmap system calls for every malloc(4k). Optimizing userland is by far not mandatory and khugepaged already can take care of long lived page allocations even for hugepage unaware applications that deals with large amounts of memory.
New comment after doing some calculations:
HugePage Size: 2MB
HugePages Used: None/Off, as evidenced by the all 0's, but enabled as per the 2MB above.
DirectMap4k: 8.03Gb
DirectMap2M: 16.5Gb
DirectMap1G: 2Gb
Using the paragraph above regarding optimization in THS, it looks as though 8Gb of your memory is being used by applications that operate using mallocs of 4k, and 16.5Gb has been requested by applications using mallocs of 2M.
The applications using mallocs of 2M are mimicking HugePage Support by offloading the 2M sections to the kernel. This is the preferred method, because once the malloc is released by the kernel, the memory is released to the system, whereas mounting tmpfs using hugepages wouldn't result in a full cleanup until the system was rebooted. Lastly, the easy one: you had 2 programs open/running that requested a malloc of 1Gb. For those of you reading who don't know, malloc is the standard C library call for Memory ALLOCation. These calculations serve as evidence that the OP's correlation between DirectMapping and THS may be correct. Also note that mounting a HUGEPAGE-ONLY fs would only result in gains in increments of 2MB, whereas letting the system manage memory using THS occurs mostly in 4k blocks, meaning that in terms of memory management every malloc call saves the system 2044k (2048k - 4k) for some other process to use.
I've been curious lately about the various Linux kernel memory-based filesystems. Note: as far as I'm concerned, the questions below should be considered more or less optional when compared with a better understanding of what is posed in the title. I ask them because I believe answering them can better help me understand the differences, but as my understanding is admittedly limited, it follows that others may know better. I am prepared to accept any answer that enriches my understanding of the differences between the three filesystems mentioned in the title. Ultimately I think I'd like to mount a usable filesystem with hugepages, though some light research (and still lighter tinkering) has led me to believe that a rewritable hugepage mount is not an option. Am I mistaken? What are the mechanics at play here? Also regarding hugepages:
uname -a
3.13.3-1-MANJARO #1 SMP PREEMPT x86_64 GNU/Linux
tail -n8 /proc/meminfo
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 8223772 kB
DirectMap2M: 16924672 kB
DirectMap1G: 2097152 kB
(Here are full-text versions of /proc/meminfo and /proc/cpuinfo.) What's going on in the above? Am I already allocating hugepages? Is there a difference between DirectMap memory pages and hugepages? Update: after a bit of a nudge from @Gilles, I've added 4 more lines above, and it seems there must be a difference, though I'd never heard of DirectMap before pulling that tail yesterday... maybe DMI or something? Only a little more... Failing any success with the hugepages endeavour, and assuming hard-disk backups of any image files, what are the risks of mounting loops from tmpfs? Is my filesystem being swapped the worst-case scenario? I understand tmpfs is mounted filesystem cache - can my mounted loopfile be pressured out of memory? Are there mitigating actions I can take to avoid this? Last - exactly what is shm, anyway? How does it differ from or include either hugepages or tmpfs?
On system memory... specifically the difference between `tmpfs,` `shm,` and `hugepages...`
The __init function ipc_ns_init sets the initial value of shmmax by calling shm_init_ns, which sets it to the value of the SHMMAX macro. The definition of SHMMAX is in <uapi/linux/shm.h>:
#define SHMMAX (ULONG_MAX - (1UL << 24)) /* max shared seg size (bytes) */
On 64-bit machines, ULONG_MAX is 18446744073709551615 and 1UL << 24 is 16777216, so the macro evaluates to 18446744073709551615 - 16777216 = 18446744073692774399, exactly the value you found.
I'm just wondering where these values are being set and what they default to. Mine is currently 18446744073692774399, and I didn't set it anywhere that I can see.
$ cat /proc/sys/kernel/shmmax
18446744073692774399
$ sysctl kernel.shmmax
kernel.shmmax = 18446744073692774399
Where does Linux set the default values for SHMMAX?
It's perfectly okay to use some directory in /run as long as you have the appropriate rights on it. In some modern distros, /tmp is already a virtual file system in memory, or a symlink to a directory inside /run. If this is your case (you can check in /etc/fstab or /etc/mtab), you can use /tmp as your temporary directory. Also, don't get confused by the article from Debian: the shm_* functions are used to create shared memory segments for inter-process communication. With those functions, you can share a fragment of memory between two or more processes to have them communicate or collaborate using the same data. The processes have the segment of memory attached in their own address space and can read and write there as usual; the kernel deals with the complexity. Those functions are not available as shell functions (and wouldn't be very useful in a shell context). For further information, have a look at man 7 shm_overview. The point of the article is that no program should directly manage the pseudo-files representing shared segments, but should instead use the appropriate functions to create, attach and delete shared memory segments.
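For completeness, this is roughly what using the shm_* functions from C looks like; a hedged sketch (the segment name is arbitrary), which on Linux ends up creating a file under /dev/shm on your behalf:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
/* link with -lrt on older glibc versions */

int main(void)
{
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, 4096) == -1)
        return 1;
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    strcpy(p, "visible to any process that shm_open()s /demo_segment");
    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_segment");   /* remove the segment when done */
    return 0;
}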
Is it good practice to create a directory in /run/shm (formerly /dev/shm) and use that like a temp directory for an application? Background: I am writing black box tests for a program which does a lot of stuff with files and directories. For every test I create a lot of files and directories and then run the program and then create the expected set of files and directories and then run diff to compare. I now have about 40 tests and they are already taking over 2 seconds to run. Hoping to speed things up I want to run the tests in a directory on some sort of ramdisk. Researching about ram disk I stumbled upon a question with an answer stating that it is okay to create a directory in /dev/shm and use that like a temp directory. Researching some more however I stumbled upon a wiki page from debian stating that it is an error to use /dev/shm directly. I should use the shm_* functions. Unfortunately the shm_* functions seem to be not available for use in a shell script. Now I am confused. Is it okay or not to use /run/shm (formerly /dev/shm) like a temp directory?
use `/run/shm` (formerly `/dev/shm`) as a temp directory
The kernel is telling you that when the segfault occurred, the instruction pointer 0x7f3c7c523770 was in a SysV IPC shm segment. The shared memory segment started at 0x7f3c7c4e8000 and was 0x60000 bytes long. SysV shm segments are not backed by a file, so the string SYSV00000000 appears where normally you'd get the filename of the executable or library where the segfault occurred. As a result this log line gives us no really useful information. If you want any hope of tracing the cause of the crash, you need the core dump. I suspect that the instruction pointer wasn't supposed to be in there at all. It's pretty weird to load executable code into a SysV shm segment. But I haven't seen any XFCE code, so what looks weird to me might be normal there. You can learn the basics about SysV shm, assuming you have a decent grasp of the basics of memory management, by reading these man pages:
man svipc
man shmget
man shmat
Run the ipcs command to see what SysV IPC resources are currently allocated; ipcs -m limits the list to just the shared memory segments.
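As a quick illustration of those man pages, here is a minimal hedged C sketch that creates, attaches, and removes a SysV segment of the kind shown in your log:

#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* create a private 4 KiB segment; this is what shows up in ipcs -m */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id == -1)
        return 1;
    char *p = shmat(id, NULL, 0);    /* map the segment into this process */
    if (p == (void *)-1)
        return 1;
    strcpy(p, "data in a SysV shm segment");
    shmdt(p);                        /* detach the segment */
    shmctl(id, IPC_RMID, NULL);      /* mark it for removal */
    return 0;
}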
In my dmesg this appeared when my window manager (xfwm4, part of XFCE) crashed: xfwm4[3936]: segfault at 7f3c7c523770 ip 00007f3c7c523770 sp 00007ffffea1ee28 error 15 in SYSV00000000 (deleted)[7f3c7c4e8000+60000]The same SYSV00000000 also appears in other places (like lsof). So, what is this SYSV00000000? I Googled around and found that it's related to virtual memory, but not much else.
What is "SYSV00000000"?
The answer is "Other". You can get a glimpse of the memory layout with cat /proc/self/maps. On my 64-bit Arch laptop:: 00400000-0040c000 r-xp 00000000 08:02 1186758 /usr/bin/cat 0060b000-0060c000 r--p 0000b000 08:02 1186758 /usr/bin/cat 0060c000-0060d000 rw-p 0000c000 08:02 1186758 /usr/bin/cat 02598000-025b9000 rw-p 00000000 00:00 0 [heap] 7fe4b805c000-7fe4b81f5000 r-xp 00000000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b81f5000-7fe4b83f5000 ---p 00199000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83f5000-7fe4b83f9000 r--p 00199000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83f9000-7fe4b83fb000 rw-p 0019d000 08:02 1182914 /usr/lib/libc-2.21.so 7fe4b83fb000-7fe4b83ff000 rw-p 00000000 00:00 0 7fe4b83ff000-7fe4b8421000 r-xp 00000000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b85f9000-7fe4b85fc000 rw-p 00000000 00:00 0 7fe4b85fe000-7fe4b8620000 rw-p 00000000 00:00 0 7fe4b8620000-7fe4b8621000 r--p 00021000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b8621000-7fe4b8622000 rw-p 00022000 08:02 1183072 /usr/lib/ld-2.21.so 7fe4b8622000-7fe4b8623000 rw-p 00000000 00:00 0 7ffe430c4000-7ffe430e5000 rw-p 00000000 00:00 0 [stack] 7ffe431ed000-7ffe431ef000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]You can see that the executable gets loaded in low memory, apparently .text segment, read-only data, and .bss. Just about that is "heap". In much higher memory the C library and the "ELF file interpreter", "ld-so" get loaded. Then comes the stack. There's only one stack and one heap for any given address space, no matter how many shared libraries get loaded. cat only seems to get the C library loaded. Doing cat /proc/$$/maps will get you the memory mappings of the shell from which you invoked cat. Any shell is going to have a number of dynamically loaded libraries, but zsh and bash will load in a large number. You'll see that there's just one "[heap]", and one "[stack]". If you call dlopen(), the shared object file will get mapped in the address space at a higher address than /usr/lib/libc-2.21.so. There's something of an "implementation dependent" memory mapping segment, where all addresses returned by mmap() show up. See Anatomy of a Program in Memory for a nice graphic. The source for /usr/lib/ld-2.21.so is a bit tricky, but it shares a good deal of its internals with dlopen(). dlopen() isn't a second class citizen. "vdso" and "vsyscall" are somewhat mysterious, but this Stackoverflow question has a good explanation, as does Wikipedia.
when loading a shared library in Linux system, what is the memory layout of the shared library? For instance, the original memory layout is the following: +-----------+ |heap(ori) | +-----------+ |stack(ori) | +-----------+ |.data(ori) | +-----------+ |.text(ori) | +-----------+When I dlopen foo.so, will the memory layout be A or B? A +-----------+ |heap(ori) | +-----------+ |stack(ori) | +-----------+ |.data(ori) | +-----------+ |.text(ori) | +-----------+ |heap(foo) | +-----------+ |stack(foo) | +-----------+ |.data(foo) | +-----------+ |.text(foo) | +-----------+Or B +-----------+ |heap(ori) | +-----------+ |heap(foo) | +-----------+ |stack(foo) | +-----------+ |stack(ori) | +-----------+ |.data(foo) | +-----------+ |.data(ori) | +-----------+ |.text(foo) | +-----------+ |.text(ori) | +-----------+Or anything other than A and B... ?
Memory layout of dynamic loaded/linked library
User mode processes can use Interprocess Communication (IPC) to communicate with each other; the fastest method of achieving this is by using shared memory pages (shmpages). This happens for example if banshee plays music and vlc plays a video: both processes have to access pulseaudio to output some sound. Try to find out more about shared memory configuration and usage with some of the following commands. Display the shared memory configuration:

sysctl kernel.shm{max,all,mni}

By default (Linux 2.6) this should output:

kernel.shmmax = 33554432
kernel.shmall = 2097152
kernel.shmmni = 4096

shmmni is the maximum number of allowed shared memory segments, shmmax is the maximum allowed size of a single shared memory segment (here 32 MB) and shmall is the maximum total size of all segments (counted in pages; here it translates to 8 GB). Display the currently used shared memory:

grep Shmem /proc/meminfo

If enabled by the distribution:

ls -l /dev/shm

ipcs is a great tool to find out more about IPC usage: ipcs -m will output the shared memory usage, so you can see the allocated segments with their corresponding sizes.

ipcs -m -i <shmid>

shows more information about a specified segment, including the PID of the process that created it (cpid) and the last one that used it (lpid). ipcrm can remove shared memory segments (but be aware that they only get removed once no other processes are attached to them; see the nattch column in ipcs -m).

ipcrm -m <shmid>

Running out of shared memory can be caused by a program using a lot of shared memory, a program which does not detach its allocated segments properly, modified sysctl values, and so on. This is not Linux specific and also applies to (most) UNIX systems (shared memory first appeared in CB UNIX).
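To see where those ipcs numbers come from, here is a minimal sketch of the SysV shared memory lifecycle; the key, size and permissions are arbitrary and error handling is kept to a minimum:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* shmget() allocates a segment that would show up in ipcs -m */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    char *mem = shmat(shmid, NULL, 0);     /* attach: nattch becomes 1 */
    if (mem == (void *) -1) { perror("shmat"); return 1; }
    strcpy(mem, "hello");
    printf("shmid %d says: %s\n", shmid, mem);

    shmdt(mem);                            /* detach: nattch drops to 0 */
    shmctl(shmid, IPC_RMID, NULL);         /* remove, as ipcrm -m would */
    return 0;
}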
What exactly are shmpages in the grand scheme of kernel and memory terminology? If I'm hitting a shmpages limit, what does that mean? I'm also curious whether this applies to more than Linux.
What are shmpages in laymans terms?
Curious, as you're running this application, what does df -h /dev/shm show your RAM usage to be?

tmpfs

By default it's typically set up with 50% of whatever amount of RAM the system physically has. This is documented here on kernel.org, under the filesystem documentation for tmpfs. It's also mentioned in the mount man page.

excerpt from mount man page

The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower.

confirmation

On my laptop with 8GB RAM I have the following setup for /dev/shm:

$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.9G  4.4M  3.9G   1% /dev/shm

What's going on? I think what's happening is that, in addition to being allocated 50% of your RAM to start, you're essentially consuming the entire 50% over time and are pushing your /dev/shm space into swap, along with the other 50% of RAM. Note that one other characteristic of tmpfs vs. ramfs is that tmpfs can be pushed into swap if needed:

excerpt from geekstuff.com Table: Comparison of ramfs and tmpfs

Experimentation                            Tmpfs               Ramfs
---------------                            -----               -----
Fill maximum space and continue writing    Will display error  Will continue writing
Fixed Size                                 Yes                 No
Uses Swap                                  Yes                 No
Volatile Storage                           Yes                 Yes

At the end of the day it's a filesystem implemented in RAM, so I would expect it to act a little like both. What I mean by this is that as files and directories are created, you're using some of the physical pages of memory for the inode table, and some for the actual space consumed by these files and directories. Typically when you free up space on a HDD you don't actually erase the physical space, just the entries in the inode table, which say that the space consumed by a specific file is now available. So from the RAM's perspective the space consumed by deleted files is just dirty pages in memory, and the system will dutifully swap them out over time. It's unclear whether tmpfs does anything special to clean up the actual RAM used by the filesystem that it's providing. I saw mentions in several forums of people seeing it take upwards of 15 minutes for their system to "reclaim" space for files that they had deleted in /dev/shm. Perhaps the paper I found on tmpfs, titled tmpfs: A Virtual Memory File System, will shed more light on how it is implemented at the lower level and how it functions with respect to the VMM. The paper was written specifically for SunOS but might hold some clues.

experimentation

The following contrived tests seem to indicate /dev/shm is able to clean itself up.

experiment #1

Create a directory with a single file inside it, and then delete the directory, 1000 times.

initial state of /dev/shm

$ df -k /dev/shm
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs            3993744  5500   3988244   1% /dev/shm

fill it with files

$ for i in `seq 1 1000`;do mkdir /dev/shm/sam; echo "$i" \
      > /dev/shm/sam/file$i; rm -fr /dev/shm/sam;done

final state of /dev/shm

$ df -k /dev/shm
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs            3993744  5528   3988216   1% /dev/shm

experiment #2

Create a directory with a single 50MB file inside it, and then delete the directory, 300 times.

fill it with 50MB files of random garbage

$ start_time=`date +%s`
$ for i in `seq 1 300`;do mkdir /dev/shm/sam; \
      dd if=/dev/random of=/dev/shm/sam/file$i bs=52428800 count=1 > \
      /dev/shm/sam/file$i.log; rm -fr /dev/shm/sam;done \
      && echo run time is $(expr `date +%s` - $start_time) s
...
8 bytes (8 B) copied, 0.247272 s, 0.0 kB/s
0+1 records in
0+1 records out
9 bytes (9 B) copied, 1.49836 s, 0.0 kB/s
run time is 213 s

final state of /dev/shm

Again there was no noticeable increase in the space consumed by /dev/shm.

$ df -k /dev/shm
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs            3993744  5500   3988244   1% /dev/shm

conclusion

I didn't notice any discernible effects from adding files and directories in my /dev/shm. Running the above multiple times didn't seem to have any effect on it either. So I don't see any issue with using /dev/shm in the manner you've described.
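For completeness, a rough C analogue of experiment #1, assuming POSIX shm_open() is available; the object name /sam mirrors the directory name used above, and on older glibc you would link with -lrt:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        /* shm_open() creates /dev/shm/sam under the hood */
        int fd = shm_open("/sam", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);       /* consume one page of tmpfs */
        close(fd);
        shm_unlink("/sam");        /* the page should be reclaimed */
    }
    puts("done; compare df -k /dev/shm before and after");
    return 0;
}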
I am repeating tens of thousands of similar operations in /dev/shm, each with a directory created, files written, and then removed. My assumption used to be that I was actually creating directories and removing them in place, so the memory consumption had to be quite low. However it turned out the usage was rather high, and finally caused memory overflow. So my question is: with operations like

mkdir /dev/shm/foo
touch /dev/shm/foo/bar
[edit] /dev/shm/foo/bar
....
rm -rf /dev/shm/foo

will it finally cause memory overflow? And if it does, why is that, since it seems to be removing them in place? Note: these are tens of thousands of similar operations.
operation in /dev/shm causes overflow
Varnish appears to use a plain memory-mapped file for its shared memory (instead of, e.g., POSIX shm_open). From the source:

loghead = mmap(NULL, heritage.vsl_size,
    PROT_READ|PROT_WRITE,
    MAP_HASSEMAPHORE | MAP_NOSYNC | MAP_SHARED,
    heritage.vsl_fd, 0);

On BSD, MAP_NOSYNC requests that the kernel not write the shared data to disk unless forced (e.g., to free up memory). When it's mlocked as well, that should almost never happen. Unfortunately, Linux does not support MAP_NOSYNC. So Linux will wind up routinely writing dirtied (changed) pages from the cache to disk. Putting the cache on a tmpfs will avoid that. So too would Varnish using POSIX or SysV shared memory (actually, POSIX shared memory is implemented on Linux with a tmpfs mounted at /dev/shm, so using the tmpfs should be fine).
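For illustration, if Varnish did use POSIX shared memory as suggested, the setup might look roughly like this sketch; the object name and the vsl_size parameter are hypothetical, not Varnish's actual code:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

void *map_log(size_t vsl_size)
{
    /* The object lives on the tmpfs at /dev/shm, so dirty pages
     * have no disk file to be flushed to. */
    int fd = shm_open("/varnish_vsl", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, vsl_size) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, vsl_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? NULL : p;
}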
Varnish, an HTTP accelerator, uses an ~80MB file-backed SHM log that is mlock()ed into memory. The Varnish docs recommend storing the file on tmpfs to avoid unnecessary disk access. However, if the entire file is locked into memory, does the Linux kernel still write to the backing file? I tried to monitor this using inotify and fatrace; however, since this interaction presumably happens all inside the kernel, no file activity was visible to these tools. There is clearly some kind of update happening either to the file or the filesystem, as monitoring the backing file with ls showed the file time changing, and sha1sum showed the contents were changing, but does this actually involve disk access or is it all happening in memory? Basically I'm trying to avoid having to do the tmpfs workaround, as using SHM to back SHM seems like an ugly workaround for a problem that might not even exist.
File backed, locked shared memory and disk interaction
You can look at /proc/<pid>/maps and /proc/<pid>/smaps (or pmap -x <pid> if your OS supports it) for the process IDs of interest and compare the outputs to determine shared memory regions. That includes shared memory segments created via shmget calls, as well as any shared libraries and files. Edit: as mr.spuratic pointed out, his answer here has more details on the kernel side. You can look at a process's RSS using ps; however, it doesn't take all the shared pages into consideration. To see the RSS for specific processes:

cv@thunder:~$ ps -o rss,pid,comm -p $$,7023
  RSS   PID COMMAND
22060  7023 xfwm4
 6876 18094 bash

The smem tool provides more detailed information, taking shared pages into account. See the output below for the same two processes:

cv@thunder:~$ smem -t |egrep "RSS|$$|7023"
  PID User     Command                         Swap      USS      PSS      RSS
 9852 cv       grep -E RSS|18094|7023             0      340      367     2220
18094 cv       bash                               0     3472     4043     6876
 7023 cv       xfwm4 --display :0.0 --sm-c        0     5176     7027    22192

From man smem: smem reports physical memory usage, taking shared memory pages into account. Unshared memory is reported as the USS (Unique Set Size). Shared memory is divided evenly among the processes sharing that memory. The unshared memory (USS) plus a process's proportion of shared memory is reported as the PSS (Proportional Set Size). The USS and PSS only include physical memory usage. They do not include memory that has been swapped out to disk.
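A rough sketch in the same vein: print only the mappings flagged shared (the 's' in the permission column) for a given PID by parsing /proc/<pid>/maps, similar to what pmap reports. Run it as ./a.out <pid> for each process and compare the shared lines:

#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[512], perms[8];
    snprintf(path, sizeof path, "/proc/%s/maps",
             argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    while (fgets(line, sizeof line, f)) {
        /* format: start-end perms offset dev inode pathname */
        if (sscanf(line, "%*s %7s", perms) == 1 && perms[3] == 's')
            fputs(line, stdout);    /* 4th permission char 's' == shared */
    }
    fclose(f);
    return 0;
}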
I need to know the amount of memory shared between two processes, that is, the intersection of their shared memories. Any ideas?
How to know shared memory between two processes?
Shared memory is not always a protected resource; as such, many users can allocate shared memory. It is also not automatically returned to the memory pool when the process which allocated it dies. This can result in shared memory segments which remain allocated but unused, a memory leak that may not be obvious. By keeping shared memory limits low, most processes which use shared memory (in small amounts) can run, while the potential damage is limited. The only systems I have used which require large amounts of shared memory are database servers. These are usually administered by system administrators who are aware of the requirements. If not, the DBA usually is aware of the requirement and can ask for appropriate configuration changes. The database installation instructions usually specify how to calculate and set the appropriate limits. I have had databases die and leave large amounts of shared memory allocated but unused. This created problems for users of the system, and prevented restarting the database. Fortunately, there were tools which allowed the memory to be located and released.
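Such cleanup tools boil down to a shmctl(IPC_RMID) call. A hedged sketch of releasing a leftover segment by id, the programmatic equivalent of ipcrm -m <shmid>, where the id comes from ipcs -m output:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <shmid>\n", argv[0]);
        return 1;
    }
    int shmid = atoi(argv[1]);
    /* IPC_RMID marks the segment for destruction; it disappears
     * for good once the last attached process detaches. */
    if (shmctl(shmid, IPC_RMID, NULL) < 0) {
        perror("shmctl");
        return 1;
    }
    return 0;
}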
I run DB2 on Linux where I have to allocate the vast majority of memory on the machine to shared memory segments. This page is typical of the info that I've found about shmall/shmmax: http://www.pythian.com/news/245/the-mysterious-world-of-shmmax-and-shmall/ My system is running fine now, but I'm wondering if there's a historical or philosophical reason why shared memory is so low by default. In other words, why not let shmall default to the max physical memory on the machine? Or in other words, why should a typical admin need to be 'protected from himself' if an app happens to use a lot of shared memory, and have to go in and change these settings? The only thing I can think of is that it does let me set an upper bound to how much memory DB2 can use, but that's a special case.
Linux - why is kernel.shmall so low by default?
When you share a file descriptor over a socket, the kernel mediates. You need to prepare data using the cmsg(3) macros, send it using sendmsg(2) and receive it using recvmsg(2). The kernel is involved in the latter two operations, and it handles the conversion from a file descriptor to whatever data it needs to transmit the file descriptor, and making the file descriptor available in the receiving process. How can same fd in different processes point to the same file? provides useful background. The sending process sends a file descriptor which means something in relation to its (private) file descriptor table; the kernel knows what that maps to in the system-wide open file table, and creates a new entry as necessary in the receiving process’ file descriptor table.
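To make the mechanics concrete, here is a condensed sketch of the sending side using the cmsg(3) macros as documented in the man pages; error handling is omitted, and the receiving side mirrors this with recvmsg(2), reading the new descriptor out of CMSG_DATA:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

ssize_t send_fd(int sock, int fd)
{
    char dummy = '*';                       /* must send at least 1 byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof u.buf,
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           /* "this message carries an fd" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    /* The kernel translates our fd number into an entry in the
     * receiver's file descriptor table during recvmsg(). */
    return sendmsg(sock, &msg, 0);
}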
If file descriptors are specific to each process (i.e. two processes may use the same file descriptor number to refer to different open files), then how is it possible to transfer file descriptors (e.g. for shared mmaps) over sockets etc.? Does it rely on the kernel being mapped to the same numerical address range under each process?
Sharing file descriptors
Shared memory can be backed by a regular file, a block device, or swap. It depends on how the memory region was created. When multiple processes are using the same shared memory region, their individual virtual addresses will be pointing to the same physical address. Writes by one process become visible to others directly, without needing to pass through the disk file (if any) that the memory is attached to. If there is a file behind the shared memory, the kernel occasionally syncs changed pages from RAM to the file. The processes using the memory normally don't need to know when this happens, but if necessary, they can call msync to make it happen sooner. If there is no file behind the shared memory, the kernel may move pages to swap when it needs to free some RAM, just like it does with non-shared process memory. When they're swapped back in, they get a new physical address which is immediately available to all processes that have mapped the shared memory. There's one more thing that confused me the first time I looked at it: if you map a file with mmap, you have to use MAP_SHARED if you want to make changes and have them saved back to the file, even if there's only one process involved. At first I thought MAP_SHARED was the wrong name for this functionality, but on further reflection, you're "sharing" your modifications with other processes that access the file with read, or processes that will come along later and mmap it. So it makes sense, kind of.
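A minimal sketch of the file-backed case; "data.bin" is an arbitrary name. Two processes running this against the same file would see each other's writes directly in memory, and msync() only hurries along the write-back the kernel would do anyway:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    ftruncate(fd, 4096);
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);   /* MAP_PRIVATE would keep
                                            writes out of the file */
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(p, "visible to every process mapping this file");
    msync(p, 4096, MS_SYNC);   /* push the dirty page to disk now */
    munmap(p, 4096);
    close(fd);
    return 0;
}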
Are sharing a memory-mapped file and sharing a memory region implemented based on each other? The following two quotes seem to say so, and seem like a chicken-and-egg problem to me. Operating System Concepts introduces sharing a memory-mapped file as follows. Do the multiple processes share the same file by sharing the same physical memory region holding the content of the file?

Multiple processes may be allowed to map the same file concurrently, to allow sharing of data. Writes by any of the processes modify the data in virtual memory and can be seen by all others that map the same section of the file. Given our earlier discussions of virtual memory, it should be clear how the sharing of memory-mapped sections of memory is implemented: the virtual memory map of each sharing process points to the same page of physical memory—the page that holds a copy of the disk block. This memory sharing is illustrated in Figure 9.22.

It also introduces shared memory as follows. Do multiple processes share a memory region by sharing a memory-mapped file? Does a "memory-mapped file" reside on disk or in main memory? I think it is on the disk, but "The memory-mapped file serves as the region of shared memory between the communicating processes" seems to mean that it resides in main memory.

Quite often, shared memory is in fact implemented by memory mapping files. Under this scenario, processes can communicate using shared memory by having the communicating processes memory-map the same file into their virtual address spaces. The memory-mapped file serves as the region of shared memory between the communicating processes (Figure 9.23).

Thanks.
Are sharing a memory-mapped file and sharing a memory region implemented based on each other?
An embedded system may have a static /dev, rather than use udev to populate it. If you don't have /lib/udev, then presumably your system isn't running udev. In that case, you need to create /dev/shm on the root filesystem. If the root filesystem is an initramfs, rebuild your initramfs with an extra line in the initramfs description file:

dir /dev 755 0 0
dir /dev/shm 755 0 0
…

If the root filesystem is an on-disk filesystem, just create the directory:

# mkdir /dev/shm
I need to create /dev/shm on an embedded ARM system. From "Installed The Latest Changes to Current and......", I see that it can be created with mkdir /lib/udev/devices/shm, but I'm wondering what is supposed to be at that location? The only directory I have at that location is /lib/modules/; there's no devices/ or anything. So I went ahead and just created them, as empty directories. I then added:

tmpfs /dev/shm tmpfs defaults 0 0

to my /etc/fstab, and I didn't add an mtab entry, since I don't have an /etc/mtab. I then rebooted, and there's still no /dev/shm device. Any ideas how I get that device?

EDIT #1

Oh, and a mount -a (after a reboot) results in:

# mount -a
mount: mounting tmpfs on /dev/shm failed: No such file or directory
can not create /dev/shm
According to Brad Spengler the subject mode x applies to System V shared memory only, see http://forums.grsecurity.net/viewtopic.php?f=5&t=3935. On top of that PaX strikes unless MPROTECT is disabled for the binary under consideration.
I am conducting some research on Grsecurity on Hardened Gentoo, see http://en.wikibooks.org/wiki/Grsecurity. To be more specific, I am trying to find an example where subject mode x makes a difference. As said in the wiki: subject mode x: Allows executable anonymous shared memory for this subject. Now, the kernel rejects mem = mmap(NULL, MAP_SIZE, PROT_WRITE|PROT_EXEC, MAP_ANONYMOUS | MAP_SHARED, -1, 0);as well as mem = mmap(NULL, MAP_SIZE, PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0); mprotect(mem, MAP_SIZE, PROT_EXEC);or vice versa. On the other hand mem = mmap(NULL, MAP_SIZE, PROT_READ|PROT_EXEC, MAP_ANONYMOUS | MAP_SHARED, -1, 0);works fine. For all of the above it does not matter whether grsec is active or not, and if it is, it does not matter whether subject mode x is set or not - the kernel simply does not allow shared memory that is (or was) writable and executable. Therefore: what is subject mode x good for, and for what piece of code would it make a difference?
Grsecurity subject mode x
I believe that the really short answer is that Linux compilers arrange code into pieces, at least one of which is pure code, and that piece can therefore be memory-mapped into more than one process's address space. Any globals get mapped such that each process gets its own copy. You can see this using readelf or objdump, but readelf gives a clearer picture, I think. Here's a piece of output from readelf -e /usr/lib/libc.so.6. That's the C library, probably mapped into almost every process. The relevant part of the readelf output (although all of it is interesting) is the Program Headers:

Program Headers:
  Type           Offset   VirtAddr   PhysAddr   FileSiz  MemSiz   Flg Align
  PHDR           0x000034 0x00000034 0x00000034 0x00140  0x00140  R E 0x4
  INTERP         0x164668 0x00164668 0x00164668 0x00017  0x00017  R   0x4
      [Requesting program interpreter: /usr/lib/ld-linux.so.2]
  LOAD           0x000000 0x00000000 0x00000000 0x1adfc4 0x1adfc4 R E 0x1000
  LOAD           0x1ae220 0x001af220 0x001af220 0x02c94  0x057c4  RW  0x1000
  DYNAMIC        0x1afd90 0x001b0d90 0x001b0d90 0x000f8  0x000f8  RW  0x4
  NOTE           0x000174 0x00000174 0x00000174 0x00044  0x00044  R   0x4
  TLS            0x1ae220 0x001af220 0x001af220 0x00008  0x00048  R   0x4
  GNU_EH_FRAME   0x164680 0x00164680 0x00164680 0x06124  0x06124  R   0x4
  GNU_STACK      0x000000 0x00000000 0x00000000 0x00000  0x00000  RW  0x10
  GNU_RELRO      0x1ae220 0x001af220 0x001af220 0x01de0  0x01de0  R   0x1

The two LOAD lines are the only pieces of the file that get mapped directly into memory. The first LOAD header maps a piece of /usr/lib/libc.so.6 into memory with R and E permissions: read and execute. That's the code. Hardware features keep a program from writing to that piece of memory, so all programs can share the same pages of real, physical memory. The kernel can set up the hardware to map the same physical memory into all processes. The second LOAD header is marked RW - read and write. This is the part with global variables that the C library uses. Each process gets its own copy in physical memory, mapped into that process's address space with the hardware permissions set to allow reading and writing. That section is not shared. You can see these memory mappings in a running process using the /proc file system. A good command to illustrate: cat /proc/self/maps. That lists all the memory mappings that the cat process has, and from what files the kernel got them. As far as how much you have to do to ensure that your function is allocated to memory that gets mapped into different processes, it's pretty much all down to the flags you give to the compiler. Code intended for ".so" shared libraries is compiled "position independent". Position independent code does things like refer to memory locations of variables with offsets relative to the current instruction, and jumps or branches to locations relative to the current instruction, rather than loading from or writing to absolute addresses, and jumping to absolute addresses. That means the "RE" LOAD piece of /usr/lib/libc.so and the "RW" piece only have to be loaded at addresses that are the same distance apart in each process. In your example code, the static variable will always be at least a multiple of a page size apart from the code that references it, and it will always get loaded that distance apart in a process's address space due to the way the LOAD ELF headers are given. Just a note about the term "shared memory": there's a user-level shared memory system, associated with the "System V interprocess communication system".
It's fairly complicated and obscure to set up and get correct. The shared memory that we're talking about here is more-or-less completely invisible to any user process. Your example code won't know the difference if it's running as position independent code shared between multiple processes, or if it's the only copy ever.
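A minimal sketch of the "each process gets its own copy" behaviour for the RW segment: fork() gives each side a private, copy-on-write view of the static counter. This is an analogy for the per-process data segment, not the dynamic loader itself:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter;           /* lives in the RW (data) segment */

int main(void)
{
    counter = 100;
    pid_t pid = fork();
    counter++;                /* copy-on-write: private to each side */
    if (pid == 0) {
        printf("child:  counter = %d\n", counter);   /* prints 101 */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent: counter = %d\n", counter);       /* 101, not 102 */
    return 0;
}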
I found this Q&A saying shared libraries can be shared between processes using shared memory. It seems like it would be impossible, though, to share code between processes without some pretty severe restrictions on the type of code that can be shared. I'm thinking about libraries with non-reentrant C functions whose output depends on the values of global variables or static variables inside their definition body. Like this one. int really_really_nonreentrant(void x) { static int i = 0; i++; return i; }A library with a function like this will return separate increasing sequences for each process using it, so it definitely seems like the code isn't being shared between processes. Is really_really_nonreentrant() being separated from the reentrant functions, or is it being kept mostly with the other functions with just static int i being separated out? Or is the entire library being kept out of shared memory because this function is nonreentrant? Ultimately, how much does one have to do to ensure one's library gets allocated to shared memory?
non-reentrant libraries in shared memory?
By system clock I mean the clock that tells the time down right of the panel

"System clock" generally refers to the clock maintained by the kernel; applications such as date and GUI clocks such as the one you refer to make calls to it like this.

Why, out of all the processes that the system runs, does the clock need a shared memory segment?

There are probably dozens of different GUI and DE based clocks available for Linux, so there's no way to say specifically. Your observation implies the clock involves multiple processes, which is certainly not necessary for a GUI clock, but if it is integrated with the desktop, who knows -- it could also possess some functionality you haven't discovered yet. You have a lot of choices, IPC-wise, when programming. What method you use depends on the exact requirements but also perhaps some personal preference. I'm more a sockets n' serialization kinda guy, but shared mem is very popular; when I run ipcs -a I get a few dozen entries under "Shared Memory Segments". Interestingly, if I run it on a headless system I get none, so presumably those are all related to GUI applications. GLib and D-Bus may have facilities built on shared mem used by such programs.
This is a combination of a programming and a Linux question, but I think it suits better here. I am writing an application that works with IPC (shared memory segments), and after each run I check whether any segments are left over, using the bash command ipcs. I noticed a lot more than I created, so I thought they were part of the system software. I decided to examine each one and see what it is connected to. After closing the process each one is attached to, I noticed that one of the processes attached to a shared memory segment is the system clock. By system clock I mean the clock that tells the time down right of the panel (or up, depending on how you set things up), and not the CPU clock. Why, out of all the processes that the system runs, does the clock need a shared memory segment?
Why does the clock need a shared memory segment?
When a process dies, its memory is reclaimed by the operating system. It's marked as free, and will be allocated to other processes sooner or later when other processes require memory. The memory is always wiped before being allocated to a process. It doesn't matter that there's been memory corruption in the process. The concept of memory corruption is in the context of the execution of the process —it means that the content of the memory is not what the programmer intended. When the process is dead, this concept is no longer meaningful. The same goes for a memory leak: all the memory of the process is reclaimed when it exits. Shared memory is an exception to this because it doesn't belong to any single process. When a process exits, all that gets reclaimed is the process's handle on the shared memory; the shared memory itself remains until it's explicitly removed. Think of a shared memory object as a file that lives purely in memory and isn't attached to the filesystem. It's like a temporary file without a name. A process that uses shared memory should clean it up before exiting. Preferably, if a process uses shared memory, it should be run by a supervisor process, and the supervisor should clean up resources such as shared memory and temporary files if the main process crashes.
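A minimal sketch of that supervisor pattern; the object name /myapp_shm is made up, and on older glibc you would link with -lrt:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_worker(void)
{
    /* The worker creates the object, then "crashes" before
     * getting a chance to clean up after itself. */
    int fd = shm_open("/myapp_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    abort();
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        run_worker();          /* never returns */
    waitpid(pid, NULL, 0);
    /* This runs whether the worker exited cleanly or was killed;
     * without it, /dev/shm/myapp_shm would outlive both processes. */
    shm_unlink("/myapp_shm");
    return 0;
}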
There are many questions on Stack Overflow asking about how a system handles memory leaks and what happens on abnormal termination. Examples: https://stackoverflow.com/questions/6727383/dynamically-allocated-memory-after-program-termination https://stackoverflow.com/questions/10223677/when-a-program-terminates-what-happens-to-the-memory-allocated-using-malloc-that https://stackoverflow.com/questions/2975831/is-leaked-memory-freed-up-when-the-program-exits However, I could not find any posts asking the same about memory corruption. Is handling of memory leaks and memory corruption by the Linux kernel the same? When the process exits, are the corrupted segments of memory freed and reclaimed, and are they safe to use by other processes? Also, what about processes using POSIX shared memory (/dev/shm)? From my understanding it seems that shared memory does not get reclaimed by the system unless it is deleted by shm_unlink. (http://man7.org/linux/man-pages/man7/shm_overview.7.html) Does this mean that if shared memory segment somehow gets corrupted then the user is basically screwed until they reboot the system? Or will kernel clear the shared memory by shm_unlink automatically on user logout (without rebooting) after all user processes get killed? Thanks!
How is memory corruption handled by Linux when the process terminates?