output | input | instruction
---|---|---
SCADA describes "Supervisory Control and Data Acquisition" systems - they are often real-time, but don't have to be. There is usually a component that is real-time (e.g. for logging, or managing pressures in machinery - i.e. essential heartbeat stuff), but this isn't absolutely necessary.
Many are still legacy code with a TCP/IP front end tacked on, and for some of these the front end is a very basic web server.
From an off-topic security perspective, there is a large-scale problem with connecting legacy SCADA apps to the Internet without hardening them appropriately - as this leads to logical attacks on real-world systems (such as oil pipelines, power stations, etc.).
|
Is SCADA one of the RTOSes out there (it's used for real-time control and data acquisition)?
| Is SCADA an RTOS? |
If you need to check the full list of mount points, use getmntent(3) or its thread-safe GNU extension getmntent_r(3).
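For the full-list case, a minimal sketch iterating the mount table with getmntent (using /proc/self/mounts, the kernel's own view of current mounts):

#include <mntent.h>
#include <stdio.h>

int main(void)
{
    /* /proc/self/mounts is the kernel's own view; /etc/mtab is
       typically a symlink to it on modern systems. */
    FILE *fp = setmntent("/proc/self/mounts", "r");
    if (fp == NULL) {
        perror("setmntent");
        return 1;
    }

    struct mntent *ent;
    while ((ent = getmntent(fp)) != NULL) {
        /* mnt_fsname = device, mnt_dir = mount point, mnt_type = fs type */
        printf("%s on %s type %s\n",
               ent->mnt_fsname, ent->mnt_dir, ent->mnt_type);
    }

    endmntent(fp);
    return 0;
}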
If you just want to quickly check whether a given directory has a filesystem mounted on it or not, then use one of the functions in the stat(2) family. For example, if you want to check if /mnt has a filesystem mounted or not, you could do something like this:
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

struct stat mountpoint;
struct stat parent;

/* Get the stat structure of the directory... */
if (stat("/mnt", &mountpoint) == -1) {
    perror("failed to stat mountpoint");
    exit(EXIT_FAILURE);
}

/* ... and its parent. */
if (stat("/mnt/..", &parent) == -1) {
    perror("failed to stat parent");
    exit(EXIT_FAILURE);
}

/* Compare the st_dev fields in the results: if they are
   equal, then both the directory and its parent belong
   to the same filesystem, and so the directory is not
   currently a mount point. */
if (mountpoint.st_dev == parent.st_dev) {
    printf("No, there is nothing mounted in that directory.\n");
} else {
    printf("Yes, there is currently a filesystem mounted.\n");
}
|
I would like to check if a USB disk is mounted in a C application. I know that in a script I can accomplish this via mount | grep /mnt (the mount point where udev mounts the USB drive), but I need to do this in a C application. Earlier I used to accomplish this using system("sh script.sh"), but doing so is causing some serious issues as this code runs in a very time-critical thread.
| Detect if USB disk is mounted in C application in Linux |
There is an old-school console tool:

nethogs - Net top tool grouping bandwidth per process.

e.g. run it in this manner:

# nethogs eth0

NetHogs version 0.8.0

  PID USER     PROGRAM           DEV   SENT      RECEIVED
11173 user     rtorrent          eth0  111.001   4.358 KB/sec
13159 user     rtorrent          eth0  125.673   3.734 KB/sec
 9737 user     irssi             eth0  0.027     0.1
 9687 user     chromium-browser  eth0  0.000     0.000 KB/sec

You can browse the developer's site for more information and more such tools.
Now you can grab the source, make your own fork, and develop a GUI of your own.
Appending socket info with fidelity near the bandwidth figures shouldn't be a big job.
|
Is there a GUI to track any socket connection sent to this computer and which program that initiates it?
Also if possible track any incoming connection sent to this computer and which program that handles it (as a realtime popup indicator if possible) ?
For example:
"/bin/x owned by user x tries to connect to x.x.x.x:x"
"x.x.x.x connected to your computer on port 80 handled by /usr/bin/apache"or at minimum, what should I learn to create this kind of software?
| Linux GUI to track connections made from/to this computer |
You can use perf; for example,
perf stat -e context-switches,cpl_cycles.ring0,cpl_cycles.ring123 your_command

will produce a summary similar to

 Performance counter stats for 'your_command':

                 1      context-switches
        11,890,096      cpl_cycles.ring0
         9,980,265      cpl_cycles.ring123

       0.011218937 seconds time elapsed

       0.007533000 seconds user
       0.003766000 seconds sys

which shows that there was one context switch (to another process, not the kernel) during your_command's execution, and the CPU spent 54% of its time running kernel code.
Ensuring that a given process gets as much of the CPU’s attention as possible can get quite complicated. Victor Stinner’s benchmark setup documentation provides a good overview of the problems, and techniques to mitigate them; his write-up is focused on benchmarking but much of it is applicable in other circumstances.
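If you also want to read the counters from inside the process itself, the kernel tracks voluntary and involuntary context switches per process (the same numbers appear in /proc/<pid>/status); a minimal sketch using getrusage:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    /* ... the workload you care about would run here ... */

    if (getrusage(RUSAGE_SELF, &ru) == -1) {
        perror("getrusage");
        return 1;
    }

    /* ru_nvcsw:  voluntary context switches (blocked, yielded, ...)
       ru_nivcsw: involuntary context switches (preempted). */
    printf("voluntary:   %ld\n", ru.ru_nvcsw);
    printf("involuntary: %ld\n", ru.ru_nivcsw);
    return 0;
}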
|
For the purpose of profiling a program I would like to run it uninterrupted on one CPU.
To do this I use a combination of taskset and chrt:
# taskset -c 1 chrt -f 99 ./my_program

Now my question is if there is a tool that lets me check if/how often the process is nevertheless interrupted by context switches to the kernel.
| How to check if/how often my process is preempted by the kernel? |
You probably wanted to take patch-4.4.12-rt20.patch.xz, not patches-4.4.12-rt20.tar.xz. As the extension hints, the latter is a tar archive, not a single patch file. Apparently it contains the same patches as the single-file version, but with commit messages etc.
patch is smart enough to ignore useless stuff (like the tar file structure, apparently), so some of the patches work. But I suppose the component patches might depend on each other, and be in the wrong order in the tar file, so it doesn't apply cleanly.
|
I'm trying to install a kernel with the RT_PREEMPT patch on a Lubuntu 16.04 distro and running into some issues I'm not sure how to deal with. I've downloaded the sources for kernel v4.4.12 (linux-4.4.12.tar.xz) and what I believe to be the appropriate RT_PREEMPT patch (patches-4.4.12-rt20.tar.xz), both from kernel.org. I've extracted the kernel sources with tar xf, cd'd into the directory, then I try to apply the patch with xzcat ../patches-4.4.12.tar.xz | patch -p1 (per recommendations here: https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO). This command just generates a slew of errors complaining about patches for files that don't exist, previously applied patches, failed hunks, etc. Some of the patch hunks seem to succeed but so many of them fail.
This can't be the correct means to patch this kernel can it? Any idea where I'm going wrong?
EDIT: Here's a sample that covers the kinds of errors I'm seeing:
rush@lubuntuvm:~/preempt-rt/linux-4.4.12$ xzcat ../patches-4.4.12-rt20.tar.xz | patch -p1
patching file arch/x86/kernel/nmi.c
Hunk #1 FAILED at 231.
Hunk #2 FAILED at 256.
Hunk #3 FAILED at 305.
3 out of 3 hunks FAILED -- saving rejects to file arch/x86/kernel/nmi.c.rej
patching file arch/x86/kernel/reboot.c
patching file include/linux/kernel.h
Hunk #1 succeeded at 255 (offset -4 lines).
Hunk #2 FAILED at 460.
1 out of 2 hunks FAILED -- saving rejects to file include/linux/kernel.h.rej
patching file kernel/panic.c
Hunk #1 FAILED at 61.
1 out of 1 hunk FAILED -- saving rejects to file kernel/panic.c.rej
patching file kernel/watchdog.c
Hunk #1 FAILED at 361.
1 out of 1 hunk FAILED -- saving rejects to file kernel/watchdog.c.rej
patching file kernel/stop_machine.c
Hunk #12 succeeded at 482 (offset -10 lines).
Hunk #13 succeeded at 544 (offset -10 lines).
Hunk #14 succeeded at 648 (offset -10 lines).
patching file block/blk-mq.c
Reversed (or previously applied) patch detected! Assume -R? [n] n
Apply anyway? [n]
Skipping patch.
3 out of 3 hunks ignored -- saving rejects to file block/blk-mq.c.rej
patching file block/blk-mq.h
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
Skipping patch.
3 out of 3 hunks ignored -- saving rejects to file block/blk-mq.h.rej
patching file net/core/dev.c
Hunk #1 succeeded at 3542 (offset -3 lines).
Hunk #2 succeeded at 3552 (offset -3 lines).
patching file arch/arm64/Kconfig
patching file arch/arm64/include/asm/thread_info.h
patching file arch/arm64/kernel/asm-offsets.c
patching file arch/arm64/kernel/entry.S
can't find file to patch at input line 794
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|--
|2.8.1
|
|patches/0026-hwlat-detector-Use-trace_clock_local-if-available.patch0000644001303100130310000000625512741715155025466 0ustar rostedtrostedtFrom c184dd4a4a5d88b3223704297a42d1aaab973811 Mon Sep 17 00:00:00 2001
|From: Steven Rostedt <[emailprotected]>
|Date: Mon, 19 Aug 2013 17:33:26 -0400
|Subject: [PATCH 026/351] hwlat-detector: Use trace_clock_local if available
|
|As ktime_get() calls into the timing code which does a read_seq(), it
|may be affected by other CPUS that touch that lock. To remove this
|dependency, use the trace_clock_local() which is already exported
|for module use. If CONFIG_TRACING is enabled, use that as the clock,
|otherwise use ktime_get().
|
|Signed-off-by: Steven Rostedt <[emailprotected]>
|Signed-off-by: Sebastian Andrzej Siewior <[emailprotected]>
|---
| drivers/misc/hwlat_detector.c | 34 +++++++++++++++++++++++++---------
| 1 file changed, 25 insertions(+), 9 deletions(-)
|
|diff --git a/drivers/misc/hwlat_detector.c b/drivers/misc/hwlat_detector.c
|index c07e85932cbf..0fcc0e38df42 100644
|--- a/drivers/misc/hwlat_detector.c
|+++ b/drivers/misc/hwlat_detector.c
| Applying RT_PREEMPT |
Is it possible that a higher priority task or thread is interfering with your USB cameras / ports?
By default on PREEMPT_RT (or when using threaded interrupts on mainline linux) all IRQ threads will be run at 50 prio with SCHED_FIFO. So unless you've set these threads/tasks of yours to a higher priority, it's very possible that they are getting preempted by something else.
For example, linux proaudio users will always set their audio interface to have the highest priority on the system to avoid it getting preempted or interrupted by other tasks/threads... you will want to do something similar for your cameras and important tasks.
Another possibility is that you have shared interrupts on one or some of your USB ports - this could cause intermittent drops as well. You should be able to tell by viewing the interrupts in procfs. The fact that you are getting drops on 3 cameras, but not on one camera, makes me think something is shared / getting poked in the background...
Beyond that, you could use ftrace to get a better look at what is going on and which functions are causing the delay / could be the culprit. Possibly latencytop might also give some hints, if you see something really out of place.
EDIT:
and these "uvcvideo: Marking buffer as bad (error bit set)" messages look suspect. -- It's possible you need to set some appropriate values for your cameras, as noted here;
https://stackoverflow.com/questions/17155738/uvcvideo-marking-buffer-as-bad-error-bit-set
Failing that, I found a bug report here; https://bugzilla.kernel.org/show_bug.cgi?id=207045 that has a linked patch that is supposed to fix this issue...
It still applies over linux-5.16.2; https://lore.kernel.org/lkml/[emailprotected]/
It might be helpful, if your hardware has this issue. You never know.
|
We have a problem of losing about one frame every 60 seconds or so with four USB cameras hooked up to Ubuntu 20.04 with the Realtime Linux patches applied. From the user code ioctl(VIDIOC_DQBUF) call level we see that v4l2_buffer.sequence skips a buffer, but with no error reported. What makes it odd is that one camera doesn't skip, but three do, even though they are all on separate USB ports.
Looking at the kernel debug information we see info like this:
Jan 21 08:48:52 kernel: [ 612.290354] uvcvideo: frame 1955 stats: 0/151/151 packets, 0/0/151 pts (!early initial), 0/151 scr, last pts/stc/sof 0/0/0
Jan 21 08:48:52 kernel: [ 612.291017] uvcvideo: frame 1940 stats: 0/151/151 packets, 0/0/151 pts (!early initial), 0/151 scr, last pts/stc/sof 0/0/0
Jan 21 08:48:52 kernel: [ 612.294264] uvcvideo: frame 1956 stats: 0/9/9 packets, 0/0/9 pts (!early initial), 0/9 scr, last pts/stc/sof 0/0/0
Jan 21 08:48:52 kernel: [ 612.294269] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.294270] uvcvideo: Frame complete (FID bit toggled).
Jan 21 08:48:52 kernel: [ 612.294270] uvcvideo: frame 1957 stats: 0/1/1 packets, 0/0/0 pts (!early !initial), 0/1 scr, last pts/stc/sof 0/1217480818/18547
Jan 21 08:48:52 kernel: [ 612.294272] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.294678] uvcvideo: frame 1958 stats: 0/2/2 packets, 0/0/0 pts (!early !initial), 0/1 scr, last pts/stc/sof 0/1217480818/18547
Jan 21 08:48:52 kernel: [ 612.294682] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.294682] uvcvideo: Frame complete (FID bit toggled).
Jan 21 08:48:52 kernel: [ 612.294683] uvcvideo: frame 1959 stats: 0/1/1 packets, 0/0/0 pts (!early !initial), 0/1 scr, last pts/stc/sof 0/1267616628/19316
Jan 21 08:48:52 kernel: [ 612.294685] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.294686] uvcvideo: Frame complete (EOF found).
Jan 21 08:48:52 kernel: [ 612.294888] uvcvideo: Dropping payload (out of sync).
Jan 21 08:48:52 kernel: [ 612.295094] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.295094] uvcvideo: Dropping payload (out of sync).
Jan 21 08:48:52 kernel: [ 612.295299] uvcvideo: Dropping payload (out of sync).
Jan 21 08:48:52 kernel: [ 612.295509] uvcvideo: Marking buffer as bad (error bit set).
Jan 21 08:48:52 kernel: [ 612.295510] uvcvideo: Dropping payload (out of sync).
Jan 21 08:48:52 kernel: [ 612.295715] uvcvideo: frame 1960 stats: 0/5/5 packets, 2/4/3 pts (!early !initial), 2/3 scr, last pts/stc/sof 1284525428/1284525171/19827

Looking at the source code, Frame complete (FID bit toggled) means that the USB driver hasn't sent the complete frame (otherwise we would get an (EOF found) message), which is backed up by the log showing 0/2/2 packets instead of 0/151/151.
How do I proceed with the debugging now? I find it hard to believe that the USB driver is buggy, but is there some not quite RTLinux-ready component in the stack?
| Why would USB video be dropping frames in Realtime Linux? |
You want strace(1) for that; it lists all the system calls made. See the manual page for details on various ways to present the trace data.
You might also find ltrace(1) useful if you want inter-library calls rather than system calls traced.
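For example, to attach to a running process and log only file-related system calls with timestamps (1234 is a placeholder PID):

strace -f -tt -e trace=%file -p 1234

ltrace accepts -p similarly for library calls.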
|
I cannot remember this command (and googling was unsuccessful), but there is a way to get the list of actions performed by a process, that outputs something like
# listprocessactions -p 1234
0.321 Open "A" /var/log/nginx/supersite.log
0.322 Write to /var/log/nginx/supersite.log
0.401 Close /var/log/nginx/supersite.log
0.555 Opens TCP connection with slashdot.org
...I'm interested in the files aspect (open / RW files).
The question is what is that command (and if possible in which package on deb / ubuntu)
| Command to list in real time all the actions of a process |
System Management Mode is not the only thing that makes x86 bad at hard real time. The unpredictability of the execution speed due to caches, pipelines and so on makes x86, and any other high-end processor, bad at real time. All these features that make a processor fast on average also make the worst case difficult to manage.
The current generation of ARM chips is divided into three series: Cortex-A for high-end microprocessors (the closest thing to x86), Cortex-R for real-time applications, and Cortex-M for a microcontroller profile. The Cortex-R does not have an MMU (some have an MPU) but may have a cache. It is used in many real-time applications (ARM tries to compete with DSPs, fairly successfully).
The ARM architecture itself does not define anything like SMM. It's possible that chip manufacturers add something like it, you'd have to look at the manufacturer's documentation.
|
I wish to install Xenomai which works on Linux providing a kind of hard real time environment.
x86/64 architectures are supposed to contain the "System Management Mode" which prevents them from being used for hard real time systems.
By "System Management Mode" I mean this: http://en.wikipedia.org/wiki/System_Management_Mode
Do embedded boards like ARM also have this "System Management Mode"?
Answers with references will be appreciated.
| System Management Mode in embedded systems |
For tty devices, you must use tcdrain() on the file descriptor.
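A minimal sketch of the pattern, reusing the setup from the question (GPIO toggling elided); tcdrain() blocks until everything written to the descriptor has actually been transmitted:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    char buf[3] = {'U', 'U', 'U'};

    int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* ... set the GPIO high here ... */

    if (write(fd, buf, sizeof(buf)) == -1)
        perror("write");

    /* Block until the UART has actually shifted the bytes out. */
    if (tcdrain(fd) == -1)
        perror("tcdrain");

    /* ... set the GPIO low here ... */

    close(fd);
    return 0;
}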
|
I need to synchronize an IO pin value with a write to a serial port from user space (because I wasn't yet able to do it from kernel space - see my other question). My code (leaving out error checking) is as follows:
char buf[3] = {'U','U','U'};
int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY); // supposed to be blocking
// fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK); <-- makes no difference
FILE *f = fopen("/sys/class/gpio/gpio200/value", "w"); // the relevant IO

// set IO
fprintf(f, "1");
fflush(f);
// send data
write(fd, buf, sizeof(buf));
// unset IO
fprintf(f, "0");
fflush(f);

The behavior is that the IO is quickly toggled to 1 and back at the start of the write. In other words, write() returns long before the data has actually been put on the wire.
Is there a hope here?
| Knowing when a write() on a serial port has finished transmitting data |
No, neither eCos nor FreeRTOS is Hurd-based. They are different operating systems.
eCos and FreeRTOS are realtime operating systems and have nothing to do with Hurd. Don't try to arbitrarily lump different kinds of operating systems together.
Plan9 is also considered a successor of Unix, and is, as far as I know, not considered a Unix system, but it has some POSIX support.
|
Hurd is actually neither Unix nor Linux, though some say it is superior. Plan9 and Linux are in the Unix/Linux range. eCos and FreeRTOS are also completely not Unix/Linux.
What are the main differences between Hurd and FreeRTOS/eCos and general Unix/Linux?
Can I place FreeRTOS or eCos in the Hurd family tree, as those OSes are considered to be neither Linux nor Unix?
| Hurd vs Plan9 vs Linux vs eCos vs FreeRTOS: what are the main differences, especially with Hurd? |
Looking through the file print.c, I found the following snippet in the "print" function:
if (use_syslog) {
syslog(level, "[%ld.%03ld] %s%s%s",
ts.tv_sec, ts.tv_nsec / 1000000,
message_tag ? message_tag : "", message_tag ? " " : "",
buf);
}

print.h defines more macros using this function. Some seem unused, and others relate to errors not present in your example, but the macro pr_info is called at one point in clock.c in a way that could account for those logs:
if (!stats_get_result(s->delay, &delay_stats)) {
pr_info("rms %4.0f max %4.0f "
"freq %+6.0f +/- %3.0f "
"delay %5.0f +/- %3.0f",
offset_stats.rms, offset_stats.max_abs,
freq_stats.mean, freq_stats.stddev,
delay_stats.mean, delay_stats.stddev);
} else {
pr_info("rms %4.0f max %4.0f "
"freq %+6.0f +/- %3.0f",
offset_stats.rms, offset_stats.max_abs,
freq_stats.mean, freq_stats.stddev);
}

I don't know much about PTP4L, but hopefully those variable names put you on the right track. If you want to explore further, here is the github repository.
|
I'm using the ptp4l and phc4sys services to syncronize the clocks of my Centos 7.4 servers to a central PTP source. The services regularly write syslog records like the one below.
I haven't found any documentation explaining what each field here means, and what the units are. I'd appreciate any leads!

Nov 14 17:07:26 stg1 ptp4l: [718277.895] rms 74 max 99 freq +8760 +/- 84 delay 535 +/- 0
Nov 14 17:07:27 stg1 phc2sys: [718278.105] phc offset -62 s2 freq +14460 delay 2117
| Interpretation of PTP4L and PHC2SYS syslog records |
So is the kernel clock rate adaption needed at all?

Not at all, and this has been the case for a long time: netem has used high-resolution timers for several years and is independent of the HZ value.
The granularity of high-resolution timers does not depend on the Linux kernel timer frequency (CONFIG_HZ). They are only limited by the clock source chosen as reference and can reach nanosecond resolution.
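You can verify this on your own system: a sleep far shorter than any tick completes with roughly the requested duration when high-resolution timers are active. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    /* 0.5 ms - well below a 100 Hz (10 ms) or even 1000 Hz (1 ms) tick. */
    struct timespec req = { .tv_sec = 0, .tv_nsec = 500000 };

    clock_gettime(CLOCK_MONOTONIC, &start);
    clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                    + (end.tv_nsec - start.tv_nsec);
    printf("requested 500000 ns, slept %ld ns\n", elapsed_ns);
    return 0;
}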
|
I want to use network emulation (netem) on a PREEMPT-RT kernel to emulate latency and jitter down to 0.5 ms +- 10 %.
Thus, I initially thought that I have to adapt the internal kernel clock rate to at least 2000 Hz as this means I can add time-deterministic delays of 0.5 ms.
However, it seems that with lower kernel clock rates, it should work fine too (measured with experiments). After some research, I thought it could be due to a tickless setting or dynamic tick in the kernel config, but basically I am now rather confused how kernel clock rate works and why it is important.
So is the kernel clock rate adaption needed at all and do I have a comprehension problem how the clock rates actually work?
Thanks for your help :)
| How does Kernel Clock Rate matter in network emulation (by netem)? |
I raised it on kernel.org, and then got a response that apparently it was intended to behave that way.
https://lore.kernel.org/linux-rt-users/[emailprotected]/
This basically meant that when we use the 5.9.1 version with the arm64 architecture, we need to disable KVM, and then the Fully Preemptible option comes up immediately. I was able to test it successfully.
|
I am trying to get my own custom real-time Linux on a Raspberry Pi 4B. My status is this:

I built the Linux 5.9.1 version, and have my own version of U-Boot and RFS, with which I am able to successfully load and start the kernel, mount the RFS, and reach the kernel console.

I need to apply the real-time patch on top of the Linux kernel that I am building, so I used the corresponding patch for Linux 5.9.1. Since I am building a 64-bit kernel, I use the following command to get into the kernel config and update the preemption option:

make ARCH=arm64 CROSS_COMPILE=aarch64-rpi3-linux-gnu- menuconfig

But I do not see the fully preemptible kernel option here:
.config - Linux/arm64 5.9.1 Kernel Configuration

General setup ───────────────────────────────────────────────────────────────── ┌────────────────────── Preemption Model ───────────────────────┐
│ Use the arrow keys to navigate this window or press the │
│ hotkey of the item you wish to select followed by the <SPACE │
│ BAR>. Press <?> for additional information about this │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ ( ) No Forced Preemption (Server) │ │
│ │ ( ) Voluntary Kernel Preemption (Desktop) │ │
│ │ (X) Preemptible Kernel (Low-Latency Desktop) │ │
│ │ │ │
│ │ │ │
│ │ │ │
│ └───────────────────────────────────────────────────────────┘ │
├───────────────────────────────────────────────────────────────┤
│                    <Select>                      < Help >                     │

When I run:

make menuconfig

I do see that option, though, for x86:

.config - Linux/x86 5.9.1 Kernel Configuration

General setup ───────────────────────────────────────────────────────────────── ┌────────────────────── Preemption Model ───────────────────────┐
│ Use the arrow keys to navigate this window or press the │
│ hotkey of the item you wish to select followed by the <SPACE │
│ BAR>. Press <?> for additional information about this │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ ( ) No Forced Preemption (Server) │ │
│ │ ( ) Voluntary Kernel Preemption (Desktop) │ │
│ │ (X) Preemptible Kernel (Low-Latency Desktop) │ │
│ │ ( ) Fully Preemptible Kernel (Real-Time) │ │
│ │ │ │
│ │                                                           │ │

Linux Kernel: 5.9.1
Linux RT patch used: patch-5.9.1-rt19.patch.xz

I have enabled the expert mode also, as instructed in another post on unix.stackexchange:

.config - Linux/x86 5.9.1 Kernel Configuration

General setup ─────────────────────────────────────────────────────────────────
┌────────────────────────────── General setup ───────────────────────────────┐
│  Arrow keys navigate the menu.  <Enter> selects submenus ---> (or empty     │
│  submenus ----).  Highlighted letters are hotkeys.  Pressing <Y> includes,  │
│  <N> excludes, <M> modularizes features.  Press <Esc><Esc> to exit, <?>     │
│  for Help, </> for Search.  Legend: [*] built-in  [ ] excluded  <M> module  │
│ ┌────^(-)────────────────────────────────────────────────────────────────┐ │
│ │    [*] Support initial ramdisk/ramfs compressed using LZMA             │ │
│ │    [*] Support initial ramdisk/ramfs compressed using XZ               │ │
│ │    [*] Support initial ramdisk/ramfs compressed using LZO              │ │
│ │    [*] Support initial ramdisk/ramfs compressed using LZ4              │ │
│ │    [*] Support initial ramdisk/ramfs compressed using ZSTD             │ │
│ │    [ ] Boot config support                                             │ │
│ │        Compiler optimization level (Optimize for performance (-O2)) --│ │
│ │    -*- Configure standard kernel features (expert users) --->          │ │
│ │    -*- Enable membarrier() system call                                 │ │
│ │    -*- Load all symbols for debugging/ksymoops                         │ │
│ │    -*- Include all symbols in kallsyms                                 │ │
│ │    [*] Enable bpf() system call                                        │ │
│ │    [ ] Enable userfaultfd() system call                                │ │
│ │    [*] Enable rseq() system call                                       │ │
│ │    [ ] Enabled debugging of rseq() system call                         │ │
│ │ [*] Embedded system │ │
│ │ [ ] PC/104 support │ │
│ │        Kernel Performance Events And Counters  --->                    │ │

I see that this problem does not happen with the previous RT patch that was released for Linux 5.6.19. Is there something missing for the 64-bit case from my side?
| Real-time patch on Linux 5.9.1 does not show fully-preemptible option for arm64 option |
SCHED_FIFO and SCHED_RR are supported on the standard Linux kernel, the PREEMPT_RT patches aren’t required. See the sched(7) manpage for details of the kernel’s scheduling policies.
The PREEMPT_RT patches reduce the kernel’s latency by enabling preemption in even more places than the mainline kernel currently supports: critical sections, interrupt handlers, sections which run with interrupts disabled... This helps with hard real-time workloads since there’s less chance that an uninterruptible section will delay a real-time event.
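Your chrt command is therefore sufficient on a stock kernel. For completeness, the same request can be made from inside a program via the API that chrt wraps; a minimal C sketch (the priority value is illustrative, and root/CAP_SYS_NICE is required):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Ask for SCHED_RR at priority 50 for the calling process
       (pid 0 = self). Needs root or CAP_SYS_NICE. */
    struct sched_param sp = { .sched_priority = 50 };

    if (sched_setscheduler(0, SCHED_RR, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    printf("running under SCHED_RR, priority %d\n", sp.sched_priority);
    /* ... latency-sensitive work here ... */
    return 0;
}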
|
I am running a python program on the raspberry Pi (Raspbian) that I would like to give higher priority. I want to run the following command :
$ sudo chrt --rr 50 python3 loopExample.py

I have read about people using the "PREEMPT_RT patch". Is this needed to use SCHED_FIFO / SCHED_RR effectively?
| SCHED_RR and SCHED_FIFO only work on "preempt kernel"? |
1ms is plenty to generate a few Ethernet frames, but on a typical Linux system, you can't count on not having the occasional pause. Even if you make your process high-priority, I don't think you can expect to always make a 1ms deadline.
RTLinux combines a real-time operating system with Linux. Linux runs as a non-real-time-priority task in the real-time scheduler.
I lack experience with RTLinux, so I can't offer concrete advice, but it does include Ethernet drivers, so it looks suitable for your use case.
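Independent of the OS choice, generating the frames themselves is straightforward from userspace with an AF_PACKET socket; a minimal sketch (the interface name, MAC addresses, and EtherType are placeholders - the 1 ms scheduling deadline is the hard part and is not addressed here):

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Raw socket: we hand the kernel complete Ethernet frames. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd == -1) {
        perror("socket (needs root/CAP_NET_RAW)");
        return 1;
    }

    unsigned char dst[ETH_ALEN] = {0xff,0xff,0xff,0xff,0xff,0xff}; /* broadcast */
    unsigned char src[ETH_ALEN] = {0x02,0x00,0x00,0x00,0x00,0x01}; /* locally administered */

    struct sockaddr_ll addr = {0};
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex("eth0");   /* placeholder interface */
    addr.sll_halen   = ETH_ALEN;
    memcpy(addr.sll_addr, dst, ETH_ALEN);

    unsigned char frame[64] = {0};               /* minimal frame */
    memcpy(frame, dst, ETH_ALEN);                /* destination MAC */
    memcpy(frame + ETH_ALEN, src, ETH_ALEN);     /* source MAC */
    frame[12] = 0x88; frame[13] = 0xb5;          /* local experimental EtherType */
    /* bytes 14.. would be the preloaded payload */

    if (sendto(fd, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr)) == -1)
        perror("sendto");

    close(fd);
    return 0;
}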
|
I am looking to generate raw Ethernet frames with payload that is preloaded into memory.
The Ethernet frames (10-60 full frames) should be generated at 1 ms intervals with no exception.
What would be my option to do this? My concern is in regards to the real-time requirements of such an application. Interrupts should be minimized and the process should perhaps have a core dedicated to its execution? If Linux/software is not an option the alternative is FPGA.
Looking forward to hearing potential solutions.
| Generate raw Ethernet frames with memory preloaded payloads at < 1 ms intervals |
For the most part the RT kernel will make subtle changes to ensure your frame time is not overrun.
Even then it is generally considered that the difference is very small; it's more of a "if you aren't quite there, this might tip you over" adjustment.
If your USB performance is not good enough for what you are doing, you could try to reduce the number of devices connected to your system to free up additional hardware resources.
Or perhaps a faster version of USB.
Unfortunately, as far as I know there isn't a way to directly improve your USB latency.
|
I recently applied the rt patch to my kernel in an attempt to lower the worst-case latency of sending messages over USB. Unfortunately I have seen no improvement in the worst-case over the unpatched kernel. Is there a patch I need for libusb, or even another way to communicate over USB to take advantage of the RT kernel to lower the worst-case latency?
| Is there a way to get libusb to behave in real time? |
It didn't make a visible effect because the process was run in user space, and it was given real-time priorities in the context of other processes in user space.
However, the kernel space was still loaded with interrupts, and when other processes got their (small) share of processor time, they could have initiated a system call, which results in a migration to kernel space, where the real-time priorities of our process don't mean anything.
The interrupts were all happening in kernel space, and were therefore also not influenced by the real-time priorities.
|
We were trying to get the best result with software PWM on a Raspberry Pi with Raspbian. We made a Python script which starts PWM on a GPIO pin, and observed the results with the oscilloscope.
It didn't do well, the delay wasn't acceptable.
After that we set the realtime priority of our software PWM process to 99 and changed the scheduling to real-time round-robin, and later to FIFO (1865 is the process pid).
sudo chrt -f -p 99 1865
sudo chrt -r -p 99 1865

It acted the same as before the priority change.
ALL the other processes were run with nonrealtime priorities. However, there were around 3000 interrupts per second going on from the timer and USB.
In this question the answer stated:

... the vanilla kernel handles real time priorities, which gives them higher priority than normal tasks, and those tasks will generally run until they voluntarily yield the CPU

Any ideas why the change of priority didn't have a visible effect?
Do real time priorities influence what will happen with the process on interrupts?
| Change of real time priority made no visible effect |
I would imagine that sudo is preserving the root user's environment, and therefore may not have the paths or other environment variables that the martin user has set. It may also be that you need to run jack via sudo from a shell with the -s /path/to/shell option.
However as root, you have the rights to su (substitute user) without being prompted for a password (and not require configuration of sudo to achieve this, sudo is specifically aimed at non-root users).
su - martin -c /usr/bin/jackd ...

-c tells su what command to run, and the - option (which can also be given via -l) will attempt to set up the environment similar to that of the user it is being run as (in this case martin).
|
After an upgrade to Debian wheezy (I did not upgrade the kernel - it is still 3.8.2) I can no longer start jackd the way I used to. I get "you are not allowed to use realtime scheduling".
My investigation shows that this is related to a sudo command in my script, where I sudo from root to martin. The sudo is required, because I start jackd when my firewire mixing console gets switched on, using a udev rule. I can reproduce the problem by typing the sudo command from the command line.
In short, this is what I observe:

start jackd as martin -> works
start jackd as root -> works
login as root and su - martin, then start jackd -> works
as root sudo -u martin /usr/bin/jackd ... -> does not work
as above but sudo -E -u martin ... -> does not work

My /etc/security/limits.conf contains these lines:
@audio - rtprio 40
@audio - nice -20
@audio - memlock 1554963

sudo -u martin id shows that I am in the audio group; however, root is not. After sudoing from root to martin, martin has no realtime permissions:
sudo -u martin sh -c "ulimit -e -r"
scheduling priority (-e) 0
real-time priority (-r) 0

Adding root to the audio group made no difference. Root still has no realtime permissions, and after sudo -u martin, martin still looks as above.
| Losing (realtime) permission when sudoing from root to myself |
No, it’s a solely compile-time configuration option, there’s no runtime equivalent. You’ll need to rebuild your kernel.
|
I have Linux with a kernel that was compiled with the real-time patch, but the config option CONFIG_PREEMPT_RT_FULL was not enabled (it says in /proc/config that it is not set).
Do you know if there is any way to turn this on, without having to recompile the kernel?
I guess it's not possible but maybe there's some way?
| Enable CONFIG_PREEMPT_RT_FULL after the kernel compilation |
None, in general. The realtimeness of your system is not dictated by the features it has, but by whether you apply the -rt patchset or not, and whether you design and run the software that needs to be a realtime task appropriately. The only thing within an -rt-patched kernel that would increase latency would be a driver that has a prolonged critical section where it disables interrupts, but that would probably not fly with in-kernel drivers for most devices; also, removing such a driver would disable a hardware component that is actively being used. So, I don't think that's an option here.
Since your handle is "ABeginner", I'll allow myself the broad recommendation that you shouldn't be building your own kernel config from scratch, but use the -rt kernel with the oldconfig from your Linux distro, or, if it supplies one, use the distro-supplied -rt kernel.

The system at this time, using cyclictest to test the delay effect, it was found that the jitter is a bit high, with a minimum value of 2 and a maximum value of 39.

These numbers seem OK, within the boundaries of a modern x86_64 or aarch64 system. You say "jitter is a bit high"; but as an engineer involved in real-time signal processing: "a bit high" is not really a thing. Either your maximum latency is OK, or it's not OK. That's what real-time systems are about.
So, either you come to the conclusion that 39 µs is good enough (and it really might be! For anything audio-related, for example; or it might not be, say, for a machine emergency stop controller), or it's not – but then I'd ask you why you're doing a nanosleep accuracy test (and not something handling a hardware timer interrupt in kernel space, for example). That would really lead a little far here, I guess – it would involve figuring out which cores you pin to your realtime application process (if there's any such process, but then again, you're measuring userland nanosleep, so there has to be one), how they relate to your source of events, and what is in the chain between the source of events (assuming it's not just a timer) and your system acting on the event. So, sadly, all the hard things about improving real-time systems apply to Linux just as much as to much more bare-bone operating systems like FreeRTOS.
|
I am customizing a Linux real-time system using the Linux 6.4.0 kernel and the patch-6.4.6-rt8 patch. When running make menuconfig, which configurations should be turned off to improve real-time performance?

The system mainly serves as a robotic arm controller.
The system at this time, using cyclictest to test the delay effect, it was found that the jitter is a bit high, with a minimum value of 2 and a maximum value of 39.
# cyclictest -t1 -p 80 -n -i 1000 -l 10000
# /dev/cpu_dma_latency set to 0us
policy: fifo: loadavg: 0.27 0.07 0.02 1/122 627

T: 0 ( 577) P:80 I:1000 C:  10000 Min:      2 Act:    2 Avg:    2 Max:      39
# | To create a Linux RT real-time system, which functions in the kernel should be cropped out? |
This answer first takes the readers to the historical perspective of standard development, then it brings the attention of the readers to specific texts of the standard to explain the reason for the requirement.
In XPG Issue 3, sigaction, along with sig*set, sigismember, sigpending, sigprocmask, and sigsuspend, are introduced for alignment with POSIX.1-1988 standard. Of these, sigaction provided the most comprehensive and consistent interface for specifying signal dispositions; sigpending, sigprocmask and sigsuspend provided ways for fine-grained response to signals.
In XPG Issue 4 (the oldest currently available in digital form), sigaltstack, sig{hold,ignore,pause,relse,set}, siginterrupt were introduced. The latest standard didn't say where they were from, only that all but sigaltstack are obsolescent, as they only work in single-threaded processes.
In XPG Issue 5, which is Single Unix Specification Version 2, pthread_sigmask, sigqueue, sigtimedwait, sigwaitinfo, and sigwait are introduced for alignment with POSIX realtime and threads extensions.
Now, it's important to look at 2 other places in the standard.
1st, in General Information for the System Interfaces volume, Signal Concepts:

... a signal can be "blocked" from delivery to a thread. If the action associated with a blocked signal is anything other than to ignore the signal, and if that signal is generated for the thread, the signal shall remain pending until it is unblocked, it is accepted when it is selected and returned by a call to the sigwait() function, or the action associated with it is set to ignore the signal.

Obviously, apart from possibly eventually ignoring the signal, the signal has 2 ways of reaching the process/thread - being unblocked, or being accepted by sigwait (and its siblings such as sigtimedwait and sigwaitinfo).
2nd, in sigaction:

The result of the use of sigaction() and a sigwait() function concurrently within a process on the same signal is unspecified.

Obviously, sigwait had been envisaged as a secondary method of processing signals.
That is further confirmed by the following text in the rationale for the sigtimedwait and sigwaitinfo interface:

The sigwait functions provide a synchronous mechanism for threads to wait for asynchronously-generated signals.
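To make the requirement concrete, here is a minimal sketch of the intended pattern - block first, then wait (SIGUSR1 is just an illustrative choice; compile with -pthread):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

int main(void)
{
    sigset_t set;
    int sig, err;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);

    /* Block SIGUSR1 BEFORE calling sigwait(); in a multithreaded
       program, do this before creating other threads, since they
       inherit the creator's signal mask. */
    err = pthread_sigmask(SIG_BLOCK, &set, NULL);
    if (err != 0) {
        fprintf(stderr, "pthread_sigmask failed: %d\n", err);
        return 1;
    }

    /* The signal can now only be "accepted" here, never delivered
       asynchronously - which is what the standard's wording requires. */
    if (sigwait(&set, &sig) == 0)
        printf("received signal %d\n", sig);

    return 0;
}
|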
In APUE, chapter 12 page 454, it is mentioned that:

To avoid erroneous behavior, a thread must block the signals it is waiting for ...

Similar wording appears in the standard:

The signals defined by set shall have been blocked at the time of the call to sigwait(); otherwise, the behavior is undefined.

What erroneous/undefined behavior is being discussed in these texts? I can't find a rationale or application usage in the standard, and I'm having difficulty comprehending the explanation in the book:

if the signals are not blocked ..., then a timing window is opened up where one of the signals can be delivered to the thread before it completes its call to sigwait.

| Why is it necessary to block a signal before sigwait()'ing it? |
It turned out that what I had in mind is kind of possible (but it's complex).
We have two ways to tell Linux to exclude one or more CPUs from its normal process scheduling:

- The isolcpus boot option (documentation)
- The Linux-specific cpusets (documentation)

After that, a program can tell Linux that it wants to be run on that CPU with the following system call:

#define _GNU_SOURCE
#include <sched.h>

int sched_setaffinity(pid_t pid, size_t len, cpu_set_t *set);

This way that specific CPU (or CPUs) will be just for that process (or processes).
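A minimal usage sketch (CPU number 3 is illustrative - use whichever CPU you isolated):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(3, &set);   /* CPU 3: assumed isolated via e.g. isolcpus=3 */

    /* pid 0 means "the calling process". */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to CPU 3\n");
    /* ... time-critical loop here ... */
    return 0;
}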
|
Correct me if I'm wrong:
As far as I searched around the web, the main difference between a micro-controller (like Arduino) and a SBC (like Raspberry Pi) is that the micro-controller is atomic which means it runs your code line by line and does nothing else, no delay. But a SBC (RPi for example) runs an OS, and the OS is not atomic, which means it will schedule your code for running, because the OS has to do other works too, your code may run with a delay, depending on how busy the OS is.
Therefore, for some projects like a flight controller for example, we should use a micro-controller so the drone would react to our commands and sensors' data immediately without any delay.
There are some boards like the BeagleBone that have a CPU capable of running an OS, and one or more micro-controllers capable of doing something atomic, which means you get both worlds with one board.
Now here's my question:
On a multi-core CPU that is running a Linux OS, can we tell the Linux kernel to reserve one core for only one process? Say I have a Python program that controls a drone on a Raspberry Pi, can I tell Linux to use three of four cores for itself and use one core just for my flight controller program? Am I making any sense?
I am aware of Linux job priority and some real-time Linux kernels, but I haven't looked into those options in detail. I would appreciate any guidance regarding this topic, thanks.
| Can Linux use a CPU core as a micro-controller? |
It seems that the real-time kernel is available on AMD64/x86_64 architectures only - yours is i386 (32-bit). Repo URLs can be accessed from a browser, so if you open CentOS 7 real time kernel or CERN's CentOS real time kernel you'll see only 64-bit support. This is also confirmed in Red Hat's real-time kernel installation guide, page 9: only 64-bit support. No surprise, since RHEL 7 is a 64-bit-only distro with 32-bit library support. Still, I suggest you read Red Hat's doc; it'll help you understand the benefits of a real-time kernel and whether you really need it.
|
I have computer where I'm trying to install real-time kernel.
My OS:
# uname -a
Linux localhost.localdomain 3.10.0-1127.el7.centos.plus.i686 #1 SMP Sun Apr 5 18:08:31 UTC 2020 i686 i686 i386 GNU/Linux

I have created the file /etc/yum.repos.d/CentOS-rt.repo with the following content:

# CentOS-rt.repo

[rt]
name=CentOS-7 - rt
baseurl=http://mirror.centos.org/centos/\$releasever/rt/\$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

When I tried to update packages I got an error message, please see below:
# yum update -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirror1.hs-esslingen.de
* extras: mirror1.hs-esslingen.de
* updates: mirror1.hs-esslingen.de
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
http://mirror.centos.org/centos/7/rt/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use https://bugs.centos.org/.

 One of the configured repositories failed (CentOS-7 - rt),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled
yum --disablerepo=rt ... 4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage: yum-config-manager --disable rt
or
subscription-manager repos --disable=rt 5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise: yum-config-manager --save --setopt=rt.skip_if_unavailable=true

failure: repodata/repomd.xml from rt: [Errno 256] No more mirrors to try.
http://mirror.centos.org/centos/7/rt/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found

I have never installed a kernel previously.
Please help me bypass the above error.
Thank you.
| Install a real-time kernel on CentOS 7 i386 |
Kernel version 3.10 is the version RHEL 7.x (and so also CentOS and related distributions) is locked on. RedHat will maintain a version of that kernel and backport any necessary bugfixes and new drivers as part of the active support for their distribution. When RHEL 8.0 is released, it will have a new kernel version that will again be maintained for the duration of the 8.x series. Since the kernel source code is open, CentOS and other related distributions get to use the same extremely widely used (and so extensively tested) code base.
1.) CERN, the European Organization for Nuclear Research, maintains a version of CentOS 7 with optional customizations specifically for use at CERN.
2.) Canned by RedHat and supported by them in RHEL 7.x with backported bugfixes and other things, yes. Because these kernels are the basis of RHEL 7.x, they get to benefit from all the things backported by RedHat, so they might actually have significantly better support for new hardware than Linus Torvalds's "vanilla" 3.10.
3.) You can think of it as a continuum between the "enterprise-grade" 3.10 in RHEL/CentOS, and the "bleeding edge" of... actually kernel version 5.0.7 at the time of this writing. I don't really know about their maintenance model.
| I'm looking to expand on, and get more current information than, the question asked here. I have a requirement for real-time behavior that is sub-millisecond, and am now exploring what my options are. I am working with CentOS, ideally with a more recent kernel (>4.14) for support for my chipset. As I understand it, I have a few options:

Apply the CONFIG_RT_PREEMPT patch to the kernel of my choice with the pre-emption model of my choice (see here)
Utilize a canned real-time kernel from either CERN or CentOS, both of which top out at kernel version 3.10
I'm also aware of an 'RTLinux' distro, which as I understand it is now a legacy product owned by WindRiver

So given the above knowledge, I've got a few questions:

Is the CERN site just a mirror for the CentOS distribution? They certainly look similar. Who owns/maintains this?
Are the CERN and CENTOS real-time kernels just a canned flavor of the above CONFIG_RT_PREEMPT patch?
Just in case anyone on here has insight into the CONFIG_RT_PREEMPT patch: their main page lists their actively maintained kernel patches as (4.0-rt, 4.1-rt, 4.4-rt), although if you dig they've got patches available all the way up to 4.19, which is my preference. What's their model for maintaining patches? Why wouldn't I use the 4.19 patch?

Thanks
| Available options for real-time Linux (Centos 7) and their relation to one another [closed] |
In the last source distribution, (rtnet-0.9.12.tar.bz2), I can see rtnet-0.9.12/drivers/experimental/rt_r8169.c, so the rt_ nomenclature remains. The module filename should be rt_r8169.ko. It's not there either because it wasn't compiled, or because it failed to compile (it is under the ‘experimental’ subdirectory, after all). I see there's an --enable-r8169 option in the configure script. Did you supply it?
|
From here: http://www.xenomai.org/index.php/RTnet:Installation_%26_Testing#Testing_with_a_single_node_.28local_loopback.29

TODO: simplify the following steps.
- Then you need to edit the file rtnet.conf under the /usr/local/rtnet/etc folder for the correct setup to run RTnet. Edit the following parameters:
- Set the host up as master or slave depending on how you are going to use it.
- The RT_DRIVER should be the realtime equivalent of the module you removed, namely rt_8139too.

Kernel: 2.6.38.8
linux-y3pi:~ # ethtool -i eth0
driver: r8169
version: 2.3LK-NAPI
firmware-version:
bus-info: 0000:01:00.0

After RTnet installation I get:
linux-y3pi:/usr/local/rtnet/modules # ls
rt_8139too.ko rtcfg.ko rt_eepro100.ko rt_loopback.ko rtnet.ko rtudp.ko
rtcap.ko rt_e1000.ko rtipv4.ko rtmac.ko rtpacket.ko tdma.ko

How to find what corresponds to r8169?
| What is the realtime equivalent of the module r8169? |
I found out that only kitty (among those that I tested) can support ligatures. And looking at how it improves, I don't think anything can substitute for it for me. After some research, I found that (probably on a much speedier PC) some people manage to open kitty in 0.2s (0.1s to load OpenGL and 0.1s for the rest). Also, I fortunately found that kitty does support a "server-client" architecture. You can create any number of groups in which terminals share some stuff (I don't have any specifics). And to create "the main" (the easiest) group, I only had to add 3 more chars: kitty -1:

--single-instance, -1
If specified only a single instance of kitty will run. New invocations
will instead create a new top-level window in the existing kitty
instance. This allows kitty to share a single sprite cache on the GPU
and also reduces startup time. You can also have separate groups of
kitty instances by using the --instance-group option.

Now the terminal (fully customized) opens in less than 0.2s (0.16-0.18)! That's 0.4s, or 3.3 times, faster than my first timings. It's about as fast as other server terminals (gnome-terminal, xfce4-terminal). There are only 2 downsides:

- I don't think there's a way to run the server in the background, therefore the 1st terminal opens at regular speed (0.57s);
- If you kill one terminal (maybe something froze) - the other ones all go with it.

But currently I think these things aren't an issue for me and I can probably live with them. I installed Pop!_OS 22.04 and the UI is super responsive (I really think it will decrease the time even further). |
I just checked my timings: it takes about 0.41-0.45 seconds to open a new gnome-terminal window and about 0.55-0.65 seconds to open kitty. And it does bother me a bit that it takes so much time to open (I want it to be close to instant, like UI elements are responding to mouse/keyboard events). I want some suggestions for either speeding up window open process or some terminal alternatives that open faster (I use kitty for several years now). Maybe someone can share their timings, so I can compare mine to something (I didn't find anything about this issue on the Internet).
Here's a MWE:
terminal=kitty # gnome-terminal
date +%s.%N > .start; $terminal -- sh -c 'echo "$(date +%s.%N)-$(cat .start)" | bc | cut -c 2- > .diff; rm -f .start'; cat .diff; rm -f .diff

My config: Laptop with i7-8550U, SSD, Ubuntu 20.04.
P.S. I'm hoping soon to hop to Pop!_OS 22.04. After that, I'll check my timings again (perhaps they'll improve).Update:
Tried with zero config using root:

kitty: 0.38-0.43 seconds
gnome-terminal: 0.41-0.46 seconds

Update 2:
Ran ranger using kitty with root (0 conf.) and alacritty (cargo crate is user-wide; 0 conf.):

kitty: 0.50-0.57 seconds
alacritty: 0.37-0.43 seconds
alacritty without ranger: 0.22-0.28 seconds (now we're talking)

kitty:
date +%s.%N > .start; kitty ranger --cmd 'shell echo "$(date +%%s.%N)-$(cat .start)" | bc | cut -c 2- > .diff; rm -f .start; kill $PPID'; cat .diff; rm -f .diff

alacritty:
date +%s.%N > .start; alacritty -e ranger --cmd 'shell echo "$(date +%%s.%N)-$(cat .start)" | bc | cut -c 2- > .diff; rm -f .start; kill $PPID'; cat .diff; rm -f .diff

P.S. I use ranger a lot (want to switch to lf) and 99% of the time I open it with a shortcut bound to the kitty ranger command.
| Time it takes to open a new terminal window |
If you want to reduce network latency and jitter - it being said that this will always increase CPU load whatever the traffic and, in some cases, also decrease the throughput under heavy traffic:
A/ THE DEFINITIVE HAMMER: BUSY POLLING! (Big fat warning: the fewer CPUs you have, the more you will sacrifice on everything else.)
The idea is that rather than fire-and-forgetting some blocking recvmsg, hence freeing your CPU for other jobs, eventually flushing your CPU caches, and eventually coming back to your task after several context switches and softirq handlings… you loop in your task busy-waiting for data from the NIC.
As soon as data is made available in the buffer… it will be processed without any additional delay.
Please refer to man recvmsg and do read the part related to the MSG_DONTWAIT flag. Also note that you could achieve a similar effect opening your socket O_NONBLOCK and also note that the polling is also possible to be achieved by the kernel but I personally do not like the idea since… I get only 2 cores… ;-)
This being said, you'll definitely want to pin your task to one cpu, this will prevent possible task migration overhead and help keeping the cache hot.
Benefits of this method are immediate ! Reduction of latency & jitter to their minimum with no expense on throughput but… since there is no free lunch… highest possible cpuload.
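A minimal sketch of the busy-polling loop itself (assuming an already-created and bound UDP socket; by design this keeps one core at 100%):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Busy-wait on sockfd until a datagram arrives, then return its size.
   MSG_DONTWAIT makes each recv() non-blocking, so the loop never
   sleeps - lowest latency, highest CPU load. */
ssize_t busy_poll_recv(int sockfd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = recv(sockfd, buf, len, MSG_DONTWAIT);
        if (n >= 0)
            return n;                      /* got data */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                     /* real error */
        /* else: nothing yet - spin and try again immediately */
    }
}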
B/ LOW LEVEL NIC TUNING (Interrupt coalescing, ring buffers, transmit queues… via ethtool)
- Buffers: In general, and whatever the subsystem (network/sound/…), buffers are the enemies of latency/jitter. So you'll want to reduce them to their minimum.
What is the value for the strict minimum ?
When, under heavy load, you start getting packets dropped and/or overruns (as reported by ifconfig)
- Interrupt coalescing :
Interrupt coalescing adds latency to the packet arrival time since the packet is in host memory, but the host is not aware of the packet until some time later. However, the system will use fewer CPU cycles because fewer interrupts are generated and the host processes several packets per interrupt.
Therefore it can be seen as interesting to reduce coalescing to the minimum possible, here at the expense of CPU time AND throughput.

Of course, this is not needed in case of busy polling.
Of course, this gets very little effect in case of multiqueue network cards if you do not first ensure that their associated IRQs are evenly distributed on all available cores.
Of course, it won't get any effect if your system is not running irqthreaded, since the real work of the IRQ handling won't be achieved by a dedicated kernel thread following a real-time scheduling policy. |
I have two computers that are connected via an Ethernet cable with Ubuntu 22.04 installed on them. I have a client on computer A) which is sending UDP packets to a server on computer B) and I am measuring the latency and jitter of these packets in different scenarios. I have written the source code of the client and server in C, using the socket library.
When there is additional, high-bandwidth traffic between the two computers on top of the packets whose latency is measured, the jitter and latency are less than when I send the packets without that additional traffic:

Round trip time without additional traffic: 0.556 ms
Round trip time with additional traffic: 0.105 ms
Jitter without additional traffic: 0.042 ms
Jitter with additional traffic: 0.014 ms

It seems interesting because I would imagine fewer packets means less latency and jitter, but the results show otherwise. Can someone tell me what can be the reason? I suspect it has to do with buffer optimization: when more packets arrive, the buffer must be emptied more often, but I am not sure. If that is the case, how could I configure the buffers in order to minimize latency?
Edit#1:
As suggested, I tried to modify the parameters of my configuration for the NIC (ethtool -c):

I could only change the initial value of rx-usecs 3 to rx-usecs 1 us. I could not modify adaptive-rx and rx-usecs-low; I don't think they are supported by my NIC.
Decreasing the rx-usecs value to 1 did not solve the issue; the latency difference between the two scenarios remained the same, if not increased.
Increasing it to 5 and 10 us did not seem to help either.
| Why is there less latency and jitter when there is additional high bandwidth traffic between two computers with Ubuntu 22.04 installed on them? |
Yes, it is possible. You have to choose your kernel at GRUB2 boot menu.
See under Advanced options for Debian. Some GRUB2 config:

Disable the hidden GRUB2 boot menu in /etc/default/grub. Comment out GRUB_HIDDEN_TIMEOUT="5"
and GRUB_HIDDEN_TIMEOUT_QUIET="true", or change the latter to false.
Set a reasonable time to see the menu: GRUB_TIMEOUT="3" |
For instance, I know it's possible to easily install several desktop interfaces and choose what session to log into on startup.
Is this type of thing possible for choosing between kernels? I would like to be able to install the low latency and realtime kernels on my system while still being able to use the normal kernel easily when doing everyday chores such as just checking my email and doing my school.
Basically, I would like to log into my system using a minimal window manager and a low-latency kernel only when I'm doing things such as music production.
System specs:Debian Bullseye, Stable
amd64 (Intel i2)
HP Compaq 8000 Elite Convertible Minitower
8GB RAMThank you in advance, and God bless.
| How to Choose Between Real-Time, Low-Latency and Normal Kernels on Startup? |
This is the asker's attempt at guessing.
Just like POSIX threads, the realtime APIs were found useful in regular applications; coupled with the fact that these APIs are implementable without major obstacles, operating systems supporting these interfaces became more common, so the standard moved them to Base - all because POSIX is a prescriptive standard that aims to gather consensus.
Being a realtime API doesn't mean an application using it is a real-time application. The ability of the operating system (and to an extent, the hardware) to guarantee quality of service for these APIs depends on various factors, most importantly system load.
It's unreasonable to expect a finite system to be able to serve an infinite amount of realtime requests that are beyond its capability. I have no experience with realtime programming, but it's a sense-based guess of mine that realtime applications have well-defined scopes and goals that programmers are obligated to achieve, and beyond which users of realtime systems are expected to avoid going.
|
While reading the standard, I noticed that a bunch of APIs were:

Introduced in Issue 5 for alignment with POSIX realtime APIs,
Marked for option group membership in Issue 6, and
Moved to Base in Issue 7 (SUSv4).

Q: Does this mean that all systems conforming to the "Unix(R) V7" product standard are realtime systems? What are the actual capabilities of such a system with regard to real-time requirements?
| Single Unix Specification version 4 (Issue 7) moved bunch of Real-Time APIs to Base, What Next? |
I've finally figured out the multiroom audio Raspberry Pi conundrum!
The solution was to bring in PulseAudio. ALSA cannot do it alone because of the Raspberry Pi's ALSA bcm2835 driver limitations. The driver cannot copy data from one stream to another using mmap for reasons I don't quite understand. This is the case even when mmap is specifically enabled and mmap emulation is used (mmap_emul) - see: https://blog.dowhile0.org/2013/04/27/mmap-support-for-raspberry-pi-bcm2835-alsa-driver/.
The complete solution to Raspberry Pi audio stream duplication locally and via trx:

Transmission side - install packages:
sudo su
sudo apt install alsa-utils opus-tools lame vlc cmake libasound2-dev libortp-dev libopus-dev pulseaudio
mkdir ~/Installers
cd ~/Installers
git clone http://www.pogo.org.uk/~mark/trx.git
cd trx
make
sudo make install

Transmission side - Add user to audio group:
sudo usermod -a -G audio your_username

Transmission side - activate the ALSA loopback module:
modprobe snd-aloop
echo "snd-aloop" | sudo tee -a /etc/modules

Transmission side - setup ALSA devices to support trx:
You should edit ~/.asoundrc if you want to do this for just one user, or /etc/asound.conf to make it apply to all users.
# nano /etc/asound.conf OR
# nano ~/.asoundrc

File contents:
# /etc/asound.conf OR ~/.asoundrc

# tx_dmix ensures audio sent to tx gets resampled properly
pcm.tx_dmix {
type dmix
ipc_key 2867
ipc_perm 0666 # allow other users
slave {
pcm "hw:Loopback,0,0"
rate 48000
format S16_LE
channels 2
period_size 256
buffer_size 8192
}
}

# tx is the entry point for anything that wants to play down the TX link
pcm.tx {
type plug
slave.pcm "tx_dmix"
hint.description "Audio input for TX transmission."
}

# Hubcap is used by TX to resample audio into Opus-friendly sample rate
pcm.hubcap {
type plug
slave {
pcm "hw:Loopback,1,0"
rate 48000
format S16_LE
}
hint.description "Internal loopback capture and resampler for TX. Only TX should use."
}

# Headphones (3.5mm jack) playback
pcm.headphones_hw {
type hw
card Headphones
sync_ptr_ioctl 1
}

pcm.headphones {
type plug
slave.pcm "headphones_hw"
}

pcm.!default {
type plug
slave.pcm "headphones"
}

Transmission side - set up PulseAudio connections
# nano /etc/pulse/default.pa OR
# nano ~/.pulse/default.pa

File contents:
# PulseAudio config - duplicates audio for local playback and network playback

.include /etc/pulse/default.pa

# Set up Pulse sinks to connect to the ALSA devices we configured in .asoundrc
load-module module-alsa-sink device="tx" sink_name=tx
load-module module-alsa-sink device="headphones" sink_name=headphones

# Create stream duplicator
load-module module-null-sink sink_name=localandtx
load-module module-loopback source=localandtx.monitor sink=tx
load-module module-loopback source=localandtx.monitor sink=headphones
set-default-sink localandtx

Restart pulse after creating this file:
pulseaudio -k
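To confirm that the sinks from the config above exist after the restart (pactl ships with PulseAudio):

# Should list tx, headphones and localandtx among the sinks
pactl list short sinks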
Transmission side - create tx initialising script:
nano ~/run_tx.sh

File contents:
#!/bin/bash
# Fires up the TX transmission
# Usage:
# sudo run_tx.sh [ip] [latency in ms]
#   sudo run_tx.sh 239.0.0.99 64

# Defaults
DEFAULT_TX_IP="239.0.0.99"
DEFAULT_LATENCY_BUFFER_MS="64"

# Resolve from args
CHOSEN_TX_IP=${1:-$DEFAULT_TX_IP}
CHOSEN_LATENCY_BUFFER_MS=${2:-$DEFAULT_LATENCY_BUFFER_MS}

echo "Launching TX on ${CHOSEN_TX_IP} with ${CHOSEN_LATENCY_BUFFER_MS} ms buffer."

# Start TX, using the hubcap ALSA device
tx -d hubcap -m "$CHOSEN_LATENCY_BUFFER_MS" -h "$CHOSEN_TX_IP" &

# Boost priority of all TX threads (necessary to prevent buffer underruns)
TX_PIDS=$(ps -L -C tx -o lwp=)
for TX_PID in $TX_PIDS
do
    sudo renice -10 "$TX_PID"
done

Transmission side - run the tx sender:
chmod +x ~/run_tx.sh
sudo ~/run_tx.sh

Receiving side - install packages (just runs ALSA, no need for PulseAudio):
sudo su
sudo apt install alsa-utils opus-tools lame vlc cmake libasound2-dev libortp-dev libopus-dev
mkdir ~/Installers
cd ~/Installers
git clone http://www.pogo.org.uk/~mark/trx.git
cd trx
make
sudo make install

Receiving side - Create rx initialising script:
nano ~/run_rx.sh

File contents:
#!/bin/bash
# Fires up the RX Reception side
# Usage:
# sudo run_rx.sh [ip] [latency in ms]
#   sudo run_rx.sh 239.0.0.99 64

# Defaults
DEFAULT_RX_IP="239.0.0.99"
DEFAULT_LATENCY_BUFFER_MS="64"

# Resolve from args
CHOSEN_RX_IP=${1:-$DEFAULT_RX_IP}
CHOSEN_LATENCY_BUFFER_MS=${2:-$DEFAULT_LATENCY_BUFFER_MS}

echo "Launching RX receiver, listening on ${CHOSEN_RX_IP} with ${CHOSEN_LATENCY_BUFFER_MS} ms buffer."

# Start RX, using the default ALSA device
rx -m "$CHOSEN_LATENCY_BUFFER_MS" -h "$CHOSEN_RX_IP" &

# Boost priority of all RX threads (necessary to prevent buffer underruns)
RX_PIDS=$(ps -L -C rx -o lwp=)
for RX_PID in $RX_PIDS
do
    sudo renice -10 "$RX_PID"
done

Receiving side - run the rx listener:
chmod +x ~/run_rx.sh
sudo ~/run_rx.sh

Play something on the tx side into the PulseAudio default sink (localandtx). It will be played out on the rx side via the rx device's default audio output.
To specify a different audio device for playback in rx, add the -d "alsa_device_name" option to the rx line in run_rx.sh, e.g.:
rx -m "$CHOSEN_LATENCY_BUFFER_MS" -h "$CHOSEN_RX_IP" -d Headphones &
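To sanity-check the whole chain from the transmission side, play any sample into the default (localandtx) sink, e.g. with paplay from pulseaudio-utils (the sample path varies by distro):

paplay /usr/share/sounds/alsa/Front_Center.wav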
|
I am trying to achieve a multi-room audio setup in my house using Raspberry Pis. How can I get VLC playing simultaneously out the local headphone port while also streaming it to other devices via trx?
Background:
I have found a fantastic package called trx which allows low-latency streaming using the Opus codec across the LAN: http://www.pogo.org.uk/~mark/trx/streaming-desktop-audio.html
I have managed to successfully get trx installed and working with the following:
sudo modprobe snd-aloop
Transmission side ~/.asoundrc
# TX device catches played audio from a player (e.g. VLC)
# point vlc to this device:
# cvlc --alsa-audio-device="tx" <file_or_stream>
pcm.tx {
type plug
slave.pcm {
type dmix
ipc_key 2867
slave {
pcm "hw:Loopback,0,0"
rate 48000
format S16_LE
channels 2
period_size 256
buffer_size 8192
}
}
}

# Hubcap ensures 48000Hz sample rate (Opus compatible)
pcm.hubcap {
type plug
slave {
pcm "hw:Loopback,1,0"
rate 48000
format S16_LE
}
}

Transmission side:
tx -d hubcap -m 64 -h 239.0.0.99 &
cvlc --alsa-audio-device="tx" {source_file_path_or_url}
Receiving side:
rx -m 64 -h 239.0.0.99
There are occasional buffer underruns which are easily fixed by changing the niceness of the tx processes to -10.
renice -n -10 {process_id}
The problem:
I would like to be able to play audio from VLC to the devices across the network listening to the multicast 239.0.0.99, and also from the transmission device's headphone / line-out socket.
I cannot figure out how to set up a plug, route and multi in ~/.asoundrc so that there is one ALSA device cvlc can play to, where the audio is fed to both local hw:1 (headphone socket) and plug:tx (input for audio to be transmitted via tx).
The ALSA asound configuration documentation is abysmal. I have tried the following addition to .asoundrc with no luck:
pcm.headphones_dmix {
type dmix
slave {
pcm "hw:Headphones"
}
}

pcm.localandtx {
type plug
slave {
format S16_LE
pcm {
type multi
slaves.tx.pcm "tx"
slaves.tx.channels 2
slaves.local.pcm "headphones_dmix"
slaves.local.channels 2
bindings.0.slave tx
bindings.0.channel 0
bindings.1.slave tx
bindings.1.channel 1
bindings.2.slave local
bindings.2.channel 0
bindings.3.slave local
bindings.3.channel 1
}
}
route_policy duplicate
ttable {
0.0 1
1.1 1
0.2 1
1.3 1
}
hint {
show on
description "Play both locally and via TX."
}
}

With the above:
vlc will happily play to the local headphones with --alsa-audio-device="hw:Headphones"
vlc will happily play to devices running rx with --alsa-audio-device="tx"
But, vlc won't play to either with --alsa-audio-device="localandtx". I want it to play to both. (I am aware of the additional latency when sending audio via trx).
The vlc errors are:
ALSA lib pcm_direct.c:2031:(snd1_pcm_direct_parse_open_conf) Unique IPC key is not defined
[015a4ac8] alsa audio output error: cannot open ALSA device "localandtx": Invalid argument
[015a4ac8] main audio output error: Audio output failed
[015a4ac8] main audio output error: The audio device "localandtx" could not be used: Invalid argument.
[015a4ac8] main audio output error: module not functional
[71b7a980] main decoder error: failed to create audio output

Is there any useful (sane) tool for debugging an ALSA asound config file?
How do I determine which .asoundrc argument in localandtx is "invalid"?
How can I achieve audio routing to both hw:Headphones and tx in ALSA without using PulseAudio?
| Duplicating audio in pure ALSA for playback on local device and streaming via trx |
Below is the command that you need to run to scan the host devices so it will show the new hard disk connected.
echo "- - -" >> /sys/class/scsi_host/host$i/scan

where $i is the host number.
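If you are not sure which host the disk hangs off, rescanning every host is harmless (a bash sketch; run as root):

for h in /sys/class/scsi_host/host* ; do
    echo "- - -" > "$h/scan"
done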
|
I'm having a little issue. I have a live system which runs on RHEL 6.7 (a VM) on VMware 6.5 (which is not managed by our group). The issue is, the other group tried to extend the capacity of an existing disk on a VM. After that, I ran a scan command to detect the new disk as usual with echo "- - -" > /sys/class/scsi_host/host0/scan, but nothing happened. They added 40G to the sdb disk, which should now be 100G, and I saw that it changed on the VMware side but not in Linux. So where is the problem? As I said, this is a live system, so I don't want to reboot it.
Here is the system :
# df -h /dev/mapper/itsmvg-bmclv
/dev/mapper/itsmvg-bmclv
                        59G   47G  9.1G  84% /opt/bmc

# lsblk
sdb                       8:16   0   60G  0 disk
└─itsmvg-bmclv (dm-2)   253:2    0   60G  0 lvm  /opt/bmc

# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  itsmvg   1   1   0 wz--n- 59.94g     0

# pwd
/sys/class/scsi_host

# ll
lrwxrwxrwx 1 root root 0 Nov 13 16:18 host0 -> ../../devices/pci0000:00/0000:00:07.1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Nov 13 16:19 host1 -> ../../devices/pci0000:00/0000:00:07.1/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Nov 13 16:19 host2 -> ../../devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/scsi_host/host2

| How to detect new hard disk attached without rebooting?
Yeah, it's generally built into the firmware. Some drive manufacturers provide an MS Windows based management tool that will allow you to modify various parameters, including disabling the "sleep" or spin down timer. If you have access to a Windows box it might be worth it to pursue that angle.
|
Does anyone know if there is an elegant way to tell an external usb drive not to spin down after a period of inactivity? I've seen cron based solutions that write a file every minute, but nothing that smells of nice unixey elegance. There must be a hdparm, or scsi command that I can issue (usb drives are accessed via the sd driver in OpenBSD) to the drive to tell it to not sleep. I'm afraid that this is probably a feature built into the controller in the enclosure, and as such not much can change it aside from ripping the drive out of it's enclosure and plopping it directly in the machine, but I figured I would ask, on the off chance.
Ideally, I'm looking for an OpenBSD solution, but I know there are others out there w/the same problem so any solutions will be considered for the answer.
| Prevent a USB external hard drive from sleeping |
Usb devices are far more complex than simply pipes you read and write. You'll have to write code to manipulate them. You (probably) don't need to write a kernel driver. See http://libusb.info (née libusb.org) and http://libusb.sourceforge.net/api-1.0. This claims to work with Linux, OSX, Windows, Android, OpenBSD, etc. Under Mac OS X there are user-level functions in I/O Kit that will let you access USB. Under Windows, you may be able to use WinUSB, but it's complicated.
Here's a little diagram I drew once to help me understand the architecture of USB:
╭────────────────────────────────────╮
┌──────┐ │ device ┌─────┐ ┌─────────┐ │
│ Port ├──┐ │ ┌─┤ EP0 ├──┤ control │ │
└──────┘ │ │ ┌────────┐ │ └─────┘ ├─────────┤ │
├────┤addr = 2├─┤ ┌─────┐ │ │ │
│ │ └────────┘ ├─┤ EP1 ├──┤interface│ │
│ │ │ └─────┘ │ #0 │ │
│ │ │ ┌─────┐ ├─────────┤ │
│ │ ├─┤ EP2 ├──┤ │ │
│ │ │ └─────┘ │interface│ │
│ │ │ ┌─────┐ │ #1 │ │
│ │ └─┤ EP3 ├──┤ │ │
│ │ └─────┘ └─────────┘ │
│ ╰────────────────────────────────────╯
│
│
:

Executive summary: every device has an address (assigned by the O/S and subject to change), and up to (I think) 32 endpoints.
Within the device are one or more "interfaces". For example, a web cam might provide a "camera" interface and a "microphone" interface. A multi-function printer would provide several interfaces.
Endpoint 0 is used for control and configuration of the device, and the others are to access the various interfaces. Each interface has zero or more (usually more) endpoints.
Endpoints can be one of several transfer types:

Control Transfers are used to query and configure the device. Every device is required to support a minimum set of control statements. I believe that control transfers are only used with endpoint 0.
Bulk Transfers send or receive data at full bandwidth
Interrupt transfers (I'm not really sure how this is different from bulk transfers; USB doesn't have interrupts). Examples include keyboard and mouse
Isochronous transfers send or receive data at full bandwidth with realtime requirements but without reliability. Used for audio/video applications.

Also worth noting: a USB device can have multiple configurations, which control what interfaces are available and so forth. Changing a device configuration is almost like unplugging the device and plugging a different device in its place.
All of this information is laid out in device descriptors, config descriptors, interface descriptors, endpoint descriptors, etc., which can be queried via endpoint zero.
(Internally, data isn't a stream of bytes, it's packed into packets whose exact formats are part of the USB specification. For the most part, you don't need to worry about this as the controllers and drivers will manage this part for you.)
In practice, depending on your API library and operating system, you'll need to detect the device, read the various descriptors to find out what you're dealing with, optionally set its configuration (if the OS permits), open the interface, and open the endpoints.
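Before writing any code, it can help to eyeball those descriptors from a shell. lsusb (from usbutils) dumps them, e.g. for bus 003, device 013:

# -s takes bus:devnum; -v prints the full descriptor tree
lsusb -v -s 003:013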
For bulk endpoints, you can read and write raw data to them. For control transfers, the API library will provide function calls. I've never used interrupt or isochronous transfers; I'm sure your API library will have the relevant documentation.

More info: "Function" is a collection of interfaces that work together. It's not originally part of the USB spec, and it's up to the device driver to know which interfaces should be grouped together. The USB Working Group has defined device classes which support functions. This is done via the Interface Association Descriptor (IAD).
|
I'm attempting to write raw data to a USB device connected to my computer. I'm using Kali Linux, and I found the correct filepath: "/dev/usb/003/013" . However, when I try to write data to it I get an error.
root@kali:~/usb# printf "test" > /dev/bus/usb/003/013
bash: printf: write error: Invalid argument

I also tried using cat:
root@kali:~/usb# cat test > /dev/bus/usb/003/013
cat: write error: Invalid argument

In the previous case the file 'test' does exist and has data in it. It seems that the system is unable to write to the file descriptor even though it is there.
After researching I've come to the conclusion that you either:
A. Need a USB driver that will interface with the device.
B. Use an SCSI Pass Through to write data directly to the Endpoints on the device.
I'm new to USB programming and although I'm game to try, I've never written a driver before. Any advice or help would be appreciated.
Is it possible to write raw data to the Device like I originally tried? If not, could you explain some options available to me?
| How can I write raw data to a USB device |
They show up as SCSI devices because the drivers speak SCSI to the next kernel layer (the generic disk driver). This isn't actually true of all SATA drivers on all kernel versions with all kernel compile-time configurations, but it's common. Even PATA devices can appear as SCSI at that level (again, that depends on the kernel version and kernel compile-time configuration, as well as whether the ide-scsi module is used).
It doesn't really matter whether the driver speaks SCSI to the physical device. Often, it does. ATAPI, used for talking to PATA/SATA optical drives and other devices, is a SCSI-based protocol encapsulation. However, PATA/SATA disks don't use ATAPI. The libata set of drivers also includes a translator between the ATA command set and SCSI so that you can place PATA/SATA disks under the umbrella of the SCSI subsystem. The separate ide interface inside the kernel is more of a historical holdover.
You'll notice that USB disks also appear as SCSI, for the same reason (and they speak SCSI too on the USB bus). The same goes for Firewire.
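You can see that layering from userspace, assuming the lsscsi package is installed:

# The transport column shows sata:, usb:, etc. for each device
# sitting in the SCSI subsystem
lsscsi -t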
|
I have 3 SATA devices on my system. They show up under /proc/scsi/scsi, although these are not SCSI devices. Why do my SATA devices show up under the SCSI directory?
$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD2500AAJS-6 Rev: 01.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: TSSTcorp Model: CDDVDW TS-H653Z Rev: 4303
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3320620AS Rev: 3.AA
Type: Direct-Access ANSI SCSI revision: 05 | Why do my SATA devices show up under /proc/scsi/scsi? |
Ok, I think I've worked this out.
TL;DR
Use dd with a large block size to read from the tape instead:
dd if=/dev/nst0 bs=1M | tar tvf -

Background
When you write to tapes, the data is written in units called blocks. These are like sectors on a hard disk. Where hard disk blocks were fixed at 512-bytes for many years and only recently moved to 4096-byte blocks, tape blocks can be set to any size you like.
The block size you wish to use is set with the setblk subcommand in mt-st:
mt-st -f /dev/nst0 setblk 512 # Use 512-byte blocks
mt-st -f /dev/nst0 setblk 64k # Use 65536-byte blocksWhen you issue a read operation to the drive, it will return data in block-sized chunks. You can't read half a block - the smallest amount of data you can read from a tape is one block, which of course could be any number of actual bytes depending on what the block size is.
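You can check what the drive is currently set to with the status subcommand; the block size it reports is the one in effect (device name as above):

mt-st -f /dev/nst0 status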
This means if the program you are using supplies a 16kB memory buffer, you will be able to read up to 32 blocks at a time from the tape with 512-byte blocks as these fit exactly in the 16kB buffer. However you will not be able to read anything from the tape with 64kB blocks, because you can't fit even one of them into the 16kB buffer, and remember you can't read anything less than one whole block at a time.
Should you attempt to do this, by using a buffer that's too small for one block, the driver (in this case the st SCSI tape driver) will return a memory allocation error code to advise you that your read buffer is too small to hold even a single block.
To further complicate matters, some tape drives (apparently the LTO ones I am using) also support variable-sized blocks. This means the block size is determined by the size of each write operation and each block can be a different size to the last.
This mode is set with a block size of zero:
mt-st -f /dev/nst0 setblk 0 # Use variable-sized blocksThis is also the default option as - presumably, I am guessing here - it wastes less space with an incorrectly configured program. If, for example, you had set 4k blocks but your program only wrote data in units of 512 bytes at a time, there is a risk that each 512-byte chunk of data would take up 4k on the tape.
Cause
If you now put everything together, you will realise that a tape can hypothetically have a 512-byte block followed by a 64kB block. If the program is reading the tape with a 16kB buffer, it will successfully read the first block, but then when it tries to read more, it won't be able to fit the following 64kB block in its buffer so the driver will return an error.
This explains why I was getting Cannot allocate memory errors most of the time, and occasionally I was able to get tar to extract the first few files but then I got the error again. I had not set the block size with mt-st so it had defaulted to variable-sized blocks when the tape was written, and now tar was using too small a buffer to read in some of those blocks.
tar has a couple of options for setting its own internal block sizes, namely --blocking-factor, --read-full-records, and --record-size, however these only work if tar is used to directly read and write to the tape.
Because I wrote to the tape through the mbuffer program to reduce tape shoe-shining, the block size in the tar archive no longer matched the block size on the tape. This meant --blocking-factor had little effect - it would allow the first block on the tape to be read, which includes a header telling tar what the blocking factor is supposed to be, wherein it switches to that and ignores the value given on the command line. This means the second and subsequent blocks can no longer be read!
Solution
The solution is to use another program to read from the tape - one that can have the read buffer size set to a value large enough to hold the biggest block we are likely to see.
dd works for this, and in a pinch this works:
dd if=/dev/nst0 bs=256k | tar tvf -You may need to increase 256k if your tape has larger blocks on it, but this worked for me. 1M also works fine so it doesn't appear to matter if the value is too large, within reason.
|
I am experimenting with some old SCSI tape drives, and I have successfully written some data to a tape, but I am struggling trying to read it back again.
# tar tvf /dev/st0
tar: /dev/st0: Cannot read: Cannot allocate memory
tar: At beginning of tape, quitting now
tar: Error is not recoverable: exiting now

# dd if=/dev/st0 of=test
dd: error reading '/dev/st0': Cannot allocate memory
0+0 records in
0+0 records out
0 bytes copied, 3.20155 s, 0.0 kB/s

After these commands, dmesg says:
st 10:0:3:0: [st0] Block limits 1 - 16777215 bytes.
st 10:0:3:0: [st0] Failed to read 65536 byte block with 512 byte transfer.
st 10:0:3:0: [st0] Failed to read 131072 byte block with 65536 byte transfer.
st 10:0:3:0: [st0] Failed to read 65536 byte block with 10240 byte transfer.
st 10:0:3:0: [st0] Failed to read 94208 byte block with 69632 byte transfer.
st 10:0:3:0: [st0] Failed to read 65536 byte block with 10240 byte transfer.
st 10:0:3:0: [st0] Failed to read 65536 byte block with 512 byte transfer.

Most of these were because I was testing different block sizes with the tar -b option, but none of those had any effect.
Occasionally I'm able to read a few kB of data off the first block on the tape (which tar can extract until the data cuts off), but usually it fails with no data at all read.
I have (apparently) successfully written data to tape, moved the tape to the other drive, seeked to the end of the data and then written more, so there appears to be no difficulty in writing data to the drive, just in reading it back again.
I am using two LTO-3 drives. One is a half height HP Ultrium 920 and the other is a full height HP Ultrium 960. Both of them have this problem. I have tried with two different SCSI cards (an LSI Logic Ultra320 card and an Adaptec Ultra2/SE 40MB/sec card), both of which produce the same errors.
I have tried a cable with an attached terminator (gave me 40MB/sec even on the Ultra320 card), then a two-connector cable which meant I could only connect one drive so I enabled the "term power" jumper on the drive, which got me to Ultra160 (even though the drive and controller are both Ultra320) but none of this changed anything and throughout it all I still got the same errors when trying to read from the drive.
I downgraded from Linux kernel 4.10.13 to 4.4.3 (the previous version on this machine) and the error message changes from "Cannot allocate memory" to "Input/output error" but the problem remains the same.
Any ideas what could cause this error?
EDIT: The 40MB/sec problem was caused because I was using an SE active terminator. Once I replaced this with an LVD terminator the speeds went up to Ultra160. I think I need new cables to hit Ultra320 but this is now double the tape bandwidth (max 80MB/sec) so it's fine with me for the time being. Made no difference with the error messages though.
| "Cannot allocate memory" when reading from SCSI tape |
I have solved the problem by buying a SAS2008 card. It still complains a little in the log, but it never blocks the disk I/O. Also I have tested it supports 4 TB SATA drives, whereas the LSI-SAS1068E only supports 2 TB.
As I will be returning the LSI-SAS1068E to the seller, I will not be able to try out other suggestions. Therefore I close the question here.
|
I/O to my software RAID6 often freezes for around 30 seconds after which everything is back to normal.
After the freeze is over this is put into syslog:
Mar 14 18:43:57 server kernel: [35649.816060] sd 5:0:23:0: [sdy] CDB: Read(10): 28 00 6c 52 68 58 00 04 00 00
Mar 14 18:43:58 server kernel: [35651.149020] mptbase: ioc0: LogInfo(0x31140000): Originator={PL}, Code={IO Executed}, SubCode(0x0000) cb_idx mptscsih_io_done
Mar 14 18:43:58 server kernel: [35651.151962] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff8807b02dfe80)
Mar 14 18:43:58 server kernel: [35651.151967] mptscsih: ioc0: attempting task abort! (sc=ffff88002a7f30c0)
Mar 14 18:43:58 server kernel: [35651.151972] sd 5:0:23:0: [sdy] CDB: Read(10): 28 00 6c 52 6c 58 00 04 00 00
Mar 14 18:43:58 server kernel: [35651.151981] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff88002a7f30c0)
Mar 14 18:43:58 server kernel: [35651.151984] mptscsih: ioc0: attempting task abort! (sc=ffff8804120e5ec0)
Mar 14 18:43:58 server kernel: [35651.151988] sd 5:0:23:0: [sdy] CDB: Read(10): 28 00 6c 52 70 58 00 04 00 00
Mar 14 18:43:58 server kernel: [35651.151996] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff8804120e5ec0)
Mar 14 18:43:58 server kernel: [35651.151999] mptscsih: ioc0: attempting task abort! (sc=ffff880154afb280)
Mar 14 18:43:58 server kernel: [35651.152020] sd 5:0:23:0: [sdy] CDB: Read(10): 28 00 6c 52 74 58 00 04 00 00
Mar 14 18:43:58 server kernel: [35651.152029] mptscsih: ioc0: task abort: SUCCESS (rv=2002) (sc=ffff880154afb280)I have googled the error and someone suggested trying using 1.5Gbps instead of 3.0Gbps. Using lsiutil I changed the link speed:
# lsiutil -p 1 -i

Firmware Settings
-----------------
SAS WWID: 500605b002c0f680
Multi-pathing: Disabled
SATA Native Command Queuing: Enabled
SATA Write Caching: Enabled
SATA Maximum Queue Depth: 32
Device Missing Report Delay: 0 seconds
Device Missing I/O Delay: 0 seconds
Phy Parameters for Phynum: 0 1 2 3 4 5 6 7
Link Enabled: Yes Yes Yes Yes Yes Yes Yes Yes
Link Min Rate: 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
Link Max Rate: 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
SSP Initiator Enabled: Yes Yes Yes Yes Yes Yes Yes Yes
SSP Target Enabled: No No No No No No No No
Port Configuration: Auto Auto Auto Auto Auto Auto Auto Auto
Target IDs per enclosure: 1
Persistent mapping: Enabled
Physical mapping type: None
Target ID 0 reserved for boot: No
Starting slot (direct attach): 0
Target IDs (physical mapping): 8
Interrupt Coalescing: Enabled, timeout is 16 us, depth is 4That did not help.
I tried changing 'Device Missing I/O Delay' to 32. That did not help either.
I tried changing /sys/class/scsi_device/*/device/timeout from 30 to 100 and then to 3. All failed.
$ uname -a
Linux server 3.2.0-0.bpo.1-amd64 #1 SMP Sat Feb 11 08:41:32 UTC 2012 x86_64 GNU/Linux
$ grep LSISAS1068E /var/log/messages
Mar 13 15:47:44 server kernel: [ 21.082363] scsi5 : ioc0: LSISAS1068E B3, FwRev=01210000h, Ports=1, MaxQ=483, IRQ=45
$ modinfo mptscsih
filename: /lib/modules/3.2.0-0.bpo.1-amd64/kernel/drivers/message/fusion/mptscsih.ko
version: 3.04.20
license: GPL
description: Fusion MPT SCSI Host driver
author: LSI Corporation
srcversion: 85D42A00FEBA3C95555E3AF
depends: scsi_mod,mptbase
intree: Y
vermagic: 3.2.0-0.bpo.1-amd64 SMP mod_unload modversions
$ cat /sys/block/sdae/device/model
ST3000DM001-9YN1
$ cat /sys/block/sdae/device/rev
CC4CThe problem happens extremely rarely if there are only read or write operations: I can read or write 1 TB with no problem. The problem seems to arise when there are both read and write operations. On a raid6 that happens if you write a file smaller than stripe size and you do not have the stripe cached already (in which case the stripe must be read to compute new checksum).
The system is not a virtual machine.
What is causing the problem? How do I get rid of the 30 seconds of freezing?
Edit: additional testing
I have found a nice test set that seems to provoke the problem. It contains files that are smaller than the stripe size thus forcing recomputation of parity thus forcing a lot of reads combined with the writes.
I must admit that I did not think that the queue scheduler would have any effect on this problem. I was wrong. It is clear that deadline is much worse than the others. None of them solve the problem, though.
# cat /sys/block/sdaa/queue/scheduler
noop deadline [cfq]

Changing scheduler to noop causes the problem to arise after 100-120 secs.
parallel echo noop \> {} ::: /sys/block/sd*/queue/scheduler

Changing scheduler to deadline causes the problem to arise after 20-30 secs.
parallel echo deadline \> {} ::: /sys/block/sd*/queue/scheduler

Changing scheduler to cfq causes the problem to arise after 120-300 secs.
parallel echo cfq \> {} ::: /sys/block/sd*/queue/scheduler

Edit2
Since the scheduler has an effect I am thinking if the problem is caused by too many requests in a timeframe. Can I somehow throttle the number of requests sent per second?
| mptscsih: ioc0: task abort: SUCCESS (rv=2002) causes 30 seconds freezing |
cat /proc/scsi/scsi |
How can I list scsi device ids under Linux?
| Find scsi device ids under Linux? |
Here's an approach that should work:

Get the list of sdX devices to exclude:
exclude=$(cut -d/ -f3 exclude.txt)

Iterate over the /sys/block/sdX directories:
for sysfile in /sys/block/sd? ; do

Extract the sdX name from that path, and build the delete file name:

dev=$(basename $sysfile)
del=$sysfile/device/delete

Check if that sdX is in the excluded list:

if [[ $exclude == *$dev* ]] ; then
    echo "Device $dev excluded"

Check if you have appropriate write permissions on the delete file:

elif [ ! -w $del ] ; then
    echo "$del does not exist or is not writable"

Do the delete (not really):

else
    echo "echo 1 > $del"
fi

You're done!
done |
I have a list of scsi disks that I need to remove. The list is considered random at best and changes from time to time. I want to remove everything except a predefined list that I have created. Let's assume for now that I only want to keep:
/dev/sda
/dev/sdbThe command I need to execute is:
"echo 1 > /sys/block/sdX/device/delete"Where X is the device to be removed.
I'm not good at bash scripting so I don't really know where to start.
To recap so I don't get DV'd for not being clear.
I need to "echo 1 > /sys/block/sdX/device/delete" for every sdX device on the system except for a predetermined list.
EDIT: After the answer below, this is what I've decided to use. "LocalDisks.txt" should contain lines like "/dev/sda"
#!/bin/bash
exclude=$(cut -d/ -f3 LocalDisks.txt)

for sysfile in /sys/block/sd* ; do
    dev=$(basename $sysfile)
    del=$sysfile/device/delete

    if [[ $exclude == *$dev* ]] ; then
        echo "Device $dev excluded"
    elif [ ! -w $del ] ; then
        echo "$del does not exist or is not writable"
    else
        echo 1 > $del
    fi
done
You can get the make and model of each physical block device with lsblk:
$ lsblk -do +VENDOR,MODEL,SERIAL
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT VENDOR MODEL SERIAL
sda 8:0 0 1.8T 0 disk ATA WDC WD20EARS-00M WD-WCAZA571XXXX
sdf 8:80 0 465.1G 0 disk WD My Passport 070A WD-WXF1A30YXXXX
sr0 11:0 1 1024M 0 rom HL-DT-ST DVDRAM GH22LS40 6FA3D3AFXXXX
sr1 11:1 1 668M 0 rom WD Virtual CD 070A 57584631413330593830XXXX
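If you'd rather match a device to its physical drive by serial number, the persistent symlinks udev maintains work too; each name under /dev/disk/by-id embeds vendor, model and serial and points at the sdX node:

ls -l /dev/disk/by-id/
|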
From the df -k command, I see the sda, sdb, and sdc disks. They have some partitions (for example, sda has sda1 and sda3). I want to detach sdb and sdc temporarily for an OS upgrade. How exactly can I tell which disk is which? (Actually I know sdc is the disk I recently attached, but how can I tell sda, sdb, and sdc apart from the SCSI connection? I remember SCSI connectors didn't have any order.)
ckim@stph45:/boot/grub] cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: Samsung SSD 850 Rev: EXM0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST2000DM001-1CH1 Rev: CC27
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: HL-DT-ST Model: BD-RE BH16NS40 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD100EFAX-68 Rev: 83.H
Type: Direct-Access ANSI SCSI revision: 05
ckim@stph45:/boot/grub] df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 226026064 103433752 111103800 49% /
tmpfs 32958068 0 32958068 0% /dev/shm
/dev/sda1 201454560 4911408 186303152 3% /tools
/dev/sdc1 4806455048 387782752 4174511784 9% /home1
/dev/sdc2 4806466304 18391096 4543914032 1% /home2
/dev/sdb1 1922727280 853724060 971327620 47% /home | how to tell which disk is sda and which is sdb disk? |
I think you can get what you want by cross referencing the output from lshw -c disk and this command, udevadm info -q all -n <device>.
For example
My /dev/sda device shows the following output for lshw:
$ sudo lshw -c disk
*-disk
description: ATA Disk
product: ST9500420AS
vendor: Seagate
physical id: 0
bus info: scsi@0:0.0.0
logical name: /dev/sda
version: 0003
serial: 5XA1A2CZ
size: 465GiB (500GB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 signature=ebc57757

If I interrogate the same device using udevadm I can find out what its DEVPATH is:
$ sudo udevadm info -q all -n /dev/sda | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sdaThis string has all the info you're looking for regarding this device. The PCI address, "0000:00:1f.2", along with the SCSI address, "0:0:0:0". The SCSI address is the data in the 6th position if you break this data up on the forward slashes ("/").
|
I have a PCI-attached SATA controller connected to a (variable) number of disks on a machine with a Linux 2.6.39 kernel. I am trying to find the physical location of the disk, knowing the PCI address of the controller.
In this case, controller is at address 0000:01:00.0, and there are two disks, with SCSI addresses 6:0.0.0.0 and 8:0.0.0 (though these last two aren't necessarily fixed, this is just what they are right now).
lshw -c storage shows the controller and the SCSI devices (system disk and controller trimmed):
*-storage
description: SATA controller
product: Marvell Technology Group Ltd.
vendor: Marvell Technology Group Ltd.
physical id: 0
bus info: pci@0000:01:00.0
version: 10
width: 32 bits
clock: 33MHz
capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
configuration: driver=ahci latency=0
resources: irq:51 ioport:e050(size=8) ioport:e040(size=4) ioport:e030(size=8) ioport:e020(size=4) ioport:e000(size=32) memory:f7b10000-f7b107ff memory:f7b00000-f7b0ffff
*-scsi:1
physical id: 2
logical name: scsi6
capabilities: emulated
*-scsi:2
physical id: 3
logical name: scsi8
capabilities: emulatedlshw -c disk shows the disks:
*-disk
description: ATA Disk
product: TOSHIBA THNSNF25
vendor: Toshiba
physical id: 0.0.0
bus info: scsi@6:0.0.0
logical name: /dev/sdb
version: FSXA
serial: 824S105DT15Y
size: 238GiB (256GB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: ansiversion=5 guid=79a679b1-3c04-4306-a498-9a959e2df371 sectorsize=4096
*-disk
description: ATA Disk
product: TOSHIBA THNSNF25
vendor: Toshiba
physical id: 0.0.0
bus info: scsi@8:0.0.0
logical name: /dev/sdc
version: FSXA
serial: 824S1055T15Y
size: 238GiB (256GB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: ansiversion=5 guid=79a679b1-3c04-4306-a498-9a959e2df371 sectorsize=4096However, there does not seem to be a way to go from the PCI address to the SCSI address. I have also looked under the sysfs entries for the PCI and SCSI devices and no been able to find an entry which makes the connection. When the disks are plugged into different physical ports on the controller, the SCSI address doesn't necessarily change, so this cannot be used with an offset to correctly determine the location of the disk.
Listing disks by ID also doesn't work - ls -lah /dev/disk/by-path shows that the entry for pci-0000:01:00.0-scsi-0:0:0:0 points to /dev/sdc (or in general, the last disk connected), and there are no other paths that start with pci-0000:01:00.0 that aren't just links to partitions of that drive.
Are there any other ways to map the controller address into something that can be used to locate the disks?
| Match PCI address of SATA controller and SCSI address of attached disks |
You can change the timeout by writing to /sys/module/usb_storage/parameters/delay_use.
For older usb disks, a settle delay of 5 seconds or even more may be needed (and 5 was the default until it was reduced to 1 second in 2010), presumably because the controller is starved of power while the disk motors are initializing. Or possibly because the internal SCSI firmware takes time to boot up before it's responsive (can you tell I'm just speculating here?).
For modern solid-state storage, it's probably not needed at all, and many people set it to 0. Unfortunately, it's a global parameter that applies to all devices, so if you have any slow devices at all, you have to endure the delay for every mass-storage USB device you use. It would be nice if it could be set per-device by udev, but that's not the case.
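For concreteness (run as root; the value is in seconds and applies to all USB mass-storage devices):

# Show the current settle delay
cat /sys/module/usb_storage/parameters/delay_use
# Remove the delay entirely
echo 0 > /sys/module/usb_storage/parameters/delay_use

To make it persist across reboots, it can also be set as a module option, e.g. options usb-storage delay_use=0 in a file under /etc/modprobe.d/.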
|
I'm writing an initramfs-script and want to detect usb-sticks as fast as possible.
When I insert a USB 2.0 stick, the detection of idVendor, idProduct and the USB class happens within 100 ms. But the SCSI subsystem does not "attach" until about 1 s has passed, and it takes another 500 ms before the partition is fully recognized.
I assume that the driver needs to read the partition table in order to detect partitions. Why does it take so long? I don't expect the URB send/receive time to be that long, or the access time of the flash to take so much time.
I've tried 5 sticks from different vendors and the result is about the same.
[ 5731.097540] usb 2-1.2: new high-speed USB device number 7 using ehci-pci
[ 5731.195360] usb 2-1.2: New USB device found, idVendor=0951, idProduct=1643
[ 5731.195368] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 5731.195372] usb 2-1.2: Product: DataTraveler G3
[ 5731.195376] usb 2-1.2: Manufacturer: Kingston
[ 5731.195379] usb 2-1.2: SerialNumber: 001CC0EC32BCBBB04712022C
[ 5731.196942] usb-storage 2-1.2:1.0: USB Mass Storage device detected
[ 5731.197193] scsi host9: usb-storage 2-1.2:1.0
[ 5732.268389] scsi 9:0:0:0: Direct-Access Kingston DataTraveler G3 PMAP PQ: 0 ANSI: 0 CCS
[ 5732.268995] sd 9:0:0:0: Attached scsi generic sg2 type 0
[ 5732.883939] sd 9:0:0:0: [sdb] 7595520 512-byte logical blocks: (3.88 GB/3.62 GiB)
[ 5732.884565] sd 9:0:0:0: [sdb] Write Protect is off
[ 5732.884568] sd 9:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 5732.885178] sd 9:0:0:0: [sdb] No Caching mode page found
[ 5732.885181] sd 9:0:0:0: [sdb] Assuming drive cache: write through
[ 5732.903834] sdb: sdb1
[ 5732.906812] sd 9:0:0:0: [sdb] Attached SCSI removable disk

Edit
So I've found the delay_use module parameter that by default is set to 1 second, which explains the delay I'm seeing. But I'm wondering if someone can provide context as to why that parameter is needed? A comment suggested that for older usb sticks, delay_use might need to be set to as much as 5 seconds. What is it inside the usb stick that takes so much time; firmware initialization; reads from the flash? I find it hard to belive that we need delays as long as 1 second or more when the latency for accessing flash is in the order of tens of microseconds.
I realize that this might be slightly off-topic for this channel, if so, I'll go to electronics.stackexchange.com
| Why does it take so long to detect an usb stick? |
All block devices have a removable attribute, among other block device attributes. These attributes can be read from userland in sysfs at /sys/block/DEVICE/ATTRIBUTE, e.g. /sys/block/sdb/removable.
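A quick shell check across all block devices (a sketch):

# 1 = removable media, 0 = fixed
for d in /sys/block/* ; do
    printf '%s: %s\n' "${d##*/}" "$(cat "$d"/removable)"
done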
You can query this attribute from a udev rule, with ATTR{removable}=="0" or ATTR{removable}=="1".
Note that removable (the device keeps existing but may have no media) isn't the same thing as hotpluggable (the device can come and go). For example, CD drives are removable but often not hotpluggable. USB flash drives are both, but hard disks in external enclosures are typically hotpluggable but not removable.
If you want to find out the nitty-gritty of when a device is considered removable, you'll have to dig into the kernel source. Search for removable — there aren't too many spurious hits. For SCSI devices, the removable bit is read from the device in scsi_add_lun with a SCSI INQUIRY command.
|
In DMESG I see:
[sdb] Attached SCSI removable disk

How does Linux decide what is removable and not removable?
Is there a way I can look up if a device is "removable" or not other than the log, for example somewhere in /sys or /proc? | How to tell if a SCSI device is removable?
This answer checks the list of all attached block devices and iterates over them with udevadm to check their respective ID_BUS.
You can see all attached block devices in /sys/block. Here is the bash script from the linked answer that should let you know if it is a USB storage device:
for device in /sys/block/*
do
if udevadm info --query=property --path=$device | grep -q ^ID_BUS=usb
then
echo $device
fi
done |
I would like to list only the USB storage devices connected to my computer. Since these are SCSI disks, I used the command lsscsi, which lists the USB drives as well as my computer's hard drive and CD drive. Is there a way to ignore the memory storage that's not a USB? I have also tried lsusb, but this includes my keyboard, mouse, and other non-storage devices.
| What is a Linux command that lists only USB storage devices? |
After cycling around /sys for a while, I found this solution:
# echo /sys/class/enclosure/*/*/device/block/sdaa
/sys/class/enclosure/2:0:35:0/Slot 15/device/block/sdaa
# echo 1 > '/sys/class/enclosure/2:0:35:0/Slot 15/locate'

Or:
# echo 1 > /sys/class/enclosure/*/*/device/block/sdaa/../../enclosure*/locate

To blink all detected devices:
parallel echo 1 \> ::: /sys/class/enclosure/*/*/device/block/sd*/../../enclosure*/locate

This is useful if you have a drive that is so broken that it is not even detected by Linux (e.g. it does not spin up).
Edit:
I have made a small tool (called blink) to blink slots. https://gitlab.com/ole.tange/tangetools/tree/master/blink
|
I want to blink the failing device in my 24-disk SAS enclosure.
I have found sg_ses --index 7 --set=locate /dev/sg24 which is supposed to identify slot 7.
But how do I figure out which slot/index /dev/sdh is?
This is not obvious as Linux does not name /dev/sdX after the slot, but after the sequence it was detected. Think what happens if slot 1 is empty at boot, but is filled later.
Edit:
The controller is a SAS2008.
| Locate disk in SAS enclosure |
They are the first four bytes in the buffer returned from a mode sense command (see drivers/scsi/sd.c, sd_mode_sense()). The meaning can be gleaned by looking at drivers/scsi/scsi_lib.c, scsi_mode_sense(): this routine returns a structure called "data" which, according to a comment, abstracts the mode header data; the first two bytes in the buffer (00 and 3a) are the high order/low order bytes of the length of "data" minus 2, the third byte (00) is the medium_type, and the fourth byte is device-specific:
data->length = buffer[0]*256 + buffer[1] + 2;
data->medium_type = buffer[2];
data->device_specific = buffer[3];

So data->length is 0*256 + 0x3a + 2 = 60, the medium_type is 0 and who knows what the fourth byte means... (BTW, the printk that prints that Mode Sense: line is labeled KERN_DEBUG so it's really not meant for regular consumption).
You can use sg_modes from the sg3_utils package to examine things like this without having to go to extreme lengths to translate them:
# sg_modes -a /dev/sg0
ATA SAMSUNG MZ7LN512 4L0Q peripheral_type: disk [0x0]
Mode parameter header from MODE SENSE(10):
Mode data length=60, medium type=0x00, WP=0, DpoFua=0, longlba=0
Block descriptor length=8
> Direct access device block descriptors:
Density code=0x0
00 00 00 00 00 00 00 02 00>> Read-Write error recovery, page_control: current
00 01 0a 80 00 00 00 00 00 00 00 00 00
>> Caching, page_control: current
00 08 12 04 00 00 00 00 00 00 00 00 00 00 00 00 00
10 00 00 00 00
>> Control, page_control: current
00 0a 0a 02 00 00 00 00 00 ff ff 00 1eThe other line you mention:
Write cache: enabled, read cache: enabled, doesn't support DPO or FUAis produced by the routine sd_read_cache_type in drivers/scsi/sd.c. It uses a couple of different sources for that information: the write and read cache information is obtained by looking at a specific byte of the modepage==8 buffer; the DPO/FUA information is obtained from the abovementioned "data" structure (although it does not necessarily contain the same data: the modepages used in the two calls may be different).
AFAICT, the information on this line and the information on the debugging line above are not directly related.
|
Run dmesg and grep on [sda] Mode Sense: to return a row such as this:
[sda] Mode Sense: 00 3a 00 00

What do the 4 bytes of data (00 3a 00 00) represent?
It's likely the answer is contained in a subsequent row of output such as:
[sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA... but I would like to know how to map the data to the descriptions
| dmesg Output - Mode Sense - Bytes Explanation |
With FreeBSD 9+ the camcontrol utility can be used to control if either a SATA or a SCSI drive is disconnected, or not, in such circumstances:
camcontrol negotiate /dev/<dev> -D disable |
While accessing a drive with high error rates (as, for example, here for openSUSE) in FreeBSD, the system eventually disconnects the drive and it disappears from /dev. This makes it impossible to run ddrescue or testdisk in any reasonable fashion.
| How to prevent FreeBSD from disconnecting a drive device? |
The very high SCSI device numbers (scsi 0:0:90903:0) show that the problem in this case is that the hardware keeps dropping and re-initializing the drive.
The MPT SAS hardware does most of the re-initializing itself here, so we can't entirely control that from the Kernel. Separately, you mention having 21 drives, so they are probably behind one or more SAS expanders.
The question then becomes, it is possible, in software, to disable a port on a SAS expander?
If the expander actually supports it (I think it was optional in the standard), then yes!
The package in question is smp_utils (sg3_utils will also be helpful).
What you want is:

Figure out the expander device per the manpage above (probably ls /dev/bsg/expand*)
Confirm the faulty disks are attached to the phys from the dmesg: smp_discover /dev/bsg/expander-....
Disable the PHYs, in the form of smp_phy_control --phy=NN --op=di /dev/bsg/expander-.... Expanded for your case:

smp_phy_control --phy=13 --op=di /dev/bsg/expander-0:0
smp_phy_control --phy=15 --op=di /dev/bsg/expander-0:0

The phy numbers were already in your output: 13, 15, but you might want to confirm them using smp_discover.
|
Similar to this question, I am interested in completely ignoring a drive, but in my case it is one drive which is exposed to the system as a SCSI drive. I have two of the 21 drives in the server failing over and over:
[2524080.689492] scsi 0:0:90900:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6
[2524080.689502] scsi 0:0:90900:0: SATA: handle(0x000d), sas_addr(0x5003048001f298cf), phy(15), device_name(0x0000000000000000)
[2524080.689506] scsi 0:0:90900:0: SATA: enclosure_logical_id(0x5003048001f298ff), slot(3)
[2524080.689594] scsi 0:0:90900:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[2524080.690671] sd 0:0:90900:0: tag#1 CDB: Test Unit Ready 00 00 00 00 00 00
[2524080.690680] mpt2sas_cm0: sas_address(0x5003048001f298cf), phy(15)
[2524080.690683] mpt2sas_cm0: enclosure_logical_id(0x5003048001f298ff),slot(3)
[2524080.690686] mpt2sas_cm0: handle(0x000d), ioc_status(success)(0x0000), smid(17)
[2524080.690695] mpt2sas_cm0: request_len(0), underflow(0), resid(0)
[2524080.690698] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)
[2524080.690701] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)
[2524080.690704] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)
[2524080.690728] sd 0:0:90900:0: Attached scsi generic sg0 type 0
[2524080.691269] sd 0:0:90900:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[2524080.691285] sd 0:0:90900:0: [sdb] 4096-byte physical blocks
[2524111.163712] sd 0:0:90900:0: attempting task abort! scmd(ffff880869121800)
[2524111.163722] sd 0:0:90900:0: tag#2 CDB: Mode Sense(6) 1a 00 3f 00 04 00
[2524111.163729] scsi target0:0:90900: handle(0x000d), sas_address(0x5003048001f298cf), phy(15)
[2524111.163733] scsi target0:0:90900: enclosure_logical_id(0x5003048001f298ff), slot(3)
[2524111.442310] sd 0:0:90900:0: device_block, handle(0x000d)
[2524113.442331] sd 0:0:90900:0: device_unblock and setting to running, handle(0x000d)
[2524114.939280] sd 0:0:90900:0: task abort: SUCCESS scmd(ffff880869121800)
[2524114.939358] sd 0:0:90900:0: [sdb] Write Protect is off
[2524114.939366] sd 0:0:90900:0: [sdb] Mode Sense: 00 00 00 00
[2524114.939444] sd 0:0:90900:0: [sdb] Asking for cache data failed
[2524114.939501] sd 0:0:90900:0: [sdb] Assuming drive cache: write through
[2524114.940380] sd 0:0:90900:0: [sdb] Read Capacity(16) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[2524114.940387] sd 0:0:90900:0: [sdb] Sense not available.
[2524114.940566] sd 0:0:90900:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[2524114.940570] sd 0:0:90900:0: [sdb] Sense not available.
[2524114.940778] sd 0:0:90900:0: [sdb] Attached SCSI disk
[2524114.984489] mpt2sas_cm0: removing handle(0x000d), sas_addr(0x5003048001f298cf)
[2524114.984494] mpt2sas_cm0: removing : enclosure logical id(0x5003048001f298ff), slot(3)
[2524134.939383] mpt2sas_cm0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
[2524134.940116] mpt2sas_cm0: removing handle(0x000e), sas_addr(0x5003048001f298d0)
[2524134.940122] mpt2sas_cm0: removing enclosure logical id(0x5003048001f298ff), slot(4)
[2524153.940404] scsi 0:0:90902:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6
[2524153.940418] scsi 0:0:90902:0: SATA: handle(0x000d), sas_addr(0x5003048001f298cf), phy(15), device_name(0x0000000000000000)
[2524153.940423] scsi 0:0:90902:0: SATA: enclosure_logical_id(0x5003048001f298ff), slot(3)
[2524153.940699] scsi 0:0:90902:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[2524153.942194] sd 0:0:90902:0: tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
[2524153.942205] mpt2sas_cm0: sas_address(0x5003048001f298cf), phy(15)
[2524153.942208] mpt2sas_cm0: enclosure_logical_id(0x5003048001f298ff),slot(3)
[2524153.942212] mpt2sas_cm0: handle(0x000d), ioc_status(success)(0x0000), smid(12)
[2524153.942214] mpt2sas_cm0: request_len(0), underflow(0), resid(0)
[2524153.942217] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)
[2524153.942220] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)
[2524153.942223] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)
[2524153.942361] sd 0:0:90902:0: Attached scsi generic sg0 type 0
[2524153.942833] sd 0:0:90902:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[2524153.942840] sd 0:0:90902:0: [sdb] 4096-byte physical blocks
[2524154.190159] scsi 0:0:90903:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6
[2524154.190174] scsi 0:0:90903:0: SATA: handle(0x0022), sas_addr(0x5003048001ec55ed), phy(13), device_name(0x0000000000000000)
[2524154.190179] scsi 0:0:90903:0: SATA: enclosure_logical_id(0x5003048001ec55ff), slot(1)
[2524154.190368] scsi 0:0:90903:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[2524154.191634] sd 0:0:90903:0: tag#1 CDB: Test Unit Ready 00 00 00 00 00 00
[2524154.191639] mpt2sas_cm0: sas_address(0x5003048001ec55ed), phy(13)
[2524154.191642] mpt2sas_cm0: enclosure_logical_id(0x5003048001ec55ff),slot(1)
[2524154.191645] mpt2sas_cm0: handle(0x0022), ioc_status(success)(0x0000), smid(12)
[2524154.191648] mpt2sas_cm0: request_len(0), underflow(0), resid(0)
[2524154.191651] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)
[2524154.191654] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)
[2524154.191657] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)
[2524154.191800] sd 0:0:90903:0: Attached scsi generic sg3 type 0
[2524154.192211] sd 0:0:90903:0: [sdd] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[2524154.192219] sd 0:0:90903:0: [sdd] 4096-byte physical blocksThis is in our case an old server we have decided not to upgrade/fix. And I am now thinking about even not removing old drives out, just leaving them in, making array smaller, and disabling them. The array is not full, and we are using it only as an additional backup location for some other servers.
So, me being lazy and not wanting to go to a server room, is there a way to just disable those drives and move on? :-)
More information about the system:
lspci -nn -v -s 05:00.0:
05:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
Subsystem: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:3020]
Flags: bus master, fast devsel, latency 0, IRQ 29
I/O ports at 7000 [size=256]
Memory at df640000 (64-bit, non-prefetchable) [size=64K]
Memory at df600000 (64-bit, non-prefetchable) [size=256K]
Expansion ROM at df500000 [disabled] [size=1M]
Capabilities: [50] Power Management version 3
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [d0] Vital Product Data
Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable+ Count=16 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [1e0] #19
Capabilities: [1c0] Power Budgeting <?>
Capabilities: [190] #16
Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
Kernel driver in use: mpt3sas
Kernel modules: mpt3sas

lsscsi -v:
[0:0:3:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdc
dir: /sys/bus/scsi/devices/0:0:3:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:2/end_device-0:0:2/target0:0:3/0:0:3:0]
[0:0:6:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdf
dir: /sys/bus/scsi/devices/0:0:6:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:5/end_device-0:0:5/target0:0:6/0:0:6:0]
[0:0:7:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdg
dir: /sys/bus/scsi/devices/0:0:7:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:6/end_device-0:0:6/target0:0:7/0:0:7:0]
[0:0:8:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdh
dir: /sys/bus/scsi/devices/0:0:8:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:7/end_device-0:0:7/target0:0:8/0:0:8:0]
[0:0:11:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdi
dir: /sys/bus/scsi/devices/0:0:11:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:10/end_device-0:0:10/target0:0:11/0:0:11:0]
[0:0:12:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdj
dir: /sys/bus/scsi/devices/0:0:12:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:11/end_device-0:0:11/target0:0:12/0:0:12:0]
[0:0:13:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdk
dir: /sys/bus/scsi/devices/0:0:13:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:12/end_device-0:0:12/target0:0:13/0:0:13:0]
[0:0:15:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdl
dir: /sys/bus/scsi/devices/0:0:15:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:14/end_device-0:0:14/target0:0:15/0:0:15:0]
[0:0:16:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdm
dir: /sys/bus/scsi/devices/0:0:16:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:15/end_device-0:0:15/target0:0:16/0:0:16:0]
[0:0:18:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdn
dir: /sys/bus/scsi/devices/0:0:18:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:17/end_device-0:0:17/target0:0:18/0:0:18:0]
[0:0:20:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdo
dir: /sys/bus/scsi/devices/0:0:20:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:19/end_device-0:0:19/target0:0:20/0:0:20:0]
[0:0:21:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdp
dir: /sys/bus/scsi/devices/0:0:21:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:20/end_device-0:0:20/target0:0:21/0:0:21:0]
[0:0:22:0] enclosu LSI CORP SAS2X36 0717 -
dir: /sys/bus/scsi/devices/0:0:22:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:21/end_device-0:0:21/target0:0:22/0:0:22:0]
[0:0:23:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdq
dir: /sys/bus/scsi/devices/0:0:23:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:1/end_device-0:1:1/target0:0:23/0:0:23:0]
[0:0:24:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdr
dir: /sys/bus/scsi/devices/0:0:24:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:2/end_device-0:1:2/target0:0:24/0:0:24:0]
[0:0:25:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sds
dir: /sys/bus/scsi/devices/0:0:25:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:3/end_device-0:1:3/target0:0:25/0:0:25:0]
[0:0:26:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdt
dir: /sys/bus/scsi/devices/0:0:26:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:4/end_device-0:1:4/target0:0:26/0:0:26:0]
[0:0:28:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdu
dir: /sys/bus/scsi/devices/0:0:28:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:6/end_device-0:1:6/target0:0:28/0:0:28:0]
[0:0:30:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdw
dir: /sys/bus/scsi/devices/0:0:30:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:8/end_device-0:1:8/target0:0:30/0:0:30:0]
[0:0:31:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdx
dir: /sys/bus/scsi/devices/0:0:31:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:9/end_device-0:1:9/target0:0:31/0:0:31:0]
[0:0:34:0] enclosu LSI CORP SAS2X28 0717 -
dir: /sys/bus/scsi/devices/0:0:34:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:12/end_device-0:1:12/target0:0:34/0:0:34:0]
[0:0:25856:0]disk ATA ST3000DM001-1CH1 CC43 /dev/sda
dir: /sys/bus/scsi/devices/0:0:25856:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:14357/end_device-0:0:14357/target0:0:25856/0:0:25856:0]
[0:0:98760:0]disk ATA ST3000DM001-1CH1 CC43 -
dir: /sys/bus/scsi/devices/0:0:98760:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:60931/end_device-0:0:60931/target0:0:98760/0:0:98760:0]
[2:0:0:0] disk ATA PLEXTOR PX-128M5 1.00 /dev/sdy
  dir: /sys/bus/scsi/devices/2:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.2/ata2/host2/target2:0:0/2:0:0:0]

lsscsi -Hv:
[0] mpt2sas
dir: /sys/class/scsi_host//host0
device dir: /sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0
[1] ahci
dir: /sys/class/scsi_host//host1
device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata1/host1
[2] ahci
dir: /sys/class/scsi_host//host2
device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata2/host2
[3] ahci
dir: /sys/class/scsi_host//host3
device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata3/host3
[4] ahci
dir: /sys/class/scsi_host//host4
device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata4/host4
[5] ahci
dir: /sys/class/scsi_host//host5
device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata5/host5
[6] ahci
dir: /sys/class/scsi_host//host6
  device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata6/host6

smp_discover /dev/bsg/expander-0:0:
phy 0:S:attached:[500605b00507dd20:03 i(SSP+STP+SMP)] 6 Gbps
phy 1:S:attached:[500605b00507dd20:02 i(SSP+STP+SMP)] 6 Gbps
phy 2:S:attached:[500605b00507dd20:01 i(SSP+STP+SMP)] 6 Gbps
phy 3:S:attached:[500605b00507dd20:00 i(SSP+STP+SMP)] 6 Gbps
phy 12:U:attached:[5003048001f298cc:00 t(SATA)] 6 Gbps
phy 13:U:attached:[5003048001f298cd:00 t(SATA)] 6 Gbps
phy 14:U:attached:[5003048001f298ce:00 t(SATA)] 6 Gbps
phy 17:U:attached:[5003048001f298d1:00 t(SATA)] 6 Gbps
phy 19:U:attached:[5003048001f298d3:00 t(SATA)] 6 Gbps
phy 20:U:attached:[5003048001f298d4:00 t(SATA)] 6 Gbps
phy 21:U:attached:[5003048001f298d5:00 t(SATA)] 6 Gbps
phy 22:U:attached:[5003048001f298d6:00 t(SATA)] 6 Gbps
phy 23:U:attached:[5003048001f298d7:00 t(SATA)] 6 Gbps
phy 25:U:attached:[5003048001f298d9:00 t(SATA)] 6 Gbps
phy 26:U:attached:[5003048001f298da:00 t(SATA)] 6 Gbps
phy 27:U:attached:[5003048001f298db:00 t(SATA)] 6 Gbps
phy 28:U:attached:[5003048001f298dc:00 t(SATA)] 6 Gbps
phy 29:U:attached:[5003048001f298dd:00 t(SATA)] 6 Gbps
phy 31:U:attached:[5003048001f298df:00 t(SATA)] 6 Gbps
phy 32:U:attached:[5003048001f298e0:00 t(SATA)] 6 Gbps
phy 33:U:attached:[5003048001f298e1:00 t(SATA)] 6 Gbps
phy 34:U:attached:[5003048001f298e2:00 t(SATA)] 6 Gbps
phy 35:U:attached:[5003048001f298e3:00 t(SATA)] 6 Gbps
phy 36:D:attached:[5003048001f298fd:00 V i(SSP+SMP) t(SSP)] 6 Gbps | How to get Linux to completely ignore a SCSI drive? |
It is possible that the buffering mode has been set to “unbuffered.” This is a special feature of LTO tape drives, forcing them to return from a WRITE command only after the data has been written to the tape. This stops any streaming from happening and causes the observed effects.
Unfortunately FreeBSD does not provide Linux's mt drvbuffer 1 command, but it is possible to manually send an appropriately crafted MODE SELECT command to the drive to turn buffering back on:
camcontrol cmd /dev/nsa0 -c '15 10 00 00 04 00' -o 4 '0 0 10 0'

As far as I can tell from the SCSI-2 definitions, 0x15 is the MODE SELECT(6) opcode and the four data-out bytes form a mode parameter header whose device-specific byte (0x10) selects buffered mode 1. If you have more than one tape drive, replace /dev/nsa0 with an appropriate device file.
|
Regardless of what data I write to my LTO-4 tape drive /dev/nsa0, writing is very slow (less than 1 MB/s) and the tape is constantly being wound back and forth in a shoe-shine pattern. No speed problem occurs when reading or erasing (with mt erase) tapes.
It appears that this problem has occurred since I tried to enable SMART monitoring on the tape drive using smartctl.
| My LTO tape drive is slow and “shoe-shines” on FreeBSD |
An easy way to get the correspondence is to look at the device/block subdirectory in the /sys hierarchy:
# ls -1d /sys/class/scsi_device/*/device/block/*
/sys/class/scsi_device/1:0:0:0/device/block/sr0
/sys/class/scsi_device/2:0:0:0/device/block/sda
/sys/class/scsi_device/2:0:1:0/device/block/sdb
/sys/class/scsi_device/2:0:2:0/device/block/sdc
/sys/class/scsi_device/2:0:3:0/device/block/sdd
/sys/class/scsi_device/2:0:4:0/device/block/sde
/sys/class/scsi_device/2:0:5:0/device/block/sdf

The directory names in there correspond to the block device names in /dev.
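For instance, a small shell loop (just a sketch; it assumes the default sysfs layout shown above) prints the whole mapping at once:

# print each SCSI address next to the block device it belongs to
for dev in /sys/class/scsi_device/*/device/block/*; do
    addr=${dev#/sys/class/scsi_device/}       # strip the fixed prefix
    printf '%s -> /dev/%s\n' "${addr%%/*}" "${dev##*/}"
done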
|
Under the /sys/class/scsi_device folder I have the following:
root@linux01:/sys/class/scsi_device # ls
1:0:0:0  2:0:0:0  2:0:1:0  3:0:0:0

How can I know how each of these devices is related to the disk?
For example, how can I determine if device 2:0:1:0 is disk /dev/sdb?
root@linux01:/sys/class/scsi_device # sfdisk -s
/dev/sdb: 15728640
/dev/sdc: 524288000
/dev/sda: 153600
[...]
# more /etc/redhat-release ( Linux VM machine )
Red Hat Enterprise Linux Server release 6.5 (Santiago) | Correspondence between SCSI device entries in /sys and the disks in /dev |
Proper SAS/SATA connectors are hot-plug safe, so as long as you are using those connectors both for data and power (not the usual PC Molex power connector), you won't hurt anything by plugging them in.
|
I have Ubuntu 12.04.1 LTS 64-bit on a PowerEdge 2900. My current setup has two 300 GB disks (no RAID), but I want to migrate the system to three new 600 GB disks. I'm trying to connect the new disks, make a RAID 5 array, and copy my partitions to the new RAID, but I'm not sure if the server has hot-plug support or, in particular, if it's activated.
Looking at the system I get:
admin@host:~$ lsscsi -v
[4:0:0:0] disk HITACHI HUS151414VLS300 A48B /dev/sda
dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:08.0/host4/port-4:0/end_device-4:0/target4:0:0/4:0:0:0]
[4:0:1:0] disk HITACHI HUS151414VLS300 A48B /dev/sdb
  dir: /sys/bus/scsi/devices/4:0:1:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:08.0/host4/port-4:1/end_device-4:1/target4:0:1/4:0:1:0]

admin@host:~$ lspci | grep '02:08.0'
02:08.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)

The LSI SAS 1068 has hot-swap support according to its description, but I'm not sure, and I can't power off the system for the migration or check in the BIOS. I'm afraid to just connect a disk in case it damages the controller or the disk itself, so I need a way to check whether hot-plug/hot-swap is activated on the system.
| How to check if hot-swap or hot-plug are activated on my Linux machine |
A more convenient way is to use the lsscsi utility.
From documentation about FC:For FC devices (logical units), the '--transport' option will show the
port name and the port identifier instead of the SCSI INQUIRY
"strings". For example:$ lsscsi -g
[3:0:0:0] enclosu HP A6255A HP04 - /dev/sg3
[3:0:1:0] disk HP 36.4G ST336753FC HP00 /dev/sdd /dev/sg4
[3:0:2:0] disk HP 36.4G ST336753FC HP00 /dev/sde /dev/sg5

$ lsscsi -g --transport
[3:0:0:0] enclosu fc:0x50060b00002e48a3,0x0b109b - /dev/sg3
[3:0:1:0] disk fc:0x21000004cf97de68,0x0b109f /dev/sdd /dev/sg4
[3:0:2:0] disk fc:0x21000004cf97e385,0x0b10a3 /dev/sde /dev/sg5

lsscsi uses sysfs (from the Introduction section of the documentation):

The lsscsi command scans the sysfs pseudo file system that was
introduced in the 2.6 Linux kernel series. Since most users have
permissions to read sysfs (usually mounted at /sys ) then meta
information can be found on some or all SCSI devices without a user
needing elevated permissions to access special files (e.g. /dev/sda ).
The lsscsi command can also show the relationship between a device's
primary node name, its SCSI generic (sg) node name and its kernel
name. |
I have Debian 9 running.
It has an SSD connected as well as a Fibre Channel link to SAN storage.
As far as I can see, both are visible as /dev/sdX devices.
How can I find out which is the local disk and which is the SAN storage?
Where is the storage configured in the system?
| SCSI: SAN or local disk? |
You're missing the package "lsscsi". You can run yum install lsscsi (on Red Hat based distributions) or apt-get install lsscsi (on Debian based distributions).
|
I can't execute the "lsscsi" command on Linux; I keep getting "command not found" when I try to run it.
How can I solve this issue?
| Failed in execute of "lsscsi" command [closed] |
The four numbers represent a SCSI address, often referred to as H:C:T:L. The four components are host, channel (or bus), target, and LUN.
With drives you’re likely to encounter on an end-user system (SATA, consumer NVMe, USB), the channel, target, and LUN will all be zero. The host number will depend on which port the drive is connected to, and how it’s enumerated; for fixed drives (SATA, NVMe), it won’t vary most of the time, but it’s not impossible for it to change.
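If lsscsi is installed, it shows the same four-part address for every attached device, which makes it easy to compare across reboots. A hypothetical run (the device shown is a made-up example):

$ lsscsi
[5:0:0:0]    disk    ATA      ExampleVendor SSD  1.0   /dev/sda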
|
I can usually see this log in dmesg:
sd 5:0:0:0: [sda] Attached SCSI disk

Can you please explain what these 4 numbers are?
Will these numbers change after a reboot, or should they stay constant?
| sd 5:0:0:0: [sda] Attached SCSI disk, what are these four numbers? Will they change? |
Use the newer Linux driver version 15.00.00.00 from LSI. This 700 MB download also contains precompiled binaries for Debian 6.0.5.
Installation instruction for amd64 architecture - adapted from the included readme - are:
# cd debian/rpms-03
# dpkg -i mpt2sas-15.00.00.00-3_Debian6.0.5.amd64.deb

And the output is:
Selecting previously deselected package mpt2sas.
(Reading database ... 28905 files and directories currently installed.)
Unpacking mpt2sas (from mpt2sas-15.00.00.00-3_Debian6.0.5.amd64.deb) ...
pre 15.00.00.00
Setting up mpt2sas (15.00.00.00-3) ...
post 15.00.00.00
The mpt driver for kernel 2.6.32-5-amd64 is now version 15.00.00.00
Working files in /tmp/mkinitramfs_PvDVif and overlay in /tmp/mkinitramfs-OL_Ko3jrS
post Install Done.

The result is that the old driver is renamed from:
/lib/modules/2.6.32-5-amd64/kernel/drivers/scsi/mpt2sas/mpt2sas.ko

to:
/lib/modules/2.6.32-5-amd64/kernel/drivers/scsi/mpt2sas/mpt2sas.ko.orig

and the new driver is installed at:
/lib/modules/2.6.32-5-amd64/weak-updates/mpt2sas/mpt2sas.ko |
Linux debian squeeze 6.0.6 (2.6.32-5-amd64) is supplied with quite an old 02.100.03.00 version of the mpt2sas driver.
I wish to install a much newer mpt2sas driver version. I know there are backported kernel versions available, like bpo.3 and bpo.4. Those backports both contain version 10 of the mpt2sas driver.
The mpt2sas.ko module is already blacklisted from being loaded during boot, with:
$ echo 'blacklist mpt2sas' >> /etc/modprobe.d/mpt2sas.conf; depmod; update-initramfs -u -k $(uname -r)

For this mpt2sas driver, precompiled binaries are available in RPM format for RHEL5 and SLES10, and source code is available as well.
How can a much newer mpt2sas driver be installed in Debian?
| How to install much newer mpt2sas driver version in debian squeeze? |
/dev/sgxx is a SCSI-generic device, which allows sending and receiving of raw SCSI commands. When you write to the device, you are expected to start the write with a SCSI header, which defines the operation you wish to do.
Writing random data to an sg device is really a bad idea. You'll be sending random SCSI commands, which might not even exist (hence function not implemented) and furthermore giving a random byte length for the operation, which is highly likely to result in cannot allocate memory. (If you're really unlucky, the random command might do something.)
Depending on what device you actually have connected to /dev/sg11, you might want to investigate the sg3_utils package, or some more specific SCSI device driver like st (tape drives).
One of the useful utility commands which comes with the sg utils is sg_map, which can tell you the primary device corresponding to an sg device. On non-ancient Linux systems, you can also install lsscsi, which provides a nice listing of SCSI devices, again with both the /dev/sg device and the primary device.
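For example (hypothetical output; the sg numbers, addresses, and device names will differ on your system):

# sg_map -x
/dev/sg0  0 0 0 0  0  /dev/sda
/dev/sg11  2 0 5 0  1  /dev/st0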
sg3_utils also includes sg_dd which is a version of dd which understands the low-level SCSI protocol. (But only use it if you know what you're doing!)
|
I'm trying to use the following command:
dd if=/dev/urandom of=/dev/sg11 bs=16K count=1But when executing it, I get the following error:
dd: writing `/dev/sg11': Function not implemented

When I try dd if=/dev/urandom of=/dev/sg11 bs=16K count=1 conv=fsync, I get a cannot allocate memory error instead, which becomes a Function not implemented error when I increase the bs size.
What causes this issue and how can I fix it?
UPDATE: Sometimes it tells me cannot allocate memory, and then it again tells me function not implemented for the same bs value.
| dd outputting: "function not implemented" when trying to write to /dev/sg11 |
Fedora 29 ships with the 4.18.16 kernel. It appears that CFQ is the default.
$ grep CONFIG_DEFAULT_IOSCHED= /boot/config-4.18.16-300.fc29.x86_64
CONFIG_DEFAULT_IOSCHED="cfq"
$ grep CONFIG_SCSI_MQ_DEFAULT /boot/config-4.18.16-300.fc29.x86_64
# CONFIG_SCSI_MQ_DEFAULT is not set
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

As of this writing (November 24, 2018), 4.19.3 is available as an update for F29, but the config options do not appear to have changed.
4.20.0 (RC1) is in the "Rawhide" devel tree. In that devel-tree kernel, CFQ is still the default, and CONFIG_SCSI_MQ_DEFAULT is still unset. The Fedora Kernel list at https://lists.fedoraproject.org/archives/list/[emailprotected]/ is the best place to discuss whether this should change.
|
If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device?
Background information
Fedora 29 includes a Linux kernel from the 4.19 series. (Technically, the initial release used a 4.18 series kernel. But a 4.19 kernel is installed by the normal software updates).
Starting in version 4.19, the mainline kernel has CONFIG_SCSI_MQ_DEFAULT as default y. I.e. that's what you get if you take the tree published by Linus, without applying any Fedora-specific patches. By default, SCSI and SATA devices will use the new multi-queue block layer. (Linux treats SATA devices as being SCSI, using a translation based on the SAT standard).
This is a transitional step towards removing the old code. All the old code will now be removed in version 5.0 (originally to be numbered 4.21), the next kernel release after 4.20.
In the new MQ system, block devices use a new set of I/O schedulers. These include none, mq-deadline, and bfq. In the mainline 4.19 kernel, the default scheduler is set as follows:

/* For blk-mq devices, we default to using mq-deadline, if available, for single
   queue devices. If deadline isn't available OR we have multiple queues,
   default to "none". */
For the legacy SQ block layer, the default scheduler is CFQ, which is most similar to BFQ.
=> The kernel's default I/O scheduler can vary, depending on the type of device: SCSI/SATA, MMC/eMMC, etc.
CFQ attempts to support some level of "fairness" and I/O priorities (ionice). It has various complexities. BFQ is even more complex; it supports ionice but also has heuristics to classify and prioritize some I/O automatically. deadline style scheduling is simpler; it does not support ionice at all.
=> Users with the Linux default kernel configuration, SATA devices, and no additional userspace policy (e.g. no udev rules), will be subject to a change in behaviour in 4.19. Where ionice used to work, it will no longer have any effect.
However Fedora includes specific kernel patches / configuration. Fedora also includes userspace policies such as default udev rules.
What does Fedora Workstation 29 use as the default I/O scheduler? If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device?
| What does Fedora Workstation 29 use as the default I/O scheduler? |
Iomega were a common supplier of tape drives and accessories, catering mostly for the home and small-company market. USB is not an ideal interface for such an adapter because of adapter software support, I/O speed and latency. If you can use a PCI or PCI-e card interface, that would be an improvement. But if the adapter you've found has the right specs, then the USB-SCSI adapter should work...
... however, not all SCSI interfaces are the same. Both the physical connector and the electrical interface have changed over the years, so it is important to match the specs up, especially don't connect LV (low voltage) ports to non-LV ports! Internal (inside system box) cabling was often different to external (separate enclosure) cabling; internal cables are often "IDC" ribbon cables. Finally, a SCSI bus needs terminating, for which terminators (essentially an external dongle) can be obtained, but many devices include internal termination (=internal dongle) which can be switched on and off. Ensure you have exactly one termination dongle active for each bus or you will get errors!
You don't say whether you're using Linux, Windows or something else. I would advise Linux unless you already have drivers and other software: the drivers for Windows are not easy to find and were always fiddly to set up.
If/when you get the hardware connected up and a driver configured, don't stick your current tape in it first: test the drive can read and write properly with a blank tape!
If the data on the tape is valuable it may well be worth finding a company that will read it for you: setting this up will have a cost.
So in summary:
- it can be made to work
- be careful of SCSI cabling and interface standards, so:
- buy the drive / cable / interface card as a group, not one at a time
- then get bus termination right
- Linux will almost certainly drive it out of the box; older Windows boxen may be able to, depending on driver support
- tape drives often care very much about the tape block size. Make sure you know what the important values are for your tape and application.
|
I have a DDS-1 tape (Sony 60M 1.3 GB tape) and am hoping to find a tape drive where I can still read it. The problem is that while I can find the associated compatible tape drives online, I am not sure if the interface would work. The older tape drives all rely on a SCSI connection. I am wondering if I am able to use an adapter like the SCSI-to-USB Iomega shown in the picture, and be able to plug and play with a Linux system? Thanks.
| How can I read and extract information from a DDS-1 tape today? |
From this line in the log:
st0: Sense Key : Medium Error [current]

it looks like either the tape is damaged or dirty, or the drive head is dirty, misaligned, or damaged. The first thing to try is to run a cleaning tape through the drive, then try writing to that tape again. If you get media errors again, try writing to a known good tape, or to a brand-new tape. Do not put tapes with valuable data into the drive until you've found a solution: the drive might damage any tape put into it.
This drive appears to be part of a library. The library may provide additional information it has retrieved from the drive about the error (in particular, whether it thinks it was a drive error or a tape error). The info should be visible on the front panel and/or over the network.
|
When I try to back up more than 1 MB of data to a tape (LTO3) using the tar command, it shows me the following error.
xyz@localhost# tar -cvf /dev/nst0 file1.tar
file1.tar
tar: /dev/nst0: Cannot write: Input/output error
tar: Error is not recoverable: exiting now

Output of mt -f /dev/st0 status:
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (41010000):
BOT ONLINE IM_REP_EN

Output of dmesg:
st0: Sense Key : Aborted Command [current]
st0: Add. Sense: Information unit iuCRC error detected

and
st0: <<vendor>> ASC=0xff ASCQ=0xffASC=0xff <<vendor>> ASCQ=0xff
Errata on LSI53C1030 occurredsc->req_bufflen=0x2800, xfer_cnt=0x00,difftransfer= 0x1400
st0: Sense Key : Medium Error [current]
Info fld=0x1400

Output of cat /proc/scsi/scsi:
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: NECVMWar Model: VMware IDE CDR10 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Vendor: HP Model: Ultrium 3-SCSI Rev: G54W
Type: Sequential-Access ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 02 Lun: 00
Vendor: HP Model: 1x8 autoloader Rev: 1.50
Type: Medium Changer ANSI SCSI revision: 03 | tar: /dev/nst0: Cannot write: Input/output error when taking backup |
The default order in which sda, sdb, sdc are assigned is unpredictable. But it can be overridden through udev. You can control the name of the block device files by adding directives in /etc/udev/rules.d/local.rules (some (older?) systems may only support /etc/udev/rules.conf). Better, you can add directives to create symbolic links, and use those symbolic links in your fstab. You can match by driver, by serial number, or call external programs to read things like filesystem UUIDs. The official documentation is a bit dry; if you need to write udev rules, you may prefer to start with a tutorial.
KERNEL=="sd*", DRIVERS="ahci", SYMLINK+="sata"If you're using LVM exclusively on a drive, it doesn't matter what the letter the block device for the disk uses: you'll just be using the volume names. (That's one of the major advantages of LVM.)
If you look in /dev/disk/by-*, you'll see various ways of naming disks that are part of udev's default setup: /dev/disk/by-id (disk serial number and more), /dev/disk/by-label (filesystem or other label), /dev/disk/by-path (SCSI IDs and so on), /dev/disk/by-uuid (filesystem UUIDs and the like). These may be sufficient for your purposes.
It's better to match filesystem labels or UUIDs than disk serial numbers, because those don't change if you crash a disk in a RAID array or restore from a byte-for-byte copy (or, for labels, make label restoration part of your recovery procedure). You can use filesystem UUIDs directly in /etc/fstab: use UUID=01234567-89ab-cdef-0123-456789abcdef in the first field, instead of a block device path.
|
I have an HP xw8200 workstation running Linux with two small, fast SCSI drives hooked up to the onboard LSI SCSI controller. The drives get labeled /dev/sda and /dev/sdb, respectively. I have a large SATA disk that I want to add to the system to store data, but every time I connect it, it gets assigned sda and the two SCSI drives are assigned sdb and sdc, which messes with the boot procedure. How can I get this SATA drive to use sdc? Or what's the quickest way to get this set up?
| Devices get renamed when SATA disk attached |
It's been a long time since I've used tape. However, here's what I believe is happening
mt -f /dev/st0 rewind

This rewinds the tape in /dev/st0 ready for writing. Once the device is closed, the tape is then automatically rewound because you didn't use the non-rewind device, probably called something like /dev/nst0. Obviously in this instance the second part of this operation is effectively a no-op.
dd if=/dev/st0 of=-

This reads as many blocks of 512 bytes from the tape device /dev/st0 as possible, and writes them to a file called - in your current directory. (Specifically, - is not an alternative name for stdout.) For a tape this can cause a lot of overruns and rewinds as it tries to handle partial reads from the typically larger block size (often 4K or 8K, but it can be much larger). At the end of the dd operation the device is closed and the tape will be rewound automatically.
Depending on the block size you may want something like this (I've called the output file tape.dat rather than -)
dd bs=4K if=/dev/st0 > tape.dat |
Here are my commands
mt -f /dev/st0 rewind
dd if=/dev/st0 of=-

As I understand it, the first command rewinds my tape in /dev/st0, and the second command writes the contents of /dev/st0 to -. My questions are:

Where is -?
What is this command doing when it writes the data from the tape to -?

The result of the command is:
dd: writing to '-': No space left on device
1234567+0 records in
1234566+0 records out
140000000000 bytes (141 GB) copied, 14500.9 s, 9.8 MB/s

It appears to me I have written the data to something, but I would like to verify where that data was written.
Is it just reading the tape?
Thanks for the help
| Testing LTO drive with mt and dd |
It is not possible to expose disk drives directly as block devices through an Adaptec RAID controller. Almost all controllers from Adaptec lack this feature - at least the 5405 and 5805 and, more generally, the whole 3 and 5 series; there is no information about the 6 series of RAID controllers. The controller's BIOS doesn't allow it - it doesn't support HBA functionality at all.
Several folks tried to do this, but were unsuccessful.
The closest workaround to what is described above is to create a JBOD volume that consists of only a single disk.
The only exceptions that support HBA mode are the Adaptec Series 7 and Adaptec Series 8 controllers (see the manual). More explanation from Adaptec here
You can determine whether your controller supports this feature by looking at its BIOS menus. Only if the following (or a similar) option is present - Controller Mode - can you turn your RAID controller into a simple HBA.
If no such option exists, there is nothing you can do here.
|
Due to a shortage of free built-in SATA 3.0 ports (6 in total) on my motherboard (Gigabyte 970A-DS3 rev. 3), I got an Adaptec RAID 5405 (3G SAS/SATA RAID) to move all "slow" SATA 1.0/2.0 devices to this card, without creating any RAID. The Adaptec RAID 5405 has one SFF-8087 connector and allows connecting up to 4 devices using an SFF-8087 to 4x SATA cable. Now I have two devices connected to this controller using this type of cable: a DVD-RW (Plextor PX-891SA) and a SATA 2.0 HDD (Hitachi HDP725050GLA360). For some reason, the connected HDD is not visible as a block device and thus I can't mount the existing partition, neither by using non-persistent /dev/sdXX names, nor by using UUID (there is no such device/partition not only within /dev/disk/by-uuid but also within the whole /dev/disk/by-* subtree). I'm running oldstable Debian Stretch 9.13.
uname -a:
Linux tekomspb 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux

lspci | grep -i adaptec shows me:
06:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)

First, I tried to discover anything from lsscsi -g:
[0:1:1:0] disk Hitachi HDP725050GLA360 GM4O - /dev/sg0
[0:3:0:0] cd/dvd PLEXTOR DVDR PX-891SA 1.06 /dev/sr0 /dev/sg1
[1:0:0:0] disk ATA PLEXTOR PX-128M5 1.05 /dev/sda /dev/sg2
[2:0:0:0] disk ATA Hitachi HDP72505 A50E /dev/sdb /dev/sg3
<more disks, attached to the MB SATA connectors>

The first row, sixth column says - (nothing), despite the fact that an sg device is present in the /dev tree. I did some further research and found that although the drive is detected by the HBA (both by the HBA BIOS at startup time and from the shell using Adaptec's arcconf utility), is visible in /dev as /dev/sg0, and is visible to smartctl via smartctl -d sat -a /dev/sg0, it is not presented as a block device in /sys. On the other hand, the optical drive is detected just fine as a block device both within /sys and /dev (as /dev/sr0 and /dev/sg1).
Following is the output of tree -F -d -L 3 --noreport. It is clearly visible that the optical drive is detected as a block device, but the HDD is not for some reason.
/sys/devices/pci0000:00/0000:00:15.0/0000:06:00.0/host0/
├── power
├── scsi_host
│ └── host0
│ ├── device -> ../../../host0
│ ├── power
│ └── subsystem -> ../../../../../../../class/scsi_host
├── subsystem -> ../../../../../bus/scsi
├── target0:1:1
│ ├── 0:1:1:0
│ │ ├── bsg
│ │ ├── generic -> scsi_generic/sg0
│ │ ├── power
│ │ ├── scsi_device
│ │ ├── scsi_generic
│ │ └── subsystem -> ../../../../../../../bus/scsi
│ ├── power
│ └── subsystem -> ../../../../../../bus/scsi
└── target0:3:0
├── 0:3:0:0
│ ├── block
│ ├── bsg
│ ├── driver -> ../../../../../../../bus/scsi/drivers/sr
│ ├── generic -> scsi_generic/sg1
│ ├── power
│ ├── scsi_device
│ ├── scsi_generic
│ └── subsystem -> ../../../../../../../bus/scsi
├── power
    └── subsystem -> ../../../../../../bus/scsi

Output from arcconf getconfig 1:
----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
Device #0
Device is a Hard drive
State : Ready
Supported : Yes
Transfer Speed : SATA 3.0 Gb/s
Reported Channel,Device(T:L) : 0,1(1:0)
Reported Location : Connector 0, Device 1
Vendor : Hitachi
Model : HDP725050GLA360
Firmware : GM4OA52A
Serial number : GEAXXXXXXXXXXX
Size : 476940 MB
Write Cache : Enabled (write-back)
FRU : None
S.M.A.R.T. : No
S.M.A.R.T. warnings : 0
Power State : Full rpm
Supported Power States : Full rpm,Powered off,Reduced rpm
SSD : No
MaxCache Capable : No
MaxCache Assigned : No
NCQ status : Enabled
Device #1
Device is a CD ROM
Supported : Yes
Transfer Speed : SATA 1.5 Gb/s
Reported Channel,Device(T:L) : 2,0(0:0)
Vendor : PLEXTOR
Model : DVDR PX-891SA
   Firmware : 1.06

How can I fix this issue to allow the HDD to be presented as a block device and, thus, be mounted?
| SATA disk drive behind Adaptec RAID 5405 can't be detected as block device |
Try this:
multipathd -k
show config

On my system it seems that an empty blacklist is ignored, and that it contains, in addition to vendor-blacklisted devices, these devnode patterns:
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^dcssblk[0-9]*"It matches "dm-"
You could try to add the "dm-1, dm-2, ..." devnodes to the blacklist exceptions. I never tried it, and I don't know the impact of putting an exception on a multipath dm device, for instance.
|
How do I configure multipath in a test VM (the purpose is purely academic)?
I made a new logical volume and modified multipath.conf as follows:
defaults {
udev_dir /dev
user_friendly_names yes
}blacklist {
}blacklist_exceptions {
device {
vendor "VMware,"
product "VMware Virtual S"
}
}

and multipath -v3 says:
Apr 22 03:22:24 | sdb: rev = 1.0
Apr 22 03:22:24 | sdb: h:b:t:l = 2:0:1:0
Apr 22 03:22:24 | (null): (VMware,:VMware Virtual S) vendor/product whitelisted
Apr 22 03:22:24 | sdb: serial =
Apr 22 03:22:24 | sdb: get_state
Apr 22 03:22:24 | sdb: path checker = directio (config file default)
Apr 22 03:22:24 | sdb: checker timeout = 180000 ms (sysfs setting)
Apr 22 03:22:24 | sdb: state = running
Apr 22 03:22:24 | directio: starting new request
Apr 22 03:22:24 | directio: io finished 4096/0
Apr 22 03:22:24 | sdb: state = 3
Apr 22 03:22:24 | sdb: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (config file default)
Apr 22 03:22:24 | /lib/udev/scsi_id exitted with 1
Apr 22 03:22:24 | error calling out /lib/udev/scsi_id --whitelisted --device=/dev/sdb
Apr 22 03:22:24 | sdb: state = running
Apr 22 03:22:24 | /lib/udev/scsi_id exitted with 1
Apr 22 03:22:24 | error calling out /lib/udev/scsi_id --whitelisted --device=/dev/sdb
Apr 22 03:22:24 | sdb: detect_prio = 1 (config file default)
Apr 22 03:22:24 | sdb: prio = const (config file default)
Apr 22 03:22:24 | sdb: const prio = 1
Apr 22 03:22:24 | dm-0: device node name blacklisted
Apr 22 03:22:24 | dm-1: device node name blacklisted
Apr 22 03:22:24 | dm-2: device node name blacklisted
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
2:0:0:0 sda 8:0 1 undef ready VMware,,VMware Virtual S running
2:0:1:0 sdb 8:16 1 undef ready VMware,,VMware Virtual S running
[root@localhost ~]#

I want to configure multipath for the logical volume on /dev/sdb.
My blacklist is empty; why does it say that dm-0/1/2 are blacklisted?
Also, when I run lib/udev/scsi_id --whitelisted --device=/dev manually, I get no errors. No output or changes either, though...
| Multipath to a logical volume in a staging VM |
None of the above commands works if you are expanding or re-scanning an existing LUN.
Solution:
echo "1" > /sys/block/<DEVICE>/device/rescanHandy script:
cd /dev
for DEVICE in sd[a-z] sd?[a-z]; do echo '1' > /sys/block/$DEVICE/device/rescan; done
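After the rescan, a quick sanity check (my suggestion, not part of the original recipe) is to compare the reported size with lsblk /dev/sdX, or to watch dmesg for the kernel's "detected capacity change" message for the device.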
We have CX4-120 EMC SAN storage. I expanded an existing LUN to 20 GB, but now I am not able to see any cylinder change in the host's fdisk -l output. I am running the following commands to re-scan my HBA and the LUN:
echo "1" > /sys/class/fc_host/host1/issue_lip
echo "1" > /sys/class/fc_host/host2/issue_lipAnd then
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
echo "- - -" > /sys/class/scsi_host/host3/scan
echo "- - -" > /sys/class/scsi_host/host4/scan
echo "- - -" > /sys/class/scsi_host/host5/scan
echo "- - -" > /sys/class/scsi_host/host6/scan
echo "- - -" > /sys/class/scsi_host/host7/scanBut still fdisk -l /dev/emcpowere showing old cylinder size, Am i missing something? I have qlogic hda
| Linux EMC scan Lun not working |
Since the large array is on a controller of a separate type (make and model, or rather: chipset), and nothing on it is needed for the system boot process, you can work around this by forcing a delayed controller initialization. The easiest way to do that is to simply blacklist the kernel module that does the kernel initialization, then load it manually very late in the boot process.
NOTE: It is extremely important that this is done only with controllers that hold nothing that is needed for the system to boot properly. Otherwise, you will find yourself with a boot failure - anything from services failing to start or reading/writing files in the wrong place, to an outright kernel panic. Keep this in mind before implementing a scheme like this.
First, find out which kernel module handles the controller in question. A quick Google search based on the information provided in the question says this likely indeed is the mpt2sas module. Second, make sure that the code is actually compiled as a module; something like find "/lib/modules/$(uname -r)" -name 'mpt2sas*' -print will do nicely. (Yes, I know that -print is the default, but I like to be explicit...) Verify its status as a loaded module using lsmod | grep mpt2sas.
Then, add the module to the modules blacklist file. You can either add it to /etc/modprobe.d/blacklist.conf, or a separate configuration file such as /etc/modprobe.d/mpt2sas.conf. Just add the following line to such a file.
blacklist mpt2sas

This will disable automatic loading of the module in question. We are going to take advantage of the fact that it can still be loaded manually using e.g. modprobe - where "manually" can mean "from a script".
Open /etc/rc.local (which is executed after all other rc scripts) in an editor, and add the following lines somewhere in it:
modprobe mpt2sas
mount -a

You probably want to put this late in the file, but obviously before any exit or similar directives. The mount -a may or may not be needed depending on whether the file systems are actually to be mounted on boot and whether they are mounted automatically when they are found by the kernel disk partition probing. You can try without it first, if you want to and feel safe that you can gain access to the system if it doesn't work. If you need anything special to start up RAID or suchlike, that goes between the modprobe and the mount. If any specific services need the large array to be available, you can create a separate rc script to start the array and specify that it should run before any such services are started. You can make it execute in the background by wrapping it in a subshell with a syntax like ( commands ) &, but that may or may not have any noticeable effect on the result because partition probing is done in the kernel. You may be able to use hdparm to spin down the drives after the partition has been mounted, if they are only rarely accessed. In short, this is the part you may want to customize for your particular needs.
Then, update your initramfs by executing update-initramfs -u as root.
If nothing failed, you should now be able to reboot your system and enjoy the benefits of delayed disk spin-up and partition probing.
|
A "Linux debian 3.2.0-0.bpo.3-amd64 #1 SMP Thu Aug 23 07:41:30 UTC 2012 x86_64 GNU/Linux" system has at least 5 SCSI hosts:
root@debian:~# ls /sys/class/scsi_host
host0  host1  host2  host3  host4

root@debian:~# cat /sys/class/scsi_host/host0/proc_name
mpt2sas
root@debian:~# cat /sys/class/scsi_host/host1/proc_name
ata_piix
root@debian:~# cat /sys/class/scsi_host/host2/proc_name
ata_piix
root@debian:~# cat /sys/class/scsi_host/host3/proc_name
ata_piix
root@debian:~# cat /sys/class/scsi_host/host4/proc_name
ata_piix

The mpt2sas SCSI bus has 32 drives attached. On boot (after a reboot) the drives are mostly in standby state (spun down). This causes each drive to spin up in sequential order. The spin-up time of a typical drive is 8 to 9 seconds. As a result, the boot after a reboot takes almost 5 minutes.
Configuring the kernel to scan the SCSI bus asynchronously with GRUB_CMDLINE_LINUX="scsi_mod.scan=async" doesn't improve the boot time after a reboot. And setting scsi_mod.scan=none makes the system not boot at all.
The final goal is to boot via USB, which is another scsi_host:
# cat /sys/class/scsi_host/host5/proc_name
usb-storage

How can I configure this system to exclude the mpt2sas bus (or both mpt2sas and ata_piix) from spinning up all drives during boot after a reboot?
| How to skip/exclude one SCSI bus from scanning during boot? |
MaxBlock from tapeinfo means the maximum block size that the drive supports. For example, when you use the tar command, you may specify the block size with the tar -b option. This size has an upper limit, and that limit corresponds to MaxBlock. On the other hand, mt-st -f /dev/nst0 tell shows where the tape is. Indeed, if you look at the Block Position from tapeinfo, this number agrees with the value returned by mt-st tell.
You can try the following bash script that I have created to read the remaining capacity from the LTO-CM chip: https://github.com/Kevin-Nakamoto/LTO-CM-Read
|
I am currently trying to back up data onto an LTO-4 tape using mt-st
and gnu tar 1.32, but I want to make sure I stop trying to copy things before the tape runs out! LTO-4 nominally has a capacity of 800G or 1.6T compressed. tapeinfo -f /dev/nst0 | grep Comp returns
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1

which I think means that compression is enabled? Then again, I am adding archives to the tape with mt-st -f /dev/nst0 eod ; tar -czf /dev/nst0 directoryname, so I am also compressing that archive with gzip.
In short, I don't know how to visualize how much data the archives on the tape are taking up: they are measured in blocks, and I don't know how much data a block consists of. I have copied about 200G of data to the tape already, and mt-st -f /dev/nst0 eod ; mt-st -f /dev/nst0 status ; echo -e "\n" ; mt-st -f /dev/nst0 tell returns:
SCSI 2 tape drive:
File number=1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x46 (LTO-4).
Soft error count since last status=0
General status bits on (9010000):
EOD ONLINE IM_REP_EN

At block 18763534.
Here is the rest given by tapeinfo if that helps:
Vendor ID: 'HP '
Product ID: 'Ultrium 4-SCSI '
Revision: 'U57D'
Attached Changer API: No
SerialNumber: 'HU1104ERC3'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 0
Block Position: 18763534
Partition 0 Remaining Kbytes: 800226
Partition 0 Size in Kbytes: 800226
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0 | How do you determine the remaning capacity of a magnetic tape with mt or tar? How much space is in a block? |
So, after a discussion with another system engineer, we figured it could be the Meltdown patch that is at fault here.
Because it's an old PCI-X controller, the driver's structure may trigger the KPTI (kernel page-table isolation) overhead and slow down/limit the throughput.
I tried disabling PTI on the current kernel, but there was no improvement; maybe there are other patches we can't disable.
The best option, which I can't do, would be to install a pre-Meltdown kernel to see if it helps.
In the end I decided to switch to a PCI Express SCSI controller, and that fixed my issue.
|
I've replaced my backup server and switched from Debian Stretch to CentOS 7. I have a SuperLoader 3 LTO4 SCSI on an old Adaptec SCSI 160 controller. At first, I had to use the CentOSplus kernel, which adds the old SCSI aic7xxx module, to have my hardware detected.
It's working, but I get a top speed of 2 MB/s when I run a "btape speed" test.
I've also tried with another SCSI 320 controller; this one is natively supported by the CentOS kernel. Same issue. I also booted Debian on the same hardware, and there I get the maximum speed.
Right now, I'm pretty sure it's a kernel issue. Do you know if there are any settings to avoid this issue?
Edit 17/04/18: In tapestat I see a 99% write wait time; is that why it's so slow? Any ideas? I'm starting to lose hope :(
| SCSI LTO4 very slow on CentOS 7 |
Hardware RAID implementations are really specific to the controller. With a non-serious hardware controller you have absolutely no guarantee of recovering your data in case of a controller failure.
With software RAID using free software you can try moving the disks to a new computer, but that's not your case.
|
I have 4 disks (SATA) that I am 95% certain belong to a RAID. The client/boss handed me a box with some disks in them and said: recover the data, reuse the disks. There is another SCSI drive that I suppose is the primary/OS drive.
The box they came from has no power supply, and is the only thing I have with the old SCSI interface.
So here is my issue. How/what do I do with the four RAID drives that I know next to nothing about? How do I mount these to read and grab any/all data from them? I guess I may be able to find a box to plug them into, but even then how do I mount them? Will the 4 drives combined contain all the necessary information to mount them and pull the data off?
I don't even know what type of RAID it is. I just guess it is a RAID because my boss/client said it was (and he is right 50% of the time), and when I try to run parted on them it spits out an error message saying the volume is bigger than the drive.
Please help
| SCSI's, SATA's, RAID's oh my. Please direct me the wizard of RAID recovery [closed] |
As no formatting program seems to exist, I wrote the following shell script which sends appropriate FORMAT UNIT commands to format all 80 tracks of a floppy disk. The device da0 is formatted unless a different device is supplied as an argument. The CDB has been taken from the UFI specification.
#!/bin/sh

set -e
exec >&2

drive=${1:-da0}
numblocks=2880
blocklen=512
tracks=80
track=0

progress() {
    [ -t 2 ] && printf "\r%2d/%2d" $track $tracks
}

for track in `seq 0 $((tracks-1))`
do
    progress

    # format bottom
    camcontrol cmd "$drive" -v \
        -c '04 17 i1 00 00 00 00 00 0c 00 00 00' $track \
        -o 12 '00 b0 00 08 i4 00 i3' $numblocks $blocklen

    # format top
    camcontrol cmd "$drive" -v \
        -c '04 17 i1 00 00 00 00 00 0c 00 00 00' $track \
        -o 12 '00 b1 00 08 i4 00 i3' $numblocks $blocklen
done

track=$tracks
progress
[ -t 2 ] && echo
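Saved under a name of your choice, say formatfloppy.sh (a hypothetical name), formatting a floppy in the second drive would then look like:

# sh formatfloppy.sh da1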
I want to format an MF 2HD floppy in a USB floppy disk drive. Since a USB floppy disk drive appears as a da(4) device instead of an fdc(4) device, the standard fdformat utility cannot be used. How can I format my floppy disk?
| How to format floppy disks in a USB floppy disk drive on FreeBSD? |
It might be one for ServerFault.
But you're quite correct. Fibre Channel doesn't have the same protection mechanisms as TCP. It's more like UDP in that regard (although that's a bit of a weak analogy) and for many of the same reasons - for some applications, TCP is a bad solution because of those reliability mechanisms - your stream can 'stall' for a retransmit, and that hurts a near-real-time application more than a dropped packet does. Storage I/O latency starts to 'hurt' when you're more than about 20 ms out, and that's not enough time for TCP to do its thing really.
What happens for FCP is that the SCSI driver on the endpoint handles the reliability, because as part of that it can also do load balancing. Commonly, you won't single-attach a fiber, but instead have dual HBAs with dual independent paths to storage.
So your driver routes packets however it likes (some are smarter than others - most do multipathing these days, but some do some quite clever adaptive multipathing), and keeps track of which IOs have been acknowledged or not. The OS can queue IO where appropriate, or ... well, not, if it thinks that's a bad idea. Practically speaking, it does this as part of routine filesystem caching mechanisms anyway.
This is why, for example, open has the O_DIRECT option:

O_DIRECT (since Linux 2.4.10)
Try to minimize cache effects of the I/O to and from this
file. In general this will degrade performance, but it is
useful in special situations, such as when applications do
their own caching. File I/O is done directly to/from user-
space buffers. The O_DIRECT flag on its own makes an effort
to transfer data synchronously, but does not give the
guarantees of the O_SYNC flag that data and necessary metadata
are transferred. To guarantee synchronous I/O, O_SYNC must be
used in addition to O_DIRECT. See NOTES below for further
discussion. |
I am pretty new to SCSI and actually not even sure if this is the correct forum to ask. (I did because I found some SCSI questions :) So please feel free to improve/migrate this question.
I am playing with Fibre Channel transmission, and read in an internal document that, unlike TCP, SCSI over FCP-3 does not have guaranteed delivery. Hence my questions:

does this mean that the SCSI standard/protocol itself is not reliable? But I think it was once very popular for hard disks. How was the issue of reliability solved?
similarly, how is reliability handled in a SAN environment? | How is reliability achieved with SCSI and SAN? |
As already noted in comments, you can find a reproducible path in /dev/disk/by-id (based on the device's manufacturer and serial number) or in /dev/disk/by-path (based on the port that the device is plugged into).
Although you can use these to create udev rules to force a specific drive letter, it isn't worth the trouble for a temporary setup like yours.
Note that restarting in software may not work so well. When a disk is dying, it usually helps to unplug it, let it rest for a few minutes and run ddrescue again.
|
I'm trying to ddrescue (through a rescue CD) a drive which keeps spinning down, e.g.:
ddrescue -N -n -A -M -f /dev/sdk /dev/sdd mapfile

Without rebooting I can get the drive for a short while through
echo "- - - " > /sys/class/scsi_host/host11/scanI was hoping to 'watch' that command to run recurrently (bad idea?); but it will occasionally put the drive onto a new /dev/sdX. e.g. from sdf to sdg
Is there some way to force it to keep the same /dev/sdX? In a similar way to how you'd mount a partition through its UUID, but for the drive.
NB: This won't be a permanent solution, just for the rescue.
Or, is there some better way to refer to the drive that won't change?
fdisk gives an identifier which doesn't seem to have changed last time; is this a UUID for the disk?
Disk identifier: A9F95F28-4E6C-4ADB-B618-E9C68D96BFEC

Trying
ddrescue UUID=A9F95F28-4E6C-4ADB-B618-E9C68D96BFEC /mnt/rescue/testdd.image mapfile

ddrescue: Both input and output files must be specified.
Try 'ddrescue --help' for more information.
zsh: no such file or directory: /mnt/rescue/testdd.image

Seems to suggest it's not recognising the UUID, but it could be something else.
Other suggestions very welcome! Thanks in advance.
Very much out of my depth (new to Linux); I am googling further (udev, WWNs) but drowning, badly.
| Force disk onto /dev/sdX |
From the SCSI2-Draft standard (the only one I have that isn't a PDF):
Table D.1 (continued)
+=============================================================================+
| D - DIRECT ACCESS DEVICE |
| . .W - WRITE ONCE READ MULTIPLE DEVICE |
| . . .O - OPTICAL MEMORY DEVICE |
| . . . . |
| ASC ASCQ DTLPWRSOMC DESCRIPTION |
| --- ---- ----------------------------------------------------- |
| 11  0C  D       W  O      UNRECOVERED READ ERROR - RECOMMEND REWRITE THE DATA  |

(obviously, that's not the entire table D.1)
|
I know that in general this means that I have a "bad disk". But I'm after a more specific reason for why I am getting these messages from the kernel:
sd 15:0:0:0: [sda] Attached SCSI disk
sd 15:0:0:0: [sda] Unhandled sense code
sd 15:0:0:0: [sda] Result: hostbyte=0x10 driverbyte=0x08
sd 15:0:0:0: [sda] Sense Key : 0x3 [current]
sd 15:0:0:0: [sda] ASC=0x11 ASCQ=0xc
sd 15:0:0:0: [sda] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 20 00
end_request: critical target error, dev sda, sector 0

I've had a bit of a search and I see that the 0x3 sense key means "Medium error" and ASC=0x11 means "Read error". But it is still a mystery what the ASCQ=0xc means.
The device is a bus-powered USB drive which reports its model as "TOSHIBA MQ01ABB200".
| What does this key code qualifier mean? |
You need to use udev rules to change the ownership of the devices you wish to pass through to your user. To avoid the memory error you also need to increase the memory limit your user is able to lock.
I use Arch Linux, so these files might be in different locations on your distribution.

Create the group kvm if it is not already present, and add it to your user's supplementary groups. You can also use your own user's group if you don't intend to run the VM as another user.

Udev rules:
Create the file /etc/udev/rules.d/99-vm.rules
SUBSYSTEM=="vfio", OWNER="root", GROUP="kvm"
SUBSYSTEM=="***", ATTR{idVendor}=="***", ATTR{idProduct}=="***" OWNER="root", GROUP="kvm"Edit the second line and add similar ones for every device you wish to passthrough with vfio, in this example I use the vendor and product IDs to match the device.Memory limit:
Add the following lines to the file /etc/security/limits.conf
@kvm soft memlock 150000
@kvm hard memlock 150000

Here we set the limit for the group kvm at 150000 KB; that should be enough for your 128 MB VM, but it should be increased if you increase the VM's memory allocation.

SOURCE: https://www.evonide.com/non-root-gpu-passthrough-setup/ (the setup is a little different because he uses a separate user for the vm)
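One extra note that is not from the source above: udev rules only apply to devices processed after the rule is in place, and limits.conf is only read at login, so you may need to run udevadm control --reload followed by udevadm trigger, and log out and back in, before the changes take effect.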
|
I have successfully passed pure PCI-e and PCI devices to a VM.
Now I want to pass a SCSI controller to a VM.
The controller is seen by the OS:
06:06.0 SCSI storage controller [0100]: BusLogic BT-946C (BA80C30) [MultiMaster 10] [104b:1040]
Kernel driver in use: vfio-pci
        Kernel modules: BusLogic

I detach the controller:
virsh nodedev-detach pci_0000_06_06_0

I start the VM:
qemu-system-i386 -boot a -fda boot_install.img -m 128 -no-fd-bootchk --enable-kvm -device pcnet,netdev=network0 -netdev tap,id=network0,ifname=tap1,script=no,downscript=no -device vfio-pci,host=06:06.0

and...
qemu-system-i386: -device vfio-pci,host=06:06.0: VFIO_MAP_DMA failed: Cannot allocate memory
qemu-system-i386: -device vfio-pci,host=06:06.0: VFIO_MAP_DMA failed: Cannot allocate memory
qemu-system-i386: -device vfio-pci,host=06:06.0: vfio 0000:06:06.0: failed to setup container for group 12: memory listener initialization failed: Region pc.ram: vfio_dma_map(0x7fadc1bccc00, 0x0, 0xa0000, 0x7fadb5200000) = -12 (Cannot allocate memory)

How do I solve this?
Thanks
| No way to pass to virtual machine this old scsi controller? |
I was using the wrong tool:
hdparm(8) - get/set SATA/IDE device parameters
For SCSI, use:
sdparm(8) - access SCSI modes pages; read VPD pages; send simple SCSI commands
lsscsi(8) - list SCSI devices (or hosts) and their attributes
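For example (a sketch; it assumes the sdparm and lsscsi packages are installed):

$ sudo sdparm --inquiry /dev/sda
$ lsscsi -g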
|
I'm running VMWare Workstation 12.1.1 Pro under Windows 10, with a guest Linux Mint 17.2 with virtual hardware version 11.
My only current disk is a virtual SATA.
When I add a virtual SCSI device at 0:0 (creating a new 0.4 GB disk as a test), I'm getting the error shown below.
I re-installed vmware-tools after creating the SCSI disk. No change. I also tried upgrading the virtual hardware to version 12.
What am I missing?

@ravi@boxy:~$ sudo hdparm -I /dev/sda

/dev/sda:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00ATA device, with non-removable media
Model Number: ��������@���������p����@�
Serial Number: ���=�D���i@���
Firmware Revision: ��i�
Standards:
Used: unknown (minor revision code 0x00d8)
Supported: 11 10
Likely used: 11
Configuration:
CHS addressing not supported
LBA user addressable sectors: 38024320
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 18566 MBytes
device size with M = 1000*1000: 19468 MBytes (19 GB)
cache/buffer size = unknown
Capabilities:
LBA, IORDY(may be)(cannot be disabled)
Standby timer values: spec'd by Vendor
R/W multiple sector transfer: Max = 255 Current = 255
Recommended acoustic management value: 234, current value: 0
DMA: not supported
PIO: unknown
* reserved 69[2]
* reserved 69[6]
* SET MAX SETPASSWORD/UNLOCK DMA commands
Security:
Master password revision code = 14080
10min for SECURITY ERASE UNIT. 119808min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 0000000000000000
NAA : 0
IEEE OUI : 000000
Unique ID : 000000000
Integrity word not set (found 0x0000, expected 0x79a5)
ravi@boxy:~$
| VMware: `SG_IO: bad/missing sense data` on a fresh SCSI virtual disk
The numbers are assigned by the kernel (and its device drivers), based on hardware information where appropriate. Thus on a real parallel SCSI setup, the second field will identify the bus on the corresponding HBA, the third field will identify the target (which is commonly determined by jumpers on each device), and the fourth identifies a subset of the target (which is determined by the target).
All this is exposed under /sys/block on Linux, so any command can look there for the relevant information. lsscsi does have its own nomenclature in some cases, e.g. for NVMe devices (with “N” in the host field), but all the information used is also available under /sys/block.
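For example, you can recover the H:C:T:L tuple for a block device straight from sysfs (a sketch; sda is a placeholder):

$ ls /sys/class/scsi_device/       # every entry here is named h:c:t:l
$ readlink /sys/block/sda/device   # the symlink target ends in sda's h:c:t:l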
|
When doing lsscsi:
$ lsscsi
[0:0:2:0] disk FUJITSU MAM3184MP 0105 /dev/sda
[1:0:0:0] cd CREATIVE CD5233E 1.00 /dev/scd0
In my understanding:
H : SCSI host id
C : SCSI channel
T : Target Number
L : LUN
Where do the four numbers come from? Are they read from the BIOS, or are they decided by the OS?
Can any other Linux command get these numbers too?
| Where do the H:C:T:L numbers in lsscsi come from?
Hope this helps anyone facing the same issue: there is a config option which has to be enabled while compiling the guest kernel:
CONFIG_SCSI_MQ_DEFAULT=y
You can then use ls /sys/block/sdb/mq/ to see the number of multi queues.
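Before recompiling, it may be worth checking whether the running guest kernel already has the option (a sketch; not every path exists on every system). On kernels of that era you could also enable it at boot with the scsi_mod.use_blk_mq=1 parameter instead of rebuilding:

$ grep CONFIG_SCSI_MQ_DEFAULT /boot/config-$(uname -r)   # distro kernels ship their config here
$ zgrep CONFIG_SCSI_MQ_DEFAULT /proc/config.gz           # if the kernel exposes its own config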
|
QEMU newbie here. I am trying to boot a VM using QEMU; for increased performance I am trying to use a virtio-scsi mounted drive. I am following the steps given here. However, when I boot my VM and try to check for the virtio-scsi queues using ls /sys/block/sdb/mq/, I do not see the mq option. Does that mean I was unable to mount a virtio-scsi drive? But when I checked my boot-up logs I could see I was able to mount my drive.
This is the command I am using to boot my VM
sudo qemu-system-x86_64 -hda x86.img -m 8096 -serial mon:stdio -nographic -smp 4
--enable-kvm -device virtio-scsi-pci,id=scsi0,num_queues=4
-device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 -drive file=test.img,if=none,id=drive0
Any help would be appreciated.
| QEMU virtio-scsi: Cannot see the number of queues after booting VM with virtio-scsi command |
I sent a mail to the target-devel mailing list and, thanks to David Disseldorp, got an answer from him:
This functionality was added recently via:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=54a6f3f6a43cf5a5ad0421e4440a4c7095e7a223
With this change (and 2d882847280e3ae1ddc95175d0fc2006e11bb63f), you
can change the vendor ID via the configfs path at:
target/core/$backstore/$name/wwn/vendor_id
So excluding distributions with backports, you'll need a kernel >= v5.0.
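For example (a sketch: iblock_0/mydisk is a hypothetical backstore name, and the T10 vendor ID field is limited to 8 characters):

# echo -n "MYCORP" > /sys/kernel/config/target/core/iblock_0/mydisk/wwn/vendor_id
# cat /sys/kernel/config/target/core/iblock_0/mydisk/wwn/vendor_id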
|
I'm using LIO for SAN SCSI.
The shared disks show up as LIO-ORG on the client side.
Example:
[root@testing ~]# lsblk -S
NAME HCTL TYPE VENDOR MODEL REV TRAN
sdc 4:0:0:0 disk LIO-ORG mydisk 4.0 fc
I can change the VENDOR name if I build my own kernel, but that's not sustainable.
So the question is: can I change the vendor name in any easy way?
| Can I change LIO SCSI Disk Vendor name |
It looks like this simply doesn't work on newer versions of Ubuntu. The /proc/scsi filesystem has changed in a way that no longer works with the outdated RAID management software.
I have a few options:
1. Set up on an older version of Ubuntu, likely one which is no longer supported.
2. (Maybe, untested) Possibly compile a custom kernel which adds support for the legacy /proc/scsi filesystem.
3. Switch to Windows Server, which has functioning drivers and management software.
Using outdated, unsupported software is a non-option for me, as is trying to maintain a custom-built kernel which may or may not work. I'm switching back to Windows for my server needs.
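For anyone who still wants to try option 2, a first step (only an assumption on my part; I haven't verified it helps the rr62x tooling) would be to check whether the running kernel was even built with the legacy interface:

$ grep CONFIG_SCSI_PROC_FS /boot/config-$(uname -r)   # 'y' means /proc/scsi support is compiled in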
|
I have a RocketRaid 622 RAID controller card, installed on a machine running Linux Mint 18.2. The drivers have been successfully compiled and installed with dkms, after some customizations to make it compatible with the latest linux kernel.
One of the tools that comes with the card is a WebUI service. It depends on the existence of the /proc/scsi/rr62x filesystem path, which appears to no longer exist in the latest Linux kernel? The drives are mounted and visible, but the controller daemon just can't interact with the hardware, because it depends on this /proc/scsi/rr62x interface.
Is there a way to re-enable this missing interface on my machine? Are there some good resources for why this interface was removed in newer versions of linux?
| /proc/scsi/<device> not found, but device is otherwise working |
All modern CPUs have the capacity to interrupt the currently-executing machine instruction. They save enough state (usually, but not always, on the stack) to make it possible to resume execution later, as if nothing had happened (the interrupted instruction will be restarted from scratch, usually). Then they start executing an interrupt handler, which is just more machine code, but placed at a special location so the CPU knows where it is in advance. Interrupt handlers are always part of the kernel of the operating system: the component that runs with the greatest privilege and is responsible for supervising execution of all the other components.1,2
Interrupts can be synchronous, meaning that they are triggered by the CPU itself as a direct response to something the currently-executing instruction did, or asynchronous, meaning that they happen at an unpredictable time because of an external event, like data arriving on the network port. Some people reserve the term "interrupt" for asynchronous interrupts, and call synchronous interrupts "traps", "faults", or "exceptions" instead, but those words all have other meanings so I'm going to stick with "synchronous interrupt".
Now, most modern operating systems have a notion of processes. At its most basic, this is a mechanism whereby the computer can run more than one program at the same time, but it is also a key aspect of how operating systems configure memory protection, which is a feature of most (but, alas, still not all) modern CPUs. It goes along with virtual memory, which is the ability to alter the mapping between memory addresses and actual locations in RAM. Memory protection allows the operating system to give each process its own private chunk of RAM that only it can access. It also allows the operating system (acting on behalf of some process) to designate regions of RAM as read-only, executable, shared among a group of cooperating processes, etc. There will also be a chunk of memory that is only accessible by the kernel.3
As long as each process accesses memory only in the ways that the CPU is configured to allow, memory protection is invisible. When a process breaks the rules, the CPU will generate a synchronous interrupt, asking the kernel to sort things out. It regularly happens that the process didn't really break the rules, only the kernel needs to do some work before the process can be allowed to continue. For instance, if a page of a process's memory needs to be "evicted" to the swap file in order to free up space in RAM for something else, the kernel will mark that page inaccessible. The next time the process tries to use it, the CPU will generate a memory-protection interrupt; the kernel will retrieve the page from swap, put it back where it was, mark it accessible again, and resume execution.
But suppose that the process really did break the rules. It tried to access a page that has never had any RAM mapped to it, or it tried to execute a page that is marked as not containing machine code, or whatever. The family of operating systems generally known as "Unix" all use signals to deal with this situation.4 Signals are similar to interrupts, but they are generated by the kernel and fielded by processes, rather than being generated by the hardware and fielded by the kernel. Processes can define signal handlers in their own code, and tell the kernel where they are. Those signal handlers will then execute, interrupting the normal flow of control, when necessary. Signals all have a number and two names, one of which is a cryptic acronym and the other a slightly less cryptic phrase. The signal that's generated when a process breaks the memory-protection rules is (by convention) number 11, and its names are SIGSEGV and "Segmentation fault".5,6
An important difference between signals and interrupts is that there is a default behavior for every signal. If the operating system fails to define handlers for all interrupts, that is a bug in the OS, and the entire computer will crash when the CPU tries to invoke a missing handler. But processes are under no obligation to define signal handlers for all signals. If the kernel generates a signal for a process, and that signal has been left at its default behavior, the kernel will just go ahead and do whatever the default is and not bother the process. Most signals' default behaviors are either "do nothing" or "terminate this process and maybe also produce a core dump." SIGSEGV is one of the latter.
So, to recap, we have a process that broke the memory-protection rules. The CPU suspended the process and generated a synchronous interrupt. The kernel fielded that interrupt and generated a SIGSEGV signal for the process. Let's assume the process did not set up a signal handler for SIGSEGV, so the kernel carries out the default behavior, which is to terminate the process. This has all the same effects as the _exit system call: open files are closed, memory is deallocated, etc.
Up till this point nothing has printed out any messages that a human can see, and the shell (or, more generally, the parent process of the process that just got terminated) has not been involved at all. SIGSEGV goes to the process that broke the rules, not its parent. The next step in the sequence, though, is to notify the parent process that its child has been terminated. This can happen in several different ways, of which the simplest is when the parent is already waiting for this notification, using one of the wait system calls (wait, waitpid, wait4, etc). In that case, the kernel will just cause that system call to return, and supply the parent process with a code number called an exit status.7 The exit status informs the parent why the child process was terminated; in this case, it will learn that the child was terminated due to the default behavior of a SIGSEGV signal.
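To make that concrete, here is a minimal sketch (not part of the original question) of a parent that forks a child itself, rather than going through system, and so sees the SIGSEGV directly in the wait status:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void){
    pid_t pid = fork();
    if (pid == 0) {                  /* child: crash on purpose */
        volatile int *p = 0;
        *p = 42;                     /* invalid write -> SIGSEGV */
        _exit(0);                    /* never reached */
    }
    int status;
    waitpid(pid, &status, 0);        /* parent: collect the child's exit status */
    if (WIFSIGNALED(status))         /* terminated by a signal? */
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}

Run it and the parent prints "child killed by signal 11", which is exactly the information /bin/sh turns into its "Segmentation fault" message.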
The parent process may then report the event to a human by printing a message; shell programs almost always do this. Your crsh doesn't include code to do that, but it happens anyway, because the C library routine system runs a full-featured shell, /bin/sh, "under the hood". crsh is the grandparent in this scenario; the parent-process notification is fielded by /bin/sh, which prints its usual message. Then /bin/sh itself exits, since it has nothing more to do, and the C library's implementation of system receives that exit notification. You can see that exit notification in your code, by inspecting the return value of system; but it won't tell you that the grandchild process died on a segfault, because that was consumed by the intermediate shell process.
Footnotes
1. Some operating systems don't implement device drivers as part of the kernel; however, all interrupt handlers still have to be part of the kernel, and so does the code that configures memory protection, because the hardware doesn't allow anything but the kernel to do these things.
2. There may be a program called a "hypervisor" or "virtual machine manager" that is even more privileged than the kernel, but for purposes of this answer it can be considered part of the hardware.
3. The kernel is a program, but it is not a process; it is more like a library. All processes execute parts of the kernel's code, from time to time, in addition to their own code. There may be a number of "kernel threads" that only execute kernel code, but they do not concern us here.
4. The one and only OS you are likely to have to deal with anymore that can't be considered an implementation of Unix is, of course, Windows. It does not use signals in this situation. (Indeed, it does not have signals; on Windows the <signal.h> interface is completely faked by the C library.) It uses something called "structured exception handling" instead.
5. Some memory-protection violations generate SIGBUS ("Bus error") instead of SIGSEGV. The line between the two is underspecified and varies from system to system. If you've written a program that defines a handler for SIGSEGV, it is probably a good idea to define the same handler for SIGBUS.
6. "Segmentation fault" was the name of the interrupt generated for memory-protection violations by one of the computers that ran the original Unix, probably the PDP-11. "Segmentation" is a type of memory protection, but nowadays the term "segmentation fault" refers generically to any sort of memory protection violation.
7. All the other ways the parent process might be notified of a child having terminated, end up with the parent calling wait and receiving an exit status. It's just that something else happens first.
I can't seem to find any information on this aside from "the CPU's MMU sends a signal" and "the kernel directs it to the offending program, terminating it".
I assumed that it probably sends the signal to the shell and the shell handles it by terminating the offending process and printing "Segmentation fault". So I tested that assumption by writing an extremely minimal shell I call crsh (crap shell). This shell does not do anything except take user input and feed it to the system() method.
#include <stdio.h>
#include <stdlib.h>
int main(){
char cmdbuf[1000];
while (1){
printf("Crap Shell> ");
fgets(cmdbuf, 1000, stdin);
system(cmdbuf);
}
}
So I ran this shell in a bare terminal (without bash running underneath). Then I proceeded to run a program that produces a segfault. If my assumptions were correct, this would either a) crash crsh, closing the xterm, b) not print "Segmentation fault", or c) both.
braden@system ~/code/crsh/ $ xterm -e ./crsh
Crap Shell> ./segfault
Segmentation fault
Crap Shell> [still running]
Back to square one, I guess. I've just demonstrated that it's not the shell that does this, but the system underneath. How does "Segmentation fault" even get printed? "Who" is doing it? The kernel? Something else? How does the signal and all of its side effects propagate from the hardware to the eventual termination of the program?
| How does a Segmentation Fault work under-the-hood? |
It can.
There are two different out of memory conditions you can encounter in Linux. Which one you encounter depends on the value of sysctl vm.overcommit_memory (/proc/sys/vm/overcommit_memory).
Introduction:
The kernel can perform what is called 'memory overcommit'. This is when the kernel allocates programs more memory than is really present in the system. This is done in the hopes that the programs won't actually use all the memory they allocated, as this is a quite common occurrence.
overcommit_memory = 2
When overcommit_memory is set to 2, the kernel does not perform any overcommit at all. Instead when a program is allocated memory, it is guaranteed to have access to that memory. If the system does not have enough free memory to satisfy an allocation request, the kernel will just return a failure for the request. It is up to the program to gracefully handle the situation. If it does not check that the allocation succeeded when it really failed, the application will often encounter a segfault.
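The graceful handling amounts to something as simple as this sketch; code that skips the NULL check is the code that segfaults here:

#include <stdio.h>
#include <stdlib.h>

int main(void){
    size_t huge = (size_t)64 * 1024 * 1024 * 1024;  /* 64 GiB, likely to be refused */
    char *buf = malloc(huge);
    if (buf == NULL) {              /* the kernel denied the allocation */
        fprintf(stderr, "malloc of %zu bytes failed\n", huge);
        return 1;
    }
    buf[0] = 1;                     /* only touch the memory after the check */
    free(buf);
    return 0;
}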
In the case of the segfault, you should find a line such as this in the output of dmesg:
[1962.987529] myapp[3303]: segfault at 0 ip 00400559 sp 5bc7b1b0 error 6 in myapp[400000+1000]
The at 0 means that the application tried to access an uninitialized pointer, which can be the result of a failed memory allocation call (but it is not the only way).
overcommit_memory = 0 and 1
When overcommit_memory is set to 0 or 1, overcommit is enabled, and programs are allowed to allocate more memory than is really available.
However, when a program wants to use the memory it was allocated, but the kernel finds that it doesn't actually have enough memory to satisfy it, it needs to get some memory back.
It first tries to perform various memory cleanup tasks, such as flushing caches, but if this is not enough it will then terminate a process. This termination is performed by the OOM-Killer. The OOM-Killer looks at the system to see what programs are using what memory, how long they've been running, who's running them, and a number of other factors to determine which one gets killed.
After the process has been killed, the memory it was using is freed up, and the program which just caused the out-of-memory condition now has the memory it needs.
However, even in this mode, programs can still be denied allocation requests.
When overcommit_memory is 0, the kernel tries to take a best guess at when it should start denying allocation requests.
When it is set to 1, I'm not sure what heuristic it uses to decide when it should deny a request, but it can deny very large requests.
You can see if the OOM-Killer is involved by looking at the output of dmesg, and finding messages such as:
[11686.043641] Out of memory: Kill process 2603 (flasherav) score 761 or sacrifice child
[11686.043647] Killed process 2603 (flasherav) total-vm:1498536kB, anon-rss:721784kB, file-rss:4228kB |
I was running a shell script with commands to run several memory-intensive programs (2-5 GB) back-to-back. When I went back to check on the progress of my script I was surprised to discover that some of my processes were Killed, as my terminal reported to me. Several programs had already successively completed before the programs that were later Killed started, but all the programs afterwards failed in a segmentation fault (which may or may not have been due to a bug in my code, keep reading).
I looked at the usage history of the particular cluster I was using and saw that someone started running several memory-intensive processes at the same time and in doing so exhausted the real memory (and possibly even the swap space) available to the cluster. As best as I can figure, these memory-intensive processes started running about the same time I started having problems with my programs.
Is it possible that Linux killed my programs once it started running out of memory? And is it possible that the segmentation faults I got later on were due to the lack of memory available to run my programs (instead of a bug in my code)?
| Will Linux start killing my processes without asking me if memory gets short? |
A segmentation fault is the result of a memory access violation. The program has referred to a memory address outside of what was allocated to it, and the OS kernel responds by killing the program with SIGSEGV.
This is a mistake, since there is no point in trying to access inaccessible memory (it cannot be done). Mistakes of this sort are easy to make, however, particularly in languages such as C and C++ (which account for a lot of common applications). It indicates a bug in either the program itself or a library it links to. If you wish to report the bug (do -- this helps), it is a good idea to include a backtrace of the events that led up to the seg fault.
To do this, you can run the program inside gdb (the GNU debugger), which should be available from any linux distro if it is not installed already (the package will just be called "gdb"). If the broken application is called "brokenapp":
gdb brokenapp
A paragraph about copyright and licensing will appear, and at the end a prompt with the cursor:
(gdb) _
Type run and hit enter. If you need to supply arguments (e.g. -x --foo=bar whatever) append those (run -x --foo=bar whatever). The program will do what it does, you will see the output, and if you need to interact you can (note you can run any sort of program, including a GUI one, inside gdb). At the point where it usually segfaults you will see:
Program received signal SIGSEGV, Segmentation fault.
0x00000000006031c9 in ?? ()
(gdb) _
The second line of output here is just an example. Now type bt (for "backtrace") and hit enter. You'll see something like this, although it may be much longer:
(gdb) bt
#0 0x00000000006031c9 in ?? ()
#1 0x000000000040157f in mishap::what() const ()
#2 0x0000000000401377 in main ()If it is longer, you'll only get a screenful at a time and there will be a --More-- message. Keep hitting enter until it's done. You can now quit, the output will remain in your terminal. Copy everything from Program received signal SIGSEGV onward into a text file, and file a bug report with the application's bug tracker; you can find these online by searching, e.g. "brokenapp bug report" -- you will probably have to register so a reply can be sent to you by email. Include your description of the problem, any arguments you supplied to run, etc., and a copy of the backtrace (if it is very long, there may be a means to attach a text file in the bug tracker interface). Also include the version, if you know what it is (brokenapp --version may work, or the man page may indicate how to get this), and which distribution you are using.
Someone will hopefully get back to you in not too long. Filing bugs is a usually appreciated.
|
I have a command line application that when run does not do what it is supposed to do and at a certain point leaves the message:
Segmentation faultWhat does this mean? What should I do?
| Running application ends with "Segmentation Fault" |
If other people clean up ...
... you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find:
core_pattern is used to specify a core dumpfile pattern name.
If the first character of the pattern is a '|', the kernel will treat
the rest of the pattern as a command to run. The core dump will be
written to the standard input of that program instead of to a file.
(See Core dumped, but core file is not in the current directory? on StackOverflow)
According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory.
But what's in there?
Now what it contains is system specific, but according to the all-knowing encyclopedia:
[A core dump] consists of the recorded state of the working memory of a computer
program at a specific time[...]. In practice, other key pieces of
program state are usually dumped at the same time, including the
processor registers, which may include the program counter and stack
pointer, memory management information, and other processor and
operating system flags and information.
... so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault.
Yeah, but I'd like me to be happy instead of gdb
You can both be happy since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to analyze the specific failure instead of trying and failing to reproduce bugs.
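If you just want cores to land next to the binary, something like this usually does it (a sketch; the sysctl needs root, and the ulimit only affects the current shell):

$ ulimit -c unlimited                          # allow core files of any size
$ sudo sysctl -w kernel.core_pattern=core.%p   # plain file, %p expands to the PID
$ ./a.out                                      # segfaults and writes core.<pid>
$ gdb ./a.out core.<pid>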
|
When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) will be printed to the terminal (if any), and the program will be terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move onto gdb, recreating my previous action in order to trigger the invalid memory reference again. Instead, I thought I might be able to perhaps use this "core" instead, as running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault.
My questions are three:
Where is this elusive "core" dumped?
What does it contain?
What can I do with it?
| Segmentation fault (core dumped) - to where? what is it? and why?
(tl;dr: It's almost certainly a bug in your program or a library it uses.)
A segmentation fault indicates that a memory access was not legal. That is, based on the issued request, the CPU issues a page fault because the page requested either isn't resident or has permissions that are incongruous with the request.
After that, the kernel checks to see whether it simply doesn't know anything about this page, whether it's just not in memory yet and it should put it there, or whether it needs to perform some special handling (for example, copy-on-write pages are read-only, and this valid page fault may indicate we should copy it and update the permissions). See Wikipedia for minor vs. major (e.g. demand paging) vs. invalid page faults.
Getting a segmentation fault indicates the invalid case: the page is not only not in memory, but the kernel also doesn't have any remediative actions to perform because the process doesn't logically have that page of its virtual address space mapped. As such, this almost certainly indicates a bug in either the program or one of its underlying libraries -- for example, attempting to read or write into memory which is not valid for the process. If the address had happened to be valid, it could have caused stack corruption or scribbled over other data, but reading or writing an unmapped page is caught by hardware.
The reason why it works with your larger dataset and not your smaller dataset is entirely specific to that program: it's probably a bug in that program's logic, which is only tripped for the smaller dataset for some reason (for example, your dataset may have a field representing the total number of entries, and if it's not updated, your program may blindly read into unallocated memory if it doesn't do other sanity checks).
It's several orders of magnitude less likely than simply being a software bug, but a segmentation fault may also be an indicator of hardware issues, like faulty memory, a faulty CPU, or your hardware tripping over errata (as an example, see here).
Getting segfaults due to failing hardware often results in sometimes-works behaviour, although a bad bit in physical RAM might get mapped the same way in repeated runs of a program if you don't run anything else in between. You can mostly rule out this possibility by booting memtest86+ to check for failing RAM, and using software like Prime95 to stress-test your CPU (including the FP math FMA execution units).
You can run the program in a debugger like gdb and get the backtrace at the time of the segmentation fault, which will likely indicate the culprit:
% gdb --args ./foo --bar --baz
(gdb) r # run the program
[...wait for segfault...]
(gdb) bt # get the backtrace for the current thread |
I am currently running a statistical modelling script that performs a phylogenetic ANOVA. The script runs fine when I analyse the full dataset. But when I take a subset, it starts analysing but quickly terminates with a segmentation fault. I cannot really figure out by googling whether this could be due to a problem on my side (e.g. a sample dataset too small for the analysis) and/or a bug in the script, or whether it has something to do with my Linux system. I read that it has to do with writing data to memory, but then why is everything fine with a larger dataset? I tried to find more information using Google, but this made it more complicated.
Thanks for clarifying in advance!
| Is a "segmentation fault" a system error or program bug? |
I finally figured it out through a process of trial-and-error. The solution is kind of convoluted:
(trap 'true' ERR; exec ttf2afm "$FONT") |
grep ...
Apparently the exec causes ttf2afm to take over the subshell process with the trapped error, causing it to operate in an environment where it doesn't matter if it segfaults.
Trapping the all-inclusive ERR condition will stop the subshell from dying and sending a signal to the main script (which would terminate immediately if it did) when the program fails.
The only problem is that the kernel itself will output a whole bunch of stack trace garbage directly to the console device once the process segfaults, so there's no way to prevent it from being output [that I know of], but that doesn't matter as it doesn't affect stdout or stderr.
|
I have a script that calls a program (specifically, ttf2afm, part of tetex 3.0) that sometimes segfaults and sometimes doesn't. The information I need is always printed out before it segfaults, but I'm having a hard time stopping the pipe redirection from failing and not outputting anything to the pipe when the program fails.
I've tried redirecting through a FIFO, parenthesizing the process with a true at the end, executing from a shell function and encasing in sh -c, but the script never seems to let the process output anything, redirected or otherwise–not even to stderr.
I know it is capable of output, being that it's perfectly capable of giving it from the command-line, but not from a script for some reason.
My question is, is there any way for the script to ignore the fact that the program segfaults and give me the output anyway?
I'm running BASH 4.1.10(2)-release.
| Piping output from a segfaulting program |
Those sections are marked GNU_RELRO (readonly relocations), which means that as soon as the dynamic loader has fixed up (at load time, there are no lazy relocations there) all the relocations, it marks those sections read-only. Note that most of .got.plt is on another page, so doesn't get the treatment.
You can see the linker script with ld --verbose, if you search for RELRO you'll find something similar to:
.got : { *(.got) }
. = DATA_SEGMENT_RELRO_END (12, .);
.got.plt : { *(.got.plt) }
which means that the RELRO sections end 12 bytes into .got.plt (pointers to dynamic linker functions are already resolved, so can be marked read-only).
The hardened Gentoo project has some documentation about RELRO at http://www.gentoo.at/proj/en/hardened/hardened-toolchain.xml#RELRO.
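You can inspect the result on your own binary with readelf (a sketch; exact output varies by toolchain):

$ readelf -l ./test | grep -A1 GNU_RELRO   # the segment remapped read-only after relocation
$ readelf -d ./test | grep BIND_NOW        # present for "full" RELRO (-z relro -z now) binaries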
|
This is Ubuntu 9.04, 2.6.28-11-server, 32bit x86
$ cat test.c
main() { int *dt = (int *)0x08049f18; *dt = 1; }
$ readelf -S ./test
...
[18] .dtors PROGBITS 08049f14 000f14 000008 00 WA 0 0 4
...
$ ./test
Segmentation fault
$
For the uninitiated: gcc creates a destructor segment, .dtors, in the elf executable, which is called after main() exits.
I realize there has been a movement toward readonly .dtors, plt, got lately, but what I don't understand is the mismatch between readelf and the segfault.
| .dtors looks writable, but attempts to write segfault |
I don't know how this remote 3D works but if the client is indeed trying to run the amd64 executable, this is definitely the reason this message appears.
|
(Follow-up on How to efficiently use 3D via a remote connection?)
I installed the amd64 package on the server and the i386 one on the client. Following the user's guide I run this on the client:
me@client> /opt/VirtualGL/bin/vglconnect me@server
me@server> /opt/VirtualGL/bin/vglrun glxgears
This causes a segfault; using vglconnect -s for an ssh tunnel doesn't work either. I also tried the TurboVNC method, where starting vglrun glxgears works, but I'd prefer transmitting only the application window using the jpeg compression. Is the problem 32 <-> 64 bit? Or how can I fix things?
| Segmentation fault when trying to run glxgears via virtualGL |
The SIGSEGV signal is sent by the kernel to a process that has made an invalid virtual memory reference (segmentation fault).
One way sending a SIGSEGV could be more "dangerous" is if you kill a process that would dump core to a filesystem that is low on space. The default action when a process receives a SIGSEGV is to dump core to a file then terminate. The core file could be quite large, depending on the process, and could fill up the filesystem.
As @Janka has already mentioned, you can write code to tell your program how you want it to handle a SIGSEGV signal. You can't trap a SIGKILL or a SIGSTOP. I would suggest using a SIGKILL or a SIGSTOP when you only want to terminate a process. Using a SIGSEGV usually won't have bad repercussions, but it's possible the process you want to terminate could handle a SIGSEGV in a way you don't expect.
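A harmless way to see the difference from a parent's point of view (just a throwaway shell sketch): a fatal signal shows up in the exit status as 128 plus the signal number, so 139 for SIGSEGV and 137 for SIGKILL:

$ sleep 100 & kill -SEGV $!; wait $!; echo $?   # prints 139 (128 + 11)
$ sleep 100 & kill -KILL $!; wait $!; echo $?   # prints 137 (128 + 9)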
|
I've realised recently that the kill utility can send any signal I want, so when I need to SIGKILL a process (when it's hanging or something), I send a SIGSEGV instead for a bit of a laugh (kill -11 instead of kill -9.)
However, I don't know if this is bad practice. So, is kill -11 more dangerous than kill -9? If so, how?
| Is there any danger in using kill -11 instead of kill -9? |
Since you can log in, nothing major is broken; presumably your shell’s startup scripts add ~/lib to LD_LIBRARY_PATH, and that, along with the bad libraries in ~/lib, is what causes the issues you’re seeing.
To fix this, run
unset LD_LIBRARY_PATH
This will allow you to run rm, vim etc. to remove the troublesome libraries and edit your startup scripts if appropriate.
|
I am connected with SSH to a machine on which I don't have root access. To install something I uploaded libraries from my machine and put them in the ~/lib directory of the remote host.
Now, for almost any command I run, I get the error below (example is for ls) or a Segmentation fault (core dumped) message.
ls: relocation error: /lib/libpthread.so.0: symbol __getrlimit, version
GLIBC_PRIVATE not defined in file libc.so.6 with link time reference
The only commands I have successfully run so far are cd and pwd. I can pretty much find files in a directory by using TAB to autocomplete ls, so I can move through directories.
uname -r also returns the Segmentation fault (core dumped) message, so I'm not sure what kernel version I'm using.
| Almost no commands working - relocation error: symbol __getrlimit, version GLIBC_PRIVATE not defined in libc.so.6 |
After much searching, I found the answer in a Debian bug post exchange:
If the okular package is not installed, kile can not start and crashes on a segmentation fault.
The solution was to run
sudo apt-get install okular |
With the latest version of Kile installed, trying to run it crashes with a segmentation fault:
$ kile
qt5ct: using qt5ct plugin
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/16/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/16@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/16/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/16@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/22/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/22@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/24/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/24@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/24/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/24@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/32/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/32@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/32/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/32@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/48/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/48@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/48/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/48@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/64/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/64@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/64/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/64@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/96/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/96@2x/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/128/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/128@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/256/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/256@2x/"
Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/symbolic/"
Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/symbolic/"
kf5.kio.core: Refilling KProtocolInfoFactory cache in the hope to find "mtp"
kf5.kservice.services: KServiceTypeTrader: serviceType "ThumbCreator" not found
No text-to-speech plug-ins were found.
Segmentation fault (core dumped)
How can Kile be run in Linux Mint 19?
| Trying to run Kile gives a segmentation fault on Linux Mint 19 |
The undocumented semantics of si_code = SI_KERNEL with si_errno = 0 are:
processor-specific traps,
kernel segment memory violation (except for semaphore access)
ELF file format violations, and
stack violations.
All other SIGSEGVs should have si_errno set to a non-zero value. Read on for the details.
When the kernel sets up a userspace process, it defines a table of virtual memory pages for the process. When the kernel scheduler runs the process, it reconfigures the CPU's memory management unit (MMU) according to the page table for the process.
When a userspace process attempts to access memory that is outside of its page table, the CPU MMU detects this violation and generates an exception. Note that this happens at the hardware level. The kernel is not involved yet.
The kernel is set up to handle MMU exceptions. It catches the exception caused by the running proccess's attempt to access memory outside of its page table. The kernel then calls do_page_fault() which sends the SIGSEGV signal to the process. This is why the signal comes from the kernel and not from the process itself or from another process.
This is a highly simplified explanation of course. The best simple explanation that I have seen of this is the "Page Faults" section of William Gatliff's beautiful article The Linux Kernel’s Memory Management Unit API.
Note that on CPU's without an MMU, such as the Blackfin MPU's, Linux userspace processes can generally access any memory. i.e. there is no SIGSEGV signal for memory violations (only for traps such as stack overflow) and debugging memory access problems can be tricky.
I second jordanm's comment regarding setting the ulimit and inspecting the core file with gdb. You can do ulimit -c unlimited from the command line if you run the process from a shell, or use the libc setrlimit system call wrapper (man setrlimit) in your program. You can set the name of the core file and its location by in file /proc/sys/kernel/core_pattern. See A.P. Lawrence's excellent gloss on this at Controlling core files (Linux). To use gdb on the corefile, see this little tutorial on Steve.org.
A segmentation violation with si_code SEGV_MAPERR (0x1) is likely a null pointer dereference, an access of non-existent memory such as 0xfffffc0000004000, or malloc and free problems. Heap corruption or process exceeding its runtime limits (man getrlimit) in the case of malloc and double free or free of non-allocated address in the case of free. Look at the si_errno element for more clues.
A segmentation violation that occurs as a result of userspace process accessing virtual memory above the TASK_SIZE limit will cause a segmentation violation with an si_code of SI_KERNEL. In other words, the TASK_SIZE limit is the highest virtual address that any process is allowed to access. This is normally 3GB unless the kernel is configured for high memory support. The area above the TASK_SIZE limit is referred to as the "kernel segment". See linux-2.6//arch/x86/mm/fault.c:__bad_area_nosemaphore(...) where it calls force_sig_info_fault(...).
For each architecture there are also a number of specific traps that cause a SISEGV with SI_KERNEL. For x86 these are defined by the DO_ERROR macros in linux-2.6//arch/x86/kernel/traps.c.
The OOM handler sends SIGKILL, not SIGSEGV as can be seen in function linux-2.6//mm/oom_kill.c:oom_kill_process(...) at about line 498:
do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
for related processes and line 503:
do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
for the process that was the proximal cause of the OOM.
You can get more information by looking at the wait status of the process that was killed from its parent process and possibly by looking at dmesg or better, by configuring the kernel log and looking at it.
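For reference, a minimal sketch of a handler that dumps the fields discussed above (strictly speaking fprintf is not async-signal-safe, so keep this for debugging only):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void on_segv(int sig, siginfo_t *si, void *ctx)
{
    (void)ctx;
    fprintf(stderr, "sig=%d si_code=%#x si_errno=%d si_addr=%p\n",
            sig, si->si_code, si->si_errno, si->si_addr);
    _Exit(1);                       /* don't return into the faulting code */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;       /* request the siginfo_t argument */
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *p = 0;
    *p = 1;                         /* provokes si_code = SEGV_MAPERR (0x1) */
    return 0;
}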
|
I've got a long-running program (becomes a daemon with daemon(3) call) that exits on Signal 11 (Segmentation Violation) every so often. I can't tell why. So, I wrote a SIGSEGV handler, set using the sigaction() system call. I set the handler function so that it has this prototype: void (*sa_sigaction)(int, siginfo_t *, void *) which means it gets a pointer to a siginfo_t structure as a formal argument.
On the occasion of a mysterious SIGSEGV, the si_code element of the siginfo_t has a value of 0x80, which means, according to the sigaction man page, "The kernel" sent the signal. This is on a Red Hat RHEL system: Linux blahblah 2.6.18-308.20.1.el5 #1 SMP Tue Nov 6 04:38:29 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
Why does the kernel send a SIGSEGV? Is this from the famed OOM-Killer, or does some other reason exist for getting a SIGSEGV? As a mere user on this system, I can't see /var/log/message, and the sysadmins are more than a bit aloof, probably because they come from a Windows background.
A SIGSEGV generated on purpose (dereferencing a NULL pointer) does not get an si_code value of 0x80, it gets 0x1, which means "address not mapped to object".
| sigaction(7): semantics of siginfo_t's si_code member |
If the segmentation fault produces a "core" file, you can run file <core-filename> to identify the executable. You can also use ddd or gdb to debug the core file for more information.
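For example (a sketch; the file names depend on your core_pattern settings):

$ file core                  # names the executable that dumped the core
$ gdb /path/to/that-prog core
(gdb) bt                     # backtrace at the moment of the crash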
|
I have a Busybox/Linux system where a mystery program is segfaulting rarely. Is there a way to find which program is doing this?
| Is there a way to find out which program is segfault-ing? |
The core(5) manpage describes the parameters affecting core dumps in detail, including their naming etc.
To answer your stated question, there is no generalisable way to find a core dump. By default, core is dumped in the process's current working directory, if the process is allowed to write there, if there's enough room on the containing filesystem, if there's no existing core dump (under some circumstances), and if the file size and core file size limits (as set by ulimit or similar mechanisms) allow it. But /proc/sys/kernel/core_pattern provides many different ways of processing core dumps, so you really need to look at that too and figure out what's going on.
In your case, I don't know why the core couldn't be found initially, but I do know why you stopped getting cores after setting the redirection up: when using a pipe in core_pattern, the processing program must be specified using an absolute pathname. tee on its own won't be used; you need to specify /usr/bin/tee. Note that you should take particular care with this type of setup on multi-user systems, because the program run to process the core dump is run as root.
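So for your exact setup, the fix is just the absolute path (a sketch; the target directory must be writable by root, since that's who runs the helper):

$ echo '|/usr/bin/tee /home/me/my_core_folder/my_core_file' | sudo tee /proc/sys/kernel/core_pattern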
On Debian derivatives I install corekeeper, which writes core dumps in an easily-usable manner to per-user directories under /var/crash.
|
Scenario (Ubuntu 16.04):
I compile and run a C program (with -g), I get the traditional Segmentation Fault (core dumped), and then (of course) there is no mythical "core" file to be found. Some digging says to modify /proc/sys/kernel/core_pattern with a command to the effect of: echo '|tee /home/me/my_core_folder/my_core_file' | sudo tee /proc/sys/kernel/core_pattern, and after doing this, I stop getting (core dumped) and start only getting the plain Segmentation Fault. I try things like gdb ./program_object_file.out core.pid which obviously doesn't exist (I was getting desperate), and of course try the plain gdb ./a.out followed by (gdb) core core.pid and variants of the commands where I spam the tab key desperately trying to get auto-completion to get me to where I need to be.
Question:
Is there a generalized way I can get to core dumps? I recognize that every machine that I touch seems to have a Michael Bay's Transformers-esque ability to reconfigure hardware and software such that no device I own can be expected to work normally out-of-the-box. Is there a simple algorithm/recipe I can follow to locate core dumps on my own machine as well as on other peoples' machines? I always find myself tutoring friends on stuff like this after no small amount of work to get things working for myself and it would be nice to be able to run a command or something to get core files dumped to the directory which the executable was run from... is there any way to do this that should work on most (I would settle for "some") Linux/Unix machines?
| How to view Core file (general) |
If the "$name" it's processing is a directory, you need to call func on its contents, and not the original argument, or you get an infinite loop, and hence the segfault.
Your code can be greatly reduced by using a function on the original argument and having the function apply to each item separately. Right now you're repeating most of what happens in the function in the main body anyway.
#!/bin/bash
func () {
local arg="$1"
if [[ -f "$arg" ]] ; then
file -- "$arg"
return
fi
if [[ -d "$arg" ]] ; then
for file in "$arg"/* ; do
func "$file"
done
fi
}func "$1" |
My assignment is to write a bash script that reads a directory and returns the file type of each file within, including all subdirectories. Using the find command is not allowed. I've tried to implement this using essentially two for loops, but I'm getting a segmentation fault. I found that the script does work however when I pass a directory without subdirectories. Would someone be willing to look over a noob's code and tell me what's wrong? Thank you very much.
#!/bin/bash
func () {
for name in $1
do
if [ -f "$name" ]
then
file "$name"
elif [ -d "$name" ]
then
func "$1"
fi
done
}directory="$1/*"
for name in $directory
do
if [ -f "$name" ]
then
file "$name"
elif [ -d "$name" ]
then
func "$1"
fi
done
| Bash script segmentation fault
Okay, I eventually found it. It's called debug.exception-trace.
sysctl -w debug.exception-trace=0 or echo 0 > /proc/sys/debug/exception-trace will turn it off.
# dmesg -c
# dmesg
# echo 'main;' | gcc -xc - && ./a.out
<stdin>:1:1: warning: data definition has no type or storage class
Segmentation fault
# dmesg
[ 539.421736] a.out[1875]: segfault at 600874 ip 0000000000600874 sp 00007ffed85e7018 error 15 in a.out[600000+1000]
# sysctl debug.exception-trace
debug.exception-trace = 1
# ./a.out
Segmentation fault
# dmesg
[ 539.421736] a.out[1875]: segfault at 600874 ip 0000000000600874 sp 00007ffed85e7018 error 15 in a.out[600000+1000]
[ 584.492171] a.out[1878]: segfault at 600874 ip 0000000000600874 sp 00007ffc6b0d2358 error 15 in a.out[600000+1000]
# sysctl -w debug.exception-trace=0
debug.exception-trace = 0
# ./a.out
Segmentation fault
# dmesg
[ 539.421736] a.out[1875]: segfault at 600874 ip 0000000000600874 sp 00007ffed85e7018 error 15 in a.out[600000+1000]
[ 584.492171] a.out[1878]: segfault at 600874 ip 0000000000600874 sp 00007ffc6b0d2358 error 15 in a.out[600000+1000]The last segfault is not logged.
|
The Linux kernel keeps logging segfaults in the ring buffer.
a.out[25415]: segfault at 8049604 ip 08049604 sp bf88e3fc error 15 in a.out[8049000+1000]Is there a way to temporarily disable it? My usecase is that I'm running a testsuite that does all sorts of crazy things and I don't want to see segfaults in dmesg from that time period. dmesg -c is not an option because the test framework I'm forced to use is analyzing dmesg output and I cannot just clear it in the middle.
I was going through sysctl -a output if there's some kernel parameter, but I don't see anything that appears to be useful. kernel.print-fatal-signals looked promising but it's just for showing more detailed info.
| Is there a way to temporarily disable segfault messages in dmesg? |
No, it's not your Nvidia card that is at fault. Neither really is Chrome, either.
What happens first is that the Nvidia software crashes, stopping the render pipeline. Then, after a few seconds, chrome detects the GPU not rendering any more, tries to handle that, fails, and throws the segfault.
When the machine is in that crashed state, and you ssh into it and run "top", you'll see two processes irq/75 nvidia and nv_queue alternately running at 100% cpu (the interrupt number may be different on your system).
Also, a few seconds before the GpuWatchdog, your syslog probably contains some messages from the nvidia driver:
Feb 10 17:00:24 natascha kernel: [157260.734117] NVRM: GPU at PCI:0000:08:00: GPU-f622f482-2ad1-4992-4d8a-9d62b465e084
Feb 10 17:00:24 natascha kernel: [157260.734120] NVRM: GPU Board Serial Number:
Feb 10 17:00:24 natascha kernel: [157260.734124] NVRM: Xid (PCI:0000:08:00): 61, pid=1391, 0cde(308c) 00000000 00000000
Reports of the problem are all over the internet; I didn't find any fixes yet. I had the same problem on my new PC, not running chrome didn't prevent the crash but prevented the syslog message; reverting to 430 drivers from 435 made the problem go away (so far).
Update: The crash happens with 430 drivers as well. The 440 drivers, not part of Ubuntu, seem to fix this though. At least I didn't have the problem any more, and the post by amrits on https://devtalk.nvidia.com/default/topic/1060783/linux/random-xid-61-and-xorg-lock-up/7 confirms this.
As the 440 drivers are not part of the Ubuntu distribution, here's what I did - I got this info from https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-19-10-eoan-ermine-linux which is about Ubuntu 19.10, but works on 18.04 as well:
sudo -i
add-apt-repository ppa:graphics-drivers/ppa
apt update
At this point, ubuntu-drivers devices should output, among other things,
# ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0 ==
modalias : pci:v000010DEd00001F02sv000010DEsd00001F02bc03sc00i00
vendor : NVIDIA Corporation
driver : nvidia-driver-440 - third-party free recommended
then you can install the driver
apt install nvidia-driver-440
and as you need to reboot anyway to make the new driver active, I recommend updating the rest of your software as well:
apt upgrade
apt autoremove
reboot
Update Jun 15 - There is still no driver fix according to nvidia, they weren't able to reproduce the problem. See the thread at their forum. However, it seems like the issue happens on some mainboard/GPU combinations when the GPU goes from power save mode to a mode where it uses more power. Forcing the GPU to a higher frequency seems to prevent this, and some users report the following to work as a workaround:
nvidia-smi -pm ENABLED
sudo nvidia-smi -lgc 1000,1815
(This must be repeated at each reboot)
This sets a permanent (until reboot) higher frequency for the card, resulting in more power consumption and possibly less lifetime, but seems to work around the crash, so may be preferable to many users.
|
System
Linux Mint 19.3 Cinnamon 64-bit, based on Ubuntu 18.04 LTS.
Related Hardware
GPU: NVIDIA, GeForce GTX 1060, Max-Q Design, 6 GB VRAM
CPU: Intel Core i7-7700HQ
Could anyone tell me if the following means anything special, like that my Nvidia card is faulting? Could it be just a software error on Google Chrome's (stable) side, or in the nvidia-435 driver? How do I find out?
I just know my computer froze for a second or two, and this:
dmesg trail
[Thu Jan 16 16:01:38 2020] show_signal_msg: 23 callbacks suppressed
[Thu Jan 16 16:01:38 2020] GpuWatchdog[18858]: segfault at 0 ip 000055a9a5a6077d sp 00007f033f76c6c0 error 6 in chrome[55a9a1b25000+7170000]
[Thu Jan 16 16:01:38 2020] Code: 48 c1 c9 03 48 81 f9 af 00 00 00 0f 87 c9 00 00 00 48 8d 15 19 61 9c fb f6 04 11 20 0f 84 b8 00 00 00 be 01 00 00 00 ff 50 30 <c7> 04 25 00 00 00 00 37 13 00 00 c6 05 f1 6b a4 03 01 80 7d 8f 00
What I was doing at that time
I was playing an HTML5 game (Forge of Empires).
| Segfault in Google Chrome - is it Nvidia card related? How do I find out? |
Your guesses seem about 100% correct.
There is hardware called a memory management unit (MMU), part of the CPU. It is given page tables that describe what pages allow what (which are executable, readable, writable). If a process tries to do something it is not allowed to do, then the MMU interrupts the CPU. The CPU then executes the code starting at a particular address. This address is defined in the interrupt vector table: a table of start addresses, one for each interrupt type (some CPUs store instructions in this table, not addresses, but they do the same thing).
|
I have a question about how Linux traps memory access errors. As far as I know, a user space program doesn't need to ask the operating system every time it wants to access memory; so when a process tries to access a memory location not in its address space, the CPU must have a way to stop this and communicate the event to the OS.
So my question is:
How does the CPU inform the OS about this event?
Does it start executing some predefined code? If yes, please let me know where in memory that code is, what that code section is called, and what it does.
| How Linux finds out about illegal memory access error? |
I installed the debuginfo tools as suggested by gdb and then I got the expression responsible for the crash:
#20 0x0000000000457ac8 in expand_compound_array_assignment (
var=<value optimized out>,
value=0x150c660 "$(logPath \"$@\")", flags=<value optimized out>
)
So now I know what and where the issue is.
In my case it was in a function sourced in the .bashrc and the root cause was this wrong redefinition of the map variables in Bash:
declare -A myMap
local myMap=""...
for key in "${!myMap[@]}"; do
echo ${myMap[$key]}
done
This function was called inside a sub-shell which caused the 'segmentation fault' error output to be hidden.
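For anyone hitting the same thing, a corrected version of that pattern would look roughly like this (a sketch, not the original function):

myFunc() {
    local -A myMap=()           # local AND associative; no plain-string redefinition
    myMap[key1]="value1"
    for key in "${!myMap[@]}"; do
        echo "${myMap[$key]}"
    done
}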
|
On my production server, running Red Hat Linux (v6), I frequently get core dumps from bash. This occurs from a couple of times a day to dozens of times a day.
TL;DR
Resolution: install the bash-debuginfo package to get more details from the core and locate the statement which caused the crash.
Cause: in this case it was a bug (lists.gnu.org/archive/html/bug-bash/2010-04/msg00038.html), reported in April 2010 against 4.1 and fixed in 4.2 (released in early 2011), that was still present in my old version of bash.
Details
This server runs a single web application (apache + cgi-bin) and many batches.
The webapp CGI (a C program) makes system() calls very often.
There's not so much shell interaction, so the core dump is probably caused by some service or the webapp and I must know what is causing this error.
The coredump backtrace is a bit dry (see below).
How can I get more details about the error? I would like to know the full parent process chain, the current variables and the env, what was the executed script and/or command...
I have the audit system enabled, but the audit lines about this are a bit dry too. Here is one example:
type=ANOM_ABEND msg=audit(1516626710.805:413350): auid=1313 uid=1313 gid=22107 ses=64579 pid=8655 comm="bash" sig=11And this is the core backtrace:
Core was generated by `bash'.
Program terminated with signal 11, Segmentation fault.
#0 0x000000370487b8ec in free () from /lib64/libc.so.6
#0 0x000000370487b8ec in free () from /lib64/libc.so.6
#1 0x000000000044f0b0 in hash_flush ()
#2 0x0000000000458870 in assoc_dispose ()
#3 0x0000000000434f55 in dispose_variable ()
#4 0x000000000044f0a7 in hash_flush ()
#5 0x0000000000433ef3 in pop_var_context ()
#6 0x0000000000434375 in pop_context ()
#7 0x0000000000451fb1 in ?? ()
#8 0x0000000000451c84 in run_unwind_frame ()
#9 0x000000000043200f in ?? ()
#10 0x000000000042fa18 in ?? ()
#11 0x0000000000430463 in execute_command_internal ()
#12 0x000000000046b86b in parse_and_execute ()
#13 0x0000000000444a01 in command_substitute ()
#14 0x000000000044e38e in ?? ()
#15 0x0000000000448d4e in ?? ()
#16 0x000000000044a1b7 in ?? ()
#17 0x0000000000457ac8 in expand_compound_array_assignment ()
#18 0x0000000000445e79 in ?? ()
#19 0x000000000044a264 in ?? ()
#20 0x000000000042ee9f in ?? ()
#21 0x0000000000430463 in execute_command_internal ()
#22 0x000000000043110e in execute_command ()
#23 0x000000000043357e in ?? ()
#24 0x00000000004303bd in execute_command_internal ()
#25 0x0000000000430362 in execute_command_internal ()
#26 0x0000000000432169 in ?? ()
#27 0x000000000042fa18 in ?? ()
#28 0x0000000000430463 in execute_command_internal ()
#29 0x000000000043110e in execute_command ()
#30 0x000000000041d6d6 in reader_loop ()
#31 0x000000000041cebc in main ()
Update:
The system is running in a virtual machine handled by VMware.

What version of bash?

GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)

What version of libc and other libs are linked to bash?

ldd (GNU libc) 2.12

(What are the other libs linked to bash? Is there a command to get the details in a row?)

Does this happen while running a script or an interactive shell, or both? If a script, does it only happen on one script, or on several, or any? What, in general terms, kind of task is your bash script doing? Do you get seg faults from other processes? Have you run a memory test on your server? Does it have ECC RAM?

As stated in my question: I don't know, but it should be caused by some scheduled scripts or by some system call from inside the interactive webapp.
It could also be a 'script in a script' like in this kind of construct:
myVar=$($(some command here ($and here too))

However I feel that the issue is probably not a physical issue with the RAM, as there's no other random crash, just this one, and we also have it on 2 separate VMs running on 2 separate physical machines.
Update 2:
From the stack I have the feeling that the issue may be related to associative arrays:
#1 0x000000000044f0b0 in hash_flush ()
#2 0x0000000000458870 in assoc_dispose ()
#3 0x0000000000434f55 in dispose_variable ()
#4 0x000000000044f0a7 in hash_flush ()

And these kinds of variables are in almost all of our custom scripts: there is one main script, used as a lib, that contains common variables and functions for our system.
This script is sourced in almost every one of our scripts.
| How to find out why bash exits with signal 11, Segmentation fault
A ps aux run during the test reveals lines like:
tange 1471203 0.0 0.0 264173920 3776 pts/1 T 20:54 0:00 sort --buffer-size=50% -k3r

264173920 is 50% of 500 GB, and there are 20 of those.
meminfo says:
$ grep Committ /proc/meminfo
Committed_AS: 5291525876 kB

So my assumption that 500 GB would be enough was wrong.
Removing --buffer-size=50% gives:
$ grep Committ /proc/meminfo
Committed_AS: 45391448 kB

and the test completes with no problem with /proc/sys/vm/overcommit_memory=2.
All in all this explains most of the situation: running sort --buffer-size=50% gobbles up a lot of (virtual) memory and since overcommit_memory=2 requires the virtual memory to be available, there is no more memory available to other processes.
With overcommit_memory=0 the memory does not need to be available and thus nothing fails (since only a tiny amount of the memory is ever used).
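The effect is easy to reproduce outside of sort with a few lines of C (the 8 GiB figure below is arbitrary; any request above the commit limit behaves the same way):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t big = (size_t)8 * 1024 * 1024 * 1024; /* 8 GiB of address space */
    char *p = malloc(big);
    if (p == NULL) {
        /* With vm.overcommit_memory=2 this fails once Committed_AS
           would exceed the commit limit, even though no RAM is used yet. */
        perror("malloc");
        return 1;
    }
    p[0] = 1; /* touch a single page: only a few KiB of real memory */
    puts("allocation succeeded; almost none of it is backed by RAM");
    free(p);
    return 0;
}

With overcommit_memory=0 the malloc succeeds and the process uses almost no physical memory; with overcommit_memory=2 it fails as soon as the commit limit would be exceeded.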
I can now provoke the issue with:
parallel '(seq {};sleep 10) | sort --buffer-size=50%' ::: {1..20}

What annoys me, though, is why sort is not complaining "sort: memory exhausted" or showing up in dmesg. This would have guided me much more quickly towards the error.
|
I can provoke a race condition that gives output similar to this in dmesg:
[ 5432.541379] perl[408327]: segfault at 22 ip 0000564eb8af9cc2 sp 00007ffec318cea0 error 6 in perl[564eb8af7000+1a1000]
[ 5432.541402] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5432.541638] Core dump to |/usr/share/apport/apport pipe failed
[ 5432.660093] perl[408400]: segfault at 22 ip 00005654e7ec3cc2 sp 00007ffe47312cc0 error 6
[ 5432.660106] perl[408415]: segfault at 22 ip 000055b15d088cc2 sp 00007ffe67124210 error 6
[ 5432.660119] in perl[5654e7ec1000+1a1000]
[ 5432.660131] in perl[55b15d086000+1a1000]
[ 5432.660133] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5432.660142] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5432.660221] sleep[408436]: segfault at 0 ip 00007f18c67150b2 sp 00007ffdaf402820 error 4 in ld-linux-x86-64.so.2[7f18c66fa000+2a000]
[ 5432.660248] Code: 00 00 00 00 00 0f 1f 00 41 55 48 8d 05 50 1e 01 00 49 89 f5 49 89 c9 41 54 49 89 d4 48 89 c2 48 81 ec 18 04 00 00 85 ff 75 53 <41> 80 7d 00 00 48 8d 0d 2b 1e 01 00 4c 8d 05 d4 11 01 00 4c 0f 44
[ 5432.660417] Core dump to |/usr/share/apport/apport pipe failed
[ 5432.660480] Core dump to |/usr/share/apport/apport pipe failed
[ 5432.660543] Core dump to |/usr/share/apport/apport pipe failed
[ 5432.660593] perl[408406]: segfault at 22 ip 000055d5887c3cc2 sp 00007ffcf1af5220 error 6 in perl[55d5887c1000+1a1000]
[ 5432.660629] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5432.660888] Core dump to |/usr/share/apport/apport pipe failed
[ 5432.661682] perl[408391]: segfault at 22 ip 00005645d25a8cc2 sp 00007ffc836eb8b0 error 6 in perl[5645d25a6000+1a1000]
[ 5432.661718] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5432.661969] Core dump to |/usr/share/apport/apport pipe failed
[ 5433.228271] perl[408513]: segfault at 22 ip 000055bc88f1bcc2 sp 00007ffc31bb1ab0 error 6 in perl[55bc88f19000+1a1000]
[ 5433.228302] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5433.306971] perl[408642]: segfault at 22 ip 000055e76e66dcc2 sp 00007ffd37469c20 error 6 in perl[55e76e66b000+1a1000]
[ 5433.306999] Code: 83 f8 05 0f 87 cf 00 00 00 0f b7 6b 22 66 81 fd 00 04 77 64 01 ed 8d 7d 05 48 63 ff 48 c1 e7 03 be 01 00 00 00 e8 4e ef ff ff <66> 89 68 22 48 89 c3 66 89 68 24 4c 89 68 08 49 8b 45 00 48 89 03
[ 5433.307203] Core dump to |/usr/share/apport/apport pipe failed
[ 5433.820922] perl[408816]: segfault at 20 ip 0000557b90fb3463 sp 00007ffcd78bb6f0 error 4 in perl[557b90f88000+1a1000]
[ 5433.820953] Code: 89 df e8 60 9a 0e 00 48 8b 83 e0 00 00 00 48 8b 40 10 48 8b 13 48 85 c0 0f 85 79 ff ff ff e8 44 fc 06 00 48 8b 83 e0 00 00 00 <83> 78 20 00 79 2d 83 7b 30 00 7f 1b 48 8b bb f8 02 00 00 48 83 3f
[ 5433.821219] Core dump to |/usr/share/apport/apport pipe failed

(How can sleep, of all programs, segfault?!)
Now and then I have even seen it take down other programs on the machine.
Unfortunately the program to generate the race condition is quite big: (https://git.savannah.gnu.org/cgit/parallel.git/tree/testsuite/tests-to-run/parallel-local-30s.sh) and I cannot make it much smaller without the race condition disappearing.
The test spawns in total more than 10000 communicating perl processes + ordinary shell programs (sleep, sort, md5sum, bash, paste, wc).
I have tested that this problem can be reproduced on my laptop and my 512GB server (so it is not caused by, say, bad RAM, overheating or out-of-memory).
How do I debug this and make this into a decent bug report for the relevant people? (And who are the relevant people? If both perl and sleep segfault, maybe we are looking at a race condition in the kernel? Or in bash? Or libc?)
Edit
I installed FreeBSD12 (Vagrant). And the test runs flawlessly in FreeBSD12. This makes me think the kernel is to blame. It might also be that Vagrant somehow makes FreeBSD12 not fail.
Both the laptop and the server run Ubuntu 22.04, so next is to try a different kernel. Maybe Debian or CentOS. Also I should try whether Ubuntu 22.04 fails on Vagrant.
Works: FreeBSD12(Vagrant), Centos8(Vagrant), Ubuntu20.04(Vagrant), Ubuntu22.04(laptop t), Ubuntu22.10(Vagrant).
Fails: Ubuntu22.04(laptop a, server r).
I might have found the culprit:
echo 2 > /proc/sys/vm/overcommit_memory

If I do this:

echo 0 > /proc/sys/vm/overcommit_memory

the race condition disappears on server r.
But why on earth would that cause these errors?
Edit
Marcus suggests it may have to do with memory allocation, and when I have seen other processes die during the run, it is often with "xmalloc: cannot allocate small number bytes".
How do we test if this theory is correct?
| Debug segfault race condition |
Because for threads "unlimited" gives you only 2 MiB on x86_64; see the pthread_create man page:
If the RLIMIT_STACK resource limit is set to "unlimited", a per-architecture value is used
for the stack size. Here is the value for a few architectures:

┌─────────────┬────────────────────┐
│Architecture │ Default stack size │
├─────────────┼────────────────────┤
│i386 │ 2 MB │
├─────────────┼────────────────────┤
│IA-64 │ 32 MB │
├─────────────┼────────────────────┤
│PowerPC │ 4 MB │
├─────────────┼────────────────────┤
│S/390 │ 2 MB │
├─────────────┼────────────────────┤
│Sparc-32 │ 2 MB │
├─────────────┼────────────────────┤
│Sparc-64 │ 4 MB │
├─────────────┼────────────────────┤
│x86_64 │ 2 MB │
└─────────────┴────────────────────┘ |
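If a thread needs a big stack, the portable fix is to request it explicitly through a thread attribute instead of relying on RLIMIT_STACK. A sketch (the 16 MiB figure is just an example, comfortably above the 8 MiB frame used in the question):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    char buf[8193 * 1024]; /* would overflow the 2 MB per-thread default */
    buf[0] = 0;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t t;
    int rc;

    pthread_attr_init(&attr);
    /* Ask for 16 MiB explicitly; this overrides the defaults above. */
    pthread_attr_setstacksize(&attr, 16 * 1024 * 1024);

    if ((rc = pthread_create(&t, &attr, worker, NULL)) != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", rc);
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}

Build with -pthread.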
My default stack size (according to ulimit -s) is 8192 kB, so naturally the code below segfaults when I try to run it. Also, naturally, it works fine if I do a 'ulimit -s 9000'.
However, when I do a 'ulimit -s unlimited' the code segfaults again. Any ideas what is going on here?
If it's useful, I'm running Debian 10 with kernel 4.19.0-6 and gcc version Debian 8.3.0-6.
#include <iostream>
#include <unistd.h>
#include <cstdlib>
#include <pthread.h>  // needed for pthread_t / pthread_create

void* wait_exit(void*)
{
    char bob[8193*1024];  // frame just over the default 8192 kB stack
    return 0;
}

int main()
{
    pthread_t t_exit;
    int ret;
    if((ret = pthread_create(&t_exit, NULL, wait_exit, NULL)) != 0)
    {
        std::cout << "Cannot create exit thread: " << ret << std::endl;
    }
    std::cout << "Made thread" << std::endl;
    sleep(5);
    return 0;
}
| Unlimited stack size with pthreads
These different segfaults are more likely an indication of something wrong with memory or with your disc connection than with corruption of the filesystem.
You should first check the memory by rebooting and selecting the memory checker from the grub menu, and let it run for at least one pass. Re-seat the memory (after switching off the power) and retry if you see errors.
If that doesn't show errors, I would boot from a CD and run full filesystem checks on each of the unmounted partitions from there. During that time keep a close watch on your logs to see if there are timeouts for the discs: the data might be OK, but the transfer might somehow have errors. If you do see such timeouts, disconnect and reconnect the cables (after powering off).
|
I have an Ubuntu 11.10 server which suffered a power failure today. Ever since the power came back on, the unit works only partially. Some services work OK, some do not start, e.g.
apache2ctl restart
Inconsistency detected by ld.so: ../sysdeps/i386/dl-machine.h: 640: elf_machine_rel_relative: Assertion `((reloc->r_info) & 0xff) == 8' failed!
Action 'restart' failed.
The Apache error log may have more information.

do-dist-upgrade

Segmentation fault

apt-get update

(no output)

Upon examining dmesg, apt-get segfaults as well.

[ 552.996106] apt-get[1674]: segfault at 6f5104d2 ip b7655c03 sp bfd50ff0 error 6 in libapt-pkg.so.4.11.0[b7618000+117000]

So I tried to force fsck by using
sudo touch /forcefsck
reboot

and then later by

shutdown -rF now

however after both I still get
cat /var/log/fsck/check*
(Nothing has been logged yet.)
(Nothing has been logged yet.)

I am a bit lost at what to try next. I thought I would just reinstall some package which might be broken, but first of all I don't know which, and then I am not sure how (dpkg works though). I really want to avoid having to reinstall the whole thing. Any advice is appreciated.
| Ubuntu broken after power failure. How to fix? |
A segmentation fault is not the same thing as a SIGSEGV signal. A signal is just a signal. When you have an actual segmentation fault, that is when the kernel will log it, and subsequently send a SIGSEGV signal to your application.
The logic behind this, and why the kernel only logs on a real segmentation fault, is that the kernel (and CPU) is what enforces the rules about what address space your program has and is allowed to access. Thus when those rules are broken, it is the one to log the action.
To properly test, you need to actually do something in your code that will generate a segmentation fault, such as accessing an uninitialized pointer.
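For example, a minimal program that performs a genuine invalid access (as opposed to raise()):

int main(void)
{
    volatile int *p = (int *)0; /* address 0 is not mapped */
    *p = 42;                    /* real invalid write -> kernel log + SIGSEGV */
    return 0;
}

Running this should produce a "segfault at 0 ... error 6" line in dmesg, unlike the raise(SIGSEGV) version.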
|
When applications segfault, I generally see messages like this in dmesg:
pstree[25678]: segfault at 0 ip 00007f58be0b3ae4 sp 00007ffe65b700a0 error 4 in libc-2.24.so[7f58be04d000+195000]

However, I think somehow I must have changed my kernel settings somewhere, because I no longer see these messages in dmesg. I am triggering segfaults with this C program:
#include <signal.h>

int main()
{
    raise(SIGSEGV); /* delivers the signal directly; no invalid memory access occurs */
}

I know my loglevel is set at KERN_DEBUG:
$ cat /proc/sys/kernel/printk
7 4 1 7

and I know I can see output in dmesg like this:

sudo sh -c "printf '<%s> Log level %s (KERN_DEBUG)\n' '7' '7' > /dev/kmsg"

and I know debug.exception-trace is set to 1:
$ sysctl debug.exception-trace
debug.exception-trace = 1
$ cat /proc/sys/debug/exception-trace
1

but I still don't get segfault notifications. The dmesg man page talks about coloring segfault messages, but not about turning them on or off.
| Toggling visibility of segmentation fault messages in dmesg