output | input | instruction
---|---|---|
No, it is not required. It is up to the implementation to decide, as the behavior is unspecified. In implementations which allow it, the contents of the current line must be erased as well once the newline is erased. The POSIX specification for vi states the following (source):
Input Mode Commands in vi
In text input mode, the current line shall consist of zero or more of the following categories, plus the terminating <newline>:
[...]
It is unspecified whether it is an error to attempt to erase past the beginning of a line that was created by the entry of a <newline> or <carriage-return> during text input mode. If it is not an error, the editor shall behave as if the erasing character was entered immediately after the last text input character entered on the previous line, and all of the non- <newline> characters on the current line shall be treated as erase-columns. |
In the non-vim implementations of vi I've worked with, it's not possible to delete a line while in insert mode. vim does allow it, but it's my understanding that vim is not POSIX-compliant in its default configuration.
Is a POSIX-compliant vi implementation not supposed to allow the deletion of lines in insert mode? Please quote the relevant parts of the standard where possible.
| Plain vi (not vim): Can't delete a line in insert mode: Is this behavior required by POSIX? |
"The following shall be declared as functions and may also be defined as macros." doesn't imply that functions can no longer be referenced if they have also been defined as macros.
Consider isalnum, which is commonly a macro:
#include <ctype.h>
#include <stdio.h>

int main(int argc, char **argv) {
printf("%p\n", isalnum);
printf("%d\n", isalnum('C'));
}
In present-day C, the first isalnum reference can't be a macro: when isalnum is a macro, it's declared as isalnum(c) or equivalent, so a parameter-less reference doesn't match. The second reference can be either a macro or a function (and you can see which by passing the code through your preprocessor); if you want to ensure the latter, you can #undef as appropriate, or use parentheses ((isalnum)('C')).
The actual requirement behind that statement in the C standard is that the functions introduced in this way must always be available as actual functions, even if they are (also) defined as macros.
"Function prototypes shall be provided."
Prototypes are a different concern; the requirement there is that, rather than a simple
extern int isalnum();
declaration, the header files provide complete prototypes with type information for all the arguments.
See Defining C functions in the C Standard Library as macros for references.
"if those said functions are named differently from what the standard says"
Functions with a different name than that specified in the standard are of no concern to the standard (as long as they satisfy the general requirements of the standard).
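For illustration only (my own sketch, not part of the original answer): a function-like macro is expanded only when its name is followed by parentheses with arguments, so a bare or parenthesised reference reaches the real function, and assigning it to a function pointer works:

#include <ctype.h>
#include <stdio.h>

int main(void) {
    int (*classify)(int);
    classify = isalnum;       /* a bare name does not expand a function-like macro */
    classify = (isalnum);     /* parentheses also force the real function at a call site */
    printf("%d\n", classify('C'));
    return 0;
}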
|
For the sake of public record, I'm asking here at SE rather than on the standardization mailing list, so that it'd be more accessible to people.
With practically every header that specifies functions (or function-like interfaces), there's the following:
"The following shall be declared as functions and may also be defined as macros. Function prototypes shall be provided."
It's troubling me because it has the potential implication that some functions that I use may not be assignable to function pointers, as they may have been defined as macros.
However, the second part ("prototypes shall be provided") seems to suggest that, if those said functions are named differently from what the standard says, then that different name must have an identical prototype (or at least one compatible in every enumerable aspect) to that specified in the standard.
Is my interpretation correct? If not, what's the real interpretation? Can I assign the listed interfaces below that paragraph to function pointers?
| How to interpret "functions ... may also be defined as macros"? |
This document set has been ratified by three organisations. Each organisation has its own naming scheme, so there are three different standard names for the same text.
|
I see some IEEE standards from https://pubs.opengroup.org/onlinepubs/9699919799/:
POSIX.1-2017 is simultaneously IEEE Std 1003.1™-2017 and The Open Group Technical Standard Base Specifications, Issue 7.
So basically 3 formats of IEEE standard are involved here:
POSIX.1-2017
IEEE Std 1003.1™-2017
The Open Group Technical Standard Base Specifications, Issue 7
How to interpret them?
What's the naming convention?
Thanks!
| How to understand the naming convention of IEEE standards? |
The until loop is testing the exit status of tee, the last command in the pipeline, not the exit status of the lp command run over ssh.
Looking at your code it's not at all obvious why you should need tee, though, so I'd suggest you just remove it
until ssh -q root@remotehost 'lp …' >/home/printererror.log 2>&1
do
: …
done |
I'm able to get the output of a failing lp command from a remotehost to my local script like below:
until ssh -q root@remotehost 'lp -d Brother_HL_L2350DW_series /root/moht/Printed/`basename "$FILE"`' 2>&1 | tee /home/printererror.log
do
echo "Issue is: `cat /home/printererror.log`"sleep 230doneThe issue is the until does not loop even if the lp command fails.
If I change my until code and remove 2>&1 | tee /home/printererror.log like below then it works fine and starts looping for failing lp command. But like you see I'm unable to grab the error message after removing tee
until ssh -q root@remotehost 'lp -d Brother_HL_L2350DW_series /root/moht/Printed/`basename "$FILE"`'
I want both: the until should keep looping for a failing lp command while logging the respective failure messages to the local echo.
| Get the output of a remote ssh to local |
This is the asker's attempt at guessing.
Just like POSIX Threads, the realtime APIs were found useful in regular applications; coupled with the fact that these APIs can be implemented without major obstacles, operating systems supporting these interfaces became more common, so the standard moved them to the Base; all because POSIX is a prescriptive standard that aims to gather consensus.
Being a realtime API doesn't mean an application using it is a real-time application. The ability of the operating system (and to an extent, the hardware) to guarantee the quality of service of these APIs depends on various factors, most importantly system load.
It's unreasonable to expect a finite system to be able to serve an infinite amount of realtime requests beyond its capability. I have no experience with realtime programming, but my common-sense guess is that realtime applications have well-defined scope and goals that programmers are obligated to achieve, and limits which users of realtime systems are expected to avoid exceeding.
|
While reading the standard, I noticed that a bunch of APIs were:
Introduced in Issue 5 for alignment with POSIX realtime APIs,
Marked for option group membership in Issue 6, and
Moved to Base in Issue 7 (SUSv4).
Q: Does this mean that all systems conforming to the "Unix(R) V7" product standard are realtime systems? What are the actual capabilities of such a system with regard to real-time requirements?
| Single Unix Specification version 4 (Issue 7) moved bunch of Real-Time APIs to Base, What Next? |
There are probably less verbose ways to do this, but the classical solution would be something like:
#!/bin/bash
trap 'rm $TMP' 0
TMP=$(mktemp)
rm $TMP
mkfifo $TMP
tee < $TMP ${log:-/tmp/log.txt} &
exec > $TMP 2>&1
It should go without saying that there are massive security and reliability concerns here, as any other process can read or write from or into the fifo. If you want to do this sort of thing, you're much better off with a simple wrapper that pipes the output of your script to tee.
|
log=/tmp/log.txt
The following syntax writes all standard output and standard error to the log.txt file:
exec > $log 2>&1
What we want is to write both standard output and standard error to log.txt, but also standard output + standard error to the console.
Is it possible?
| linux + write both standard output and stand error to log and to console |
Given the lack of activity on the mailing list that would be used to discuss revisions to the standard, I think it’s safe to say the answer is no.
Even if a group was working on a revised standard, the standard is supposed to reflect consensus, so I suspect you’d have a hard time getting it updated to follow GoboLinux-style practices.
|
https://unix.stackexchange.com/a/227625/386242 explains the myriad benefits of a simpler and more consistent filesystem hierarchy, but also that without any cross-OS standardization, such efforts are as much of a disadvantage as they are an advantage.
Consequently, is the Linux Foundation drafting a Filesystem Hierarchy Standard 3.1 or 4.0?
| Are any modifications to the FHS being worked on (by the Linux Foundation)? |
The telnet protocol, described in RFC 854, includes a way to send in-band commands, consisting of the IAC character, '\255', followed by several more bytes. These commands can do things like send an interrupt to the remote, but typically they're used to send options.
A detailed look at an exchange that sends the terminal type option can be found in Microsoft Q231866.
The window size option is described in RFC 1073. The client first sends its willingness to send an NAWS option. If the server replies DO NAWS, the client can then send the NAWS option data, which is comprised of two 16-bit values.
Example session, on a 47 row 80 column terminal:
telnet> set options
Will show option processing.
telnet> open localhost
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SENT WILL NAWS
RCVD DO NAWS
SENT IAC SB NAWS 0 80 (80) 0 47 (47)
The ssh protocol is described in RFC 4254. It consists of a stream of messages. One such message is "pty-req", which requests a pseudo-terminal, and its parameters include the terminal height and width.
byte SSH_MSG_CHANNEL_REQUEST
uint32 recipient channel
string "pty-req"
boolean want_reply
string TERM environment variable value (e.g., vt100)
uint32 terminal width, characters (e.g., 80)
uint32 terminal height, rows (e.g., 24)
uint32 terminal width, pixels (e.g., 640)
uint32 terminal height, pixels (e.g., 480)
string encoded terminal modes
The telnet and ssh clients will catch the SIGWINCH signal, so if you resize a terminal window during a session, they will send an appropriate message to the server with the new size (a sketch of how a client detects the resize follows the field list below). Ssh sends the Window Dimension Change Message:
byte SSH_MSG_CHANNEL_REQUEST
uint32 recipient channel
string "window-change"
boolean FALSE
uint32 terminal width, columns
uint32 terminal height, rows
uint32 terminal width, pixels
uint32 terminal height, pixels
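How the client notices the resize locally is not part of either protocol description above. As a rough sketch of my own (not from the answer, and assuming a Linux/BSD-style system), a client typically installs a SIGWINCH handler and queries the kernel with the TIOCGWINSZ ioctl before sending NAWS or "window-change":

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

static volatile sig_atomic_t resized;

static void on_winch(int sig) { (void)sig; resized = 1; }

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_winch;
    sigaction(SIGWINCH, &sa, NULL);
    for (;;) {
        pause();                               /* sleep until a signal arrives */
        if (resized) {
            struct winsize ws;
            if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
                fprintf(stderr, "now %hu rows x %hu cols\n", ws.ws_row, ws.ws_col);
            /* a real telnet/ssh client would now send NAWS or "window-change" */
            resized = 0;
        }
    }
}
|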
When I view the length and width of my terminal emulator with stty size then it is 271 characters long and 71 lines tall. When I log into another server over SSH and execute stty size, then it is also 271 characters long and 71 lines tall. I can even log into some Cisco IOS device and terminal is still 271 characters long and 71 lines tall:
C1841#show terminal | i Len|Wid
Length: 71 lines, Width: 271 columns
C1841#
Now if I resize my terminal emulator (Gnome terminal) window in local machine, both stty size in remote server and "show terminal" in IOS show different line length and number of lines. How are terminal length and width forwarded over SSH and telnet?
| How are terminal length and width forwarded over SSH and telnet? |
The answer can be found in the termios(3) man page:
VEOF (004, EOT, Ctrl-D) End-of-file character (EOF). More precisely:
this character causes the pending tty buffer to be sent to the
waiting user program without waiting for end-of-line. If it is
the first character of the line, the read(2) in the user program
returns 0, which signifies end-of-file. Recognized when ICANON
is set, and then not passed as input.
The first ^D you press causes the line you have typed to be delivered to cat, so it gets a read(2) result of "a" (one character, no end-of-line character). The second ^D causes read(2) to return 0, which signifies EOF to cat.
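A rough illustration of my own (not from the man page): a small program that reports what each read(2) on the terminal returns makes both cases visible:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[256];
    ssize_t n;
    /* Type "a" then Ctrl-D: read() returns 1 ("a" alone, no newline).
       Press Ctrl-D again at the start of a line: read() returns 0, i.e. EOF. */
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        printf("read %zd byte(s)\n", n);
    printf("read returned %zd (0 means end of file)\n", n);
    return 0;
}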
| Let's run cat and then type a then ^D - you will see that cat did not exit.
Compare it with cat + a + Enter + ^D - now cat did exit.
So, why two ^D presses are necessary to exit cat in the first case and only one ^D in second case?
| Why are two ^D presses necessary to exit `cat`? [duplicate] |
The first step in understanding what's going on is to be aware that there are in fact two “newline” characters. There's carriage return (CR, Ctrl+M) and line feed (LF, Ctrl+J). On a teletype, CR moves the printer head to the beginning of the line while LF moves the paper down by one line. For user input, there's only one relevant concept, which is “the user has finished entering a line”, but unfortunately there's been some divergence: Unix systems, as well as the very popular C language, use line feed to represent line breaks; but terminals send a carriage return when the user presses the Return or Enter key.
The icrnl setting tells the terminal driver in the kernel to convert the CR character to LF on input. This way, applications only need to worry about one newline character; the same newline character that ends lines in files also ends lines of user input on the terminal, so the application doesn't need to have a special case for that.
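For reference, a minimal sketch of my own (not from the answer) showing how a program would flip that same bit through the termios API, which is what stty icrnl / stty -icrnl does for you:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios t;
    if (tcgetattr(STDIN_FILENO, &t) == -1) { perror("tcgetattr"); return 1; }
    t.c_iflag |= ICRNL;            /* translate CR to LF on input, i.e. stty icrnl */
    /* t.c_iflag &= ~ICRNL;           would stop the translation, i.e. stty -icrnl */
    if (tcsetattr(STDIN_FILENO, TCSANOW, &t) == -1) { perror("tcsetattr"); return 1; }
    return 0;
}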
By default, ghci, or rather the haskeline library that it uses, has a key binding for Ctrl+J, i.e. LF, to stop accumulating input and start processing it. It has no binding for Ctrl+M i.e. CR. So if the terminal isn't converting CR to LF, ghci doesn't know what to do with that character.
Haskeline instructs the terminal to report keypad keys with escape sequences. It queries the terminal's terminfo settings to know what those escape sequences are (kent entry in the terminfo database). (The terminfo database is also how it knows how to enable keypad escapes: it sends the smkx escape sequence, and it sends rmkx on exit to restore the default keypad character mode.) When you press the Enter key on the keypad in ghci, that sends the escape sequence \eOM, which haskeline recognizes as a binding to stop accumulating input and start processing it.
|
In Ubuntu/gnome-terminal, if I run:
$ stty -icrnl
Then launch the GHC interactive environment (a Haskell console):
$ ghci
Then pressing Return does not submit the line; however, Enter does.
However, with:
$ stty icrnl
Both Return and Enter submit the line.
I don't really understand the behaviour; surely Return will be submitting the newline character in both cases?
| Understanding Return, Enter, and stty icrlf |
Your Ctrl-r is being intercepted by the kernel-based cooked-mode line processing engine of the terminal.
While sleep is running, the terminal is in cooked mode, which means that the kernel-based tty line editor is working. The tty line editor supports rudimentary command line editing. The erase key (usually set to Ctrl-h (backspace) or Del) and the kill key (usually Ctrl-U) are the best known special editing keys that can be used in this mode. This line editor is useful: it's what lets interactive utilities that use neither readline nor curses to read complete lines of input from the terminal while allowing the user to make typing corrections.
But there's another special key that's active in this mode. You can see it along with the other key settings in the output of stty -a under the name rprnt and its default setting is... you guessed it... Ctrl-r. The function of this key is to repaint the current command line, in case it has become corrupted or misaligned due to other terminal output.
To avoid this, you can disable the function with stty rprnt undef.
Personally I am used to Ctrl-r being interpreted as a repaint command and I am surprised every time I try to do that in bash and it does something different!
|
Context
Typeahead in bash: good
When a bash shell is busy (initializing, running a command), one can type before the next prompt appears.
If the shell has launched a program, that program will capture the keys, but if no program is run or if the program does not capture input, what one types gets inserted in the shell after prompt appears.
For example : type sleep 5, press Enter, then type ls and press Enter. ls will be run after sleep has finished. In real life, ls would be replaced by cp, rsync or many other programs.
This is a typical Typeahead feature and it's a great time saver when you know in advance what to type.
It's also very nice since it allows to copy-paste several commands and have them run in sequence.
Real-world use cases include when the shell takes time to initialize. It could be that the computer is slowed down for any reason, or the shell is on a slow network link, etc.
History search in bash: good
On a bash prompt, one can type Ctrl-R to search through history.
This is an invaluable time saver when reusing some old command lines, or even sequence of command lines. Press Ctrl-R, type a few characters typical of the command to search, press Ctrl-O as many times as needed to replay the recorded commands from there.
Typeahead in history search: how ?
There is one limitation, though. Often I use the sequence above and find that if I type e.g. Ctrl-R ls before the shell prompt has actually appeared, the Ctrl-R part is ignored but the ls part is shown.
The net effect is that one has to wait for the shell prompt to appear before typing Ctrl-R, defeating part of the time saved.
Question
Is there a way to have Ctrl-R honoured even in a typeahead situation ?
| How to have type-ahead apply to bash history search (Ctrl-R)? |
Bash needs to put the terminal into character-at-a-time mode while it's waiting for you to type in a command line, so that you can edit the command line using emacs or vi-like editing characters. That's the mode you saw when you looked at the terminal's attributes from another terminal in your example.
Just before it runs a program (in your example, stty), bash puts the terminal back into canonical mode, where you have just a few special editing characters available courtesy of the operating system, such as backspace and Control-W, and basically the program gets input only after you type Enter.
When bash regains control, say after the program finishes or is suspended, it will put the terminal into character-at-a-time mode again.
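As a rough sketch of my own (assuming a POSIX system; this is not bash's actual code), this is roughly what the shell does with the termios API around each command it runs:

#include <termios.h>
#include <unistd.h>

static struct termios saved;

/* What the shell's line editor needs while you edit a command line. */
static void editing_mode(void) {
    struct termios t;
    tcgetattr(STDIN_FILENO, &saved);     /* remember the canonical settings */
    t = saved;
    t.c_lflag &= ~(ICANON | ECHO);       /* character-at-a-time, shell echoes itself */
    t.c_cc[VMIN] = 1;                    /* read() returns after a single byte */
    t.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &t);
}

/* What the shell restores just before running stty, cat, and so on. */
static void canonical_mode(void) {
    tcsetattr(STDIN_FILENO, TCSANOW, &saved);
}

int main(void) {
    editing_mode();      /* while the prompt is being edited */
    canonical_mode();    /* before handing the terminal to a child program */
    return 0;
}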
|
If I open up a terminal (xfce4-terminal 0.6.3, but I doubt it matters) and I look at what terminal attributes are set (BASH is running in the terminal),
$ stty -a
speed 38400 baud; rows 24; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?;
swtch = M-^?; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W;
lnext = ^V; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread -clocal -crtscts
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc ixany imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke -extproc
then I have a whole bunch of terminal attributes. Fine enough. If I then take a look at what terminal I'm using:
$ tty
/dev/pts/0
then, on a new tab of my terminal (which new tab happens to be /dev/pts/1) I look at the terminal attributes of my first terminal, it seems to have slightly different terminal attributes:
$ stty -a -F /dev/pts/0
speed 38400 baud; rows 24; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?;
swtch = M-^?; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W;
lnext = <undef>; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread -clocal -crtscts
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl ixon -ixoff
-iuclc ixany imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig -icanon iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke -extproc
Notably, the original terminal here appears to not be in canonical mode, it has no literal next special character...
So why does this happen? I'd like to be able to look at things like this to see, e.g., if modern ed uses canonical mode, but if there's a Heisenberg problem of looking at it from another terminal, I don't know how I'd do it.
So, my two main questions:
Does the terminal just appear to have different attributes when I look at it from another terminal? (e.g. is my first terminal still in canonical mode, but stty -a -F returns incorrect information?)
If the terminal does really change attributes when I switch to the other terminal, how does it know? Is the 1st terminal somehow informed when I'm not directly looking at it?
P.S.: I tried this on the Linux Console also, just to make sure it wasn't a pseudo-terminal thing. Same results.
| Why do the terminal attributes look different from outside the terminal? |
I was able to configure the serial port so echo behaved like screen. Here are my settings:
stty -F /dev/ttyUSB0 115200 raw -echo -echoe -echok -echoctl -echokeAnd to echo:
echo -e -n 'command_here\r' > /dev/ttyUSB0 |
I have a small LED matrix controlled by a display driver that accepts serial commands to update the display. I'm successfully controlling it via node with the node serial package, however I'd like to be able to update it with echo so that I can control it earlier in the boot up process with a shell script.
To start testing this new method, I set it up with:
chmod o+rw /dev/ttyUSB0
stty /dev/ttyUSB0 115200
And I'm able to send it commands using screen:
screen -F /dev/ttyUSB0 115200
However when I try to use:
echo -e 'title \r' > /dev/ttyUSB0
it doesn't work, and when I monitor the response in another window with
cat -v < /dev/ttyUSB0
I see that it's receiving the message, but it seems fragmented and it also continuously responds with an error as if I'm sending lots of bad and/or blank commands.
How can I mimic the commands sent from screen using echo?
| Sending serial commands with echo vs screen session |
stty -ixon disables XON/XOFF output control; stty ixon enables it. In general, stty -flag disables the corresponding termios flag, stty flag enables it.
|
I issued a stty -ixon command which enables XON/XOFF flow control.
There is a stty -ixoff command but that enables the "sending of start/stop characters".
So once XON/XOFF flow control is enabled, how do you disable it? Similarly, how do you disable "sending of start/stop characters?"
| How do you disable XON/OFF flow control? |
@icarus' comment:
Maybe saved_tty_settings=$(stty -g < /dev/tty)?
is actually pointing in the right direction, but it is not the end of the story.
You will need to apply the same redirection when restoring stty states too. Or else, you will still get Invalid argument or ioctl failure at the restoring stage...
Correct course of action:
saved_tty_settings="$(stty -g < /dev/tty)"# ...do terminal-changing stuff...stty "$saved_tty_settings" < /dev/ttyThis is the actual script I tested; which I rewrote the whole thing as a classic Bourne shell script:
#!/bin/sh
# This is sttytest.sh

# This backs up current terminal settings...
STTY_SETTINGS="`stty -g < /dev/tty`"
echo "stty settings: $STTY_SETTINGS"# This reads from standard input...
while IFS= read LINE
do
echo "input: $LINE"
done

# This restores terminal settings...
if stty "$STTY_SETTINGS" < /dev/tty
then
echo "stty settings has been restored sucessfully."
fi

The test run:
printf 'This\ntext\nis\nfrom\na\nredirection.\n' | sh sttytest.shThe result:
stty settings: 2d02:5:4bf:8a3b:3:1c:7f:15:4:0:1:ff:11:13:1a:ff:12:f:17:16:ff:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
input: This
input: text
input: is
input: from
input: a
input: redirection.
stty settings has been restored sucessfully.

Tested using Debian Almquist Shell dash 0.5.7, and GNU Coreutils 8.13's stty.
|
I'd like to save then restore the current stty settings in a script that is also consuming stdin; however, stty -g is complaining about it:
stty: 'standard input': Inappropriate ioctl for device
I've tried closing the stdin file descriptor and calling stty in a subshell with overridden FDs. I can't figure out how to separate stdin from stty -g and I'd appreciate help or advice.
Note, I'm specifically interested in POSIX compatibility. No Bash/Zsh-isms, please.
Minimal script to reproduce the problem:
#!/usr/bin/env sh

# Save this so we can restore it later:
saved_tty_settings=$(stty -g)
printf 'stty settings: "%s"\n' "$saved_tty_settings"

# ...Script contents here that do something with stdin.

# Restore settings later
# (not working because the variable above is empty):
stty "$saved_tty_settings"Run with print 'foo\nbar\n' | ./sttytest to view the error.
| Can current `stty -g` settings be saved when also consuming stdin? |
People usually want to see what they're typing (unless it's a password) :-)
The terminal accepts input at any time, and buffers it until an application reads it. More than that, when the tty is in cooked mode, the kernel buffers whole lines at a time and provides some rudimentary line editing functionality that allows you to kill the entire buffered line (default bindings Ctrl-u and Backspace). During the time that the line is being entered and edited and until you press Enter, applications reading from the terminal read nothing at all.
The tty functionality in the kernel does not and can not know if and when an application like tail is planning to produce output on the terminal, so it would not be able to somehow... cancel (?) line editing during such times and only during such times.
Anyway, being able to prepare the next line for the shell while something else is still busy running on the terminal and the shell is not yet ready to read that command is a feature, not a bug, so I wouldn't advocate removing it. Maybe not so useful for tail (which will never terminate on its own), but pre-typing the next command during a long-running cp or make (for example), and even editing that command with Ctrl-h and Ctrl-u, all before the shell gets ahold of it, is a common thing to do.
Timothy Martin wrote in a comment:
It is worth mentioning that less +F somefile provides similar functionality to tail -f somefile except that (accidentally) typed keystrokes will not echo to the screen.
Yeah, but less not only prevents those characters from being echoed, but it also eats them, so they are not available to the next application that wants to read them!
Finally, there is one more reason:
In historical times (before my time!) terminals with local echo were common. That is, the terminal (usually in hardware) would echo the characters you typed locally while also sending them down the serial line. This is useful for giving the user quick feedback even if there was lots of latency over the connection to the UNIX system (think 300 baud modem dialing up a terminal server with auto-telnet to a slow UNIX system over a token ring network, or whatever).
If you have a terminal with local echo, then you want stty -echo at all times on the UNIX server to which you are connected. The result is approximately the same as a terminal without local echo (the common kind today) and stty echo enabled. So from that point of view, stty echo's job is to echo characters immediately as soon as they are received, regardless of what software is running, in emulation of what would happen on a terminal with local echo.
(By the way, if you have a terminal with local echo, you can't hide your password.)
|
I was recently running tail -f on a log file of a server that was running, trying to diagnose a bug, when I accidentally bumped the keyboard and typed some characters. They got mixed in with the output of the log, with no way to tell which was which. I have had similarly annoying things happen to me countless times, and I'm sure it has happened to many other people here.
So my question is this: why does the shell (or terminal, or whatever is doing it) ambiguously mix keyboard input with command output?
I am not asking for a practical solution to an immediate problem. I can maybe figure out some way to make the shell run stty -echo when a command is run and stty echo when it finishes. But I want to know the rationale behind designing the terminal like this. Is there some practical purpose? Or is it something done only for compatibility reasons, or something that wasn't given much thought at all?
| Why does the terminal echo keystrokes when commands are running? |
Firstly, this question was asked on Retrocomputing, but the community found it more suitable for this site. However, I got an answer in the comment section there, so I copy it here:
The Linux virtual console emulates a (sort of) VT102 terminal in
ON-LINE mode connected to a Linux (serial) tty device. The Linux tty
driver doesn't normally echo back the escape control character, and
instead echos ^[. If you don't want the tty driver to do this, then
use stty -ctlecho. Also real VT102 terminals don't support colour, it
works with the Linux virtual console anyways because its not really
VT102 compatible.I have tried the stty -ctlecho and it works almost as expected - only one subquestion - is the real VT102 also wasn't displaying characters after person press ESC and start type escape sequence, so the person were type it in the blind manner?
|
Prerequisites
The Linux virtual terminal (tty) is an emulation of VT102 - Virtual terminal subsystem source.
The real VT100 (nearly the same as the VT102) has the following behavior (I suppose):
In LINE mode, all typed characters are first transmitted to the computer and then returned to the terminal. Nothing is displayed on the terminal screen before returning from the host.
The escape sequences are no exception - they are parsed and executed only after returning from the host. That is, if I want to change the font color to red, I should type ESC[0;31m; this sequence goes to the computer, is echoed back, and the VT102 receives it, parses it and applies it. There is no other way to change the terminal font color (in LINE mode). I am not sure if the VT102 had different font colors though, but that is an example.
Picture from manual: (image not included)
Excerpt from manual:
LINE/LOCAL
The LINE/LOCAL feature allows the operator to easily place the terminal in either an ON-LINE or a LOCAL (off-line) condition. When the terminal is on-line (ON-LINE indicator is lit) all characters typed on the keyboard are sent directly to the computer and messages from the computer are displayed on the screen. In the LOCAL condition (LOCAL indicator is lit), the terminal is electrically disconnected from the computer; messages are not sent to or received from the computer; and characters typed on the keyboard are echoed on the screen directly.
Source: VT100 series video terminal technical manual, third Edition, July 1982.
The question
Why does the Linux tty behave in a different way?
I put bash into sleep mode, so it doesn't interfere, then type Esc[0;31m and get just plain text; the color hasn't changed, so the escape sequence has no effect.
I asked a similar question a couple of years ago - Why i can't send escape sequences from keyboard, but can do it from another tty? - but now I have some knowledge of the VT102 Linux subsystem and want to understand why it works this way, not identically to the real hardware terminal in this aspect. | Echoed escape sequences doesn't interpreted in Linux tty
There are some undocumented ioctls you can use to set non-standard speeds, provided the driver implements them. A simple way to call them is with a small piece of python. Eg put in file mysetbaud.py and chmod +x it:
#!/usr/bin/python
# set nonstandard baudrate. http://unix.stackexchange.com/a/327366/119298
import sys,array,fcntl

# from /usr/lib/python2.7/site-packages/serial/serialposix.py
# /usr/include/asm-generic/termbits.h for struct termios2
# [2]c_cflag [9]c_ispeed [10]c_ospeed
def set_special_baudrate(fd, baudrate):
    TCGETS2 = 0x802C542A
    TCSETS2 = 0x402C542B
    BOTHER = 0o010000
    CBAUD = 0o010017
    buf = array.array('i', [0] * 64) # is 44 really
    fcntl.ioctl(fd, TCGETS2, buf)
    buf[2] &= ~CBAUD
    buf[2] |= BOTHER
    buf[9] = buf[10] = baudrate
    assert(fcntl.ioctl(fd, TCSETS2, buf)==0)
    fcntl.ioctl(fd, TCGETS2, buf)
    if buf[9]!=baudrate or buf[10]!=baudrate:
        print("failed. speed is %d %d" % (buf[9],buf[10]))
        sys.exit(1)

set_special_baudrate(0, int(sys.argv[1]))

This takes some code from the pyserial package with constants for the various values needed from Linux C include files, and an array for the struct termios2. You use it with a baud rate parameter and your device on stdin, eg from bash:
./mysetbaud.py <>/dev/ttyUSB0 250000 |
I wish to directly monitor the serial-over-usb connection to my 3d printer, which runs at 250000 baud. e.g I might monitor it with cat /dev/ttyUSB0
However first I need to set the baud rate, e.g stty -F /dev/ttyUSB0 115200
But if I try and set the baud rate to 250k, it fails:
stty -F /dev/ttyUSB0 250000
gives result:
stty: invalid argument 250000
It appears that baud rate 250000 is not supported under Ubuntu/Mint. Can anyone suggest an alternative way to monitor this serial connection?
| How to monitor a serial connection @ 250000 baud? |
There are a lot of ways to do it. The way you mention could be one. xterm is a program that runs another one - it wraps another program in a pty - usually your shell - and channels the input you feed it to the wrapped program. The thing about pseudo-terminals is that they are just emulated devices, and so xterm takes a guess at the kind of device you'll eventually be typing at. Of course, you can get a lot more specific. xterm honors all kinds of environment variables - and, better still, X resources.
From man xterm:
ttyModes (class TtyModes)
Specifies a string containing terminal setting keywords and the characters to which they may be bound. Allowable keywords include: brk, dsusp, eof, eol, eol2, erase, erase2, flush, intr, kill, lnext, quit, rprnt, start, status, stop, susp, swtch and weras. Control characters may be specified as ^char (e.g., ^c or ^u) and ^? may be used to indicate delete (127). Use ^- to denote undef. Use \034 to represent ^\, since a literal backslash in an X resource escapes the next character.
This is very useful for overriding the default terminal settings without having to do an stty every time an xterm is started. Note, however, that the stty program on a given host may use different keywords;
xterm's table is built-in.
If the ttyModes resource specifies a value for erase, that overrides the ptyInitialErase resource setting, i.e., xterm
initializes the terminal to match that value. |
I just set up a new computer, and as usual had to alter the settings in xterm in order to make the delete keys work properly. (Ctrl-H sends ^H, backspace sends ^?, delete sends ^[[3~. This is, of course, the objectively correct way to do it.) While the default xterm settings are problematic, in this setup everything works fine, at least on xterm's end.
The problem is that, for some reason, the stty settings in an xterm are always set to erase = ^H. As well as messing things up in non-readline standard input, this also makes tmux begin silently translating ^H to ^? in its windows, which makes things like emacs rather painful.
I have no idea why stty is set this way. It isn't the default setting; typing stty alone to display differences from default shows the erase = ^H; line, and manually typing stty erase ^? removes this line. (This also fixes the issues with stdin and tmux.) However, typing this in every terminal I start is tedious, and while I could put it in .bashrc or something, this strikes me as not being the right way to do it.
What is it that causes stty to use this particular incorrect, non-default setting? And how can I make it stop?
| stty settings are pathologically altered |
stty lnext only affects the terminal device line discipline internal editor (the very limited one you get when running applications like cat that don't have their own line editor). For zsh's editor, you'd need to use bindkey (zle does not do like readline (bash's line editor) that queries the tty LD setting to do the same in its own editor).
stty lnext '^Q' start '' -ixon # for tty LD editor
bindkey '^Q' quoted-insert # for zle
Note that you'd need to do the stty part for every terminal, and do it again any time the tty LD settings are reverted to defaults like after stty sane.
Some systems allow you to change the default tty settings, like HPUX with stty lnext '^Q' < /dev/ttyconf.
And for ^V to paste the content of the X11 CLIPBOARD selection at the cursor when in the zsh line editor:
get-clipboard() {
local clip
clip=$(xclip -sel c -o 2> /dev/null && echo .) || return
LBUFFER+=${clip%.}
}
zle -N get-clipboard
bindkey '^V' get-clipboard |
You can type characters literally by using the "lnext" functionality (often ^V per default) in your tty driver
However, I bind Ctrl+v to "paste" in my terminal emulator. (Since I don't use flow control) I'd like to rebind lnext to Ctrl+q. I tried the following in ~/.zshrc
setopt noflowcontrol # Don't use ^s and ^q for control flow
bindkey -r "^Q" # Unbind ^q from push-line
stty lnext '^Q' # Bind ^q to lnext
However, it doesn't seem to work. Is there a way to rebind lnext to Ctrl+q?
EDIT
I've done more troubleshooting, and can't seem to rebind other stty keys. I removed setopt noflowcontrol for testing, then tried stty start '^A' or stty start '^B'. Neither had any effect; start was still bound to Ctrl+q. (FWIW I tried both a literal ^A or ^B and the character itself with lnext preceding it.)
| How can I rebind stty lnext to ^q? |
It is not a typo, it is in fact what POSIX also says:
onlcr (-onlcr)
Map (do not map) NL to CR-NL on output. This shall have the effect of setting (not setting) ONLCR in the termios c_oflag field, as defined in XBD General Terminal Interface.The fact that the mode isn't called "onlcrnl" is probably just to keep the setting names short and consistent (or at least consistently short).
The Rationale section tells us that the standard for stty was adopted from System V, so I'm assuming there was backward compatibility to older systems to care about too.
|
In the stty documentation, the following is mentioned:
[-]icrnl
translate carriage return to newline
[-]inlcr
translate newline to carriage return
* [-]ocrnl
translate carriage return to newline
* [-]onlcr
translate newline to carriage return-newline
Notice how the "cr" in icrnl and inlcr and ocrnl mean "carriage return" but it means "carriage return-newline" in onlcr.
Is this a typo, or is this how onlcr really works (i.e.
it translates \n to \r\n)?
| Confused about the "onlcr" stty flag |
It looks like you might be a bit confused about how this all works.
First, /dev/ttyACM0 does not represent the USB link, or even the USB endpoint for whatever serial adapter you have connected, it represents the UART inside the adapter that handles the serial communications. Data you read from it will not include any USB headers or framing, just like data you read from /dev/ttyS0 will not include any PCI Express headers or framing. Setting the baud rate on these affects the hardware that it represents, not the bus it's connected to, so this won't do anything to the USB connection.
Second, the baud rate is a hardware setting, not a software one. When you call stty to set it on a serial port, that is telling the kernel to tell the hardware to change what baud rate it is trying to receive data at. This means in particular that any data that was received prior to this change will either be bogus (because it wasn't interpreted correctly by the hardware, sometimes the case if the baud rates are close to each other or exact harmonics), or completely lost (because the hardware just didn't accept it, the more likely case on modern hardware).
If you plan on reading data from a serial line, you need to have the baud rate set correctly prior to any data being transmitted by the other end. This also means that changing the baud rate won't change how the kernel interprets the data. If the data is already buffered in the kernel, then it's not going to change just because you change the baud rate (although it is good practice after changing the baud rate to drain the kernel buffers so that you know any future data is good).
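As a hedged sketch of my own (not from the answer), this is roughly what a program would do instead of calling stty: ask the kernel, through the standard termios calls, to run the UART at a given rate, then drop whatever was received at the old rate. The device path is only the example from the question:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);   /* example device */
    if (fd == -1) { perror("open"); return 1; }

    struct termios t;
    if (tcgetattr(fd, &t) == -1) { perror("tcgetattr"); return 1; }
    cfsetispeed(&t, B115200);        /* ask the driver (and so the UART) for this rate */
    cfsetospeed(&t, B115200);
    if (tcsetattr(fd, TCSANOW, &t) == -1) { perror("tcsetattr"); return 1; }

    tcflush(fd, TCIFLUSH);           /* drop anything that was received at the old rate */
    close(fd);
    return 0;
}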
So, to clarify, the correct method to get data out of a USB to serial adapter without using special software is to:
Set the baud rate during system startup. For a USB to serial adapter, this should probably be a udev rule so that it gets set when the device gets plugged in too.
Use cat (or od if you need the byte values instead of text) to read data. This will return the exact data that is received by the USB to serial adapter (assuming the adapter doesn't do special processing). |
Sometimes I just need to read from a serial device, so I skip the complexities of minicom or screen and just cat. However, this only works if I first set the baud rate of the terminal using stty <baud> before attempting to open the file.
This data is likely already (or can be) buffered in the kernel and, in this case, was received using a UART to USB bridge. USB transfer rates are fixed for a given standard, so setting the baud rate can only affect the interpretation of the data. Given my lack of insight into what this data might look like wrapped in USB packets, I am unsure how to visualize the "interpretation" of USB packet data at some fixed read rate (baud rate).
$ stty 115200
$ cat /dev/ttyACM0
What is really going on here? I understand the implications of this setting in hardware, but what does it mean in userspace software?
| What are the implications of setting the baud rate of a terminal from userspace? |
A process can be sent that SIGTTOU signal (which causes that message), when it makes a TCSETSW or TCSETS ioctl() for instance (like when using the tcsetattr() libc function) to set the tty line discipline settings while not in the foreground process group of a terminal (like when invoked in background from an interactive shell), regardless of whether tostop is enabled or not (which only affects writes to the terminal).
$ stty echo &
[1] 290008
[1] + suspended (tty output) stty echoSee info libc SIGTTOU on a GNU system for details:Macro: int SIGTTOU
This is similar to SIGTTIN, but is generated when a process in a
background job attempts to write to the terminal or set its modes.
Again, the default action is to stop the process. SIGTTOU is
only generated for an attempt to write to the terminal if the
TOSTOP output mode is set.
(emphasis mine)
I believe it's not the only ioctl() that may cause that. From a cursory look at the Linux kernel source code, looks like TCXONC (tcflow()), TCFLSH (tcflush()) should too.
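A minimal C reproducer of my own (a sketch, not from the answer) showing the same effect as stty echo &: the tcsetattr() call below performs a TCSETSW ioctl, so running the compiled program in the background from an interactive shell should get it stopped with SIGTTOU:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios t;
    if (tcgetattr(STDIN_FILENO, &t) == -1) { perror("tcgetattr"); return 1; }
    /* Writing the settings back, even unchanged, is a TCSETSW ioctl. Run the
       compiled program with a trailing "&" from an interactive shell and it
       should be reported as suspended (tty output), whatever tostop says. */
    if (tcsetattr(STDIN_FILENO, TCSADRAIN, &t) == -1) { perror("tcsetattr"); return 1; }
    puts("terminal settings rewritten");
    return 0;
}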
|
I like my background processes to freely write to the tty. stty -tostop is already the default in my zsh (I don't know why, perhaps because of OhMyzsh?):
❯ stty -a |rg tostop
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprtBut I still occasionally get my background processes suspended (this is not a consistent behavior, and I don't know how to reproduce it):
[1] + 3479064 suspended (tty output) | zsh: Why do I get suspended background processes even when I have `stty -tostop`? |
local stty="$(stty -g)"Save the current terminal settings. stty $stty, which is executed both when the function returns normally and on SIGINT, restores these settings.trap "stty $stty; trap SIGINT; return 128" SIGINTIf the function is interrupted by SIGINT (the signal sent by pressing Ctrl+C), restore the terminal settings and return 128. (Why 128? I wonder. Normally the exit status on a signal would be 128 + signal number.)stty cbreak -echo Disable the terminal's crude editing functionality (character/word/line erase), and turn off the echo of characters as they are typed. key=$(dd count=1 2>/dev/null) || return $?Read up to 512 bytes from the terminal (count is a number of blocks, and the default block size is 512 bytes). This is a bit strange: I think the intent was to read one byte. Since dd will return as soon as at least one byte is available, this will return a single byte in practice if a user is typing, but if a program is feeding keystrokes or if the system is slow, this could read more bytes. The code has the benefit that if the user types a multibyte character, all the bytes that make up the character are likely (but not guaranteed) to be read in the loop iteration.
If dd returns a nonzero status, this indicates a read error or a signal; the function returns immediately. The terminal settings are not restored, which is a bug, though most of the time the error would be either that the user pressed Ctrl+C, in which case the terminal settings are restored, or that the terminal has disappeared, in which case the point is moot.
if [ -z "$1" ] || [[ "$key" == [$1] ]]; then
break
fi
Exit the loop if the byte(s) that was read is one of the characters in the argument to the function. If the argument is empty, any character terminates the loop. The argument isn't exactly a list of characters, it's in wildcard character set syntax: an initial ^ or ! inverts the set, a minus sign in most positions is parsed as a range (e.g. 0-9), [:…:] and [.….] denote character classes and collating symbols respectively, and a backslash quotes the next \, [, ] or -.
|
I am having trouble understanding the purpose of trap and the multiple stty invocations in the snippet below.
I was hoping someone could give me a rundown of what is happening.
getkey() {
local stty="$(stty -g)"
trap "stty $stty; trap SIGINT; return 128" SIGINT
stty cbreak -echo
local key
while true; do
key=$(dd count=1 2>/dev/null) || return $?
if [ -z "$1" ] || [[ "$key" == [$1] ]]; then
break
fi
done
stty $stty
echo "$key"
return 0
} | Reading keypresses in shell using trap and Unix signals |
If you think that you can cause the event by a specific action or interaction, by far the simplest method is something like:
watch -d -n1 "stty -F /dev/pts/106 -a | grep -Eo '.(icanon|ixon)'"Run this on a new terminal, the option to -F is the terminal you will run the program on (run tty to see what it is before starting it). Omit | grep .. if you want to observe the complete terminal state.
Next option, if you are using Linux, is to use ltrace to trace library calls; this is similar to strace (and it incorporates some strace capability) but it works with user-space libraries, not just the kernel system calls:
ltrace -tt -e tcgetattr+tcsetattr myprogram ...This will show and timestamp calls to tcgetattr() and tcsetattr(), the libc functions to get and set terminal attributes.
Ultimately, those libc calls will use the ioctl() system call, which you can trace with strace or truss; here's how to use strace on Linux:
strace -tt -e trace=ioctl myprogram [...]
A big advantage here is that strace will happily decode various parameters to syscalls for you.
None of the above will tell you very much about where in the program the problem might logically occur, though; to do that you have two options: a debugger, or DLL injection.
Within gdb you can easily set a breakpoint on tcsetattr(), then check the call stack, but this may be tedious if there are many calls (and will require a debug build or symbols for libc to get the best results).
The most comprehensive option (assuming that the target program is dynamically linked) is to inject your own DLL which intercepts or wraps the functions you need to track, in this case tcsetattr().
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <dlfcn.h>
#include <unistd.h>
#include <termios.h>
#include <execinfo.h>

/*
 * gcc -nostartfiles -shared -ldl -Wl,-soname,tcsetattr -o tc.so wraptc.c
 * LD_PRELOAD=./tc.so stty -icanon ixon
 */

#define DEBUG 1
#define dfprintf(fmt, ...) \
    do { if (DEBUG) fprintf(stderr, "[%14s#%04d:%8s()] " fmt, \
        __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)

/* Pointer to the real libc tcsetattr(), looked up at load time. */
typedef int tcsetattr_fp(int fd, int optional_actions,
                         const struct termios *termios_p);
static tcsetattr_fp *real_tcsetattr;

void _init()
{
    dfprintf("It's alive!\n", "");
    real_tcsetattr = dlsym(RTLD_NEXT, "tcsetattr");
    dfprintf("Hooked %p tcsetattr()\n", (void *)real_tcsetattr);
}

/* Our wrapper: log every call, print a backtrace when something turns off
 * ICANON or IXON on fd 0, then hand over to the real function. */
int tcsetattr(int fd, int optional_actions, const struct termios *termios_p)
{
    void *bt[20];
    size_t btsz;
    int rc, stacktr = 0;

    dfprintf("Caught tcsetattr(%i,%04x,...)\n", fd, optional_actions);
    if ((fd == 0) && !((termios_p->c_lflag) & ICANON)) {
        dfprintf("ICANON off!\n", "");
        stacktr = 1;
    }
    if ((fd == 0) && !((termios_p->c_iflag) & IXON)) {
        dfprintf("IXON off!\n", "");
        stacktr = 1;
    }
    if (stacktr) {
        btsz = backtrace(bt, sizeof(bt) / sizeof(bt[0]));
        backtrace_symbols_fd(bt, btsz, STDERR_FILENO);
    }

    rc = real_tcsetattr(fd, optional_actions, termios_p);
    return rc;
}
Compile and invoke as indicated in the comments. This code locates the real libc tcsetattr() function and provides an alternative version which is used instead. The wrapper calls backtrace() when it sees possibly interesting activity on FD 0, then calls the real libc version. It may require minor adjustment.
|
A particularly large (~10^6 LOC) program causes my stty settings to change from echo ixon icanon to -echo -ixon -icanon and I would like to find the function in this massive program that causes this change.
I obviously would not like to trace execution through this mess of OOP spaghetti code.
How can I monitor stty settings and log what changes them? I'm thinking strace with awk can probably get me the info I need, but I don't know what system calls to filter for.
| Monitor and alert user when stty settings change? |
short: no
long:
terminfo describes the features of a terminal. Those particular capabilities likely were added to AT&T's list of possible terminal capabilities in the late 1980s to describe some long-forgotten terminal that didn't use ^S and ^Q.
curses in general (ncurses specifically) doesn't pay any attention to these features (because no one uses them). stty doesn't pay any attention for a different reason: it ignores the terminal database, being essentially a platform-dependent program with hard-coded knowledge to fill in the cases where a default initial value is needed.
|
I can disable XON/XOFF flow control:
stty -ixonSo I put this in my "~/.profile". However I have started making my own terminal
with "terminfo" and "tic", and I noticed these options:
xon_xoff xon xo terminal uses xon/xoff handshaking
exit_xon_mode rmxon RX turn off xon/xoff handshaking
xon_character xonc XN XON character
the "stty" command in my startup file?
| terminfo disable XON/XOFF |
I figured this out. It looks like a font problem. As soon as I installed the fonts from machine A on machine B, it started working fine.
To install fonts, I copied everything from /usr/share/fonts from machine A to machine B and then ran fc-cache /usr/share/fonts as described here
|
I have vim setup with NerdTree on a remote machine. I have two local machines. When I ssh to remote machine from one of the local machines (say A), it displays all the symbols in NerdTree correctly. However, when I ssh from the other local machine (say B) to remote machine, those symbols show up as some garbled characters. I tried to do some search on this and tried various locale and encoding things I could find. Following are the settings of the two local machines and the remote machine.
Local machine A:
$ stty
speed 38400 baud; line = 0;
eol = M-^?; eol2 = M-^?; swtch = M-^?;
ixany iutf8
$ echo $LANG
en_US.UTF-8
$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Local machine B:
$ stty
speed 38400 baud; line = 0;
eol = M-^?; eol2 = M-^?; swtch = M-^?;
ixany iutf8
$ echo $LANG
en_US.UTF-8
$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
The settings on the local machines seem to be the same. Both of them run a CentOS style OS. Both of the terminals are Gnome Terminals set to use character encoding UTF-8. The only difference I can see is that machine A uses version 2.16.0 with font Courier and machine B uses version 2.31.3 with font monospace. So machine B actually uses a newer version.
Remote machine:
λ echo $LANG
en_US.UTF-8
λ stty
speed 38400 baud; line = 0;
eol = M-^?; eol2 = M-^?;
-brkint ixany
λ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
The λ symbol which is my $PS1 on the remote machine is displayed correctly on both terminals.
How do I proceed from here? Do the fonts installed on local machines matter?
Thanks
| Terminal shows garbled characters with same settings as other terminal showing correct characters |
Your shell does this, to help you edit command-lines.
bash's readline library saves/restores terminal modes. You can see this in the rl_prep_terminal and rl_deprep_terminal functions, called indirectly from edit_and_execute_command.
|
Open two consoles / X terminals. From one, which is attached to say /dev/tty1 (Linux console) or /dev/pts/0 (X terminal), run $ stty -echo. (Now echoing of keyboard input is turned off.) Then from the other, run $ stty --file=/dev/tty1 echo.
Now type something in the first terminal. It echoes; OK, the last stty took effect. But once you press the Enter key, it reverts to the -echo state. Why is this? Is a permanent change possible?
This does not apply to some combinations of stty flags, at least not for 'echo / -echo'. When `$ stty --file= ' is run from the same terminal, the change is permanent.
N.B. Zsh has its own policy for stty. See this question
EDIT: In the first post, I failed to report that this happens in bash, but not in dash. The mention of the zsh case was also added.
| The effect of `stty --file=<terminal> <flag>` is only temporary for consoles in bash. Why? |
The terminal emulator sends the x character, and the terminal driver sees that this has been configured as the erase character. So instead of echoing it back to the emulator, it sends the appropriate sequence to erase the previous character (e.g. backspace-space-backspace).
Even when the erase character is set to Backspace, simply echoing it wouldn't actually erase what was typed. When a BS character is sent to a terminal, it just moves the cursor one character to the left, it doesn't clear it. So the terminal driver would still have to send an extra space-backspace to clear it and leave the cursor at that location.
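A tiny demonstration of my own (not from the answer) of that backspace-space-backspace sequence; run it on any terminal:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("abcx");
    fflush(stdout);
    sleep(1);
    printf("\b");    /* backspace alone: the cursor moves left but the x stays visible */
    fflush(stdout);
    sleep(1);
    printf(" \b");   /* overwrite with a space and step back: now the x is gone */
    fflush(stdout);
    sleep(1);
    printf("\n");
    return 0;
}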
|
As I understand it, when typing characters in a terminal emulator they appear because they are "echoed". We imagine that the terminal is a separate device communicating with the computer via a two-way channel, and each key typed doesn't update the screen immediately, but appears when it is sent back from the computer.
My question is how it is possible for the backspace key, or whatever key is set to "erase" with stty, to appear to erase a character on the screen. If in an xterm I do
$ stty erase x
$ cat -
aaaaaaaaaax
the last x I type appears to erase the last a. However if this were a real terminal, separate from the computer, it wouldn't have any way of knowing what the stty erase character was. The only way I would expect to get this behaviour would be if the erase character was ^H and it was echoed, and the terminal interpreted this as a special control character telling it to erase the character before the cursor.
Is this a peculiarity of terminal emulators, where they "cheat" and look up what the stty erase character is?
| How is the display updated when the erase character is typed in a terminal emulator? |
An unquoted ~ expands to /path/to/your/home/dir in most shells.
The stty man page doesn't say what it does when the argument to erase is something other than a single character or undef or ^ followed by a character, but it looks like your stty uses the first character of the argument string.
Type stty erase '~' (with the single quotes). It's good practice to always quote the argument, because some shells treat ^ as the pipe symbol.
|
Yesterday I was fed up with being forced to type Caps+Backspace to erase character in Putty, because a Backspace was printing a ~.
I found some info on the internet saying you should type stty erase ~, or at least that's how I understand it.
Since then, when I type / it sends a ← to the terminal, and I'm not even able to copy/paste in my putty.
Does anyone have a good idea to save me?
Note:
/ still works in binaries like vi or more, but not in bash (where I typed the command).
Additional info:
bash-3.2# stty -a
speed 38400 baud; 55 rows; 210 columns
eucw 1:1:0:0, scrw 1:1:0:0:
intr = ^C; quit = ^\; erase = /; kill = ^U; eof = ^D; eol = <undef>
eol2 = <undef>; start = ^Q; stop = ^S; susp = ^Z; dsusp = ^Y; reprint = ^R
discard = ^O; werase = ^W; lnext = ^V
-parenb -parodd cs8 -cstopb -hupcl cread -clocal -parext
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -iuclc
ixon -ixany -ixoff -imaxbel
isig icanon -xcase echo -echoe -echok -echonl -noflsh
-tostop -echoctl -echoprt -echoke -flusho -pending -iexten
opost -olcuc onlcr -ocrnl -onocr -onlret -ofill -ofdel tab3
Edit 2:
Also, stty -g provides a string usable by another stty, so if you have a working terminal elsewhere you can export the settings from one and import them into the buggy one.
| Since I execute 'stty erase ~' I can't type / anymore in Putty |
I presume you have something like a serial-to-usb adapter on the pi, and have setup a getty so you can login to this tty from your PX-8. Once logged in, an stty ixon from the shell will enable xon/xoff flow control for output from the pi. If you now ssh from the shell to login to some remote, the flow control is inadequate to stem large output from the remote.
What seems to be happening (do strace -v -f -o /tmp/trace ssh and look for ioctl(0,...)) is that ssh deliberately puts the terminal into a raw mode, which includes switching off the ixon setting. This is usually what is desired; you want every character typed to go to the remote, which has its own pty to handle flow control and so on.
Unfortunately, output from the remote is sent in large buffers, so an xoff character from the PX-8 will have little effect as by the time it gets to the remote, all of the large buffer already received by the pi will continue to be output, probably causing overflow and loss of data.
What you could try is re-issuing an stty ixon on the pi after the ssh connection has been made. One way of doing this automatically is to add to your ~/.ssh/config the 2 option lines
PermitLocalCommand yes
LocalCommand sleep 10 && stty ixon -F /dev/tty &
PermitLocalCommand is off by default for security; see man ssh_config.
|
The puzzle I am trying to solve is why larger outputs of text eventually fall apart into scrambled text.
For context I am working on an older machine (EPSON PX-8) connected to Pi3 over RS232 using terminal emulation software on the PX-8 called TEL
TEL Settings - Baud: 9600, Char Bits: 8, Parity: NONE, Stop Bits: 2, RTS: ON, Flow Control: ON
Initially I had observed this issue between the PX-8 and Pi3. I was able to resolve it by enabling flow control for XON/XOFF signaling. However, when I attempt to telnet or ssh to another Linux host from the Pi3, I get scrambled text again when attempting to output larger blocks of text.
The text output below is an example of what happens when I attempt to print my command history.
1 sudo rasp-config
2 sudo raspi-config
3 sudo nano /boot/cmdline.txt
4 tail /boot/cmdline.txt
5 sudo shutdown -r now
6 sudo vim ~/boot/cmdline.txt
7 cd /./boot
8 dir
9 sudo vim cmdline.txt
10 sudo vim config.txt
11 sudo shutdown -r now
12 dfgdf
13 vim
14 sudo vim cmdline.txt
15 cd /./boot
16 sudo vim cmdline.txt
17 sudo shutdown -r now
18 cd /./boot
19 sudo vim cmdline.txt
20 sudo shutdown -r now
21 ping 8.8.8.8
2 xprt TEM=Vvj9s9ds9j3oin so nat1 machine
x Rom =vos cngas-2goses9g3
-xtiet n n5
-s oiy
y | Does XON/XOFF flow control transmit through multiple terminal session hops? |
That eol setting is not for the key that would take you to the end of some line-editing buffer, that's a setting of the tty line discipline.
It is for its very basic line editor, the one used when entering input for applications (like cat, sed) that don't have their own line editor. That editor doesn't have cursor positioning, the only editing it can do is via backspace (stty erase), Ctrl+W (stty werase) and Ctrl+U (stty kill) possibly more on some systems.
It is done in the tty device driver itself in the kernel, the applications (cat, sed...) don't see those characters.
The eol setting is only to tell that driver to recognise a different (additional) character from linefeed (aka newline aka ^J) as the line ending character. Upon entering that character, the line discipline would send the characters entered so far to the reading application.
For instance, to input text one word at a time instead of one line at a time, you could do:
stty eol ' '; cat
And you'd see that each time you press space, cat would output the text you've entered (including that space character).
If you're at the prompt of a command that implements its own line editor, then making End move the cursor to the end of the current buffer would not be done via stty but by specific configuration of that command (if at all).
For instance, with the zsh shell, that would be done with:
bindkey '^[[F' end-of-line
bindkey '^[OF' end-of-line
Assuming your terminal sends the <ESC>[F or <ESC>OF character sequence when you press that End key as your "F" suggests.
Some applications will automatically bind End to their end-of-line action. To do that, they will query the local termcap or terminfo terminal databases to find out what character sequence your terminal sends upon that key press.
For that, they use the $TERM variable. If the entry for that key in that database does not match what your terminal sends, then that won't work.
You can try:
tput kend | sed -n l
To see what the database thinks the End key sends, if your tput uses terminfo; or check for your $TERM entry in /etc/termcap if using termcap. You may be able to find an entry there that more closely matches your minicom (or the terminal emulator that hosts it) behaviour.
Edit based on new info
So, most likely, you're running minicom in a modern xterm-like terminal and communicating over serial. At the other end of the serial line, getty assumes you're running an at386 console (which I believe is actually the internal console driver of old PC-based AT&T systems). That is very far from a modern xterm.
Looking at a Solaris system here, which in many respects is about as modern as your old AT&T system, there is an xterm entry in terminfo but it lacks the kend capability.
What you could do is upload the terminfo definition of your terminal on the machine you run minicom on (infocmp > file), transfer that to the SysV machine, and try to compile it over there with tic (setting $TERM to the same value there; set the TERMINFO environment variable beforehand to something like ~/.terminfo if you're not administrator there). If that doesn't work because the curses version is too ancient, you could use the vt100 entry of the AT&T system instead, and just edit in the kend=\EOF of your terminal, change the name and use tic again.
Like:
cat > my-term.info << \EOF
my-term|My VT100 compatible terminal with an end-key,
am, mir, msgr, xenl, xon,
cols#80, it#8, lines#24, vt#3,
acsc=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~,
bel=^G, blink=\E[5m$<2>, bold=\E[1m$<2>,
clear=\E[H\E[J$<50>, cr=\r, csr=\E[%i%p1%d;%p2%dr,
cub=\E[%p1%dD, cub1=\b, cud=\E[%p1%dB, cud1=\n,
cuf=\E[%p1%dC, cuf1=\E[C$<2>,
cup=\E[%i%p1%d;%p2%dH$<5>, cuu=\E[%p1%dA,
cuu1=\E[A$<2>, ed=\E[J$<50>, el=\E[K$<3>,
el1=\E[1K$<3>, enacs=\E(B\E)0, home=\E[H, ht=\t,
hts=\EH, ind=\n, ka1=\EOq, ka3=\EOs, kb2=\EOr, kbs=\b,
kc1=\EOp, kc3=\EOn, kcub1=\EOD, kcud1=\EOB,
kcuf1=\EOC, kcuu1=\EOA, kent=\EOM, kf0=\EOy, kf1=\EOP,
kf10=\EOx, kf2=\EOQ, kf3=\EOR, kf4=\EOS, kf5=\EOt,
kf6=\EOu, kf7=\EOv, kf8=\EOl, kf9=\EOw, rc=\E8,
rev=\E[7m$<2>, ri=\EM$<5>, rmacs=^O, rmkx=\E[?1l\E>,
rmso=\E[m$<2>, rmul=\E[m$<2>,
rs2=\E>\E[?3l\E[?4l\E[?5l\E[?7h\E[?8h, sc=\E7,
sgr=\E[0%?%p1%p6%|%t;1%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;m%?%p9%t^N%e^O%;,
sgr0=\E[m^O$<2>, smacs=^N, smkx=\E[?1h\E=,
smso=\E[1;7m$<2>, smul=\E[4m$<2>, tbc=\E[3g, kend=\EOF
EOF
TERMINFO="$HOME/.terminfo" export TERMINFO
mkdir -p "$TERMINFO"
tic my-term.info
And add:
if [ "`tty`" = "the-serial-device" ] && [ "$TERM" = at386 ]; then
TERMINFO=$HOME/.terminfo
TERM=my-term
export TERM TERMINFO
fi
to your ~/.profile (where the-serial-device is whatever tty outputs when you login over serial).
|
How can I use the ctrl+e keyboard combination in emacs mode to go to the end of the line, but also be able to use the End key to do the same? How do I set that with stty?
I have tried this combination copied from linux terminal:
stty eol M-^?
but when I press End I get F on the keyboard.
The system is old ATT System V on minicom terminal.
My term is 386AT and this is the terminfo definition
# Reconstructed via infocmp from file: /usr/share/lib/terminfo/3/386AT
AT386|at386|386AT|386at|at/386 console @(#)386.ti 1.4,
am, bw, eo, xon,
colors#8, cols#80, lines#25, ncv#3, pairs#64,
acsc=``aaffggjYk?lZm@nEooppqDrrsstCu4vAwBx3yyzz{{||}}~~,
bel=^G, blink=\E[5m, bold=\E[1m, clear=\E[2J\E[H,
cr=\r, cub=\E[%p1%dD, cub1=\E[D, cud=\E[%p1%dB,
cud1=\E[B, cuf=\E[%p1%dC, cuf1=\E[C,
cup=\E[%i%p1%02d;%p2%02dH, cuu=\E[%p1%dA, cuu1=\E[A,
dch=\E[%p1%dP, dch1=\E[P, dl=\E[%p1%dM, dl1=\E[1M,
ed=\E[J, el=\E[K, flash=^G, home=\E[H, ht=\t,
ich=\E[%p1%d@, ich1=\E[1@, il=\E[%p1%dL, il1=\E[1L,
ind=\E[S, indn=\E[%p1%dS, invis=\E[9m, is2=\E[0;10m,
kbs=\b, kcbt=^], kclr=\E[2J, kcub1=\E[D, kcud1=\E[B,
kcuf1=\E[C, kcuu1=\E[A, kdch1=\E[P, kend=\E[Y,
kf1=\EOP, kf10=\EOY, kf11=\EOZ, kf12=\EOA, kf2=\EOQ,
kf3=\EOR, kf4=\EOS, kf5=\EOT, kf6=\EOU, kf7=\EOV,
kf8=\EOW, kf9=\EOX, khome=\E[H, kich1=\E[@, knp=\E[U,
kpp=\E[V, krmir=\E0, op=\E[0m,
pfx=\EQ%p1%{1}%-%d'%p2%s', rev=\E[7m, rin=\E[S,
rmacs=\E[10m, rmso=\E[m, rmul=\E[m, setab=\E[4%p1%dm,
setaf=\E[3%p1%dm,
setb=\E[4%?%p1%{1}%=%t4%e%p1%{3}%=%t6%e%p1%{4}%=%t1%e%p1%{6}%=%t3%e%p1%d%;m,
setf=\E[3%?%p1%{1}%=%t4%e%p1%{3}%=%t6%e%p1%{4}%=%t1%e%p1%{6}%=%t3%e%p1%d%;m,
sgr=\E[10m\E[0%?%p1%p3%|%t;7%;%?%p2%t;4%;%?%p4%t;5%;%?%p6%t;1%;%?%p9%t;12%;%?%p7%t;9%;m,
sgr0=\E[0;10m, smacs=\E[12m, smso=\E[7m, smul=\E[4m, | stty on old AT&T unix: how to add End-key for "end of line"? |
Untested!
#!/bin/bash
ETX=$'\003'
STX=$'\002'
# Open /dev/ttyUSB0 on FD9
exec 9< /dev/ttyUSB0
# do any stty stuff needed on fd9
# e.g.
# stty 9600 < /proc/self/fd/9 > /proc/self/fd/9
# now loop, reading from the device,
while IFS= read -rd "$ERX" -u 9 wibble
do
wibble=${wibble#"$STX"}
printf 'Got %q\n' "$wibble"
# Do something
done
With bash, that won't work if the data includes NUL bytes. You'd need to use zsh instead if that were the case.
|
I am trying to write a bash script that is able to interpret data coming from a serial device. I configure the port in raw mode and then I am able to do a simple cat of /dev/ttyUSB0 and see the data. My issue is how to get the single line of data that the device sends into a bash variable so I can freely manipulate it.
I would like to have it in bash before going to Python so I always have a script that I know works on every Linux machine.
The data I receive has the following format: STX<26 x 4Bit-Nibbles coded as ASCII Payload>ETX
Ideally I could just store the new payload (so without STX and ETX) in a Bash variable every time I get a new string of data.
Solution
with the help of @icarus answer, I cooked up something that works like a charm. First I configure the serial port to generate the interrupt on the STX value:
stty /dev/ttyUSB0 115200 intr 0x02
then to get the info I wanted, I use this loop:
ETX=$'\003'
while read -d "$ETX" line; do echo $line; done < /dev/ttyUSB0
Thanks again for the help, really great first experience on this website. Cheers
| Interpreting serial data via bash |
You could try using UART1 or UART2 on the Mira board. According to the manual they use TTL level signals. The GPS module outputs 3 V, but tolerates 5 V on input. The +3 V should be enough to be interpreted as a "1" on a TTL input. The RS-232 signals are not suitable without a buffer, because RS-232 specifies +3..+12 V for the space state and -3..-15 V (that's minus 15 V) for the mark state.
|
Hardware
PHYTEC Mira Board with i.MX6 Processor
Operating System
Yocto Image created using the BSP provided by PHYTECSource with minimal packages in it.
The board has a UART called UART3, and its software interface within the OS is /dev/ttymxc2 (Hardware Manual). The only tool available to check/set the serial port on the board is stty.
Task
I wish to interface an Adafruit Ultimate GPS to the UART3 to read information from it on the Mira Board.
Attempt-1
I connected both components in the following manner.
UART3_RXD_RS232 (MIRA) --> TX pin (GPS)
UART3_TXD_RS232 (MIRA) --> RX pin (GPS)
Set the serial port as follows:
stty -F /dev/ttymxc2 speed 9600
Read value:
cat /dev/ttymxc2
Result: Garbage Values. Tried all possible settings and still obtained garbage values. Wrote a simple node script to try to read the information coming from the port but I got an error stating that the characters (garbage values) are not recognized.
Attempt-2
Initially I tried to connect the GPS to a simple Arduino Nano to obtain the values from the GPS and this works, confirming that the GPS sends information and no defect occurred.
I connect the Serial Ports of the Mira and the Arduino and try to send information from the Mira to the Arduino's serial interface and read it through serial console.
setup
MIRA_Board (serial UART3) ---> Arduino Nano (Serial Pins) --USBCable--> Computer
Pins
UART3_RXD_RS232 (MIRA) --> RX PIN NANO
UART3_TXD_RS232 (MIRA) --> TX PIN NANO
I am logged into the Mira board through SSH. The following command is triggered, hoping to expect the same value on the computer's serial console through the Arduino
echo 'hello' > /dev/ttymxc2
Result: Still Garbage Values on the Console.
The configuration for /dev/ttymxc2 is as follows:
stty -F /dev/ttymxc2 -a
speed 9600 baud;stty: /dev/ttymxc2 line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0; -parenb -parodd cs8 hupcl -cstopb cread clocal –crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
If this issue is due to serial configuration mismatch, how do I troubleshoot it using stty?
P.S.: Baud rate for GPS should be 9600 which is the case
| troubleshooting serial settings through stty |
The Belkin F5U109 seems to be a device of fairly old design, so perhaps the F5U409 with the same usb vendor:device id is similar. In this case the Linux driver chosen because of the id is the mct_u232.c. We can read in the .h file for Flow control:
no flow control specific requests have been realized
apart from DTR/RTS settings. Both signals are dropped for no flow control
but asserted for hardware or software flow control.
So it seems that XON/XOFF software flow control is not implemented in this driver, which is derived by sniffing the usb commands issued under Windows98. Perhaps the hardware itself does not provide this feature.
You could try implementing the flow control at the user level, but it is unlikely to be adequate as there will probably be a fifo on input and output so that when the XOFF arrives at the user level, there may still be too many characters already in the fifo that cannot be cancelled. Perhaps the PX-8 provides other protocols that could be used to packetize the data?
You might still be able to use hardware flow control, by connecting the extra modem lines RTS and CTS (pins 7 and 8 for 9pin DB9, 4 and 5 for DB25). You may need to swap these if your PX-8 is wired as a computer rather than a terminal. You would need stty crtscts too, and perhaps -clocal.
Alternatively, there are other serial-usb devices that Linux supports better, due to adequate documentation by the manufacturer, such as the popular FTDI series. The FTDI driver seems to have code to set the XON and XOFF characters in the device, which would allow for a rapid response by the hardware to the reception of an XOFF character, without needing to wait for the character to arrive at the kernel to be recognised. There are illegal copies of the FTDI chip, so try to buy a reputable make to ensure full compatibility.
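As a rough sketch, assuming the adapter shows up as /dev/ttyUSB0 and keeping the 9600 baud, 2-stop-bit settings used with the PX-8:
stty -F /dev/ttyUSB0 9600 cs8 cstopb crtscts -clocal
stty -F /dev/ttyUSB0 -a | tr ' ' '\n' | grep crtscts   # should print "crtscts", not "-crtscts"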
|
This is related to a previous thread I created about a month ago and which was answered.
Today I am attempting to set up a serial console login prompt on a laptop running Ubuntu 20 with a Belkin F5U409 USB serial adapter. I have run into the same issue where larger text output will eventually fall apart into scrambled text. However, this time setting stty ixon does not resolve the behavior. See below for sample output of the issue.
For context, the computer I am using to connect to the Ubuntu laptop over RS232 is an EPSON PX-8. On the PX-8 I am using terminal emulation software called TEL.COM. See below for terminal parameters I have configured on the PX-8.
I am using systemd to enable a console on USB0 with systemctl start [emailprotected]. Do I need to configure flow control with systemd? Is there some place other than stty I need to configure parameters for ttyUSB0?
I have attempted to set this up on another laptop running Debian 10 but get the same behavior.
TEL.COM settings on the PX-8:
Baud: 9600, Char Bits: 8, Parity: NONE, Stop Bits: 2, RTS: ON, Flow Control: ON
Example of this issue when I attempt to output command history:
albert@t450:/$ history
1 sudo rasp-config
2 sudo raspi-config
3 sudo nano /boot/cmdline.txt
4 tail /boot/cmdline.txt
5 sudo shutdown -r now
6 sudo vim ~/boot/cmdline.txt
7 cd /./boot
8 dir
9 sudo vim cmdline.txt
10 sudo vim config.txt
11 sudo shutdown -r now
12 dfgdf
13 vim
14 sudo vim cmdline.txt
15 cd /./boot
16 sudo vim cmdline.txt
17 sudo shutdown -r now
18 cd /./boot
19 sudo vim cmdline.txt
20 sudo shutdown -r now
21 ping 8.8.8.8
2 xprt TEM=Vvj9s9ds9j3oin so nat1 machine
x Rom =vos cngas-2goses9g3
-xtiet n n5
-s oiy
y
stty configuration on the Ubuntu machine:
albert@t450:/$ stty -a
speed 9600 baud; rows 40; columns 80; line = 0; intr = ^C; quit = ^\; erase = ^?;
kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q;
stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1;
time = 0; -parenb -parodd -cmspar cs8 -hupcl cstopb cread -clocal -crtscts -ignbrk
-brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon ixoff -iuclc -ixany
-imaxbel iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0
bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop
-echoprt echoctl echoke -flusho -extproc
Note that all of these parameters are set in stty:
ixon
ixoff
stop = ^S
start = ^Q;
cs8
cstopb
-parenb | Serial flow control issue with ttyUSB0 |
The M- means the high bit is set, so add "0x80" to the character encoding.
The ^? means the "DEL" character so 0x7F.
Add the two together and we get 0xFF.
We can test this:
$ stty -a | grep -w eol | sed 's/.*; //'
eol = <undef>;
$ stty eol 0xff
$ stty -a | grep -w eol | sed 's/.*; //'
eol = M-^?;
A full list of control characters and their caret notation is available under the Control code chart section of the Wikipedia article on ASCII.
|
I ran stty --all on a terminal which had been reconfigured by a badly behaved process which exited before putting my terminal back to its original settings. Part of the output reads: eol = M-^?;. What is this encoding? What does that sequence of characters mean?
The man page has this elucidating remark, presumably for those who understand a-priori:In settings, CHAR is taken literally, or coded as in ^c, 0x37, 0177 or 127; special values ^- or undef used to disable special characters. | How are characters encoded in stty's output? |
You are changing attributes of the device, not the file descriptor. The file descriptor is just a way of identifying which device you're talking about. If both stdin and stdout are the same tty (/dev/pts/0 for example) then it doesn't matter which one you use for tcgetattr and tcsetattr.
Since echoing by definition involves input and output, it's hard to imagine what it might mean for echoing to be enabled on input and not output, or vice versa. Either the tty driver will echo, or it won't. Did you have a goal in mind that involves modifying the echo behavior in some way? If so, say what you're trying to accomplish and maybe someone will know how to do it properly.
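A quick way to convince yourself that both descriptors name the same device in an interactive shell (a sketch; the actual path will differ on your system):
readlink /proc/$$/fd/0 /proc/$$/fd/1   # typically both print something like /dev/pts/0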
|
ECHO setting is enabled on stdin and stdout by default.
But why, if we disable ECHO on stdin, is it also disabled on stdout?
They have two separate descriptors - 0 and 1, so why are they changed simultaneously, as if they were one and the same file descriptor?
The following program demonstrates this:
#include <termios.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    struct termios tty_stdin;
    struct termios tty_stdin_restore;
    struct termios tty_stdout;
    tcgetattr(STDIN_FILENO, &tty_stdin);
    tcgetattr(STDIN_FILENO, &tty_stdin_restore);

    /* disable echo on stdin */
    tty_stdin.c_lflag &= (tcflag_t) ~ECHO;
    tcsetattr(STDIN_FILENO, TCSANOW, &tty_stdin);

    /* observe that it was automatically disabled on stdout */
    tcgetattr(STDOUT_FILENO, &tty_stdout);
    printf("STDOUT ECHO after changing STDIN: %d\n", tty_stdout.c_lflag & ECHO ? 1 : 0);

    tcsetattr(STDIN_FILENO, TCSANOW, &tty_stdin_restore);
return 0;
} | Why changing tty settings on one file descriptor affects another? |
I believe I found that the problem is caused by the script running all commands (except the first) in the background. I can force the first command to have the same problem by forking it with &.
After not being able to find a way to have the script run each command in the foreground, one-after-the-other, I have found an alternative solution...
I can put all the commands in a custom screenrc file (e.g., my_screenrc) as such:
# Import default screenrc
source ${HOME}/.screenrc

# Run screen-specific commands (not bash ones)
screen # Run bash in window 0
screen vim # Run vim in windows 1 through 3 (with correct settings)
screen vim
screen vim
I can then run this from bash with:
screen -c my_screenrc |
When running Screen, I can use Ctrl+ac to create a new window and run vim in each window. Also from Screen, I can run the command screen vim multiple times to open new windows with vim already running. These work as expected. However...
If I put the command multiple times in a script, such as:
#!/bin/bash
screen vim
screen vim
screen vim
...and run that script from within Screen, the first command will run as expected, but the second and following ones will not.
Things I have noticed are:
Window 2 and beyond does not have stty -ixon applied, which I have set in .bashrc
If I don't have colorscheme set explicitly in .vimrc, it will use one scheme in window 1 and another in all following windows
Sometimes a command will be skipped, i.e., sometimes only two new windows will be opened where the script was set to open three
If I do a :windowlist, window 2 and following will not have the login flag set (running screen vim directly will set this flag), e.g.,
Num Name Flags
0 bash $
1 vim $ <-- running the script from window 0 opened 1..3 (no flag on 2 or 3)
2 vim
3 vim
4 vim $ <-- manually running `screen vim` from window 0 always sets the flag
Using Ctrl+aL on a window that's not logged in will return the message This window is not logged in and the flag will not be set. Pressing the keys again will then toggle between logged in and out (though stty -ixon etc' will still not be applied)
Running htop will show all instances of vim (including ones that are not logged in) are running under my user.
Why does manually opening multiple windows apply my settings correctly, but using a script doesn't?
I am new to Linux and not sure if I'm doing something silly here.
| Starting multiple Screen windows from shell script uses wrong configuration |
facepalm I seem to have a mistake in a startup file:
stty werase 2> /dev/null
|
In my decades of experience of *n?x based OSes, I have never seen this before:
It happens during SSH commands. Regardless of terminal or terminal emulator, pressing the number 2 within SSH sends a ^W (i.e. stty werase). Troublesome since I need to do ^V prior to pressing 2. Note: this occurs within a password prompt and/or with the SSH session.
This behavior does not occur outside of SSH and ^W behaves as it should both inside and outside of SSH.
Has anyone ever seen this behavior?
Update #1 stty -a (outside of SSH) shows my werase as 2. I am not sure when or where that was set.
| ssh: 2 acts like ^w regardless of terminal |
So I found the answer in another forum. I will put it here: basically just add a timeout setting and a while loop to constantly read the port.
stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo time 3 min 0
while [ true ]; do
cat /dev/ttyS1
done
That's all.
|
FYI I am running busybox. I am able to send data to my ttyS1 using the following command:
stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo
echo -en 'data here' > /dev/ttyS1
But when I try to read, I do this:
stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo
cat /dev/ttyS1
But the program ends without any messages.
I also tried cat < /dev/ttyS1, but that doesn't work either.
I am positive that the data is being sent to this port, since I have an LED indicator showing that data is coming in. And the connection settings are set to the same values: 115200 baud, 8 bit, even parity, 1 stop bit.
| Reading data from serial port |
You can try things out with a script such as
#!/bin/sh

for fd in 0 1 2; do
if [ -t $fd ]; then echo $fd is a TTY; fi
done
Running this I see that:
if the script is run on its own, all three FDs are TTYs
if the script is run at the start of a pipeline, stdin and stderr are TTYs
if the script is run in the middle of a pipeline, stderr is a TTY
if the script is run at the end of a pipeline, stdout and stderr are TTYs
That all seems logical. Obviously if the pipeline redirects stderr as well the behaviour will be different.
To answer your question in full, I don't think it's possible to determine the terminal characteristics in all cases, but if you can find a FD (examine all the entries in /dev/fd) that's a TTY, you can run stty on that... But it isn't possible from the middle of a pipeline to determine where the end of the pipeline is going.
As mentioned by Janis, if you want to find out information about the controlling terminal, regardless of the pipeline, you can use /dev/tty, e.g. with stty -F /dev/tty; but that will fail if the script is run without a controlling terminal, e.g. from a cron job.
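A minimal sketch of that fallback (GNU stty assumed for the -F option):
cols=$(stty -F /dev/tty size 2>/dev/null | awk '{print $2}')
if [ -z "$cols" ]; then
    cols=80   # no controlling terminal at all (e.g. a cron job); fall back to a default
fi
echo "formatting for $cols columns"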
|
I have a bash script that columnizes a list of items. By default it will guess at the number of columns to output, based on the terminal width as reported by stty size. But when the script is in a pipeline, stty reports "Inappropriate ioctl for device".
What I want is to allow my script, when being executed as a command in the middle of a pipeline, to discover whether the pipeline it's part of is ultimately outputting to a tty - and if so, be able to stty its characteristics.
SOLUTION: As pointed out below, stty -F /dev/tty seems to work anywhere in a pipeline.
| Can a command in a shell pipeline determine the tty characteristics of its context? |
Add this to your ~/.vimrc:
silent !stty -ixon
If that creates problems with a non-tty vim like gvim (no idea, not able to test it), try this:
silent !test -t 0 && stty -ixonIt looks like they decided to make -ixon the default in recent versions of vim, so this is soon going to stop being an issue with vim (notice that the original patch from the originator of the issue was untested garbage, but the final patch got it right).
Generally (with other programs), there's no general solution ;-)
xterm has a ttyModes resource (and -tm option) which can be used to set the initial modes, but which does not support -ixon. I don't think that kitty has even that.
Running stty -ixon from inside vim doesn't work
That's because vim restores the initial (usually non-raw) state of the terminal before running external commands via :!command, and then changes it back to raw mode upon replying to the "Press ENTER or type command" prompt, so any changes performed by command are lost.
See this for the kind of problems that just blindly turning IXON off (in that case by the ssh client on the local tty) may cause for people using actual serial lines to connect to their devices. Most programs which call cfmakeraw (or duplicate it exactly) are bound to fail in the same way (script(1) is the first one that comes to mind).
|
I've successfully disabled Ctrl+S in terminal using instructions here.
However, if I launch a terminal programs (namely terminal vim) from "outside the terminal", Ctrl+S still freezes the screen.
What I mean by "outside the terminal":Using a keyboard shortcut to run something like kitty -e vim to open vim in terminal
Using vimserver to launch terminal vim from the file manager, which in the end pretty much does the same as aboveIs there a way around this? (Other than unlearning the habit of constantly saving with Ctrl+S ;)
(There are reasons I cannot and don't want to use gVim.
Running stty -ixon from inside vim doesn't work, and I'm sure there are good reasons why not – I'm not really familiar with stty yet...)
| How to disable Ctrl+S in terminal applications started from file manager or via shortcut? |
echo is a setting of the terminal device (the discipline part in the tty kernel driver), termcap is about controlling the terminal (real or emulator) via escape sequences, it's two separate things.
Here you want to prevent the application to do a specific ioctl. One way could be by detaching it from the terminal.
socat - exec:okular,pty,raw
Would run okular attached to a different pseudo-terminal device and socat would pass along data from your terminal to that one.
To pass arbitrary arguments, with zsh:
okular() {
CODE="$0 ${(j[ ])${(qq)argv}}
" socat - 'system:"eval \"$CODE\"",pty,raw'
} |
Recently, more and more GUI applications (okular and inkscape, among others) started disabling terminal echo while they are active. This seems unnecessary at best (if you don't background the process, you wouldn't type in it anyway and if you did, who cares? And if you background it, it has no effect anyway). But what is worse is, that if you happen to do something wrong and instinctively kill it with ctrl-C (I do that if I accidentally call okular on many files, for instance), the echo stays disabled. It's not the end of the world, but needing to blindly type stty echo every time this happens is making me nervous.
Is there a way to tell the terminal to disable toggling these settings? I think it might be possible to modify the termcap but that's where my knowledge about it ends. Of course, this probably makes invisible password visible but I want to know how to do it anyway, having one tamper-proof terminal open would be enough.
| Prevent application to tamper with terminal settings? |
Those are xterm-style "mouse" events. You could in principle turn off those using a suitable printf or echo, but reset does it already as part of the rs1 or rs2 string in the terminal description (see output of "infocmp").
reset uses this for instance:
rs1=\Ec,
rs2=\E[!p\E[?3;4l\E[4l\E>,
and prefers the latter (the former is a hard-reset).
printf '\033[!p'which is more typing than
reset(even if you use some non-standard echo which knows about \E). But that comment about arrow keys: the soft reset puts the cursor-keys back in normal mode, while vi thinks they're in application mode.
To disable only the mouse, take a look at the output of infocmp -x:
XM=\E[?1006;1000%?%p1%{1}%=%th%el%;,That tells ncurses how to enable/disable the mouse. Your terminal description isn't exactly that, but the 1000 is the normal mouse mode that your example shows. So... you could do this
printf '\033[?1000l'(lowercase L disables), and kill just the mouse.
You're seeing those because "some program" doesn't clean up after itself.
|
While running some program (Configure), my terminal will be messed up. My typing is not shown. I can use "stty sane" to fix it but I notice that whenever I click my mouse on the terminal (I use PuTTY), strange characters appear. e.g.
# O:#O: O:#O: 7-#7- BE#BE ...They seem to be 5 character sequences and if I click on the same location, the same sequence appears.
I know that I can fix it using "reset" but I want to understand what they are and if there is a way to fix it without reset. And maybe even a way to find the root cause of what inside "Configure" messed up my terminal.
| strange 5-characters-sequence appear in PuTTY terminal with mouse click |
That's the readline(3) line-editing library, which is usually statically built as part of bash, but is also used by other programs.
Every time it starts reading a command from the user, readline saves the terminal settings, and puts the terminal into "raw" mode [1], so it could be able to handle moving the insertion point right and left, recall commands from the history etc. When readline(3) returns (eg. when the user has pressed Enter), the original settings of the terminal are restored. Readline will also mess with signals, which may result in some puzzling behaviour.
If you strace bash, look for ioctl(TCSETS*) (which implements tcsetattr(3)) and for ioctl(TCGETS) (tcgetattr(3)). Those are the same functions used by stty(1). If you run bash with --noediting you will see that it leaves the terminal settings alone.
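One way to watch this happen (a sketch) is to trace just the ioctl calls of an interactive bash, run a command such as sleep 1, exit, and then look at the log:
strace -f -e trace=ioctl -o /tmp/bash-ioctl.log bash --norc -i
grep -E 'TCGETS|TCSETS' /tmp/bash-ioctl.log | head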
[1] not exactly the "raw" mode of cfmakeraw(3); you can see the exact details here. All those terminal settings are documented in the termios(3) manpage.
|
Open xterm, run tty and see pseudo terminal slave file (let's say it is /dev/pts/0).
Then open another xterm and run
$ stty -F /dev/pts/0
speed 38400 baud; line = 0;
lnext = <undef>; discard = <undef>; min = 1; time = 0;
-brkint -icrnl -imaxbel iutf8
-icanon -echo
Then run /bin/sleep 1000 in first xterm. Then run the same stty command in second xterm again:
$ stty -F /dev/pts/0
speed 38400 baud; line = 0;
-brkint -imaxbel iutf8
Then terminate sleep command in first xterm. Then run the same stty command in second xterm again:
$ stty -F /dev/pts/0
speed 38400 baud; line = 0;
lnext = <undef>; discard = <undef>; min = 1; time = 0;
-brkint -icrnl -imaxbel iutf8
-icanon -echo
We see that bash changes tty attributes before running a command and restores them back after running a command. Where is this described in the bash documentation? Are all tty attributes restored, or may some attributes not be restored if they are changed by the program?
| How bash sets tty attributes before and after running a command? |
The character-based console (tty1, tty2 etc.) is a terminal emulator: it mimics the operation of a serial-port connected terminal, with some Linux-specific extensions. This emulation includes support for XON/XOFF handshaking... and the characters used for this type of handshaking can be easily produced on a keyboard, even by accident.
If you press Control-S, it sends the XOFF control character, which stops output to the terminal until you press Control-Q (aka the XON character). Note that input is not actually stopped along with the output: if you type anything while XOFF is in effect, those characters will be output as soon as you press Control-Q.
This feature can be controlled using the stty command, specifically with the ixon and ixany flags. To disable the feature completely, use stty -ixon; to enable it again, use stty ixon. If you use stty ixany while the feature is enabled, any key will resume the output; if you use stty -ixany, only Control-Q can be used to resume.
The ixoff flag is for transmission in the opposite direction: if the server-side input buffer was in danger of getting overrun, the server would send a Control-S to the terminal to make it stop transmitting until the previous input was processed. On the Linux console, this is obviously not very useful, as both the "server" and the "terminal" are sharing the same physical processor. But the ixon flag is still useful, as you can use it to e.g. pause a scrolling text in order to read it.
The default state for the Linux console seems to be ixon -ixoff -ixany, i.e. Control-S can stop output to the console, and only Control-Q can resume it.
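A short sketch of inspecting and changing these flags on the current terminal:
stty -a | tr ' ' '\n' | grep ix   # shows ixon/-ixoff/-ixany as currently set
stty -ixon                        # Ctrl-S no longer freezes output at all
stty ixon ixany                   # put it back, but let any key (not only Ctrl-Q) resume output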
|
I have Ubuntu 16.04.04 LTS server. I was doing some work in tty1 and then took a lunch break. In the meantime, my desktop monitor went asleep. When I came back, I pressed Esc and my screen awoke and I was at the tty1 prompt, exactly where I left off. However, when I began typing, I noticed that the characters I entered were not displaying at the prompt. It was as if the keyboard was not working, but I could see that the pre-existing text on the screen would flicker with each key press I made. So, I think (A) the physical keyboard connection is not the issue and (B) the key presses are registering somewhere because of the brief flicker.
Just to double-check, I pressed CtrlAltF2 to go to tty2 and I began typing (username & password) and the characters I was pressing on the keyboard were displaying as they should. So, my keyboard seemed to work fine in tty2 (and in tty3, tty4, and so on).
So, I went back to tty1 and, again, the key presses were not displaying. No key combination seemed to work in tty1 except the function keys to go to another terminal.
Because I could not figure out how to make my key presses show up in tty1, I tried shutdown, but I could not enter shutdown in tty1. So I went to tty2 and entered shutdown. In tty2, the command registered and stated the time the computer would shut down. When it was time to shut down, a message displayed stating (I don't remember the exact wording) that there was another process taking place. I tried to CtrlC out of the shutdown, but it was stuck too. So I went to tty3 and tried to shut down there and got stuck again. So I ultimately pushed the power button on my computer for a few seconds and shut down my computer the ugly way. When the computer started back up, the problem was gone and everything seemed to work normally.
What was going on with key presses not displaying (but making the screen flicker) in tty1? Eager to understand this.
| key press not displaying in tty1 |
Pragmatically, if what you have seems to work, and you are not expecting any change in the setup, or it is a one-off hack, then stay with your solution.
However, Linux has an API for rs485 for appropriate hardware, that you can try. Some hardware has a built-in half-duplex mode that will work if you put the serial port in the appropriate state. The ioctls are, for example,
to enable RS485 mode:
#include <linux/serial.h>
struct serial_rs485 rs485conf = {0};
rs485conf.flags |= SER_RS485_ENABLED;
if (ioctl (fd, TIOCSRS485, &rs485conf) < 0) ...or set logical level for RTS pin equal to 1 when sending:
rs485conf.flags |= SER_RS485_RTS_ON_SEND; |
I have an app that expects to do I/O over a full-duplex serial line. That is:
char buf[4];
write(fd, "ping", 4);
read(fd, buf, 4);... expects to end up with whatever four bytes the remote device transmitted in response to the "ping" string.
But I'm running on a half-duplex RS485 line, so every byte that gets transmitted on the serial line is also received on the on the serial line (because they are the same line). So the code snippet above always reads "ping" into the buffer before the remote device transmits anything.
Obviously the host code isn't expecting this.
The best solution I've come up with is to always to a read following a write and verify that the received characters match those that were sent, and then to ignore them.
But is there a better way? Is there a reliable Unix idiom for inhibiting the reception of characters during the time that characters are being transmitted?
(I appreciate that there are lots of subtleties and twisty little mazes in my question. For example, does the UART have a fifo? Is the receive process running in the same thread? Etc. If it was easy, I wouldn't be asking unix.stackexchange! :)
update
I implemented a simple routine that I call after each write() to read() an equivalent # of bytes; it then uses strncmp() to verify that they match. It appears to be robust, but I'm still interested to know if there's a driver level approach that might do this better, or at least differently.
| How to inhibit serial reception during serial writes for a RS485 (half-duplex) serial line |
The conversion of certain keystrokes into a signal is performed at a very low level: it's done by the generic terminal driver in the kernel, when it relays between the terminal emulator process or physical terminal hardware and the program running in the terminal. This driver only understands characters (and by characters, I mean bytes —back when these interfaces were designed, multibyte character sets weren't a thing).
Most keystrokes involving modifiers other than Shift or involving function keys have no corresponding character, so the terminal must send a multi-character escape sequence (beginning with the ASCII escape character). See bash - wrong key sequence bindings with control+alt+space and How do keyboard input and text output work? for more background on this topic.
There are characters corresponding to Ctrl+letter (the ASCII control characters), but not to Super+letter. Meta+C sends the two-character sequence ESC C, and Super+C sends just C by default in most terminals. So you can rebind the signals on Ctrl+letter (and a few punctuation symbols), to a single-byte printable character, and that's about it (also to a non-ASCII byte but that would wreak havoc with UTF-8).
The only solution is to configure your terminal emulator to send Ctrl+C for whatever keystroke you want to cause to trigger SIGINT. For Rxvt, you're on the right track, but ^C inserts the two characters ^ and C, you need the control-c character instead, which you can specify with an octal escape. Furthermore M-c is Meta+C, not Super+C; for Super+C, run xmodmap -pm, note which of mod1 through mod5 is Super, and use the corresponding digit as the modifier character, e.g.
URxvt.keysym.5-c: \003 |
I'd like to swap Ctrl & Super so that I can use the physical super/meta key for copy/paste/select-all/find/etc. without application-specific configuration, which often doesn't exist.
However, I'd like to keep SIGINT, EOF, suspend, and the like on the physical control key.
I've tried setting URxvt.keysym.M-c: ^C, and I'm aware that stty can change it to be e.g. ^K, but as far as I've been able to tell it can't change the modifier key - either to a different keysym or to the keycode directly.
Is this possible?
| Key other than Control to send SIG*? |
There is a readline option to stop it taking up the current stty settings. Add to your ~/.inputrc
set bind-tty-special-chars Off
then you will be able to bind Control-w as you wish.
Interactively, you can try:
$ bind 'set bind-tty-special-chars Off'
$ bind -ps | grep C-w
"\C-w": unix-word-rubout
$ bind -x '"\C-w": date'then typing the character runs the date command, but the stty settings are unaffected.
|
I want to bind C^w to a non-default action in bash, but it requires disabling werase in the terminal. This, unfortunately, affects other programs launched by bash, especially my ssh sessions: when I type C^w there it echoes ^W.
Is there a way to enable some non-default terminal setting only in interactive mode in bash and have it automatically disable/restore it when it runs commands?
| disable terminal werase setting only in interactive mode in bash |
One simple way is to add the line stty -echo to a file in /etc/profile.d where the filename ends in .sh. For example, a file named /etc/profile.d/disable-tty-echo.sh.
This works because, at least on Ubuntu, each /etc/profile.d/*.sh file will be sourced by /etc/profile, and the latter file will be read and executed (along with $HOME/.profile) by any POSIX login shell. This is documented in the man pages for bash, dash, ksh, etc.
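For example, a sketch of such a file (run as root; the exact name only has to end in .sh):
cat > /etc/profile.d/disable-tty-echo.sh <<'EOF'
# turn off local echo for every POSIX login shell that reads /etc/profile
stty -echo 2>/dev/null   # the redirect keeps non-interactive sessions (scp, rsync) quiet
EOF
chmod 644 /etc/profile.d/disable-tty-echo.sh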
Do note that if users are able to set their shell to one that does not read and execute /etc/profile, such as csh, and you need to be absolutely sure that stty -echo will be set, then you will also need to restrict the allowed login shells by editing /etc/shells.
|
I would like to disable TTY echoing for all users that are connecting via SSH.
The Linux command stty -echo does the trick, however instead of users having to execute this command on their TTY I would like to give them a TTY that by default has echoing disabled from the start.
How can I achieve this? Thanks in advance!
| force stty -echo for all ssh users |
Because that's what reset does.
Its manual tells you that it "sets the terminal modes to 'sane' values".
Sanity in this case involves turning off CLOCAL.
|
I have a connection from my debian computer to a device with debian linux on it.
If I type the reset command on the serial command line, clocal will be set to -clocal.
I searched the internet, but I couldn't find out why. The problem is that some commands, like sudo -i, hang if -clocal is set but work if clocal is set.
Is there a reason why reset sets -clocal?
I tried this with picocom, screen, TeraTerm and putty.
| reset command over serial connection setting clocal to -clocal |
On the "monitor keyboard" stty -a output, there is -isig -icanon -iexten, while the SSH session has these features enabled (no - sign in the front).
And the man stty describes these as follows:[-]icanon
enable special characters: erase, kill, werase, rprnt
[-]iexten
enable non-POSIX special characters
[-]isig
enable interrupt, quit, and suspend special charactersSo, with -isig, the Control-C, Control-\ and Control-Z special functions are switched off. Likewise, -icanon indicates the erase, kill, werase and rprnt functions won't take effect either. You can fix this with stty isig icanon on the monitor keyboard session.
This might be because you are starting /bin/sh directly in /etc/inittab without using something like a getty process to initialize the TTY device settings to well-known values at the beginning of the login process.
If you want to run a shell directly from inittab without a login requirement, you should do any necessary TTY initialization yourself in /etc/init.d/rcS, for example:
for i in 1 2 3 4 5 6; do
stty -F /dev/tty$i isig icanon
done
On a SSH session, pseudo-TTY devices (PTYs for short) are used, and sshd takes care of their initialization automatically.
/dev/console as the result of the tty command output is a bit suspicious. On the x86 architecture at least, the /dev/console is essentially a compatibility alias for whatever is the currently active console device: it might normally connect to /dev/tty0 (i.e. the currently-active KVM virtual console) but it might point to a serial port like /dev/ttyS0 if a console= kernel boot option was used.
Does your kernel have all the following configuration options enabled?
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
either CONFIG_VGA_CONSOLE=y or CONFIG_FRAMEBUFFER_CONSOLE=y and an appropriate framebuffer console driver, like CONFIG_FB_VESA=y for classic BIOS or CONFIG_FB_EFI=y for UEFI, and/or maybe also CONFIG_DRM_FBDEV_EMULATION=y if using newer Direct Rendering GPU drivers on the console.
Are the console device nodes set up correctly? On a Debian 12 system, ls -l /dev/console /dev/tty[0-9] has the following output:
crw--w---- 1 root tty 5, 1 Nov 10 02:26 /dev/console
crw--w---- 1 root tty 4, 0 Nov 10 02:25 /dev/tty0
crw--w---- 1 root tty 4, 1 Jan 30 13:37 /dev/tty1
crw--w---- 1 root tty 4, 2 Nov 10 02:25 /dev/tty2
crw--w---- 1 root tty 4, 3 Nov 10 02:25 /dev/tty3
crw--w---- 1 root tty 4, 4 Nov 10 02:25 /dev/tty4
crw--w---- 1 root tty 4, 5 Nov 10 02:25 /dev/tty5
crw--w---- 1 root tty 4, 6 Nov 10 02:25 /dev/tty6
crw--w---- 1 root tty 4, 7 Nov 10 02:26 /dev/tty7
crw--w---- 1 root tty 4, 8 Nov 10 02:25 /dev/tty8
crw--w---- 1 root tty 4, 9 Nov 10 02:25 /dev/tty9
The permissions and ownerships may vary according to current logins & how your distribution chooses to handle them, but the major/minor device numbers (the two digits separated by comma and whitespace) should be exactly like this.
|
I customized the root file system using Busybox, version 1.36.1. I don't know why, when I run ping with an IP on the monitor and keyboard, it cannot be terminated by ctrl+c. However, when I connect to the device through SSH, the ping can be terminated by ctrl+c. The same goes for ctrl+u.
I have looked at similar problems, but they did not solve mine.
By the way, my system does not have a desktop environment. My processor is Intel x86_64 bit.
#1
#stty -a (SSH connection)
speed 38400 baud; rows 41; columns 143; line = 0;
intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z;
rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extproc
#stty -a (monitor keyboard)
speed 38400 baud; rows 37; columns 100; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke -flusho -extproc
#2
#cat /etc/inittab (old)
::sysinit:/etc/init.d/rcS
::respawn:-/bin/sh
::ctrlaltdel:/bin/umount -a -r
::shutdown:/bin/umount -a -r
::shutdown:/sbin/swapoff -a
#cat /etc/inittab (new)
::sysinit:/etc/init.d/rcS
tty1::respawn:/sbin/getty 38400 tty1
tty2::respawn:/sbin/getty 38400 tty2
tty3::respawn:/sbin/getty 38400 tty3
tty4::respawn:/sbin/getty 38400 tty4
tty5::respawn:/sbin/getty 38400 tty5
tty6::respawn:/sbin/getty 38400 tty6
::ctrlaltdel:/bin/umount -a -r
::shutdown:/bin/umount -a -r
::shutdown:/sbin/swapoff -a
#3
#echo $SHELL (SSH connection)
/bin/bash
#echo $SHELL (monitor keyboard)
/bin/sh
#4
#echo $TERM (SSH connection)
xterm
#echo $TERM (monitor keyboard)
xterm
#5
#tty (SSH connection)
/dev/pts/0
#tty (monitor keyboard)
/dev/console
#6
#ls -l /dev/console /dev/tty[0-9]
crw-rw---- 1 root root 5, 1 Jan 4 04:31 /dev/console
crw-rw---- 1 root root 4, 0 Jan 4 04:31 /dev/tty0
crw-rw---- 1 root root 4, 1 Jan 4 04:32 /dev/tty1
crw-rw---- 1 root root 4, 2 Jan 4 04:31 /dev/tty2
crw-rw---- 1 root root 4, 3 Jan 4 04:31 /dev/tty3
crw-rw---- 1 root root 4, 4 Jan 4 04:31 /dev/tty4
crw-rw---- 1 root root 4, 5 Jan 4 04:31 /dev/tty5
crw-rw---- 1 root root 4, 6 Jan 4 04:31 /dev/tty6
crw-rw---- 1 root root 4, 7 Jan 4 04:31 /dev/tty7
crw-rw---- 1 root root 4, 8 Jan 4 04:31 /dev/tty8
crw-rw---- 1 root root 4, 9 Jan 4 04:31 /dev/tty9 | Customized Linux systems cannot use crtl+c to terminate ping commands? |
Your stty shows -opost which switches off all output processing, so the onlcr has no effect.
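For instance, in the setup described in the question, re-enabling output processing on the side you write to should make the difference visible (a sketch):
stty -F /dev/ttyUSB1 opost onlcr
echo test > /dev/ttyUSB1   # the cat -v on the other port should now show test^M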
|
TLDR - I have onlcr configured in my terminal, but I don't see the \r getting added.
If I plug two FTDI serial converters together, and plug them both into my computer, I get two ports called /dev/ttyUSB0 and /dev/ttyUSB1
If I open them both with picocom in different terminals, I can make sure they are connected, properly, by sending messages back and forth, and if I quit using C-A C-Q it leaves the ports configured as follows:
$ stty -F /dev/ttyUSB0 -a
speed 9600 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>;
swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V;
discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc
-ixany -imaxbel -iutf8
-opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl
echoke -flusho -extproc
$ stty -F /dev/ttyUSB1 -a
speed 9600 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O;
min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel -iutf8
-opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extprocNow if I open two terminals for listening to the two serial ports:
$ cat -v /dev/ttyUSB0
$ cat -v /dev/ttyUSB1(actually I am only going to need one of them for this specific experiment, but it doesn't hurt to have both)
and then in a third terminal, send a single line like this:
$ echo this is a test > /dev/ttyUSB1and then send one more line to prove that I am printing carriage returns:
$ echo -e 'this is another test\r' > /dev/ttyUSB1then this is what I see:
$ cat -v /dev/ttyUSB0
this is a test
this is another test^M
Why don't I see ^M in line #1, and ^M^M in line #2?
icrnl is off on both devices, so it shouldn't be converting it back right?
Why is (or isn't) this happening?
| why doesn't the stty setting onlcr add a carriage return? |
Any process can use tcsetattr to change the terminal driver characteristics (see man -s 3 termios for the complete 608-line story). The terminal itself only preserves the last state it saw -- it keeps no other history.
The polite usage is to nest any changes: any process that changes them should first read and save the current set, alter only those it wants to, and restore the original set it started with before it exits (including signal handlers for any terminations that it can).
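The shell-level analogue of that discipline (a sketch using stty rather than the termios calls themselves):
saved=$(stty -g)                    # snapshot every current setting in a restorable form
trap 'stty "$saved"' EXIT INT TERM  # put them back however the script ends
stty -icanon -echo                  # change only what you need
# ... do the single-character, no-echo work here ...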
The switch from line-at-a-time (canonical input) to single-character is a one-bit change in the c_lflag member of the termios structure, and there is a definition ICANON for the bitmask required. Likewise, the ECHO or not is a one-bit flip.
|
From an answer to one of my previous questions I learned that shells (such as bash) have the ability not to follow the rules of terminal input processing set by stty(1). In particular, they can operate in raw mode while there is a setting enabled that turns on canonical mode (stty icanon) (with line discipline editing rules, and so on).
In this regard:
Is it correct to say that every running process (process group) can configure its own settings with respect to its terminal? In other words, there is no system-wide set of settings for a tty instance; it is all individual to every single process (process group). (So bash explicitly sets up raw mode before starting to read a command name.)
What does stty(1) exactly affect? My guess is this is a set of user preferences that is implemented by the terminal emulator with respect to the terminal used, which is a pty master side.
When there are two sides communicating, bash on the slave one and the terminal emulator on the master, and they have set different tty configs (bash: "send me characters immediately, no line editing", the emulator: "send characters on EOL, line editing please"), why do the rules of bash win? What circumstances have an effect on such priority?
If we run some cat command through bash it will obey the stty(1) settings. So does it mean bash explicitly defaults to these before executing the program, or are they "inherited" by cat in some other way?
I just tested this at my shell prompt and got similar results. However, closer examination shows an error.
ctrl+/ (which you might think is ctrl-?) actually produces ctrl-_ which is typically bound to "undo". If you want ctrl-? you need to press ctrl+shift+?. You can test this by typing ctrl+v ctrl+/
Note that the stty command affects terminal editing in "cooked" mode where the undo key doesn't work. So my guess is that you are using a shell like bash that implements its own command line editing, which might honor the stty settings but doesn't have to and adds a lot of fancier editing keys as well.
|
Among the stty -a settings on my machine there are such as erase = ^?; kill = ^U;. The man page reports that
erase CHAR
CHAR will erase the last character typed

kill CHAR
CHAR will erase the current line

But I found out the corresponding keyboard shortcuts effectively do the same thing, i.e. when I type boo at the terminal and then press <ctrl>+U or <ctrl>+? the line would be erased completely in both cases.
So why does the erase character not erase only the last character?
| Why erase and kill characters do the same thing? |
; just separates commands so they are run one after the other.
Here, if you enter that at the prompt of an interactive shell, the terminal device local echo will have been disabled and reenabled by the time you get back to the prompt, as long as you exit cat normally (with Ctrl+D twice, or on an empty line).
If cat is interrupted with SIGINT or SIGQUIT (if you press Ctrl+C or Ctrl+\), shells like bash cancel the whole command line, so the stty echo command will not be run, and the local echo won't be reenabled.
In the zsh shell, you could do instead:
STTY=-echo cat -vt

Which is special syntax to change some tty settings only for the duration of a command. That way, the tty settings will be restored even if cat is interrupted.
Though zsh always restores the tty local echo by itself anyway.
In bash, you could do something similar with a helper function:
with_different_tty_settings() (
tty_settings=$(stty -g) # save current settings
trap 'stty "$tty_settings"' INT EXIT QUIT
set -o noglob
local IFS
stty $STTY # split $STTY on default IFS characters
"$@"
)

And call cat as:

STTY=-echo with_different_tty_settings cat -vt

(contrary to zsh's STTY, it doesn't handle job suspension (with Ctrl+Z for instance) though).
If you change it to STTY='-echo -isig', you'll be able to see what character Ctrl+C sends.
With STTY='raw -echo', you'd be able to see all characters (and unmodified by the tty line discipline, and as soon as you enter them), but then you wouldn't be able to terminate cat.
But you could do STTY='raw -echo time 30 min 0' for cat to exit after 3 seconds of inactivity.
|
stty -echo; cat -v; stty echo is a technique to see what key sequence you send to the terminal. But I just wonder how this command works. When I remove stty -echo it prints what you type twice. I know stty -echo disables the terminal echoing what you type. More specifically, my question is: why can I use ';' to connect the commands so that echo is disabled first and then re-enabled after the cat -v command? Or does ';' have any bearing on this at all?
| How does `stty -echo; cat -v; stty echo` work to echo special keys? |
Upgrade to a newer kernel. There were issues with this driver in earlier kernels.
|
I have a sama5d36 device running Debian jessie (kernel 4.1.10) with a DMA USART. To get the DMA USART to output correctly I had to turn off ECHO and ONLCR.
stty -F /dev/ttyS2 -echo -onlcr speed 115200

If I do a test where I send a bunch of bytes, I will receive 2048 bytes and then it stops receiving until I restart.
cat testLines > /dev/ttyS2
cat < /dev/ttyS2

Here is the output of /proc/tty/driver/atmel_serial
2: uart:ATMEL_SERIAL mmio:0xF0020000 irq:31 tx:2185 rx:2048 DSR|CD|RI

Here is my stty output (stty -F /dev/ttyS2 -a):
atmel_usart f0020000.serial: using dma0chan4 for rx DMA transfers
atmel_usart f0020000.serial: using dma0chan5 for tx DMA transfers
speed 115200 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

The DMA buffer for atmel_serial is 512 bytes.
Any insight?
Update: Further playing has demonstrated that the serial will write more than 2048 bytes but it freezes after 2048 bytes and then will only write once 2048 bytes have been written. Looking at atmel_serial.c in the kernel it looks like the ring buffer is set for 1024. So I am still confused why 2048 bytes is significant.
| serial DMA pausing after 2048 bytes |
bind -x

expects a shell command. Just do
bind '"\C-j": backward-kill-word' |
bind -x '"\C-j": backward-kill-word'

says,

backward-kill-word: command not found

As mikeserv quoted here, I am able to do backward-kill-word using Ctrl-w. However, this is the same key I use in emacs for select/cut text, causing confusion.
I am trying to bind backward-kill-word to a different key sequence, C-j and got this error command not found.
| Bind C-j to backward-kill-word, says command not found |
Simplified, it goes more or less like this:
The kernel logs messages (using the printk() function) to a ring buffer in kernel space. These messages are made available to user-space applications in two ways: via the /proc/kmsg file (provided that /proc is mounted), and via the sys_syslog syscall.
There are two main applications that read (and, to some extent, can control) the kernel's ring buffer: dmesg(1) and klogd(8). The former is intended to be run on demand by users, to print the contents of the ring buffer. The latter is a daemon that reads the messages from /proc/kmsg (or calls sys_syslog, if /proc is not mounted) and sends them to syslogd(8), or to the console. That covers the kernel side.
In user space, there's syslogd(8). This is a daemon that listens on a number of UNIX domain sockets (mainly /dev/log, but others can be configured too), and optionally to the UDP port 514 for messages. It also receives messages from klogd(8) (syslogd(8) doesn't care about /proc/kmsg). It then writes these messages to some files in /var/log, or to named pipes, or sends them to some remote hosts (via the syslog protocol, on UDP port 514), as configured in /etc/syslog.conf.
User-space applications normally use the libc function syslog(3) to log messages. libc sends these messages to the UNIX domain socket /dev/log (where they are read by syslogd(8)), but if an application is chroot(2)-ed the messages might end up being written to other sockets, f.i. to /var/named/dev/log. It is, of course, essential for the applications sending these logs and syslogd(8) to agree on the location of these sockets. For these reason syslogd(8) can be configured to listen to additional sockets aside from the standard /dev/log.
Finally, the syslog protocol is just a datagram protocol. Nothing stops an application from sending syslog datagrams to any UNIX domain socket (provided that its credentials allows it to open the socket), bypassing the syslog(3) function in libc completely. If the datagrams are correctly formatted syslogd(8) can use them as if the messages were sent through syslog(3).
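For example, the util-linux logger(1) can exercise both paths from the shell; the -u option (to point it at a non-default socket) and the example socket path are just illustrations:

logger -p user.notice "hello via the normal libc /dev/log path"
logger -u /var/named/dev/log -p daemon.info -t mydaemon "datagram written straight to a chroot's log socket"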
Of course, the above covers only the "classic" logging theory. Other daemons (such as rsyslog and syslog-ng, as you mention) can replace the plain syslogd(8), and do all sorts of nifty things, like send messages to remote hosts via encrypted TCP connections, provide high resolution timestamps, and so on. And there's also systemd, that is slowly phagocytosing the UNIX part of Linux. systemd has its own logging mechanisms, but that story would have to be told by somebody else. :)
Differences with the *BSD world:
On *BSD there is no klogd(8), and /proc either doesn't exist (on OpenBSD) or is mostly obsolete (on FreeBSD and NetBSD). syslogd(8) reads kernel messages from the character device /dev/klog, and dmesg(1) uses /dev/kmem to decode kernel names. Only OpenBSD has a /dev/log. FreeBSD uses two UNIX domain sockets /var/run/log and /var/run/logpriv instead, and NetBSD has a /var/run/log.
|
As I understand it, the Linux kernel logs to the /proc/kmsg file (mostly hardware-related messages) and the /dev/log socket? Anywhere else? Are other applications also able to send messages to /proc/kmsg or /dev/log? Last but not least, am I correct that it is the syslog daemon (rsyslog, syslog-ng) which checks messages from those two places and then distributes them to various files like /var/log/messages or /var/log/kern.log, or even to a central syslog server?
| Understand logging in Linux |
Truncating a logfile actually works because the writers open the files for writing using O_APPEND.
From the open(2) man page:O_APPEND: The file is opened in append mode. Before each write(2), the
file offset is positioned at the end of the file, as if with lseek(2).
The modification of the file offset and the write operation are
performed as a single atomic step.

As mentioned, the operation is atomic, so whenever a write is issued, it will append to the current offset matching the end of file, not the one saved before the previous write operation completed.
This makes an append work after a truncate operation, writing the next log line to the beginning of the file again, without the need to reopen the file.
(The same feature of O_APPEND also makes it possible to have multiple writers appending to the same file, without clobbering each other's updates.)
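You can see the effect from a shell, because the >> redirection also opens its target with O_APPEND (a rough demo; the file name is arbitrary):

( for i in 1 2 3 4 5; do echo "line $i"; sleep 2; done ) >> demo.log &
sleep 3; : > demo.log   # truncate while the writer still holds the open fd
sleep 6; cat demo.log   # the later lines start again at offset 0, with no hole of NUL bytes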
The loggers also write a log line using a single write(2) operation, to prevent a log line from being broken in two during a truncate or concurrent write operation.
Note that loggers like syslog, syslog-ng or rsyslog typically don't need to use copytruncate since they have support to reopen the log files, usually by sending them a SIGHUP. logrotate's support for copytruncate exists to cater for other loggers which typically append to logfiles but that don't necessarily have a good way to reopen the logfile (so rotation by renaming doesn't work in those cases.)
Please note also that copytruncate has an inherent race condition, in that it's possible that the writer will append a line to the logfile just after logrotate finished the copy and before it has issued the truncate operation. That race condition would cause it to lose those lines of log forever. That's why rotating logs using copytruncate is usually not recommended, unless it's the only possible way to do it.
|
we would like to understand copytruncate before rotating the file using logrotate with below configuration:
/app/syslog-ng/custom/output/all_devices.log {
size 200M
copytruncate
dateext
dateformat -%Y%m%d-%s
rotate 365
sharedscripts
compress
postrotate
/app/syslog-ng/sbin/syslog-ng-ctl reload
endscript
}RHEL 7.x, 8GB RAM, 4 VCpu
Question:
How does logrotate truncate the file when syslog-ng has already opened the file for logging? Isn't that contention over the resource? Does syslog-ng close the file immediately when it has nothing to log?
| How copytruncate actually works? |
We had the same problem on Debian 8.1, but fixed it by changing our syslog-ng local configuration to use unix-dgram instead of unix-socket.
I was clued in by this comment at RedHat Bugzilla:

Note about custom syslog-ng configurations files
People with custom syslog-ng configurations will most likely face
upgrade problems due to the unix socket type mismatch between systemd
and syslog-ng old configuration file:

systemd creates /dev/log as unix-dgram
syslog-ng < 3.2.5 expected /dev/log to be unix-stream (configuration file)

If you use 'unix-stream ("/dev/log")' in one of your log messages
sources, you will need to manually change it to 'unix-dgram
("/dev/log")'. |
I have a freshly installed CentOS 7 system on which I have installed syslog-ng from the EPEL repositories.
~: yum list | grep syslog
syslog-ng.x86_64 3.5.6-1.el7 @epel

When I try to start it via systemctl, it fails as follows:
/usr/lib/systemd/system: systemctl start syslog-ng
Job for syslog-ng.service failed. See 'systemctl status syslog-ng.service' and 'journalctl -xn' for details.

When looking into the journals, we can see that there is a dependency on the socket, which "starts" fine, but that the process returns an error about the arguments being incorrect, as shown below:
May 07 17:26:15 superserver.company.corp systemd[1]: Starting Syslog Socket.
May 07 17:26:15 superserver.company.corp systemd[1]: Listening on Syslog Socket.
May 07 17:26:15 superserver.company.corp systemd[1]: Starting System Logger Daemon...
May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT
May 07 17:26:15 superserver.company.corp systemd[1]: Failed to start System Logger Daemon.
May 07 17:26:15 superserver.company.corp systemd[1]: Unit syslog-ng.service entered failed state.
May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service holdoff time over, scheduling restart.
May 07 17:26:15 superserver.company.corp systemd[1]: Stopping System Logger Daemon...
May 07 17:26:15 superserver.company.corp systemd[1]: Starting System Logger Daemon...
May 07 17:26:15 superserver.company.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENTIf we look into the service configuration file, we can confirm the dependency on the socket and the command that is used to start the service.
[Service]
Type=notify
Sockets=syslog.socket
ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid

The problem is that if I run the above-mentioned command, it starts up just fine and works as expected.
My question is: what is the difference between me running the program startup command and systemd starting up the same program? What can I do to find out what is actually wrong with it?

Edit 1
I enabled the debug output as suggested by Raymond in the answers and the output doesn't teach us much more.
May 08 10:31:29 server.corp systemd[1]: Starting System Logger Daemon...
May 08 10:31:29 server.corp systemd[1]: About to execute: /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
May 08 10:31:29 server.corp systemd[1]: Forked /usr/sbin/syslog-ng as 3121
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service changed dead -> start
May 08 10:31:29 server.corp systemd[1]: Set up jobs progress timerfd.
May 08 10:31:29 server.corp systemd[1]: Set up idle_pipe watch.
May 08 10:31:29 server.corp systemd[3121]: Executing: /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
May 08 10:31:29 server.corp systemd[1]: Got notification message for unit syslog-ng.service
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: Got message
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: got STATUS=Starting up... (Fri May 8 10:31:29 2015
May 08 10:31:29 server.corp systemd[1]: Got notification message for unit syslog-ng.service
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: Got message
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: got STATUS=Starting up... (Fri May 8 10:31:29 2015
May 08 10:31:29 server.corp systemd[1]: Received SIGCHLD from PID 3121 (syslog-ng).
May 08 10:31:29 server.corp systemd[1]: Child 3121 (syslog-ng) died (code=exited, status=2/INVALIDARGUMENT)
May 08 10:31:29 server.corp systemd[1]: Child 3121 belongs to syslog-ng.service
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT
May 08 10:31:29 server.corp systemd[1]: syslog-ng.service changed start -> failed
May 08 10:31:29 server.corp systemd[1]: Job syslog-ng.service/start finished, result=failed
May 08 10:31:29 server.corp systemd[1]: Failed to start System Logger Daemon.

There are a few warnings that are displayed at the start of the syslog-ng processes (nothing that keeps it from starting properly) so I redirected all output to /dev/null, but the end result is the same.
Also, as a side note, my entire system does not boot anymore if systemd is unable to syslog. This can be disabled with kernel options to log to kmesg.
| syslog-ng service not starting with systemd but command works fine |
You have a lot going on there... My best answer to this is to explain simply how I have seen session logging done in the past. Hopefully that will give you some options to explore.

1. As you have already mentioned, pulling the bash history from the user accounts. This only works after the session has ended. Not really the best option, but it's easy and reliable.

2. Using a virtual terminal such as the screen command in Linux. This is not very robust, as it starts on user login; however, if users know it's being logged they can still kill the service. This works well in an end-user scenario. End users generally are trapped in a specified area anyway and don't have the knowledge to get around this.

3. The pam_tty_audit module & aureport --tty. This is a tool that allows you to specify which users get logged and lets you specify the storage location of said logs... as always, keep the logs off of the host server. I have the session logs on our SFTP server being copied off to a central logging server, and a local cronjob moving them to a non-shared location for archive.

This is built in for RedHat and Fedora; however, you can install it on Debian and Ubuntu. It's part of the auditd package, I believe. Here is some documentation on auditd and the required configuration changes to pam (in /etc/pam.d/system-auth), specifying a single user here (root):
session required pam_tty_audit.so disable=* enable=rootExample output of aureport --tty:
TTY Report
===============================================
# date time event auid term sess comm data
===============================================
1. 1/29/2014 00:08:52 122249 0000 ? 4686960298 bash "ls -la",<ret> |
I installed syslog-ng to use on my desktop (Gentoo 64bit, upgraded to systemd i.e. was OpenRC before, with Openbox and Slim only) with my normal user to log all commands I type in the shell (bash first, then eventually zsh). I've explored different solutions, and different ways of setting this up, old and new and often this is achieved using the .bash_history file. I'm trying to implement this solution from a few years ago, with reliance on the companion trap. First I've modified .bashrc and set some history variables because the solution relies on formatting the history data, then making sure it is saved to its file, then pushing it to the log messaging system with logger in a function called in the shell environment. So first the variables:
export HISTCONTROL=
export HISTFILE=$HOME/.bash_history
export HISTFILESIZE=2000
export HISTIGNORE=
export HISTSIZE=1000
export HISTTIMEFORMAT="%a %b %Y %T %z "typeset -r HISTCONTROL
typeset -r HISTFILE
typeset -r HISTFILESIZE
typeset -r HISTIGNORE
typeset -r HISTSIZE
typeset -r HISTTIMEFORMATshopt -s cmdhist
shopt -s histappendPROMPT_COMMAND="history -a"
typeset -r PROMPT_COMMAND

ex. history command output with timestamps
860 Tue Jan 2014 10:33:50 -0900 exit
861 Tue Jan 2014 10:33:56 -0900 ls
862 Tue Jan 2014 10:33:58 -0900 history

Then, as explained in the linked article, you must add this trap which uses logger in .bashrc (there is reference to /etc/profile, but here I want this for my regular user only and ~/.profile is not sourced by something like lxterminal):
function log2syslog
{
declare command
command=$(fc -ln -0)
logger -p local1.notice -t bash -i -- $USER : $command
}
trap log2syslog DEBUG

A single long hyphen was (mistakenly?) used in the original doc, followed by a space and $USER.
I've replaced my original syslog-ng configuration file. I've tried the suggested config from Arch, but after some warnings, I've configured it like so explained for Gentoo which is what the Arch doc is based on anyway:
@version: 3.4
options {
chain_hostnames(no); # The default action of syslog-ng is to log a STATS line
# to the file every 10 minutes. That's pretty ugly after a while.
# Change it to every 12 hours so you get a nice daily update of
# how many messages syslog-ng missed (0).
stats_freq(43200);
};source src {
unix-dgram("/dev/log" max-connections(256));
internal();
};source kernsrc { file("/proc/kmsg"); };# define destinations
destination authlog { file("/var/log/auth.log"); };
destination syslog { file("/var/log/syslog"); };
destination cron { file("/var/log/cron.log"); };
destination daemon { file("/var/log/daemon.log"); };
destination kern { file("/var/log/kern.log"); };
destination lpr { file("/var/log/lpr.log"); };
destination user { file("/var/log/user.log"); };
destination mail { file("/var/log/mail.log"); };destination mailinfo { file("/var/log/mail.info"); };
destination mailwarn { file("/var/log/mail.warn"); };
destination mailerr { file("/var/log/mail.err"); };destination newscrit { file("/var/log/news/news.crit"); };
destination newserr { file("/var/log/news/news.err"); };
destination newsnotice { file("/var/log/news/news.notice"); };destination debug { file("/var/log/debug"); };
destination messages { file("/var/log/messages"); };
destination console { usertty("root"); };# By default messages are logged to tty12...
destination console_all { file("/dev/tty12"); };# ...if you intend to use /dev/console for programs like xconsole
# you can comment out the destination line above that references /dev/tty12
# and uncomment the line below.
#destination console_all { file("/dev/console"); };# create filters
filter f_authpriv { facility(auth, authpriv); };
filter f_syslog { not facility(authpriv, mail); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_kern { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_user { facility(user); };
filter f_debug { not facility(auth, authpriv, news, mail); };
filter f_messages { level(info..warn)
and not facility(auth, authpriv, mail, news); };
filter f_emergency { level(emerg); };filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_crit { level(crit); };
filter f_err { level(err); };
filter f_failed { message("failed"); };
filter f_denied { message("denied"); };# connect filter and destination
log { source(src); filter(f_authpriv); destination(authlog); };
log { source(src); filter(f_syslog); destination(syslog); };
log { source(src); filter(f_cron); destination(cron); };
log { source(src); filter(f_daemon); destination(daemon); };
log { source(kernsrc); filter(f_kern); destination(kern); };
log { source(src); filter(f_lpr); destination(lpr); };
log { source(src); filter(f_mail); destination(mail); };
log { source(src); filter(f_user); destination(user); };
log { source(src); filter(f_mail); filter(f_info); destination(mailinfo); };
log { source(src); filter(f_mail); filter(f_warn); destination(mailwarn); };
log { source(src); filter(f_mail); filter(f_err); destination(mailerr); };log { source(src); filter(f_debug); destination(debug); };
log { source(src); filter(f_messages); destination(messages); };
log { source(src); filter(f_emergency); destination(console); };# default log
log { source(src); destination(console_all); };Of note is the comment from Arch wiki about the unix-stream reference misbehaving and prohibiting loading syslog-ng at startup. Changing the reference to unix-dgram takes care of that is basically the only change from the model used, except for providing a version number on the first line. After that you can do systemctl enable syslog-ng to have that available at boot.
So it is up and running manually here:
# systemctl status syslog-ng
syslog-ng.service - System Logger Daemon
Loaded: loaded (/usr/lib64/systemd/system/syslog-ng.service; disabled)
Active: active (running) since Tue 2014-01-28 20:23:36 EST; 1s ago
Docs: man:syslog-ng(8)
Main PID: 9238 (syslog-ng)
CGroup: /system.slice/syslog-ng.service
└─9238 /usr/sbin/syslog-ng -F

Jan 28 20:23:36 gentoouser3x86_64 systemd[1]: Starting System Logger Daemon...
Jan 28 20:23:36 gentoouser3x86_64 systemd[1]: Started System Logger Daemon.And I get the desired basic ouput in /var/log/messages:
Jan 28 20:42:00 gentoouser3x86_64 bash[9878]: myuser : shopt
Jan 28 20:42:04 gentoouser3x86_64 bash[9880]: myuser : su -
...
Jan 29 03:30:58 gentoouser3x86_64 bash[4386]: myuser : ls
Jan 29 03:30:58 gentoouser3x86_64 bash[4389]: myuser : ls <--- duplicate
Jan 29 03:30:58 gentoouser3x86_64 bash[4391]: myuser : ls <--- entries
Jan 29 04:36:31 gentoouser3x86_64 bash[4491]: myuser : cat .bashrc
Jan 29 04:37:14 gentoouser3x86_64 bash[4495]: myuser : cat .bashrc <---
Jan 29 04:37:14 gentoouser3x86_64 bash[4497]: myuser : cat .bashrc <---
Jan 29 04:37:35 gentoouser3x86_64 bash[4500]: myuser : nedit .bashrc
Jan 29 04:37:35 gentoouser3x86_64 bash[4503]: myuser : nedit .bashrc <---
Jan 29 04:37:35 gentoouser3x86_64 bash[4505]: myuser : nedit .bashrc <---Or, if I disable syslog-ng with systemctl stop syslog-ng, I get the very same output from the journald binary log using journalctl -f (or journalctl -f -o verbose for the details) because systemd "takes over" in that case. Restarting syslog-ng and/or its socket reclaims the output immediately and sends it to its assorted files specified in its configuration.
Questions

Whether I use syslog-ng or journald, I get many duplicate lines in the logs whereas the commands were only executed once. Listing the contents of a directory, for instance, may show ls 2-3 times in the logs when I use many terminal windows. In particular, pressing enter in the CLI seems to echo the last command into the log. Why? (Is it because the variable in the trap is still set to the last line of the history file? If so, how can this be remedied?)

The main source link indicates that since version 4.1, bash can write to syslog directly... the changelog says: "There is a new configuration option (in config-top.h) that forces bash to forward all history entries to syslog." So is the trap function used here still useful or is it obsolete? Is there a more modern/elegant way of doing this? Is that >4.1 option exposed somewhere or do you need to recompile bash to do that? What is it?
Aside from built-in options that are native to bash, can we expect implementing a similar solution for zsh? Or again, is there a better and more integrated way of doing this?
Is there lots of overhead generated from sending all the commands to the logs, and are journald and syslog-ng equal in this respect? | Log every command typed in any shell: output (from logger function to syslog-ng/journald) contains duplicate entries for commands? |
BEFORE:
SERVER:/etc/syslog-ng # tail -3 syslog-ng.conf
#
#
log { source(src); destination(/var/log/messages); };
SERVER:/etc/syslog-ng #

EDIT THE syslog-ng.conf FILE:

vi /etc/syslog-ng/syslog-ng.conf

AFTER:
SERVER:/etc/syslog-ng # tail -3 syslog-ng.conf
#log { source(src); destination(/var/log/messages); };
filter heartbeat_filter { not match("PFILTER-DROP") and not match("DST=192.168.202.255") and not match("PROTO=UDP"); };
log { source(src); filter(heartbeat_filter); destination(/var/log/messages); };
SERVER:/etc/syslog-ng #

RESTART SYSLOG-NG
/etc/init.d/syslog restart # or whatever you use to restart syslog-ng
# now check

ROTATE IF NEEDED
logrotate /etc/logrotate.conf |
I need to exclude a given line in the messages file:
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.4 DST=192.168.202.255 LEN=238 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=32776 DPT=705 LEN=218
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.6 DST=192.168.202.255 LEN=183 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=32770 DPT=700 LEN=163
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.8 DST=192.168.202.255 LEN=176 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=32768 DPT=714 LEN=156
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.10 DST=192.168.202.255 LEN=175 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=33628 DPT=715 LEN=155
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.30 DST=192.168.202.255 LEN=185 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=32770 DPT=713 LEN=165
Oct 25 04:09:23 SERVERNAME PFILTER-DROP: IN=ifeth4 OUT= MAC=ff:ff:ff:ff:ff:ff:AA:AA:AA:AA:AA:AA:AA:AA SRC=192.168.202.34 DST=192.168.202.255 LEN=237 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=32781 DPT=704 LEN=217 they are afaik heartbeat udp messages, but they aren't needed in the logs.
# rpm -qa | grep -i syslog-ng
Security.syslog-ng-1.6.8.0-1
# uname -a
Linux SERVERNAME 2.6.5-7.325-bigsmp #1 SMP Tue Jan 18 23:36:49 UTC 2011 i686 i686 i386 GNU/Linux
# cat /etc/SuSE-release
SUSE LINUX Enterprise Server 9 (i586)
VERSION = 9
PATCHLEVEL = 4Q: How can I exclude these kind of messages from the /var/log/messages?
| How to exclude given lines in syslog-ng? |
Change the name of the executable (note that that also affects PAM configuration).
ln /path/to/sshd /path/to/sshd-whatever

Start as /path/to/sshd-whatever. And define PAM configuration in /etc/pam.d/sshd-whatever. Log entries will show as sshd-whatever instead of sshd.
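On the syslog-ng side, the renamed daemon can then be matched on its program name; a hedged sketch (the filter/destination names are made up, and the source name must be whatever your config already defines):

filter f_sshd_jail { program("sshd-whatever"); };
destination d_sshd_jail { file("/var/log/sshd-jail.log"); };
log { source(src); filter(f_sshd_jail); destination(d_sshd_jail); };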
|
Is there a common way of distinguishing between the messages of multiple processes in syslog-ng beside setting different facilities?
+1 if filtering and therefore logging in different files would be possible.
I have a system setup with two running sshd instances. One is running in a chrooted environment. Since syslog is used, all messages end up in the same logfile.
One possibility would be to change the facility of the jailed sshd to something like local0, but I wonder if there is some 'cleaner' way to do this.
Installing other syslog daemon, for example rsyslog is not an option here.
This question is somehow related to:https://stackoverflow.com/questions/20010873/syslog-process-specific-priority
and
syslog: process specific priority | Separate messages of multiple sshds in syslog-ng |
It's probably related to this change in 3.2:

syslog-ng traditionally expected an optional hostname field even
when a syslog message is received on a local transport (e.g.
/dev/log). However no UNIX version is known to include this
field. This caused problems when the application creating the log
message has a space in its program name field. This behaviour has
been changed for the unix-stream/unix-dgram/pipe drivers if the
config version is 3.2 and can be restored by using an explicit
'expect-hostname' flag for the specific source.

You receive the warning because you use unix-stream("/dev/log"); in your source. If you don't experience any problems with your local logs, there is nothing else to do except changing the first line to @version: 3.2
If your distro adds the hostname to log messages coming from /dev/log (which they rarely do), then include flags(expect-hostname) in the source.
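In that case the source block would become something along these lines (a sketch only; the flags() part is the only new bit, per the release notes quoted above):

source src {
    unix-stream("/dev/log" flags(expect-hostname));
    internal();
    file("/proc/kmsg");
};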
Regards,
Robert Fekete
syslog-ng documentation maintainer
|
Just rebooted my system to this warning
:: Starting Syslog-NG [BUSY]
WARNING: Configuration file format is too old, please update it to use the 3.2 format as some constructs might operate inefficiently;
WARNING: the expected message format is being changed for unix-domain transports to improve syslogd compatibity with syslog-ng 3.2. If you are using custom applications which bypass the syslog() API, you might need the 'expect-hostname' flag to get the old behaviour back;

Anyone know of any good resources on converting formats? My syslog-ng.conf is primarily from the Gentoo Security Handbook, and thus simply using the .pacnew file won't work.
here's my current conf file
@version: 3.0
#
# /etc/syslog-ng.conf
#options {
stats_freq (0);
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
long_hostnames(off);
use_dns (no);
use_fqdn (no);
create_dirs (no);
keep_hostname (yes);
perm(0640);
group("log");
};source src {
unix-stream("/dev/log");
internal();
file("/proc/kmsg");
};destination d_authlog { file("/var/log/auth.log"); };
destination d_syslog { file("/var/log/syslog.log"); };
destination d_cron { file("/var/log/crond.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kernel { file("/var/log/kernel.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_news { file("/var/log/news.log"); };
destination d_ppp { file("/var/log/ppp.log"); };
destination d_debug { file("/var/log/debug.log"); };
destination d_messages { file("/var/log/messages.log"); };
destination d_errors { file("/var/log/errors.log"); };
destination d_everything { file("/var/log/everything.log"); };
destination d_iptables { file("/var/log/iptables.log"); };
destination d_acpid { file("/var/log/acpid.log"); };
destination d_console { usertty("root"); };# Log everything to tty12
destination console_all { file("/dev/tty12"); };
#destination knotifier { program('/usr/local/bin/knotifier'); };filter f_auth { facility(auth); };
filter f_authpriv { facility(auth, authpriv); };
filter f_syslog { program(syslog-ng); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_kernel { facility(kern) and not filter(f_iptables); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_user { facility(user); };
filter f_uucp { facility(cron); };
filter f_news { facility(news); };
filter f_ppp { facility(local2); };
filter f_debug { not facility(auth, authpriv, news, mail); };
filter f_messages { level(info..warn) and not facility(auth, authpriv, mail, news, cron) and not program(syslog-ng) and not filter(f_iptables); };
filter f_everything { level(debug..emerg) and not facility(auth, authpriv); };
filter f_emergency { level(emerg); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_crit { level(crit); };
filter f_err { level(err); };
filter f_iptables { match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
filter f_acpid { program("acpid"); };log { source(src); filter(f_acpid); destination(d_acpid); };
log { source(src); filter(f_authpriv); destination(d_authlog); };
log { source(src); filter(f_syslog); destination(d_syslog); };
log { source(src); filter(f_cron); destination(d_cron); };
log { source(src); filter(f_daemon); destination(d_daemon); };
log { source(src); filter(f_kernel); destination(d_kernel); };
log { source(src); filter(f_lpr); destination(d_lpr); };
log { source(src); filter(f_mail); destination(d_mail); };
log { source(src); filter(f_news); destination(d_news); };
log { source(src); filter(f_ppp); destination(d_ppp); };
log { source(src); filter(f_user); destination(d_user); };
log { source(src); filter(f_uucp); destination(d_uucp); };
#log { source(src); filter(f_debug); destination(d_debug); };
log { source(src); filter(f_messages); destination(d_messages); };
log { source(src); filter(f_err); destination(d_errors); };
log { source(src); filter(f_emergency); destination(d_console); };
log { source(src); filter(f_everything); destination(d_everything); };
log { source(src); filter(f_iptables); destination(d_iptables); };#log { source(src); filter(f_messages); destination(knotifier); };
# Log everything to tty12
log { source(src); destination(console_all); }; | Converting syslog-ng 3.0? format to 3.2 format |
This is done via templates, like this:
$template HostDynFile,"/var/log/HOSTS/%HOSTNAME%/%$YEAR%/%$MONTH%/%$DAY%/%syslogfacility-text%_%HOSTNAME%_%$YEAR%_%$MONTH%_%$DAY%"This template can then be used when defining an output selector line, e.g.:
*.* -?HostDynFileMore info is available here: Building A Central Loghost On CentOS And RHEL 5 With rsyslog
|
I'm moving from a syslog-ng central log host to rsyslog. I can't even seem to find syslog-ng in the CentOS repos these days. I want to filter logs by hostname and facility.
Here is how I do it in syslog-ng
destination std {
file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/$FACILITY_$HOST_$YEAR_$MONTH_$DAY"
owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes)
);
};Is there a simple way to do this with rsyslog?
| Rsyslog Central Log Host |
Syslog-ng itself does not do this. However, here are a few ways to achieve high-availability logging, sorted in increasing levels of complexity/cost:

Simply adding both servers and always writing to both is the easiest one.
Have a backup syslog config around with the second servername. Have a cronjob or something to check the local syslog for reports on connection errors to the server and, when one is found, restart syslog with the backup configuration
Setting up a haproxy in tcp-mode on your local server, and setting it to use the primary server first and go to the second in case the first fails. Set syslog to log to the local haproxy instance instead of directly to the remote servers
Setting up a cluster of log servers with shared disks on a SAN, commercial HA (e.g. Veritas Cluster)... This is expensive; whether it's worth it will depends on how much it will cost you to lose some logs. |
Is it possible to set up a failover mechanism with Syslog-NG opensource edition?
I want that the syslog-ng Daemon logs to a remote Loghost and switches to an other server if the first would go down for some reason...
| Failover with Syslog-ng? |
For most commands a wrapper will have to be written because Syslog-ng will only execute the command when it starts. This means that the command has to effectively be a daemon itself always accepting input from stdin.
That's simple though...
#!/bin/dash

while read line
do
/execute/my/app $line
done

Unfortunately this script doesn't work for me, probably because it doesn't know which display to use. But if your script doesn't need an X server then a simple format like this should work for you.
Although this is in no way helpful (since syslog-ng will only start the program on startup), I found it interesting that xargs can create positional arguments from standard input.
echo 'test' | /usr/bin/xargs -I '{}' /usr/bin/kdialog --passivepopup '{}' 2 |
According to the documentation I can execute a program somewhat like so:
destination knotifier { program('/path/to/executable'); };

And it will send the log to the stdin of the executable. But what if the program I'm executing needs the input as an argument to an option? Is there a way to do that? Or do I have to write a wrapper for the program I'm executing?
| Can I send the ouput of a log to a command as an argument to an option in syslog-ng? |
The syslog-ng daemon was not starting properly here. Although it was configured to act as a remote syslog server, port 514/UDP was also not showing in netstat.
Debugging the problem with the command:
/usr/sbin/syslog-ng -d

We saw the error:
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d'As the \xe2\x80\x9c are control codes for UTF-8 character codes, we arrived to the conclusion there were extraneous characters in the syslog-ng.confconfiguration file. They are probably due to copy&pasting the configuration from a web page, together with a system being configured with UTF-8.
Editing with with LANG=C for having minimal character translations with the command:
LANG=C vi /etc/syslog-ng/syslog-ng.confThe user reported the following line without UTF-8 translation:
file(▒~@~\/var/log/cisco/cisco.log▒~@~]); Editing it as it should be, and restarting it, fixed the problem:
file("/var/log/cisco/cisco.log");From: UTF-8UTF-8 is a character encoding capable of encoding all possible
characters, or code points, in Unicode. The encoding is
variable-length and uses 8-bit code units.Why “LANG=C”In the C programming language, the locale name C “specifies the
minimal environment for C translation”Recommendation: be very careful when copying & pasting configurations directly from web pages. Not all Unix utils understand character sets other than the traditional ASCII representation.
|
I have installed syslog-ng 3.5 on RasperryPi Debian Jessie. When I try to start the service, it fails
-- Unit syslog.socket has begun starting up.
Feb 10 12:29:28 blackbox systemd[1]: Socket service syslog.service not loaded, r
Feb 10 12:29:28 blackbox systemd[1]: Failed to listen on Syslog Socket.
-- Subject: Unit syslog.socket has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit syslog.socket has failed.
--
-- The result is failed.
Feb 10 12:29:28 blackbox systemd[1]: Starting System Logger Daemon...
-- Subject: Unit syslog-ng.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit syslog-ng.service has begun starting up.
Feb 10 12:29:29 blackbox systemd[1]: Started System Logger Daemon.
-- Subject: Unit syslog-ng.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit syslog-ng.service has finished starting up.
--
-- The start-up result is done.I checked netstat and port 514 is not used by another process. This is the configuration I used, that worked fine with an older version:
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_cisco { file(“/var/log/cisco/cisco.log”); };
log { source(s_net); destination(d_cisco); };When I try to run it manually:
root@blackbox:~# /usr/sbin/syslog-ng -d

Running application hooks; hook='1'
Running application hooks; hook='3'
syslog-ng starting up; version='3.5.6'
Incoming log entry; line='<164>Feb 10 2016 15:03:59: %PIX-4-400037: IDS:6053 DNS all records request from 5.172.120.51 to 192.168.0.3 on interface outside\x0a'
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d', error='No such file or directory (2)'
Incoming log entry; line='<164>Feb 10 2016 15:04:03: %PIX-4-400037: IDS:6053 DNS all records request from 5.172.120.51 to 192.168.0.3 on interface outside\x0a'
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d', error='No such file or directory (2)'
Incoming log entry; line='<164>Feb 10 2016 15:04:07: %PIX-4-400037: IDS:6053 DNS all records request from 5.172.120.51 to 192.168.0.3 on interface outside\x0a'
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d', error='No such file or directory (2)'
Incoming log entry; line='<164>Feb 10 2016 15:04:07: %PIX-4-400011: IDS:2001 ICMP unreachable from 198.48.92.104 to 192.168.0.3 on interface outside\x0a'
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d', error='No such file or directory (2)'
Incoming log entry; line='<164>Feb 10 2016 15:04:07: %PIX-4-313005: No matching connection for ICMP error message: icmp src outside:198.48.92.104 dst inside:192.168.0.3 (type 3, code 3) on outside interface. Original IP payload: udp src 192.168.0.3/53 dst 198.48.92.104/17106.\x0a'
Error opening file for writing; filename='\xe2\x80\x9c/var/log/cisco/cisco.log\xe2\x80\x9d', error='No such file or directory (2)'
^Csyslog-ng shutting down; version='3.5.6'
Running application hooks; hook='4'

root@blackbox:~# cd /var/log/cisco/
root@blackbox:/var/log/cisco# ls -l
total 0
-rwxrw-rw- 1 root root 0 Feb 10 11:43 cisco.log
root@blackbox:/var/log/cisco# | Syslog service fails to start |
By default, at least in the Debian version of syslog-ng, console output goes to tty10, which is supposed to be the tenth virtual console. The reason tty10 is configured in a separate configuration file is that its value depends on the system; on Linux-based systems, it’s /dev/tty10, on kFreeBSD-based systems, it’s /dev/ttya. The package build process picks the appropriate file and installs it.
See README.source for details.
|
syslog-ng has the option to include a config snippet:
@include "`scl-root`/system/tty10.conf"

and many examples on-line include that file; but I can't understand what it's for?
The entire included file consists of:
@define tty10 "/dev/tty10" | What is `tty10` used for in syslog-ng |
Try
destination d_netsrv {
network( "10.3.2.1" port(601) transport(tcp) so-keepalive(yes) keep-alive(yes) flags(syslog-protocol)
);
};

or if that does not work:
destination d_netsrv {
network( "10.3.2.1" port(601) transport(tcp) flags(syslog-protocol) );
};

The destination address does not have ip(), and you do not need ip-protocol(4) as it is the default.
See syslog-ng Example 7.28. Using the network() driver
|
I have installed syslog-ng v3.5.6 to the Debian GNU/Linux 8.7 (jessie):
# syslog-ng --version
syslog-ng 3.5.6
Installer-Version: 3.5.6
Revision: 3.5.6-2+b1 [@416d315] (Debian/unstable)
Compile-Date: Oct 1 2014 18:23:11
Available-Modules: confgen,basicfuncs,afstomp,afsocket-tls,csvparser,syslogformat,affile,cryptofuncs,redis,afsql,afsmtp,afsocket-notls,afamqp,afprog,afsocket,system-source,dbparser,json-plugin,afmongodb,linux-kmsg-format,tfgeoip,afuser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Pcre: onNow trying to configure network destination:
destination d_netsrv {
network(
transport("tcp")
ip-protocol(4) ip(10.3.2.1) port(601)
so-keepalive(yes) keep-alive(yes)
flags(syslog-protocol)
);
};But syslog-ng doesn't like transport() option:
# syslog-ng --syntax-only
Error parsing afsocket, syntax error, unexpected KW_TRANSPORT, expecting LL_IDENTIFIER or LL_STRING in /etc/syslog-ng/syslog-ng.conf at line 53, column 5: transport("tcp")
^^^^^^^^^

syslog-ng documentation: http://www.balabit.com/support/documentation/?product=syslog-ng
mailing list: https://lists.balabit.hu/mailman/listinfo/syslog-ng

Any ideas?
| syslog-ng network() destination doesn't like transport("tcp") |
That's a tricky one. The problem imho is that there are JSON objects intermixed with plain-text fields. I think you have the following options (note that you'll need a recent syslog-ng version to use the json and the k-v parsers, I'd go for version 3.8):

1. If you can, configure mongodb to log into pure json, and parse that with syslog-ng's json-parser. (Don't know if mongodb can do this.)

2. You could build a pattern database to cover the individual messages, but that can take a lot of time.

3. But the most likely option would be to use a combination of syslog-ng parsers (a rough config sketch follows below): use a csv-parser to split the message into two columns at the first { character; parse the first column using a key-value parser (the colon is a separator in this part of the message); and use a json-parser to parse the second part of the message (since some messages have multiple json parts, you might have to add another csv+json combo here).

These parsers will create name-value pairs of the parsed values, and you can use a template or a template function to output them as you need (for example, using the format-welf template function).

Or now that I think of it, if you do not need the JSON structure (only the flat names+values), then you can try to simply use a rewrite rule to remove the {} characters from the messages, and use a key-value parser.
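To make the parser-chain option above a little more concrete, here is a very rough, untested sketch; the parser names are made up, the source/filter names are the ones from the question's config, and kv-parser()/value-separator() need a reasonably recent syslog-ng (check the documentation for your version):

parser p_split {
    csv-parser(columns("MONGO.HEAD", "MONGO.REST") delimiters("{") flags(greedy));
};
parser p_head { kv-parser(template("${MONGO.HEAD}") value-separator(":") prefix("mongo.")); };
parser p_json { json-parser(template("{${MONGO.REST}") prefix("mongo.")); };

log {
    source(s_all); filter(f_clm_mongodb);
    parser(p_split); parser(p_head); parser(p_json);
    destination(d_clm_mongodb);
};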
If the above option does not work, you can write a custom parser in python and process the messages there.HTH.
Please let me know if you succeed.
|
I'm receiving a lot of Mongodb logs with my Syslog-ng. below is the sample of logs parsed and stored like this:
2016-10-18 19:01:08 f:local1.p:info h:10.133.126.81 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:02.439+0330 I COMMAND [conn71796] command CLM.TroubleTicket command: find { find: "TroubleTicket", filter: { $and: [ { troubleTicket.serviceCode: "8118415922" } ] }, projection: { troubleTicket.referenceNumber: 1, troubleTicket.ticketGenerationDate: 1, troubleTicket.ticketCreatedDate: 1, troubleTicket.currentStatus: 1, troubleTicket.currentStatusReason: 1, troubleTicket.thirdPartyIncidentNumber: 1, troubleTicket.troubleTicketCatId: 1, troubleTicket.troubleTicketSubCatId: 1, troubleTicket.troubleTicketSubSubCatId: 1, troubleTicket.serviceCode: 1, troubleTicket.lastUpdateDate: 1, $sortKey: { $meta: "sortKey" } }, sort: { troubleTicket.ticketCreatedDate: -1 }, ntoreturn: 5, shardVersion: [ Timestamp 232000|1, ObjectId('578fb3a6e0f9dacf6705e34c') ] } planSummary: IXSCAN { troubleTicket.serviceCode: 1.0 }, IXSCAN { troubleTicket.serviceCode: 1.0 } cursorid:85032809863 keysExamined:97798 docsExamined:97798 hasSortStage:1 keyUpdates:0 writeConflicts:0 numYields:764 nreturned:5 reslen:2354 locks:{ Global: { acquireCount: { r: 1530 } }, Database: { acquireCount: { r: 765 } }, Collection: { acquireCount: { r: 765 } } } protocol:op_command 572ms
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.226+0330 I SHARDING [conn6447] request split points lookup for chunk CLM.ActionLevelDetails { : MinKey } -->> { : MaxKey }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.229+0330 W SHARDING [conn6447] possible low cardinality key detected in CLM.ActionLevelDetails - key is { actionLevelDetails.activityType: "CNFRMREG" }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.229+0330 W SHARDING [conn6447] possible low cardinality key detected in CLM.ActionLevelDetails - key is { actionLevelDetails.activityType: "DOCSUPLOAD" }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.234+0330 I SHARDING [conn6447] request split points lookup for chunk CLM.ActionLevelDetails { : MinKey } -->> { : MaxKey }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.237+0330 W SHARDING [conn6447] possible low cardinality key detected in CLM.ActionLevelDetails - key is { actionLevelDetails.activityType: "CNFRMREG" }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.237+0330 W SHARDING [conn6447] possible low cardinality key detected in CLM.ActionLevelDetails - key is { actionLevelDetails.activityType: "DOCSUPLOAD" }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.350+0330 I SHARDING [conn6447] request split points lookup for chunk CLM.ActionLevelDetails { : MinKey } -->> { : MaxKey }
2016-10-18 19:01:17 f:local1.p:info h:10.133.126.80 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:10.353+0330 W SHARDING [conn6447] possible low cardinality key detected in CLM.ActionLevelDetails - key is { actionLevelDetails.activityType: "CNFRMREG" }
2016-10-18 19:01:18 f:local1.p:info h:10.133.126.81 prog:sharmongo-log m:sharmongo-log 2016-10-18T19:01:16.762+0330 I ACCESS [conn6012] Successfully authenticated as principal dba_admin on adminnote that Mongodb log message contains JSON format as you can see in logs. The config of syslog-ng for these logs are as below:
source s_all {
udp(ip("0.0.0.0") port(514));
tcp(ip("0.0.0.0") port(514) keep-alive(no) max-connections(1000));
};

destination d_clm_mongodb {
file("/storage/sensage/incoming/mtn/syslog-ng/clm_mongodb/clm_mongodb.log"
template("$YEAR-$MONTH-$DAY $HOUR:$MIN:$SEC f:$FACILITY.p:$PRIORITY h:$HOST_FROM prog:$PROGRAM m:$MSG\n")
template_escape(no) );
};

filter f_clm_mongodb { program("sharmongo-log"); };

log { source(s_all); filter(f_clm_mongodb); destination(d_clm_mongodb); flags(final); };

I need to parse these logs to CSV format (comma separated), meaning that even the JSON part should be separated with commas. I searched a lot about this issue. What I need to know is: is there a capability in syslog-ng that parses these JSON logs (samples above) and stores them in CSV format?
Note: The mongodb log format is as following link:
https://github.com/rueckstiess/mongodb-log-spec
| parse Mongodb logs with syslog-ng |
Sounds like rsyslog queueing might do what you want. Messages can also be stored for transmission during off-peak hours.
Specifically, the following:
The "$QueueDequeueSlowdown" directive allows to specify how long (in microseconds) dequeueing should be delayed.
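A hedged sketch of what that might look like in rsyslog.conf, using the legacy queue directives (the host name, sizes and off-peak window below are made up; see the rsyslog queue documentation for your version):

$ActionQueueType LinkedList
$ActionQueueFileName fwdqueue
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueDequeueSlowdown 1000
$ActionQueueDequeueTimeBegin 2
$ActionQueueDequeueTimeEnd 6
*.* @@central-loghost.example.com:514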
|
Where I work, we have a site-to-site VPN tunnel set up between our main data center and a third party data center that does some Oracle PaaS for us. Since they charge per-VM, we're monitoring the Oracle audit logs on a local VM that is running the analytics for the Security and Audits department. The third party (or rather a department within it) are complaining about the amount of network traffic going over the tunnel due to syslog.
It's currently running unencrypted over udp/514, so I think I need to do some compression. After some research it looks like both rsyslog (what the third party is using) and syslog-ng (what we're using) support TLS compression. We have the CPU cycle's to spare, the third party's networking team is just complaining because apparently it impacts their SLA's.
My question is this:
Is there any way for rsyslog to essentially batch together several messages at once (say over a five minute window) and have them sent to our syslog-ng server all at once?
The reason I'm interested in doing this is because we would be switching over to TCP there's going to be more overhead. It's my understanding that most compression methods will do substitution on redundant data, which I'd imagine would be the case with a stream of syslog messages. The goal is to batch together then remove the redundant parts prior to transmission.
| Syslog TLS Compression and Message Buffering |
Simply create a new user and a new group:
sudo adduser foo

Then, change the group of the file:

sudo chgrp foo /app/syslog-ng/etc/syslog-ng.conf

And add the write permission:

sudo chmod 664 /app/syslog-ng/etc/syslog-ng.conf

Executing /app/syslog-ng/sbin/syslog-ng should, according to the permissions, be already possible for every user.
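If further accounts should later get the same access, group membership is added with usermod (assuming adduser created a matching group, as it does by default; the extra account name here is just an example):

sudo usermod -aG foo someuser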
|
Installed an application as root owner but not as non-root. why? Because we had to install this application in custom location(/app)
So, after installing an application(Syslog-NG), below are the files with current ownership:
# ls -l /app/syslog-ng/etc/syslog-ng.conf
-rw-r--r-- 1 root root 938 Aug 20 12:43 /app/syslog-ng/etc/syslog-ng.conf
# ls -l /app/syslog-ng/sbin/syslog-ng
-rwxr-xr-x 1 root root 39768 Aug 20 12:43 /app/syslog-ng/sbin/syslog-ngRequirement is to have a new local user(non root) on this RHEL server,
# uname -a
Linux abc123.xy.ef.com 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

which can read/modify the file (syslog-ng.conf) and execute the file (syslog-ng)
Goal - The application should not need elevated privileges to run. This new username is supposed to belong to that application but not to any specific user. This new username cannot be in /etc/sudoers for elevated privileges. Every LDAP user (employee) logging into that machine will sudo to this new username before working with that application.

1) Do I need to create a new group (say newgrp) with some permissions? Command syntax please...
If yes...
2) What is the command syntax to add new user to be part of that group? chgrp newgrp filename would suffice...
| Changing group ownership of files - User management |
I don't know if this is still ongoing for you, but for reference for anyone else trying to filter by username, this is how I was able to do it.
I created the following filter.
filter sampleUserFilter {
facility (user) and match ("myUser" value ("PROGRAM"));
};

That allowed me to route messages from a specific user (using logger) to a separate file.
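To actually route the matched messages somewhere, the filter is then referenced from a log path, along these lines (the destination name and file are placeholders; the source must be whatever your configuration already defines):

destination d_myuser { file("/var/log/myuser.log"); };
log { source(s_src); filter(sampleUserFilter); destination(d_myuser); };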
|
I've found a nice feature of the syslog-ng: if I use the logger to log things from the user process, I get the logging user name. Around so:
peterh$ echo test log message|logger

then I get this in /var/log/messages:
Oct 12 16:38:29 thehost peterh: test log messageWhere "thehost" is the hostname of the server, and "peterh" is the user name below I gave the command.
Now what I want: I want to collect the log entries of a specific user into a specific file with the syslog-ng.
The bad-looking, but sometimes working features of the syslog-ng to filter the whole log entry didn't work:
# Doesn't work - it doesn't do anything
filter f_peterh { match('peterh'); };
destination d_filter { file("/var/log/peterh.log"); };
log { source(s_src); filter(f_peterh); destination(d_peterh); };

The documentation available everywhere on the net talks about everything from the binary format of the log messages to the related RFCs, except how to filter the syslog by user name.
How to do it?
| How can I make syslog-ng to filter for a user name? |
configure is looking for the relevant OpenSSL development files and cannot find them. On your RHEL 7 system, the easiest way to achieve this is to yum install openssl-devel and then retry.
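For example (run as root or with sudo):
yum install openssl-devel
./configure --prefix=/app/syslog-ng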
|
Building latest syslog-ng (3.17.2, rather than the packaged version in EPEL, which is 3.5.6, built 30-Dec-2015) from https://github.com/balabit/syslog-ng/releases
While running ./configure --prefix=/app/syslog-ng, it gives this error:
configure: error: Cannot find OpenSSL libraries with version >= 0.9.8 it is a hard dependency from syslog-ng 3.7 onwards
# yum install openssl
Loaded plugins: package_upload, product-id, search-disabled-repos, subscription-manager .......
Package 1:openssl-1.0.2k-8.el7.x86_64 already installed and latest version
# openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
# rpm -qa|grep -i openssl
pyOpenSSL-0.13.1-3.el7.x86_64
openssl-libs-1.0.2k-8.el7.i686
openssl-1.0.2k-8.el7.x86_64
openssl-libs-1.0.2k-8.el7.x86_64
# cat /etc/system-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
#
How do I resolve this configure error?
| syslog-ng configure on RHEL 7 fails with OpenSSL version 0.9.8 dependency |
I believe you need the ${PRIORITY} or the ${LEVEL} macros, see https://www.balabit.com/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/reference-macros.html
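As a sketch, a syslog-ng template built from those macros might look like this (the macro names come from the linked reference; the exact layout is only an approximation of the rsyslog template):
template t_with_pri {
    template("${ISODATE} ${HOST} ${FACILITY}.${PRIORITY}: ${MSGHDR}${MSG}\n");
};
destination d_mesg { file("/var/log/messages" template(t_with_pri)); };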
|
I have systems running syslog and rsyslog across my environment and I would like to have similar outputs.
I have created a template in rsyslog that looks like the following:
$template TraditionalFormatWithPRI,"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %pri-text%:%syslogtag%%msg:::drop-last-lf%\n"
I would like to create something similar in syslog-ng but I cannot seem to find a replacement for %pri-text% which outputs the source of the message and the log level. Does anyone know if there is an equivalent in syslog-ng or if there is a combination of two template functions that I can use to create the same output?
thanks in advance!
| syslog-ng equivalent of pri-text of rsyslog |
From the Documentation (my boldface):
match(): Match a regular expression to the headers and the message itself (i.e., the values returned by the MSGHDR and MSG macros).
message(): Match a regular expression to the text of the log message, excluding the headers (i.e., the value returned by the MSG macros). |
What is the difference between these two?:
filter f_avc { not message(something); };and:
filter f_avc { not match(something); };it's hard to google for these types of questions :D
| syslog-ng difference between "not message(something)" and "not match(something)"? |
Solved! You must force syslog-ng to reopen its target log files after each log rotation
So, I figured it out. Thanks to @Murray Jensen for the hint about it here.
Whenever logrotate rotates my /var/log/messages file, it renames it to /var/log/messages.1. However, syslog-ng is writing to the file pointed to by the original file descriptor (fd) it opened up. Renaming the file from /var/log/messages to /var/log/messages.1 does not change the file descriptor, so the file descriptor syslog-ng is now writing to points to the file now named /var/log/messages.1. The fix is to simply force syslog-ng to reopen its log files and obtain new file descriptors after each log rotation, thereby causing it to obtain a new file descriptor for the newly-created target log file which now exists at /var/log/messages.
There are 3 ways to do this, which I've written about here: https://github.com/syslog-ng/syslog-ng/issues/1774#issuecomment-1270517815
See here for where I learned about syslog-ng-ctl reopen, which is the recommended way: https://github.com/syslog-ng/syslog-ng/issues/1774#issuecomment-346624252
The 3 ways are:
# Option 0 (no longer recommended): call the heavier `reload` command after log
# rotation
syslog-ng-ctl reload

# Option 1 (RECOMMENDED): call the new `reopen` command after log rotation
syslog-ng-ctl reopen

# Option 2 (same thing as Option 1 above): send the `SIGUSR1` kill signal to the
# running `syslog-ng` process
pid="$(cat /var/run/syslog-ng.pid)"; kill -SIGUSR1 "$pid"
So, to force logrotate to call one of the 3 ways above automatically after each log rotation, you must add the proper command as a postrotate script inside your /etc/logrotate.d/syslog-ng (or similar--can be named anything) logrotate configuration file. Here is what a fixed logrotate config file might now look like:
From my notes here:
Sample /etc/logrotate.d/syslog-ng logrotate config file:
/var/log/auth.log
/var/log/user.log
/var/log/messages
{
rotate 7
size 20M
delaycompress
missingok
# Required for syslog-ng after each rotation, to cause it to reopen log
# files so it can begin logging to the new log file under a new file
# descriptor, rather than to the old log file which has now been rotated
# and renamed.
postrotate
# After rotating the log files, cause syslog-ng to reopen the
# destination log files so it will log into the newly-created log files
# rather than into the now-rotated and renamed ones.
#
# This ensures, for example, that syslog-ng will move its file
# descriptor to begin logging into the main "/var/log/messages" log
# file again, instead of into the now-rotated "/var/log/messages.1"
# file, which the old file descriptor (fd) is now pointing to since
# that fd's filename was just renamed from "/var/log/messages"
# to "/var/log/messages.1" during the log rotation.

# Option 1:
syslog-ng-ctl reopen
# OR, Option 2
# pid="$(cat /var/run/syslog-ng.pid)"; kill -SIGUSR1 "$pid"
endscript
}Note: I've also opened up a documentation change request here: https://github.com/syslog-ng/syslog-ng/issues/4166. It is now recommended to use syslog-ng-ctl reopen after each log rotation instead of syslog-ng-ctl reload.
Old attempt at an answer (what I tried first)
On the board running syslog-ng, I reflashed the rootfs (root filesystem) image entirely, and rebooted the board, and now I see the /var/log/messages file again:
Part of the output of ls -1 /var/log:
messages
messages.1
messages.2
messages.3
messages.4
messages.5
messages.6
messages.7
I can't explain it. tail -f /var/log/messages does indeed show active logs coming into that file, as expected, and tail -f /var/log/messages.1 shows that the file is static, with no new messages coming in, also as expected.
I can prove that this board is indeed running syslog-ng by looking at the output of ps aux | grep syslog:
# ps aux | grep syslog
803 root 0:00 {syslog-ng} supervising syslog-ng
804 root 0:02 /usr/sbin/syslog-ng
12571 root 0:00 grep syslog
...as opposed to that same command's output when run on the syslog board:
# ps aux | grep syslog
789 root 0:19 /sbin/syslogd -n -n -s 0
2993 root 0:00 grep syslog
Again, I'm not sure what happened, nor why.
On both boards, ps aux | grep logrotate shows that logrotate is running. Ex:
# ps aux | grep logrotate
1299 root 0:00 runsv logrotate-periodically
14208 root 0:00 grep logrotate
Both boards have the same /etc/logrotate.conf file, and only the syslog-ng board has the /etc/syslog-ng.conf file, which contains the contents as shown in the question.
If I figure out anything new in the coming days, I'll come back and update this answer.
|
I am building an embedded Linux board with Buildroot (user manual here).
I have syslog-ng running on the board. Its config file is specified in Buildroot here: https://github.com/buildroot/buildroot/blob/master/package/syslog-ng/syslog-ng.conf:
@version: 3.37

source s_sys {
    file("/proc/kmsg" program_override("kernel"));
    unix-stream ("/dev/log");
    internal();
};

destination d_all {
    file("/var/log/messages");
};

log {
    source(s_sys);
    destination(d_all);
};

Notice it specifies the destination as "/var/log/messages", yet active logging on the board is going into a file named /var/log/messages.1, and the /var/log/messages file doesn't even exist. Why is that? Is there a way to get logging into the /var/log/messages file instead?
Syslog, which we used to use, logs into /var/log/messages, and we are trying to keep that behavior for consistency.
Additional notes
ls -1 /var/log on a board running syslog contains these messages files:
messages
messages.1
messages.2
messages.2.gz
messages.3
messages.4
messages.5
messages.6
messages.7ls -1 /var/log on a board running syslog-ng contains these messages files (notice messages is missing):
messages.1
messages.2
messages.3
messages.4
messages.5
messages.6
messages.7On the syslog-ng board, tail -f /var/log/messages.1 shows it is continually receiving logged messages, which is unexpected, since when using syslog the "active" file is /var/log/messages instead. | Buildroot: syslog-ng logs into the "/var/log/messages.1" file instead of "/var/log/messages" |
By default, syslog-ng loads the configuration from a hard-coded default configuration path (you can check that path with the syslog-ng --help command; it's next to the --cfgfile option).
This can be changed via the command line with the mentioned option.
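For example (the path is just an illustration):
syslog-ng -f /etc/syslog-ng/custom.conf    # -f is the short form of --cfgfile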
If you want to see all the configuration files loaded recursively (@include), you can run syslog-ng in debug mode:
$ syslog-ng -Fed
Starting to read include file; filename='/usr/share/syslog-ng/include/scl/sudo/sudo.conf', depth='2'
...
If you want to see the full preprocessed configuration of a running syslog-ng instance, you can query it with the sbin/syslog-ng-ctl config --preprocessed command.
If you want to ensure that the correct version of the configuration is running in syslog-ng (there might be a newer config on the disk that hasn't been applied yet), you can use the following command:
sbin/syslog-ng-ctl config --verify
Configuration file matches active configuration
You can also get a hash or identifier for similar purposes:
sbin/syslog-ng-ctl config --id |
I am running syslog-ng on debian.
How do I check which conf file was loaded upon startup?
Neither
systemctl status syslog-ngnor
systemctl show syslog-ngtell me.
| How do I check which conf file was loaded by syslog-ng when starting? |
Several errors are present. For one, the system() and internal() statements need to be terminated with ;
source s_test {
system();
internal();
syslog(ip(0.0.0.0) port(516) transport("tcp"));
};
destination d_test { file("/var/log/test"); };
log { source(s_test); destination(d_test); };
This yields new errors and warnings when syslog-ng is started manually
(the messages from systemd were useless for debugging on Centos 7):
# syslog-ng -F -p /var/run/syslogd.pid
WARNING: Configuration file has no version number, assuming syslog-ng 2.1 format. Please add @version: maj.min to the beginning of the file to indicate this explicitly;
...
Error parsing source, source plugin system not found in /etc/syslog-ng/syslog-ng.conf at line 16, column 5:

system();
^^^^^^

These we can correct by including statements from the default
syslog-ng configuration file:
@version:3.5
@include "scl.conf"
source s_test {
system();
internal();
syslog(ip(0.0.0.0) port(516) transport("tcp"));
};
destination d_test { file("/var/log/test"); };
log { source(s_test); destination(d_test); };
And now syslog-ng starts:
# syslog-ng -F -p /var/run/syslogd.pid
...However, a logger(1) test fails from another terminal; the
/var/log/test log reports an Invalid frame header error:
# logger --server 127.0.0.1 --tcp --port 516 foo
This can be corrected by using network instead of syslog:
@version:3.5
@include "scl.conf"
source s_test {
system();
internal();
network(ip(0.0.0.0) port(516) transport("tcp"));
};
destination d_test { file("/var/log/test"); };
log { source(s_test); destination(d_test); };However I do not know whether you remote device needs syslog or
network.
|
I am new to syslog-ng, and want to test writing to a syslog from an external device. The external device shows that it is "connected" to my syslog on port 516. However, on my CentOS7 host nothing is being written to the log file (and no errors appear in /var/log/messages). I tried telnetting to localhost:516 and dumping in some text (as a test) but nothing is logged anywhere. netstat confirms syslog is listening on TCP 516.
My config is below:
source s_test {
system()
internal()
syslog(ip(0.0.0.0) port(516) transport("tcp"));
};
destination d_test { file("/var/log/test"); };
log { source(s_test); destination(d_test); };Is there an error in my config?
| syslog-ng not writing to file |
iv.h comes from libivykis. You don’t specify which distribution you’re using; on Debian and derivatives you’ll need to install libivykis-dev.
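For example, on Debian and derivatives (package names may differ on other distributions):
sudo apt-get install libivykis-dev
For the follow-up error about lib/ivykis/src/libivykis.la, it may help to tell the build to use the system library instead of the bundled copy; the --with-ivykis=system flag is an assumption based on syslog-ng's configure options, so check ./configure --help first:
./configure --prefix=/usr/local --with-ivykis=system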
|
I am trying to install syslog-ng-3.13.2 from source code on embedded Linux. The ./configure command worked without any error. When I do make, I get the following error:

In file included from /source/lib/cfg-grammar.y:41:0,
                 from modules/native/native-grammar.y:39:
./lib/logthrdestdrv.h:33:16: fatal error: iv.h: No such file or directory
 #include <iv.h>
                ^
compilation terminated.
make[2]: *** [modules/native/modules_native_libsyslog_ng_native_connector_a-native-grammar.o] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

Is this because of some package dependency? Which package has the iv.h header file?
UPDATE:
As suggested in Stephen's answer I installed libivykis. Now I am getting a different error, i.e.:
Makefile:18272: recipe for target 'lib/ivykis/src/libivykis.la' failed.
The libivykis.la is in /usr/local/lib . I don't know why it is trying to build it in /syslog-ng-3.13.2/lib/ivykis/src
| syslog-ng make error - iv.h: No such file or directory |
If you want to combine multiple match statements, use or:
filter send_remote {
match("01CONFIGURATION\/6\/hwCfgChgNotify\(t\)", value("MESSAGE"))
or
match("01SHELL\/5\/CMDRECORD", value("MESSAGE"))
or
match("10SHELL", value("MESSAGE"))
or
match("ACE-1-111008:", value("MESSAGE")); }... and then use that filter name once:
log { source(s_network); filter(send_remote); destination(remote_log_server); }; | Below is the current configuration for Syslog-NG logging, locally,
source s_network {
udp(
flags(syslog_protocol)
keep_hostname(yes)
keep_timestamp(yes)
use_dns(no)
use_fqdn(no)
);
};

destination d_all_logs {
    file("/app/syslog-ng/custom/output/all_devices.log");
};

log {
    source(s_network);
    destination(d_all_logs);
};

To forward certain messages... below is the configuration to be added.
filter message_filter_string_1{
    match("01CONFIGURATION\/6\/hwCfgChgNotify\(t\)", value("MESSAGE"));
}

filter message_filter_string_2{
    match("01SHELL\/5\/CMDRECORD", value("MESSAGE"));
}

filter message_filter_string_3{
    match("10SHELL", value("MESSAGE"));
}

filter message_filter_string_4{
    match("ACE-1-111008:", value("MESSAGE"));
}

destination remote_log_server {
    udp("192.168.0.20" port(25214));
};

log { source(s_network); filter(message_filter_string_1); destination(remote_log_server); };
log { source(s_network); filter(message_filter_string_2); destination(remote_log_server); };
log { source(s_network); filter(message_filter_string_3); destination(remote_log_server); };
log { source(s_network); filter(message_filter_string_4); destination(remote_log_server); };

Actually there are more than 80 such filters
Does Syslog-NG configuration syntax allow a single filter statement with a match of regex1 or regex2 or regex3?
(or)
Does Syslog-NG configuration syntax allow a single log statement with multiple filters?
| SyslogNG-How to optimise filter and log statements? [closed] |
Solved!
As @Alexander pointed out, the problem was that SELinux was blocking the port, but I'm receiving the logs on 515 so I cannot change it.
The solution was to set SELinux from enforcing to permissive with setenforce 0.
Additionally, I've changed the SELinux config file (/etc/selinux/config) so this setting persists after a reboot, by changing the line to SELINUX=permissive
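If you would rather keep SELinux enforcing, a sketch of an alternative (assuming the semanage tool from the policycoreutils-python package is installed) is to label the extra port for syslog instead of relaxing SELinux globally:
semanage port -a -t syslogd_port_t -p tcp 515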
|
I've recently rebooted one of my machines after a long time and a now I'm having a lot of problems with configuration changes.
syslog-ng service is not working anymore with the following error from journactl:
-- Unit syslog-ng.service has begun starting up.
Oct 01 17:13:48 SIEM-ConnLinuxLR systemd[1]: syslog-ng.service: Got notification message from PID 18672, but reception only permitted for main PID 18670
Oct 01 17:13:48 SIEM-ConnLinuxLR syslog-ng[18670]: [2018-10-01T17:13:48.128987] WARNING: window sizing for tcp sources were changed in syslog-ng 3.3, the configuration value was divided by the value of max-con
Oct 01 17:13:48 SIEM-ConnLinuxLR syslog-ng[18670]: [2018-10-01T17:13:48.129414] Error binding socket; addr='AF_INET(0.0.0.0:515)', error='Permission denied (13)'
Oct 01 17:13:48 SIEM-ConnLinuxLR syslog-ng[18670]: [2018-10-01T17:13:48.129438] Error initializing message pipeline;
Oct 01 17:13:48 SIEM-ConnLinuxLR systemd[1]: syslog-ng.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 01 17:13:48 SIEM-ConnLinuxLR systemd[1]: Failed to start System Logger Daemon.
-- Subject: Unit syslog-ng.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit syslog-ng.service has failed.
--
-- The result is failed.

Here is the service configuration:
Description=System Logger Daemon
Documentation=man:syslog-ng(8)
After=network.target

[Service]
Type=notify
User=root
Group=root
ExecStart=/usr/sbin/syslog-ng -p /var/run/syslogd.pid
ExecReload=/bin/kill -HUP $MAINPID
EnvironmentFile=-/etc/syslog-ng
EnvironmentFile=-/etc/default/syslog-ng
EnvironmentFile=-/etc/sysconfig/syslog-ng
StandardOutput=journal
StandardError=journal
Restart=on-failure

[Install]
WantedBy=multi-user.target
So, as you can see it is supposed to be running as root but it's still returning an error='Permission denied (13)'. The funny thing is that if I try to run the command /usr/sbin/syslog-ng -p /var/run/syslogd.pid from the console, it works perfectly without any kind of error.
EDIT1:
No other process is running on port 515; as I said, when I try to run the command manually it works perfectly.
I'm adding the syslog-ng configuration:
@version:3.7
@include "scl.conf"

# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
# Note: it also sources additional configuration files (*.conf)
# located in /etc/syslog-ng/conf.d/

options {
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
chain_hostnames (off);
use_dns (no);
use_fqdn (no);
create_dirs (no);
keep_hostname (yes);
};

source s_sys {
system();
internal();
# udp(ip(0.0.0.0) port(514));
};

destination d_cons { file("/dev/console"); };
destination d_mesg { file("/var/log/messages"); };
destination d_auth { file("/var/log/secure"); };
destination d_mail { file("/var/log/maillog" flush_lines(10)); };
destination d_spol { file("/var/log/spooler"); };
destination d_boot { file("/var/log/boot.log"); };
destination d_cron { file("/var/log/cron"); };
destination d_kern { file("/var/log/kern"); };
destination d_mlal { usertty("*"); };

filter f_kernel { facility(kern); };
filter f_default { level(info..emerg) and
not (facility(mail)
or facility(authpriv)
or facility(cron)); };
filter f_auth { facility(authpriv); };
filter f_mail { facility(mail); };
filter f_emergency { level(emerg); };
filter f_news { facility(uucp) or
(facility(news)
and level(crit..emerg)); };
filter f_boot { facility(local7); };
filter f_cron { facility(cron); };

#log { source(s_sys); filter(f_kernel); destination(d_cons); };
log { source(s_sys); filter(f_kernel); destination(d_kern); };
log { source(s_sys); filter(f_default); destination(d_mesg); };
log { source(s_sys); filter(f_auth); destination(d_auth); };
log { source(s_sys); filter(f_mail); destination(d_mail); };
log { source(s_sys); filter(f_emergency); destination(d_mlal); };
log { source(s_sys); filter(f_news); destination(d_spol); };
log { source(s_sys); filter(f_boot); destination(d_boot); };
log { source(s_sys); filter(f_cron); destination(d_cron); };

# Source additional configuration files (.conf extension only)
@include "/etc/syslog-ng/conf.d/*.conf"

Configuration from apache.conf
source s_net_t515 {
network(
transport("tcp")
port(515)
log-msg-size(2097152)
max-connections(100)
);
};

destination d_apachea { file("/opt/arcsight/logs/Apache/${HOST}.log"); };

destination d_apachee {
file("/opt/arcsight/logs/Apache/error/${HOST}-error.log");
};

destination d_a {
    file("/opt/arcsight/logs/Apache/test.log");
};

filter f_apachea { (netmask(***.***.***.5/32) or netmask(***.***.***.6/32)) and not message('error]') and message('.*\d+\s\d+\s\".*') ; };
filter f_apachee { (netmask(***.***.***.5/32) or netmask(***.***.***.6/32)) and message('error]'); };

log {
source(s_net_t515);
filter(f_apachea);
destination(d_apachea);
};

log {
source(s_net_t515);
filter(f_apachee);
destination(d_apachee);
}; | syslog-ng won't start because error binding socket with permission denied |
You can combine multiple directives in your configuration file.
As an example, based on your code, you define a filter:
filter f_warn { level(warn); };then a destination:
destination remote_log_server {
udp("192.168.0.20" port(25214));
};and put them all together with something like:
log { source(src); filter(f_warn); destination(remote_log_server); };Obviously, you have to configure your source, filter and destination based on your needs.
I suggest you read the official manual thoroughly, as there are lots of options to customize your logging.
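Applied to the scenario in the question, a sketch might look like this (the match pattern and the remote address are placeholders, and s_network is the source already defined in the question):
filter f_forward { match("ERROR" value("MESSAGE")); };
destination d_remote { udp("192.0.2.10" port(514)); };
log { source(s_network); filter(f_forward); destination(d_remote); };
The existing log statement to all_devices.log keeps running unchanged, so every message is still stored locally while only the filtered subset is forwarded.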
|
Scenario is to receive all incoming messages and store all of those messages in /app/syslog-ng/custom/output/all_devices.log, but forward only certain messages (by filtering).
The filter tag is used to filter messages incoming to Syslog-NG, which is not the right usage in this scenario. For example: filter f_warn { level(warn); };
Edit:
My current configuration is:
@version: 3.17

source s_network {
udp(
flags(syslog_protocol)
keep_hostname(yes)
keep_timestamp(yes)
use_dns(no)
use_fqdn(no)
);
};

destination d_all_logs {
    file("/app/syslog-ng/custom/output/all_devices.log");
};

log {
source(s_network);
destination(d_all_logs);
};
After storing all messages in all_devices.log, does Syslog-NG provide syntax (configuration) to forward only certain messages (after filtering) to a remote log server?
| Forwarding messages with certain filter |
In the configuration file for the local network interface (a file matching the name pattern /etc/systemd/network/*.network) we have to either specify that we want to obtain the local DNS server address from the DHCP server using the DHCP= option:
[Network]
DHCP=yes
or specify its address explicitly using the DNS= option:
[Network]
DNS=10.0.0.1
In addition we need to specify (in the same section) local domains using the Domains= option
Domains=domainA.example domainB.example ~example
We specify local domains domainA.example domainB.example to get the following behavior (from the systemd-resolved.service, systemd-resolved man page):
Lookups for a hostname ending in one of the per-interface domains are exclusively routed to the matching interfaces.
This way hostX.domainA.example will be resolved exclusively by our local DNS server.
We specify with ~example that all domains ending in example are to be treated as route-only domains to get the following behavior (from the description of this commit):
DNS servers which have route-only domains should only be used for the specified domains.
This way hostY.on.the.internet will be resolved exclusively by our global, remote DNS server.
Note
Ideally, when using the DHCP protocol, local domain names should be obtained from the DHCP server instead of being specified explicitly in the configuration file of the network interface above. See the UseDomains= option. However there are still outstanding issues with this feature – see the systemd-networkd DHCP search domains option issue.
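Putting the per-interface pieces together, a minimal sketch of such a .network file (e.g. /etc/systemd/network/20-wired.network — the interface name, address and domain names are only examples) might be:
[Match]
Name=enp3s0

[Network]
DHCP=yes
DNS=10.0.0.1
Domains=domainA.example domainB.example ~example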
We need to specify the remote DNS server as our global, system-wide DNS server. We can do this in the /etc/systemd/resolved.conf file:
[Resolve]
DNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844
Don't forget to reload the configuration and to restart the services:
$ sudo systemctl daemon-reload
$ sudo systemctl restart systemd-networkd
$ sudo systemctl restart systemd-resolved
Caution!
The above guarantees apply only when names are being resolved by systemd-resolved – see the man page for nss-resolve, libnss_resolve.so.2 and the man page for systemd-resolved.service, systemd-resolved.
See also: Description of routing lookup requests in systemd related man pages is unclear
How to troubleshoot DNS with systemd-resolved?
References:
Man page for systemd-resolved.service, systemd-resolved
Man page for resolved.conf, resolved.conf.d
Man page for systemd.network
I'm connected to a local area network with access to the Internet through a gateway. There is a DNS server in the local network which is capable of resolving hostnames of computers from the local network.
I would like to configure systemd-resolved and systemd-networkd so that lookup requests for local hostnames would be directed (routed) exclusively to the local DNS server and lookup requests for all other hostnames would be directed exclusively to another, remote DNS server.
Let's assume I don't know where the configuration files are or whether I should add more files and require their path(s) to be specified in the answer.
| How to configure systemd-resolved and systemd-networkd to use local DNS server for resolving local domains and remote DNS server for remote domains? |
When you run a command such as ping foobar the system needs to work out how to convert foobar to an ip address.
Typically the first place it looks is /etc/nsswitch.conf.
This might have a line such as:
hosts: files dns mdns4This tells the lookup routine to first look in "files", which is /etc/hosts. If that doesn't find a match then it will then try to do a DNS lookup. And if we still don't know the answer then it'll try to do a mDNS lookup.
The DNS lookup is where the system then looks at /etc/resolv.conf. This tells it what DNS servers to look at. On my machines I have this auto-configured by DHCP.
% cat /etc/resolv.conf
# Generated by NetworkManager
search mydomain
nameserver 10.0.0.1
nameserver 10.0.0.10
How resolv.conf is built can change, depending on the operating system, which optional components you have installed, other configuration entries, boot sequence... In your case, on Ubuntu, you're running the systemd programs that configure this file to point to your local systemd-resolved, and that will know how to talk to the real DNS servers.
On my primary servers, which have static IP addresses and no systemd-resolved, I manually edit this file.
Finally mdns4 tells the routines to try asking avahi-daemon if it knows the name.
You can change the rules. eg if /etc/nsswitch.conf just said:
hosts: filesthen only the local /etc/hosts file is used.
Other entries are possible; eg ldap would make it do an LDAP lookup.
|
I'm currently working on a project that has required some DNS troubleshooting. However I am fairly new to the wonderful world of networking and I'm at a bit of a loss as to where to begin.
My specific problem probably belongs on the Raspberry Pi Stack Exchange, so I'll avoid crossposting. Just looking for information here.
Looking for information, I was led to the resolv.conf(5) file, resolvconf(8), systemd-resolve(1), and the beast that avahi appears to be.
My Raspberry Pi with Raspbian Buster appears to have avahi-daemon running.
My Ubuntu 18.04.4 LTS has systemd-resolved AND avahi-daemon.
Does resolvconf(8) (man page only on Ubuntu) coordinate the two?
When is /etc/resolv.conf used/ignored?
On Ubuntu:
$ cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.53
search telus

On Raspbian:
$ cat /etc/resolv.conf

# Generated by resolvconf
nameserver 192.168.0.1
nameserver 8.8.8.8
nameserver fd51:42f8:caae:d92e::1

Which utilities are responsible for this?
I don't really understand enough jargon to sift through the man pages and differentiate all these, and I'd love an explanation of how their roles are related.
| What is the difference between resolvconf, systemd-resolve, and avahi? |
I bumped my Ubuntu server 18.04->22.04 and ran into this issue. As you clearly stated (thank you!), updating /etc/network/if-up.d/resolved and removing 2 quoted variables as follows:
sudo vim /etc/network/if-up.d/.resolved.broken-orig

Old: "$DNS"="$NEW_DNS"
-->
New: $DNS="$NEW_DNS"

Old: "$DOMAINS"="$NEW_DOMAINS"
-->
New: $DOMAINS="$NEW_DOMAINS"
Solves this problem after a reboot.
|
TL;DR
sudo cp -p /etc/network/if-up.d/resolved /etc/network/if-up.d/.resolved.broken-orig

#Edit /etc/network/if-up.d/resolved and take out the extraneous quotes on lines 48 and 52
#The fix looks like:
diff /etc/network/if-up.d/.resolved.broken-orig /etc/network/if-up.d/resolved
48c48
< "$DNS"="$NEW_DNS"
---
> $DNS="$NEW_DNS"
52c52
< "$DOMAINS"="$NEW_DOMAINS"
---
> $DOMAINS="$NEW_DOMAINS"

At least, this appears to be effective.
Recently upgraded an older system to Ubuntu 22.04.1 LTS via do-release-upgrade and ran into DNS issues; the error messages were:
nslookup google.com
Server: 127.0.0.53
Address: 127.0.0.53#53

** server can't find google.com: SERVFAIL

and
/etc/network/if-down.d/resolved: 12: mystatedir: not found
/etc/network/if-up.d/resolved: 71: DNS: not found
/etc/network/if-up.d/resolved: 1: /run/network/ifupdown-inet-em1: DNS=8.8.8.8: not found
/etc/network/if-up.d/resolved: 2: /run/network/ifupdown-inet-em1: DOMAINS=local_search_domain.com: not found
Failed to parse DNS server address: DNS
Failed to set DNS configuration: Invalid argument

when attempting to run an nslookup via a network connection (em1) defined in /etc/network/interfaces prior to the system upgrade.
After a period of self-soothing I located https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1981103 and https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1910273 which helped enhance my calm. It appears that, at a minimum, the errors encountered were due to a pair of typos in /etc/network/if-up.d/resolved which are easily fixed with a text editor; after manually removing the extraneous quotes I am able to bring up the network interface and query DNS servers. It's unclear to me if there are other issues with the ifupdown package currently shipped by Ubuntu 22.04.1 LTS (0.8.36+nmu1ubuntu3) or the manual edit I described above. Are there any documented fixes that don't involve editing lines 48 and 52 of /etc/network/if-up.d/resolved or is this the best workaround currently available for legacy systems that have been upgraded to Ubuntu 22.04.1 LTS?
| DNS broken when using ifupdown and systemd-resolved after upgrade to Ubuntu 22.04 |
So, changing my wired eth0 interface to be managed solved this issue for me.
Setting managed=true in the [ifupdown] section of /etc/NetworkManager/NetworkManager.conf:
[ifupdown]
managed=true
Then restart NetworkManager:
sudo systemctl restart NetworkManager
After this it works flawlessly.
This was not 100%. I also applied these changes to try to disable the resolver:
sudo service resolvconf disable-updates
sudo update-rc.d resolvconf disable
sudo service resolvconf stop
Big thanks to this blog post regarding the subject:
https://ohthehugemanatee.org/blog/2018/01/25/my-war-on-systemd-resolved/ (if unavailable use https://github.com/ohthehugemanatee/ohthehugemanatee.org/blob/main/content/blog/source/2018-01-25-my-war-on-systemd-resolved.markdown)
Let's pray this works. This whole systemd-resolve business is just so ugly.
|
I'm using a local BIND9 server to host some local dns records. When trying to dig for a local domain name I can't find it if I don't explicitly tell dig to use my local BIND9 server.
user@heimdal:~$ dig +short heimdal.lan.se
user@heimdal:~$ dig +short @192.168.1.7 heimdal.lan.se
192.168.1.2
Ubuntu 17.04 and systemd-resolved are used. This is the content of my /etc/resolv.conf:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.53

And the output from systemd-resolve --status
Global
DNS Servers: 192.168.1.7
192.168.1.1
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
168.192.in-addr.arpa
17.172.in-addr.arpa
18.172.in-addr.arpa
19.172.in-addr.arpa
20.172.in-addr.arpa
21.172.in-addr.arpa
22.172.in-addr.arpa
23.172.in-addr.arpa
24.172.in-addr.arpa
25.172.in-addr.arpa
26.172.in-addr.arpa
27.172.in-addr.arpa
28.172.in-addr.arpa
29.172.in-addr.arpa
30.172.in-addr.arpa
31.172.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
testThe DNS Servers section does seem to have rightfully configured 192.168.1.7 as the main DNS server (my local BIND9 instance). I can't understand why it's not used ... ?
| Why doesn't systemd-resolved use my local DNS server? |
My guess is that you are getting your IP configuration from DHCP, which overrides the DNS information in your resolved.conf file (from systemd.network(5)):[DHCP] SECTION OPTIONS
[...]
UseDNS= When true (the default), the DNS servers received from the
DHCP server will be used and take precedence over any statically
configured ones.
This corresponds to the nameserver option in resolv.conf(5).Try adding the following to your {networkname}.network file (in /etc/systemd/network):
[DHCP]
UseDNS=false |
I have read about systemd-resolved.service https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html and learnt four modes of handling /etc/resolv.conf./run/systemd/resolve/stub-resolv.conf
/usr/lib/systemd/resolv.conf
/run/systemd/resolve/resolv.conf
/etc/resolv.conf may be managed by other packageI have read it for several times, but still feel confused about how to determine which mode of /etc/resolv.conf I should choose as a normal user.
For example, I try to add some custom dns servers, so,
Add DNS=8.8.8.8 8.8.4.4 in /etc/systemd/resolved.conf and check /run/systemd/resolve/resolv.conf, 8.8.8.8 and 8.8.4.4 exist in it.
If symlinking /run/systemd/resolve/resolv.conf to /etc/resolv.conf, 8.8.8.8 and 8.8.4.4 are gone in
/run/systemd/resolve/resolv.conf.
Update 1:
test@instance-1:~$ cat /run/systemd/resolve/resolv.conf
...
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 8.8.8.8
nameserver 8.8.4.4

test@instance-1:/etc$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
test@instance-1:/etc$ ls -alh /etc/resolv.conf
lrwxrwxrwx 1 root root 32 Mar 18 07:22 /etc/resolv.conf -> /run/systemd/resolve/resolv.conf
test@instance-1:/etc$ sudo reboot

test@instance-1:~$ cat /etc/resolv.conf
domain c.prime-poetry-197705.internal
search c.prime-poetry-197705.internal. google.internal.
nameserver 169.254.169.254

test@instance-1:~$ cat /run/systemd/resolve/resolv.conf
domain c.prime-poetry-197705.internal
search c.prime-poetry-197705.internal. google.internal.
nameserver 169.254.169.254

test@instance-1:~$ ls -alh /etc/resolv.conf
lrwxrwxrwx 1 root root 32 Mar 18 07:22 /etc/resolv.conf -> /run/systemd/resolve/resolv.confUpdate 2:
symlinking from /etc/resolv.conf
test@instance-1:~$ sudo ln -sf /etc/resolv.conf /run/systemd/resolve/resolv.conf
test@instance-1:~$ ls -alh /run/systemd/resolve/resolv.conf
lrwxrwxrwx 1 root root 16 Mar 18 07:51 /run/systemd/resolve/resolv.conf -> /etc/resolv.conf
test@instance-1:~$ sudo reboottest@instance-1:~$ ls -alh /run/systemd/resolve/resolv.conf
-rw-r--r-- 1 systemd-resolve systemd-resolve 603 Mar 18 07:52 /run/systemd/resolve/resolv.conf | Clarifying four modes of handling /etc/resolv.conf in systemd-resolved |
This issue gets very long to explain. The short (imprecise) description is:
where will the single-label lookup requests go?
Single label? (not localhost et al.): Always to the LLMNR system.
Multi-label?: To the DNS servers of each interface. On failure (or if not configured), to the global DNS servers.Yes, the general sequence goes as described in systemd-resolved.service(8) BUT:Routing of lookups may be influenced by configuring per-interface domain names. See systemd.network(5) for details.Sets the systemd.network(5) as an additional resource for DNS resolution.
And, understand that From RFC 4795:Since LLMNR only operates on the local link, it cannot be considered a substitute for DNS.The sequence (simplified) is:The local, configured hostname is resolved to all locally configured IP addresses ordered by their scope, or — if none are configured — the IPv4 address 127.0.0.2 (which is on the local loopback) and the IPv6 address ::1 (which is the local host).The hostnames "localhost" and "localhost.localdomain" (and any hostname ending in ".localhost" or ".localhost.localdomain") are resolved to the IP addresses 127.0.0.1 and ::1.The hostname "_gateway" is resolved to …The mappings defined in /etc/hosts are included (forth and back).If the name to search has no dots (a name like home. has a dot) it is resolved by the LLMNR protocol.LLMNR queries are sent to and received on port 5355. RFC 4795Multi word (one dot or more) names for some domain suffixes (like ".local", see full list with systemd-resolve --status) are resolved via MulticastDNS protocol.Multi word names are checked against the Domains= list in systemd.network(5) per interface and if matched, the list of DNS servers of that interface are used.Other multi-label names are routed to all local interfaces that have a DNS server configured, plus the globally configured DNS server if there is one.#Edit
The title of your question reads:How single-label dns lookup requests are handled by systemd-resolved?So, I centered my answer on systemd-resolved exclusively.
Now you ask:If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface?If an interface is configured with search domain "mydomain" and LLMNR enabled and a lookup request for "xyz" comes in, will "xyz" through LLMNR and "xyz.mydomain" throuth specified dns server both happen?Those appear to be outside of systemd-resolved exclusively.
Let's try to analyze them:LLMNR disabled ?
How? Might I ask?. By disabling systemd-resolved itself with something similar to systemctl mask systemd-resolved?If systemd-resolved is disabled/stopped there is no LLMNR in use (most probably, unless you install Avahi, Apple bonjour or a similar program) But certainly, that is outside of systemd-resolved configuration.
In this case, we should ask: what happens when a name resolution fails? (as there is no server to answer it). That is configured in nsswitch (file /etc/nsswitch.conf). The default configuration for Ubuntu (as Debian) contains this line:
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostnameThat means (in nsswitch parlance):Begin by checking the /etc/hosts file. If not found, continue.Try mdns4_minimal (Avahi et al.) which will attempt to resolve the name via multicast DNS only if it ends with .local. If it does but no such mDNS host is located, mdns4_minimal will return NOTFOUND. The default name service switch response to NOTFOUND would be to try the next listed service, but the [NOTFOUND=return] entry overrides that and stops the search with the name unresolved. If mdns4_minimal return UNAVAIL (not running) then go to dns.The plot thickens everybody wants to be first on the list to resolve names and everyone offers to do all resolutions by themselves.The dns entry in nsswitch actually calls nss-resolve first which replace nss-dnsnss-resolve is a plug-in module for the GNU Name Service Switch (NSS) functionality of the GNU C Library (glibc) enabling it to resolve host names via the systemd-resolved(8) local network name resolution service. It replaces the nss-dns plug-in module that traditionally resolves hostnames via DNS.Which will depend on the several DOMAINS= entries in general /etc/systemd/resolved.conf and/or each interface via /etc/systemd/network files. That was explained above the EDIT entry.
Understand that sytemd-resolved might query the dns servers by itself before the dns entry in nsswitch.If not found yet (without a [notfound=return] entry) then try the DNS servers. This will happen more-or-less immediately if the name does not end in .local, or not at all if it does. If you remove the [NOTFOUND=return] entry, nsswitch would try to locate unresolved .local hosts via unicast DNS. This would generally be a bad thing , as it would send many such requests to Internet DNS servers that would never resolve them. Apparently, that happens a lot.The final myhostname acts as a last-resort resolver for localhost, hostname, *.local and some other basic names.If systemd-resolved has a LLMNR=no set in /etc/systemd/resolved.conf the same list as above apply but systemd-resolved is still able to resolve localhost and to apply DOMAINS= settings (global or per interface).
Understand that There's the LLMNR setting in systemd-resolved and there's also the per-link LLMNR setting in systemd-networkd. link.
#What does all that mean?
That it is very difficult to say with any certainty what will happen unless the configuration is very very specific. you will have to disable services and try (in your computer with your configuration) what will happen.
#Q1If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface?Yes, of course, it may. That LLMNR is disabled only blocks local resolution (no other servers on the local (yes: .local) network will be asked) but the resolution for that name must find an answer (even if negative) so it may (if there is no NOTFOUND=return entry, for example) happen that the DNS servers for a matching interface are contacted to resolve mylocalhost.mylocaldomain when a resolution for mylocalhost was started and there is an entry for mylocaldomain in the "search domain". to answer in a general sense is almost imposible, too many variables.
#Q2If an interface is configured with search domain "mydomain" and LLMNR enabled and a lookup request for "xyz" comes in, will "xyz" through LLMNR and "xyz.mydomain" throuth specified dns server both happen?No. if all is correctly configured a single label name "xyz" should only be resolved by LLMNR, and, even if asked, a DNS server should not try to resolve it. Well, that's theory. But the DNS system must resolve com (clearly, or the network would fall down as it is now). But there is a simple workaround: ask for com., it has a dot, it is a FQDN. In any case, a DNS server shuld answer with NOERROR (with an empty A (or AAAA)) if the server doesn't have enough information about a label and the resolution should continue with the root servers (for .). Or with a NXDOMAIN (the best answer to avoid further resolutions) for domains it knows that do not exist.
The only safe way to control this is to have a local DNS server and choose which names to resolve and which not to.
|
I looked into the systemd-networkd and systemd-resolved:systemd.network(5) on manpages.ubuntu.com
systemd-resolved.service(8) on manpages.ubuntu.comand I am confused by some words:systemd-resolved.service(8)Single-label names are routed to all local interfaces capable of IP multicasting, using the LLMNR protocol.
Lookups for a hostname ending in one of the per-interface domains are exclusively routed to the matching interfaces.systemd.network(5)Both "search" and "routing-only" domains are used for routing of DNS queries: look-ups for host names ending in those domains (hence also single label names, if any "search domains" are listed), are routed to the DNS servers configured for this interface.my question is: for hosts with a bunch of interfaces configured with "search domains" and LLMNR enabled, where will the single-label lookup requests go?
more details of my confusion:if an interface is configured with search domain "mydomain" and LLMNR
disabled, will any single-label lookup request be routed into this
interface?
if an interface is configured with search domain "mydomain" and
LLMNR enabled and a lookup request for "xyz" comes in, will "xyz"
through LLMNR and "xyz.mydomain" throuth specified dns server both
happen? | How single-label dns lookup requests are handled by systemd-resolved? |
After some investigation and a systemd bug report, here is what I discovered.
systemd-resolved gets all its DNS information from systemd-networkd, so focus on systemd-networkd as fixing the rogue server there will flow on into systemd-resolved.
The data is stored in /var/run/systemd/netif/ with one file per interface. This is internal and subject to change so might have moved by the time you read this, however I was able to grep these files for the rogue server and delete the file that had it. When I restarted systemd-networkd, it recreated the deleted file in full.
In my case it recreated the file with the rogue DNS server still listed, which meant it was not being cached by systemd but rather it was still being advertised somewhere on the network.
As it was an IPv6 address, I installed radvd (the IPv6 Router Advertisement daemon) and ran radvdump to show all IPv6 RAs that were arriving on the machine. Sure enough before too long one arrived with the rogue DNS server listed, so I could hunt it down and fix it.
Should this not be an option for you, there are some systemd-networkd options you can use to work around the issue. These must be placed in one of the files where your network is configured (/etc/systemd/network/*.network).
# Don't use DNS servers from DHCP responses received via IPv4 (default is true)
[DHCPv4]
UseDNS=false

# Don't use DNS servers from DHCPv6 responses received via IPv6 (default is true)
[DHCPv6]
UseDNS=false

[IPv6AcceptRA]
# Don't use DNS servers from IPv6 Router Advertisement (RA) messages (default is true)
UseDNS=false
# Don't start a DHCPv6 client when an RA message is received.
DHCPv6Client=false |
So I was testing a router and it added some random IPv6 addresses to all the machines on my network, including my DNS server. Somehow those IPs were broadcasted around as valid DNS servers (not sure how as only the real router sends IPv6 RA packets) but long story short, now all my machines are sending DNS queries to an IP address that doesn't exist.
If I restart resolved with systemctl restart systemd-resolved then resolvectl still shows these bogus IPs as valid name servers.
They are listed in /etc/resolv.conf so if I delete them there and restart systemd-resolved it just adds the bogus IPs back in again.
If I look in the logs with journalctl --unit=systemd-resolved then it tells me the bogus IPs are operating in "degraded feature mode" but doesn't tell me where it found those IPs to begin with.
Where is it picking up these wrong IP addresses from?? Is there some cache file I need to delete to make it go back to only using the IPs supplied from the IPv6 router advertisements only?
| How do you remove bad DNS server IPs from systemd-resolved? |
Sorry, I am late to the party, but maybe this will help others.
I had the same problem, and actually, the /run/systemd/resolve directory was missing.
Then I realized that systemd-resolved.service was not running. For some reason, it got disabled.
So I had to simply just bring it up again.
sudo systemctl enable --now systemd-resolved.service |
I'd like to start using systemd-resolved on Oracle Linux 7.6.
I've installed systemd-networkd and systemd-resolved. I've enabled these services and I've disabled network and NetworkManager.
From the possible working modes I'd like to use systemd-resolved as a local resolver and for the compatibility reasons I'd like to link /etc/resolv.conf to /run/systemd/resolve/stub-resolv.conf which is supposed to point to nameserver 127.0.0.53.
However the /run/systemd/resolve/stub-resolv.conf file is missing in my installation. Would you please be able to tell why? /run/systemd/resolve/resolv.conf is present though.
| Why is my stub-resolve.conf missing? |
If both dnscrypt-proxy and systemd-resolved are using 127.0.0.1:53, this should not be the case. You need to disable systemd-resolved as recommended by dnscrypt-proxy wiki, and also lock /etc/resolv.conf for possible changes made by your Network Manager. So, here are the steps:Disable systemd-resolved:sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolvedCheck if there is anything else using the same address:port pair as dnscrypt-proxy: sudo ss -lp 'sport = :domain'. If there are, disable them.
If dnscrypt-proxy is listening on 127.0.0.1:53 and resolv.conf has nameserver 127.0.0.1, add options edns0(required to enable DNS Security Extensions) below the nameserver entry in resolv.conf so that it looks like:nameserver 127.0.0.1
options edns0Lock /etc/resolv.conf file for changes: sudo chattr +i /etc/resolv.conf.
You might want to restart dnscrypt-proxy.Overall, the point is to make sure that:Only dnscrypt-proxy is using 127.0.0.1:53
resolv.conf has the same address used by dnscrypt-proxy
resolv.conf is protected from changes made by other software such as your Network Manager.Also, just because the dnsleak test shows Google IPs does not mean that the dns resolver service is operated by Google. It could be that the servers are owned by Google but operated by another entity. If you dont' want it, you can choose a different resolver from dnscrypt-proxy public resolvers list. Make sure that dnssec support is present for the selected resolver. I personally use dnscrypt.eu resolvers, which are no-log, no-filter, non-google and dnssec-enabled.
References:https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Installation-linux
https://github.com/DNSCrypt/dnscrypt-proxy/wiki/systemd |
I have been doing an effort to go full on DNSSEC on my system with the following setup:dnscrypt-proxy installed, up and running on 127.0.0.1 with require_dnssec = true
systemd-resolved running, with DNSSEC=yes and DNS=127.0.0.1
only nameserver 127.0.0.1 in /etc/resolv.conf
connected through NetworkManager to a WiFi network about which I know DHCP configuration sets 8.8.8.8 and 8.8.8.4 as DNS servers/run/systemd/resolve/resolv.conf lists 8.8.8.8 and 8.8.8.4 below 127.0.0.1.
resolvectl status shows
DNSSEC setting: yes
DNSSEC supported: yes
Current DNS Server: 127.0.0.1
DNS Servers: 127.0.0.1in the Global section, but
DNSSEC setting: yes
DNSSEC supported: yes
Current DNS Server: 8.8.8.8
DNS Servers: 8.8.8.8
8.8.8.4in my interface's section (why?).
tcpdump shows no activity at all on udp:53 when using a web browser, dig, or other normal usage. This I take to mean that my local dnscrypt-proxy is dealing with all DNS requests on my system. I also assume that because of the configuration settings mentioned above, I am going DNSSEC all the way.
However, from time to time the journal contains lines like:
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN DS: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d2e801s7grwbqs.cloudfront.net IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d2e801s7grwbqs.cloudfront.net IN A: failed-auxiliaryresolvectl query v.dropbox.com results in the same DNSSEC validation error
dig v.dropbox.com works just fine
dig v.dropbox.com @8.8.8.8 also works just fine (of course resulting in two lines of output for tcpdump)I also checked https://dnsleaktest.com, which tells me that a lot of 172.253.x.x servers are receiving a request to resolve domain names I enter into my webbrowser. These IPs seem to be owned by Google.
So, what does this mean? Is there any (non DNSSEC) querying going on on this system?
Any insights are appreciated!
| Going all-in on DNSSEC |
The reason was a statically configured list of DNS server addresses in /etc/NetworkManager/system-connections/name.nmconnection.
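A sketch of how the same thing can be inspected and cleared with nmcli (the connection name "MyHotspot" is only an example):
nmcli connection show "MyHotspot" | grep ipv4.dns          # show any statically configured DNS servers
nmcli connection modify "MyHotspot" ipv4.dns "" ipv4.ignore-auto-dns no
nmcli connection up "MyHotspot"                             # re-activate so the DHCP-provided DNS is used again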
|
When I connect to my phone hotspot I would expect systemd-resolved to use the DHCP-provided DNS list. For some reason it seems to not be the case for me.I am using Ubuntu 22.04.1 LTS/etc/systemd/resolved.conf is emptyWhen I connect to my phone WiFi I get the results below:IP-Address:
ip a show wlp3s03: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.138.69/24 brd 192.168.138.255 scope global dynamic noprefixroute wlp3s0
valid_lft 3411sec preferred_lft 3411secCorresponding system log entry:
journalctlpaź 03 09:25:46 pc systemd-resolved[5434]: wlp3s0: Bus client set DNS server list to: 192.168.185.139, 192.168.22.175, 192.168.78.16Corresponding TCP traffic
tcpdump -i wlp3s0 -e -nn -vvtcpdump: listening on wlp3s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:25:46.427512 xx:xx:xx:xx:xx:xx > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 339: (tos 0xc0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 325)
0.0.0.0.68 > 255.255.255.255.67: [udp sum ok] BOOTP/DHCP, Request from xx:xx:xx:xx:xx:xx, length 297, xid 0xd83b70be, secs 1, Flags [none] (0x0000)
Client-Ethernet-Address xx:xx:xx:xx:xx:xx
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message (53), length 1: Request
Client-ID (61), length 7: ether xx:xx:xx:xx:xx:xx
Parameter-Request (55), length 17:
Subnet-Mask (1), Time-Zone (2), Domain-Name-Server (6), Hostname (12)
Domain-Name (15), MTU (26), BR (28), Classless-Static-Route (121)
Default-Gateway (3), Static-Route (33), YD (40), YS (41)
NTP (42), Unknown (119), Classless-Static-Route-Microsoft (249), Unknown (252)
RP (17)
MSZ (57), length 2: 576
Requested-IP (50), length 4: 192.168.138.69
Hostname (12), length 13: "pc"
09:25:46.445267 yy:yy:yy:yy:yy:yy > xx:xx:xx:xx:xx:xx, ethertype IPv4 (0x0800), length 366: (tos 0x0, ttl 64, id 44909, offset 0, flags [DF], proto UDP (17), length 352)
192.168.138.79.67 > 192.168.138.69.68: [udp sum ok] BOOTP/DHCP, Reply, length 324, xid 0xd83b70be, Flags [none] (0x0000)
Your-IP 192.168.138.69
Server-IP 192.168.138.79
Client-Ethernet-Address xx:xx:xx:xx:xx:xx
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message (53), length 1: ACK
Server-ID (54), length 4: 192.168.138.79
Lease-Time (51), length 4: 3599
RN (58), length 4: 1799
RB (59), length 4: 3149
Subnet-Mask (1), length 4: 255.255.255.0
BR (28), length 4: 192.168.138.255
Default-Gateway (3), length 4: 192.168.138.79
Domain-Name-Server (6), length 4: 192.168.138.79
Hostname (12), length 13: "pc"
Vendor-Option (43), length 15: 65.78.68.82.79.73.68.95.77.69.84.69.82.69.68The expected DNS is used if I execute dhclient -r wlp3s0 && dhclient wlp3s0:System log entry:
journalctlpaź 03 09:45:02 pc systemd-resolved[5434]: wlp3s0: Bus client set DNS server list to: 192.168.138.79Issuing systemctl restart systemd-resolved brings back the unexpected ips.Why does systemd-resolved use the IP addresses: 192.168.185.139, 192.168.22.175, 192.168.78.16 as DNS servers instead of the DHCP-provided 192.168.138.79? How does it come up with the addresses?
| Systemd-resolved setting unexpected DNS list |
I am not sure which version of NetworkManager implemented the change, but the problem has since been fixed with the current versions of NM.
|
I'm trying to use the DNS over at /etc/systemd/resolved.conf in their capacity as DNS over TLS providers, but NetworkManager somehow manages to set ~. as search domain on the connections it sets up. That causes all the DNS queries to be funneled over the specific interface's DHCP-resolved DNS instead of the global DNS I configured:
Link 3 (wlo1)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6 mDNS/IPv4 mDNS/IPv6
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: yes
DNSOverTLS setting: opportunistic
DNSSEC setting: allow-downgrade
DNSSEC supported: yes
Current DNS Server: 192.168.10.1
DNS Servers: 192.168.10.1
DNS Domain: ~.
lanI find the lan domain useful for local devices, so I can't simply disable the per-link DNS settings outright, but I can't find anyway to make NetworkManager not set ~. there. It's not part of the resolv.conf modifications either:
[user@machine ~]$ cat /run/NetworkManager/resolv.conf
# Generated by NetworkManager
search lan
nameserver 192.168.10.1
I need to make NM either not set any search domain except lan, or make the per-link configuration a fallback for when the global configuration doesn't work. The latter is probably a fairly unrealistic prospect, though.
| Prevent NetworkManager from setting ~. search domain per-link on systemd-resolved |
It looks like you may have a dnsmasq process on 127.0.0.1 and a systemd-resolved process on 127.0.0.53 passing queries back and forth between each other, causing a loop. Even dnsmasq alone might be capable of looping, as by default it looks into /etc/resolv.conf to find the real DNS servers to use for the names it does not have information about.
Your DNS configuration probably has quite a few layers:
first, there is the DNS server information you get from your ISP by DHCP or similar.
then, there is NetworkManager, which could be configured to override that information and use dnsmasq instead, but isn't currently configured that way.
instead, NetworkManager is configured to use the resolvconf tool to update the real /etc/resolv.conf. And dnsmasq may include a drop-in configuration for resolvconf to override any DNS servers received by DHCP and use 127.0.0.1 instead while dnsmasq is running.
systemd-resolved may also include a drop-in configuration for resolvconf, but is apparently getting overridden by dnsmasq.
What I don't yet understand is where the 127.0.1.1 and 127.0.0.53 come from. Are they perhaps mentioned in the dnsmasq default configuration in Ubuntu?
As it says in the comment of /etc/resolv.conf, run this command to see more information on systemd-resolved configuration:
systemd-resolve --status
Also check the contents of the /run/resolvconf/interface/ directory: that is where the resolvconf tool collects all the DNS server information it gets from various sources. The /etc/resolvconf/interface-order file determines the order in which each source is checked, until either a loopback address is encountered or 3 DNS servers have been listed for the real /etc/resolv.conf.
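For example, the following commands (a hedged sketch, using the default Debian/Ubuntu resolvconf paths mentioned above) show what resolvconf is currently working from:
ls -l /run/resolvconf/interface/
cat /run/resolvconf/interface/*
cat /etc/resolvconf/interface-order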
Since you are using dnsmasq to set up a wildcard domain, you'll want to keep 127.0.0.1 in your /etc/resolv.conf - but you'll want to configure dnsmasq to not use that file, but instead get the DNS servers it should use from somewhere else.
If /run/NetworkManager/resolv.conf contains those DNS servers you get from your ISP by DHCP, you can easily use that for dnsmasq by adding this line to its configuration:
resolv-file=/run/NetworkManager/resolv.conf
This tells dnsmasq where to get DNS information for the things it doesn't already know about. So if you want to use Google DNS, you could configure dnsmasq with
resolv-file=/etc/google-dns-resolv.conf
and put the DNS configuration lines for Google DNS, in the usual format, into /etc/google-dns-resolv.conf.
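That file just uses the ordinary resolv.conf syntax; a minimal sketch (the file name is only the example chosen above):
# /etc/google-dns-resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4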
|
Problem:
Running Ubuntu 17.10
I have been trying to resolv (hehe) this issue for about a week now and, despite countless Google searches and about 20 different attempts, I cannot stop dnsmasq from periodically causing my CPU to spike for about a minute with the following offenders:
systemd-resolved
systemd-journald
dnsmasq
Monitoring journalctl -f, I see this every time it happens:
maximum number of concurrent dns queries reached (150)
Accompanied/preceded by a crazy loop of requests to some domain (usually the Ubuntu connectivity check) like the following:
query[A] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.1.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
query[A] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
query[A] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1
forwarded connectivity-check.ubuntu.com to 127.0.0.53
I've found that changing my /etc/resolv.conf to use nameserver 127.0.0.53 causes the spike to dissipate almost instantaneously.
However, as that file is updated regularly by Network Manager, I have to do this about once an hour.
Configuration:
/etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.1
search fios-router.home
/etc/NetworkManager/NetworkManager.conf
[main]
plugins=ifupdown,keyfile
[ifupdown]
managed=false
[device]
wifi.scan-rand-mac-address=no
/etc/dnsmasq.conf
// All default except this at the very end for my wildcard DNS
address=/asmar.d/127.0.0.1
/run/dnsmasq/resolv.conf
nameserver 127.0.0.53
/run/resolvconf/interfaces:
lo.dnsmasq:
nameserver 127.0.0.1
systemd-resolved:
nameserver 127.0.0.53
/etc/resolvconf/interface-order:
# interface-order(5)
lo.inet6
lo.inet
lo.@(dnsmasq|pdnsd)
lo.!(pdns|pdns-recursor)
lo
tun*
tap*
hso*
em+([0-9])?(_+([0-9]))*
p+([0-9])p+([0-9])?(_+([0-9]))*
@(br|eth)*([^.]).inet6
@(br|eth)*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)
@(br|eth)*([^.]).inet
@(br|eth)*([^.]).@(dhclient|dhcpcd|pump|udhcpc)
@(br|eth)*
@(ath|wifi|wlan)*([^.]).inet6
@(ath|wifi|wlan)*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)
@(ath|wifi|wlan)*([^.]).inet
@(ath|wifi|wlan)*([^.]).@(dhclient|dhcpcd|pump|udhcpc)
@(ath|wifi|wlan)*
ppp*
*
systemd-resolve --status:
Global
DNS Servers: 127.0.0.1
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
168.192.in-addr.arpa
17.172.in-addr.arpa
18.172.in-addr.arpa
19.172.in-addr.arpa
20.172.in-addr.arpa
21.172.in-addr.arpa
22.172.in-addr.arpa
23.172.in-addr.arpa
24.172.in-addr.arpa
25.172.in-addr.arpa
26.172.in-addr.arpa
27.172.in-addr.arpa
28.172.in-addr.arpa
29.172.in-addr.arpa
30.172.in-addr.arpa
31.172.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
Link 5 (br-b1f5461ac410)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: noLink 4 (docker0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: noLink 3 (wlp62s0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: noLink 2 (enp61s0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 8.8.8.8
8.8.4.4
::1
Questions:
How can I resolve this issue while still using my wildcard domain name?
Optional: How can I achieve this while using Google DNS?
Please do not recommend upping the concurrent dns queries. That is not a solution.
SOLVED!
See telcoM's DNS crash course (the accepted answer) that led me to the solution
See my follow-up & final solution as I experimented with the knowledge gained from that answer
| dnsmasq & systemd Causing Intermittent CPU Spikes |
The capability of using mDNS should be enabled in the /etc/systemd/resolved.conf file, in the [Resolve] section, by setting MulticastDNS=yes. Moreover, it should be enabled in the [Network] section of each interface configuration file (the one for systemd-networkd) by setting MulticastDNS=yes.
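As a rough sketch (the .network file name and the interface name eth0 are only examples, not taken from the question), the two settings might look like this:
/etc/systemd/resolved.conf:
[Resolve]
MulticastDNS=yes
/etc/systemd/network/20-wired.network:
[Match]
Name=eth0
[Network]
MulticastDNS=yes
After editing these files, restarting the services should apply the change, e.g. systemctl restart systemd-networkd systemd-resolved.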
The status of the MulticastDNS setting can be verified with:
~# systemd-resolve --status
Global
Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: uplink
Fallback DNS Servers: 1.1.1.1#cloudflare-dns.com 8.8.8.8#dns.google 1.0.0.1#cloudflare-dns.com 8.8.4.4#dns.google 2606:4700:4700::1111#cloudflare-dns.com 2001:4860:4860::8888#dns.google 2606:4700:4700::1001#cloudflare-dns.com
2001:4860:4860::8844#dns.google
Link 2 (eth0)
Current Scopes: LLMNR/IPv4 LLMNR/IPv6 mDNS/IPv4 mDNS/IPv6
Protocols: -DefaultRoute +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 3 (enp1s0)
Current Scopes: none
Protocols: -DefaultRoute +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
For each interface, +mDNS means that MulticastDNS is enabled on that interface. Global refers to the systemd-resolved global configuration.
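On newer systemd versions the per-link setting can also be toggled at runtime with resolvectl (a hedged example; eth0 is a placeholder and the change does not persist across reconfiguration):
resolvectl mdns eth0 yes   # enable mDNS on this link
resolvectl mdns eth0       # show the current per-link setting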
Services to be announced can be configured by creating *.dnssd files (one per service, in the directories listed below) with the following format:
[Service]
Name=%H
Type=_http._tcp
Port=80
TxtText=path=/stats/index.html t=temperature_sensor
See https://www.freedesktop.org/software/systemd/man/systemd.dnssd.html for more information.
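Once such a file is saved in one of the locations listed below, restarting the resolver should be enough for it to be picked up; a hedged sketch, reusing the eth0 example from the question:
systemctl restart systemd-resolved
systemd-resolve --status eth0   # the link should still report mDNS as enabled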
Configuration files can be saved in:
/etc/systemd/dnssd
/run/systemd/dnssd
/usr/lib/systemd/dnssd |
I'm creating a custom distribution where I need a DNS responder. I'm already using systemd, so I would like to use systemd-resolved to manage mDNS (the device should announce itself as capable of a couple of services); I'm not sure whether or not this is possible, but the systemd-resolved documentation pages report:
systemd-resolved is a system service that provides network name resolution to local applications. It implements [...] MulticastDNS resolver and responder.
I already set MulticastDNS=yes in the configuration file, as well as under the [Network] section of the interfaces where I want mDNS to be enabled (I can verify that with systemd-resolve --status eth0).
However, I'm not able to understand how to configure the available services to be announced, as was done with avahi by adding them in /etc/avahi/services.
Are there any other configuration files for systemd-resolved? Is this not possible at all?
| systemd-resolved as mDNS responder |
As I mentioned in a comment to the question, I ran systemd-resolved in strace, while watch[ing] netstat -tunlp. I noticed that the port is only opened once I make the first request to resolve a DNS name.
I captured the traffic using tcpdump -i eth0 -nn -w capture_file, noted down the port I see in netstat and looked at the output using Wireshark. The filter in Wireshark is simple: udp.port eq 37078 (using the previously noted down udp port).
I can confirm that the UDP port that is being opened by systemd-resolved is the port that is used to communicate with the DNS server.
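For reference, the checks described above amount to something like this (a hedged sketch; the interface name and the UDP port are the example values used above):
sudo strace -f -e trace=network -p "$(pidof systemd-resolved)"
sudo watch -n1 'netstat -tunlp | grep systemd-resolve'
sudo tcpdump -i eth0 -nn -w capture_file
# then open capture_file in Wireshark and filter on: udp.port eq 37078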
|
Why does systemd-resolved from systemd version 219 listen on one random UDP port?
One of my machines listens on port 58557 (CentOS 7 with systemd version 219).
sudo netstat -tunlp|grep -P '^Active|^Proto|systemd'
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:58557 0.0.0.0:* 372/systemd-resolve
Another machine listens on port 52010 (also CentOS 7 with systemd version 219).
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 768 0 0.0.0.0:52010 0.0.0.0:* 398/systemd-resolve
Once I reboot the machines, systemd-resolved listens on another UDP port.
I have a third machine, which runs Fedora 27 with systemd version 234. Here, systemd does not open a random UDP port.
As a side note, I have disabled LLMNR, both in /etc/systemd/resolved.conf and in /etc/systemd/network/20-eth0.network, so this can't be it. Also, LLMNR would open port 5355.
$ grep LLMNR /etc/systemd/resolved.conf
LLMNR=no
$ grep LLMNR /etc/systemd/network/20-eth0.network
LLMNR=no | systemd version 219 from CentOS 7 listening on random UDP port |
You haven't indicated if this is a Desktop or Server installation. Either way, I'll try to explain.
First, let's look at the contents of /etc/resolv.conf:
nameserver 127.0.0.53
options edns0 trust-ad
search lan
This is showing 127.0.0.53 as the nameserver, which is expected and is the local caching stub resolver. In fact, if you type the command ls -l /etc/resolv.conf, you'll notice that it's a symlink to /run/systemd/resolve/stub-resolv.conf (or at least it should be), which is where the local nameserver is defined.
$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 39 Dec 31 2021 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
The stub resolver caches DNS queries so that future queries are faster and don't have to query any uplink nameservers. Let's see this in action.
Open up two terminal windows. In one terminal, which we'll call Terminal #1, run a tcpdump with the following command. Substitute wlp9s0 with your interface that DNS queries are expected to go out. If it's the only interface connected to the internet, then use that.
sudo tcpdump -ni wlp9s0 -p port 53
Then, in the other window, Terminal #2, run the following command:
dig google.com
The output in Terminal #2 will be the following and indicates 127.0.0.53 as the nameserver:
$ dig google.com
; <<>> DiG 9.16.1-Ubuntu <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54920
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 294 IN A 142.251.46.206
;; Query time: 8 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Wed Jul 12 19:47:07 PDT 2023
;; MSG SIZE rcvd: 55
In Terminal #1 with tcpdump running, there will either be no output or some activity. If you've visited this site before, then the query should be in your local cache and won't show any output from tcpdump. If not, then you should see output with a query to your uplink server. I'll assume you've previously queried google.com as I proceed, which means there's no output just yet.
Next, let's clear the local DNS cache. In Terminal #2, enter the following:
sudo resolvectl flush-caches
And then run dig google.com again in Terminal #2. This time around, you should see output via tcpdump in Terminal #1 querying an uplink server you've defined or received via DHCP. In the following output, I see the server 208.65.212.34 being queried, which is one of my DNS servers given to me via DHCP on my wifi link.
$ sudo tcpdump -ni wlp9s0 -p port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlp9s0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:50:55.534164 IP 172.23.123.82.39024 > 208.65.212.34.53: 36626+ [1au] A? google.com. (39)
19:50:55.550242 IP 208.65.212.34.53 > 172.23.123.82.39024: 36626 1/0/1 A 142.251.46.206 (55)
In both queries we've done, the output of dig google.com showed 127.0.0.53 as the nameserver being queried, and this will always be the case. But behind the scenes, the uplink server is queried if it hasn't been queried before and stored in the cache.
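If you want to see the cache itself at work, resolvectl can also report hit and miss counters (a hedged aside; the exact fields vary between systemd versions):
resolvectl statistics
sudo resolvectl reset-statistics   # optional: zero the counters before testing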
So let's now set up DNS servers on a specific interface in both a Desktop environment and a server.
Defining DNS servers per interface - DESKTOP
Within the desktop environment, it's very easy to assign DNS servers on each interface. With a wifi interface, it's most likely set up with DHCP. So open the wifi settings, click the IPv4 page, and simply add a DNS server, similar to the following image, where I define 8.8.8.8 as an additional DNS server.
If you don't want to use any DNS servers delivered to you from the DHCP server, simply uncheck Automatic.
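The same change can usually be made from the command line with nmcli; a hedged sketch, where "MyWifi" is a placeholder for your actual connection name:
nmcli connection show                                        # find the connection name
nmcli connection modify "MyWifi" +ipv4.dns 8.8.8.8           # add an extra DNS server
nmcli connection modify "MyWifi" ipv4.ignore-auto-dns yes    # equivalent of unchecking Automatic
nmcli connection up "MyWifi"                                 # re-activate so the change takes effect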
In the terminal, run resolvectl status and you'll see that this additional DNS server has been added to that interface. If it's not there yet, you may have to toggle your wifi adapter on/off or reboot.
Link 3 (wlp9s0)
Current Scopes: DNS
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: 208.65.212.34
DNS Servers: 208.65.212.34
208.65.212.2
8.8.8.8
DNS Domain: ~.
Just repeat as necessary for other interfaces.
Defining DNS servers per interface - SERVER
In the server environment, you can define nameservers per interface within the Netplan configuration file. The following example is from a Virtual Machine I created with two interfaces. The first interface, eth0, gets its IP address via DHCP, but I'm overriding the DNS servers given to me, effectively ignoring them, and then defining my own. The second interface, eth1, is set with a static IP address and manually defined DNS servers.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        use-dns: false
      nameservers:
        addresses:
          - 1.1.1.1
          - 9.9.9.9
    eth1:
      dhcp4: false
      addresses: [192.168.5.11/24]
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
After any changes, run sudo netplan try to apply the changes. Then run resolvectl status, and you'll see that each interface has its own DNS servers. If not, you may need to reboot the system before the changes actually take effect, which is what I've experienced when making changes.
$ resolvectl status
Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (eth0)
Current Scopes: DNS
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 1.1.1.1
DNS Servers: 1.1.1.1 9.9.9.9
Link 3 (eth1)
Current Scopes: DNS
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 8.8.8.8
DNS Servers: 8.8.8.8 8.8.4.4
This kind of configuration is reaching the limit of what Netplan can do, however. To have complete control over the interface DNS settings, specifically regarding the DNS Default Route, the interfaces need to be configured with networkd directly and not Netplan. See the man page for systemd.network(5).
In the end, what we're talking about here is basically split DNS, where you can configure specific domains to query specific servers. This helps with regards to VPNs, DNS Default Routes, search domains, etc. To read up more on the subject, check out the following links:
systemd-resolved: introduction to split DNS
Understanding systemd-resolved, Split DNS, and VPN Configuration
systemd-resolved.service and VPNs |
Is there a way I can permanently set the DNS preference of my Ubuntu laptop for specific network interfaces?
I am relying on a wifi network that is not that predictable.
I don't have access to the edit-mode of the wifi router management console, and every now and then the wifi connection drops off and then comes back again.
This means I have to constantly invoke this command to "refresh" my DNS entries: sudo resolvectl dns 3 1.1.1.1 8.8.8.8.
The problem originally came from web browser DNS_* errors I was seeing (many different ones, e.g. DNS_PROBE_STARTED, DNS_PROBE_FINISHED_NO_INTERNET, DNS_PROBE_FINISHED_NXDOMAIN, DNS_PROBE_FINISHED_BAD_CONFIG, etc.), so I figured out this wifi network (router)
is using dedicated DNS entries from the ISP (Internet Service Provider).
I can see these details (custom IP addresses) on the router config dashboard but I cannot change those settings.
Also: I can see that my Ubuntu DNS relies on the default gateway of the wifi network (the router).
I found this out with these commands where the IP addresses match:
# check my local Ubuntu DNS details for all interfaces
resolvectl dns
# find out the default gateway (it should be the router IP address)
ip route
I think this means that my laptop is dynamically relying on the gateway (router) for DNS resolution and that the router is configured to use some obscure IP addresses from the ISP (Internet Service Provider). Is this correct?
I changed the Global DNS by modifying this file /etc/systemd/resolved.conf
by appending this:
DNS=1.1.1.1 8.8.8.8
FallbackDNS=8.8.4.4
Then I did this:
# make sure to restart the DNS daemon
sudo systemctl restart systemd-resolved.service
# check what DNS is being used by each interface
resolvectl status
But of course this is being overridden by my wifi interface, on which I don't know how to act, i.e. what to configure to make it work with my preferred DNS entries.
For the sake of completeness I also did this to make sure the web browser was relying on a fresh DNS cache without throwing DNS_* errors (I am not sure this is correct/needed, is it?):
# check current DNS cache
resolvectl statistics
# flush DNS cache
resolvectl flush-caches
If I check the bottom of the file /etc/resolv.conf then I see this:
nameserver 127.0.0.53
options edns0 trust-ad
search lan
I think this DNS trouble might be related to that entry nameserver 127.0.0.53
but I also know that this file /etc/resolv.conf is generated automatically (and perhaps refreshed automatically) by systemd-resolved.service where 127.0.0.53 means that the laptop relies on this local IP address for DNS which is managed by systemd-resolved
so I think I shouldn't be manually changing it.
I have this feeling that different programs/commands use different places/layers to figure out the DNS configuration.
Like if somehow I change the resolvectl / systemd-resolved.service settings, then maybe the browser may be reading the DNS config from somewhere else like that /etc/resolv.conf file or things like /etc/nsswitch.conf? Is this the case?
I would like to:
make this command sudo resolvectl dns 3 1.1.1.1 8.8.8.8 PERMANENT (across reboots and across wifi disconnect/connect cycles)
change all the other config files (e.g. /etc/resolv.conf or /etc/nsswitch.conf and similar) to rely on my preferred DNS configuration details.
How do I do this?
| How do I permanently configure the DNS resolution in Ubuntu for ALL programs/layers for specific interfaces |