The M-F key combination must be used after first pressing ^R. Unless you first press ^R, M-F is bound to "Invoke a program to format/arrange/manipulate the buffer". After pressing ^R, you'll see M-F at the bottom of the screen with the description "New Buffer". So, to open a file in a new buffer in the nano editor, you do ^R M-F some filename Enter.
Tip: You may add set multibuffer to your ~/.nanorc file. Doing so changes the default behaviour of ^R so that it always inserts the read file into a new buffer. Pressing M-F then reverts to the old behaviour of inserting the file into the current buffer.
I love the nano feature that enables you to open multiple files at once, and the fact that switching between them is very easy: nano file1 file2 etc. However, when I try to open a new file while working on one with ^R, it just inserts the whole file into the one I was working on after I enter it. What I want to know is how to open a separate tab with a different file. M-F just gives me an error that says "[No formatter is defined for this type of file]".
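For reference, a minimal ~/.nanorc illustrating the tip above (the option name is as documented in nanorc(5); everything else about your config stays unchanged):

    ## ~/.nanorc
    ## Make ^R read files into a new buffer by default;
    ## M-F then toggles back to inserting into the current buffer.
    set multibuffer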
Nano: how to just open a new file buffer inside of nano without altering the current one?
You can use the stat command to check the file's modification time before and after nano. Something like:

    oldtime=$(stat -c %Y "$filename")
    nano "$filename"
    if [[ $(stat -c %Y "$filename") -gt $oldtime ]]; then
        echo "$filename has been modified"
    fi

Of course, this won't detect whether nano modified the file or some other program did, but that could be considered a feature. (You can use some other program to edit the file, and then exit nano without saving.)
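For the setup-script scenario described below, a minimal sketch along those lines might look like this (the configuration file path is hypothetical; assumes GNU stat):

    #!/usr/bin/env bash
    # Ask the user to edit the config, then only continue if they saved changes.
    conffile=/etc/myapp/app.conf
    oldtime=$(stat -c %Y "$conffile")
    nano "$conffile"
    if [[ $(stat -c %Y "$conffile") -le $oldtime ]]; then
        echo "Configuration was not saved; aborting setup." >&2
        exit 1
    fi
    echo "Configuration saved; starting the application..."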
I am currently writing a small setup script for a Linux application that needs the user to edit a configuration file before the application is started. I've chosen to make the script simply open the configuration file in Nano, and resume the script afterwards. I do, however, need to detect whether the user saved the changes (to then continue starting the application), or whether he discarded them (which would indicate the user doesn't want to continue). I have already checked whether this is possible with the returned exit code from Nano, and it apparently isn't - it always returns 0 even if the changes were discarded. Is there another way to figure out whether the file was changed and saved, or will I have to do this in an entirely different way?
How do I detect whether changes in nano were discarded or saved?
man nano says nothing about file paths, and the nano configuration file /etc/nanorc offers no such option either. But if you want nano to display the file's full path, open the file using its full path: nano /home/user/test/test.sh will then show the full path in the title bar.
The nano editor by default displays only the filename being edited, but not its full path. This can be a problem if I need to edit old vs new versions of files by the same name. Does anyone know of a way to toggle or change this behaviour to display a full path? Or is there no way short of figuring out where this happens in the source code and recompiling?
Display full path in title bar of nano editor
In Vim: gg0<Ctrl-v>GI;<Esc> (go to the first line and column, start blockwise visual selection, extend it to the last line, insert ; at the start of each selected line, then press Esc).
In a text file I need to comment out all lines by adding a ; as the first character of each line. What is a good way to do this? I thought of Vim's visual block mode, but I couldn't find a "select all" option, and marking several hundred lines manually also isn't great. Any idea? I have nano, vi and vim at hand; I would prefer one of those for this task.
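If block mode feels unfamiliar, the same result can be had non-interactively with sed (shown as an alternative, not part of the original answer; -i edits in place with GNU sed, the filename is hypothetical):

    sed -i 's/^/;/' file.txt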
How to comment all lines in a text file?
To find CLI packages in Debian, you can look for packages tagged as interface::command-line, either using the tag search engine, or on your system by installing debtags and running debtags search interface::command-line. Both approaches have options to refine the search. See the Debtags wiki page for more details. This does have limits: packages aren't all tagged appropriately. You can also look for packages which depend on libncurses6: apt-rdepends -r libncurses6
I recently installed Debian for CLI purposes. I am looking to install CLI packages and want to know how to search for them (CLI packages such as nano)?
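A hedged end-to-end example on a Debian system (both tools need to be installed first; package names as used in the answer):

    sudo apt-get install debtags apt-rdepends
    debtags search interface::command-line
    apt-rdepends -r libncurses6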
How to search for Debian CLI packages?
If you see messages like [ backup files enabled ] when you try those shortcuts, it means your terminal is producing the Alt+B and Alt+F escape sequences for those keys. So you can just rebind those like so:

    unbind M-B all
    bind M-B prevword main
    unbind M-F all
    bind M-F nextword main

See this bug report thread for more info.
I'd like to rebind the alt-left and alt-right keybindings in nano, but can't seem to get it to work. I'm on Ubuntu 16.04. My ~/.nanorc file:

    bind M-right nextword main
    bind M-left prevword main

Version info: GNU nano, version 2.5.3 (C) 1999..2016 Free Software Foundation, Inc. Email: [emailprotected] Web: http://www.nano-editor.org/ Compiled options: --disable-libmagic --disable-wrapping-as-root --enable-utf8
Re-bind alt left/right in nano
You can toggle line numbering from within the editor with Alt+#. See the help (open nano, then type Ctrl+G):

    M-#    Line numbering enable/disable

Note that you can also enable line numbering from the command line, using the -l argument, for example:

    $ nano -l somefile.txt
I am looking for a keyboard shortcut to show the line numbers in the nano editor when navigating up and down within the file. I have seen Ctrl+C mentioned in multiple posts, but this doesn't automatically refresh the line number while navigating. I remember that in the past I have used such a shortcut in a Linux terminal, and if I recall it was for nano. I also think that it worked out of the box; I didn't have to change any parameters like this article shows: https://askubuntu.com/questions/73444/how-to-show-line-numbering-in-nano-when-opening-a-file Is there any shortcut to do this? I have Ubuntu 16.04 and GNU nano 2.5.3.
How to constantly show line number when navigating in nano?
It looks like you need to install libncursesw5-dev and/or libslang2-dev; that’s what’s missing according to the config log.
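On a Debian-based system such as Mint, that would be something along these lines (package names taken from the answer; re-running configure afterwards should pick up the wide-character support):

    sudo apt-get install libncursesw5-dev libslang2-dev
    ./configure --enable-utf8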
I want to compile Nano from source for my friend. I have successfully compiled it on all of my computers, and he is running Linux Mint 18.1 too. I don't know why it fails, or rather, I don't know what is missing in his system for UTF-8 support, as per this configuration message:

    *** Insufficient UTF-8 support was detected in your curses and/or C
    *** libraries. If you want UTF-8 support, please verify that your slang
    *** was built with UTF-8 support or your curses was built with wide
    *** character support, and that your C library was built with wide
    *** character support.

I tried to install various development packages and it solved several other issues, but this one I am unable to solve, since I didn't manage to google much about it. I am quite exhausted, so I temporarily installed the compiled Nano editor with disabled UTF-8 support on his computer. Any clues appreciated.
Compiling Nano with UTF-8 support on one computer failed
To set the number of characters in a justified line, just use the set fill option in ~/.nanorc:

    set fill 80

If you need a different size for only one file, you can use the -r flag:

    nano -r 60 myfile.txt

In this case, however, the lines will be justified while you are typing (which may not always be convenient). Both options accept zero or negative values. In that case, the size of the lines will be the number of columns in the terminal plus the non-positive number. I.e. if your terminal has 80 columns and .nanorc contains set fill -8, then the lines will be justified to 72 columns.
When justifying paragraphs, nano by default justifies to the full width of the screen. How can I force it to justify to a different number of columns (e.g. 80 characters)?
How to set the number of columns for a justified paragraph in nano
Based on details in the comments, it seems that you are running from a Windows machine using PuTTY. Due to limitations of the PuTTY connection, you would need X forwarding to use a native terminal with multiple tabs (in your PuTTY terminal, each tab is a separate connection, controlled by Windows, rather than a single point of access to the Linux machine). As a result, your best option is to use an editor that has built-in tab support (which, to my knowledge, nano unfortunately does not). If you are up for using vim as an alternative, it does support tabs. You can open all your items in vim in separate tabs with:

    vim -p *.cpp

then control them further with the following:

    :tabe <file>  # opens <file> in a new tab
    :tabp         # switches to the previous tab
    :tabn         # switches to the next tab

Alternatively you can use window splits:

    :split        # opens a second editor in a horizontal split
    :vsplit       # opens a second editor in a vertical split

Ctrl-w Ctrl-w jumps between splits. Or multiple buffers:

    :e <file>     # opens <file> in a new buffer
    :bn           # switches to the next buffer
    :bp           # switches to the previous buffer

The real usage would be a combination of all three: use multiple tabs to hold different configurations of window splits for different purposes, and switch the buffers shown in each split as you need them. As a final alternative, if you download an SSH client that supports X forwarding (I use MobaXterm), you could run something like gnome-terminal (assuming you have, or can install, a window manager) as a GUI program from the Linux system, and possibly get your tab solution that way as well.
In my folder I have a bunch of .cpp files. I use nano for editing my files. Is there a way for me to be able to write a single command and open all the .cpp files in different tabs? If I do nano *.cpp, the next file opens after I close the current one, and that isn't the desired behavior. The desired behavior is that all files open simultaneously in tabs. I'm using MTPuTTY.
Open files in new tab for nano
In most terminals (a side-effect of the way ASCII is encoded), ^/ is the same as ^_ (Control-_). The help screen for nano on my Debian 7 machine shows that as Go to line and column number. M-/ is harder, since there is no standard for this. However, nano uses the assumption that the meta keys simply have an escape character as a prefix. Again, the help screen shows a binding for this, cited as Go to the last line of the file. Your terminal may not send the key that nano expects. In the nanorc manual page, the binding of control for non-alphabetic keys is not mentioned. A quick check shows that nano does recognize ^_ but not ^/. Also (setting xterm to Meta sends escape), the M-/ binding is recognized. Here's the example I tried:

    bind ^/ help all
    bind ^_ exit all
    bind M-/ help all

The binding for ^/ is ignored whether or not I comment out the ^_ binding. On my keyboard, the two send the same character. Very likely you have the same behavior.
I've been configuring nano with the hopes of giving it the same keybindings as emacs, so that I can use nano for quick edits and emacs when I'm working on actual projects. However, I've run into a small problem: nano does not seem to want to let me reassign the ^/ key combination (to undo). Does anyone know how/if this can be done?
Is it possible to bind ^/ and M-/ in nano
On Ubuntu (2018). In ~/.nanorc put:

    set positionlog

Just as a tip, I also have these:

    set tabsize 4
    set tabstospaces
    set autoindent
    set smooth
Can Nano save the current position of the cursor at exit and, when you reopen the file, restore the old cursor position, like vim does?
Nano: Remember cursor position at start
Nano cannot do that. The best way to get this done is by learning the basics of file manipulation on the command line: chop the file into pieces, sort the piece you want to sort, and put everything back together. If you want an editor that can do everything, even run shell scripts on your file from within the editor, you should have a look at Vim, for instance.
Is it possible to sort a selected area of text alphabetically using the nano editor (similar to F10 in xed)? I use Linux Mint 20, nano 4.8. Thank you
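A minimal sketch of the chop-sort-reassemble approach for, say, lines 10-20 of a file (the line numbers and filename are hypothetical; assumes standard sed and sort):

    # keep lines 1-9, sort lines 10-20, keep the rest, then swap the result in
    { sed -n '1,9p' file.txt; sed -n '10,20p' file.txt | sort; sed -n '21,$p' file.txt; } > file.sorted
    mv file.sorted file.txt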
Sorting lines in nano?
Nano doesn't store the compiled options as provided on the ./configure command line; it reconstructs them based on detected features and the requested target ("tiny" Nano or normal Nano). For tiny Nano, it reports enabled options, since they add to the default; for normal Nano, it reports disabled options, since they remove from the default (in most cases). In your case, you're building normal Nano, so for most options it only reports if they're disabled; the exceptions are debug, utf8 and slang. All your --enable options are defaults for normal Nano, so it doesn't report them in the compiled options; you'd get the same result with ./configure and no options. You end up with --disable-libmagic because you don't have the development files for libmagic (see Thomas Dickey's answer), and with --enable-utf8 because you do have the necessary features for UTF-8 support (and it's enabled by default).
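If the goal is actually to get libmagic support, a hedged fix on a Debian-type system is to install the header package and re-run configure (the package name is an assumption based on the answer; the default build already enables the other requested features):

    sudo apt-get install libmagic-dev
    ./configure --disable-wrapping-as-root
    make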
I am trying to compile my favorite nano command-line text editor with some of the options. Actually, most of the options, in order to enable all features. First, I go to the Downloads directory and download the tarball:

    cd Downloads
    wget --continue https://www.nano-editor.org/dist/v2.8/nano-2.8.0.tar.xz

Then I verify its integrity:

    wget --continue https://www.nano-editor.org/dist/v2.8/nano-2.8.0.tar.xz.asc
    gpg --verify nano-2.8.0.tar.xz.asc

It should say:

    gpg: Good signature from "Benno Schulenberg <[emailprotected]>"

I have tried to run the configuration script as follows:

    ./configure --enable-nanorc --enable-color --enable-extra --enable-multibuffer --enable-utf8 --enable-libmagic --enable-speller --disable-wrapping-as-root

After compilation, I end up with this, directly executed in the compiled directory:

    Compiled options: --disable-libmagic ...

I stress the --disable-libmagic, as I specifically configured it with --enable-libmagic. After no success, I delete the folder to start the process over:

    rm -rf nano-2.8.0/

I extract the archive again:

    tar -xJf nano-2.8.0.tar.xz

I have tried different combinations of options, but no luck. Is there anything missing in the system or am I just doing something wrong? Direct execution after the compilation:

    user@computer ~/Downloads/nano-2.8.0/src $ ./nano --version
    GNU nano, version 2.8.0
    (C) 1999..2016 Free Software Foundation, Inc.
    (C) 2014..2017 the contributors to nano
    Email: [emailprotected]  Web: https://nano-editor.org/
    Compiled options: --disable-libmagic --disable-wrapping-as-root --enable-utf8
Compiling Nano editor with options
I found the problem by comparing my saved session in PuTTY for the "problem" server to one for a "working" server. Under the terminal emulation options, I had "DEC Origin Mode initially on" checked. Unchecking this option solved the problem.
When I use PuTTY to connect to a specific Linux server via the SSH protocol, and I try to edit a file using the nano editor, the "enter" does not update the display. When I press enter to insert another line break, the following lines do not move down. However, if I save the file and re-open it, the new line breaks are there. I have further discovered that this only occurs on the first 3-4 lines of the file. This particular server runs CentOS 6. When I connect to a different server, I don't have the same problem. Where does the problem lie and how do I fix it? Running infocmp $TERM reports: # Reconstructed via infocmp from file: /usr/share/terminfo/l/linux linux|linux console, am, bce, ccc, eo, mir, msgr, xenl, xon, colors#8, it#8, ncv#18, pairs#64, acsc=+\020\,\021-\030.^Y0\333`\004a\261f\370g\361h\260i\316j\331k\277l\332m\300n\305o~p\304q\304r\304s_t\303u\264v\301w\302x\263y\363z\362{\343|\330}\234~\376, bel=^G, blink=\E[5m, bold=\E[1m, civis=\E[?25l\E[?1c, clear=\E[H\E[J, cnorm=\E[?25h\E[?0c, cr=^M, csr=\E[%i%p1%d;%p2%dr, cub1=^H, cud1=^J, cuf1=\E[C, cup=\E[%i%p1%d;%p2%dH, cuu1=\E[A, cvvis=\E[?25h\E[?8c, dch=\E[%p1%dP, dch1=\E[P, dim=\E[2m, dl=\E[%p1%dM, dl1=\E[M, ech=\E[%p1%dX, ed=\E[J, el=\E[K, el1=\E[1K, flash=\E[?5h\E[?5l$<200/>, home=\E[H, hpa=\E[%i%p1%dG, ht=^I, hts=\EH, ich=\E[%p1%d@, ich1=\E[@, il=\E[%p1%dL, il1=\E[L, ind=^J, initc=\E]P%p1%x%p2%{256}%*%{1000}%/%02x%p3%{256}%*%{1000}%/%02x%p4%{256}%*%{1000}%/%02x, kb2=\E[G, kbs=\177, kcbt=\E[Z, kcub1=\E[D, kcud1=\E[B, kcuf1=\E[C, kcuu1=\E[A, kdch1=\E[3~, kend=\E[4~, kf1=\E[[A, kf10=\E[21~, kf11=\E[23~, kf12=\E[24~, kf13=\E[25~, kf14=\E[26~, kf15=\E[28~, kf16=\E[29~, kf17=\E[31~, kf18=\E[32~, kf19=\E[33~, kf2=\E[[B, kf20=\E[34~, kf3=\E[[C, kf4=\E[[D, kf5=\E[[E, kf6=\E[17~, kf7=\E[18~, kf8=\E[19~, kf9=\E[20~, khome=\E[1~, kich1=\E[2~, kmous=\E[M, knp=\E[6~, kpp=\E[5~, kspd=^Z, nel=^M^J, oc=\E]R, op=\E[39;49m, rc=\E8, rev=\E[7m, ri=\EM, rmacs=\E[10m, rmam=\E[?7l, rmir=\E[4l, rmpch=\E[10m, rmso=\E[27m, rmul=\E[24m, rs1=\Ec\E]R, sc=\E7, setab=\E[4%p1%dm, setaf=\E[3%p1%dm, sgr=\E[0;10%?%p1%t;7%;%?%p2%t;4%;%?%p3%t;7%;%?%p4%t;5%;%?%p5%t;2%;%?%p6%t;1%;%?%p7%t;8%;%?%p9%t;11%;m, sgr0=\E[0;10m, smacs=\E[11m, smam=\E[?7h, smir=\E[4h, smpch=\E[11m, smso=\E[7m, smul=\E[4m, tbc=\E[3g, u6=\E[%i%d;%dR, u7=\E[6n, u8=\E[?6c, u9=\E[c, vpa=\E[%i%p1%dd,
Nano editor - display not updating with PuTTY
Terminal resize generates a SIGWINCH signal that is sent to the foreground applications. Said applications are supposed to catch that signal (provided they care about terminal size to begin with), and adjust accordingly. What seems to be going on when you resize the terminal while nano is running from ipython is that ipython receives the SIGWINCH, but doesn't re-send it to nano. I don't think there is anything you can do about that (except report it as a bug to IPython developers).
I am using the OS X Terminal (I know, not Linux or Unix, but hopefully that's irrelevant here) for a number of operations, and commonly use nano to quickly edit files. In doing so I often resize the Terminal window, and the nano interface resizes accordingly to fit the window. This is great: it allows me to see my work better, and is expected behavior. However, I also use ipython for computations and development in Python, but when I load this interactive shell it seems to size the shell's width and height at the current Terminal window's bounds, and then will not resize dynamically if the window is resized. As a result, if I use a command like "!nano" in ipython to launch the nano editor, it will initially load and look correct at the current window size, but resizing the window will garble the nano interface as it wraps around or extends beyond the apparent bounds of the ipython shell. This appears to be some limitation with how ipython is interacting with the bounds of the window or shell running in the window. I cannot tell, and am hoping someone with experience in this can point me to a way (if possible) of having ipython (or whatever is involved here) properly resize itself according to the Terminal's window. Here is a screenshot of what it looks like. The window was initially larger, so I ran ipython to load the interpreter, and then ran !nano to launch nano within ipython. I then resized the window and this is what results. I'd expect (and hope to find a way) for such resizing to preserve the nano interface. This only happens in ipython, so it's clearly a restriction somewhere in that environment.
Resizing the terminal window for ipython command prompt
There's a de facto standard format for compiler or linter error messages, which is the same format as grep -n: FILE_NAME:LINE_NUMBER:MESSAGE. Experimentally, nano supports that. I haven't researched whether it supports any other format, but in any case it doesn't support lacheck's format. You can define a wrapper to the lacheck command that rewrites its messages in the standard format, and tell nano to invoke that wrapper instead of invoking lacheck directly:

    #!/usr/bin/env bash
    set -o pipefail
    lacheck "$@" 2>&1 | sed 's/^"\([^"]*\)", line /\1:/'
In my latex.nanorc file, I have the following instructions:

    syntax "LaTeX" "\.(la)?tex$"
    linter lacheck

However, when I press the keyboard shortcut to run the linter, I get an error message La commande « lacheck » n'a produit aucune ligne analysable (i.e. "The 'lacheck' command did not produce any analysable lines" in English). When I run lacheck on its own on my tex file it produces this output:

    "article.tex", line 21: missing `\ ' after "e.g.".

My guess is that the format of the message is not understood by nano (version 5.8). Is there a standard protocol a linter must comply with in order to be recognised by nano?
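To hook that up, a hedged sketch (the wrapper's path and name are hypothetical): save the script above as /usr/local/bin/lacheck-nano, make it executable, and point the linter line in latex.nanorc at it instead of at lacheck directly:

    chmod +x /usr/local/bin/lacheck-nano

    syntax "LaTeX" "\.(la)?tex$"
    linter /usr/local/bin/lacheck-nano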
Which linters are supported by nano?
I added the following to ~/.emacs:

    (setq-default indent-tabs-mode t)
    (setq backward-delete-char-untabify-method nil)
    (setq indent-tabs-mode t)

    (defun my-insert-tab-char ()
      "Insert a tab char. (ASCII 9, \t)"
      (interactive)
      (insert "\t"))
    (global-set-key (kbd "TAB") 'my-insert-tab-char)  ; same as Ctrl+i
How can I configure ~/.emacs so that I indent how nano does by default?
- Uses a tab character instead of 5 spaces
- I can add as many tabs to a line as I please
How to make 'emacs' indent with tabs exactly how 'nano' does...?
Syntax highlighting tends to be language-specific. However, if you want to do it for all files, you can simply create a very, very simple language definition. I took the Perl syntax style (which treats lines starting with # as comments) from /usr/share/nano/perl.nanorc and adapted it to:

    syntax "All" "."
    color green "^\s*#.*"

As far as I can tell, the nano syntax highlight format needs at least one test to define the file type, and then you can set filters for the color. So, I used the simplest test I can think of, that the file's name contains at least one character, and I named this syntax style All:

    syntax "All" "."

I then told it to color lines starting with zero or more spaces and then a # in green:

    color green "^\s*#.*"

So, if you create a file called $HOME/.nanorc and paste those two lines into it, your comments will be highlighted in green.
I was wondering how could I make lines commented with a # highlighted in a different colour in nano? I saw this question on askubuntu that shows how to syntax highlight for different languages. However this is overkill for just highlighting comments.
How to auto highlight comments in nano?
The problem is not the ., but the .1. This extension is used for man pages, and so the viewer tries to interpret the file as a man page. Edit: this is controlled by the file /etc/mc/mc.ext; you'll find a "# Manual page" entry there:

    # Manual page
    regex/(([^0-9]|^[^\.]*)\.([1-9][A-Za-z]*|[ln])|\.man)$
        Open=/usr/lib/mc/ext.d/text.sh open man %var{PAGER:more}
        View=%view{ascii,nroff} /usr/lib/mc/ext.d/text.sh view man %var{PAGER:more}
It seems to ignore newline characters in "parsed" view. In raw view everything is fine. The same thing happens in nano "justified" mode.
a) parsed text file
b) same file in raw mode
c) vi with :set list
d) same file in parsed mode when the name is changed to "tt"
Can anyone explain this?
mc (midnight commander) view (F3) strange behavior when there is a "." in the filename
I don't have the latest version, but browsing through the sources online shows that M-# is setting an option called linenumbers, and indeed this is described in the new man page for nanorc:

    set linenumbers
        Display line numbers to the left of the text area.

There is also a command-line -l or --linenumbers option.
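So the persistent setting the question asks for would be something like this (assuming a nano version new enough that its nanorc documents the linenumbers option):

    ## ~/.nanorc
    set linenumbers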
I'd like the line numbers to show automatically whenever I use nano. I've seen the set const command in ~/.nanorc but I want to see the line numbers in the column to the left, activated by meta-# within nano. Is there a way to automatically use meta-# whenever I use nano without having to do it myself? Thanks
Use keyboard shortcut automatically in nano? [duplicate]
The issue, at its simplest, is that your terminal emulator has two ways to deal with the mouse (besides ignoring it): do something intelligent with the mouse itself, since the program being run doesn't know what to do with it, or let the application deal with it. Most terminal emulators do both, and choose between the two based on whether or not the application says it can use the mouse (termcap and terminfo come into play here, but let's skip the details). If the terminal emulator has decided to do something intelligent with the mouse, in most cases the reasonable decision is to implement copy and paste. If the terminal emulator is just passing mouse information to the application, it is entirely the responsibility of the application to do the right thing, and applications vary widely in what they do. vim implements copy and paste and visual mode and is well thought out (if you like vi). aptitude does not; it only does selection (which is decent in the menu and a couple of other places, but often leaves me reaching for the Shift key). Then there is xterm and those that emulate it to some degree, where they decided that if the application is wrong you can hold down the Shift key and change what the mouse does, which is how I copy URLs from aptitude, and once in a blue moon send mouse events to cat (I think this still works; I haven't done it in years). In the case of nano, I avoid it as its vi compatibility mode is broken, so I can't give you advice beyond what is mentioned in the man pages (and besides, I haven't read them lately).
In normal mode, I can use the left mouse button to copy and the right button to paste, but not with mouse mode:

    -m, --mouse
        Enable the use of the mouse

Is copy/paste still possible with mouse mode?
How to copy and paste in nano editor with mouse enabled? [duplicate]
I figured it out, thanks for the pointers guys. The problem here was that grep was recursively searching for "TODO:" in the todo.txt file and then writing those results back to the todo.txt file. When I opened todo.txt it was filled with the same text looped over and over again. Evidently, I should have used the --exclude="todo.txt" option in grep. After adding that, it works perfectly.
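For reference, this is the alias from the question with that one option added (everything else unchanged):

    alias todo='cd /home/csc103/Desktop/shared/csc103-lectures && grep -Rw "TODO:" --after-context=6 --include="*.cpp" --exclude="todo.txt" . > todo.txt && nano todo.txt'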
I have a professor who stores homework assignments in many files spread across different sub-directories in a lecture folder, with the header "TODO:". I'd like to output all these todos to a single text file in nano instead of navigating from one assignment file to another. I tried to make an alias for this command, since I use it so much, but whenever I try to execute it, the cursor just blinks and nothing happens.

    alias todo='cd /home/csc103/Desktop/shared/csc103-lectures && grep -Rw "TODO:" --after-context=6 --include="*.cpp" . > todo.txt && nano todo.txt'

What am I doing wrong here?
Edit: by "nothing happens" I mean that the cursor keeps blinking and the next prompt doesn't come up, as in the left terminal pane in the image below. However, when I force-quit the process with Ctrl-C I do end up in the directory I wanted the todo command alias to take me to, and there is a todo.txt file in there. Also, if it's of any relevance, I'm issuing these commands on an Arch Linux install in VirtualBox.
Grep alias piped to nano. Nothing happens when command is issued
The problem is likely to be the placement of the - sign in your character list. You have already used the fact that ranges of characters can be expressed by [start-end], as in [a-z] being shorthand for [abcdefghijkl...xyz] (although see the caveat below). That means that the - is a special character, and if it occurs between two "regular" characters, it is interpreted as indicating yet another range encompassing these two characters and every one in between. Of course, this only works if the character after the - is lexicographically "later" in the sort order than the character preceding it, which is also the reason for your error message (you will see that it goes away if you say (-_ instead, although that will not solve your problem). Since you obviously want to match the literal -, and depending on how regular expressions are interpreted in the .nanorc, you either have to escape it (i.e. \-), or place it first or last in the character list (i.e. [-etc] or [etc-]), which would be standard in POSIX and GNU regular expressions and therefore the most likely solution on a Linux system. See e.g. here for further reference.
Caveat: the statement above, "[a-z] being shorthand for [abcdefghijkl...xyz]", is not unconditionally true! How the range is interpreted depends on the locale settings, specifically the collation order. In the "C" locale, the order is according to ASCII code value, i.e. ABC...XYZ...abc...xyz. Here, [a-z] actually means "all lowercase characters". In most other locales, upper- and lower-case characters are grouped together, i.e. the order is aAbBcC...xXyYzZ. Here, [a-z] would mean "all lowercase characters and all uppercase characters except Z". The treatment of non-ASCII characters like "umlauts" is yet another issue. See here and here for further discussions on the subject.
Can you help with this regex in my sourceslist.nanorc?
Regex: cdrom:\[[a-zA-Z0-9\._-\(\) ]+\]/
Error: Bad regex "cdrom:\[[a-zA-Z0-9\._-\(\) ]+\]/": Invalid range end
Thank you.
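Applied to the pattern from the question, a hedged fix following the "place it first" option (only the position of the - inside the bracket expression changes; the surrounding color rule and its color name are assumptions, since the question only shows the regex):

    color brightmagenta "cdrom:\[[-a-zA-Z0-9\._\(\) ]+\]/"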
Bad regex : Invalid range end
So /etc/nanorc.pacnew is the new rc file that came with the new distribution upgrade? How about

    sed '/tabsize/ {s/^# *//; s/[0-9]*$/4/}' /etc/nanorc.pacnew > /etc/nanorc

then? Another possible trick might be to have a symbolic link ~/.nanorc in every user's home dir pointing to a central file with the relevant commands. On demand, the annotated version:

    sed '/tabsize/            # if the line matches "tabsize"
      {s/^# *//;              # remove "#" and trailing spaces from begin-of-line (BOL)
       s/[0-9]*$/4/           # substitute any sequence of digits at EOL by "4"
      }' /etc/nanorc.pacnew   # input file
      > /etc/nanorc           # redirection to target file
I want all users of nano to have tabsize 4 instead of the default 8. What is the best way to achieve this? I would prefer a file that overrides /etc/nanorc at the system level so I don't have to maintain separate user nanorcs for this purpose. In the simple case, my override would only need to contain:

    set tabsize 4

Here's another way to state my question: does nano recognize /etc/nanorc.d/ and config files placed therein? If so, what is the required naming and content of config files placed there? What I tried so far was to create /etc/nanorc.d/, place a file named tabsize.conf in that directory, and put only the following contents in the file:

    set tabsize 4

My naive attempt did not work, but I am hoping there is a way to use this config.d/ pattern with nano. I will make my question even more specific. I am using Arch Linux. I have to do these steps when the package has a new nanorc:

    mv /etc/nanorc.pacnew /etc/nanorc

Then edit /etc/nanorc, search for tabsize, uncomment the line, change the value from 8 to 4 and save the file. My goal is to only have to do this step:

    mv /etc/nanorc.pacnew /etc/nanorc

And to have a file similar to /etc/nanorc.d/tabsize.conf that contains my desired tab size. It's a small savings of time, but multiplied across a number of computers it adds up. This year it seems like I have gotten new /etc/nanorc.pacnew files about six times. It is very inefficient to keep editing tabsize over and over.
How to override /etc/nanorc systemwide?
The default nano on macOS is release 2.0.6. Soft line wrapping, enabled via the -a command line option or via Esc+$ inside the editor, was introduced in release 2.2. To install a more recent release of the editor on macOS, use e.g. Homebrew:

    brew install nano

This would (currently) install nano release 4.9.2, which also happens to be the most recent release.
I tried the following recommended approaches unsuccessfully to toggle softwrap in nano on macOS Catalina:
- I can't do Alt+S as in the manual; it gives ß on my keyboard.
- Esc-S gives "Smooth scrolling enabled"
- Esc-$ gives "Unknown command"
What can I do?
How to toggle nano softwrap on mac
gpg -d just prints the file to standard output, but you can redirect the output to a file instead: gpg -d filename.txt.gpg > filename.txt. Or use the -o outputfilename option. Also, you can just run gpg filename.txt.gpg, which causes gpg to guess what you want, and in that case it decrypts the file to filename.txt (dropping the final .gpg). Of course, note that when you decrypt the file on a regular filesystem, the OS may write it to the disk, and removing the file afterwards will not clear remains of the file data from the disk. To avoid that, make sure to decrypt sensitive data only to RAM-based filesystems. On Linux, that would be the tmpfs filesystem. In some distributions, /tmp is a tmpfs by default. If it isn't, you can mount a new tmpfs simply with mkdir /ramfs; mount -t tmpfs tmpfs /ramfs (as root, change the ownership and permissions as required). Just mounting a filesystem doesn't mean that your files would be saved there, but a full discussion of safely handling sensitive data is outside the scope of this answer.
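Putting the pieces together, a hedged sketch of the edit cycle (assumes /dev/shm is a RAM-backed tmpfs, which is common on Linux; the filenames are those from the question, and gpg may prompt before overwriting the existing .gpg file):

    gpg -o /dev/shm/filename.txt -d filename.txt.gpg    # decrypt to RAM-backed storage
    nano /dev/shm/filename.txt                          # edit the plaintext
    gpg -c -o filename.txt.gpg /dev/shm/filename.txt    # re-encrypt over the old file
    rm /dev/shm/filename.txt                            # remove the plaintext copy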
I encrypted a text file in terminal using "gpg -c filename" and got "filename.txt.gpg" created in my file manager. I deleted the original unencrypted file. Now I want to decrypt it in Nano so I can continue working on it. If, in a terminal, I do "gpg -d filename.txt.gpg", the file opens in terminal where I can read it, but do nothing else. I want to open the encrypted file in Nano, and add data to the file in Nano. I've tried every way I can think of, but not able to decrypt and open the file in Nano. Any ideas? Thx.
How to decrypt a file in the nano text editor?
The virtual terminal you are using nano on is modelled after the physical serial terminals of past decades. By convention, these terminals produced ASCII control codes when a letter key or one of the keys labelled @, [, \, ], ^, _ was pressed together with the Ctrl key. The control code emitted is the ASCII code of the letter minus 64. Thus pressing Ctrl-M produces the ASCII code of M (0x4D) minus 0x40, which is 0x0D, the code for Carriage Return. The Return key also produces Carriage Return, because that's the function of that key.
I wanted to make Nano have more common shortcuts (i.e. Ctrl-F for search, Ctrl-H for replace, etc.) and edited the nanorc file to add:

    bind ^F whereis all
    bind ^H replace all
    bind ^M mark all
    ...

To my astonishment, pressing Backspace activates the Replace function and pressing Enter activates Mark. Then I realized their virtual keycodes are: Backspace 0x08, H 0x48, Enter 0x0D, M 0x4D. Are the Ctrl keybindings actually bitmasked on 0x40?
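A quick way to see this arithmetic in a shell, confirming the bitmask intuition from the question (POSIX printf treats a leading quote as "give me the character's code"):

    printf '%d\n' "'M"                 # 77  (0x4D)
    printf '%d\n' $(( 0x4D & ~0x40 ))  # 13  (0x0D, Carriage Return)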
Nano editor: are Ctrl keybindings actually bitmasked?
There is also the clang-format command.
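A hedged usage sketch, assuming the messy code is C or C++ source (clang-format needs to be installed separately; the filename is hypothetical):

    clang-format -i messy.cpp   # reformat the file in place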
Is there a SIMPLE way to format nano file text? My code is getting pretty messy, so it MAY help to format it.
nano text file formatter? [closed]
The command would have started the nano editor and instructed it to edit the root directory. nano would have complained with a [ "/" is a directory ] message. No files would have been changed.
Instead of using sudo find / -name "supervisor" I ran sudo nano / -name "supervisor" by mistake, which looked like it opened a blank nano file before I hit Ctrl-X...

    $ sudo nano / -name "supervisor"
    Use "fg" to return to nano.
    [1]+  Stopped    sudo nano / -name "supervisor"

The operating system is Ubuntu 16.04. Does anyone have any idea what it was trying to do? I mainly want to know if I have messed anything up on the server. Thank you.
Mistyped command: sudo nano / -name "supervisor"
^O is a representation of a character, the one usually sent by your terminal upon pressing Ctrl+o, not a key. In terminals, applications get input by reading a stream of bytes from a /dev/tty* or /dev/pt* device file, not by handling keyboard events, and those bytes are the ones that are sent there by your terminal when you press some key or key combination. With an ASCII-based terminal as used on ASCII based systems (the norm these days though most systems/terminals extend it to support non-American-English characters), when you type a to z, the terminal sends byte 97 to 122 (or 0x60 | letter), the ones representing the a to z characters in ASCII; if you do the same whilst holding the Shift key, they send bytes 65 to 90 (0x40 | letter), A to Z in ASCII. With Ctrl, they send bytes 1 to 26 (0x00 | letter¹). Now, bytes 0 to 31 are control characters, they don't have glyphs, font representations. They have names (see man ascii). Like 9 (as sent upon Ctrl+i or ⭾) is tab, 10 is newline (sent upon Ctrl+j), 13 is carriage-return (sent upon Ctrl+m or Enter, though beware the terminal driver often translates it to a newline). The character sent upon Ctrl+o is the shift-in control character. If nano had <shift-in> Write Out instead of ^O Write Out, I'd bet most people wouldn't know how to send that control character. ^A ... ^Z (and ^[ ^\ ^] ^^ ^_ for 27..31, and ^@ for 0, ^? for 127¹) sometimes called hat notation are common visual representations of those characters. You'll find that it's also the one used by cat -vt or by the terminal line discipline after stty echoctl. Other notations include \CA or \C-A. Some control characters have representations in C strings such as \n for ^J/newline, \a for ^G/BEL. But the ^A / \C-A ones are more useful to indicate how you can generate them with a keyboard.¹ @ being 0x40 in ASCII and ? 0x3f, so the ^X character is rather obtained with 0x40 ^ X, that is the byte value of X with the second most significant bit flipped. You'll also find the M-^X and M-X representation for bytes 0x80 to 0x9f and 0xa0 to 0xff which ASCII-only terminals used to send upon Meta+Ctrl+X and Meta+X, though nowadays they rather send ESC (^[) followed by ^X/X as the 8th bit is used instead for non-ASCII characters and I'd expect nano these days to expect ^[U rather than byte 0xd5 to mean M-U Undo as that byte 0xd5 is also found as the first byte of the UTF-8 encoding of Armenian characters U+0540 to U+057F or is Õ in ISO8859-1 (aka latin1)
Extremely simple question. In the attached image the majority of options seem to entail typing ^ and another character at the same time. The problem is that to type ^ I need to press Shift + 6, at which point I am actually typing ^ before I have a chance to press the second character, e.g. ^T.
How to handle caret + character options in GNU nano editor in Ubuntu
Edit the nanorc file, and add the following lines:

    ## COLOR_1 is the text, COLOR_2 is the background. Supported colors are
    ## white, black, blue, green, red, cyan, yellow, magenta.
    set titlecolor COLOR_1,COLOR_2
    set numbercolor COLOR_1,COLOR_2
How can I change the actual editor theme in .nanorc? I am not speaking about syntax highlighting, but about editor elements such as the title bar or line numbers colour/background colour. For instance, I would like to set the title bar and line number background to black/transparent, and the font colour to white.
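For the white-on-black look asked about below, that would come out as something like this (a sketch; availability of these options and the exact rendering depend on the nano version and the terminal palette):

    set titlecolor white,black
    set numbercolor white,black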
nano change line numbers color
In the comments, @JeffSchaller noted that a terminated nano process saves the unwritten file as file_path.extension.save. If it is there, which it was for me, it is a simple matter of mv-ing the file back to its original name.
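A sketch of the recovery steps, tying this to the question (the PID comes from nano's own "being edited by" message; the filename is hypothetical; per nano's documented behaviour, SIGHUP makes it dump the unsaved buffer to an emergency .save file):

    kill -HUP <PID>                  # orphaned nano writes its buffer to myfile.txt.save
    mv myfile.txt.save myfile.txt    # restore it under the original name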
I was ssh'ing into a Raspberry Pi running Raspbian, editing a file with nano, when I lost my internet connection (by leaving the WiFi zone). After reconnecting an hour later, I found that the pi had kicked me out, but after logging back in, I saw that it did not stop the task. When I reopened nano, it told me that the file was being edited by the previous nano process, and it gave me the PID. I made substantial changes to the file and forgot to save, but presumably the changes are still there. How can I tell nano to save & quit (^O -> Enter -> ^X) or reopen the task in a new shell, from outside the original process?
How to exit nano from a different terminal?
Check out what the package does in its postinst:

    update-alternatives --install /usr/bin/editor editor /bin/nano 40 \
        --slave /usr/share/man/man1/editor.1.gz editor.1.gz \
        /usr/share/man/man1/nano.1.gz

This installs /bin/nano as an alternative for /usr/bin/editor (the alternative named editor, so /etc/alternatives/editor), with priority 40, and associates the nano manpage as an alternative for the editor manpage. That way, selecting nano as the configured alternative automatically sets up the manpage to match. When this is run, if an editor alternative already exists, nano will be added, and if the alternative is in automatic mode, selected if it has the highest priority; if not, the alternative will be created in automatic mode, and nano will be added and selected. So you probably want something like

    update-alternatives --install /usr/bin/editor editor /usr/local/bin/nano 100

(assuming you want to automatically select nano; the highest priority I see for an editor in Debian is 70, so 100 will win).
I have compiled the GNU nano editor myself and I wish to add it to the system editors list.

    which nano

tells me the following location:

    /usr/local/bin/nano

So it should be something like:

    sudo update-alternatives --install /usr/bin/editor editor /usr/local/bin/nano 1

But I need to put the pieces together. Could you help me with understanding the manual, please?

    COMMANDS
    --install link name path priority [--slave link name path]...
        Add a group of alternatives to the system. link is the generic name for the master link, name is the name of its symlink in the alternatives directory, and path is the alternative being introduced for the master link. The arguments after --slave are the generic name, symlink name in the alternatives directory and the alternative path for a slave link. Zero or more --slave options, each followed by three arguments, may be specified. Note that the master alternative must exist or the call will fail. However if a slave alternative doesn't exist, the corresponding slave alternative link will simply not be installed (a warning will still be displayed). If some real file is installed where an alternative link has to be installed, it is kept unless --force is used.
        If the alternative name specified exists already in the alternatives system's records, the information supplied will be added as a new set of alternatives for the group. Otherwise, a new group, set to automatic mode, will be added with this information. If the group is in automatic mode, and the newly added alternatives' priority is higher than any other installed alternatives for this group, the symlinks will be updated to point to the newly added alternatives.
Compiled GNU/Nano: How to add to system editors list
/lib/modules/3.18.1+/kernel/drivers/video/fbdev/fbtft is a directory. modinfo fbtft or modprobe fbtft looks for a file called fbtft.ko, which should be in that directory. The fbtft driver can either be compiled as a module or linked into the main kernel binary. If it's in the main kernel binary then there won't be a file under /lib/modules. But at runtime there will be a directory in sysfs, /sys/module/fbtft, containing various information about the driver. Note that the driver you should be loading is actually fbtft_device. See the wiki for more information. If the fbtft modules are not included in the kernel you're using, then you'll need to recompile them. But 3.4 is a pretty old kernel; the fbtft drivers were added in 4.0. So you should look for a more recent kernel. Debian jessie, the latest stable release, shipped with 3.16; that's almost recent enough but not quite. There are more recent kernels in the backports.
When I try the command modinfo fbtft I get this result:

    modinfo: ERROR: Module fbtft not found.

But when I check, I have an fbtft entry in this location: /lib/modules/3.18.1+/kernel/drivers/video/fbdev/fbtft. Do I have kernel support for fbtft or not? If not, how do I add it? My system is an ARM-based computer (NanoPi M1) with an Allwinner H3 sun8iw7p1 SoC and Debian Jessie. This is the result of uname -r: 3.4.39-h3
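Two quick checks following the answer, to tell which situation applies (paths as described above; the kernel version is taken from uname -r):

    ls /sys/module/fbtft                              # exists if the driver is built in or already loaded
    find /lib/modules/$(uname -r) -name 'fbtft*.ko'   # lists the file modprobe would load, if built as a module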
Do I have kernel support for fbtft?
"I suppose in that case nano is using another .nanorc?" Yep. When you run sudo nano file, the HOME environment variable is set to /root, so nano looks for .nanorc there. Just add the setting to /root/.nanorc, and you should be fine.
I needed to be able to set the cursor using the mouse when editing text via nano over PuTTY. I have already set the below in ~/.nanorc:

    set mouse

If I start nano file just like that, then it works, but if I need elevated rights and run sudo nano file, then it doesn't. How on earth is that possible? I suppose in that case nano is using another .nanorc?
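One quick way to apply that (uses the setting from the question; tee -a appends rather than overwrites root's nanorc):

    echo 'set mouse' | sudo tee -a /root/.nanorc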
Setting nano cursor using mouse via putty doesn't work if sudo
Why? From this answer: "Selections in X work by two X clients cooperating: one X client claims it has a selection (primary, secondary, clipboard), and another X client that wants to paste the selection contacts the first client to receive it." In your case the original xclip exits, but its child survives to claim it has a selection and to serve future clients that want to paste. Usually (e.g. in case of printf foo | xclip -sel c) you don't notice the child because it's in the background. With xsel the mechanism is the same: a child survives in the background. The difference is that the child xsel is smart and redirects its stdin and stdout to /dev/null, and its stderr to a log. The child xclip is not that smart; it keeps "using" the standard streams inherited from the original xclip. It does not really use them, it just keeps them open. In the [ Executing... ] state nano is waiting for the output of the invoked command. The command is a writer and nano is a reader of some pipe. If any process keeps the pipe open for writing, then nano will keep waiting for further output. In case of xclip the child keeps the pipe open and thus prevents nano from continuing*. In case of xsel the child does not keep the pipe open. Your problem is very similar to this one: ssh command doesn't terminate.
* Trivia: copying from elsewhere, i.e. making some other process claim it has a selection, will make the child xclip exit; this will unblock nano.
Fix: Forcing a child to close its inherited standard streams without affecting the parent first is not easy in general, I think. Luckily here you don't really need nano to capture any output from xclip (no matter if from the parent or from the child). Redirecting output from the parent xclip to /dev/null is enough to fix the problem. In my tests the standard shell syntax works:

    "{execute}|xclip -sel c >/dev/null 2>&1{enter}{undo}"
In nano v8.0 there's an option to send the selection to the X clipboard in the nanorc:

    "{execute}|xsel -ib{enter}{undo}"

which works perfectly. I tried using the xclip utility instead:

    "{execute}|xclip -sel c{enter}{undo}"

The result was an infinite execution [ Executing... ], insensitive to interaction until I hit ^C. After the cancelling, the copying appeared to be done, no errors, but what's with the stuckage? Why does it happen and is there a fix? I'm on Kubuntu 20.04.6 LTS (Focal Fossa).
nano command execution stuck
I compared the files visually, and nothing was different. However, besides the permission issue @A.B suggested, I also found that there was a line with \r\n instead of \n in my head file. Editing and saving the file with nano fixed the newline error automatically, but with the diff tool everything was much clearer. Do not trust visual comparison; use diff! :D
I'm having a weird issue under Debian 11 with the resolvconf package. No matter what kind of configuration I throw in, the /etc/resolv.conf file created by the resolvconf service is kind of corrupt. Dig says:

    dig: parse of /etc/resolv.conf failed

If I use an editor like nano to add just a space or newline, or even nothing, just overwriting the file from nano to /etc/resolv.conf, dig goes back to reading the file again; the same goes for the OS, which is then able to perform DNS lookups, and otherwise not. To me the syntax is OK; the actual content is:

    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 127.0.0.1
    nameserver 127.0.2.1

I have a configured bind server on the machine that works properly, and the actual configuration works, but ONLY if I read it with nano and re-save it in place. Regarding the file itself, I thought maybe it could be a permission difference between what the resolvconf daemon does and what nano does. This is before the nano edit:

    lrwxrwxrwx 1 root root 29 Apr 10 19:24 /etc/resolv.conf -> ../run/resolvconf/resolv.conf

This is after the nano edit:

    lrwxrwxrwx 1 root root 29 Apr 10 19:24 /etc/resolv.conf -> ../run/resolvconf/resolv.conf

I also made a cp of the first one and swapped them; the original one doesn't work, the new one does. The files compare as identical... I have no idea what is happening :(
resolvconf package creates a corrupt /etc/resolv.conf file
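A hedged way to make that kind of difference visible directly (assumes GNU cat; with -A a carriage return shows up as ^M before the $ line-end marker, and the head file path shown is the usual resolvconf location):

    cat -A /etc/resolvconf/resolv.conf.d/head
    diff <(cat -A working.conf) <(cat -A broken.conf)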
If you don't want autoindent enabled, you should not set autoindent, as that enables it. Instead, use unset autoindent in your .nanorc. For more information, see man nanorc. You can also toggle autoindent with Alt+I (see Ctrl+G for all shortcuts).
When I copy and paste text from the clipboard, the code looks like

    [core]
        repositoryformatversion = 0
            filemode = true
                bare = false
                    logallrefupdates = true
                        [remote "heroku"]

instead of

    [core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        logallrefupdates = true
    [remote "heroku"]

Here is my .nanorc file:

    set tabsize 4
    set autoindent

According to Why does Vim indent pasted code incorrectly? the set autoindent is inserting the leading tabs into the code when I am pasting from the clipboard. Is there a way I can configure .nanorc to turn off autoindenting while pasting from the clipboard and turn it back on otherwise?
Why does nano indent pasted text or code improperly?
I'm not aware of any way to do this. The cheatsheet that's used in Nano is built into the source code, and the editor it's modeled after, Pico, has one as well. Vim, however, doesn't have such an option as far as I'm aware. There are many more commands that are commonly used in Vim than in Nano. My version of Nano lists 22 commands, and I routinely use more than that in daily editing even if you only include letters of the alphabet in normal mode, so there's not necessarily a good limited set to expose. Part of the problem is that Vim has both actions and motions and you need to combine both to be even moderately effective, so the number of things to show is potentially very expansive. Vim does provide a full built-in help mode which you can invoke with :help, possibly followed by a topic you're interested in, which makes things a little easier. Then again, Nano also has built-in help, and it's even translated.
I am using Debian GNU/Linux. Are there any options or extensions to vim that give me the possibility to display a cheat sheet in vim as shown in Nano?
Is it possible to show a vim cheatsheet like in Nano at the bottom of the page?
The issue here is that sudoedit copies the file to a temporary file before opening it in the editor. When the file has an extension, the temporary file is created with the same extension, and filename-based syntax highlighting modes are selected appropriately (e.g. for C files). When the file doesn’t have an extension, as is the case with nanorc, it is created with a random extension; this confuses filename-based syntax highlighting mode selection, and nano ends up treating the file as a standard text file. If you can reconfigure nano to treat any nanorc* file as a configuration file, you’ll be able to restore the behaviour you’re after. Otherwise I’m not sure there’s a way to handle this automatically.
Here are my personal aliases for editing root-owned files:

    # CLI superuser nano; compiled; version 2.8.0
    function sunano {
        export SUDO_EDITOR='/usr/local/bin/nano'
        sudoedit "$@"
    }

    # GUI superuser xed; packaged; version 1.2.2
    function suxed {
        export SUDO_EDITOR='/usr/bin/xed'
        sudoedit "$@"
    }

    # GUI superuser sublime-text; packaged; version 3126
    function susubl {
        export SUDO_EDITOR='/opt/sublime_text/sublime_text -w'
        sudoedit "$@"
    }

Let me take it from the end: Sublime Text works great now thanks to Stephen Kitt's advice. Xed seems to work well too; it shows that the privileges are elevated, which I personally don't like to be reminded of, but there seems to be no problem with it: colors are there and it didn't even need a wait switch like Sublime. The problem I have is with Nano, as follows. If I invoke it as I was used to, e.g.:

    sudo nano /etc/nanorc

the colors are there. But if I call it with the new alias:

    sunano /etc/nanorc

there are no colors whatsoever. The configuration seems to have been read, though, because it looks the same as I've configured it.
EDIT1: Apparently this issue affects at minimum the config file:

    -rw-r--r-- 1 root root 8.6K Apr 8 02:30 /etc/nanorc

Other files, e.g. Bash or C++, are colored. I'm confused.
Nano through Sudoedit = No colors
You can use a here document, but this way it is not possible to provide a specific output file:

    $ cat | nano <<-EOF
    one
    two
    three
    EOF
    Received SIGHUP or SIGTERM
    Buffer written to nano.save

This behaviour is mentioned in the man page under NOTES:

    In some cases nano will try to dump the buffer into an emergency file. This will happen mainly if nano receives a SIGHUP or SIGTERM or runs out of memory. It will write the buffer into a file named nano.save if the buffer didn't have a name already, or will add a ".save" suffix to the current filename. If an emergency file with that name already exists in the current directory, it will add ".save" plus a number (e.g. ".save.1") to the current filename in order to make it unique. In multibuffer mode, nano will write all the open buffers to their respective emergency files.

So I think nano is not the best choice for non-interactive text input. If you only want to write multi-line text to a file, you can also use a here document without nano:

    cat > foo.txt <<-EOF
    > one
    > two
    > three
    >
    > EOF
    cme@itp-nb-1-prod-01 ~ $ cat foo.txt
    one
    two
    three

Maybe this is what you need.
How do I input text into a new text file using nano from the command line? I would like the same as with the following, but using nano:

    echo 'Hello, world.' >foo.txt

Result: nano is not capable of handling non-interactive text input. echo is available in every Linux/Unix system, while nano is not installed by default in every Linux/Unix system. Echo can also be used in shell scripts. Conclusion: the most compatible solution is to use

    echo 'Hello, world.' >foo.txt

to create a file and fill it with input text non-interactively.
How to input text into a new text file using nano from command line?
It seems that v2.4 (and obviously v2.2 too) either has a bug or cannot display some Unicode characters. v2.5 can display the rupee sign.
I have a file which contains the ₹ (rupee) sign, but with nano it doesn't show. Update: is there something wrong with my locale?

    :~$ cat /etc/default/locale
    LANG="en_GB.UTF-8"
    LANGUAGE="en_GB:en"
    LC_NUMERIC="en_GB.UTF-8"
    LC_TIME="en_IN.UTF-8"
    LC_MONETARY="en_IN.UTF-8"
    LC_PAPER="en_GB.UTF-8"
    LC_IDENTIFICATION="en_GB.UTF-8"
    LC_NAME="en_GB.UTF-8"
    LC_ADDRESS="en_GB.UTF-8"
    LC_TELEPHONE="en_IN.UTF-8"
    LC_MEASUREMENT="en_IN.UTF-8"
    LC_MESSAGES="en_GB.UTF-8"

I don't think that the colour profile of my terminal is making it invisible, because I tried different types of profile to make sure...
GNU nano 2.2.6 won't show ₹ (rupees sign)
I have since found that there is a bug in nano < 2.7.4-1: "nano: /etc/nanorc is ignored, if ~/.nanorc exists". Latest from the bug report: "I just made the dist-upgrade to Debian 9.0, which included an update of package nano to version 2.7.4-1 and the problem vanished; the bug is solved in 2.7.4-1." The bug report: bug
I'm trying to set my color syntax highlighting in nano, but it doesn't work as expected.
- On one system everything works. This is a Fedora 21 laptop.
- On two systems everything I've tried works except man something. These are a Fedora 21 desktop and a Fedora 21 VM in VirtualBox.
- On one system only one file I've tried works (opening nanorc itself gives highlighting). This is a Debian Wheezy desktop.
If I do man emacs it only works as expected on one system. I also have syntax highlighting for many other types of files. I thought the only thing I needed to set this up was to have .nanorc located in the user's home directory so nano could find it. This is very confusing. I've tried to look for differences in bash_profile, /etc/profile, bashrc, but nothing stands out, and maybe that's irrelevant. I've looked at the permissions. I've started a new terminal and restarted the system. Here is a piece from my .nanorc file:

    ######################################################################
    # Manpages #
    #include "/usr/share/nano/man.nanorc"
    ## Here is an example for manpages. ##
    syntax "man" "\.[1-9]x?$"
    color green "\.(S|T)H.*$"
    color brightgreen "\.(S|T)H" "\.TP"
    color brightred "\.(BR?|I[PR]?).*$"
    color brightblue "\.(BR?|I[PR]?|PP)"
    color brightwhite "\\f[BIPR]"
    color yellow "\.(br|DS|RS|RE|PD)"
    #####################################################################

Questions: why is the same .nanorc file not working the same on four Linux systems (one Fedora 21 working, two Fedora 21 not working, and Debian Wheezy not working at all)? What am I missing? What are the steps to set a custom .nanorc file to be used by nano and be sure it's not in some kind of conflict or something?
Here is the full nanorc file on pastebin.com.
Color syntax highlighting working on one system but not the others. Same nanorc file
From http://www.nano-editor.org/dist/v2.2/nano.html: -s <prog>, --speller=<prog>Invoke the given program as the spell checker. By default, nano uses the command specified in the SPELL environment variable, or, if SPELL is not set, its own interactive spell checker that requires the spell program to be installed on your system.Nano runs an external program to spell check. You probably didn't have spell installed (or the SPELL environment variable pointed to something else that wasn't installed or working; maybe it was set to Spell, which might explain the capitalization in the error message). The nanorc command overrides the speller and tells nano to run the spell check using the external program aspell, passing it (at least) the -x and -c options. From the aspell man page, the -x option disables backups and the -c option checks a single file.
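In practice the fix is usually just to install a spelling backend and point nano at it. A hedged sketch for a Debian/Ubuntu-style system (package and dictionary names may differ on other distributions):
sudo apt-get install aspell aspell-en          # install aspell plus an English dictionary
echo 'set speller "aspell -x -c"' >> ~/.nanorc # tell nano to use it, via nanorc ...
export SPELL="aspell -x -c"                    # ... or via the SPELL environment variable
Either of the last two lines is enough on its own; the nanorc setting is the more permanent of the two.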
When I pressed Ctrl+T in Nano it gave the error Spell checking Failed: Error invoking Spell. So I followed this answer and added set speller "aspell -x -c" to my ~/.nanorc and the spell check is now working perfectly. But what did this command do? What was causing the error? And why did this nanorc command fix it?
What does the command set speller "aspell -x -c" in nanorc do?
Since man nano does not show a way to specify anything other than ~/.nanorc, you'll have to keep master copies of each variant and cp one to ~/.nanorc before each edit. For example, if you have $HOME/nanorc/nanorcA and $HOME/nanorc/nanorcB, and in $HOME/.bashrc:
alias nanoA="cp $HOME/nanorc/nanorcA $HOME/.nanorc;nano"
alias nanoB="cp $HOME/nanorc/nanorcB $HOME/.nanorc;nano"
then "nanoA" will run nano with nanorcA, "nanoB" will run nano with nanorcB, and "nano" will use whichever was used last.
I am not sure if this can be done or not. I have two different .nanorc files (.nanorc1 and .nanorc2). Each file has different settings. For example one has tabs set to 9 spaces and the other has tabs set to 4 spaces. Each .nanorc file is used for different files that require different settings. Is there a cli option that will let me choose what .nanorc file I want to load? Something like nano -l ~/.nanorc/.nanorc1 filetoeditI have read the man pages and I could not find one. Any help would be appreciated.
Using multiple .nanorc files
To edit the first file only, find . -name helloworld.py -exec nano {} \; -quitThis looks for files named helloworld.py, and for each such file found, runs nano /path/to/helloworld.py, and then quits (which means that only the first file will be processed). To edit all the matching files, find . -name helloworld.py -exec nano {} +This runs nano with as many files as will fit on the command line. Use CtrlX to close each file in turn.
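If you only ever want the first match and prefer a plain command substitution, a small hedged variant (it assumes at least one match exists and that the path contains no newline characters):
nano "$(find . -name helloworld.py -print -quit)"
The -print -quit pair makes GNU find stop after printing the first match, so nano is started exactly once with that single path.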
How can I combine the "find" command with the "nano" command? For example, find . -name "helloworld.py" | nanoHow to open that file (first out of several lets say) after it is found using "nano" (without using a function but a single line of chained commands)?
How to combine the commands "find" and "nano"
You have to use Ctrl+X to exit nano and install the new crontab. Ctrl+Z just stops nano (sends it to the background) without installing the new crontab.
I haven't been able to get crontab to execute any of my scripts on start-up. I want to know why it doesn't work. Below is an example of me trying to use it, and I've tried to provide as much troubleshooting information as I can. $crontab -l no crontab for server $crontab -e #I scroll down to the bottom of the file and add the line below in @reboot /usr/bin/teamspeak3-server_linux-amd64/ts3server_minimal_runscript.sh #I make a carriage return at the bottom of the fileI press ctrl+o to save the file (as it opened in nano), and ctrl+z to exit. I now issue "crontab -e" to check the contents is there. The file shows up, just without the changes I made to it. I even tried adding just a commented line in the crontab file & this also doesn't save. Anyway I checked the script does actually work normally. $cd /usr/bin/teamspeak3-server_linux-amd64/ $./ts3server_minimal_runscript.shIt then gives loads of output as it reads the script and loads the script perfectly. So I ctrl+c to quit the application, and check the permissions. $ls -l | grep ts3server_minimal -rwxr-xr-x 1 server server bla bla bla ts3server_minimal_runscript.shSo everyone can execute it. I reboot anyway, and find the application doesn't start. Why?
Using nano to edit a crontab [closed]
All files are made of 1s and 0s, each of which is called a bit. A "byte" is 8 bits. For 8 bits, there are 256 possible combinations of 1s and 0s. The data in plain text files, which is what nano is meant to edit (and terminals expect to output with cat) is divided into bytes. Typically, each character in the file makes up one byte. In ASCII encoding, for example, the letter "A" is the byte 01000001. Some character encodings like UTF-8 sometimes use multiple bytes to represent a character, since there are more than 256 characters they need to cover, but they still divide the file into bytes. (There are also bytes for "Control characters" like "Control-J" for a linebreak.) Images are binary files, not text files; their bits can be divided into bytes as well, but these bytes are not meant to represent characters/letters. What's happening when a non-text file is opened as a text file is that the text editor tries to interpret the bytes of the binary file as if they were meant to represent characters. Since they weren't meant to do that, there's a pretty much random correlation between the bytes of the binary file and the characters those same bytes would represent if the file actually were a text file. However, that's what nano is trying to interpret the file as, so you get random characters, many of which are control characters which aren't typically meant to be printed and so produce strange results. That's the way I understand what's going on anyway. I'm not a computer scientist by any stretch of the imagination, so I hope commenters will improve upon my answer if need be. Obviously, if you want to edit an image you should use an image editor like gimp or krita instead, not a text editor. You could I suppose use a binary editor or hex editor, etc., but that would require very detailed knowledge about how the image format turns the data it represents into bits and bytes, which I believe is different for different image formats.
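If you want to look at the raw bytes of an image without a text editor mangling the display, a hexadecimal dump is the usual tool; a small example (the file name is only illustrative):
hexdump -C picture.png | head
Each output line shows the byte offset, sixteen bytes in hexadecimal, and the same bytes rendered as ASCII where printable, so you can see, for example, the "PNG" signature in the first line while the rest of the binary data stays readable.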
So I am pretty new to Linux and I am learning basic commands. I am also very interested to know how things work under the hood, so some time after I learned both the cat and nano commands, I tried using them on an image, and all these weird symbols appeared (Linux Mint): I also tried it on Kali and got the same thing, so I believe it has to do with how images are stored at the bit level, but again I am at the beginning and I couldn't find any explanation of what exactly these symbols mean.
What are the weird characters from an image file?
Using login on OS X is a workaround for this issue. $ login login: your username password: your password Last login: Day Month Date HH:MM:SS on ttys000 $ whoami your username Thanks to user grg on Apple Stack Exchange
In Terminal I su - admin while in my user account to brew update, modify .bashrc, etc. I have noticed that when in my user session changing the size of the window while running nano doesn't scramble the text at all. The window resizes perfectly. However, after opening a new Terminal window, running su - admin, then nano and trying to resize - the text is scrambled and there is no way to recover it. control-l doesn't work. When I log into the admin account open Terminal and run nano there is no issue with resizing the window — text doesn't scramble. The same issue happens in reverse. From my admin account using su - user and running nano and resizing will cause the text to scramble. Any idea what's happening here and how to resolve this issue? I am using nano 4.9
How to solve the issue where nano screen text is scrambled when using su - admin
The problem arises from using grep --color=always as the first command. With that option, grep embeds the terminal colour escape sequences (the control characters I listed above) directly in its output. So ./^[[35m^[[Kapp.vue^[[m^[[K^[[36m^[[K is really just the string app.vue wrapped in "print this in purple" escape codes. Nano, however, is colorblind: it treats those bytes as part of the filename. If you want, you can strip these colour indicators from the string with normal regex procedures (see the sed sketch below). In my case, I was lucky enough to just tell grep --color=never instead of --color=always. The problem went away with that.
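If you cannot simply turn the colouring off (for example because the coloured output comes from a tool you do not control), a minimal sketch of stripping the sequences before using the string; it assumes GNU sed (\x1b escapes are a GNU extension), and $file and $line are the variables from the script above:
file=$(printf '%s' "$file" | sed 's/\x1b\[[0-9;]*[mK]//g')  # drop ESC[...m and ESC[K sequences
nano +$line ./$file
The pattern matches the SGR colour codes (ESC[...m) and the erase-in-line codes (ESC[K) that grep --color emits.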
I'm looking to write a bash script that recursively searches directories, find a particular string, then opens to that string in that particular file. I call this function exert because it takes away the exertion of finding a particular variable in my software. For e.g. if my website uses a phrase like "I like big cats" in a div, and I want to edit just that div, I can type "exert 'I like big cats'" and it will open the file holding that div along with the exact spot that div is at. Here is my code so far: #!/bin/bash echo "Searching for $1" echo "----Inside files----" grep --color=always -nr "$1" * > /tmp/tmpsearch; #searches for ' I like big cats' filesNew=`cat /tmp/tmpsearch` #stores it in a variable that can be split - future version will keep this in memory readarray -t files <<<"$filesNew" #it is split to an array called $files x=0; IFS=":"; #":" is the string delimter for grep's output of files with the string, alongside the line number the string is found inside the file for i in "${files[@]}"; do #The colon operator separates grep's output of line numbers and files that contain the string 'I like big cats' read -ra FOUND <<< "$i" x=$[$x+1]; printf "Found index: $x\n" line=`echo "${FOUND[1]}"` file=`echo "${FOUND[0]}"` nano +$line ./$file #This should open nano at the exact line number done exit 1Everything works fine, but nano seems to be interpreting an encoding error when being called with an array output from a bash script. Although the strings are fine inside the bashscript, nano will read a file name like ./app.vue as ./^[[35m^[[Kapp.vue^[[m^[[K^[[36m^[[K. There a lot of command characters around the file name that I can't seem to get rid of. By placing nano app.vue at the start of the script, which works, I know this isn't a problem with just nano or just bash, but a problem with them working with an array output (the string split from grep)
Bash script's array element can't act as a filename for nano, some kind of encoding error
You can assign the $EDITOR variable a script which first calls an editor and then produces the output: #! /bin/bashvim "$1" echo "foo bar baz"and use this call EDITOR=/path/to/script.sh crontab -e
To edit the root crontab in Debian I do for example sudo crontab -e. To exit from the preferred text editor (Nano), I do CTLR+X. So far so good, but what if I want that each time I exit crontab, a text will be echoed into the console (into "stdout"). The purpose is to echo a reminder message like:If you haven't already, change p to your password in password[p] to your password!To make sure I'm clear here --- I desire that each time the user finished editing the crontab and then quited back to the console, the message will appear. Is there any way to do so in the current release of Bash?
Echo something to console each time you quit crontab
sed -i 's/\r/\n/g' thefile.txt
"Classic" Mac OS used \r (carriage return) as the end-of-line character; *nix uses \n (line feed).
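Two other hedged one-liners that do the same conversion, assuming the file really has bare CR line endings:
tr '\r' '\n' < thefile.txt > fixed.txt   # tr is available everywhere
mac2unix thefile.txt                     # converts in place; part of the dos2unix package on most distributions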
I have a .txt file from my Mac that when I send it over to my Raspberry Pi running Raspbian and open it in nano it converts weirdly. Example: Text file in Mac OSX: http://welcome.hp.com/country/us/en/prodserv/servers.html http://www8.hp.com/us/en/products/data-storage/overview.htmlText file in Raspbian: Servers & Blades Storage http://welcome.hp.com/country/us/en/prodserv/servers.html^Mhttp://www8.hp.com/us/en/products/data-storage/overview.html Any help is appreciated.
.txt File from Mac not converting properly
You can not do it. The keys which could be mapped are described here: https://www.nano-editor.org/dist/latest/nano.html#Rebinding-Keys It also has a statement:Rebinding ^[ (Esc) is not possible, because its keycode is the starter byte of Meta keystrokes and escape sequences.
I've been playing with the keyboard mapping in the .nanorc file some today, and you can map the F keys like...bind F3 cancel all, however, i haven't had any luck with trying find how nano recognizes the escape key. I've already tried Esc, esc, nano keeps giving me an error message. Is there a way to bind/map the escape key in the corner of your keyboard?
Nano: can the escape key be mapped?
Regexes like that are a bit of a write-only language, but I think the (\[([[:space:]]*[[:alnum:]_]+[[:space:]]*|@)\])? in the middle catches the array indexes. It also doesn't recognize [*] as an index. It's hard to fix that properly, as array indexes can be almost arbitrary shell "expressions". In an integer-indexed array, the index is taken as an arithmetic expansion, and something like [i+j] works to use the sum of i and j. In an associative array, it could be e.g. [$x$y] for concatenation. It could also be [i+a[j]] if one were to be doing something excessively complex in the shell. Parsing that for syntax highlighting would pretty much require a full parser, not a simple regex. (And then there's command substitutions, but let's not go there...) Anyway, it's easy to make it accept the [*] and one $ in front of the variable name, here's the changed part: ... (\[([[:space:]]*\$?[[:alnum:]_]+[[:space:]]*|[@*])\])? ... ^^^ ^^^^And the resulting full line: color brightred "\$\{[#!]?([-@*#?$!]|[0-9]+|[[:alpha:]_][[:alnum:]_]*)(\[([[:space:]]*\$?[[:alnum:]_]+[[:space:]]*|[@*])\])?(([#%/]|:?[-=?+])[^}]*\}|\[|\})"As far as I can see, ${arr[]} is an error, so I'm not sure if it should be highlighted in full or not. If that's the regex from the latest version, you might want to consider also posting a bug report.
The last two array expansions don't get proper highlighting:This is the setting in sh.nanorc that defines it: # More complicated variable names; handles braces and replacements and arrays. color brightred "\$\{[#!]?([-@*#?$!]|[0-9]+|[[:alpha:]_][[:alnum:]_]*)(\[([[:space:]]*[[:alnum:]_]+[[:space:]]*|@)\])?(([#%/]|:?[-=?+])[^}]*\}|\[|\})"What do I have to fix in the regex to catch this miss?
nano highlighting fails in matching shell array brackets
In Tilix version 1.9.6 you can manually change and set key shortcuts. Disable Alt+6, which by default is bound to switching to terminal No. 6.
The key combination Alt + 6 for copy in nano does not work in tilix Does anyone know how to fix this? I had a look through all the key commands, but did not find any entry for Alt + 6 being already in use.
The key combination Alt + 6 for copy in nano does not work in GNOME terminal emulator Tilix
Those files don't belong to you - they belong to root: -rw-r--r-- 1 root root 1030 Sep 23 23:59 'Mounting a device' ^ownerIf that's not correct you need to change the ownership back to yourself. To change the directory and its files you can do thisChange to the ~/Documents directory cd ~/DocumentsChange the ownership of the directory and all its contents. You will need to be root to do this (hence use of the sudo command), because taking or giving away files between owners is a privileged operation sudo chown -R "$USER" 'Bash command instructions'Here, the "$USER" is a variable that corresponds to your username. I could guess it's peter but to be sure the command will work I've just used a standard variable.
To make the question better understood -> I currently have to run nano with sudo when i just want to run nano to edit a file. So while I was fiddling with Ubuntu 20.04 in a VM I had everything more or less working in a way that I felt comfortable putting it onto an old laptop. In the VM I could just run ~$ nano , write what I wanted to into that file and then write out and save/exit nano and my file will be there. Now with the laptop that I put Ubuntu 20.04 on I need to run nano with sudo in order to write out and save/exit the file I was working on. When trying to edit my file 'Mounting a device' with just nano I am greeted with: [File 'Mounting a device' is unwritable] and if I try to write out I get: [Error writting Mounting a device: Permission denied] To be clear these aren't system files or config files or anything of that sort. These are files like "Bash commands and how to use", "How to mount a device", "test file 1", etc. Is it something to do with the user account I created or what could I have done wrong where? I have only been using Ubuntu for about 6 months or so as my daily system so I'm still getting used to everything. I would like to be able to run nano to make day-to-day files and file edits without having to do it with sudo. Any idea on how I can change this? The output of the ls -l is as follows: peter@pbes:~/Documents/Bash command instructions$ ls -l total 20 -rw-r--r-- 1 root root 351 Nov 1 08:35 AppImage -rw-r--r-- 1 root root 570 Feb 5 00:20 Docker -rw-r--r-- 1 root root 1030 Sep 23 23:59 'Mounting a device' -rw-r--r-- 1 root root 442 Nov 8 21:13 PPA -rw-r--r-- 1 root root 361 Oct 11 09:37 'Shutter install'
I need to run ~$ sudo nano in order write and save a file and can't do so with just the nano command. How do I change this?
Esc+E works. According to the nano online help:Meta-key sequences are notated with 'M-' and can be entered using either the Alt, Cmd, or Esc key, depending on your keyboard setup.
I'm using:Debian bullseye with its MATE desktop environment MATE caja 1.22.1 MATE Terminal 1.22.1 GNU nano, version 4.3In nano, Redo is mapped to Meta+E. In Keyboard Preferences I have selected "German German (no dead keys)" and in the Keyboard Layout Options under Alt/Win key behaviour, I have selected Meta is mapped to Left Win. When I press Alt+E in a nano running in a MATE Terminal, it opens the Edit menu of the terminal window. When I press Meta+E (that is, the left logo key and E) anywhere, Caja is opened to the home directory. I have not found any way to disable either of these shortcuts to enable me to use nano's Undo function. In Preferences → Keyboard Shortcuts, the Meta+E shortcut (or Super, or Win, or Logo) is not listed. In MATE Terminal's Edit → Keyboard Shortcuts... the Alt+E shortcut is not listed.
How to use nano's Redo shortcut (Meta+E) in MATE terminal
You can't. Terminals send characters, not keys. (See How do keyboard input and text output work? for more details.) But not all keys have a corresponding character. When you press a key or key chord that doesn't have a corresponding character, the terminal sends a sequence of characters that represents it (or, in a few cases, a single non-printable control character). These sequences always start with a particular character, which is called the escape character. This character is also what Ctrl+[ sends. So if you could bind ^[ (Ctrl+[), that would break all keys that send escape sequences. For example, Up sends either the three characters (^[, [, A) or the three characters (^[, O, A), depending on the terminal. If you could rebind ^[ then the Up key would do the action of ^[ and then insert [ and A. Alt+char sends the escape character followed by char. So if you rebound M-[, you'd really be rebinding the two-character sequence (^[, [), which would break some cursor and function keys. Nano technically allows rebinding ^[ (as of version 2.5.3), but this has no effect, because when it reads ^[ it classifies it as the start of an escape sequence (I'm simplifying a bit) and never looks up a binding for ^[. Nano explicitly forbids rebinding M-[. There are ways around this on some terminals, but only a few editors take advantage of them. Nano is a relatively simple editor, which primarily targets users who don't use terminals where such ways exist, and doesn't support this feature.
I have been trying to bind the shortcut CTRL-[ to the unindent function, but it seems like that if you type in bind ^[ unindent main (CTRL-[) into nanorc, the text will be still formatted in red, not the usual green which tells you that the binding would work. I tried changing it to bind M-[ unindent main (ALT-[), but it still didn't work. Strangely, both CTRL-] and ALT-] works. Is there a way to solve this problem?
How can you bind ^[ or M-[ shortcut to an action in nano?
You're probably missing your table or chain.
nft list ruleset
will give you what you are working with. If it prints out nothing, you're missing both.
nft add table ip filter                                                  # create table
nft add chain ip filter INPUT { type filter hook input priority 0 \; }  # create chain
Then you should be able to add your rule to the chain.
NOTE: if you're logged in over SSH, adding the rule from the question will freeze your connection, because it drops every TCP packet that is not a SYN, including the packets of your already-established SSH session (see the safer variant sketched below).
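A slightly safer version of the whole sequence, sketched so that an established SSH session survives the non-SYN drop; the rule wording is only an example, adapt it to your own policy:
nft add table ip filter
nft add chain ip filter INPUT { type filter hook input priority 0 \; }
nft add rule ip filter INPUT ct state established,related accept
nft add rule ip filter INPUT tcp flags != syn ct state new counter drop
The first rule lets packets of already-tracked connections through, and the drop rule then only affects new connections that do not start with a SYN.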
I am trying to apply below nftables rule which I adopted from this guide: nft add rule filter INPUT tcp flags != syn counter dropsomehow this is ending up with:Error: Could not process rule: No such file or directoryCan anyone spot what exactly I might be missing in this rule?
nftables rule: No such file or directory error
UPDATE: iptables-nft (rather than iptables-legacy) is using the nftables kernel API and in addition a compatibility layer to reuse xtables kernel modules (those described in iptables-extensions) when there's no native nftables translation available. It should be treated as nftables in most regards, except for this question that it has fixed priorities like the legacy version, so nftables' priorities still matter here.iptables (legacy) and nftables both rely on the same netfilter infrastructure, and use hooks at various places. it's explained there: Netfilter hooks, or there's this systemtap manpage, which documents a bit of the hook handling:PRIORITY is an integer priority giving the order in which the probe point should be triggered relative to any other netfilter hook functions which trigger on the same packet. Hook functions execute on each packet in order from smallest priority number to largest priority number. [...]or also this blog about netfilter: How to Filter Network Packets using Netfilter–Part 1 Netfilter Hooks (blog disappeared, using a Wayback Machine link instead.) All this together tell that various modules/functionalities can register at each of the five possible hooks (for the IPv4 case), and in each hook they'll be called by order of the registered priority for this hook. Those hooks are not only for iptables or nftables. There are various other users, like systemtap above, or even netfilter's own submodules. For example, with IPv4 when using NAT either with iptables or nftables, nf_conntrack_ipv4 will register in 4 hooks at various priorities for a total of 6 times. This module will in turn pull nf_defrag_ipv4 which registers at NF_INET_PRE_ROUTING/NF_IP_PRI_CONNTRACK_DEFRAG and NF_INET_LOCAL_OUT/NF_IP_PRI_CONNTRACK_DEFRAG. So yes, the priority is relevant only within the same hook. But in this same hook there are several users, and they have already their predefined priority (with often but not always the same value reused across different hooks), so to interact correctly around them, a compatible priority has to be used. For example, if rules have to be done early on non-defragmented packets, then later (as usual) with defragmented packets, just register two nftables chains in prerouting, one <= -401 (eg -450), the other between -399 and -201 (eg -300). The best iptables could do until recently was -300, ie it couldn't see fragmented packets whenever conntrack, thus early defragmentation was in use (since kernel 4.15 with option raw_before_defrag it will register at -450 instead, but can't do both, but iptables-nft doesn't appear to offer such choice).So now about the interactions between nftables and iptables: both can be used together, with the exception of NAT in older kernels where they both compete over netfilter's nat ressource: only one should register nat, unless using a kernel >= 4.18 as explained in the wiki. The examples nftables settings just ship with the same priorities as iptables with minor differences. If both iptables and nftables are used together and one should be used before the other because there are interactions and order of effect needed, just sligthly lower or increase nftables' priority accordingly, since iptables' can't be changed. For example in a mostly iptables setting, one can use nftables with a specific match feature not available in iptables to mark a packet, and then handle this mark in iptables, because it has support for a specific target (eg the fancy iptables LED target to blink a led) no available in nftables. 
Just register a slightly lower priority value for the nftables hook to be sure it runs first. For a usual input filter rule, that would be, for example, -5 instead of 0. Then again, this value shouldn't be lower than -149, or it will execute before iptables' INPUT mangle chain, which is perhaps not what is intended. That's the only other low value that matters in the input case. For example, there's no NF_IP_PRI_CONNTRACK threshold to consider, because conntrack doesn't register anything at this priority in NF_INET_LOCAL_IN; nor does SELinux register anything in this hook, so -225 has no special meaning here.
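For example, to hook an nftables input chain just before iptables' filter INPUT (priority 0) but after its mangle INPUT (-150), something like this sketch will do (table and chain names are arbitrary):
nft add table ip early
nft add chain ip early input { type filter hook input priority -5 \; policy accept \; }
Anything dropped in this chain will never be seen by the iptables filter table.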
When configuring a chain in nftables, one has to provide a priority value. Almost all online examples set a piority of 0; sometimes, a value of 100 gets used with certain hooks (output, postrouting). The nftables wiki has to say:The priority can be used to order the chains or to put them before or after some Netfilter internal operations. For example, a chain on the prerouting hook with the priority -300 will be placed before connection tracking operations. For reference, here's the list of different priority used in iptables:NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations NF_IP_PRI_CONNTRACK (-200): Connection tracking operations NF_IP_PRI_MANGLE (-150): mangle operation NF_IP_PRI_NAT_DST (-100): destination NAT NF_IP_PRI_FILTER (0): filtering operation, the filter table NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example NF_IP_PRI_NAT_SRC (100): source NAT NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exitThis states that the priority controls interaction with internal Netfilter operations, but only mentions the values used by iptables as examples. In which cases is the priority relevant (i.e. has to be set to a value ≠ 0)? Only for multiple chains with same hook? What about combining nftables and iptables? Which internal Netfilter operations are relevant for determining the correct priority value?
When and how to use chain priorities in nftables
The mark is a 32 bits integer value attached to a network packet. Some network parts interacting with it (see below) can do bitwise operations on this value, it can then be interpreted between one single 32 bits value up to a collection of 32 flags, or a mix of flags and smaller values, depending on how one chooses to organise its use (tc can't do this). Of course this mark exists only as long as it's handled by the Linux kernel. It's only purely virtual and internal, as it can have no existence on the wire. Depending on where's it's used, it may be called firewall mark, fwmark or simply mark. Each network packet processed by the kernel, is handled by a structure called sk_buff, defined in linux/include/linux/skbuff.h. This structure includes various meta-data related to the packet when applicable, like IPsec information if any, related conntrack entry once looked up, ... and also its mark. Various parts of the network stack can interact with or alter this mark, such as:tc, the routing stack can have special rules set with ip rule (eg ip rule add fwmark 1 lookup 42), to alter its routing decisions with this fwmark (eg to use a routing table sending those packets to an other interface than default), of course iptables, its candidate successor nftables, Various (usually tunnel) interfaces types can set a mark or interact with it, such as vti, erspan, gre, gretap, ipip, sit, WireGuard ... or XFRM transformations (ip xfrm state/policy). Depending on types, the mark can be set on the payload or on the envelope (eg: WireGuard) and can sometimes be inherited during decapsulation/encapsulation, socket option SO_MARK, eBPF,and probably a few other places... The main goal of this mark is to have all these network parts interact with each other by using it as a kind of message. The Packet flow in Netfilter and General Networking can help see in what order those elements will receive handling of the packet and thus its mark. There are other related marks beside fwmark:connmark, which isn't stored with a packet's sk_buff, but in a conntrack entry tracking packet flows. Its connmark can of course be used by iptables with its connmark match and CONNMARK target, with an usage example there: Netfilter Connmark To Linux and beyond !. It allows the decision made based on one single packet to be memorized and then applied to all the packets of the same connection. secmark and likewise its associated connsecmark which are intended to interact with Linux Security Modules such as SELinux.
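A small, self-contained illustration of the "message passing" role of the mark, using iptables to set it and the routing stack to react to it; the mark value, table number and interface name are all arbitrary examples:
iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 0x1  # tag locally generated HTTPS packets
ip rule add fwmark 0x1 lookup 100                                       # packets with that mark use table 100
ip route add default dev wg0 table 100                                  # table 100 routes them out of wg0
Here the mark never appears on the wire; it only exists inside the kernel long enough for the policy-routing rule to pick a different routing table for the tagged packets.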
On Netfilter, you have the option --set-mark for packets that pass through the mangle table. The majority of tutorials and examples over the Internet, say that this just adds a mark on the packet, like this, but there's no additional detail of what mark is set and where it resides on the packet: iptables -A PREROUTING -t mangle -i eth0 -p tcp --dport 80 -j MARK --set-mark 1 My question is:What kind of mark is set and exactly where in the packet this mark resides?
How --set-mark option works on Netfilter (IPTABLES)?
With a recent enough nftables, you can just write:
meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS"
Actually, you can do even better:
set okports {
    type inet_proto . inet_service
    counter
    elements = { tcp . 22,   # SSH
                 tcp . 53,   # DNS (TCP)
                 udp . 53 }  # DNS (UDP)
}
And then:
meta l4proto . th dport @okports accept
You can also write domain instead of 53 if you prefer using port/service names (from /etc/services).
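For completeness, a sketch of how this could sit inside a full ruleset file; the table and chain names, the inet family and the drop policy are just illustrative choices:
table inet filter {
    set okports {
        type inet_proto . inet_service
        elements = { tcp . 22, tcp . 53, udp . 53 }
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        meta l4proto . th dport @okports accept
    }
}
Load it with nft -f filename.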
How can i do this in a single line? tcp dport 53 counter accept comment "accept DNS" udp dport 53 counter accept comment "accept DNS"
How to match both UDP and TCP for given ports in one line with nftables
Usually the main criterion for SNAT is "traffic that's going out a given interface" (i.e. -o eth0). What interface a packet will go out is determined by routing, so to apply that criterion you need to run it in a POSTROUTING context. DNAT rewrites the destination address of a packet, meaning it can affect where a packet goes to — for example, a packet that looks like it's destined for the gateway could end up being rewritten to go to a machine on the network instead. Since you want the routing to be able to take that rewritten destination into account when it makes its decision, so that the packet actually goes where it needs to, DNAT should run in a PREROUTING context.
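A typical pair of rules illustrating both directions on a home-router-style box; the addresses and interface names are made up:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:8080
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The DNAT rule rewrites the destination before routing, so the routing decision already points at 192.168.1.10; the MASQUERADE (source NAT) rule runs after routing, once the outgoing interface eth0 is known.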
Why does SNAT(modifies source IP and/or ports) happen in nat table POSTROUTING chain, i.e after routing? And why does DNAT(modifies destination IP ant/or ports) happen in PREROUTING chain? I guess latter is because there might be multiple NICs in PC with different private networks and PC does not know how to route packet if destination IP address is still publickly routable address? However, for SNAT I can not see a reason why this couldn't take place in PREROUTING.
Why does SNAT happen in POSTROUTING chain and DNAT in PREROUTING chain?
A variant of this problem was addressed recently in Kubernetes, so it’s worth looking at what was done there. (The variant is whether to use iptables-legacy or iptables-nft and their IPv6 variants to drive the host’s rules.) The approach taken in Kubernetes is to look at the number of lines output by the respective “save” commands, iptables-legacy-save and iptables-nft-save (and their IPv6 variants). If the former produces ten lines or more of output, or produces more output than the latter, then it’s assumed that iptables-legacy should be used; otherwise, that iptables-nft should be used. In your case, the decision tree could be as follows:if iptables isn’t installed, use nft; if nft isn’t installed, use iptables; if iptables-save doesn’t produce any rule-defining output, use nft; if nft list tables and nft list ruleset don’t produce any output, use iptables.If iptables-save and nft list ... both produce output, and iptables isn’t iptables-nft, I’m not sure an automated process can decide.
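The decision tree above can be scripted; a rough sketch, to be treated as a heuristic rather than a definitive test:
#!/bin/sh
if ! command -v iptables >/dev/null 2>&1; then echo nftables; exit 0; fi
if ! command -v nft      >/dev/null 2>&1; then echo iptables; exit 0; fi
# iptables-save rule lines start with "-A ..."; no such lines means no rules
if ! iptables-save 2>/dev/null | grep -q '^-'; then echo nftables; exit 0; fi
if [ -z "$(nft list ruleset 2>/dev/null)" ]; then echo iptables; exit 0; fi
echo "both have rules: cannot decide automatically"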
Given a host that is in an unknown state of configuration, I would like to know if there is an effective way of non-interactively determining if the firewall rule set in place is managed by iptables or nftables. Sounds pretty simple and I've given this quite a bit of thought, but haven't come back with a meaningful answer to put on a script...
Check whether iptables or nftables are in use
I think I finally understood how redirecting ingress to IFB is working: +-------+ +------+ +------+ |ingress| |egress| +---------+ |egress| |qdisc +--->qdisc +--->netfilter+--->qdisc | |eth1 | |ifb1 | +---------+ |eth1 | +-------+ +------+ +------+My initial assumption in figure 2, that the ifb device is inserted between ingress eth1 and netfilter and that packets first enter the ingress ifb1 and then exit through egress ifb1 was wrong. In fact redirecting traffic from an interface's ingress or egress to the ifb's egress is done directly by redirecting ("stealing") the packet and directly placing it in the egress of the ifb device. Mirroring/redirecting traffic to the ifb's ingress is currently not supported as also stated in the documentation, at least on my version: root@deb8:~# tc -V tc utility, iproute2-ss140804 root@deb8:~# dpkg -l | grep iproute ii iproute2 3.16.0-2 root@deb8:~# uname -a Linux deb8 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 x86_64 GNU/LinuxDocumentation I was able to get this information thanks to the following documentation:linux-ip.net Intermediate Functional Block dev.laptop.org ifb-README people.netfilter.org Linux Traffic Control Classifier-Action Subsystem Architecture PaperDebugging And some debugging using iptables -j LOG and tc filter action simple, which I used to print out messages to syslog when an icmp packet is flowing through the netdevs. The result is as follows: Jun 14 13:02:12 deb8 kernel: [ 4273.341087] simple: tc[eth1]ingress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341114] simple: tc[ifb1]egress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341229] ipt[PREROUTING]raw IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341238] ipt[PREROUTING]mangle IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341242] ipt[PREROUTING]nat IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341249] ipt[INPUT]mangle IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341252] ipt[INPUT]filter IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341255] ipt[INPUT]nat IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341267] ipt[OUTPUT]raw IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341270] ipt[OUTPUT]mangle IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341272] ipt[OUTPUT]filter IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341274] 
ipt[POSTROUTING]mangle IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341278] simple: tc[eth1]egress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341280] simple: tc[ifb0]egress_1The debugging was done using the following settings: iptables -F -t filter iptables -F -t nat iptables -F -t mangle iptables -F -t raw iptables -A PREROUTING -t raw -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]raw ' iptables -A PREROUTING -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]mangle ' iptables -A PREROUTING -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]nat ' iptables -A INPUT -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]mangle ' iptables -A INPUT -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]filter ' iptables -A INPUT -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]nat ' iptables -A FORWARD -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]mangle ' iptables -A FORWARD -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]filter ' iptables -A OUTPUT -t raw -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]raw ' iptables -A OUTPUT -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]mangle ' iptables -A OUTPUT -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]nat ' iptables -A OUTPUT -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]filter ' iptables -A POSTROUTING -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]mangle ' iptables -A POSTROUTING -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]nat ' iptables -A PREROUTING -t raw -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]raw ' iptables -A PREROUTING -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]mangle ' iptables -A PREROUTING -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]nat ' iptables -A INPUT -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]mangle ' iptables -A INPUT -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]filter ' iptables -A INPUT -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]nat ' iptables -A FORWARD -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]mangle ' iptables -A FORWARD -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]filter ' iptables -A OUTPUT -t raw -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]raw ' iptables -A OUTPUT -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]mangle ' iptables -A OUTPUT -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]nat ' iptables -A OUTPUT -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]filter ' iptables -A POSTROUTING -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]mangle ' iptables -A POSTROUTING -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]nat 'export TC="/sbin/tc"$TC qdisc del dev eth1 root $TC qdisc del dev eth1 ingress ip link set dev ifb0 down ip link set dev ifb1 down $TC qdisc del dev ifb0 root 
$TC qdisc del dev ifb1 root rmmod ifbmodprobe ifb numifbs=2$TC qdisc add dev ifb0 root handle 1: htb default 2 $TC class add dev ifb0 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev ifb0 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev ifb0 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[ifb0]egress" $TC qdisc add dev ifb0 ingress $TC filter add dev ifb0 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[ifb0]ingress"$TC qdisc add dev ifb1 root handle 1: htb default 2 $TC class add dev ifb1 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev ifb1 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev ifb1 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[ifb1]egress" $TC qdisc add dev ifb1 ingress $TC filter add dev ifb1 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[ifb1]ingress"ip link set dev ifb0 up ip link set dev ifb1 up$TC qdisc add dev eth1 root handle 1: htb default 2 $TC class add dev eth1 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev eth1 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev eth1 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[eth1]egress" pipe \ action mirred egress redirect dev ifb0 $TC qdisc add dev eth1 ingress $TC filter add dev eth1 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[eth1]ingress" pipe \ action mirred egress redirect dev ifb1
I would like to know the exact position of the following device in the packet flow for ingress traffic shaping:IFB: Intermediate Functional BlockI would like to better understand how packets are flowing to this device and exactly when this happens to understand what methods for filtering / classification can be used of the following:tc filter ... u32 ... iptables ... -j MARK --set-mark ... iptables ... -j CLASSIFY --set-class ...It seems hard to find documentation on this topic, any help where to find official documentation would be greatly appreciated as well. Documentation as far as I know:tc: tldp.org HOWTO, lartc.org HOWTO ifb: linuxfoundation.org, tc-mirred manpage, wiki.gentoo.org netfilter packet flow: kernel_flow, docum.org kptdFrom the known documentation I interpret the following: Basic traffic control figure 1 +-------+ +------+ |ingress| +---------+ |egress| |qdisc +--->netfilter+--->qdisc | |eth0 | +---------+ |eth0 | +-------+ +------+IFB? tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0 will result in? figure 2 +-------+ +-------+ +------+ +------+ |ingress| |ingress| |egress| +---------+ |egress| |qdisc +--->qdisc +--->qdisc +--->netfilter+--->qdisc | |eth0 | |ifb0 | |ifb0 | +---------+ |eth0 | +-------+ +-------+ +------+ +------+
How is the IFB device positioned in the packet flow of the Linux kernel
However, are there any other clever tools/methods to see if process listening on TCP port receives a message? You can use strace with -e trace=network. This is what it prints on accepting a TCP connection, receiving an HTTP request, sending an HTTP response and closing the connection: $ strace -v -f -e trace=network -p `cat logs/my_server.pid` Process 2361 attached with 44 threads - interrupt to quit [pid 2422] accept(11, {sa_family=AF_INET, sin_port=htons(56289), sin_addr=inet_addr("172.30.1.60")}, [16]) = 14 [pid 2422] getsockname(14, {sa_family=AF_INET, sin_port=htons(7754), sin_addr=inet_addr("172.30.1.60")}, [16]) = 0 [pid 2422] setsockopt(14, SOL_TCP, TCP_NODELAY, [1], 4) = 0 [pid 2422] setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 [pid 2422] getsockopt(14, SOL_SOCKET, SO_OOBINLINE, [515004615020773376], [4]) = 0 [pid 2388] recvfrom(14, "GET /OPEN_", 10, MSG_PEEK, NULL, NULL) = 10 [pid 2388] recvfrom(14, "GET /OPEN_SESSION?LOGIN=HAS_ADMI"..., 4096, 0, NULL, NULL) = 246 [pid 2388] sendto(14, "HTTP/1.1 200 OK\r\nServer: MY_SER"..., 192, 0, NULL, 0) = 192 [pid 2388] sendto(14, "<?xml version='1.0' encoding = '"..., 680, 0, NULL, 0) = 680 [pid 2361] --- SIGIO (I/O possible) @ 0 (0) --- [pid 2388] recvfrom(14, "", 4096, 0, NULL, NULL) = 0 [pid 2388] shutdown(14, 2 /* send and receive */) = 0
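For the sshd case from the question, attaching to the listening daemon rather than to a per-connection child would look roughly like this; pgrep -o picks the oldest matching process, which is normally the parent daemon:
strace -f -e trace=network -p "$(pgrep -o sshd)"
With -f the children forked for new sessions are followed as well, so you should see the accept() for the incoming TCP connection if the packet reaches sshd at all.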
According to tcpdump, my server receives the following TCP packet: 12:52:29.603233 00:19:e2:9e:df:f0 00:16:3e:6a:25:3f, ethertype IPv4 (0x0800), length 74: 10.10.10.65.38869 192.168.215.82.22: Flags [S], seq 567054335, win 5840, options [mss 1460,sackOK,TS val 2096335479 ecr 0,nop,wscale 0], length 0As seen above, it's a TCP SYN packet to TCP port 22, where in my case listens a sshd. I would like to see, if this TCP packet reaches the sshd process. I guess one option would be to restart the sshd in debug mode. However, are there any other clever tools/methods to see if process listening on TCP port receives a message? In case of TCP SYN packet, I guess it's the kernel TCP/IP stack which will send the TCP SYN+ACK and not the sshd?
Is there a way to see if process listening on TCP port receives a message?
Linux' bridge filter framework provides mechanisms by which the layer 2 bridge code can do an upcall to iptables (as well as arptables or ip6tables) and have filtering travel from layer 2 (bridged frames) through layer 3 (iptables with packets) and then back to layer 2. This is much more than the BROUTING chain, which only gives the logical choice of staying at layer 2 or continuing at layer 3 (by doing a frame dnat/broute to local). This layering violation allows, for example, leveraging the conntrack facility and having stateful firewalling available at layer 2. It also caused trouble when people didn't expect it to happen and got issues that were hard to debug, or hindered performance when it was (most of the time) not needed. So starting with kernel 3.18, the br_netfilter code was split from the bridge code and modularized, and is not automatically loaded anymore. To use this feature now with iptables, one has to modprobe br_netfilter and keep the sysctl parameter net.bridge.bridge-nf-call-iptables set to 1 (equivalent to echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables). This will now allow all the wonderful complexity of OP's link: ebtables/iptables interaction on a Linux-based bridge. Note that this module can also be automatically loaded when iptables uses the physdev match, and this can subtly alter the whole firewall behaviour if one is not careful when using both ebtables and iptables. Note: nftables (as well as iptables-nft) is also affected. The current status is considered a bit messy (because of the additional complexity of the layering violation) and some reorganization was done to have direct conntrack support in the bridge path without using br_netfilter anymore: since kernel 5.3 Linux provides the kernel module nf_conntrack_bridge, allowing nftables to handle connection tracking directly in the bridge layer without reaching the ip (or ip6/inet) families: connection tracking support for bridge.
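In command form, enabling the behaviour described above looks like this; the file paths under /etc/ are the usual systemd locations and may differ on your distribution:
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
# to make it survive a reboot:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf-call.conf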
Setup: VM1 --- Bridge -- VM2 VM1 and 2 are on same subnet. Bridge has 2 interfaces added to brctl bridge. When I block VM2 ip using uptables -A FORWARD -s (VM1 ip) -j DENY it doesn't work. I understand the packet never goes to network layer but this says "all iptables chains will be traversed while the IP packet is in the bridge code". Even MAC filtering doesn't work on iptables. ebtables work fine. What is wrong?
How does iptable work with linux bridge?
If you want a filter to capture on packets mathing 130.190.0.0/17: tcpdump net 130.190.0.0/17
I'm using nfsen and I need to apply a filter to get specific ip range and I can't find the syntax. I searched in the doc of nfdump and tcpdump but nothing. For now the netflows captured provides from multiples address and the ip range I want to get (and only those address) is from 130.190.0.0 to 130.190.127.255 with a mask /17 Or another way to explain this, I only want adress that start by 130.190 I don't care about other like 216.58, 51.254...etc there are a lot more
tcpdump ip range
You must be talking of (the former) project Application Layer Packet Classifier for Linux, which was implemented as patches for the 2.4 and 2.6 kernels. The major problem with this project is that the technology it proposed to control quickly outpaced the usefulness and efficacy of the implementation. The members of the project also had no time (and money) to keep investing in order to keep up with the technology, as far as I remember, and then sold the rights to the implementation, which killed for good an already problematic project. The challenges this project/technology has faced over the years are, in no particular order: adapting the patches to the 3.x/4.x kernel versions; scarcity of processing power - in several countries, nowadays the speed of even domestic gigabit broadband demands ASICs for efficient layer 7 traffic shaping; bittorrent started using heavy obfuscation; HTTPS started being used heavily to encapsulate several protocols and/or to avoid detection; peer-to-peer protocols stopped using fixed ports, and started trying to get their way through any open/allowed port; the rise of ubiquitous VoIP and real-time video, which makes traffic very sensitive to even small delays; the widespread use of VPN connections. Heavy R&D was then invested in professional traffic-shaping products. The state of the art ten years ago already involved specific ASICs and heavy use of heuristics for detecting encrypted/obfuscated traffic. At present, besides more than a decade of experience with advanced heuristics, and with the advancement of global broadband, traffic-shaping (and firewall) vendors are also sharing data globally in real time to enhance the efficacy of their solutions. They combine advanced heuristics with real-time profiling and sharing of data from thousands of locations in the world. It would be very difficult to put together an open source product that works as efficiently as an Allot NetEnforcer. When using open source solutions for the purpose of infrastructure bandwidth health, it is not so usual anymore to try to traffic-shape by the type/nature of traffic each IP address is generating at the network level. Nowadays, for generic traffic control and for protecting the bandwidth capacity of the infrastructure, the usual strategy (besides firewalling), without advanced traffic-shaping hardware, is to allocate a small part of the bandwidth per IP address.
The L7-filter project appears to be 15 years old, requires kernel patches with no support for kernels past version 2.6, and most of the pattern files it has appear to have been written in 2003. Usually when there's a project that is that old, and that popular, there are new projects to replace it, but I can't find anything more recent for Linux that does layer 7 filtering. Am I not looking in the right places? Was the idea of layer 7 filtering abandoned entirely for some reason? I would think that these days, with more powerful hardware, this would be even more practical than it used to be.
Is there a way to do layer 7 filtering in Linux?
There is a comment module for iptables which should do what you need. When adding a rule, one can add a comment like this: iptables -A INPUT -p icmp -j ACCEPT -m comment --comment "Allow incoming ICMP"
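Finding and purging the tagged rule later is then straightforward; the comment text in the grep below is whatever you used when adding the rule, and the rule number 3 is only an example:
iptables-save -t nat | grep -- '--comment'        # find the commented rules
iptables -t nat -L PREROUTING --line-numbers -n   # get the rule's number in its chain
iptables -t nat -D PREROUTING 3                   # delete it by number
Alternatively, you can delete the rule by repeating its exact specification with -D instead of -A.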
I am going to use iptables for port forwarding to listen on requests from my LAN on port 8080 and answer with container at port 80, like this: iptables -t nat -A PREROUTING -p tcp -d 192.168.1.15 --dport 8080 -j DNAT --to 10.0.3.103:80I am not sure if the rule is right (feel free to correct it), but the question is: How to annotate this rule so that I can easily find and purge it? If iptables cannot do this, what can?
How to tag IPTABLES rules?
Most folks I know who are working with the Linux network stack use the below diagram (which you can find on Wikipedia under CC BY-SA 3.0 license).As you can see, in addition to the netfilter hooks, it also documents XFRM processing points and some eBPF hook points. tc eBPF programs would be executed as part of the ingress and egress qdiscs. BPF networking hook points other than XDP and tc (e.g., at the socket level) are not documented here. As far as I know, IPVS is built on top of netfilter so it wouldn't directly appear here.
When it comes to packet filtering/management I never actually know what is going on inside the kernel. There are so many different tools that act on the packets, either from userspace (modifying kernel-space subsystems) or directly on kernel-space. Is there any place where each tool documents the interaction with other tools, or where they act. I feel like there should be a diagram somewhere specifying what is going on for people who aren't technical enough to go and read the kernel code. So here's my example: A packet is received on one of my network interfaces and I have:UFW iptables IPv4 subsystem (routing) IPVs eBPFOk, so I know that UFW is a frontend for iptables, and iptables is a frontend for Netfiler. So now we're on kernel space and our tools are Netfiler, IPVs, IPv4 and eBPF. Again, the interactions between Netfilter and the IPv4 subsystems are easy to find since these are very old (not in a bad way) subsystems, so lack of docs would be very strange. This diagram is an overview of the interaction:But what about IPVs and eBPF? What's the actual order in which kernel subsystems act upon the packets when these two are in the kernel? I always find amazing people who try to go into the guts and help others understand, for example, this description of the interaction between LVS and Netfilter. But shouldn't this be documented in a more official fashion? I'm not looking for an explanation here as to how these submodules interact, I know I could find it myself by searching. My question is more general as to why is there no official documentation that actually tries to explain what is going on inside these kernel subsystems. Is it documented somewhere that I just don't know of? Is there any reason not to try to explain these tools? I apologize if I'm not making any sense. I just started learning about these things.
How do packets flow through the kernel
What xx-tables is the best to filter (limit, not drop) ARP packets?iptables iptables starts from IP layer: it's already too late to handle ARP. arptables While specialized in ARP, arptables lacks the necessary matches and/or targets to limit rather than just drop ARP packets. It can't be used for your purpose. ebtables ebtables can be a candidate (it can both handle ARP and use limit to not drop everything). pro: − quite easy to use con: − it's working on ethernet bridges. That means if you're not already using a bridge, you have to create one and enslave your (probably unique) interface on it, for the sake of having it being usable at all. This comes with a price, both for configuration, and probably some networking overhead (eg: network interface is set promiscuous). − as it doesn't have the equivalent of iptables's companion ipset, limiting traffic is crude. It can't do per-source on-the-fly metering (so such source MACs or IPs must be manually added in the rules). nft (nftables) pro: − this tool was made with the goal to replace other tools and avoid duplication of code, like duplicating match modules (one could imagine that arptables could also have received a limit match, but that would just be the third implementation of such a match module, after ip(6)tables' xt_limit and ebtables' ebt_limit ones). So it's intended to be generic enough to use the same features at any layer: it can limit/meter traffic at ARP level while also doing it per source rather than globally. con: − some features might require a recent kernel and tools (eg: meter requires kernel >= 4.3 and nftables >= 0.8.3). − since its syntax is more generic, rules can be more difficult to create correctly. Sometimes documentation can be misleading (eg: non-working examples). tc (traffic control)? It might perhaps be possible to use tc to limit ARP traffic. As tc feature works very early in the network stack, its usage could limit ressource usage. But this tool is also known for its complexity. Even to use it for ingress rather than egress traffic requires steps. I didn't even try on how to do this.CONFIG_IP_NF_ARPFILTER As seen in previous point, this is moot: arptables can't be used. You need instead NF_TABLES_ARP or else BRIDGE_NF_EBTABLES (or maybe if tc is actually a candidate, NET_SCHED). That doesn't mean it's the only prerequisite, you'll have to verify what else can be needed (at least what to make those options become available, and various match kernel modules needed to limit ARP). What layer is best? I'd say using the most specific layer doing the job would be the most easier to handle. At the same time, the earlier handled, the less overhead is needed, but it's usually more crude and so complex to handle then. I'm sure there are a lot of different possible advices here. ARP can almost be considered being between layer 2 and 3. It's implemented at layer 2, but for example equivalent IPv6's NDP is implemented at layer 3 (using multicast ICMPv6). That's not the only factor to consider. Does ebtables have any advantage over arptables? See points 1 & 2. What is the best source on the internet to learn about limiting/filtering network traffic for different kind of packets and protocols? Sorry there's nothing that can't be found using a search engine with the right words. You should start with easy topics before continuing with more difficult. 
Of course SE is already a source of information.

Below are examples both for ebtables and nftables.

with ebtables
So let's suppose you have an interface eth0 and want to use ebtables with it, with IP 192.0.2.2/24. The IP that would be on eth0 becomes ignored once the interface becomes a bridge port. It has to be moved from eth0 to the bridge.

ip link set eth0 up
ip link add bridge0 type bridge
ip link set bridge0 up
ip link set eth0 master bridge0
ip address add 192.0.2.2/24 dev bridge0

Look at the ARP options for ebtables to do further filtering. As told above, ebtables is too crude to be able to limit per source unless you manually state each source with its MAC or IP address in rules. To limit to accepting one ARP request per second (any source considered):

ebtables -A INPUT -p ARP --arp-opcode 1 --limit 1/second --limit-burst 2 -j ACCEPT
ebtables -A INPUT -p ARP --arp-opcode 1 -j DROP

There are other variants, like creating a veth pair, putting the IP on one end and setting the other end as a bridge port, leaving the bridge without an IP (and filtering with the FORWARD chain, stating which interface traffic comes from, rather than INPUT).

with nftables
To limit to accepting one ARP request per second, and on-the-fly per MAC address:

nft add table arp filter
nft add chain arp filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule arp filter input arp operation 1 meter per-mac '{ ether saddr limit rate 1/second burst 2 packets }' counter accept
nft add rule arp filter input arp operation 1 counter drop
I have an embedded Linux on some network device. Because this device is pretty important I have to run many network tests (I have a separate device for that). These tests include flooding my device with ARP packets (normal packets, malformed packets, packets of different sizes, etc.). I read about the different xx-tables on the internet: ebtables, arptables, iptables, nftables etc. For sure I'm using iptables on my device.

1. What xx-tables is the best to filter (limit, not drop) ARP packets?
2. I heard something about the /proc/config.gz file, which is supposed to have information about what is included in the kernel. I checked CONFIG_IP_NF_ARPFILTER, which is not included. So - in order to use arptables - I should have a kernel compiled with the CONFIG_IP_NF_ARPFILTER option enabled, correct? And does the same go for, for example, ebtables?
3. I read that ebtables & arptables work on OSI layer 2 while iptables works on OSI layer 3. So I would assume that filtering anything on layer 2 is better (performance?) than on layer 3, correct?
4. I found somewhere on this website an answer suggesting the use of ebtables to filter ARP packets. Does ebtables have any advantage over arptables?
EXTRA ONE. What is the best source on the internet to learn about limiting/filtering network traffic for different kinds of packets and protocols?
Best way to filter/limit ARP packets on embedded Linux
Let's say we have an ipset named MYTESTSET, and that this ipset is of type hash:ip. It will store just IP addresses. Then match against your IP set, and after that against the connlimit match extension, with the parameters you want:

iptables -A INPUT -p tcp -m set --match-set MYTESTSET src -m connlimit --connlimit-above 1 --connlimit-saddr --connlimit-mask 32 -j DROP

This will do the following: for each source inside the IP set, connections will be counted, and if there is more than one (--connlimit-above 1), it will be dropped, thus limiting the number of connections per source in the ipset to 1. (You can also match the other way, using --connlimit-upto xxx and -j ACCEPT instead of DROP.) If you want to consider the whole set and allow 1 connection for all sources in the ipset, then set the --connlimit-mask switch to 0.
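In case the set doesn't exist yet, creating and populating it is one command per entry; a minimal sketch (the set name is taken from above, the addresses are just placeholders):

# create a hash:ip set and add a couple of source addresses to it
ipset create MYTESTSET hash:ip
ipset add MYTESTSET 203.0.113.10
ipset add MYTESTSET 203.0.113.11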
connlimit lets me limit the number of connections per client/service. How would I go about to combine such a rule with the IP sets available in more recent versions of the Linux kernel and netfilter?
How to combine connlimit with IP sets?
Use the following code:

#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,13,0)
    nf_register_net_hook(&init_net, reg);
#else
    nf_register_hook(reg);
#endif

Reference: init_net
I just started learning about netfilter and I was trying to make a simple netfilter module; all the tutorials and HOWTOs register a hook function with nf_register_hook(), but I could not find that function in Linux kernels above 4.13-rc1. As far as I understand, the nf_register_hook() function used to call the _nf_register_hook() function, which further called the nf_register_net_hook() function, iterating over each member of the net linked list, but then it gets a bit difficult for me to understand. With the nf_register_hook() function gone, I am in a fix as to how to register a hook. The nf_register_net_hook() function is still there, but I am not really sure how that works. So my question boils down to: how do I register a netfilter hook in kernels above 4.13-rc1?
nf_register_hook not found in linux kernel 4.13-rc2 and later
I doubt iptables alone will be enough, as TCP and UDP are fundamentally different protocols. You can forget setting up an IPsec VPN in such a scenario (ISP blocking all UDP ports).

Tunnel all the traffic via ICMP (best old-school solution I know of; lots of organizations still do not filter out any kind of ICMP). See https://github.com/DhavalKapil/icmptunnel:

'icmptunnel' works by encapsulating your IP traffic in ICMP echo packets and sending them to your own proxy server. The proxy server decapsulates the packet and forwards the IP traffic. The incoming IP packets which are destined for the client are again encapsulated in ICMP reply packets and sent back to the client. The IP traffic is sent in the 'data' field of ICMP packets. RFC 792, which is IETF's rules governing ICMP packets, allows for an arbitrary data length for any type 0 (echo reply) or 8 (echo message) ICMP packets. So basically the client machine uses only the ICMP protocol to communicate with the proxy server. Applications running on the client machine are oblivious to this fact and work seamlessly.

Also, as A.B. points out, there is UDP-to-raw tunneling software at https://github.com/wangyu-/udp2raw-tunnel

Or, as an alternative, set up an OpenVPN solution. If you manage to talk outside, run OpenVPN over port 53/UDP, or failing that, run it over TCP. Mind you, doing a VPN over TCP will be slower than UDP, but it works.

As for the actual question of changing an IP field: you want to look at the mangle table in iptables, however:
- I suspect your ISP is blocking that too.
- I know mangle supports modifying some IP fields, not sure about the one you need.
See https://serverfault.com/questions/467756/what-is-the-mangle-table-in-iptables

More alternatives: you can try GRE tunnels (protocol 47), see https://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.tunnel.gre.html (it is easier than trying to develop an application). Some organizations block this. Basically it is encapsulating IP/ICMP/UDP over protocol 47.

Or, if nothing else works, you can tunnel it via an IP tunnel over SSH (mind you, tun over SSH, not TCP port tunneling). See "IP Tunnel Over SSH With Tun": http://www.marcfargas.com/posts/ip-tunnel-over-ssh-with-tun/

By the way, no smart/adaptive/deep-inspection traffic shaper/firewall technology that could detect and block all the other methods in this thread will be able to block a TUN over SSH.

PS. It is hard to believe an ISP is blocking UDP, and furthermore NTP and DNS UDP ports.
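If the GRE route is worth exploring, a minimal sketch of a point-to-point GRE tunnel between two Linux hosts might look like this (all addresses are placeholders; the peer runs the mirror-image commands with local/remote swapped):

# local side: public IP 198.51.100.1, peer 203.0.113.1
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
ip link set gre1 up
ip addr add 10.10.10.1/30 dev gre1
# the peer would use 10.10.10.2/30 on its gre1 interface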
I have a wonderful ISP blocking all UDP traffic (except DNS to its own DNS servers). However, I want to use UDP for my VPN solution. I have root on both VPN endpoints, and both of them are running Linux. My idea is to simply overwrite the protocol field of my outgoing UDP packets so they look like TCP, and to do the reverse on the server side. Thus, the routers/firewall of my wonderful ISP will see bad TCP packets, while my VPN processes will be able to communicate over UDP. I strongly suspect that the firewall of the ISP is not smart enough to detect that something is not okay. Of course it would be a dirty trick, but no dirtier than simply forbidding the second most used IP protocol and selling this as an ordinary internet connection. As far as I know, there are some iptables rules for that, but which ones?
How can I transform UDP to TCP with netfilter?
Actually nftables's egress hook was added in kernel 5.16, and improved support (fwd) in 5.17. There were several attempts earlier, and one of them was NACK-ed at the same time it was initially committed, making it appear in Kernel Newbies for version 5.7, and apparently even nftables' wiki has it wrong by linking to Kernelnewbies for Linux 5.7 instead of Linux 5.16. Here is a relevant mailing list link from March 2020 (around kernel 5.7):

Subject: Re: [PATCH 00/29] Netfilter updates for net-next
From: David Miller <davem () davemloft ! net>
From: Alexei Starovoitov <[emailprotected]>
Date: Tue, 17 Mar 2020 20:55:46 -1000

On Tue, Mar 17, 2020 at 2:42 PM Pablo Neira Ayuso [emailprotected] wrote:
Add new egress hook, from Lukas Wunner.
NACKed-by: Alexei Starovoitov [emailprotected]
Sorry I just saw this after pushing this pull request back out. Please someone deal with this via a revert or similar.

It was subsequently reverted in this commit: netfilter: revert introduction of egress hook. This revert might not have been posted in all relevant mailing lists, adding a bit to the confusion.

Fast forward almost two years. Issues and concerns (among them about interactions with tc/qdisc) having been addressed, egress was added again in kernel 5.16 (9 Jan 2022). Kernelnewbies for Linux 5.16 has this entry:

Netfilter
Support classifying packets with netfilter on egress commit, commit, commit, commit

On Linux Kernel Driver Database:

CONFIG_NETFILTER_EGRESS: Netfilter egress support [...]
found in Linux kernels: 5.16–5.19, 5.19+HEAD

Likewise, nftables userland support for egress was officially added only after kernel support was committed in nf-next (so a bit before 5.16 was out) and was made available in the nftables 1.0.1 release:

This release contains new features available up to the Linux kernel 5.16-rc1 release: [...]
egress hook support (available since 5.16-rc1).

OP's ruleset is accepted on a kernel with the relevant kernel option CONFIG_NETFILTER_EGRESS, which has to be version >= 5.16, along with nftables >= 1.0.1.
I'm looking to apply firewall rules on egress to control DHCP output from a Docker container. I don't want the DHCP container to share the host's network stack as adding CAP_NET_ADMIN effectively gives the container control of the network stack. I notice here that an egress hook was added to netfilter in kernel 5.7 (uname -r says I have 5.10). According to information in this commit, I have added the following table: table netdev filterfinal_lan { chain egress { type filter hook egress device enp1s0 priority 0; policy accept; } }However when I attempt to apply the config it tells me it's not recognised: /etc/nftables.conf:107:20-25: Error: unknown chain hook type filter hook egress device enp1s0 priority 0; policy accept; ^^^^^^I'm unsure which version of nftables supports the egress hook, but my nft --version is nftables v0.9.8 (E.D.S.). Information on the egress hook seems quite elusive. What is required to enable the use of this hook?
NFTables Egress Hook?
This is a bug in Netfilter's ARP logs.There was a bug report about this problem. It was discovered that ARP didn't log using the correct data (it used data from link layer header instead of ARP's network layer). A patch was committed to fix this a few days later and appeared in kernel 5.19:netfilter: nf_log: incorrect offset to network header NFPROTO_ARP is expecting to find the ARP header at the network offset. In the particular case of ARP, HTYPE= field shows the initial bytes of the ethernet header destination MAC address. netdev out: IN= OUT=bridge0 MACSRC=c2:76:e5:71:e1:de MACDST=36:b0:4a:e2:72:ea MACPROTO=0806 ARP HTYPE=14000 PTYPE=0x4ae2 OPCODE=49782NFPROTO_NETDEV egress hook is also expecting to find the IP headers at the network offset. Fixes: 35b9395104d5 ("netfilter: add generic ARP packet logger") Reported-by: Tom Yan Signed-off-by: Pablo Neira AyusoIt appears this fix was not backported to vanilla kernel 5.10, possibly because the file to patch was not yet consolidated elsewhere so was in a different place, or this patch was just missed for a backport. When fixed (eg: vanilla 5.19.17): ah = skb_header_pointer(skb, nhoff, sizeof(_arph), &_arph);When not fixed (eg: vanilla 5.10.174, so including Debian bullseye's unless patched): ah = skb_header_pointer(skb, 0, sizeof(_arph), &_arph);Someone has to make a bug report about it. Meanwhile you could try a bullseye-backports kernel (eg currently: 6.1.12-1~bpo11+1) which is guaranteed to not have it anymore.Tested affected on today's bullseye kernel (5.10.162). Just logging any ARP table arp t { chain cout { type filter hook output priority filter; policy accept; log } }will log HTYPE=65535 when trying to reach a non-existent IP address on the LAN because it incorrectly uses the start of the broadcast MAC address and that's what is used as described in the patch. The same test done with the kernel in package linux-image-6.1.0-0.deb11.5-amd64-unsigned logs instead HTYPE=1 as should be.
I am currently testing netfilter / nftables / nft. As a starting point, I have made a ruleset that drops nearly everything in and out, and have written the rules so that every dropped packet is logged. As always, and as it probably has to be, I don't understand the very first thing the machine tries to do and that I notice in the logs: ... IN= OUT=enp0s3 ARP HTYPE=37 PTYPE=0x90bd OPCODE=21According to this document:Opcode 21 means MARS-Grouplist-Reply. Neither did I ever hear of it, nor did I find a single reference to it on the net, except in RFCs or IANA documents, but it is nowhere explained there. HTYPE 37 means HFI hardware. As with the opcode, I have never heard of such a thing, nor did I find any explanation on the net. I am pretty sure that I don't have that type of hardware. In this case, the networking hardware is a virtual NIC in QEMU. PTYPE 0x90bd: During today's research, I have seen a list of protocol types; unfortunately, I can't remember where. But anyway, 0x90bd for sure was not mentioned there.Could somebody please explain what the opcode, the hardware type and the protocol type mean, and why the system in question wants to send such packets? This happens in a vanilla debian Bullseye installation, up to date at the time of writing, in a virtual machine with virtualized standard x64 Intel hardware and virtio NIC.
What are ARP hardware type 37, opcode 21 and protocol type 0x90bd?
The wiki says what you tried is not yet implemented: you have to obtain the handle to delete a rule. The example is:

$ sudo nft -a list table inet filter
table inet filter {
  ...
  chain output {
    type filter hook output priority 0;
    ip daddr 192.168.1.1 counter packets 1 bytes 84 # handle 5
  }
}

The -a shows the assigned handle "5" as a comment, so you can:

$ sudo nft delete rule filter output handle 5
Current system:Distro: Ubuntu 20.04 kernel: 5.4.0-124-generic nft: nftables v0.9.3 (Topsy)I am new and learning nftables, Here is my nft ruleset currently: $sudo nft list ruleset taxmd-dh016d-02: Wed Sep 21 12:09:08 2022table inet filter { chain input { type filter hook input priority filter; policy accept; } chain forward { type filter hook forward priority filter; policy accept; } chain output { type filter hook output priority filter; policy accept; ip daddr 192.168.0.1 drop } }I want to delete ip daddr 192.168.0.1 drop from the output chain. I tried the following: sudo nft del rule inet filter output ip daddr 192.168.0.1 drop sudo nft delete rule inet filter output ip daddr sudo nft 'delete element ip daddr 192.168.0.1 drop' sudo nft 'delete element ip' sudo nft delete rule filter output ip daddr 192.168.0.1 dropBut nothing works, I keep getting this error: Error: syntax error, unexpected inet delete inet filter chain output ip daddr 192.168.0.1 drop ^^^^Why can't I delete a specific element? I would think this would be straight forward, but I am missing something.
How do I delete a specific element in a chain in nftables?
You're probably adding a rule intended for the nat table in the filter table block suitable for iptables-restore, and with inappropriate syntax. Until you know how to edit /etc/iptables/rules.v4 directly (by studying the output of iptables-save), you should do this instead:

- be careful, since the rule will be applied immediately; change the current running firewall rules with:
  iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
- study the results: are they worth changing the configuration?
- if worthy, ask netfilter-persistent to save the rules. It will in turn run iptables-persistent's plugins which will use iptables-save under the hood:
  netfilter-persistent save

You will notice that the new configuration file (a file suitable for use by iptables-restore) now has a block for the nat table with your rule (and without -t nat), separate from the filter table block.
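For reference, the nat table block that ends up in /etc/iptables/rules.v4 (iptables-save / iptables-restore format) typically looks roughly like this; the exact chains and counters may differ on your system:

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o wlan0 -j MASQUERADE
COMMIT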
I use netfilter-persistent to manage a firewall. I would like to share a connection between two interfaces using masquerading (example, or another). When I run those operations by invoking iptables it works. But if I try to update firewall rules stored in /etc/iptables/rules.v4 adding such a line: -t nat -A POSTROUTING -o wlan0 -j MASQUERADELines starting with -t make netfilter-persistent fail to run and the firewall is not updated: Nov 16 11:51:32 helena systemd[1]: netfilter-persistent.service: Main process exited, code=exited, status=1/FAILURE Nov 16 11:51:32 helena systemd[1]: Failed to start netfilter persistent configuration.So I am wondering if it is possible to store this kind of rules with netfilter-persistent orIs it a known limitation? Is there a good reason why it cannot work? Is there a hack to make it work?
Masquerade rule with netfilter-persistent
The identity of a TCP connection is defined by a set of four things:the IP address of endpoint A the IP address of endpoint B the port number of endpoint A the port number of endpoint BThe TCP protocol standard says that if any of these four things are changed, the packet must not be considered part of the same connection. As a result, it makes no sense to start applying a NAT rule to a SYN/ACK packet if the initial SYN was also not NATted. You must either apply the same kind of NAT mapping for the entire connection from the start to finish, or not NAT it at all; any attempt to add or change a NAT mapping mid-connection will just cause the TCP connection to fail. This is a fundamental fact of the TCP protocol, and the Linux iptables/netfilter code is designed to take it into account. In your case 2), the SYN/ACK is preceded by a SYN from 10.b/16. That SYN has a source of 10.b/16, so it does not match the MASQUERADE rule and gets routed with addresses kept as-is. Then, if the SYN/ACK from 10.a/16 back to 10.b/16 would be translated, the sender of the original SYN would no longer recognize it as a response to its own SYN, as the source IP + destination IP + source port + destination port combination would be different from what is expected for a valid response. Essentially, the TCP protocol driver in the system that initiated the connection in 10.b/16 would then be thinking: "Sigh. The 10.a.connection.destination is not answering. And 10.b.NAT.system is bothering me with clearly spurious SYN/ACKs: I'm attempting to connect 10.a.connection.destination, not him. If I have time, I'll send a RST or two to 10.b.NAT.system; hopefully he realizes his mistake and stops bothering me."
I observed that the MASQUERADE target does not match on packets in the reply direction (in terms of netfilter conntrack). I have a single, simple -t nat -A POSTROUTING -s 10.a.0.0/16 -d 10.b.0.0/16 -j MASQUERADE rule, nothing else besides ACCEPT policies on all chains, and it seems that: case 1) SYN packets of connection initialization attempts from the 10.a/16 network get NAT-ed (this is OK), while case 2) SYN/ACK packets, again from the 10.a/16 network (in response to a SYN from 10.b/16, i.e. the initiator is 10.b/16 in this case), do not get translated, but the src address is kept as-is and simply routed. I'm not sure whether this is the expected behaviour or whether I missed something. I mean, I don't want it to behave any other way, and everything seems to be working, but the documentation did not confirm to me that this is the factory-default behaviour of the MASQUERADE target. Could you confirm it? Thanks.
in iptables, does MASQUERADE match only on NEW connections (SYN packets)?
Yes, the string extension is still supported (see also your local man iptables-extensions documentation). No, you can’t match against encrypted payloads — they’re still encrypted in the filtering layer...
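For reference, a typical rule using the string match might look like the sketch below (the pattern is just a placeholder); it only ever sees the bytes as they appear on the wire, so for HTTPS that means ciphertext:

# drop plain-HTTP traffic containing a given byte pattern (Boyer-Moore search)
iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "badstuff" -j DROP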
I was reading the book Linux Firewalls - Attack Detection and Response (by M. Rash, No Starch Press, 1 Ed., Oct. 2007). In one of its chapter it discusses string matching using iptables. I was wondering:if string matching is still supported by Linux kernel and iptables/Netfilter if yes, can string matching search the encrypted payloads (e.g. HTTPS packets)?I searched the net but most of the links are old, and the book itself is published in 2007.
Is iptables string matching still supported?
Yep, see the Debian man page:

[!] -o, --out-interface name
Name of an interface via which a packet is going to be sent (for packets entering the FORWARD, OUTPUT and POSTROUTING chains). When the "!" argument is used before the interface name, the sense is inverted. If the interface name ends in a "+", then any interface which begins with this name will match. If this option is omitted, any interface name will match.
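For illustration, a couple of rules using -o in the chains where it is valid (the interface names are just examples):

# match forwarded traffic leaving via eth1
iptables -A FORWARD -o eth1 -j ACCEPT
# match locally generated packets leaving via any ppp* interface
iptables -A OUTPUT -o ppp+ -j ACCEPT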
Reading in detail about iptables / netfilter here, when I read about the -o argument:"Indicates the interface through which the outgoing packets are sent through the INPUT, FORWARD, and PREROUTING chain."This seems to me to be wrong as they have written the same thing for the -i argument. It seems to me it should instead be:"Indicates the interface through which the outgoing packets are sent through the OUTPUT, FORWARD, and POSTROUTING chain."correct?
-o in iptables is for specifying the interface for OUTPUT, FORWARD, and POSTROUTING Correct?
The nat hook (as all other hooks) is provided by Netfilter to nftables. The NAT hook is special: only the first packet of a connection is traversing this hook. All other packets of a connection already tracked by conntrack aren't traversing any NAT hook anymore but are then directly handled by conntrack to continue performing the configured NAT operations for this flow. That explains why you should never use this hook to drop: it won't affect already tracked connections, NAT-ed or not. Just change the hook type from type nat to type filter for the part dropping traffic. Contrary to iptables a table is not limited to one hook type and actually has to use multiple types for this kind of case, because the set is local to a table and can't be shared across two tables. For the same reason, this table should logically not be called inet nat anymore because it's not just doing NAT (but I didn't rename it). So in the end: nftables.conf: table inet nat { set blocked { type ipv4_addr } chain block { type filter hook postrouting priority 0; policy accept; ip daddr @blocked counter drop } chain postrouting { type nat hook postrouting priority 100; policy accept; oifname "ppp0" masquerade iifname "br-3e4d90a574de" masquerade } }Now:all packets will be checked by the inet nat block chain allowing the blocked set to immediately affect the traffic rather than having to wait for the next flow to be affected.as usual only the first packet of a new flow (tentative conntrack state NEW) will traverse the inet nat postrouting chain.Please also note that iifname "br-3e4d90a574de" masquerade; requires a recent enough kernel (Linux kernel >= 5.5): before only filtering by outgoing interface was supported in a postrouting hook. Also, this looks like a Docker-related interface, and adding this kind of rule might possibly interact with Docker (eg: it might do NAT on traffic between two containers in the same network) because it's referencing a bridge interface. That's because Docker makes bridged traffic seen by nftables (as well as iptables) by loading the br_netfilter module).
I have the following in nftables.conf:

table inet nat {
    set blocked {
        type ipv4_addr
    }
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        ip daddr @blocked counter drop;
        oifname "ppp0" masquerade;
        iifname "br-3e4d90a574de" masquerade;
    }
}

The set blocked is a named set which can be updated dynamically. It is in this set I wish to have a collection of IPs to block, updated every n minutes. In order to preserve atomicity, I am not using the following (updateblock.sh) to update the list:

#!/bin/bash
sudo nft flush set inet nat blocked
sudo nft add element inet nat blocked {$nodes}

But rather blockediplist.ruleset:

#!/usr/sbin/nft -f
flush set inet nat blocked
add element inet nat blocked { <example_ip> }

I use the following order of commands:

nft -f /etc/nftables.conf
nft -f blockediplist.ruleset

However the changes in blockediplist.ruleset are not immediately applied. I know the ruleset now contains the new IPs because the IPs are present in nft list ruleset and nft list set inet nat blocked. Even just with nft add element inet nat blocked { <IP> }, the IP is not instantly blocked. An alternative method would be to define a new set and reload nftables.conf in its entirety, though I think this would be a poor and inefficient way of doing things. Is there a way to force the changes in blockediplist.ruleset to be applied immediately?

UPDATE: I've just discovered that when I block an IP which I haven't pinged, it gets blocked instantly. However when adding an IP to the blocklist mid-ping it takes a while for it to be blocked. When I try a set with netdev ingress the IP gets blocked instantly. Maybe this avenue of investigation might reveal something.
nftables Named Set Update Delay
Here's the "Packet flow in Netfilter and General Networking" schematic (not reproduced here).

While the ingress packet was marked in PREROUTING, the locally generated reply packet did egress through OUTPUT: there's no rule to mark it there, so there's no mark and the route is different. Altering the packet in mangle/OUTPUT, including changing meta-information such as the mark, triggers a reroute check. This reroute should switch the route from eth0 to ge-0.0.0-Iosv6 (note: with nftables instead of iptables, the dedicated route chain type is required to have this effect). This rule will do that:

iptables -t mangle -A OUTPUT -s 6.6.6.6 -j MARK --set-mark 1

Instead of marking independent packets with specific rules in both ways, it's possible to mark the whole flow automatically (as tracked by conntrack). The connmark match and its CONNMARK target counterpart can be used. This blog gives examples of use: Netfilter Connmark. For this case, instead of the iptables rule above:

- this should be the last rule in mangle/PREROUTING:
  iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j CONNMARK --save-mark
- this should be the first rule in mangle/OUTPUT so it can still be altered if needed. This will trigger a reroute check:
  iptables -t mangle -I OUTPUT -m connmark ! --mark 0 -j CONNMARK --restore-mark

There are also a few things to know and caveats quite difficult to predict reliably without testing:

- Toggling fwmark_reflect (eg: sysctl -w net.ipv4.fwmark_reflect=1) might have been enough for this specific case and used instead of the rules above, but wouldn't help for a more general case. Likewise there's tcp_fwmark_accept to ease the TCP case. There's no equivalent for other protocols like UDP.
- Sometimes the route fails before the reroute check because of Strict Reverse Path Forwarding and the packet is dropped early, before it got a chance to be marked and rerouted. Obviously that's not the case here (SRPF might even not be enabled), but should this happen, relaxing the check to Loose Mode should be done on one of the involved interfaces (tests required to figure out which one) by changing rp_filter settings (eg: sysctl -w net.ipv4.conf.eth0.rp_filter=2).
- Sometimes some additional routes from the main table must be duplicated in the additional table, since it's read first before falling back to the main table and might not match. It's difficult to figure out when it's required, especially when marks are involved. Eg:
  ip route add table threehundred 192.168.100.2/32 dev ge-0.0.0-Iosv6
- The command ip route get ..., even if supplied with the adequate mark, appears to not always predict accurately what is currently happening when marks and iptables are involved.
- The behaviour related to interactions between mark, route and maybe ip route get prediction can be altered with the undocumented src_valid_mark toggle (also via sysctl). Use it only if it appears to fix things.
- UDP server behaviour in case of policy routing can be found to differ from TCP server behaviour for reasons complex to explain. Using marks can only increase the complexity.
I have a following mangle table rule in a lab server which marks UDP traffic with 1 if the destination address is 6.6.6.6: $ sudo iptables -t mangle -L PREROUTING 2 -v -n --line-numbers 2 17 884 MARK udp -- ge-0.0.0-Iosv6 * 0.0.0.0/0 6.6.6.6 MARK set 0x1 $6.6.6.6/32 is configured on lo in that server. Each time I execute the traceroute towards 6.6.6.6, the rule counter above increases which is expected. In other words, the packets seem to get marked. My routing policy database looks like this: $ ip rule show 0: from all lookup local 32764: from all fwmark 0x2 lookup twohundred 32765: from all fwmark 0x1 lookup threehundred 32766: from all lookup main 32767: from all lookup default $.. and the table threehundred looks like this: $ ip r sh table threehundred default via 192.168.100.2 dev ge-0.0.0-Iosv6 $However, the marked packets are not routed based on the entry in table threehundred, but rather based on the entry in table main. I can confirm with the tcpdump, that the UDP packets ingress the server via ge-0.0.0-Iosv6, but ICMP port unreachable reply is sent out via eth0 which is associated with the default route in the main table. As I mentioned earlier, the mangle table PREROUTING rule #2 is incremented during this. What might cause such behavior? I'm running Ubuntu 16.04.6 LTS.
marked packets not detected by routing policy database
iptables -t mangle -A PREROUTING -m dscp --dscp-class AF12 -j CONNMARK --set-xmark 12
iptables -t mangle -A POSTROUTING -m connmark --mark 12 -j DSCP --set-dscp-class AF12

(not 100% dynamic, as the DSCP value needs to be known in advance in order to get a match)
I have seen that connmark or ctinfo could work for this, but couldn't find a simple, effective command to make it work (I'm not familiar with this area). The command can be applied to the TCP termination node or to any Linux node acting as an intermediary router.
Example command to set same DSCP value in the IP header for return packets within the same TCP connection
You have two problems:

1. Using a too old version of nftables. I could reproduce the error Error: syntax error, unexpected saddr, expecting comma or '}' using nftables version 0.7 (as found in Debian 9). Meters (nftables wiki) suggests nftables >= 0.8.1 and kernel >= 4.3. Upgrade nftables, e.g. on Debian 9 by using the stretch-backports (stretch-backports, not buster-backports) version 0.9.0-1~bpo9+1; sorry, you'll have to search how to do that on other distributions.

2. Using the wrong table, as told by the command (when using nftables 0.9.2):

# nft add rule ip filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter accept
Error: No such file or directory; did you mean set ‘rate_limit’ in table inet ‘filter’?

Indeed, many objects are local to the table where they are declared. So you can't declare it in the inet filter "namespace" and use it in the ip filter "namespace". That's a difference with, for example, iptables + ipset, where the same ipset set can be used in any table.

This will work (once you get a recent enough nftables):

nft add rule inet filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter accept

Or alternatively you can move the meter definition back to the ip filter table.
I have the below nftables rule to add a connection rate meter:

nft add rule ip filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter accept

It generates the error:

Error: syntax error, unexpected saddr, expecting comma or '}'
add rule ip filter input tcp dport @rate_limit ct state new meter syn4-meter { ip saddr . tcp dport timeout 5m limit rate 20/minute } counter accept
^^^^^

nftables ruleset:

table ip filter {
    chain input {
        type filter hook input priority 0; policy accept;
    }
}
table inet filter {
    set rate_limit {
        type inet_service
        size 50
    }
    chain input {
        type filter hook input priority 0; policy accept;
    }
}

Initially I tried just inet, but due to the error I added ip to see if it makes any difference, without success. Any pointers?
nftables meter error: syntax error, unexpected saddr, expecting comma or '}'
Various Internet Control Message Protocol (ICMP) packets may be "related" to some protocol's connection (or an attempt at such), but these ICMP packets are different than the protocol that caused them, hence the "related" notion. This may happen when a host or firewall rejects a TCP or UDP connection attempt with a destination unreachable ICMP packet; allowing RELATED lets that related ICMP packet through. (TCP does have a RST, so may or may not issue a related ICMP response, and firewall admins may or may not allow ICMP replies...) Application level protocols (such as FTP) will each require a custom module, as custom code is necessary to dig into the packets and figure out whether and how it relates to anything else netfilter knows about. These modules could be written by anyone for any application, though inspecting what nf_conntrack_* files are available may be a good place to start for a list: % print -l /lib/modules/3.10.0-327.13.1.el7.x86_64/kernel/net/netfilter/nf_conntrack_*(:t) nf_conntrack_amanda.ko nf_conntrack_broadcast.ko nf_conntrack_ftp.ko ...
conntrack match module --ctstate argument supports RELATED packet state. How does Netfilter know that for example in case of active FTP a connection from FTP server data port(TCP port 20) to the unprivileged data port the client specified earlier, is a RELATED connection? Does Netfilter have some modules where each protocol, which is supported by RELATED, is described? Last but not least, is there a list of protocols which are supported by this RELATED state?
How does Netfilter understand that packet is RELATED?
The problem was a missing module, XT_TCPUDP. Here is the full list of dynamically loaded modules for my command:

xt_nat 1527 1 - Live 0xbf12f000
xt_tcpudp 1961 1 - Live 0xbf12b000
iptable_nat 2396 1 - Live 0xbf127000
nf_conntrack_ipv4 11354 1 - Live 0xbf120000
nf_defrag_ipv4 1331 1 nf_conntrack_ipv4, Live 0xbf11c000
nf_nat_ipv4 3401 1 iptable_nat, Live 0xbf118000
nf_nat 13364 3 xt_nat,iptable_nat,nf_nat_ipv4, Live 0xbf10f000
nf_conntrack 72079 4 iptable_nat,nf_conntrack_ipv4,nf_nat_ipv4,nf_nat, Live 0xbf0f2000
ip_tables 10836 1 iptable_nat, Live 0xbf0eb000
x_tables 16429 3 xt_nat,xt_tcpudp,ip_tables, Live 0xbf0e1000
I am trying to use DNAT on a new custom Linux target, but I get an error with the following basic command: #iptables -t nat -A PREROUTING -d 10.110.0.250 -p tcp --dport 9090 -j DNAT --to 10.110.0.239:80 $iptables: No chain/target/match by that name.I think all modules are correctly loaded: # lsmod | grep ip ipt_MASQUERADE 1686 1 - Live 0xbf15c000 iptable_nat 2396 1 - Live 0xbf150000 nf_conntrack_ipv4 11354 1 - Live 0xbf149000 nf_defrag_ipv4 1331 1 nf_conntrack_ipv4, Live 0xbf145000 nf_nat_ipv4 3401 1 iptable_nat, Live 0xbf141000 nf_nat 13364 4 ipt_MASQUERADE,xt_nat,iptable_nat,nf_nat_ipv4, Live 0xbf138000 nf_conntrack 72079 6 ipt_MASQUERADE,xt_conntrack,iptable_nat,nf_conntrack_ipv4,nf_nat_ipv4,nf_nat, Live 0xbf11b000 ip_tables 10836 1 iptable_nat, Live 0xbf114000 x_tables 16429 4 ipt_MASQUERADE,xt_conntrack,xt_nat,ip_tables, Live 0xbf10a000The forwarding is active: # cat /proc/sys/net/ipv4/ip_forward 1strace doesn't give me any clue about the problem: # ... socket(PF_LOCAL, SOCK_STREAM, 0) = 3 bind(3, {sa_family=AF_LOCAL, sun_path=@"xtables"}, 10) = 0 socket(PF_INET, SOCK_RAW, IPPROTO_RAW) = 4 fcntl64(4, F_SETFD, FD_CLOEXEC) = 0 getsockopt(4, SOL_IP, 0x40 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [84]) = 0 getsockopt(4, SOL_IP, 0x41 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [992]) = 0 setsockopt(4, SOL_IP, 0x40 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1264) = -1 ENOENT (No such file or directory) close(4) = 0 write(2, "iptables: No chain/target/match "..., 46iptables: No chain/target/match by that name. ) = 46 exit_group(1) = ? +++ exited with 1 +++What is going wrong? [EDIT] I found that if I remove the destination port the command is working iptables -t nat -A PREROUTING -d 10.110.0.250 -p tcp -j DNAT --to 10.110.0.239:80[/EDIT Thanks.
iptables DNAT: 'No chain/target/match by that name'
IPv4 over Ethernet relies on ARP to resolve the Ethernet MAC address of the peer in order to send later unicast packets to it. As you're filtering any ARP request since there's no exception for it, those requests can't succeed, and after a typical 3s timeout you'll get the standard "No route to host". There won't be any IPv4 packet sent from 192.168.1.10 to 192.168.1.1 or from 192.168.1.1 to 192.168.1.10, since the previous step failed. So add this rule for now, and see later how to fine-tune it if you really need to:

nft add rule bridge vmbrfilter forward ether type arp accept

If the bridge is VLAN aware (vlan_filtering=1), or probably even if not (ie: a bridge manipulating frames and not really knowing more about them, which is probably not good if two frames from two different VLANs have the same MAC address), then here's a rule to allow ARP packets within VLAN tagged frames:

nft add rule bridge vmbrfilter forward ether type vlan vlan type arp accept

But anyway, IP will have the same kind of problem without adaptation. This requires more information about the VLAN setup. Here's a ruleset allowing tagged and non-tagged frames alike, requiring duplication of rules. ARP having no further expression to filter it thus auto-selecting the protocol/type, it requires an explicit vlan type arp.

table bridge vmbrfilter        # for idempotency
delete table bridge vmbrfilter # for idempotency

table bridge vmbrfilter {
    chain forward {
        type filter hook forward priority -100; policy drop;
        ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept
        ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept
        ether type arp accept
        ether type vlan ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept
        ether type vlan ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept
        ether type vlan vlan type arp accept
    }
}

Also, older versions of nftables (eg: OP's 0.9.0) might omit mandatory filter expressions in the output when they don't have additional filters (eg, but not present in this answer: ether type vlan arp htype 1 (display truncated) vs vlan id 10 arp htype 1), so their output should not be reused as input in configuration files. One can still tell the difference and know the additional filter expression is there by using nft -a --debug=netlink list ruleset. As far as I know there's no support yet for arbitrary encapsulation/decapsulation of protocols in nftables, so duplication of rules appears unavoidable (just look at the bytecode to see how the same fields are looked up for the VLAN and non-VLAN cases: different offset).
I'd like to take a default drop approach to my firewall rules. I've created some rules for testing purposes: table bridge vmbrfilter { chain forward { type filter hook forward priority -100; policy drop; ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept; ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept; } }However traffic between 192.168.1.1 and 192.168.1.10 is still blocked. To see if it is a syntax issue, I tried: table bridge vmbrfilter { chain forward { type filter hook forward priority -100; policy accept; ip saddr 192.168.1.10 ip daddr 192.168.1.1 drop; ip saddr 192.168.1.1 ip daddr 192.168.1.10 drop; } }This however succeeds in blocking traffic between the two IPs. So I don't have a clue as to why my accept rules aren't being hit. The nftables wiki says:The drop verdict means that the packet is discarded if the packet reaches the end of the base chain.But I literally have accept rules in my chain which should be matching. Have I not understood something correctly? Thanks in advance for any help. Update: A.B's ARP rule suggestion is helping. However I've discovered that my VLAN tagging is causing issues with my firewall rules. The ARP rule allows tagged traffic in through the physical NIC, the ARP replies are making it over the bridge but get blocked on exit from the physical NIC.
Nftables default drop chain problem
Shorewall is a tool for configuring iptables/netfilter firewall rules, so the documentation for netfilter is a more effective place to look. It says:

It is perfectly legal to specify an interface that currently does not exist; the rule will not match anything until the interface comes up. This is extremely useful for dial-up PPP links (usually interface ppp0) and the like. As a special case, an interface name ending with a `+' will match all interfaces (whether they currently exist or not) which begin with that string. For example, to specify a rule which matches all PPP interfaces, the -i ppp+ option would be used.

Upon cursory inspection, running Shorewall with interfaces that do not exist seems to create -i and -o rules, which would work. This setup would cause problems with features which require knowledge of IP/routing information to function, such as routefilter.
I am planning to use Shorewall to filter traffic that originates from a virtual interface created by OpenVPN (lets call it tap0). If OpenVPN did not successfully create this interface before Shorewall started, but the interface was defined in /etc/shorewall/interfaces, would traffic be filtered if the interface was successfully created later? Would this depend on a script hook, or does Shorewall pre-create rules for interfaces that are defined in the configuration, but do not exist?
Shorewall to protect interfaces that are not yet defined
The root of the issue was a simple sequencing problem regarding the VLAN interface. My network interface persistence file was initially misconfigured:

# WAN vlan 832 internet
auto enp1s0.832
iface enp1s0.832 inet dhcp
    up ip link set enp1s0.832 type vlan egress 0:0 1:0 2:0 3:0 4:0 5:0 6:6 7:0
iface enp1s0.832 inet6 dhcp
    up ip link set enp1s0.832 type vlan egress 0:0 1:0 2:0 3:0 4:0 5:0 6:6 7:0
    request_prefix 1
    accept_ra 2

The bad part is the "up" instruction. The egress mapping is done too late, when the initial ARP / DHCP / NDP has already happened. The fix is really simple: it suffices to use pre-up instead:

# WAN vlan 832 internet
auto enp1s0.832
iface enp1s0.832 inet dhcp
    pre-up ip link set enp1s0.832 type vlan egress 0:0 1:0 2:0 3:0 4:0 5:0 6:6 7:0
iface enp1s0.832 inet6 dhcp
    pre-up ip link set enp1s0.832 type vlan egress 0:0 1:0 2:0 3:0 4:0 5:0 6:6 7:0
    request_prefix 1
    accept_ra 2

With that, the initial ARP/DHCP/NDP handshake is done with the right QoS priority.
My linux home router sits between my ISP (Orange) and my home network. On the WAN side, Orange provide internet in a VLAN tagged 832. Some control messages (ARP, DHCP, ICMPv6 "router discovery" types, DHCPv6) need to be replied to Orange with: - VLAN priority = 6 - IPv4 or IPv6 DSCP = "CS6" (6 bits 0x30, or 48 in decimal notation) First problem, for the boot sequence DHCP v4 messages, isc-dhclient needs to use a raw ethernet packet socket, which bypass the linux kernel IP stack by design. So one cannot use netfilter to assign IPv4 DSCP or Meta Class, but let's leave that aside for now. Here's a dump of my nftables configuration, relevant to the alteration of IP DSCP and Meta Priority: me@debox:~$ sudo /usr/sbin/nft list ruleset table inet fltr46 { chain assign-orange-prio { ip version 4 udp sport { bootps, bootpc} ip dscp set cs6 meta priority set 0:6 counter packets 0 bytes 0 comment "isc-dhclient LPF socket bypass netfilter" icmpv6 type { nd-neighbor-solicit, nd-router-solicit} ip6 dscp set cs6 meta priority set 0:6 counter packets 8 bytes 480 udp sport { dhcpv6-client, dhcpv6-server} ip6 dscp set cs6 meta priority set 0:6 counter packets 4 bytes 1180 } chain postrouting { type filter hook postrouting priority 0; policy accept; oifname vmap { "enp1s0.832" : goto assign-orange-prio} } chain output { type filter hook output priority 0; policy accept; oifname vmap { "enp1s0.832" : goto assign-orange-prio } } } table arp arp4 { chain output { type filter hook output priority 0; policy accept; oifname ! "enp1s0.832" accept meta priority set 0:6 counter packets 851 bytes 35742 } }My vlan 832 configuration is as follows: me@debox:~$ sudo cat /proc/net/vlan/enp1s0.832 enp1s0.832 VID: 832 REORDER_HDR: 1 dev->priv_flags: 1001 Device: enp1s0 INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0 EGRESS priority mappings: 6:6Which means, for egress, class 6 packets -> VLAN prio 6. The nftables counters for DHCPv6, ICMPv6 "router", and ARP are incremented, as expected. However, I notice problems in my wire shark capture (done by swich port mirroring):DHCPv6: OK. DSCP = CS6 and VLAN prio = 6 ICMPv6: not OK. DSCP = CS6 but VLAN prio = 0 ARP: not OK. VLAN prio = 0 IPv4 DHCP lease renewal packets, sent through a regular UDP socket, are also OK (DSCP+VLAN prio).VLAN priority is not applied correctly to ARP and ICMPv6 packets. Is there a way to debug further why the meta class does not translate correctly to VLAN prio, for ARP and ICMPv6 messages generated by the linux kernel?
Packet meta class applied, but captured VLAN priority is wrong
The question is about the fwmark, or just mark (historically named nfmark but renamed to just mark at the same time it stopped depending on netfilter. The word nfmark is now a bit misleading, but still present in a few places which weren't or couldn't be updated). This mark is done on the packet's skbuff, while the conntrack mark (aka ctmark etc.) is done on a conntrack entry.

The easiest way to get a packet's mark is to log it via iptables' LOG target. Something like this (with a limit to avoid flooding):

iptables -A INPUT -m mark ! --mark 0 -m limit --limit 8/min --limit-burst 12 -j LOG --log-prefix "IPTables-Marks: "

should log packets with a mark. The mark (when non-zero, which is the case with the match chosen above) is displayed at the end of the log line. (From OP's comment) example:

kern.debug kernel: [11007.886926] IPTables-Marks: IN=wlan0 OUT= MAC=e4:xx:xx:xx:97:32:28:xx:xx:xx:fb:60:08:00 SRC=192.168.8.10 DST=192.168.8.1 LEN=40 TOS=0x00 PREC=0x00 TTL=128 ID=23698 DF PROTO=TCP SPT=36764 DPT=22 WINDOW=254 RES=0x00 ACK URGP=0 MARK=0x2

There are other methods, better suited for automation but harder to implement, like iptables' NFLOG target, intended to "send" the whole packet to a logging program listening on a netlink socket, which could retrieve the mark with nflog_get_nfmark() (old naming...). tcpdump can listen to the nflog facility (try tcpdump --list-interfaces) and display selected packets, which can sometimes be handier than logs to debug, but I don't know of a way to have it also display the mark (it doesn't know about nflog_get_nfmark() or how to ask libpcap about it).
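As a sketch of that NFLOG route (the group number 5 is arbitrary, and your tcpdump build needs nflog support): copy marked packets to an nflog group and capture that group with tcpdump, keeping in mind tcpdump itself still won't print the mark:

iptables -A INPUT -m mark ! --mark 0 -j NFLOG --nflog-group 5
tcpdump -ni nflog:5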
I understand that iptables --set-mark does not add mark "on" the packets. The MARK target is for associating a mark with the packet in the kernel data structures. The packet itself is not modified. But is there any way to view the packet with its associated mark? We can see ctmark (connection marks which are set using CONNMARK target) from /proc/net/nf_conntrack. I am looking for something similar for viewing nfmark (packet marks). This is how we can view ctmark. iptables -I OUTPUT 1 -t mangle -j CONNMARK --restore-mark iptables -I OUTPUT 2 -t mangle -m conntrack --ctorigdst 172.30.138.151 -m mark --mark 0 -j MARK --set-mark 2 iptables -A POSTROUTING -t mangle -j CONNMARK --save-markThen we can see the connection mark in the /proc/net/nf_conntrack file. mark=2 ipv4 2 icmp 1 18 src=157.43.150.253 dst=172.30.138.151 type=8 code=0 id=54809 packets=4 bytes=336 src=172.30.138.151 dst=157.43.150.253 type=0 code=0 id=54809 packets=4 bytes=336 mark=2 zone=0 use=2Another question about the /proc/net/nf_conntrack output. What is the meaning of the field use? I have seen use=1, use=2 etc. This website says it is "Use count of this connection structure".
Is there any way to view nfmark like ctmark?
Does it go through the FTP messages to find out the data-connection tuple (dst_ip, dst_prt, src_ip, src_prt) ? I know that is way too impractical to implement.Yes, that is exactly what it does. Why do you think it's too impractical? You can look at the code yourself here: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/net/netfilter/nf_conntrack_ftp.c Edit: OK, after your later comments I think I now understand what you meant. The kernel would only need to analyze the beginning of the first few data packets of every TCP connection to see if it looks like a FTP control connection or not, and only mark the actual FTP control connections for further analysis. Only the connections that look like FTP would be monitored for data-connection tuples. But a few years ago, it turned out that such fully-automatic tracking could be abused for malicious purposes. So with modern kernels, you now need to explicitly set up iptables connection tracking helper rules for protocols that need them, and that means if you use a non-default destination port for the FTP connection, you'll need a custom rule for that. But now you can fully control which interfaces, ports and connection destinations/directions will get the tracking helpers and which will not. The connection tracking helper rule for FTP in regular should look like this: iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftpIf you have a firewall that only accepts connections to specific inbound ports, you might also need a rule like this in your INPUT and/or FORWARD chain to accept the inbound active FTP connections: iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp -p tcp --dport 1024: -j ACCEPTFor data connections of control connections using a non-default port, you'll need a slightly modified rule, e.g. to accept data connections belonging to a control connection in port 2121: iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp-2121 -p tcp --dport 1024: -j ACCEPTBy the way, there are several connection tracking helper modules available:ftp for FTP protocol, obviously. irc for the Internet Relay Chat protocol. Port numbers will vary. netbios-ns which you should not need for anything any more, since the WannaCry worm proved the SMB 1.0 protocol (that was used with the old NetBIOS style Windows filesharing) has a fatal flaw. Standard port for this would be 137/UDP. snmp for the Simple Network Management Protocol, standard port 161/UDP. RAS and Q.931 for h.323 video-conferencing sub-protocols (the old Microsoft NetMeeting etc). Ports 1719/UDP and 1720/TCP respectively. sip for the SIP internet telephony protocol. Standard port 5060, both TCP and UDP supported. sane for the network protocol of the SANE scanner software, standard port 6566/TCP. pptp for the RFC2637 Point-to-Point Tunneling Protocol, a form of VPN. tftp if you need to pass TFTP connections across a NAT. amanda for the network protocol of Amanda backup software.
An article at this URI https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls-iptables_and_connection_trackingRELATED — A packet that is requesting a new connection but is part of an existing connection. For example, FTP uses port 21 to establish a connection, but data is transferred on a different port (typically port 20).states that netfilter connection tracking mechanism is able to tag the ftp's out-of-band data-connection traffic to be in RELATED state with the ftp's control-connection. I have the following questions. 1) Can it do a similar tracking operation when data-connection is setup with non-default ports ? 2) If yes, Does it go through the FTP messages to find out the data-connection tuple (dst_ip, dst_prt, src_ip, src_prt) ? I know that is way too impractical to implement. So how does netfliter really achieves this ?
How is netfilter able to track the out-of-band data-connections of FTP to be in RELATED state with control-connection?
In both "route" steps. One is for incoming packets, one for packets created from local applications. Note that iptables and routing are intended for two totally different concepts: Routing answers the question "where should this packet be delivered to?", while iptables answers the question "do I need to filter or somehow treat this packet specially?" While you can abuse the filtering process to force the packet to be delivered somewhere, that's not the original intention, but too many people on the internet think they somehow have to do it that way. (Sorry about the rant, it's one of my pet peeves).
iptables. I have gone through the manual of iptables and know some basic concepts, e.g. chain, table, hook, rule and target. In the Linux ecosystem, iptables is a widely used firewall tool that interfaces with the kernel’s netfilter packet filtering framework.

route table. In Linux, there is another table, the route table.

I am trying to figure out the relation between them and put them in one big picture. Here is a nice diagram showing the flow of iptables; there are two route stages. In which step will the kernel take advantage of the "route table"?

Reference
DigitalOcean: A Deep Dive into Iptables and Netfilter Architecture
I have read this question, "StackOverflow: What's the difference between iptables vs route?", but it didn't answer my question.
During the lifecycle of "iptables", in which step, will kernel take advantage of "route table"?
Looking further into the make-up of an IPv4 header (https://en.wikipedia.org/wiki/IPv4#Header), I see that TOS is the name given to the entire byte, but DSCP is the name for only the most-significant 6 bits. Based on this I guessed TOS != DSCP.

I tried changing the sending code to use a TOS of 0x20 and then modified the nftables rule to look for 0x20 >> 2 == 0x08 (shifting the TOS right two bits to convert it into a DSCP value):

sudo nft add rule ip raw prerouting iifname eth1 ip dscp 0x8 counter

With this change I now see that counter increasing for that new rule:

table ip raw {
    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        iifname "eth1" ip dscp cs1 counter packets 12 bytes 590
        iifname "eth1" udp dport 41378 counter packets 12 bytes 590
    }
}

TLDR:
TOS is not the same as DSCP. The DSCP is the most-significant 6 bits of the TOS.
To match a TOS in nftables using ip dscp, shift the TOS right 2 bits and match on that value.

I'm positive I'm missing some core concepts with this answer, so I encourage anyone who understands this better to provide a more useful answer.
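One convenient cross-check, assuming your nft build supports the describe command, is to ask nft for the dscp datatype; it lists the symbolic class names next to their numeric values, which makes the shift-by-two arithmetic easy to verify:

nft describe ip dscp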
This is on Ubuntu 20.04. I am attempting to write a rule for nftables which will match all IP packets received on interface eth1 that have a specific TOS value (0x02). My attempt so far: sudo nft add table raw sudo nft -- add chain raw prerouting {type filter hook prerouting priority -300\;} sudo nft add rule ip raw prerouting iifname eth1 ip dscp 2 counter sudo nft add rule ip raw prerouting iifname eth1 udp dport 41378 counterI am sending UDP packets from a seperate computer to the computer running nftables. The code to setup this sending socket, including setting the TOS in those packets: if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) { perror("socket creation failed"); exit(EXIT_FAILURE); } int optval = 2; setsockopt(sockfd, IPPROTO_IP, IP_TOS, &optval, sizeof(optval)); //Set TOS value servaddr.sin_family = AF_INET; servaddr.sin_port = htons(41378); servaddr.sin_addr.s_addr = inet_addr("192.168.10.100");I can see the packets arrive using sudo tcpdump -i eth1 -vv: 14:51:35.153295 IP (tos 0x2,ECT(0), ttl 64, id 7091, offset 0, flags [DF], proto UDP (17), length 50) 192.168.12.10.49089 > ubuntu.41378: [udp sum ok] UDP, length 22The raw header of these is as follows: IP Header 00 E0 4C 00 05 8B 3C 97 0E C7 E1 00 08 00 45 02 ..L...<.......E. 00 31 7E 52 .1~RDecoded it shows: IP Header |-IP Version : 4 |-IP Header Length : 5 DWORDS or 20 Bytes |-Type Of Service : 2 |-IP Total Length : 49 Bytes(Size of Packet) |-Identification : 32338 |-TTL : 64 |-Protocol : 17 |-Checksum : 8873 |-Source IP : 192.168.12.10 |-Destination IP : 192.168.12.100The problem is that when I run sudo nft list ruleset I see: table ip raw { chain prerouting { type filter hook prerouting priority raw; policy accept; iifname "eth1" ip dscp 0x02 counter packets 0 bytes 0 iifname "eth1" udp dport 41378 counter packets 8 bytes 392 } }The rule matching based on udp destination port is working well, but the rule matching on dscp of 0x02 is not. How can I make a rule to match on a TOS of 0x02? So far I have tried other values of TOS, in-case 0x02 was special. I tried decimal 8, 16, 24, and 32. Each time I see the incoming packet with the TOS value I am setting, but the nfttables rule never counts, which I believe means it never matched. Handy nftables guide: https://wiki.nftables.org/wiki-nftables/index.php/Quick_reference-nftables_in_10_minutes A handy reference for DSCP values to names: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/4_0/qos/configuration/guide/nexus1000v_qos/qos_6dscp_val.pdf
Nftables not matching TOS value in IP packets
I don't claim to know the source code for Linux kernel networking and Netfilter, but here's how I understand it.

Is the routing configured with "ip route add .." part of the iptables flow? The short answer is yes; see below for a longer answer.

Is the "routing decision" like in the following picture the routing table configured with "ip route"? The short answer is yes; see below for a longer answer.

I consider "iptables" to be a command that configures Netfilter and its hooks, so I'll talk about Netfilter instead of iptables. The routing configured with "ip route add" goes into the kernel network routing table(s), and the "routing decision" from your diagram uses the kernel network routing tables. If you look at the diagram at the following link, you'll find a more detailed version of the diagram that you posted. As in your diagram, the "routing decision" appears in the flow, and therefore seems to be part of the Netfilter flow. However, if you want to get technical, you might notice that, according to the colour coding in the legend, the "routing decision" is actually part of the "Other Networking" and not part of NF (Netfilter) itself. I'll let you decide whether you still consider it part of the "flow".

Note that there is more in the kernel network routing tables than just what was added by "ip route add", such as local routes created automatically when interfaces are configured. An ip route show table all may show you some of these additional routes.

I believe that the "routing decision" is related to the routing answer you get from the ip route get command, assuming that you use the appropriate values for the command's parameters (based on where you are in the NF flow, since some network values may have changed, e.g. the destination address changed by a DNAT entry):

ip route get ROUTE_GET_FLAGS ADDRESS [ from ADDRESS iif STRING ] [ oif STRING ] [ mark MARK ] [ tos TOS ] [ vrf NAME ] [ ipproto PROTOCOL ] [ sport NUMBER ] [ dport NUMBER ]

Note that network namespaces can influence which kernel networking tables are used and which Netfilter rules are traversed.
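As an illustration, here is a hedged sketch of querying that routing decision from the shell; the addresses and the interface name eth0 are placeholders of my own, not values taken from the question:

# Route the kernel would pick for a packet that entered on eth0 from
# 203.0.113.7 and is (after DNAT) destined to 192.168.10.100
ip route get 192.168.10.100 from 203.0.113.7 iif eth0

# All routing tables, including the automatically created local routes
ip route show table all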
This question comes up frequently and already has a lot of answers, but I still don't get it. Is the routing configured with "ip route add .." part of the iptables flow? Is the "routing decision" in the following picture the routing table configured with "ip route"?
Is the routing table configured with "ip route" part of the "iptables" flow?
iptables includes the u32 match method, which allows some bitwise (but not arbitrary arithmetic) operations, range comparisons and some pointer-like indirections on the packet payload to match conditions:

u32: U32 tests whether quantities of up to 4 bytes extracted from a packet have specified values. The specification of what to extract is general enough to find data at given offsets from tcp headers or payloads.

It has its own sub-language; its grammar and the examples in the manual should be examined.

IHL is the IP header size (in 32-bit words rather than in bytes) and is part of the first 32 bits of the header (4 bits for the version, with value 4 for IPv4, followed by the 4 bits for IHL). So if there are no options, this size should be the minimal size: 20 (bytes) / 4 (bytes per 32-bit word), giving IHL = 5 (32-bit words). I won't handle invalid cases where IHL < 5; the IPv4 stack should already have taken care of this.

This translates into: take the first 32-bit value, mask it for the IHL part, shift it right 24 bits, and compare for equality to 5 (inverting the result with ! on the match). So, to drop such an incoming packet with iptables:

iptables -A INPUT -m u32 ! --u32 '0 & 0x0F000000 >> 24 = 5' -j DROP

Without inversion (matching 6 or greater instead):

iptables -A INPUT -m u32 --u32 '0 & 0x0F000000 >> 24 = 6:0xF' -j DROP

The manual has a similar example where the value is shifted by 24 bits and then multiplied by 4 (so shifted by only 22 bits) to get bytes rather than 32-bit words (because the u32 pointers used later use byte addresses), in order to retrieve the start of the layer 4 payload and continue with further operations:

... 0 >> 22 & 0x3C @ 0 >> 24 = 0

"The first 0 means read bytes 0-3, >>22 means shift that 22 bits to the right. Shifting 24 bits would give the first byte, so only 22 bits is four times that plus a few more bits. &3C then eliminates the two extra bits on the right and the first four bits of the first byte. For instance, if IHL=5, then the IP header is 20 (4 x 5) bytes long. [...]"

Giving, for the OP's case:

iptables -A INPUT -m u32 ! --u32 '0 >> 22 & 0x3C = 20' -j DROP

Without inversion (and without caring that the next possible value isn't 21 but 24, nor about the exact maximum, as long as the value given is large enough):

iptables -A INPUT -m u32 --u32 '0 >> 22 & 0x3C = 21:0xFF' -j DROP

The first method could be simplified into: take the first 32-bit value, mask it for the IHL part, and compare for equality to (5<<24), i.e. to 0x05000000 (ditto for the inversion). Giving:

iptables -A INPUT -m u32 ! --u32 '0 & 0x0F000000 = 0x05000000' -j DROP

or:

iptables -A INPUT -m u32 --u32 '0 & 0x0F000000 = 0x06000000:0x0F000000' -j DROP

Or even: take the first 32-bit value and compare it with the range 0x45000000 to 0x45FFFFFF for OK (IPv4 always starts with 4, and any value after the IHL part is to be ignored), or with 0x46000000 to 0x4FFFFFFF for not OK. Giving:

iptables -A INPUT -m u32 ! --u32 '0 = 0x45000000:0x45FFFFFF' -j DROP

or:

iptables -A INPUT -m u32 --u32 '0 = 0x46000000:0x4FFFFFFF' -j DROP

Pick your choice.
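As a side note, if nftables is an option on your system, the IHL can, as far as I know, be matched directly via the ip hdrlength field (counted in 32-bit words, so 5 means a 20-byte header with no options). This is an alternative to the u32 rules above rather than a translation of them, and the table/chain names are just placeholders:

# Drop IPv4 packets whose header is longer than 20 bytes, i.e. packets carrying options
nft add table ip filter
nft add chain ip filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule ip filter input ip hdrlength gt 5 drop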
I'd like to add a rule dropping any IPv4 packet that carries IP options following the header. I understand that the IHL (Internet Header Length) field in the header contains the number of 32-bit words in the IPv4 header, including options. So my understanding is that a rule should obtain the header-plus-options length from the IHL field and compare it to 20 (the IPv4 header length without options), and if it is greater than 20, drop the packet. Is there a specific iptables module that allows inspecting the IP header and evaluating it (doing arithmetic operations)?
netfilter: drop packets having IP options
I am new to this too, but also interested in nftables rules. I found this in the nftables wiki: "The principal (only?) use for this (netdev) family is for base chains using the ingress hook, new in Linux kernel 4.2." More info is at the end of the article here: https://wiki.nftables.org/wiki-nftables/index.php/Nftables_families

The ingress hook allows you to filter L2 traffic. It comes before prerouting, right after the packet is passed up from the NIC driver. This means you can enforce very early filtering policies. This very early location in the packet path is ideal for dropping packets associated with DDoS attacks. When adding a chain on the ingress hook, it is mandatory to specify the device the chain will be attached to. Source: https://www.datapacket.com/blog/securing-your-server-with-nftables

How to specify the device can be found here: How to use variable for device name when declaring a chain to use the (netdev) ingress hook?
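For what it's worth, a minimal sketch of such a chain might look like this; the table name, chain name and the device eth0 are placeholders of my own, and the drop rule is only an example:

# The device must be named when the ingress base chain is created
nft add table netdev filter
nft add chain netdev filter ingress_eth0 '{ type filter hook ingress device eth0 priority 0; policy accept; }'
# Very early drop, before prerouting is ever reached
nft add rule netdev filter ingress_eth0 ip saddr 192.0.2.0/24 counter drop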
From the nftables Quick reference:

family refers to one of the following table types: ip, arp, ip6, bridge, inet, netdev.

and

type refers to the kind of chain to be created. Possible types are:
filter: Supported by arp, bridge, ip, ip6 and inet table families.
route: Mark packets (like mangle for the output hook, for other hooks use the type filter instead), supported by ip and ip6.
nat: In order to perform Network Address Translation, supported by ip and ip6.

From another document, which explains how to configure chains:

The possible chain types are:
filter, which is used to filter packets. This is supported by the arp, bridge, ip, ip6 and inet table families.
route, which is used to reroute packets if any relevant IP header field or the packet mark is modified. If you are familiar with iptables, this chain type provides equivalent semantics to the mangle table but only for the output hook (for other hooks use type filter instead). This is supported by the ip, ip6 and inet table families.
nat, which is used to perform Networking Address Translation (NAT). Only the first packet of a given flow hits this chain; subsequent packets bypass it. Therefore, never use this chain for filtering. The nat chain type is supported by the ip, ip6 and inet table families.

Hence, according to at least two authoritative references, no chain type is supported by the netdev family. Given that, how can we use the netdev family at all?
What chain types are supported by the nftables NETDEV family?
Yes, binding a port is part of the network stack, which is separate from netfilter, where iptables belongs. netfilter will not care about or be aware of this new listening port, and the network stack will not be informed that something special will later be done behind its back. So it's fine to have a process bind to port 30001/tcp. It won't be reachable from remote hosts with your rule, but it will still be usable locally: a local (non-routed) access from the host to itself follows the OUTPUT chain for emitting, and then, when it's looped back, arrives directly in INPUT without hitting PREROUTING, so your nat/PREROUTING rule would not be executed in this case. This schematic should help in understanding how it works for the DNAT / routed case (the non-routed local case isn't shown as clearly, though). A remote access would follow PREROUTING -> routing decision -> INPUT -> local process, but in your case, with your DNAT rule, it will take the path PREROUTING -> routing decision -> FORWARD -> ... and so will not reach the local process. So as long as you don't add the equivalent rule in nat/OUTPUT, there is still a use for this setup. In any case, usable or not, you can still bind to this port.
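A quick way to convince yourself is sketched below; nc stands in for any listening process (its exact flags differ between netcat flavours), and <my guest vm ip> is the same placeholder used in your rule:

# The bind succeeds even with the nat/PREROUTING DNAT rule loaded
nc -l -p 30001 &

# A purely local client still reaches it (OUTPUT -> loopback -> INPUT path)
nc 127.0.0.1 30001

# Only this additional rule would also divert locally generated traffic,
# making the local listener unreachable from the host itself as well
sudo iptables -t nat -A OUTPUT -p tcp --dport 30001 -j DNAT --to-destination <my guest vm ip>:80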
I have added a DNAT entry (on the host) for a port (say 30001) in PREROUTING using iptables:

$ sudo iptables -t nat -A PREROUTING -p tcp --dport 30001 -j DNAT --to-destination <my guest vm ip>:80

Note: above, I have used a port-forwarding technique to allow ingress to the guest VM. Is it possible for a host process to bind to port 30001 after applying the above rule? Will Linux allow this, or block it saying the port is already in use?
Is it possible to bind to a port that has an entry for DNAT in iptables?