output | input | instruction
---|---|---|
Open a manpage and hit -N, then Enter (that is: -, then Shift+N, then Enter).
e.g. man man:
1 MAN(1) Manual pager utils MAN(1)
2
3 NAME
4 man - an interface to the system reference manuals
5
6 SYNOPSIS
7 man [man options] [[section] page ...] ...
8 man -k [apropos options] regexp ...
9 man -K [man options] [section] term ...
10 man -f [whatis options] page ...
11 man -l [man options] file ...
12 man -w|-W [man options] page ...
To remove the line numbers, type -n then Enter.
To avoid duplicated line numbers on wrapped lines, set the MANWIDTH variable; set the LESS variable to -N to print line numbers:
MANWIDTH=100 LESS=-N man man
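If you only want this for a single invocation, GNU man's -P option can hand the flag straight to the pager; a minimal sketch, assuming your man accepts -P:
man -P 'less -N' man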
|
How can I add line numbers to man pages or info pages in Linux?
I want to use line numbers to navigate in man pages.
I can write the man page in a file and then open it with Vim, but is there a better way?
|
How do I add line numbers to the man page?
|
Yes, info does have a default value for INFOPATH compiled-in that it uses if you don't have INFOPATH set in your environment. (Also, if your INFOPATH ends with a colon, then the default path is appended to your value.)
The default DEFAULT_INFOPATH is .:/usr/local/info:/usr/info:/usr/local/lib/info:/usr/lib/info:/usr/local/gnu/info:/usr/local/gnu/lib/info:/usr/gnu/info:/usr/gnu/lib/info:/opt/gnu/info:/usr/share/info:/usr/share/lib/info:/usr/local/share/info:/usr/local/share/lib/info:/usr/gnu/lib/emacs/info:/usr/local/gnu/lib/emacs/info:/usr/local/lib/emacs/info:/usr/local/emacs/info (but it can be changed by defining DEFAULT_INFOPATH while compiling info).
There's also a INFODIR variable that can be set while compiling info. If set, it gets included in the path after the INFOPATH environment variable but before the DEFAULT_INFOPATH.
I don't know of any way to ask your info program what values it was compiled with, although you can probably find the actual value of DEFAULT_INFOPATH with this command:
strings `which info` | grep /info:
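To see the trailing-colon behaviour in action, this sketch (the directory name is purely illustrative) asks info where it finds a manual while a custom INFOPATH is set:
# the trailing colon appends the compiled-in default path after your own entry
INFOPATH=$HOME/myinfo: info -w coreutils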
|
Does info (GNU texinfo 4.13) have a default search path for finding a dir file? Is it /usr/share/info? Even though I don't have an INFOPATH variable set in my environment, it seems to use the dir file in the path above.
Relatedly, I have a script that sets up a directory with an alternate info directory structure. It does set a valid INFOPATH environment variable but info seems not to use it as invocation simply gives a blank screen without any menu items.
Any guidance on how info progresses in searching the path?
|
GNU texinfo directory search method?
|
I never ran it manually before, but install-info looks like what you want (if you guessed it has an info manual, you're right, info install-info — although there is a man page, too).
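A minimal sketch of using it, assuming the usual /usr/share/info location (paths are illustrative):
# add a menu entry for an installed Info file to the directory node
install-info /usr/share/info/myprog.info /usr/share/info/dir
# remove a stale entry again
install-info --delete /usr/share/info/myprog.info /usr/share/info/dir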
|
I've installed some program that added menu items to the info directory node (the main menu of the info command). Then I manually deleted the program's info files from the system, so now when I click the program's item, it is not found. However, the program's items are still in the directory node. How does info compile the directory node? How can it be updated so that this program no longer appears in the list? And generally, how does the mechanism of finding and updating info files work, especially during install and uninstall?
Edit: One dirty solution is to just manually delete the info files from the INFOPATH. I also learned that every directory in the INFOPATH contains a file named dir, which has links to every info file in that directory, so one may just edit it instead of deleting files.
|
How to update menu of info directory node?
|
To create Info documentation, you first need a texi file.
.texi - Texinfo is a typesetting syntax used for generating documentation in both on-line and printed form (producing file types such as DVI, HTML, PDF, etc., and its own hypertext format, Info) with a single source file. It is implemented by a computer program of the same name, released as free software, created and made available by the GNU Project from the Free Software Foundation.
.info - Info (Generated via makeinfo.) This is a specific format which essentially is a plain text version of the original Texinfo syntax in conjunction with a few control characters to separate nodes and provide navigational elements for menus, cross-references, sections, and so on. The Info format can be viewed with the info program.
makeinfo is a utility that converts a Texinfo file into an Info file; it is part of the texinfo package. texinfo-format-region and texinfo-format-buffer are GNU Emacs functions that do the same.
Here is a texi sample to use as a template:
\input texinfo @c -*-texinfo-*-
@comment $Id@w{$}
@comment %**start of header
@setfilename sample.info
@include version.texi
@settitle GNU Sample @value{VERSION}
@syncodeindex pg cp
@comment %**end of header
@copying
This manual is for GNU Sample (version @value{VERSION}, @value{UPDATED}),
which is an example in the Texinfo documentation.

Copyright @copyright{} 2013 Free Software Foundation, Inc.

@quotation
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
Texts. A copy of the license is included in the section entitled
``GNU Free Documentation License''.
@end quotation
@end copying

@dircategory Texinfo documentation system
@direntry
* sample: (sample)Invoking sample.
@end direntry

@titlepage
@title GNU Sample
@subtitle for version @value{VERSION}, @value{UPDATED}
@author A.U. Thor (@email{bug-sample@@gnu.org})
@page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage

@contents

@ifnottex
@node Top
@top GNU Sample

This manual is for GNU Sample (version @value{VERSION}, @value{UPDATED}).
@end ifnottex

@menu
* Invoking sample::
* GNU Free Documentation License::
* Index::
@end menu

@node Invoking sample
@chapter Invoking sample

@pindex sample
@cindex invoking @command{sample}

This is a sample manual. There is no sample program to
invoke, but if there were, you could see its basic usage
and command line options here.

@node GNU Free Documentation License
@appendix GNU Free Documentation License

@include fdl.texi

@node Index
@unnumbered Index

@printindex cp

@bye

Convert that into Info documentation with:
makeinfo mytool.texi
Listing a New Info File
To add a new Info file to your system, write a menu entry for it in the menu in the dir file in the info directory (/usr/share/info/ on Ubuntu). Also, move the new Info file itself to the info directory. For example, if you were adding documentation for GDB, you would write the following new entry:
* GDB: (gdb). The source-level C debugger.
The first part of the menu entry is the menu entry name, followed by a colon. The second part is the name of the Info file, in parentheses, followed by a period. The third part is the description.
Conventionally, the name of an Info file has a .info extension. Thus, you might list the name of the file like this:
* GDB: (gdb.info). The source-level C debugger.
However, Info will look for a file with a .info extension if it does not find the file under the name given in the menu. This means that you can refer to the file gdb.info as gdb, as shown in the first example. This looks better.
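As an alternative to editing dir by hand, install-info can register the entry for you; a minimal sketch, assuming the generated file is named sample.info and the standard Ubuntu location:
cp sample.info /usr/share/info/
install-info /usr/share/info/sample.info /usr/share/info/dir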
|
How can I add man page entries for my own power tools? got me thinking: How would one create an Info page?
|
How to create Info documentation?
|
man and info use different primary sources of information: man displays manpages, typically stored in /usr/share/man, while info displays Info documents, typically stored in /usr/share/info. Additionally, Info documents are normally available in a tree structure, rooted in /usr/share/info/dir, the “Directory node” displayed when you start info.
Whether a given manpage contains the same information as its corresponding Info document depends on who authored both. In some cases, they’re produced from a common source, or one is produced from the other; but in many cases they’re different.
GNU info will display a manpage if it doesn’t find an Info document. Pinfo can also display both Info documents and manpages, and it provides hyperlinks in manpages; its key bindings can also be configured to match your tastes.
|
I was looking for a way to follow "hyperlinks" in man pages, when I stumbled across the info command, which seemed to display information on commands the same as man but also allows you to tab to hyperlinks (and sadly no vim keybindings, but the arrow keys work)
But it made me wonder if this command was just displaying man pages with different formatting and functionality of display...or if it was displaying something else entirely like a separate set of documentation.
|
Does the info command display man pages?
|
You should install the info package:
sudo apt install info
|
When I run the info command I get bash: info: command not foundHow can I solve this?
|
Install "info" in Debian
|
On Debian and derivatives like Ubuntu, the bash manual is not installed by default, or is available only in man format (and info falls back to displaying the man page when the Info manual is not available), which is hardly usable for a manual of this size.
You need to install the bash-doc package first.
apt install bash-doc (as root).
The same goes for most manuals in info format. For instance, you'd need to install zsh-doc, gdb-doc, glibc-doc, gcc-doc, or gawk-doc to get the manuals of those large pieces of software in more useful formats.
More generally, software packages come with minimal documentation in man format and a few documents in /usr/share/doc/<package-name>; larger documentation in other formats (info, HTML...), when available, is supplied in a <package-name>-doc package. That makes sense, as users don't necessarily need the documentation, especially when the software is installed as a dependency of another package and the user will never use it directly.
For some software libraries, the API documentation (generally in man format) is often supplied in the <libpackage-name>-dev package (again, users are unlikely to need that information unless they're going to develop software against it), though again a -doc package can complement it with richer format/content.
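A minimal sketch of the fix for the question's exact case (what info -w prints afterwards is an assumption and may differ between releases):
apt install bash-doc    # as root
info -w bash            # should now point at an Info file instead of *manpages*
info bash > file.info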
|
While redirecting info bash > file.info on Ubuntu, I get a file.info file with just the error:
info: No menu item 'bash' in node '(dir)Top'
I tried to look at the location of the info page by using info bash -w, which shows
*manpages*
I also could not find the bash info page under /usr/share/info
but I did find it under /usr/share/man/man1.
However, I tried info find > file.info (find command had an info page under /usr/share/info)
which was successful.
Also info bash works fine.
My system - Ubuntu 20.04.2 LTS with the kernel 5.11.0-25-generic
Kindly help.
|
Redirecting output of info command
|
When inside an Info page (e.g. info python), press the / key and you will get a prompt at the very bottom asking for a regexp pattern.
Type your pattern and press Enter, and it will take you to the match.
Unlike less, though, you cycle through matches with Ctrl-x n / Ctrl-x N, or with { and } (see Searching Commands – Stand-alone GNU Info Manual).
|
I'm currently looking through the "info" page for "ls" command and I want to find all relevant sections that contain the keyword "-l". But I'm not sure how to search for keywords and how to move between different instance of same keyword.
|
How to search "info" page for keyword
|
As noted, this was originally done to reduce size. It is documented in 23.1.5 Tag Files and Split Files (GNU Texinfo 6.0):
If a Texinfo file has more than 30,000 bytes, texinfo-format-buffer automatically creates a tag table for its Info file; makeinfo always creates a tag table. With a tag table, Info can jump to new nodes more quickly than it can otherwise.
In addition, if the Texinfo file contains more than about 300,000 bytes, texinfo-format-buffer and makeinfo split the large Info file into shorter indirect subfiles of about 300,000 bytes each. Big files are split into smaller files so that Emacs does not need to make a large buffer to hold the whole of a large Info file; instead, Emacs allocates just enough memory for the small, split-off file that is needed at the time. This way, Emacs avoids wasting memory when you run Info. (Before splitting was implemented, Info files were always kept short and include files were designed as a way to create a single, large printed manual out of the smaller Info files. See Include Files, for more information. Include files are still used for very large documents, such as The Emacs Lisp Reference Manual, in which each chapter is a separate file.)
The splitting feature is very old: the texinfo change-log first mentions it in 1993, and the feature may well have been added before the change-log began in 1988:
Tue Feb 2 08:38:06 1993 Noah Friedman ([emailprotected]) * info/Makefile.in: Replace all "--nosplit" arguments to makeinfo
with "--no-split"
|
I've noticed that some software comes with multiple info files. For example, tar on Fedora 21 comes with:
tar.info.gz
tar.info-1.gz
tar.info-2.gz
Are the tar.info-* files dependencies of some sort for the main tar.info.gz file? Is this division unique to each distro?
It seems that on the official GNU tar manual page, the Info tarball contains a single file, so I'm not sure where the -1 and -2 come from.
|
What is the purpose of numbered Info files?
|
Debian ships a script called update-info-dir which does exactly this. I suspect that Debian made its own because there wasn't a standard one at the time. You can grab the script from the install-info binary package or from the Debian patch part of the source archive (if this link dies because the version number has changed, look for a file called texinfo_*.debian.tar.gz).
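A rough sketch of what such a script boils down to, done by hand for a single info directory (the glob and paths are illustrative; split subfiles like *.info-1.gz are intentionally skipped):
for f in /usr/share/info/*.info /usr/share/info/*.info.gz; do
    [ -e "$f" ] && install-info "$f" /usr/share/info/dir
done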
|
I use MSYS2 on my Windows machine and when I installed make, as a dependency pacman installed guile. Listing the files of the guile package revealed that it has info pages (guile.info.gz) installed at /usr/share/info.
When I enter info guile, info displays the guile node. However, if I navigate to the top node, I don't see Guile listed there; same is the case with just giving info. From this question, I found out that there's a dir file in each path listed by $INFOPATH which is used by info to compile the top level node to list all the info nodes installed. I don't see Guile listed in this file.
I realize, from the answer to the linked question, that I may use install-info to update the dir file with the Guile node.
However, I'd like to know if there's an automatic way of updating all such dir files (which is used to show the Top node of info) with all such missing info pages; something on the lines of mandb used to index for apropos and whatis automatically periodically.
|
How to automatically update the top info directory
|
You can calculate it yourself for your system with a simple command:
$ find /usr/share/man/ -type f -exec ls -S {} + 2>/dev/null | head | while \
read -r file; do printf "%-40s" "$file"; \
man "$file" 2>/dev/null | wc -lwm; done | sort -nrk 4which returns on my box
(file) (lines) (words) (chars)
/usr/share/man/man1/zshall.1.bz2 27017 186394 1688174
/usr/share/man/man1/cmake.1.bz2 22477 106148 1004288
/usr/share/man/man1/cmake-gui.1.bz2 21362 100055 951110
/usr/share/man/man1/perltoc.1.bz2 18179 59783 780134
/usr/share/man/man1/cpack.1.bz2 9694 48264 458528
/usr/share/man/man1/cmakemodules.1.bz2 10637 42022 419127
/usr/share/man/man5/smb.conf.5.bz2 8306 49991 404190
/usr/share/man/man1/perlapi.1.bz2 8548 43873 387237
/usr/share/man/man1/perldiag.1.bz2 5662 37910 276778
/usr/share/man/e 1518 5919 58630
where the columns represent the number of lines, words and characters respectively. Rows (commands) are sorted by the last column.
We can do a similar thing for info pages, but we have to bear in mind that their content can span many files. So let's use the benefits of zsh to keep the above one-liner in compact form:
$ for inf in ${(u)$(echo /usr/share/info/**/*(.:t:r:r))}; do \
printf "%-40s" "$inf"; \
info "$inf" 2>/dev/null | wc -lwm; done | sort -nrk 4what gives
(info title) (lines) (words) (chars)
elisp 72925 457537 3379403
libc 69813 411216 3066817
lispref 62753 374938 2806412
emacs 47507 322194 2291425
calc 33716 244394 1680763
internals 32221 219772 1549305
zsh 34932 206851 1544909
gsl-ref 32493 179954 1518248
gnus 31723 180613 1405064
gawk 27150 167135 1203395
xemacs 25734 170403 1184250
Info pages are huge mostly for GNU-related stuff, which is understandable, but I find it interesting that, for example, zsh has more lines and words but fewer characters than in its man pages. It is interesting because at first glance the content is the same, just the formatting is a little bit different.
Explanation of the zsh tricks in the selection of the files for the loop: for inf in ${(u)$(echo /usr/share/info/**/*(.:t:r:r))}; do
The goal is to create the list of unique file names from the /usr/share/info directory and all subdirectories. Files should be stripped of their dirname, extensions and all numbers. The above snippet can be rewritten as ${(u)$(echo /usr/share/info/**/*(.)):t:r:r}, which gives the same result but uses arguably more decent syntax, namely:
**/*: descend into all subdirectories and mark everything there
(.): select only plain files
:t: remove pathname components (works like basename)
:r: remove the extension (everything after the last dot, including the dot). It is applied twice to also remove the unnecessary trailing string and number (e.g. .info-6 from the file zsh.info-6.bz2)
(u): show only unique words (after the previous operations there are many identical words - different files/chapters for the same info command)
|
Is there an easy way to find out which command has the longest manual pages?
|
How do I tell which command has the longest manual on my system?
|
A quick test with info here tells me that pressing Enter one time works.
|
I am using info command, and when I press m, the keyboard prompt goes into a menu item: mode and I don't know how to quit from this mode. I tried q and ESC, but it didn't work.
Finally, I used Ctrl+C to quit. Is there any decent way to quit the menu item: prompt mode in the "info" command?
|
How to quit Menu Item mode in the info command page
|
Example:
$ cat .info
a.jpg
	blah blah
	blih blih
*.jpg
	jpeg picture
$ tree --info
.
├── a.jpg
│ ⎧ blah blah
│ ⎩ blih blih
├── a.png
├── b.jpg
│ { jpeg picture
├── b.png
└── foo.user
0 directories, 5 files
(with a TAB preceding the comments per the manual you quoted).
|
man tree¹ states:
--info Prints file comments found in .info files. See .INFO FILES below for more information on the format of .info files.
and further:
.INFO FILES
.info files are similar to .gitignore files; if a .info file is found while scanning a directory it is read and added to a stack of .info information. Each file is composed of comments (lines starting with hash marks (#)) or wild-card patterns which may match a file relative to the directory the .info file is found in. If a file should match a pattern, the tab-indented comment that follows the pattern is used as the file comment. A comment is terminated by a non-tab-indented line. Multiple patterns, each on its own line, may share the same comment.
Objective
Given the following directory structure:
tree .
├── fileA.txt
├── fileB.txt
└── other_files
└── fileC.txt
I would like to create an info file (or files) that would enable me to get the following output:
.
├── fileA.txt # Comments on file A read from info file
├── fileB.txt # Comments on file B read from info file
└── other_files
└── fileC.txt
Following the man pages this should be possible, but I can't find an example of how such an info file should be created. I've identified one potentially relevant discussion² but it's not clear to me what the structure of this .info file should be so that tree can use it to populate outputs with additional comments.
¹ Version: tree v2.0.2 (c) 1996 - 2022 by Steve Baker, Thomas Moore, Francesc Rocher, Florian Sesser, Kyosuke Tokoro
² As discussed in the comments, the link is not pertinent to this question.
|
Creating .info files to be used with tree
|
The command info looks for files in the places defined by the $INFOPATH variable (usually /usr/share/info/, etc.), but if it doesn't find the appropriate file there, as a fallback it switches to the man pages for help (see the $MANPATH variable) and prints exactly the same content as man. So if info -w shows *manpages*, then try man -w to get the information you wanted.
|
I want to find the location of the info file for the jcal program.
It shows the appropriate info when I call info jcal. The output of info -w jcal is:
*manpages*
Did I go the wrong way about getting the full location of the info file? What is the best way to get the info file location?
Dist: Slackware Current.
jcal: 0.4.1
info: 4.13
|
Where does info file exist
|
The file you downloaded is an archive, you need to extract its contents:
sudo tar -C /usr/share/info -xof make.info.tar.gz &&
sudo install-info /usr/share/info/make.info /usr/share/info/dirRenaming it as you did happens to work because there’s a single file in the archive, and the tar header ends up mostly ignored by info; but zless /usr/share/info/make.info.gz should show you noise at the top of the page.
|
info make opened the same as man make. I've downloaded the make.info.tar.gz file from https://www.gnu.org/software/make/manual/, then:
sudo cp ~/Downloads/make.info.tar.gz /usr/share/info/
sudo install-info /usr/share/info/make.info.tar.gz /usr/share/info/dir
I got info (no pun intended) from https://www.gnu.org/software/texinfo/manual/texinfo/texinfo.html#Installing-an-Info-File
Now there is a new entry when I do info:
Make: (make). Remake files automatically.
But when I select it I get Cannot find node ''. info make still displays the man page, not the Info document. What could be the problem?
|
Cannot find node ' '. How to add downloaded Info document file so that info command worked?
|
There are; info has this option:
--vi-keys
use vi-like and less-like key bindings.
So your command for foo is
info --vi-keys foo
|
info foo
Next, I am in a foo page with links navigated by Enter and the arrow keys. Are there vim keymap settings for this?
|
info: are there vim controls for the info pages?
|
Unlike man pages, info pages have the line width set when they are created using makeinfo(1) or texi2any(1) (the --fill-column option). The default is 72 characters, which is why you'll usually see line breaks there.
As far as I can tell, to reflow an info page you would have to regenerate the file from its original texi source.
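If you do have the Texinfo source, a minimal sketch of regenerating a wider Info file (the file names are illustrative; gawk's source is only an example):
makeinfo --fill-column=120 gawk.texi -o gawk.info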
|
Is it possible to make lines in the info command wider?
An example :
When running info awk I get the following output, although the terminal size is much wider.
2 Running 'awk' and 'gawk'
**************************
This major node covers how to run 'awk', both POSIX-standard and
'gawk'-specific command-line options, and what 'awk' and 'gawk' do with
nonoption arguments. It then proceeds to cover how 'gawk' searches for
source files, reading standard input along with other files, 'gawk''s
environment variables, 'gawk''s exit status, using include files, and
obsolete and undocumented options and/or features.
I tried to set COLUMNS=200 but it didn't change the output; interestingly, the output of pinfo did change according to the COLUMNS variable.
|
How to change line width in "info" command
|
I'm not sure it is possible to do everything you ask, because man(1) sends the formatted man page data to your pager program via a pipe. This would prevent showing a file name, for one thing.
You can get a line count at least, like so:
Set your MANPAGER or PAGER environment variable to less.
Add -M to your LESS environment variable to get the "long prompt", which includes the line count.
Instead of -M, you can build your own less prompt with the -P option to get even more details. Again, though, there are some things in what you ask that less simply won't have access to when acting as man's pager program.
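A minimal sketch of that setup (values are illustrative):
export MANPAGER=less
export LESS=-M        # long prompt: shows the line range and percentage at the bottom
man ls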
|
I know when you are in a man page it will display the file name at the bottom inside the man.
How can I do the same thing manually?
|
How can I display file info of a man?
|
info doesn’t use a separate pager, because it handles navigation — it doesn’t produce a text document to be viewed with another tool. It doesn’t support paging to less.
You might find Pinfo interesting, it’s a replacement for info (and man) with configurable colours etc.
|
On GNU/Linux is it possible to change the default pager for info command? I would like to use less as the pager (similar to man pages). I have customized less to use colors to make navigation of man pages much easier.
|
How to change the pager for info command
|
You can try starting from the command line with info info. Also, my info accepts H to enter the tutorial.
|
I would like to learn how to use the info pages - i.e. the documentation for GNU programs.
If I run info from a terminal, the info program launches
File: dir, Node: Top, This is the top of the INFO tree.
This is the Info main menu (aka directory node).
A few useful Info commands: 'q' quits;
'?' lists all Info commands;
'h' starts the Info tutorial;
'mTexinfo RET' visits the Texinfo manual, etc.
However, if I type h I get a man page for info, not the info tutorial. How can I launch the tutorial?
|
How can I launch the tutorial for the GNU info pages?
|
history-node pops the most recently selected node, which means the history no longer contains that node; there is therefore no way to go “forward”.
There is another way of navigating the history though: list-visited-nodes (Ctrlx Ctrlb) lists all the nodes in the current window’s history, and that can be used to navigate in the history: go to a node in the list, visit it, then history-node back to the list.
|
The key bound to (history-node) lets GNU info jump to the last node visited in this window (like a browser's back button). Is there an opposite for this function in GNU info: a way to go forward again in the history if you have gone back (like a browser's forward button)?
|
Moving forward in history in GNU info
|
You're confusing info with less and other pagers. info doesn't use an
external pager and does not implement the same set of keybindings as
less and other pagers do. To jump to the next occurrence of the string press
Control-x n or use other keybindings
as described in info
manual. Also, p and n are used for completely different purposes:
n (next-node)
C-NEXT (on DOS/Windows only) Select the ‘Next’ node. The NEXT key is known as the PgDn key on some keyboards.
p (prev-node)
C-PREVIOUS (on DOS/Windows only) Select the ‘Prev’ node. The PREVIOUS key is known as the PgUp key on some keyboards.
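If you really want n and N to repeat searches the way less does, the standalone reader's key bindings can be changed in ~/.infokey (see the "Custom Key Bindings" chapter of the info manual). This is only a hedged sketch, assuming a recent standalone info that reads the plain-text infokey format and provides search-next / search-previous commands:
#info
n	search-next
N	search-previous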
|
I'm using the Fish shell on openSUSE Tumbleweed (20200414). I am able to initially search for a string within an info page by typing '/string'. However, when I try to search for the next occurrence using 'n', I receive the following error at the bottom:
No 'Next' pointer for this node
In addition, searching backwards with 'p' produces a similar message (replaces 'Next' with 'Prev').
Is there a way to fix this? Unfortunately, I haven't been able to find anything online thus far.
As a workaround, is there a way to change the pager? Info pages seem to ignore $PAGER (I've set it using 'set -Ux PAGER most', which is working just fine on man pages).
|
Getting error when searching within an info page
|
There are two primary Info readers: info, a stand-alone program
designed just to read Info files (see What is Info?)
and the info package in GNU Emacs, a general-purpose editor.
The stand-alone Info reader (version 6.0 or greater) may be configured to hide some information (related to note and menu items).
For more information, look at the GNU Info manual, more precisely, look at chapter "Manipulating Variables" (hide-note-references) and look at chapter "Custom Key Bindings", section "infokey format" (last paragraphs).
prompt% info info hide-note-references
prompt% nano ~/.infokey
prompt% cat ~/.infokey
#var
hide-note-references=On
GNU Emacs (Info mode) may also be configured to hide some information (related to note and menu items) by setting the variable Info-hide-note-references¹ in an Emacs initialization file (see GNU Emacs, chapter "Customization").
¹ Feature available in Texinfo since version 4.8 (2005).
|
If I view man pages using info (e.g. info man), I see common hyperlinks (e.g. apropos(1)).
However, Texinfo commands that create hyperlinks (e.g. @xref), in Info output, add a label (*Note) before the name of a hyperlink.
Concretely, @xref{Node name}. produces *Note Node name::. but I would like to get Node name. (without the inserted *Note).
How can I get common hyperlinks in Info manuals?
|
Can I get common hyperlinks in Info manuals?
|
Short answer: not possible. The difficulty of getting the exact dependencies from a source distribution is the reason why package management is so popular on Linux (okay, one of several reasons). In fact, if you just need to get it done and don't care so much how, the most reliable way to get the dependencies will probably be to grab a distro package (gentoo ebuilds are easy to work with) and pull the list of dependencies from that.
Otherwise, if you're lucky, the maintainers will have created a listing of the dependencies in the README file or similar - that'd be the first place to check. Failing that, if it's a C project and you don't mind getting your hands dirty, you can look inside the configure script (or better yet the configure.ac or whatever it's generated from) and figure out the dependencies from that based on what it checks.
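A rough way to see what an autoconf-based C project's configure script probes for (the macro names listed are the common ones, not an exhaustive set):
grep -E 'PKG_CHECK_MODULES|AC_CHECK_LIB|AC_CHECK_HEADERS?' configure.ac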
|
When installing something from source (say, Ruby 1.9.2), what command can I run to get a complete list of all the dependencies needed to install that application? Is this possible?
|
Get list of required libraries when installing something from source
|
The traditional unix command at is usually used for this purpose. e.g.
echo 'sudo port install gcc45' | at midnight
|
I need to compile gcc45 on my computer and that's a lengthy and resource-intensive process for my computer, so I'd prefer to have it do it while I sleep (at night).
What's the closest thing to:
$ @2300 sudo port install gcc45
|
How do I execute a script later?
|
The canonical reference for this is The OpenBSD FAQ - 5.1
The install48.iso in the 4.8 directory is 4.8 before patches. So, if you want the patches, you need to install 4.8 and then patch your system yourself.
The install48.iso in the snapshots directory is more than just the patches to the OS listed on the errata page, it's also everything new that is being developed as the system moves towards 4.9. Snapshots are just that "snapshots" of the code as it's moving towards the next release.
So, to answer your question, no. If you install using the install48.iso CD, you will not have a patched system, you will need to apply the patches yourself.
For information on applying these patches, see each individual patch.
You may also choose to follow the "stable" branch of OpenBSD, the reference is OpenBSD - Following stable, which includes these patches already.
In either case, you will have to have a checkout of the OpenBSD source.
There is no one-liner, or automated way to apply these patches.
|
If I install OpenBSD from CD-ROM: http://www.openbsd.org/ftp.html with install48.iso then is it patched?All 10 patches from here are in the ISO file?
If those are not included, how can I apply these patches? Is there a one-liner command (like under Fedora: yum upgrade or Debian based, apt-get upgrade) or do I have to download and apply all 10 patches one by one?
|
OpenBSD patch system
|
PackageKit, the default package management tool in Fedora 13, does not include a method for this, and it's unlikely that it ever will, as it's a deliberate design choice not to include repository management.
However, could you instead package up the repo file into an RPM and distribute that? By default RPMs will open with Package Installer and that's GUI based.
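For reference, the .repo file that such an RPM would drop into /etc/yum.repos.d/ looks roughly like this (the repo name and URLs are placeholders):
[myrepo]
name=My Repository
baseurl=http://example.com/fedora/13/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://example.com/RPM-GPG-KEY-myrepo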
|
Is there a GUI for adding additional software sources in Fedora (FC 13). I have a software repository that works fine when added manually (as a .repo file in /etc/yum.repos.d/), but I'd like to have a better way of telling end users how to install.
|
GUI for adding Fedora software source
|
config.sub is one of the files generated by autoconf. The Autoconf documentation states that it converts system aliases into full canonical names.
In short - you don't have to worry about it unless you're an autoconf developer.
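You can run it by hand to see what it does; a small sketch (the exact output depends on the config.sub version):
$ sh config.sub x86_64-linux
x86_64-pc-linux-gnu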
|
I'm trying to install some software from the command line. There is a file called "config.sub". Am I supposed to use this for something?
I haven't been able yet to find out by searching online what this file is supposed to do. I think part of the deal is I don't know how to ask the question correctly.
|
What is the function/point of "config.sub"
|
It's easy and low-risk. Just do the installation normally, and when the time comes to partition the disk, choose a manual partitioning strategy and make sure you overwrite the Suse partition(s) only.
Ubuntu will want to overwrite Suse's bootloader with its own. Let it: Grub needs some files in /boot, which you're going to overwrite. The Grub installer will automatically detect all installed operating systems, so you'll still be able to boot both Linux and Windows. I don't know how Suse 11 configures Grub; the way to configure which OS gets booted by default might be different, so you should take a quick look at the Ubuntu Grub community documentation.
|
I have Windows XP and Linux Suse 11 installed on my laptop for some time.
I want to replace my Suse installation with Ubuntu and would like to know if it is possible to do that without affecting the Windows installation.
I have the Grub boot-loader that came with Suse.
What are the steps to follow (I didn't do the initial installation so I don't know if it is safe or not)?
Thank you!
|
Install Ubuntu over Suse without affecting Windows
|
The first DVD alone is enough. It will even include at least one complete desktop environment, GNOME. The content is organised such that the most popular packages (according to popcon) are on the earlier discs.
|
I just want to install the operating system and an SSH print server.
I just burned the first DVD of the 8 available for the amd64 architecture.
How many DVDs should I burn in order to install the operating system?
Note: the connection is slow as hell, which is why I chose the DVD solution.
|
Debian 6 installation DVD required
|
In my experience, always install Windows as the first OS. Otherwise it will overwrite the boot loader of the previously installed OS. There are ways around it, but these just make it more complicated.
After installing Windows, install your first Linux distribution. It will normally find your Windows installation and add it to its boot loader automatically, so you can dual-boot Windows and Linux.
Now comes the second Linux distribution (the third OS). Some distributions find other distributions and will add them to their boot loader (I don't know for sure about SUSE and Red Hat). Just try it during your installation. If all OSes are recognized, install the boot loader of your third OS; otherwise boot into your first Linux distribution and add the second one manually to its boot loader. As the type and version of the boot loader depend on the distribution, I can't tell you how to do it, but you'll find some good tutorials on the net.
|
Is there any existing step by step guide instructing how to install 2 different Linux OSs (say, Red Hat and SUSE), and Windows OS on the same machine?
(When tried it I entangled with the partitions configuration, and I've heard from others that the secondary Linux has to be installed without its boot loader).
|
Step-by-step guide for installation of 2 different Linux OSs and Window OS - on the same computer
|
Either you are patient and stick with what you have, or you find an official backport, or you find some unofficial backports, or you build your own package. The details depend on the particular package.
For example, in the case of PostgreSQL, you can either wait a few more weeks until the package officially enters some Ubuntu version, at which point also official backports will appear, or in the meantime you can get unofficial packages (albeit from the same packager) at https://launchpad.net/~pitti/+archive/postgresql.
Building your own packages from scratch or installing from source is probably not recommendable for the type of rather complex software that you mention, unless you are mainly interested in learning the internals rather than using them in production.
|
I have installed Ubuntu Server 10.10 and now I want to install some software like PostgreSQL, Nginx and PHP. But what is the preferred way to get the latest stable version of the software?
E.g I tried with sudo apt-get install postgresql but that installed version 8.4 of PostgreSQL but 9.0.1 is the latest version.
I have had this issue before with NginX. The solution was then to download the sourcefiles and compile the latest version which took some time. Later a friend told me that wasn't a preferred way to install software.
Any recommendations?
|
What is the preferred way to install new versions of software?
|
The packages are cryptographically signed, and the yum package installer does check those signatures when you add packages after the fact. The initial installer, however, does not check package signatures. This is a difficult problem, because: how do you verify that the cryptographic signatures you have on your install media are good when you don't, by definition, trust that install media?
See this Fedora bugzilla entry for history and details. This is the oldest bug still open in Red Hat's database, and it's so old that it's only three digits. (New bugs are now numbered well into the six hundred thousands.)
But, the entire install DVD is checksummed, and you can verify that that's good externally before starting your install against checksum files which are cryptographically signed. So, if you're very concerned (and in this day and age, it's good to be), do a non-network install after verifying the ISO you download against the GPG key from the official Fedora Project web site.
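A hedged sketch of that external verification, after importing the Fedora GPG key from the project website (the CHECKSUM file name is illustrative and varies by release and architecture):
gpg --verify Fedora-14-x86_64-CHECKSUM     # verify the signature on the checksum file
sha256sum -c Fedora-14-x86_64-CHECKSUM     # compare the downloaded ISO against it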
So to answer your three questions: yes, sort of, and yes.
|
I am installing Fedora 14 and I am wondering ifthe Fedora packages are cryptographically signed
package signatures are checked by the installer by default
package signatures are checked by yum when installing additional packages or doing upgrades
|
Are packages cryptographically signed in Fedora 14?
|
Have you tried installing without using any on-line repositories, just the CDs? Maybe it's failing to download a package for some reason. Install from just the local media, then use yum to update afterwards.
|
While trying to install Amahi and Fedora 14 from 5 discs, I get an error. It says:
A fatal error occurred when installing the module-init-tools package. This could indicate errors when reading installation media. Installation cannot continue.
When prompted, I opted to install the Fedora repository, the Fedora Updates repo, and added an Amahi repo. I didn't select the Fedora Test Updates repo.
The checksums of the downloaded files are correct, and the media test approved all 5 of my CDs. What else could be the problem?
|
Why is my Fedora 14 install failing?
|
I don't know if that functionality is offered by typical installers, but it is easy enough to do from a live CD (or live USB or whatever). Both SystemRescueCD and GParted Live have the required tools readily available (there are undoubtedly many other suitable live distributions). Note that you need to boot from a separate system as ext3 filesystems cannot be shrunk while mounted.
You can use the GParted GUI to shrink the filesystem by up to 20GB or so, and resize the existing logical volume accordingly. Then, when you install another distribution, you will be able to create a logical volume in the free space. Note that not all distributions support installing to a logical volume (all the “serious” ones do, of course); for Ubuntu, you need the server installer (as opposed to the desktop installer with snazzy graphics but fewer options).
If you can't or don't want to use a GUI, here's an overview of how to do this on the command line:
pvscan to detect physical volumes (if not already done during boot).
vgimport vg_token to import the volume group (ditto).
vgchange -ay vg_token to make the logical volumes accessible.
resize2fs /dev/vg_token/lv_root 72G (or whatever size you decide on).
lvreduce -L 72g /dev/vg_token/lv_root (this must be the same size as the filesystem; remember that with LVM tools, lowercase units are binary (k=1024) and uppercase units are decimal (K=1000)).
vgchange -an vg_token; vgexport vg_token; reboot.
|
My question is almost a duplicate of this question, but not quite because that one is about ext3 and I am already using LVM. I have an older HP Pavilion laptop running Fedora 11. I chose Fedora because it was semi-compatible with the hardware and it ran VMware well... but since I no longer need VMware I am looking to test out other distros and find one that's more compatible. (Specifically looking for software suspend support and maybe something more lightweight)
I'd like to try out a few new distros without hosing the existing (working) Fedora setup. Since I am using LVM, is it possible to reduce the size of my LVM LV and then install new distros into the volgroup, without the new distros destroying the Fedora setup? Here's how my LVM is set up now:
[root@token ~]# /sbin/lvm lvdisplay
--- Logical volume ---
LV Name /dev/vg_token/lv_root
VG Name vg_token
LV UUID JPCDlb-HHW7-fMDy-h8p2-Itbp-hwfK-3CwN97
LV Write Access read/write
LV Status available
# open 1
LV Size 91.96 GB
Current LE 23542
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/vg_token/lv_swap
VG Name vg_token
LV UUID 3JMF4u-3jXx-Xy6H-saNt-Aljh-6Idw-73O4IS
LV Write Access read/write
LV Status available
# open 1
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

[root@token ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_token-lv_root
91G 68G 24G 75% /
Are there distros which will allow me to install into a new logical volume without destroying the existing one? If so, which ones, and how would I go about making room for the new LV?
|
Can I alter my Fedora LVM LV to install a new distro as a dual-boot?
|
Most of the significant distributions (including Fedora and Ubuntu) prefer to install from a bootable CD-ROM or USB stick these days. Windows need not be part of the process at all.
Wubi is a Windows application that can run Linux from a Windows file pretending to be a boot disk. Its purpose is to have zero impact on the Windows system:
You keep Windows as it is, Wubi only adds an extra option to boot into Ubuntu. Wubi does not require you to modify the partitions of your PC, or to use a different bootloader, and does not install special drivers. It works just like any other application.
The Fedora LiveCD and Ubuntu LiveCD and even the tiny DSL LiveCD are the simplest installation methods.
|
I am using Fedora 13, and it was my friend who did the installation for me. I have used Ubuntu also and I found it easier to install than Fedora. Ubuntu uses a Wubi installer (if I am correct) and it's easier for users to install and remove Ubuntu. A person who knows how to install and remove an application in Windows can install/remove Ubuntu, too.
Why is it that it's not the same with Fedora? Are there any steps being taken to make it more user-friendly?
|
Fedora vs. Ubuntu installation
|
As of today I have successfully installed this distribution and can use it as if it were Arch :) Below is the simplest way to do so (a rough command sketch follows below):
Install Arch on the hard drive
Remove everything in / (on the local disk), except for /boot
Mount the root-image.sqfs image in the linuX-gamers live DVD and copy everything inside it to /
Repeat the previous step with the overlay.sqfs image
Steps 2, 3 and 4 may have to be performed with a live CD. Further customization is needed, but the system can boot and function correctly after step 4.
This answer is really specific to this live DVD and thus not applicable to other live CD/DVD.
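A hedged sketch of steps 3 and 4, run from a live CD; the device name, mount points and the "keep /boot" detail are illustrative assumptions, not literal instructions:
mount /dev/sdXN /mnt/target                    # the freshly installed Arch root (everything but /boot removed)
mkdir -p /mnt/sqfs
mount -t squashfs -o loop root-image.sqfs /mnt/sqfs
cp -a /mnt/sqfs/. /mnt/target/                 # copy the live system over the install, leaving /boot in place
umount /mnt/sqfs                               # then repeat with overlay.sqfs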
|
I tried out the linuX-gamers live DVD and like it so much that I want to have it as the main operating system on my desktop. The FAQ says:Can I install or copy the medium to my hard disk?
No, the software is only designed to boot from a live medium.However, as the live DVD is said to be based on Arch Linux I think it is pretty much possible to put it onto the hard drive. A painful way to do this would be to install Arch and try to make it look like this DVD. The download page says that the ISO is isohybrid, I don't know if it makes any difference.
Is there a (reliable) way to turn an ISO like that into a working installation? I wouldn't mind spending a few days to mess with it.
|
How to install from a Linux live CD that does not support installing?
|
There are three kinds of ISOs: first the DVD ISO, which is the best suited for you, I think. Then there is a set of CD images, which only make sense to download if you need physical discs but don't have a DVD burner, and thirdly the netinstall ISO, which you seem to have downloaded.
To find a mirror which has the DVD isos directly available for download, have a look at http://www.centos.org/modules/tinycontent/index.php?id=30
A network installation means that the ISO only includes the installer, and all packages to be installed are downloaded from the net during the install. If you have little experience with Linux, the easiest option is to get the DVD ISO. With it, the installation is pretty easy. Then of course the techotopia.com link doesn't apply any more, as it only explains a network install.
If you need an installation guide (again, if you use the graphical installer it is pretty self-explanatory) you might have a look at the official guide for 5.2. The installation process of 5.5 should be identical to 5.2. See http://www.centos.org/docs/5/html/5.2/Installation_Guide/
Finally, if you run into problems and need live, interactive help, you might try the #centos channel on Freenode IRC. If you can wait some minutes, it's of course better to ask here.
|
I'll probably be in here a bit over the next few months.
The only exposure I've had to Linux is just some basic dabbling with Ubuntu and using Knoppix as a recovery DVD for when things go wrong.
I need to setup a VM with CentOS 5.5 64bit.
I downloaded the iso from the main site.
Have mounted it with VMWare workstation.
This link seems to be the best one I can find on getting things setup.
However, it differs from some of the YouTube videos I have seen installing 5.4.
Should I be doing a Network Installation?
There's a step in the above guide that points to a mirror to pull the image from a webserver. Surely the image is on the ISO I downloaded?
However, I do not see any other option to install other than network install.
Can anyone tell me where I am going wrong? I am sure it is something silly.
|
Am I installing CentOS 5.5 right?
|
Since I use CentOS, which is a RHEL variant, the rpm command will need to be executed in terminal to accomplish this (I believe so)
While RPM is used to work with the actual packages, RHEL and friends now use yum to make it less tedious.
Yum lets you install software through repositories, local or remote collections of RPM packages and index files, and handles dependency resolution and the actual fetching & install of the files for you.
You can find the list of repositories configured on your machine by peeking in the /etc/yum.repos.d/ directory.
However, to use the wget command to download the package, I will need a url that points to the package. How should I find this url?
By finding the appropriate .rpm file and downloading it? Or perhaps I don't understand what your question is. Regardless, if you're grabbing RPM files from somewhere on the internet, they're probably going to also have a yum repo set up, in which case it would be far more prudent to actually install their repo package first.
Hilariously, you do this by downloading and installing an RPM file.
My personal research has shown that there are sites like rpm.pbone.net (the only one I know off) to search for these packages
While that site lets you search many known RPM packages, and you might find some handy bits and pieces there, I wouldn't try using it for things you care deeply about.
EPEL is a handy repository.
You can also take a peek at atrpms and RPMForge, though use them with caution. They are sometimes known to offer package replacements that may end up causing the worst sort of dependency hell ever experienced. It took me a few weeks to sort out a mess that someone made with clamav.
If you use either of those repositories, please consider setting their "enabled" flag to 0 in their config files in /etc/yum.repos.d/ and using the --enablerepo=... command line switch to yum.
Given that version 5.0.2 is available for Fedora (another RHEL variant), where is the latest version of firefox for CentOS?
There are two bad assumptions here.
First, you have the Fedora/RHEL relationship reversed. RHEL is generally based on Fedora, not the other way around. RHEL 5 is similar to Fedora 6. Any packages built for Fedora 6 have a high chance of operating on RHEL 5. However, Fedora is bleeding edge, and releases have a 12-month lifespan. Nobody is building packages for Fedora 6 any longer, it went end of life back in 2007ish.
Second, if you're trying to use CentOS 5 as a desktop OS in this day and age, you're insane. It's prehistoric. In fact, for a while modern Firefox versions wouldn't even run on CentOS 5 because of an outdated library. That's now resolved. Mozilla provides official (non-RPM) builds suitable for local installation and execution that you can use instead. Just head over to http://getfirefox.com/ for the download.
CentOS, being based on RHEL, inherits RHEL's packaging policy. RHEL never moves to newer non-bugfix versions of anything, as their goal is general stability. For example, CentOS 5 will be stuck with PHP 5.1, PostgreSQL 8.1, Perl 5.8 and Python 2.4 forever. RHEL sometimes provides newly named packages with newer versions, like python26 and php53, so that system administrators that expressly want new versions can kind of have access to them.
I am unsure which package should I download to upgrade firefox.
You almost certainly will not find such a package. If you want FF5 on CentOS 5, you should probably do a local installation of the official binaries from Mozilla.
I am currently, just for practice, searching for the mozilla firefox and vlc's latest releases.
atrpms currently seems to offer vlc. (I would not recommend simply grabbing the RPM from that page and installing it, but using yum to install it from the atrpms repo.) The official VLC RHEL download page recommends RPMForge instead, though they're shipping an older version there. Yes, that means that both of them offer vlc. Remember how I recommended setting enabled to 0? Yeah, this is why.
I want to take a moment to re-emphasize that you should not try using CentOS 5 as a desktop OS right now. Red Hat's update policies indicate that RHEL 5 will stop getting non-bugfix updates at the end of the year, and stop getting anything but security and critical bug fixes at the end of next year. It'd basically be like installing XP on a new machine.
RHEL 6 has been out for a while. The CentOS folks had to completely redo their build environment in order to accommodate it. Apparently the CentOS 6 images are being distributed to mirrors now, or so their QA calendar suggests. We'll see. Regardless, it would be a slightly better idea for a new installation today, if you expect the machine to have a long life in production.
On the other hand, if you're seriously looking at Linux on the desktop, consider a distribution that keeps itself up to date with modern software, like Fedora itself or even something Debian-based like Ubuntu. Ubuntu has a lot of mindshare in desktop installs, and it seems like apt repositories (apt is their yum-like tool) are far, far more easily found than yum repositories.
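A hedged sketch of the repo-based workflow described above, rather than hunting for RPM URLs by hand (package and repo names are only illustrative):
yum search firefox                        # search the configured repositories
yum info firefox                          # show the candidate version
yum install firefox                       # fetches the RPM and all dependencies (run as root)
yum --enablerepo=rpmforge install vlc     # one-off use of a repository you normally keep disabled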
|
Backstory: Recently, it was explained to me that to upgrade any package via terminal on a linux machine, I will need to use the distributions package management system to install or upgrade the package.
Since I use CentOS, which is a RHEL variant, the rpm command will need to be executed in terminal to accomplish this (I believe so). Therefore if I need to upgrade or install a package I will first use the wget command to download the package and then the rpm command to install it. The process is clear till here!
The actual question: However, to use the wget command to download the package, I will need a url that points to the package. How should I find this url?
My personal research has shown that there are sites like rpm.pbone.net (the only one I know off) to search for these packages. A search for 'firefox' (selecting search for rpms by name) as the keyword has given results for a whole lot of distributions. CentOS 5 is listed on page 3 but the latest version seems to be 3.6.18. Given that version 5.0.2 is available for Fedora (another RHEL variant), where is the latest version of firefox for CentOS? I am unsure which package should I download to upgrade firefox.
Plea: It'll be great if someone can point out how should I go about searching for packages to install on CentOS. Is there an official site for CentOS packages similar to rpm.pbone.net (which I believe is unofficial). I am currently, just for practice, searching for the mozilla firefox and vlc's latest releases.
|
How should I search for packages to install on CentOS 5.5?
|
You have the ssh program. You don't have the package called ssh.
Ubuntu splits ssh into two packages: openssh-server and openssh-client. The reason for the split is that many people just need the client, not the server. Having the server installed and running when you don't want it isn't just a (tiny) waste of resources, it's a security risk if you have weak passwords.
There's also a package called ssh. It's intended as a way to say “I just want ssh, all of it, don't bother me with the details”.
APT suggests a package P if one of the packages it's installing suggests P and P isn't installed yet. A suggestion means that
the listed packages are related to this one and can perhaps enhance its usefulness, but that installing this one without them is perfectly reasonable.
(in the words of the Debian Policy Manual, which defines the packaging format introduced by Debian and also used by Ubuntu).
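A quick, hedged way to check which of these packages you actually have, and what a given package suggests (openssh-client is just the example here):
dpkg -l openssh-client openssh-server ssh
apt-cache show openssh-client | grep '^Suggests:'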
|
On Ubuntu Desktop 10.04.1, I have just run apt-get install to install a package. Amongst the list of suggested packages is ssh.
I'm confused, because I am sure that I already have ssh installed:
# ssh --version
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
[-D [bind_address:]port] [-e escape_char] [-F configfile]
[-i identity_file] [-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-R [bind_address:]port:host:hostport] [-S ctl_path]
[-w local_tun[:remote_tun]] [user@]hostname [command]
So...
When apt-get install suggests packages, does it take my existing set-up into consideration before making the suggestion? If so, why does it suggest something that I already have?
|
Why does 'apt-get install' suggest packages that I already have?
|
Why don't you consider Fedora? Perhaps you think it cannot be installed with a single CD? It has always been possible to install Fedora from the first ISO in the series. That was quite misleading, but if you read the description carefully enough there is text that says something like "Only the first CD is required, the rest are just additional software".
In Fedora 14, however, the confusion is solved because the website makes it quite clear that you install Fedora using one CD only. I guess you can still get the rest of the CDs in the series somewhere, but I don't think you will ever need to.
Update: I'm not sure about /etc/init.d or /etc/profile.d, but Fedora should be the closest thing to RHEL, excluding RHEL itself.
|
Can someone suggest a RHEL-like distribution that can install from a single CD? I am looking for a distribution where services are started from /etc/init.d, environment variables are set from scripts in /etc/profile.d and so-on (meaning that Ubuntu doesn't do the job).
A network installer isn't an option...
I think in terms of the OS's behaviour, CentOS would do the trick, except I can't find a single CD installation.
Any suggestions?
|
Single CD install of a RHEL-like distribution?
|
You should always avoid building Python yourself, unless you have a very good reason to. You don't want to mess with the Python environment provided by your distro. If you are just tinkering, always do it in a restricted environment, e.g. a VM or virtualenv. Also, why are you installing from source? Python 2.7 ought to be available from your distro, even if not by default (e.g. if you are running Debian 6, enable the Testing repository to get it).
|
I installed Python 2.7 from source. A dependency of some packages is python. Is there a way I can prevent the install of a lesser version of Python, or let apt know it's already been fulfilled?
|
Prevent a package from being installed?
|
Chroot alone doesn't bring any kind of security. In other words, treat a chroot as if the chrooted processes could access everything on the system — because often they do. See also chroot "jail" - what is it and how do I use it? — note in particular Michael Mrozek's remark that "chroot jail" is a misnomer that should really die out.
Chroot is a containment method for files only, and it's more of a convenience than a security feature. If you have a process that lets untrusted users specify file names (an FTP server, for example), chroot is a way to make sure that the users aren't going to be able to reference files outside the chroot directly. You should make sure that the chroot doesn't contain any file that could lead to an escape; in particular:
Put only the bare minimum of device files (/dev/*) in the chroot. Don't bind-mount /dev, for example you don't want block devices there. Only put tty devices and the miscellaneous data devices (/dev/null, /dev/zero, /dev/urandom, …) — see the sketch at the end of this answer.
Don't mount /proc. This is a big constraint, but /proc exposes a lot of information by design. For example, if you have a process 1234 running as a certain user outside the chroot, then any process (chrooted or not) can access the root directory as /proc/1234/root.
A chrooted process can still send signals to non-chrooted processes, open network sockets, access shared memory (on Linux, nowadays, only if /dev/shm is available), etc. If you're using chroot for containment, don't run any process outside the chroot as a user who's running processes inside the chroot.
Chroot remains a good way to run a different version of the same OS (with the same kernel)¹. When there are security concerns, there are better tools nowadays, in particular FreeBSD jails and Linux cgroups and LXC. Compared with the old days, full virtualization (VirtualBox, KVM, …) has also become a more viable option even on commodity hardware.
¹ By the way, in my answer there I explain how to not start services inside a Debian chroot. This isn't a security concern, and there's an assumption that the services are cooperative and correctly written.
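As a rough sketch of the "bare minimum of device files" point above — the chroot path is only an example, and the major/minor numbers are the standard Linux ones:
CHROOT=/srv/sid-chroot          # wherever your debootstrap target lives (example path)
mkdir -p "$CHROOT/dev"
mknod -m 666 "$CHROOT/dev/null"    c 1 3
mknod -m 666 "$CHROOT/dev/zero"    c 1 5
mknod -m 666 "$CHROOT/dev/random"  c 1 8
mknod -m 666 "$CHROOT/dev/urandom" c 1 9
mknod -m 666 "$CHROOT/dev/tty"     c 5 0
Debootstrap normally creates a minimal /dev along these lines for you already; the point is simply not to bind-mount the host's full /dev on top of it.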
|
This page describes how you can use the debootstrap utility to install a base Debian unstable/sid system on an existing Linux machine. The new install is accessible using chroot.
When doing this, what security issues should be kept in mind? For example, what needs to be done to stop background/startup processes from starting in the new chroot or otherwise interfering with the main system?
|
Debian unstable chroot security issues
|
Moving that comment to its own answer, looks like your /etc/apt/sources.list is faulty. Edit it to remove the line that contains debian-security, and replace it with
deb http://ftp.nl.debian.org/debian/ lenny main contrib non-free
for the main distribution,
deb http://security.debian.org/ lenny/updates main contrib non-free
for security updates, and
deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
For so-called 'volatile' updates, then run apt-get update; apt-get -uf upgrade to bring your entire system up to date, and then try installing php5-cgi again.
(ETA: You can replace 'nl' with your own country code to get servers a little closer to your physical location and hopefully better download speeds)
|
I have a VPS with Debian GNU/Linux on it. I'm trying to install a PHP file manager so that people could access it and download stuff into a directory.
I don't have anything in my /bin about PHP so this is probably an issue.
I installed PHP with this command:
apt-get install php5 php5-cgi php5-cli php5-gd php5-mysql libapache2-mod-php5
and it says
Reading package lists... Done
Building dependency tree
Reading state information... Done
php5 is already the newest version.
Package php5-cgi is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package php5-cgi has no installation candidate
It's saying php is already installed? I try to confirm this by typing php -v and it says
Command php not found
Why is this and how can I get php running?
|
Fixing debian installer
|
I can think of different ways to do what you want. All of which carry a level of risk and difficulty. The main risk being that if the install goes wrong/breaks you will end up with an unbootable system that needs installing manually.
My main thought (which depends on your boot loader and similar) would be to use exactly the procedure that you have now. Basically copy the new install image onto your USB stick which is permanently left in the machine. Then just reboot and let it boot from that and install normally.
It relies on the following:
Hands-off install. I'm assuming that you have that, otherwise an overnight reinstall would not be a problem.
Your boot loader being able to choose between USB or local filesystem boot automatically (or via an application level command before rebooting).
At the end you need to configure your boot loader to boot from the local board rather than the USB device, or just erase the contents of the USB device/make it unbootable so that the bootloader falls over.
An alternative to that would be to have two boot/root partitions on your board and just install into the one that you aren't using and at the end of the reboot force your bootloader to boot into the other. You could use a chroot environment to force your installer to think that it was booting from scratch. That is probably a big change in your environment though and would not be a quick win.
|
I am developing an embedded Linux system. The system is usually installed by creating a ISO file which is written to a USB stick the board can boot from.
To make the installation possible to do automatically (say, over night) I would like to be able to do the installation on the board while the old system is running.
My installation has two parts: an initrd file which contains busybox and install scripts, and a .tar.gz archive that has the rest of the root file system to install.
The bootloader loads the kernel, points it to the initrd, and boots the kernel.
The initrd install scripts mount the target drive /dev/sda, format it, install the bootloader, and finally copy the root file system from the .tar.gz and the initrd.
Now I instead want to:
Copy install.iso from the host computer to the target device. (No problem)
Do the installation steps as above.
My problem is that I don't know how I should go about replacing the currently running system with my new one. I assume that the currently mounted root (/) would have to be unmounted and replaced by the initrd. But I can't seem to figure out how!
|
Swap root at runtime
|
You can safely leave the swap partition as is, it can be shared among different distros. The root partition definitely has to be wiped, as you expect.
The home partition is somewhere in the middle. Of course your data and settings will not harm the new installation, but a difference in configuration options may give you weird errors.
A better approach is to back up the home partition somewhere, then install the new distro (wiping the home partition on the way). When you are done with installing the new distro, simply recover from the backup.
Or, if you don't like backups, just cross your finger and install it that way, keeping the home partition. In the case of errors, try creating a new user to check. If the new user does not have the problem then you know you have to clean up your configurations :) I don't like this approach because it's less clean. Arch and Ubuntu are so different that I'm quite sure there will be lots of unused dot files in your home directory.
|
Suppose I have previously installed Ubuntu with root, home, and swap partitions, and now I want to change distro to Arch Linux. Is it the case that I only need to wipe my root partition and install Arch Linux there instead?
|
When changing distro
|
The simple answer is no, and this goes back to at least 2005. If you are doing this en masse with grub, you should still be able to specify the mirror path in the boot options, just like you can specify a path to a Kickstart file.
Some brilliant examples of how to do this can be found on the Fedora Infrastructure wiki pages; mainly just -x "method=<path to RPM directory>" should do the trick.
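As a sketch of what that can look like when booting the installer from an existing grub entry — the paths and mirror URL below are only examples; any Fedora 14 mirror with the usual releases/14/Fedora/<arch>/os/ layout should work:
title Install Fedora 14 (network)
    kernel /fc14/vmlinuz repo=http://mirrors.kernel.org/fedora/releases/14/Fedora/x86_64/os/
    initrd /fc14/initrd.img
With grub2 the same repo= option just goes on the linux line instead.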
|
I installed Fedora 14 using a slightly non-standard method, i.e. loading the install media vmlinuz/initrd.img files via an existing grub2 instance. (I fetched them from a mirror).
The installer works fine, but I was a little bit surprised that after selecting the network install route I had to manually enter the URL of a FC14 mirror.
Luckily, I am having a secondary computer with network access available for looking up mirror URLs.
Does the FC14 not include any default install mirror urls? Or am I missing something?
|
Does the Fedora installer not include default URLs for installation mirrors?
|
The question lies in the SATA3 controller. This forum thread answers the question.
http://ubuntuforums.org/showthread.php?t=1456238
In summary, in the BIOS change the SATA3 controller mode to AHCI, this should allow linux to find and use the drive.
|
I'm building a new rig and got the RealSSD C300 for its supposedly stellar performance, but it is not recognized when I try to install Ubuntu 10.4LTS 64-bit. Is there anything that I can do to get this recognized?
|
Installing Ubuntu, how do I get it to recognize the Crucial RealSSD C300?
|
try a
sudo apt-get clean
Your locally cached package files may be out of date or corrupt; cleaning the cache forces apt to download them again.
|
I've tried to install emacs on Ubuntu server 64bit 11.04, but it's complaining about emacs23-common. If I do:
sudo apt-get -f install
I still see the same problem:
Errors were encountered while processing:
/var/cache/apt/archives/emacs23-common_23.2+1-7ubuntu2_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
How can I solve this situation?
|
apt-get has unmet dependencies, but apt-get -f install doesn't solve problem
|
It should be straightforward to make a persistent installation directly on a USB stick, as if it was an internal disk. Plug in your Ubuntu installation media (I recommend not putting it on the same stick, so that the two are bootable separately), your USB stick, and point the installer to the stick.
The server installer (alternate CD) supports creating and installing to an encrypted partition (with dm-crypt).
|
Is it possible to create a single-user USB installation (with persistence) of Ubuntu Linux such that the entire USB stick is encrypted and requires a passphrase at boot time?
Is there an online tutorial for this?
|
USB Ubuntu with whole-disk encryption
|
If you just need any 2.6.34-kernel, you might head over to koji and try to find a precompiled one for you version of fedora. You can install it as root after downloading all required rpms with yum localinstall kernel-*.rpm and it will automatically appear in Grub.
If you need to modify the kernel, it is best to also start with the distribution kernel and modify it to suit your needs. There is an extensive howto in the fedora wiki.
Lastly, if you really need to start from scratch with the sources from kernel.org, you have to download the source and extract the archive. Then you have to configure the kernel. For this, say make menuconfig for a text-mode interface or make xconfig for a graphical configuration. You might want to start with the old configuration of the running kernel, see Recompile Kernel to Change Stack Size.
When you are finished configuring, say make to build the kernel, then make modules to build kernel modules.
The following steps have to be done as root: Say make modules_install to install the modules (this will not overwrite anything of the old kernel) and finally make install which will automatically install the kernel into /boot and modify the Grub configuration, so that you can start the new kernel alongside the old one.
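Put together, the whole sequence described above looks roughly like this (the version number is just an example; adjust it to the source you downloaded):
tar xf linux-2.6.34.tar.bz2
cd linux-2.6.34
cp /boot/config-$(uname -r) .config   # optional: start from the running kernel's config
make oldconfig                        # answer prompts for options new in this version
make menuconfig                       # optional: adjust the configuration interactively
make                                  # build the kernel image
make modules                          # build the modules
# as root:
make modules_install                  # installs under /lib/modules/2.6.34/
make install                          # copies the kernel to /boot and updates GRUB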
|
I need to install another kernel (2.6.34) into my fedora machine (x86) and i need to show the old and new boot up options in the boot menu (both new and old kernel)
I have downloaded the new kernel and i need to compile it and need to build it.
can you explain me the steps for doing that?
I got the correct steps from this discussion and am having doubts in the steps 6 and 7 in the below link which explains the installation of new kernel.
http://www.cyberciti.biz/tips/compiling-linux-kernel-26.html
Also can you explain the effective configuration of 'menuconfig' and its what it actually aims?
|
Installing new kernel (by commandline) as side of old kernel and effective configuration of ' menuconfig'
|
Firestarter hasn't been in the fedora repository since Fedora 11. It's listed as a deprecated package on their wiki. It's recommended that you use system-config-firewall instead.
Do you have a particular reason for wanting to use firestarter? If you really want to install it you can try grabbing the source from sourceforge.
|
I am feeling stupid. I have been searching for 3 hours with no success.
I installed Fedora 14 and tried to do yum install firestarter but the package was not found.
I also tried from a GUI and found nothing.
Is it in Fedora's repository?
Maybe I should configure the repository, but everything I found is dated. Any help? Thank you.
|
I can't figure out how to install Firestarter Fedora
|
Yeah sorry about that. :)
It is possible, but is only easy if you made /home be a separate partition. Despite my best efforts, this isn't the default.
You don't have a lot of files yet, though, do you? I think the best bet is to boot into single user mode and copy the contents to a USB memory stick. That should be easy.
You'll need to mount it manually -- plug it in, wait a few seconds, and then type dmesg and note the device that it says was inserted. Then, mount that with:
mount /dev/sdc /mnt
replacing sdc with whatever dmesg said. (You may need sdc1, depending on how the device was formatted).
Then, change to the root directory (cd /) and run
tar cJvf /mnt/mattdm-is-sorry.tar.xz /home
and when that completes, run
sync; sleep 3; umount /mnt
(The sleep is for superstition.)
The reason for tar rather than just copying is to preserve the Unix metadata, because the USB drive will be FAT formatted, and we don't want to mess with that right now.
Then, once you have your system repaired (I still recommend the F15 alpha!), you can extract it with tar xf /mnt/mattdm-is-sorry.tar.xz. If you do that in / as root, it'll overwrite everything in your new /home, so probably the best thing to do is boot the new system into single user mode and do that first thing.
Oh, and this time, while you're installing, make /home its own partition. :)
|
Is it possible to reinstall Fedora (I have the DVD that I used to install it yesterday), and keep the files in my home directory?
I seem to have messed up my system while trying to get my monitor resolution to work correctly: Installed Fedora in dual boot Windows desktop. Now I can't get full monitor resolution with my AMD Radeon HD 6450
The step that caused my problem was yum --enablerepo=rawhide upgrade kernel xorg-x11-drv-ati xorg-x11-drv-ati-firmware, so I'm looking to either figure out how to get fedora to boot, or just reinstall fedora, but keep the files I've set up so far.
|
Reinstall Fedora, keep files?
|
You have the user libraries installed, but you also need to install the developer libraries and header files.
Taking ao as an example:
The normal user package includes files like:
/usr/lib/libao.so.4.0.0
/usr/lib/libao.so.4
whereas the developer package includes files like:
/usr/include/ao/ao.h
/usr/include/ao/os_types.h
/usr/include/ao/plugin.h
/usr/lib/pkgconfig/ao.pc
And it's the second set of files you're missing.
I'm not familiar with SUSE's YaST2, but the commands should look something like
yast2 --install libao-devel.
And the same for the other packages of course.
One way to double check the name of the RPM to install is to go to rpmfind.net and paste one of the missing file names in, e.g. /usr/lib/pkgconfig/ao.pc. It will give you a list of RPMs: look for the OpenSUSE 11.3 one and use that name when running yast2 --install.
UPDATE
According to Using zypper to determine what package contains a certain file, you can use zypper rather than needing to use rpmfind.net.
Try this:
zypper wp ao.pc
(untested)
Also, on an RPM-based system, you might find it better to try searching for an RPM .spec file, and build using that.
I found a focuswriter spec file on the OpenSUSE web site.
Then if you build using rpmbuild, it should give you an error telling you which packages you still need to install so you can build it.
This also has the advantage of giving you an RPM you can easily install, upgrade, and uninstall, which uses the SUSE recommended build options.
|
I'm trying to compile some software (FocusWriter) on openSUSE 11.3, (linux 2.6.34.7-0.5-desktop). (I can't find an actual download link to the alleged openSUSE RPM...just lots of metadata about the RPMs). So I unpacked the source from git, and, following instructions, ran qmake. I get this:Package ao was not found in the pkg-config search path.
Perhaps you should add the directory containing `ao.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ao' found
Package hunspell was not found in the pkg-config search path.
Perhaps you should add the directory containing `hunspell.pc'
to the PKG_CONFIG_PATH environment variable
No package 'hunspell' found
Package libzip was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzip.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzip' found
Package ao was not found in the pkg-config search path.
Perhaps you should add the directory containing `ao.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ao' found
Package hunspell was not found in the pkg-config search path.
Perhaps you should add the directory containing `hunspell.pc'
to the PKG_CONFIG_PATH environment variable
No package 'hunspell' found
Package libzip was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzip.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzip' found
I know that all those packages are in fact installed, according to both YaST and zypper. /usr/lib64/ contains files such as libao.20.2 and libzip.so.1 and libzip.so.1.0.0 -- but nowhere on the harddrive can I find anything called ao.pc, hunspell.pc, or libzip.pc.
Any suggestions what I'm missing here?
Thanks.
|
qmake looking for lib files named *.pc
|
You can always use another tool, like GParted, to set up your partitions and then return to the Arch installer and proceed with installing the packages and setting up the rest of your system.
In terms of the filesystems, the Beginners' Guide has a thorough section on what partitions you need (basically, you can get away with just /, but are better off with a minimum of / & /home) and the benefits and drawbacks of the filesystem types for each.
|
I'm installing Arch Linux, the x86-64 net install.
During the part where the hard drive is being processed/erased, I get:
"Warning: Could not create all needed filesystems.
Either the underlying blockdevices didn't become available in 10 iterations, or process filesystem failed."
How do I fix that?
How do I know which filesystem to use?
|
arch linux installation, "could not create filesystems"
|
You're doing it the right way. It may be that the device /dev/sda1 doesn't exist yet. You also probably don't need to specify -t ext3 since that should be default. I don't expect having it would cause any problem though.
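If you want to check whether the device node is actually there before mounting, something along these lines should help (the commands are generic, not specific to the crunchbang installer):
cat /proc/partitions      # is sda1 listed at all?
ls /dev/sda*              # does the device node exist?
# if only /dev/sda shows up, partition the disk and create the filesystem first:
# fdisk /dev/sda          # create a primary partition
# mkfs.ext3 /dev/sda1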
|
I am installing crunchbang linux (#!) to my eeePC and it is unable to start the disk partitioner. I traced the problem to partman and partman-lvm, which state
No volume groups found.
So I have done some snooping, and I can get around that part of the installer (that just hangs) if I can mount my future root partition to /target and then go from there.
However, I'm having a lot of trouble with the mount command.
I want to mount /dev/sda1 to /target. /dev/sda1 is ext3.
When I try
mount -t ext3 /dev/sda1 /target
it states:
mount -t ext3 /dev/sda1 /target/ failed: Invalid argument.
To get a place (/target) I simply did mkdir /target. Perhaps this is not the proper way to do this?
Thanks =)
|
mount root fs to /target
|
Because the full debian distribution for even a single architecture now well exceeds seven DVDs, and the packages on each DVD are sorted by popularity, not by common theme.
Every single installation manual strongly recommends installing from a minimal CD or USB image (100 MB or less, generally) and installing over the internet, or a local apt proxy if you have multiple systems to set up.
In addition to that, the DVDs do not contain the latest versions of all packages -- security updates aren't automatically integrated into the DVD images until the next point release. Regenerating the entire image set is expensive (remember there's roughly seven DVDs per architecture for a dozen or so actively supported architectures), and you'll see why the development team prefers not to do live rebuilds every single time a package is updated.
|
I know Debian is general purpose, so a Debian install could just as easily be a server as a desktop. It seems, however, that even with a DVD of disk1 from Debian, it is downloading a tremendous number of files from the network mirror. I would have thought a DVD would be enough to get a base desktop installation running. What am I missing here?
Let me clarify: I chose "SSH Server" and "Desktop Environment" in tasksel. Is GNOME not a part of the primary installation DVD?
|
Why does installing Debian from DVD download many packages?
|
What you do is look up the processor architecture. This is an intel processor of the x86 family, supporting 64-bit mode (you can see this very clearly in the list of supported operating systems, for example). So you need either i386 or amd64.
Almost all current servers, and most desktops for that matter, have amd64 processors anyway (also known as x86_64 or a number of variations, not to be confused with ia64 which is completely different). All amd64 processors also support i386 instructions.
|
When I try to install FreeBSD I always get confused as to which iso is the correct one for Rack mounted servers. In their download choices, there are too many versions of FreeBSD
Which one is the correct one to burn in CDROM or USB drive to install?
|
Which FreeBSD version or ISO for Dell R310?
|
You can use ipset save/restore commands.
ipset save manual-blacklist
You can run the above command to see how the save file needs to be formatted.
Example output:
create manual-blacklist hash:net family inet hashsize 1024 maxelem 65536
add manual-blacklist 10.0.0.1
add manual-blacklist 10.0.0.2
And restore it with the command below.
ipset restore -! < ips.txt
Here we use -! to ignore errors, mostly caused by duplicate entries.
|
I am using iptables with ipset on an Ubuntu server firewall. I am wondering if there is a command for importing a file containg a list of ip's to ipset. To populate an ipset, right now, I am adding each ip with this command:
ipset add manual-blacklist x.x.x.x
It would be very helpful if I could add multiple IPs with a single command, e.g. by importing a file.
With the command
for ip in `cat /home/paul/ips.txt`; do ipset add manual-blacklist $ip; done
I get this response
resolving to IPv4 address failed to parse 46.225.38.155
for each IP in ips.txt.
I do not know how to apply it.
|
How to import multiple ip's to Ipset?
|
Turns out richard was right. The list:set type is indeed the solution although I find the wording in the documentation somewhat confusing, if not misleading.
It is possible to have, say the following contents to be used with ipset restore:
create dns4 hash:ip family inet
create dns6 hash:ip family inet6
create dns list:set
add dns dns4
add dns dns6you can then use ipset add to add IPs to the member sets (i.e. dns4 and dns6 respectively), but not to the super set (dns) of type list:set.
However the SET (-j SET --add-set dns src --exist) target can actually be told to add the IP to dns and will then only add to the set for which it's possible, which in our case depends on the family option. This will be harder with more sets that could be eligible for adding and IP (or network or ...) in which case the first one will be used to add the entry.
This means that list:set can be used to halve the number of rules where otherwise you'd have to match an IP set per IPv4 and IPv6 rule respectively with an otherwise identical rule.
|
Is it possible to have one IPv4 and one IPv6 IP set (ipset(8)) within the same rule?
I have several rules that depend on one set of IPv4 addresses and another set of IPv6 addresses respectively, but are otherwise identical.I should add that there is a feature in ipset(8) which sounded hopeful, but turns out to offer no solution to the problem at hand:
list:set
The list:set type uses a simple list in which you can store set names.
[...]
Please note: by the ipset command you can add, delete and test the
setnames in a list:set type of set, and not the presence of a set's
member (such as an IP address).
|
Is there a way to match an inet and inet6 IP set in a single rule?
|
You lost the rules because you have to save them after adding them, before restarting the service or the server: when you add a rule it exists only in memory; once saved, the rules are written to a file and restored from that file at start-up.
So first you need to save the added rules using:
$ /etc/init.d/iptables save
This will save all rules in /etc/sysconfig/iptables; then just enable the iptables service at start-up using:
$ chkconfig --level 53 iptables onMethod 2
To save rules:
$ /sbin/iptables-save > /etc/iptables.rulesTo restore rules [ Add Below entry in /etc/rc.local ]:
$ /sbin/iptables-restore < /etc/iptables.rules
|
I have one single ipset added to my iptables on a CentOS 6.x box and this rule is lost when the machine reboots.
I've found this answer showing how to make a Ubuntu system reload the iptables rules after a reboot but this directory is not present on CentOS.
How do I make this CentOS box load the firewall rules after a reboot?
NOTE: Yes, I'm saving the rules using iptables save and the file is being saved.
This is what is inside /etc/sysconfig/iptables:
# Generated by iptables-save v1.4.7 on Mon Apr 8 09:52:59 2013
*filter
:INPUT ACCEPT [2713:308071]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1649:1766437]
-A INPUT -p tcp -m multiport --dports 25,587,465,110,143,993,995 -m state --state INVALID,NEW,RELATED,ESTABLISHED -m set --match-set blocking src -j DROP
COMMIT
# Completed on Mon Apr 8 09:52:59 2013
The command shows -A INPUT but when I created it I used -I INPUT.
The rule used to create this was:
iptables -I INPUT -p tcp -m multiport --dports 25,587,465,110,143,993,995 -m state --state NEW,ESTABLISHED,RELATED,INVALID -m set --set blocking src -j DROP
|
iptables rules not reloading on CentOS 6.x
|
Turns out it is possible, using the SET target described in iptables-extensions(8).
SET
    This module adds and/or deletes entries from IP sets which can be defined by ipset(8).
    --add-set setname flag[,flag...]
        add the address(es)/port(s) of the packet to the set
    --del-set setname flag[,flag...]
        delete the address(es)/port(s) of the packet from the set
        where flag(s) are src and/or dst specifications and there can be no more than six of them.
    --timeout value
        when adding an entry, the timeout value to use instead of the default one from the set definition
    --exist
        when adding an entry if it already exists, reset the timeout value to the specified one or to the default from the set definition
    Use of -j SET requires that ipset kernel support is provided, which, for standard kernels, is the case since Linux 2.6.39.
I hadn't found it, because I hadn't searched further down after finding the set module description.
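Tying this back to the blacklisting idea from the question, a minimal sketch could look like this (set name, port and thresholds are just examples; the set must exist before the rules reference it):
ipset create blacklist hash:ip timeout 600
# track new connections to port 22 per source with the recent module
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name sshprobe --set
# after 4 new connections within 60 seconds, put the source into the ipset
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name sshprobe --update --seconds 60 --hitcount 4 -j SET --add-set blacklist src
# drop everything from blacklisted sources
iptables -I INPUT 1 -m set --match-set blacklist src -j DROP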
|
In iptables-extensions(8) the set module is described and it is discussed that it is possible to react to the presence or absence of an IP or more generally a match against an IP set.
However, it does not seem that there is a way to append items to an IP set on the fly using an iptables rule.
The idea being that if I use the recent module, I could then temporarily blacklist certain IPs that keep trying and add them into an IP set (which is likely faster). This would mean less rules to traverse for such cases and matching against an IP set is said to be faster as well.
|
Can iptables rules manipulate IP sets?
|
Let's say we have an ipset named MYTESTSET, and that this ipset is of type hash:ip. It will store just IP addresses.
Then match against your IP set, and after that against the connlimit match extension, with the parameters you want.
iptables -A INPUT -p tcp \
  -m set --match-set MYTESTSET src \
  -m connlimit --connlimit-above 1 --connlimit-saddr --connlimit-mask 32 \
  -j DROP
This will do the following: for each source inside the IP set, connections will be counted, and if there is more than one (--connlimit-above 1) the packet will be dropped, thus limiting the number of connections per source in the ipset to 1. (You can also match the other way, using --connlimit-upto xxx and -j ACCEPT instead of DROP.)
If you want to consider the whole set and allow 1 connection for all sources in the ipset then set the --connlimit-mask switch to 0.
|
connlimit lets me limit the number of connections per client/service. How would I go about to combine such a rule with the IP sets available in more recent versions of the Linux kernel and netfilter?
|
How to combine connlimit with IP sets?
|
Turns out the man page describes what I was looking for. It's aptly called timeout and can be specified when adding entries to an IP set. I missed it due to a search for wrong terms.
A default timeout value can be given when creating a set and later for each entry added - if it is desired to override the set default.
Examples from ipset(8):
ipset create test hash:ip timeout 300
ipset add test 192.168.0.1 timeout 60
ipset -exist add test 192.168.0.1 timeout 600
|
I am trying to establish a whitelist of clients that have successfully logged into the system, using ipset. What options do I have to let an entry age so that I can later discard it based on its age?
Is there a better method than the idea outlined below?
I have not found anything provided by ipset directly, so I am trying to establish whether or not such a facility exists within the scope of ipset/iptables.
Right now the only idea I have come up with is to use a cronjob that swaps the list every X minutes or hours. So as an example I'd have a list whitelist which is active, plus a list for the next hour (say for 21:00 whitelist_21), if I am some time between 20:00 and 20:59. Any client connecting now would be added to the active whitelist and to the whitelist for the next hour (or a given period). Then at each full hour (or given period) a cronjob - e.g. at 21:00 in the above case - swaps the existing whitelist for the whitelist_21 one and disposes of the (now renamed) whitelist. E.g.:
ipset swap whitelist whitelist_21
ipset destroy whitelist_21
|
How can I let ipset entries "age"?
|
You can add another ipset to block, this time of type hash:net, and add 197.192.0.0/16 to that ipset. Or replace your ipset with one of type hash:net since hash:net can store IP addresses as well (netmask 32).
To convert from hash:ip to hash:net:
ipset save myIpset > myIpset &&
ipset destroy myIpset &&
sed s/:ip/:net/ myIpset | ipset restore &&
ipset add myIpset 197.192.0.0/16
|
I have this range of IPs, 197.192.x.x, that is brute force attacking my pop/imap/smtp servers day after day.
I have this ipset in place that is blocking every IP that tries to hack on my server.
I would like to block access for pop/smtp/imap for all IPs starting with 197.192
To do this, I have typed this command:
ipset -A myIpset 197.192.0.0/24
but this added 65536 IPs to my ipset, making it huge, and now I cannot add more IPs to it.
Is there another way to do this in a more elegant way?
|
iptables... blocking a range without flooding ipset set with IPs
|
Your iteration is wrong. the correct syntax would be something like:
#!/bin/bash
sudo wget -O /var/geoiptest.txt http://www.ipdeny.com/ipblocks/data/countries/{ad,ae,af}.zone
while read ip; do
sudo ipset add geo $ip
done < /var/geoiptest.txt
|
This is my ipset shell script file like this
#!/bin/bash
for IP in $(wget -O /var/geoiptest.txt http://www.ipdeny.com/ipblocks/data/countries/{ad,ae,af}.zone)
do
# ban everything - block country
sudo ipset add geo /var/geoiptest.txt
done
I think the last line is faulty; how can I resolve that?
|
how to add a file to ipset in a shell script?
|
Have a look at iproute2.
You can easily configure multiple routing tables and define which network interface handles a connection, which covers your problem.
Here are some useful examples:
http://linux-ip.net/html/routing-tables.html
http://lartc.org/howto/lartc.rpdb.html
References:
http://man7.org/linux/man-pages/man8/ip-rule.8.html
http://man7.org/linux/man-pages/man8/ip.8.html
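As a minimal sketch of the kind of source-based policy routing those pages describe — the table name/number, interface and gateway below are hypothetical; only the client address is taken from your example:
# declare an extra routing table (one-off)
echo "100 alt" >> /etc/iproute2/rt_tables
# traffic from this source consults table alt instead of main
ip rule add from 192.168.1.40 lookup alt
# give table alt its own routes, e.g. a different default interface/gateway
ip route add default via 192.168.2.1 dev eth1 table alt
Note this only controls routing decisions; if what you actually need is to hand the queries to a different local listener (your second bind view), an iptables DNAT rule in the nat table's PREROUTING chain may be the more direct tool.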
|
I'm experimenting DNS server setup that reply different results based on source IP address. and the same time I need to dynamically change what interface external source ip should forward,
eth0 physical inteface 192.168.1.10
eth0: virtual interface 1 192.168.1.11
eth0:1 virtual interface 2 192.168.1.12I have bind9 install in my server with two views configured and both listening 192.168.1.11 and 12 respectively.
In my setup only external facing interface is eth0 and all the clients request DNS through it. I need to forward those request to my virtual interface based on my clients source IP address and change it dynamically.
as an example
for scenario 1
if user 192.168.1.40 query DNS through eth0 I need him to forward eth0: (192.168.1.11)
for scenario 2
same user (192.168.1.40) I need to forward to eth0:1 (192.168.1.1)
I want to achieve that external user can get different results by using the same dns server in two different times.
|
Forward traffic to virtual interface based on source IP address dynamically using iptables
|
You can add and remove IPs to your already defined sets on the fly. This is one of the ideas behind IPsets: if this wasn't possible, the whole set extension of iptables wouldn't make much sense.
The primary goal of ipset was to enable you to define (also dynamically) classes of matches (e.g. for dynamically blacklisting malicious hosts without the need to magically add one rule for every single host).
excerpt from the ipset homepage:
store multiple IP addresses or port numbers and match against the collection by iptables at one swoop
dynamically update iptables rules against IP addresses or ports without performance penalty
express complex IP address and ports based rulesets with one single iptables rule and benefit from the speed of IP sets
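If you want to convince yourself that an addition takes effect immediately, you can reuse the set and rule from your question (the test output wording is ipset's usual message):
ipset add PCs 3.3.3.3        # new entry is live right away, no swap needed
ipset test PCs 3.3.3.3       # prints "3.3.3.3 is in set PCs."
iptables -vL INPUT           # the set rule's packet counters now also count traffic from 3.3.3.3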
|
I was wondering about the semantics of ipset(8).
Is it possible to add a rule matching a set to iptables and then manipulate the set, or can I only create a set and swap it for an older set in order to apply it to an iptables rule matching the name? I.e. can I add/remove to/from an IP set ad hoc, or can I exchange whole sets while the sets are in active use?
The reason I ask is as follows. Say I create a set
ipset create PCs hash:ip
ipset add PCs 1.1.1.1
ipset add PCs 2.2.2.2
... et cetera. And a rule that allows access to HTTP:
iptables -A INPUT -p tcp --dport 80 -m set --set PCs src -j ACCEPT
What happens when I run:
ipset add PCs 3.3.3.3
Will the iptables rule now take immediate effect for IP 3.3.3.3 as well?
I saw it's possible to use -j SET --add-set ... to manipulate IP sets ad hoc from within iptables rules. This makes me think it should work to manipulate a set at any given point.
However, the ipset project site seems to suggest that swapping a new (adjusted) set for another is the better alternative. Be it via ipset swap or via ipset restore -!.
Can anyone shed light on this?
|
Do I have to swap IP sets, or can I add/remove on the fly?
|
iptables -A adds to the end of the chain (note that the long form of -A is --append). You probably have a rule similar to iptables -A INPUT -p tcp --dport XX -j ACCEPT near the top, which is interfering because it is being matched first when the rules are executed top to bottom.
There are two obvious ways to work around this:Use a separate blocking chain, which gets called before the service's accept rule. This is the approach I'd use, since if the jump to the block chain is properly conditioned, all those rules won't be tested if there is no need to. You'd need to work out which block chain to add to. Another upside is that if you need to, doing iptables -F smtp-blocks is much easier than manually finding and deleting each block to port 25. (The fact that you are using sets may alleviate this to some extent; I'm not too familiar with rule sets and what can be done with them.)
Replace iptables -A with iptables -I. Using -I inserts at the top (or before the specified index, if one is specified), ensuring that the blocking rule gets executed before the service accept rule.

My supposition is this: ipset works by adding an IP to a table and that IP will be blocked the next time that IP comes to the server but the connection will not drop if the IP is already connected to the server trying to attack. Is this right? If it is, is there a way to interrupt a connection going on?
|
|
I have a script running every minute by a crontab.
This script scans the system logs and grabs the IPs of every failed attempt to login on the server's dovecot, exim or ssh and add them to an ipset, blocking that IP forever.
The problem is this: the script runs every minute and is doing well what it is supposed to do, that is, grab the IP of attackers and add them to ipset, but I still have log entries of the same IP trying to attack the system for an hour.
In other words. Suppose someone tries to attack the system now. Within one minute the script will run and grab all IPs with more than 3 password failures and add them to an ipset. Even so, I have logs of IPs trying to brute force attack the site for hours and the connection is not interrupted.
My supposition is this: ipset works by adding an IP to a table and that IP will be blocked the next time that IP comes to the server but the connection will not drop if the IP is already connected to the server trying to attack. Is this right? If it is, is there a way to interrupt a connection going on?
NOTE: Just for the record: the commands I have used to add the ipset named blocking to iptables was like this:
iptables -A INPUT -p tcp --dport XX -m set --set blocking src -j DROP
where XX is the port I am blocking.
|
iptables is not blocking
|
So, you're not really using a RHEL kernel (and the fact you used apt-get makes me wonder if it is RHEL at all), but an OpenVZ container. OpenVZ containers rely on features provided by the hosting system's kernel, which in this case doesn't support ipsets. There's nothing you can install in the container that will make the OpenVZ hosting environment support it, you'll need to talk to your hosting provider to build a kernel with iptables/ipset support.
|
I'm using a VPS from a VPS provider that run a 2.6 kernel, RHEL with OpenVZ virtualization system. I want to use ipset utility to manage ip sets on my iptables firewall.
This is the error I'm getting when creating an ipset:
mindaugas@517713:~$ sudo ipset create cf_ipv4 hash:net
ipset v6.20.1: Cannot open session to kernel.
strace of the command: https://p.defau.lt/?NwzyZkxR_VgekwCRr6YlWg
Question: is it even possible to use ipset on such machine with these options? If so - how can I do it?
Mount output:
mindaugas@517713:~$ mount
/dev/simfs on / type simfs (rw,relatime,usrquota,grpquota)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
none on /dev type devtmpfs (rw,nosuid,noexec,relatime,mode=755)
none on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/cgroup type tmpfs (rw,relatime,size=4k,mode=755)
none on /run type tmpfs (rw,nosuid,noexec,relatime,size=275252k,mode=755)
none on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
none on /run/shm type tmpfs (rw,relatime)
none on /run/user type tmpfs (rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755)Here is the relevant information:
mindaugas@517713:~$ uname -r
2.6.32-042stab120.3
mindaugas@517713:~$ sudo rpm -qa
vzkernel-headers-2.6.32-042stab120.3.x86_64
mindaugas@517713:~$ ipset --help
ipset v6.20.1
Usage: ipset [options] COMMAND
mindaugas@517713:~$ sudo apt-get install xtables-addons-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
xtables-addons-common is already the newest version.
The following packages were automatically installed and are no longer required:
dmsetup grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common
libdevmapper1.02.1 libfuse2
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
|
RHEL: can't use ipset utility with error: cannot open session to kernel
|
iptables -A INPUT -p tcp --dport 25 -m set --set blocking src -j DROP
iptables -A INPUT -p tcp --dport 143 -m set --set blocking src -j DROP
... or whatever ports you're using.
|
I am using ipset in conjunction with iptables to create a list of IPs I want to block. I did this:
ipset -N blocking iphash
ipset -A blocking 124.205.11.230
// and repeated this line for all IPs I want to add to "blocking" listnow I have to add this rule to iptables
if I do this
iptables -A INPUT -m set --set blocking src -j DROP
the IPs will be blocked for everything: SSH, FTP, etc. I just want to block them from using my email system (dovecot, exim).
how do I do that?
|
Using iptables to block for specific services
|
Try doing this :
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
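To log only the hits you actually care about, you could restrict the LOG rule to the same set and ports as your DROP rules and insert it before them (the ports below are examples; keep your own list):
iptables -I INPUT 1 -p tcp -m multiport --dports 25,110,143 -m set --match-set blocking src -m limit --limit 5/min -j LOG --log-prefix "blocked: " --log-level 7
The messages end up in the kernel log, so they are visible with dmesg or in /var/log/kern.log / /var/log/messages depending on the distribution.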
|
Suppose I add an IP to iptables blocking for exim, dovecot and FTP and this IP visits my server again.
Is there any log of this visit so I can confirm the IP was trying to reach the server again but was blocked?
|
iptables and logs
|
You can't put different types of elements in the same set with the ipset command. But you can use different sets, one for each type (full list available with ipset help):
hash:ip
hash:ip,portFor example:
ipset create blocklistip hash:ip
ipset create blocklistipport hash:ip,port

ipset add blocklistip 192.0.2.3
ipset add blocklistipport 192.0.2.2,80
ipset add blocklistipport 192.0.2.3,udp:53Note like above that by default the protocol for the port is TCP unless explicitly stated otherwise (udp: for UDP, sctp: for SCTP, ...).
Now your script has to check what type of element it got, to know in what ipset it will add it. A simple example here would be to check for the , to know where to put it, while reading the list from the file blocklist.txt:
while read -r element; do
if echo $element|grep -q ,; then
ipset add blocklistipport $element
else
ipset add blocklistip $element
fi
done < blocklist.txtAnd you can block everything in the list for example with:
iptables -A INPUT -m set --match-set blocklistip src -j DROP
iptables -A INPUT -m set --match-set blocklistipport src,dst -j DROPAbove src,dst means use the source IP address along the destination port address in the packet when looking for a match in the hash:ip,port set.
Also, ipset has a special set list:set consisting of a list of other sets. This won't change the way to populate separately the sets using the ipset command, but you can do this:
ipset create blocklist list:set
ipset add blocklist blocklistip
ipset add blocklist blocklistipportand replace the two previous iptables rules with only the one below:
iptables -A INPUT -m set --match-set blocklist src,dst -j DROPwhich goes toward your goal: this single iptables rule will work correctly with set elements with or without a port, as documented in ipset.
|
Now I perform this:
create blockipset hash:ip
add blockipset 192.168.1.5 -exist
add blockipset 192.168.3.115 -existIs it possible for iptables and ipset to block ip,port and ip?
for example, the list contains:
192.168.1.5
192.168.3.115
192.168.1.55,80
192.168.1.53,22
|
iptables add ip,port and also IP
|
SSH connection was established before port 22 was added to ipset so conntrack should just skip all packets, allowing SSH to work.

This is not correct.
All packets will be processed through the filter rules, whether they belong to tracked connections or not.
It is a very common optimization of iptables rules to put something like this near the beginning of the relevant rule chain (FORWARD in your example):
iptables -t filter -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPTOn older distributions, you might see this version instead:
iptables -t filter -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT(I understand the conntrack match is now preferred over the state match. I think some kernel versions even nag at you about it.)
This will allow packets belonging to existing connections through, if a connection tracking information exists for them. But the point is, you can control where exactly you put that rule, or whether you use it at all. So you can make your firewall rules care about connection states as much - or as little - as you want.
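Applied to your setup, a sketch would be (using the set name from your question):
# packets of already-tracked connections are accepted first
iptables -t filter -I FORWARD 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# the blacklist rule then only ever sees new (or untracked) traffic
iptables -t filter -A FORWARD -m set --match-set BlackListPort dst -j DROP
With this ordering, adding port 22 to BlackListPort blocks new SSH connections, but an SSH session that was already established keeps working.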
|
I don't understand some basic concepts of conntrack module.
First of all, I'm sure it's enabled in my system (Ubuntu 18.04), modinfo shows info about nf_conntrack and /proc/modules file tells nf_conntrack is "live".
Second, I have the following test setup:
Machine A (192.168.1.2) <-----> Router Machine (192.168.1.1 & 192.168.2.1) <----> Machine B (192.168.2.2)
On Router Machine I have the following iptables rule:
iptables -t filter -A FORWARD -m set --match-set BlackListPort dst -j DROP
BlackListPort is an ipset table.
Now I establish an SSH connection from Machine A (1.2) to Machine B (2.2). After I confirm it works, I add port 22 (SSH default) to BlackListPort table.
SSH connection freezes/hangs until I remove port 22 from that ipset table.
Now the question: Since conntrack is present in my system, why SSH block is successful? SSH connection was established before port 22 was added to ipset so conntrack should just skip all packets, allowing SSH to work.
|
Conntrack and dynamic ipset/iptables rules
|
Your shell knows where to find executables (like ipset) by looking in your PATH, which is set by your environment. cron does not share the same environment. Adding this at the top of the crontab (or your script) should tell it where to find commands as you expect:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
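Alternatively, call ipset by its full path inside the script; where it lives varies by distribution, so check once from an interactive shell:
command -v ipset            # e.g. /usr/sbin/ipset or /sbin/ipset
# then in the script (the path below is just the result of that check):
/usr/sbin/ipset -quiet -A myIpset "${arrayOfIPS[$index]}"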
|
I have a txt file that contains IPs, one per line, that I want to block using ipset.
I have this bash script that essentially reads from the plain txt file and constructs an array. Then it iterates the array elements and add each one to the ipset I have created for that purpose.
The problem is this: if I execute the script manually from the terminal, it works perfectly, but when I add the script to run periodically using crontab, the script runs but the IPs are not added to the ipset.
This is the relevant part of the script.
index=0
while true; do
ipset -quiet -A myIpset ${arrayOfIPS[$index]}
index=$[$index + 1]
if [ "$index" -gt "$lastIndexOfArray" ];
then break
fi
doneThis works perfectly from terminal but not running from a crontab task.
why?
|
ipset not executing from crontab
|
I won't limit my answer to a tiny part of an algorithm because OP is also hitting implementation limits. The answer has to work with them.The routing stack, part of Linux' network stack is limited to handling only 33 CIDR netmasks equivalent to /32 (/255.255.255.255) till /0 (/0.0.0.0) .
Other parts of the network stack, such as some of the firewall facilities are not so limited. By using a netmask such as 255.0.0.255 one can ignore the 2 middle numbers parts in an IPv4 address. Alas, netmasks, which can easily handle power of two, or remainders of divisions by power of two, can't help with a modulo 10 (remainder of division by 10) and a division by 10 is not available in packet processing. So there are still 25 values between 9 and 249 to check and a list for these values is needed. Alas again, in addition ipset is also constrained to CIDR netmasks, and iptables' use of ipset doesn't allow to use a non-constrained netmask along: ipset can't be used at all.
Either use ~ 25 iptables rules (which isn't that much) or use the much more flexible nftables with a set.
With iptables
Use a netmask of 255.0.0.0 only to check the IPv4 address is starting with 10. and check the 25 possibilities for the last part. It's possible depending on the exact problem to solve that additional restrictions for the incoming interfaces to consider could be added.
iptables -t mangle -N specialrouting
iptables -t mangle -A PREROUTING -s 10.0.0.0/255.0.0.0 -j specialrouting
To populate the user chain specialrouting with the 25 possibilities for example use this bash script:
#!/bin/bash
iptables -t mangle -F specialrouting # for idempotence
for i in {9..255..10}; do
iptables -t mangle -A specialrouting -s 10.0.0.$i/255.0.0.255 -j MARK --set-mark 0xcafe
done
In the end, the 2^16 (for the two ignored middle bytes) x 25 = 1638400 possibilities will get a mark.
Had the modulo been 16 (a power of 2), thus filtering the 16 values 9, 25, 41, 57 ... 249 instead of the 25 modulo 10 values then everything above could have been replaced with a single rule because a netmask can handle any modulo that is a power of 2. Here the netmask 255.0.0.15 (0xff00000f) can be used with the single rule below:
iptables -t mangle -A PREROUTING -s 10.0.0.9/255.0.0.15 -j MARK --set-mark 0xcafe
Or instead with nftables
With the elements list of the set pre-computed with a script similar to above, the ruleset with the modulo 10 cases would be (to be loaded with nft -f specialrouting.nft):
specialrouting.nft:
table ip specialrouting # for idempotence
delete table ip specialrouting # for idempotence

table ip specialrouting {
set modulo10 {
type ipv4_addr
flags constant
elements = { 10.0.0.9, 10.0.0.19,
10.0.0.29, 10.0.0.39,
10.0.0.49, 10.0.0.59,
10.0.0.69, 10.0.0.79,
10.0.0.89, 10.0.0.99,
10.0.0.109, 10.0.0.119,
10.0.0.129, 10.0.0.139,
10.0.0.149, 10.0.0.159,
10.0.0.169, 10.0.0.179,
10.0.0.189, 10.0.0.199,
10.0.0.209, 10.0.0.219,
10.0.0.229, 10.0.0.239,
10.0.0.249 }
} chain prerouting {
type filter hook prerouting priority -150; policy accept;
ip saddr & 255.0.0.255 == @modulo10 meta mark set 0xcafe
}
}For modulo 16 the single nftables rule (without its boiler plate) would have been similar to the single iptables rule above:
ip saddr & 255.0.0.15 == 10.0.0.9 meta mark set 0xcafe
Using the mark for routing
The mark set on the packet (only during the internal life of this packet in Linux' network stack) can then be used as a message for the routing stack and be matched with a routing rule to use an alternate routing table:
ip rule add fwmark 0xcafe lookup 100
Followed by routes in table 100, the most important ones including dev eth5 in them, in addition to including any other involved routes copied from the main routing table:
ip route add ... dev eth5 table 100
...
Return traffic from eth5 should probably use the same table:
ip rule add iif eth5 lookup 100
Additional notes
At least when testing, to avoid traffic dropped because of incomplete routes, SRPF, ie rp_filter, should not be enabled (in case the distribution default enables it):
for i in $(sysctl -N -ar 'net\.ipv4\.conf\..*\.rp_filter'); do sysctl -w $i=0; done
There are other possibilities when using marks, such as storing the mark in the conntrack entry of the flow, but I can't know what would be needed here without a precise description of the setup. This blog has a few examples: To Linux and beyond ! Netfilter Connmark.
Note that involving a mark to reroute locally initiated (or the same locally terminated) traffic rather than forwarded/routed traffic hits corner cases (and more for UDP than for TCP), so I didn't even try to address this case.
|
Senior programmer here but hate the linux networking limitations which make things difficult compared to all programming languages.
Practically I need to make policy based routing that allows specific lan ip addresses to pick specific outgoing interface (let's say eth5).
Even ipset is not powerful enough in my situation. I want to allow all lan ips that first octate is "10" and the last octate is ending up with "9". That would be 10.*.*.??9 or in javascript if(ip.match("\10\..*\.(\d+9|9)\g")) ...USE eth5
Does anyone knows some kind of trick to achive that ? That would be probably thounsands of IP CIDR if we have to stick with CIDR, which is insanity.
Thanks
|
ipset alternative or some kind of smart idea for wildcards
|
It turns out that this is working, after all. I am very, very sorry for the false alarm.
I incorrectly thought that it wasn't working properly for the following reason ...
I am using both postfix and dovecot, and I have set up postfix to use dovecot to perform its authentication services.
I have set up dovecot to write its debug and logging messages to a file called /var/log/mailclient.log, while postfix is configured to log via syslog.
I wasn't thinking clearly, and forgot about the fact that postfix's authentication attempts would also cause entries to appear in this same dovecot log file, given that dovecot is the one that is performing this postfix authentication.
I am only using my iptables rules to block pop3 and imap (ports 110, 143, 993, and 995), and I am not blocking postfix's ports.
Given this way in which I set up postfix authentication, there are entries in that /var/log/mailclient.log file for all of the postfix login attempts, as well as for the dovecot login attempts. I was not paying attention well when reading those entries in this log file, and I mistakenly thought that they were login attempts for pop3 and imap, instead of smtp login attempts. Therefore, I mistook these smtp login attempts (which I am not blocking) for pop3 and imap login attempts.
Once I understood my error, I more carefully examined and analyzed my dovecot log file, and I now realize that indeed, none of the pop3 nor imap connections are coming to dovecot, except for those which originate from the small subset of hosts which I have put into my "allowed-hosts" ipset list.
Therefore, the iptables entries that I have listed above are indeed working properly, after all.
Once again, I apologize for my false alarm, and I'm just glad that this is working.
Perhaps this question and discussion could help someone else in the future who might make the same mistake as I made.
|
I'm running Debian 8.11 with iptables v1.4.21 and ipset v6.23, protocol version: 6.
I'm trying to block access to certain ports for all but a small set of hosts, but it doesn't seem to be working.
First of all, I put a small list of IP addresses into an ipset list called allowed-hosts. Then, after running sudo /sbin/iptables -F and sudo /sbin/iptables -X, I issue the following commands:
sudo /sbin/iptables -I INPUT -p tcp -m multiport --destination-port 110,143,993,995 -j DROP
sudo /sbin/iptables -I INPUT -p tcp -m multiport --destination-port 110,143,993,995 -m set --match-set allowed-hosts src -j ACCEPT
However, even after doing this, clients from IP addresses that are not present in allowed-hosts are still successfully connecting to all of the named ports.
There are no other iptables rules in effect.
Here are the results of sudo /sbin/iptables -L ...
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere multiport dports pop3,imap2,imaps,pop3s match-set allowed-hosts src
DROP tcp -- anywhere anywhere multiport dports pop3,imap2,imaps,pop3s

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

And here are the results of sudo /sbin/iptables-save ...
# Generated by iptables-save v1.4.21 on Wed Jun 8 11:53:09 2022
*security
:INPUT ACCEPT [16777464:2727427757]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18889599:33356814491]
COMMIT
# Completed on Wed Jun 8 11:53:09 2022
# Generated by iptables-save v1.4.21 on Wed Jun 8 11:53:09 2022
*raw
:PREROUTING ACCEPT [21444955:3000669583]
:OUTPUT ACCEPT [18889599:33356814491]
COMMIT
# Completed on Wed Jun 8 11:53:09 2022
# Generated by iptables-save v1.4.21 on Wed Jun 8 11:53:09 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jun 8 11:53:09 2022
# Generated by iptables-save v1.4.21 on Wed Jun 8 11:53:09 2022
*mangle
:PREROUTING ACCEPT [21444955:3000669583]
:INPUT ACCEPT [21444952:3000669415]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18889599:33356814491]
:POSTROUTING ACCEPT [18889599:33356814491]
COMMIT
# Completed on Wed Jun 8 11:53:09 2022
# Generated by iptables-save v1.4.21 on Wed Jun 8 11:53:09 2022
*filter
:INPUT ACCEPT [2130649:527089827]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4465281:1887206637]
-A INPUT -p tcp -m multiport --dports 110,143,993,995 -m set --match-set allowed-hosts src -j ACCEPT
-A INPUT -p tcp -m multiport --dports 110,143,993,995 -j DROP
COMMIT
# Completed on Wed Jun 8 11:53:09 2022

What might I be doing incorrectly?
Thank you in advance.
**UPDATE**
First of all, "src" indeed is being specified, contrary to what was suggested in the comment below. It appears in the "... src -j ACCEPT" line, above.
Secondly, the syntax of these iptables commands that I am using comes from what is shown both in the iptables docs and in discussions that were found via web searches.
Thirdly, look above at the iptables -L output. This clearly shows that connections to the ports should be accepted from source=anywhere to destination=anywhere for the IP addresses in the allowed-hosts list. This also clearly shows that connections to the ports should be dropped from source=anywhere to destination=anywhere for the IP addresses that are not in the allowed-hosts list.
At least that's what iptables seems to be telling me. However, connections to these ports from IP addresses that are not in the allowed-hosts list are still being accepted on my machine.
Also, if I do ipset test allowed-hosts aaa.bbb.ccc.ddd, where "aaa.bbb.ccc.ddd" represents an IP address which is not in allowed-hosts, I properly get this following output:
aaa.bbb.ccc.ddd is NOT in set allowed-hosts.

And if I do ipset test allowed-hosts www.xxx.yyy.zzz, where "www.xxx.yyy.zzz" represents an IP address which is in allowed-hosts, I properly get this following output:

www.xxx.yyy.zzz is in set allowed-hosts.

Looking at the output from iptables-save, above, what else in my configuration could be causing these connections to ports not in allowed-hosts to be accepted?
Thank you again, in advance.
|
iptables not blocking access via ports?
|
In the first ruleset, you only allow outgoing traffic, since you specified -i $LAN: the replies will be filtered out. It will probably work by simply removing -i $LAN?
But in that case the whole traffic will be counted (upload + download).
If you want to count upload and download separately, you'll probably have to create two marking policies:

one for the upload, where the src MAC is marked
one for the download, where the dst MAC is marked.
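A minimal sketch of that two-rule idea, reusing the allow-mac set from the question (untested; in particular, whether the destination MAC is still visible to the set match for traffic heading back out to the LAN depends on the setup, so treat that half as an assumption to verify):

# Upload: frames arriving from the LAN, the client MAC is the source
$IPT -A FORWARD -i $LAN -m set --match-set allow-mac src -j ACCEPT
# Download: frames going back out to the LAN, the client MAC is the destination
$IPT -A FORWARD -o $LAN -m set --match-set allow-mac dst -j ACCEPT

Note that with a single set the ipset entry counters would still aggregate both directions, so per-direction totals would have to come from the per-rule iptables counters (iptables -L FORWARD -v) or from two separate sets.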
|
I'm building a captive portal (yeah, just-another ;) )
and now I'm trying to handle the core feature, the iptables rules.
Based on ipset I have a list of valid mac-addresses with name allow-mac.
So this is the current config (stripped down to the problem itself):

echo 1 >/proc/sys/net/ipv4/ip_forward

ipset create allow-mac hash:mac counters
ipset add allow-mac XX:XX:XX:XX:XX:XX

IPT="/usr/sbin/iptables"

WAN="eth0"
LAN="eth1"

$IPT -P FORWARD DROP
$IPT -t nat -A POSTROUTING -o $WAN -j MASQUERADE
$IPT -I FORWARD -i $LAN -m set --match-set allow-mac src -j ACCEPT

This should work, but it didn't! So if I change the default FORWARD chain policy to ACCEPT and change the rule to the inverse:
$IPT -P FORWARD ACCEPT
$IPT -I FORWARD -i $LAN -m set ! --match-set allow-mac src -j DROP

I have the desired result, and only clients with a known MAC address in the list can forward.
So my question: why is it not working in the first setup? And the second missing feature: the counters extension is already enabled, but only the "upload" traffic from the client is counted; how can I also count the download traffic (in a separate counter)?
|
iptables - allow forward rules by set
|
I think the problem is 81.212.0.0/14 have bigger IP count than 65535, maybe idk.

You may be exactly correct here. If you are using an IPset of type hash, it has a maximum number of elements it can store, settable by the maxelem parameter when creating the IPset... and the default value for maxelem is 65536. And if you use a bitmap-type set, 65536 addresses is the maximum size of the map.
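If the set really does need to hold the whole range, a hash:net set (or a hash:ip set created with a larger maxelem) sidesteps the element-count limit. A minimal sketch, reusing the set name from the question:

ipset create allowiplist hash:net
ipset add allowiplist 81.212.0.0/14

With hash:net the /14 is stored as a single entry instead of being expanded into individual addresses.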
But what are you using the IPset for? If you are simply matching against the whole /14 segment, a hash-based IPset will be much less efficient than a simple network address & mask-based match.
But if you are just setting up an initial set and planning to later selectively knock out specific IP addresses from it, then it would make sense to use an IPset.
Even so, if the number of knocked-out IPs is expected to be relatively small, it might be sensible to invert the sense of whatever you're doing and use a mask-based match as the general rule and the IPset-based match as exceptions to it.
Something like:
iptables -N maybeAllow81_212
iptables -A maybeAllow81_212 -m set --match-set denyiplist_81_212 src -j DROP
iptables -A maybeAllow81_212 -j ACCEPT

iptables -A INPUT -s 81.212.0.0/14 -j maybeAllow81_212

This way, any traffic that is not coming from within 81.212.0.0/14 can be processed in the main INPUT chain with essentially just two assembler instructions: one 32-bit AND and one 32-bit comparison. You cannot get much faster than that.
Any traffic from within that segment gets diverted to the maybeAllow81_212 subchain, which does the hash match (against an inverted, hopefully much smaller set!) and then allows everyone who doesn't match the set to pass.
|
I need to add the 81.212.0.0/14 IP range to an ipset, but it doesn't accept anything broader than /16.
I want to add the IP addresses from 81.212.0.0 to 81.215.255.255. Is there any other way besides /14?
I'm trying to allow connections from a specific IP range.
What I tried:

ipset -A allowiplist 81.212.0.0/14

What I expected: that this would allow connections from 81.212.0.0 - 81.215.255.255.
P.S.: All the other rules work fine except this one. I think the problem is that 81.212.0.0/14 has a bigger IP count than 65535, but maybe not, I don't know.
|
ipset How to add IP range from x to y
|
You can use the -C flag in iptables to check whether a rule referencing the set already exists:

sudo iptables -C INPUT -m set --match-set sshd src -j DROP

The return code is > 0 if the rule does not exist.
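A common idiom built on this (a sketch using the rule from the question): add the rule only when the check fails, which keeps a setup script idempotent.

iptables -C INPUT -m set --match-set sshd src -j DROP 2>/dev/null \
  || iptables -I INPUT -m set --match-set sshd src -j DROP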
|
My iptables output looks like this:
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 DROP all -- 0.0.0.0/0 0.0.0.0/0 match-set sshd src

Chain FORWARD (policy ACCEPT)
num target prot opt source destination

Chain OUTPUT (policy ACCEPT)
num target prot opt source destination

How can I check to see if this "sshd" set was already hooked up to iptables?
Thanks
|
how to check whether an ipset was hooked up to iptables
|
Thanks to @StephenHarris
The ipset command's output is generated on stderr (not stdout), and 2>&1 captures that output into the variable.

str=$(/usr/sbin/ipset test IPsetName 1.1.1.1 2>&1)

if [[ $str = *"The set with the given name does not exist"* ]]; then
    echo "IPsetName not found"
fi

Now this if statement works as expected!
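An alternative sketch that avoids parsing the error text altogether and relies only on the exit status:

if ! /usr/sbin/ipset list IPsetName >/dev/null 2>&1; then
    echo "IPsetName not found"
fi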
|
I am building a script to detect if IPSET exists.
#!/bin/bash
str=$(/usr/sbin/ipset test IPsetName 1.1.1.1)

echo "$str" #outputs blank line

if [[ $str = *"The set with the given name does not exist"* ]]; then
    echo "IPsetName not found"
fi

When I run this script I get this output:
ipset v6.29: The set with the given name does not exist
then a blank line for echo "$str" and I don't see the expected output from the if statement.
How to store the ipset command output to the variable?
|
ipset command's output not storing to variable
|
No.
Currently it is not possible to use an ipset to supply the SNAT mappings.
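So the per-host rules have to stay, but they don't have to be written out by hand. A hedged sketch that generates them from a file of pairs (the file name and format here are made up):

# /etc/snat-pairs.txt contains lines of the form: IP_LOCAL IP_PUBLIC
while read -r local public; do
    iptables -t nat -A POSTROUTING -s "$local" -j SNAT --to-source "$public"
done < /etc/snat-pairs.txt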
|
I've got many rules like:
-A POSTROUTING -s IP_LOCAL1 -j SNAT --to-source IP_PUBLIC1
-A POSTROUTING -s IP_LOCAL2 -j SNAT --to-source IP_PUBLIC2
...
...
-A POSTROUTING -s IP_LOCAL100 -j SNAT --to-source IP_PUBLIC100

Is there any possibility to make an ipset with a declaration like IP_LOCAL1:IP_PUBLIC1 and then have only one rule that references the ipset?
|
IPset and making firewall simple
|
Thanks to this comment, the problem is solved:
Instead of
iptables -A FORWARD -m set --match-set myset dst -j DROP
I had to use
iptables -A FORWARD -m set --match-set myset dst,dst -j DROP (two dst instead of one)
|
The thing is, I currently have 3 virtual test machines, Client1, ip 192.168.1.10, Client2, ip 192.168.2.20 and Router with ip 192.168.1.1 and 192.168.2.1 as gateway to connect Client1 and Client2.
On Router, I have hash:ip,port set, for example:
ipset add myset 192.168.2.20,tcp:80
ipset add myset 192.168.2.20,tcp:443
On Client2 I have nginx set up to listen on port 80, and I don't want Client1 to be able to connect to Client2 via TCP on port 80, so on Router I add a rule: iptables -A FORWARD -m set --match-set myset dst -j DROP
On Client1 I run wget 192.168.2.20/index.html and, due to my iptables rules on Router, Client1 shouldn't be able to connect to Client2 and grab index.html. However, it doesn't work and the file can be successfully retrieved.
The problem is only with hash:ip,port set type of ipset. If I choose hash:ip type and move protocol/port part to iptables out of ipset, everything works fine. However, I need to use exactly ipset to be able to swap block lists any time.
What am I doing wrong? All 3 VMs are running on Ubuntu 17.04, minimal installation, no GUI.
|
How to block certain protocols with ipset?
|
Yes, and no.
You didn't tell us which services you are running, imap(s), pop, smtp(s) etc., and whether they all use the default ports.
But to verify that you've collected all the necessary ports, run e.g. netstat -luantp to get a list of listening ports, then compare your list of ports against it.
Also, consider putting these rules into a single one:
iptables -A INPUT -p tcp -m multiport --dports 25,587,465,110,143,993,995 -m set --match-set bannedIPs src -j DROP
To save / restore ipset lists, try ipset save > ipset.rules and ipset restore < ipset.rules
|
I have created a ipset with a bunch of IPs that I want to block access to dovecot and exim.
The ipset is called "bannedIPs" and has been added to iptables using this:
iptables -A INPUT -p tcp --dport 25 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 587 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 465 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 110 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 143 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 993 -m set --set bannedIPs src -j DROP
iptables -A INPUT -p tcp --dport 995 -m set --set bannedIPs src -j DROP

My question is: are these rules correct? Will they block IPs in the bannedIPs ipset from accessing exim and dovecot on all ports of these services?
|
IP set to block access to exim and dovecot
|
I now realize what I was doing incorrectly. The following fix works for me:
% sudo /sbin/iptables -v -I INPUT -p tcp -m multiport --dports 110,143,993,995 -j DROP
% sudo /sbin/iptables -v -I INPUT -p tcp -m multiport --dports 110,143,993,995 -m set --match-set allow-list src -j ACCEPT

In other words, first accept the IPs in allow-list that connect to the listed ports, and then drop all other IPs that try to connect to those ports.
Also, I had originally left out the -p tcp option, which is needed when dealing with TCP ports.
UPDATE: Originally, I incorrectly used -A INPUT above. I have changed it to the correct -I INPUT.
FURTHER UPDATE: ... and with -I, I had to change the order of the rules: the DROP rule needs to be defined before the ACCEPT rule in this case.
|
I am using Debian 8 linux.
I'm trying to block input access to a few ports for most IP addresses, except for a small, select list of IP addresses. I am doing the following, but it does not seem to work:
% sudo /sbin/iptables -v -A INPUT -p tcp -m set '!' --match-set allow-list src -m multiport --dports 110,143,993,995 -j DROP

Whenever there is an access attempt to any of those ports from an IP address that is not in allow-list, that attempt is still succeeding.
These are the first few lines of allow-list:
% sudo /sbin/ipset list allow-list
Name: allow-list
Type: hash:net
Revision: 6
Header: family inet hashsize 16384 maxelem 262144
Size in memory: 687888
References: 2
Members:
125.8.0.0/13
160.94.0.0/15
104.37.68.0/22
205.233.22.0/23
[ ... more CIDR entries ... ]And this is the current iptables configuration:
% sudo /sbin/iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP tcp -- anywhere anywhere ! match-set allow-list src multiport dports pop3,imap2,imaps,pop3s

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

What am I doing incorrectly?
Thank you very much in advance.
|
iptables: Failure when trying to block port access for most IP addresses, except for a few
|
First create a whitelist, with a name (identifier) of choice. I named mine mylist in this example.
$ sudo ipset -N mylist iphash

In my whitelist, I want to allow 10.10.10.0/24 and 10.80.80.0/24 (and then drop everything not listed):

$ sudo ipset -A mylist 10.10.10.0/24
$ sudo ipset -A mylist 10.80.80.0/24

Drop any traffic from any host not defined in the whitelist:

$ sudo iptables -A INPUT -m set ! --match-set mylist src -j DROP

Then allow hosts defined in the whitelist to match whatever level of access is needed per your requirements.
|
I have a whitelist of ip addresses I'm storing in a ipset. I want to craft an iptables rule for my input chain where any IP NOT on the whitelist gets dropped immediately and no rules further down the chain get considered. If a ip matches an address on the whitelist then it continues down the Chain, checking other rules.
If I just put a default policy of DROP and an ALLOW rule based on the whitelist, IP addresses not on the whitelist might be compared against other rules in the chain and allowed through based on those criteria, which I do not want. I also don't want to immediately let through traffic matching the whitelist rule (I guess whitelist is a bit of a misnomer here) but, rather, subject that traffic to further scrutiny. Does iptables support this "DROP on not match" logic?
|
iptables: drop any ip not on whitelist, short circuiting chain
|
It sounds like you've got a decent grasp on what happened.
Yes, because you hard-powered-off the system before your changes were committed to disk, they were there when you booted back up.
The system caches all writes before flushing them out to disk. There are several options which control this behavior, all located at /proc/sys/vm/dirty_* [kernel doc]. Unless a flush is explicitly performed by an application via fsync() [man 2 fsync], the data is committed when it is either old enough, or the write cache is filled up.
The definition of "data" as used above includes modification to the directory entry to delete the file.
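For reference, the tunables mentioned above can be inspected like this (a sketch; these are the standard vm sysctls backing /proc/sys/vm/dirty_*):

sysctl vm.dirty_background_ratio vm.dirty_ratio \
       vm.dirty_expire_centisecs vm.dirty_writeback_centisecs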
Now, as for the journal, that's one of the common misconceptions of what the journal is for. The purpose of a journal is not to ensure changes get replayed, or that data is not lost. The purpose of a journal is to prevent corruption of the filesystem itself, not the files in it. The journal simply contains information about the changes being made, and not (typically) the full data of the change itself. The exact details are dependent upon the filesystem, and journal mode. For ext3/4, see the data mount option in man 8 mount.To answer your supplementary question of whether there's a way to prevent the pending writes without a reboot:
From doing a quick read through the kernel source code, it looks like you can use the magic sysrq u command ([wikipedia], [kernel doc]) to do an emergency remount-read-only operation. It appears this will immediately remount all volumes read-only without a sync operation.
To use this, simply press Alt+SysRq+u.
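If the keyboard combination is not available (for example on a remote machine), the same emergency remount can be triggered through procfs, assuming the SysRq interface is enabled:

echo 1 > /proc/sys/kernel/sysrq    # enable SysRq if it isn't already
echo u > /proc/sysrq-trigger       # emergency remount read-only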
|
Classical situation: I ran a bad rm and realized immediately afterwards that I had removed the wrong files. (Nothing critical and I had tolerably recent backups, but still annoying.)
Knowing that further disk activity was my enemy if I wanted to recover the files with extundelete or such tools, I immediately powered the machine down physically (i.e., with the power button, not with halt or any such command). This was a laptop with no important tasks running or anything open, so it was an acceptable operation. (By the way, I learned since then that the first thing to do in such a situation would be to estimate first if the missing files may still be opened by a process https://unix.stackexchange.com/a/101247 -- if they are, you should recover them this way rather than power down the machine.)
Still, once the machine was powered down I thought for a while and decided the files were not worth the time investment of booting a live system for proper forensics. So I powered the machine back up. And then I discovered that my files were still sitting on disk: the rm hadn't been propagated to disk before I had powered down. I did a little dance and thanked the god of sysadmins for His unexpected forgiveness.
My question is now to understand how this was possible, and what is the typical delay before an rm is actually propagated to disk. I know that disk IO isn't flushed immediately but that it sits in memory for some time, but I thought that the disk journal would make sure quickly that pending operations do not get entirely lost. https://unix.stackexchange.com/a/78766 seems to hint at a separate mechanism to flush dirty pages and to flush journal operations but does not give sufficient detail about how the journal would be involved for a rm, and the expected delay before operations are flushed.
Some more details: the data was in an ext4 partition inside a LUKS volume, and when booting the machine back up I saw the following in syslog:
Sep 24 10:24:58 gamma kernel: [ 11.457007] EXT4-fs (dm-0): 1 orphan inode deleted
Sep 24 10:24:58 gamma kernel: [ 11.458393] EXT4-fs (dm-0): recovery complete
Sep 24 10:24:58 gamma kernel: [ 11.482475] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)but I am not confident it is related to the rm.
Another question would be whether there is a way to tell the kernel not to perform any of the pending disk operations (but rather, say, dump them somewhere), rather than powering the machine down. (Of course, it sounds dangerous not to perform the pending operations, but this is what would happen when powering the machine down anyway, and in some cases it could save you.) This would be "cleaner", of course, and also interesting for e.g. remote servers where physical powerdown is not an easy option.
|
Why did powering down my machine after a bad `rm` save my files?
|
There are no guarantees. A Journaling File System is more resilient and is less prone to corruption, but not immune.
A journal is simply a list of operations which have recently been done to the file system. The crucial part is that the journal entry is made before the operations take place. Most operations have multiple steps. Deleting a file, for example, might entail deleting the file's entry in the file system's table of contents and then marking the sectors on the drive as free. If something happens between the two steps, a journaled file system can tell immediately and perform the necessary clean-up to keep everything consistent. This is not the case with a non-journaled file system, which has to look at the entire contents of the volume to find errors.
While this journaling is much less prone to corruption than not journaling, corruption can still occur. For example, if the hard drive is mechanically malfunctioning or if writes to the journal itself are failing or interrupted.
The basic premise of journaling is that writing a journal entry is much quicker, usually, than the actual transaction it describes will be. So, the period between the OS ordering a (journal) write and the hard drive fulfilling it is much shorter than for a normal write: a narrower window for things to go wrong in, but there's still a window.
Further reading from archived IBM pages:

Anatomy of Linux journalling filesystems
Anatomy of ext4
|
I am asking this question on behalf of another user who raised the issue in the Ubuntu chat room.
Do journaling filesystems guarantee that no corruption will occur if a power failure occurs?
If this answer depends on the filesystem, please indicate which ones do protect against corruption and which ones don't.
|
Do journaling filesystems guarantee against corruption after a power failure?
|
Don't get misled by the fact that only writeback mentions internal filesystem integrity.
With ext3, whether you use journal, ordered or writeback, file system metadata is always journalled and that means internal file system integrity.
The data modes offer a way of control over how ordinary data is written to the file system.
In writeback mode, metadata changes are first recorded in the journal and a commit block is written. After the journal has been updated, metadata and data write-outs may proceed. data=writeback can be a severe security risk: if the system crashes while appending to a file, after the metadata has been committed (and additional data blocks allocated), but before the data has been written (data blocks overwritten with new data), then after journal recovery that file may contain blocks filled with data from previously deleted files – from any user.
So, if data integrity is your main concern and speed is not important, data=journal is the way to go.
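For reference, the fstab line from the question with the journaling mode made explicit would look something like this (a sketch):

/dev/sda2 /data ext3 auto,exec,relatime,sync,barrier=1,data=journal 0 2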
|
I have an embedded setup using an initramfs for the root file system but using a custom ext3 partition mounted on a compact flash IDE drive. Because data integrity in the face of power loss is the most important factor in the entire setup, I have used the following options to mount (below is the entry from my /etc/fstab file
<file system> <mount pt> <type> <options> <dump><pass>
/dev/sda2 /data ext3 auto,exec,relatime,sync,barrier=1 0 2

I came by these options from reading around on the internet. What I am worried about is that the content of /proc/mounts gives the following:
/dev/sda2 /data ext3 rw,sync,relatime,errors=continue,user_xattr,acl,
barrier=1,data=writeback 0 0

From what I understand from reading around, I want to use the data=journal option for my mount, as this offers the best protection against data corruption. However, the man page for the ext3-specific mount options says the
following about the writeback option:

Data ordering is not preserved - data may be written into the main
filesystem after its metadata has been committed to the journal.
This is rumoured to be the highest-throughput option. It guarantees
internal filesystem integrity, however it can allow old data to appear
in files after a crash and journal recovery.

I am very confused about this - the man page seems to suggest that for file system integrity I want to specify the data=writeback option to mount, but most other references I have found (including some published books on embedded Linux) suggest that I should be using data=journal. What would be the best approach for me to use? Write speed is not an issue at all - data integrity is, though.
|
What mount option to use for ext3 file system to minimise data loss or corruption?
|
Yes, it is possible, but you passed the wrong switch to journalctl.
According to journalctl(1) man page:To read messages with a given syslog identifier (say, "foo"), issue journalctl -t foo or journalctl SYSLOG_IDENTIFIER=foo;
To read messages with a given syslog facility, issue journalctl SYSLOG_FACILITY=1 (note that facilities are stored and matched using their numeric values).More generally, the syslog identifier and facility are stored in the journal as separate fields (SYSLOG_IDENTIFIER and SYSLOG_FACILITY). If you ever need to access the journal from, say, the C API, you will have to add matches on these fields directly.
The journalctl -u switch is used to add a match on the name of a systemd unit which owned the process which has generated the message. So this is the wrong switch to use.
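Concretely, for the identifier and facility used in the question, something like this should work (assuming the standard facility numbering, where LOG_LOCAL1 is facility 17):

journalctl -t slog
journalctl SYSLOG_FACILITY=17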
|
I was wondering if it is possible to pull the log messages for a particular log with systemd's journal logging. For example, when I open a log in C with openlog("slog", LOG_CONS | LOG_PID, LOG_LOCAL1), can I pull just the messages logged under "slog" or LOCAL1?
When I do something like journalctl -u slog or journalctl -u LOG_LOCAL1, it just tells me when the log begins and ends, not the actual log messages.
|
Pulling log messages for a particular log in systemd journal?
|
The nointegrity option has no direct relation to atime, noatime, relatime or nodiratime. You can choose only one of the time options for files. Using noatime implies nodiratime, so noatime will make all files and directories noatime.
On my system I cannot find the nointegrity option for ext4. Please check man mount in the section for ext4 to find the options available for it. The only journaling modes ext4 allows are journal, ordered and writeback. If you don't want possible filesystem corruption on a crash, do not use writeback.
So, for an SSD, make sure the discard option is enabled (it is by default). It will probably be safer to use relatime. The noatime may be infinitesimally faster but there is some risk of some programs failing to work correctly.
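As an illustration, an fstab line following that recommendation might look like this (device and mount point are placeholders):

/dev/sdXN / ext4 defaults,relatime,discard 0 1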
In ext4 there is no nointegrity option, but, in any case, do not use it if you care about having reliable data (you have been warned!).
|
What is the difference between nointegrity, noatime and relatime? And what is the best option for an SSD? I am using ext4 as my filesystem. And why can data loss occur if I disable journaling on my system? Can I use, for example, nointegrity & noatime together in fstab, or is only one option accepted?
Thank you!
|
Difference between nointegrity, noatime & relatime
|
When a file or directory is "deleted" its inode number is removed from the directory which contains the file. You can see the list of inodes that a given directory contains using the tree command.
Example
$ tree -a -L 1 --inodes .
.
|-- [9571121] dir1
|-- [9571204] dir2
|-- [9571205] dir3
|-- [9571206] dir4
|-- [9571208] dir5
|-- [9571090] file1
|-- [9571091] file2
|-- [9571092] file3
|-- [9571093] file4
`-- [9571120] file5

5 directories, 5 files

Links
It's important to understand how hardlinks work. This tutorial titled: Intro to Inodes has excellent details if you're just starting out in trying to get a fundamental understanding of how inodes work.
excerpt

Inode numbers are unique, but you may have noticed that some file name and inode number listings do show some files with the same number. The duplication is caused by hard links. Hard links are made when a file is copied in multiple directories. The same file exists in various directories on the same storage unit. The directory listing shows two files with the same number, which links them to the same physical file on the storage unit. Hard links allow for the same file to "exist" in multiple directories, but only one physical file exists. Space is then saved on the storage unit. For example, if a one megabyte file is placed in two different directories, the space used on the storage is one megabyte, not two megabytes.

Deleting
That same tutorial also had this to say about what happens when an inode is deleted.

Deleting files causes the size and direct/indirect block entries to be zeroed and the physical space on the storage unit to be set as unused. To undelete the file, the metadata is restored from the Journal if it is used (see the Journal article). Once the metadata is restored, the file is once again accessible unless the physical data has been overwritten on the storage unit.

Extents
You might want to also brush up on extents and how they work. Again from the linux.org site, another good tutorial, titled: Extents will help you get the basics down.
You can use the command filefrag to identify how many extents a given file/directory is using.
Examples
$ filefrag dir1
dir1: 1 extent found

$ filefrag ~/VirtualBox\ VMs/CentOS6.3/CentOS6.3.vdi
/home/saml/VirtualBox VMs/CentOS6.3/CentOS6.3.vdi: 5 extents found

You can get more detailed output by using the -v switch:
$ filefrag -v dir1
Filesystem type is: ef53
File size of dir1 is 4096 (1 block of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 38282243.. 38282243: 1: eof
dir1: 1 extent found

NOTE: Notice that a directory always consumes at a minimum, 4K bytes.
Giving a file some size
We can take one of our sample files and write 1MB of data to it like this:
$ dd if=/dev/zero of=file1 bs=1k count=1k
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.00628147 s, 167 MB/s

$ ll | grep file1
-rw-rw-r--. 1 saml saml 1048576 Dec 9 20:03 file1

If we analyze this file using filefrag:
$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 255: 35033088.. 35033343: 256: eof
file1: 1 extent found

Deleting and recreating a file quickly
One interesting experiment you can do is to create a file, such as file1 above, and then delete it, and then recreate it. Watch what happens. Right after deleting the file, I re-run the dd ... command and file1 shows up like this to the filefrag command:
$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 255: 0.. 255: 256: unknown,delalloc,eof
file1: 1 extent found

After a bit of time (seconds to minutes pass):
$ filefrag -v file1
Filesystem type is: ef53
File size of file1 is 1048576 (256 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 255: 38340864.. 38341119: 256: eof
file1: 1 extent found

The file finally shows up. I'm not entirely sure what's going on here, but it looks like it takes some time for the file's state to settle out between the journal & the disk. Running stat commands shows the file with an inode so it's there, but the data that filefrag uses hasn't been resolved so we're in a bit of a limbo state.
|
I hope I've got this right: A file's inode contains data such as inode number, time of last modification, ownership etc. – and also the entry: »deletion time«. Which made me curious:
Deleting a file means removing its inode number, thus marking the storage space linked to it as available. There are tools to recover (accidentally) deleted files (e.g. from a journal, if available). And I know the stat command.
Question
What does a "deleted file" entry look like in the journal?
My guess is a quite unspectacular-looking output, much like what the stat command prints.
I know that deleting a file and trying to recover it would be a first-hand experience, but then I'm not at a point where I could do this without outside help and I want to understand exactly what I'm doing. Getting into data resurrection would be sidetracking for me at the moment, as I try to get a firm grip on the basic stuff... I'm not lazy, this isn't homework, this is for private study.
|
what does a "deleted file" entry look like in the journal
|
All right, so for the first question it turns out the debugfs stats command tells what the starting blocks for every section of a group are. In addition, I guessed that inumbers had to be consecutive and increasing, so basic addition of the offset into the inode table and the imap command gave me the first inumbers; it also confirmed my suspicion about the last bad sector, where my block group calculations indicated it was in the wrong group.
byte address block group what first inumber
0x8B00020000 145752096 4448 inode table block 0 36438017
0x8B00027000 145752103 4448 inode table block 7 36438129
0x8B0002C000 145752108 4448 inode table block 12 36438209
0x8B00209000 145752585 4448 inode table block 489 36445841
0x8B0029A000 145752730 4449 inode table block 122 36448161

Since a block is 4096 bytes and each inode table entry is 256 bytes, there are 16 inodes per block. So I now have all 80 lost inode table entries by inumber.

Now let's turn to the journal. I wrote a small tool that dumps information in each block of the journal. Since the journal superblock was missing, there were two pieces of information that I needed for this that were lost:

whether the journal held 64-bit block numbers
whether the journal used version 3 checksums

Fortunately, if I forced one (or both) of these switches on, some of the descriptor blocks in the journal overflowed its block, proving that those flags were not set.
One awk script (fulllog.awk) later, I have a log of the form
0x0002A000 - descriptors
0x0002B000 -> block 159383670
0x0002C000 -> block 159383671
0x0002D000 -> block 0
0x0002E000 -> block 155189280
0x0002F000 -> block 195559440
0x00030000 -> block 47
0x00031000 -> block 195559643
0x00032000 -> block 195568036
0x00033000 -> block 159383672
0x0002B000 - invalid/data block
0x0002C000 - invalid/data block
0x0002D000 - invalid/data block
0x0002E000 - invalid/data block
0x0002F000 - invalid/data block
0x00030000 - invalid/data block
0x00031000 - invalid/data block
0x00032000 - invalid/data block
0x00033000 - invalid/data block
0x00034000 - commit record
commit time: 2014-12-25 16:53:13.703902604 -0500 EST

With this, another awk script (dumpallfor.awk) dumps all the blocks:
byte address block number of journaled blocks
0x8B00020000 145752096 6
0x8B00027000 145752103 10
0x8B0002C000 145752108 206
0x8B00209000 145752585 1
0x8B0029A000 145752730 0

So that last block is truly lost :( With any luck I can find out what files were there with debugfs's ncheck command.

So I have a bunch of blocks. And they all appear to differ! Now what?
I could go by the revocation records, but I can't seem to parse that structure meaningfully. I could go by the commit record timestamps, but before I try that, I want to see just how each inode table block differs. So I wrote another quick program (diff.go) to find that out.
For the most part, files that do differ differ only in timestamps, so we can just choose the file with the latest timestamps. We'll do that later. For all other files, we get this:
36438023 - size differs
36438139 - OSD1 (file version high dword) differs
36438209 - OSD1 differs

Hm, that's not good... The file with differing size will be a problem, and I have no idea what to do about the two OSD1 files. I also tried using debugfs's ncheck to see what the files were, but we don't have a match.
I then found out which block dumps have the latest timestamps for now (same repo, latest.go). The important thing to note is that I had the blocks scanned in chronological order by commit time. This is not necessarily the same as numerical order by block number; the journal is not always stored in chronologically increasing order.
As it turns out, however, the newest block (by commit time) is indeed the one with the latest timestamps!

Let's try these latest blocks and see if we can recover anything from them.
sudo dd if=BLOCKFILE of=DDRESCUEIMG bs=1 seek=BYTEOFFSET conv=notrunc

After that my home directory is back!

Now let's find out what those three differing files were...
Inode Pathname
36438023 /pietro/.cache/gdm/session.log
36438209 /pietro/.config/liferea
36438139 /pietro/.local/share/zeitgeist/fts.indexThe only important thing there is Liferea's configuration directory, but I don't think that was corrupted; it was one of the OSD1-differing ones.
And let's find out about those 16 inodes in the final block, the one that we could not recover:
Inode Pathname
36448176 /pietro/k2
36448175 /pietro/Downloads/sOMe4P7.jpg
36448174 /pietro/Downloads/picture.png
36448164 /pietro/Downloads/tumblr_nfjvg292T21s4pk45o1_1280.png
36448169 /pietro/Downloads/METROID Super Zeromission v.2.3+HARD_v2.4.zip
36448165 /pietro/Downloads/tumblr_mrfex1kuxa1sbx6kgo1_500.jpg
36448173 /pietro/Downloads/1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg
36448162 /pietro/.cache/upstart/gnome-settings-daemon.log.6.gz
36448163 /pietro/.cache/upstart/dbus.log.7.gz
36448171 /pietro/.cache/upstart/gnome-settings-daemon.log.3.gz
36448161 /pietro/.local/share/applications/Knytt Underground.desktop
36448166 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 15:47:29.png
36448170 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 16:51:26.png
36448172 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 19:08:54.png
36448168 /pietro/Documents/transactions/premiere to operating transaction 4305747926.pdf
36448167 /pietro/Documents/transactions/transaction 4315883542.pdfIn short:a text file with only one or two things in that I could get back by brute force since I know that it has a date stamp and something that's also in my chat logs
some images downloaded from the internet; if I can't get the URLs back from Firefox's history then I can use photorec
a ROM hack that I can easily get on the Internet again =P
log files; no loss here
the .desktop file for a Steam game
screenshots; I can get these back with photorec assuming gnome-screenshot added the datestamp as metadata
bank account transaction records; if I can't get them from the bank I could probably recover them with photorec
So not casualtyless, but not a total loss, and I learned more about ext4 in the process. Thanks anyway!

UPDATE
NOT YET /pietro/k2
FOUND /pietro/Downloads/sOMe4P7.jpg
NOT YET /pietro/Downloads/picture.png
FOUND /pietro/Downloads/tumblr_nfjvg292T21s4pk45o1_1280.png
GOOGLEIT /pietro/Downloads/METROID Super Zeromission v.2.3+HARD_v2.4.zip
FOUND /pietro/Downloads/tumblr_mrfex1kuxa1sbx6kgo1_500.jpg
FOUND /pietro/Downloads/1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg
UNNEEDED /pietro/.cache/upstart/gnome-settings-daemon.log.6.gz
UNNEEDED /pietro/.cache/upstart/dbus.log.7.gz
UNNEEDED /pietro/.cache/upstart/gnome-settings-daemon.log.3.gz
UNNEEDED /pietro/.local/share/applications/Knytt Underground.desktop
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 15:47:29.png
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 16:51:26.png
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 19:08:54.png
NOT YET /pietro/Documents/transactions/premiere to operating transaction 4305747926.pdf
NOT YET /pietro/Documents/transactions/transaction 4315883542.pdfAnd in case I'm not weird enough, the downloaded pictures were:sOMe4P7.jpg (a parody of the Law & Order title card with "& KNUCKLES" added to it)
tumblr_nfjvg292T21s4pk45o1_1280.png (screenshot of this tweet from J. K. Rowling)
tumblr_mrfex1kuxa1sbx6kgo1_500.jpg (picture of a "Windows did not shut down successfully." error message on a billboard at what appears to be some sporting event)
1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg (this comic)These were all shared by friends in chats.
I guess I'll keep this updated? (Not like it would make a difference...) I know I can recover everything; the only question is when =P
|
So en route from my old laptop to a new one, my old laptop's hard drive got some physical damage. badblocks reports 64 bad sectors. I had a two-month-old Ubuntu GNOME setup with split / and /home partitions. From what I can tell, a few sectors in / were damaged, but that's not an issue. On the other hand, /home's partition gives me this annotated ddrescue log:
# Rescue Logfile. Created by GNU ddrescue version 1.17
# Command line: ddrescue -d -r -1 /dev/sdb2 home.img home.log
# current_pos current_status
0x6788008400 -
# pos size status
0x00000000 0x6788000000 +
0x6788000000 0x0000A000 -
first 10 sectors of the ext4 journal
0x678800A000 0x2378016000 +
0x8B00020000 0x00001000 -
inode table entries for /pietro (my $HOME) and a few folders within
0x8B00021000 0x00006000 +
0x8B00027000 0x00001000 -
unknown (inode table?)
0x8B00028000 0x00004000 +
0x8B0002C000 0x00001000 -
unknown (inode table?)
0x8B0002D000 0x001DC000 +
0x8B00209000 0x00001000 -
unknown (inode table?)
0x8B0020A000 0x00090000 +
0x8B0029A000 0x00001000 -
unknown (inode table?)
0x8B0029B000 0x4420E65000 +

I made the annotations with the help of debugfs's icheck and testb commands; all the damaged blocks are marked used. Some superblock stats:
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 972
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512

So my questions are:

Can I find out exactly what those five unknown blocks were, if not inode entries? My suspicion is that they are inode table entries, but icheck doesn't want to say. If they are, can I find out which inodes?
Can I still recover these inode table entries from the journal by hand, even though the first 10 blocks of the journal are lost?

I'd rather not do this data recovery with fsck, which would just dump all my files in /lost+found in a giant mess of flattened directory structure and duplicate files...
Thanks.
|
Can I find out if a given ext4 block is in the inode table, and if so, can I pick it out of a journal with no header by hand?
|
The two are in no way equivalent. Disabling the journal does exactly that: turns journaling off. Setting the journal mode to writeback, on the other hand, turns off certain guarantees about file data while assuring metadata consistency through journaling.
The data=writeback option in mount(8) says:

Data ordering is not preserved - data may be written into the main
filesystem after its metadata has been committed to the journal. This is
rumoured to be the highest-throughput option. It guarantees internal
filesystem integrity, however it can allow old data to appear in files
after a crash and journal recovery.

Setting data=writeback may make sense in some circumstances when throughput is more important than file contents. Journaling only the metadata is a compromise that many filesystems make, but don't disable the journal entirely unless you have a very good reason.
|
What is the difference between disabling the journal on an ext4 file system using:

tune2fs -O ^has_journal /dev/sda1

and using data=writeback when mounting? I thought ext4 - journal = ext2, meaning that when we remove the journal from an ext4 file system, it is automatically converted to ext2 (and thus we cannot benefit from the other ext4 features).
|
disabling journal vs data=writeback in ext4 file system
|
I don't agree with the squashfs recommendations. You don't usually write a squashfs to a raw block device; think of it as an easily-readable tar archive. That means you would still need an underlying filesystem.
ext2 has several severe limitations that limit its usefulness today; I would therefore recommend ext4. Since this is meant for archiving, you would create compressed archives to go on it; that means you would have a small number of fairly large files that rarely change. You can optimize for that:

specify -I 128 to reduce the size of individual inodes, which reduces the size of the inode table.
You can play with the -i option too, to reduce the size of the inode table even further. If you increase this value, there will be less inodes created, and therefore the inode table will also be smaller. However, that would mean the filesystem wastes more space on average per file. This is therefore a bit of a trade-off.
You can indeed switch off the journal with -O ^has_journal. If you go down that route, though, I recommend that you set default options to mount the filesystem read-only; you can do this in fstab, or you could use tune2fs -E mount_opts=ro to record a default in the filesystem (you cannot do this at mkfs time)
you should of course compress your data into archive files, so that the inode wastage isn't as bad a problem as it could be. You could create squashfs images, but xz compresses better, so I would recommend tar.xz files instead.
You could also reduce the number of reserved blocks with the -m option to either mkfs or tune2fs. This sets the percentage (set to 5 by default) which is reserved for root only. Don't set it to zero; the filesystem requires some space for efficient operation.
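Putting those suggestions together, a hedged sketch of the invocation (the -i value and device name here are purely illustrative, not prescriptive):

mke2fs -t ext4 -I 128 -i 1048576 -m 1 -O ^has_journal /dev/sdXN
tune2fs -E mount_opts=ro /dev/sdXN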
|
summary
Suppose one is setting up an external drive to be a "write-once archive": one intends to reformat it, copy some files that will (hopefully) never be updated, then set it aside until I need to read something (which could be a long while or never) from the archive from another linux box. I also want to be able to get as much filespace as possible onto the archive; i.e., I want the filesystem to consume as little freespace as possible for its own purposes.
specific question 1: which filesystem would be better for this usecase: ext2, or ext4 without journaling?
Since I've never done the latter before (I usually do this sort of thing with GParted), just to be sure:
specific question 2: is "the way" to install journal-less ext4 mke2fs -t ext4 -O ^has_journal /dev/whatever ?
general question 3: is there a better filesystem for this usecase? or Something Completely Different?
details
I've got a buncha files from old projects on dead boxes (which will therefore never be updated) saved on various external drives. Collectively size(files) ~= 250 GB. That's too big for DVDs (i.e., would require too many--unless I'm missing something), and I don't have a tape drive. Hence I'm setting up an old USB2 HFS external drive to be their archive. I'd prefer to use a "real Linux" filesystem, but would also prefer a filesystem thatconsumes minimum space on the archive drive (since it's just about barely big enough to hold what I want to put on it.
will be readable from whatever (presumably Linux) box I'll be using in future.I had planned to do the following sequence with GParted: [delete old partitions, create single new partition, create ext2 filesystem, relabel]. However, I read here that
recent Linux kernels support a journal-less mode of ext4
which provides benefits not found with ext2and noted the following text in man mkfs.ext4
"mke2fs -t ext3 -O ^has_journal /dev/hdXX"
will create a filesystem that does not have a journalSo I'd like to knowWhich filesystem would be better for this usecase: ext2, or ext4 without journaling?
Presuming I go ext4-minus-journal, is the commandline to install it mke2fs -t ext4 -O ^has_journal /dev/whatever ?
Is there another, even-better filesystem for this usecase?
|
"write-once archive": ext2 vs ext4^has_journal vs
|
Hardware can still randomly glitch or fail from time to time. There are so many components involved in writing a file to storage - CPU, RAM, HDD, I/O bus, etc. It's not just power outages or reboots that can cause file-system corruption.
That said, it's still okay to use EXT2, just don't complain if something goes wrong. I would only use it for non-critical things like transporting data on a USB stick.
For my critical data, I use data mirroring on top of EXT3/4.
|
Is a journaling filesystem needed in today's desktop world?
A good OS doesn't kernel panic every month, and if we are using a laptop, then there aren't any power outages, so why shouldn't we use ext2 as the standard filesystem on a desktop or laptop?
|
Is ext2 suitable for daily use on a desktop or laptop?
|
No!
As a general rule, if you see a system file and you don't know what it is, don't remove it. Even more generally, if an action requires root permissions and you don't know what it would mean, don't do it.
The .sujournal file contains the soft updates journal. The file is not accessed directly as a file; rather, it's space that is reserved for internal use by the filesystem driver. This space is marked as occupied in a file for compatibility with older versions of the filesystem driver: if you mount that filesystem with an FFS driver that supports journaled soft updates, then the driver uses that space to store the SU journal; if you mount that filesystem with an older FFS driver, the file is left untouched and the driver performs an fsck upon mounting instead of replaying the journal.
|
I am running out of space and when I checked, I found that I have
# pwd
/usr
# ls -l .sujournal
-r-------- 1 root wheel 33554432 Dec 31 1969 .sujournal

I wanted to ask: should/can I remove it? Are there any implications of doing that?
|
huge .sujournal file on FreeBSD
|
You can purge the journal by either unmounting, or remounting read-only (arguably a good idea when cloning). With ext4 you can also turn off the journal altogether (tune2fs -O ^has_journal); the .journal magic immutable file will be removed automatically. The journal data will still be on the underlying disk of course, so removing the journal and then zero-filling free space might get the best results.
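A sketch of that sequence (the device name is a placeholder; the filesystem has to be unmounted for the tune2fs steps):

umount /dev/sdXN
tune2fs -O ^has_journal /dev/sdXN   # drop the journal (the .journal file disappears)
# mount, zero-fill the free space, unmount, take the dd image ...
tune2fs -O has_journal /dev/sdXN    # re-create the journal afterwards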
The comments above hit the nail on the head though: dd sees the bits underneath the filesystem, and how they came to be in any particular arrangement depends on all the things that have happened to the filesystem, rather than just the final contents of the files. Features such as pre-allocation, delayed allocation, multi-block allocation, nanosecond timestamps and of course the journal itself all contribute to this. Also, there is one potentially random allocation strategy: the Orlov allocator can fall back to random allocation (see fs/ext4/ialloc.c).
For completeness, the secure deletion feature with random scrubbing would also contribute to differences (assuming you deleted your zero-filled ballast files), though that feature is not (yet) mainline.
On many systems the dump and restore commands can be used for a similar cloning method; for various reasons it never quite caught on in Linux.
|
Some preamble: I'm taking bitwise copies of disk devices (via the dd command) from twin hosts (i.e. with the same virtualized hardware layout and software packages, but with a different history of usage). To optimize the image size I filled all the empty space on the partitions with zeroes (e.g. from /dev/zero). I'm also aware of the reserved blocks per partition and temporarily downgraded that value to 0% before the zero-filling.
But I'm curious about the discrepancy between the final compressed (by bzip2) images. All hosts have almost the same tar-gzipped size of files, but the compressed dd images vary significantly (up to 20%). How could that be? Could the reason be filesystem journal data which I was unable to purge? There are over ten partitions on each host and each reports a 128 MB journal size. (I also checked fragmentation; it's all OK: 0 or 1 according to the e4defrag tool's report.)
So, my question: is it possible to somehow clean ext3/ext4 filesystem journals? (Safely for the stored data, of course :)
CLARIFICATION
I definitely asked a question about how to clean (purge/refresh) the journals in an ext3/ext4 filesystem, if that is possible; or maybe I'm mistaken and there is no such feature as reclaiming the disk space occupied by filesystem journals, in which case all solutions are welcome. The motivation for asking is given as a premise in the preamble, and the answer to my question would help me investigate the issue I encountered.
|
How to clean journals in ext3/ext4 filesystem? [closed]
|
I have a simple Python snippet managed by a systemd service which logs to the rsys[l]ogd daemon […]

No you haven't.
What you have is a service that logs to the systemd journal. The server listening on the well-known /dev/log socket that your Python program is talking to is not rsyslogd. It is systemd-journald. rsyslogd is attached to the other side of systemd-journald, and your Python program is not talking to it.
From this, it should be apparent that the only way to not send stuff via systemd-journald is to use some other route to rsyslogd, not the well known socket that your Python library uses by default. That all depends from how you have configured rsyslogd. It is possible that you have turned on a UDP server with the imudp module, in which case you could tell your Python program to use that by using a different Python library that speaks to such a UDP server. (The Python syslog library is hardwired to use the well-known local socket.)
Or (and better, given that you have to be careful about not opening a UDP service to the world outwith your machine) you could have given rsyslogd a second, not well known, AF_LOCAL socket to listen to by configuring this in the imuxsock module's configuration. Again, you'll have to tell your Python program to use that and use a different Python library.

What exactly you do in your Python program is beyond the scope of this answer.
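That said, a hedged sketch of what the rsyslog side of that second route could look like (the socket path here is made up; the Python program would then have to be pointed at that socket, e.g. via logging.handlers.SysLogHandler(address=...), instead of the default /dev/log):

module(load="imuxsock")    # may already be loaded for the system socket
input(type="imuxsock" Socket="/run/rsyslog/app.sock" CreatePath="on")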
Further reading

https://unix.stackexchange.com/a/294206/5132
|
I have a simple Python snippet managed by a systemd service which logs to the rsyslogd daemon, for which I've defined a configuration file to send it to a syslog server with a format I've defined. This is working fine so far.
In the code below, I'm passing the argument as the string I want to log on the server. I'm using this code below as a module and using it for logging alone, the actual script uses this for logging purposes.
#!/usr/bin/env python

import syslog
import sys

syslog.openlog(facility=syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, sys.argv[1])Since the application is managed by systemd it is making a copy of the syslog available when seen from the journalctl -xe and the journalctl -u <my-service> which I do not wish to happen because I've other critical information I'm logging in journal logs.
The service definition is
[Unit]
Description=Computes device foobar availability status

[Service]
Type=simple

EnvironmentFile=/etc/sysconfig/db_EndPoint
ExecStart=/usr/bin/python /opt/foobar/foobar.py
WatchdogSec=60
RestartSec=10
Restart=always
LimitNOFILE=4096

[Install]
WantedBy=default.target

and in the /etc/systemd/journald.conf file I've not enabled any of the options. I looked up the journald.conf documentation in order to use ForwardToSyslog=no, and did a restart of the journald service with
systemctl restart systemd-journald

and also restarted my service unit, but I still see the logs going to the syslog server and also to the journal. What option am I missing here?
|
Prevent syslogs from being logged under journalctl
|
Generally speaking... yes, it does make sense.
Though you might want to run
tune2fs -l /dev/sdXY | egrep "Maxim|Check"

to see how those flags are set, as it all depends on the version of e2fsprogs used to create the filesystems and/or distribution-specific patches applied to e2fsprogs. You might already have MAX_MNT_COUNT and CHECKINTERVAL set to -1 and 0 respectively, due to the fact that, as of v. 1.42, e2fsprogs defaults to -c -1 -i 0; see the changelog:

If the enable_periodic_fsck option is false in /etc/mke2fs.conf (which
is the default), mke2fs will now set the s_max_mnt_count superblock
field to -1, instead of 0. Kernels older then 3.0 will print a
spurious message on each mount then they see a s_max_mnt_count set to
0, which will annoy users.

/etc/mke2fs.conf compared:
v. 1.41.14 released 2010-12-22:
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
blocksize = 4096
inode_size = 256
inode_ratio = 16384v. 1.42 released 2011-11-29:
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384
|
|
I've several partitions with ext4.
Now, I would like to know whether it makes sense to use tune2fs with the flags -c0 (max-mount-counts) and -i0 (interval-between-checks) on the partitions, since a journaling file-system needs fewer checks.
|
To use -c0 -i0 in file-systems with journal
|